I spent much of the day working up a function for level+trend dynamic linear modeling based upon the **dlm** package by Petris, Petrone, and Campagnoli, while trying some calculations and code for *regime shift detection*. One of the test cases I used was the sea level rise anomaly series produced by the CU Sea Level Research Group at the University of Colorado. That data looks like:

**(Click on image for a larger size picture. Use browser Back button to return to blog.)**

and is taken from this location.

The result of a DLM smooth looks like:

**(Click on image for a larger size picture. Use browser Back button to return to blog.)**

Initial estimates for the observational (co)variance, the state drift covariance, and the initial state covariance were based upon multiples of the covariance of the series, *not corrected for serial correlation*. Strictly speaking, that correction should be done, but since the covariance matrices are only estimates, I did not think it necessary here. I typically do it using the Politis and Romano *stationary bootstrap*, but did not in this study. The objective wasn't to get a handle on SLR so much as to write and test code. Moreover, various multiples of these were tried in a set of by-hand runs, to see their effects upon the resulting smoothed trajectory.
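For readers unfamiliar with the Politis and Romano method, here is a minimal Python sketch of one stationary-bootstrap resample. The actual study would have used an R implementation; the function name and the choice of `p` here are my own illustration, not from the original.

```python
import random

def stationary_bootstrap(series, p=0.1, rng=None):
    """One Politis-Romano stationary-bootstrap resample of `series`.

    Blocks start at uniformly random positions and have geometrically
    distributed lengths with mean 1/p; indexing wraps around the end of
    the series, so the resample has the same length as the input and
    the stationarity of the original series is preserved.
    """
    rng = rng or random.Random()
    n = len(series)
    out = []
    while len(out) < n:
        start = rng.randrange(n)      # uniform random block start
        length = 1
        while rng.random() >= p:      # geometric block length, mean 1/p
            length += 1
        for k in range(length):
            if len(out) == n:
                break
            out.append(series[(start + k) % n])
    return out
```

Statistics (here, a serial-correlation-corrected covariance) are then computed over many such resamples rather than from the raw series once.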

In particular, the observational (co)variance was modelled as twice the variance of the series. For the state drift covariance, I used a diagonal matrix with the variance of the series for the level term and unity for the trend term. For the initial state covariance, likewise diagonal, I used ten times the variance of the series for the level term and, again, unity for the trend term.
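As a concrete sketch, those settings translate into the standard level+trend DLM matrices as follows. This is plain Python for illustration only; the actual model was built with the **dlm** package in R, and the dictionary keys below merely mimic that package's naming.

```python
def level_trend_dlm(y):
    """Build level+trend DLM matrices from a series `y`, using the
    multiples described in the text: observational variance 2*var(y),
    diagonal state drift covariance diag(var(y), 1), and diagonal
    initial state covariance diag(10*var(y), 1)."""
    n = len(y)
    mean = sum(y) / n
    var = sum((x - mean) ** 2 for x in y) / (n - 1)  # sample variance
    return {
        "FF": [[1.0, 0.0]],                  # observation sees the level
        "GG": [[1.0, 1.0], [0.0, 1.0]],      # level advances by the trend
        "V":  [[2.0 * var]],                 # observational variance
        "W":  [[var, 0.0], [0.0, 1.0]],      # state drift covariance
        "C0": [[10.0 * var, 0.0], [0.0, 1.0]],  # initial state covariance
    }
```

The system matrices `FF` and `GG` are the usual local linear trend ones; only the three covariance settings are specific to this study.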

The observation times were re-registered by migration to be equispaced across the span over which observations were taken. This was done using an Akima interpolating spline of third degree, averaging values at tied times, and using Akima's improved method. (See the *aspline* function in the **R** *akima* package.)
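The migration step can be sketched as follows in Python. For self-containment this sketch substitutes piecewise-linear interpolation where the study used the Akima spline via `akima::aspline`; the structure (average ties, build an equispaced grid of the same length, interpolate onto it) is the same.

```python
def migrate_to_equispaced(t, y):
    """Re-register (t, y) onto an equispaced time grid spanning the
    same duration. Values at tied times are averaged first; values on
    the new grid come from linear interpolation, a stand-in here for
    the Akima spline used in the actual study."""
    # average y for duplicate observation times
    agg = {}
    for ti, yi in zip(t, y):
        agg.setdefault(ti, []).append(yi)
    ts = sorted(agg)
    ys = [sum(agg[ti]) / len(agg[ti]) for ti in ts]
    n = len(ts)
    step = (ts[-1] - ts[0]) / (n - 1)
    grid = [ts[0] + i * step for i in range(n)]
    # piecewise-linear interpolation onto the equispaced grid
    out, j = [], 0
    for g in grid:
        while j < n - 2 and ts[j + 1] < g:
            j += 1
        w = (g - ts[j]) / (ts[j + 1] - ts[j])
        out.append(ys[j] + w * (ys[j + 1] - ys[j]))
    return grid, out
```

With the resulting equispaced series, the DLM's evolution step can be treated as uniform in time.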

The envelope about the series shows a one-standard-deviation confidence band. Migrated data are shown as squares; original data as triangles.

I also wrote a stochastic search algorithm to look for better settings around the values chosen. It turned out to be improper to limit this optimizer to maximizing the classical log-likelihood, since that made the states too sensitive to individual observations, so a regularizer was added to the utility function, one which penalizes large magnitudes of second derivatives. Unfortunately, this introduced yet another parameter, a smoothing coefficient, so I concluded that this algorithm was going down a frequentist rathole.
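The regularized utility can be sketched as the log-likelihood minus a roughness penalty, using squared second differences of the smoothed level as a discrete proxy for squared second derivatives. The function name and interface here are my own illustration.

```python
def regularized_utility(log_lik, level, lam):
    """Utility for the stochastic search: the classical log-likelihood
    minus `lam` times the sum of squared second differences of the
    smoothed level trajectory. `lam` is the extra smoothing coefficient
    the text laments having to introduce."""
    penalty = sum(
        (level[i + 1] - 2 * level[i] + level[i - 1]) ** 2
        for i in range(1, len(level) - 1)
    )
    return log_lik - lam * penalty
```

The dependence on `lam` is exactly the problem: each candidate value trades fidelity against smoothness differently, and nothing in the likelihood itself selects among them.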

The other direction I might have taken was applying *information criteria* to winnow alternative models. For instance, does a level term alone suffice? What does the trend term bring to the model? Good things to consider for future work!
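For the comparison contemplated above, one such criterion is AIC, which trades a model's fit against its number of free parameters. A minimal sketch, assuming the fitted log-likelihoods of the level-only and level+trend models are in hand:

```python
def aic(log_lik, k):
    """Akaike information criterion: 2k - 2*logL; lower is better.
    A level-only DLM has fewer free parameters (smaller k) than
    level+trend, so it wins unless the trend term improves the
    log-likelihood by more than the parameter penalty."""
    return 2 * k - 2 * log_lik
```

Comparing `aic(loglik_level, k_level)` against `aic(loglik_level_trend, k_level_trend)` would answer whether the trend term earns its keep.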