**Happy Vernal Equinox! This post has been updated today with some of the equations which correspond to the models.**

An assessment of whether or not there was a meaningful slowdown or “hiatus” in global warming was recently discussed by Tamino and others^{(i)} (see *comments* beginning here) in response to a paper by Fyfe, Meehl, England, Mann, Santer, Flato, Hawkins, Gillett, Xie, Kosaka, and Swart, with Ed Hawkins explaining in a more expository piece. Tamino has done his assessment using incremental linear models, correcting for El Niño, and a successively more elaborate series of corrections. I did a quick assessment using a *Rauch-Tung-Striebel smoother*, via the *dlm package* of **R**, devised by Petris and Gilks. I used the HadCRUT4 and GISTEMP series, and a *local level model*, meaning one which admits only step changes. I have applied this same model elsewhere, for sea-level rise.

Taking each series separately, what this means is that the observations of the temperature anomaly in one of the series at time $t$, denoted $y_t$, are based upon a “true anomaly” at time $t$, denoted $\theta_t$, with zero mean Gaussian noise added to it:

$$y_t = \theta_t + v_t, \qquad v_t \sim \mathcal{N}(0, V)$$

The “true anomaly” is modeled as simply as it can possibly be, yet allowing for full non-stationarity, as a random walk with Gaussian steps, an example of a *Gaussian diffusion process*:

$$\theta_t = \theta_{t-1} + w_t, \qquad w_t \sim \mathcal{N}(0, W)$$

The filter-smoother finds the most probable values of $\theta_t$ given the observations $y_1, \ldots, y_T$. For the HadCRUT4 and GISTEMP temperature anomaly series, $V$ was estimated using a Politis-Romano stationary bootstrap, because the series data are interdependent. (The *tsbootstrap* function of the *tseries* package was used.) I arbitrarily set $W$ equal to this empirically estimated $V$, and obtained the following:
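The core of that calculation can be sketched in **R** as follows. This is a minimal sketch, not the actual analysis code: the variable `anomalies` standing in for one of the temperature anomaly series is synthetic here, and the choice of the series variance as the bootstrapped statistic is an assumption on my part.

```r
library(dlm)      # Petris' dynamic linear models package
library(tseries)  # for tsbootstrap

# 'anomalies' is assumed to hold an annual temperature anomaly series
# (e.g., HadCRUT4); a synthetic stand-in is used here for illustration.
anomalies <- as.numeric(arima.sim(list(ar = 0.6), n = 165, sd = 0.1))

# Estimate the observational variance V with a Politis-Romano
# stationary bootstrap, since the series is serially dependent.
boot  <- tsbootstrap(anomalies, nb = 1000, type = "stationary", statistic = var)
V.hat <- mean(boot$statistic)

# Local level model: theta_t = theta_{t-1} + w_t, y_t = theta_t + v_t,
# with the process variance W arbitrarily set equal to V.
model <- dlmModPoly(order = 1, dV = V.hat, dW = V.hat)

# Rauch-Tung-Striebel smoothing via the dlm filter-smoother.
smoothed  <- dlmSmooth(anomalies, model)
theta.hat <- dropFirst(smoothed$s)  # smoothed state estimates
```

The smoothing variances in `smoothed$U.S` and `smoothed$D.S` can be used, via *dlmSvd2var*, to draw the standard deviation envelopes shown in the figure.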

**(Click on figure to see larger image, and use your browser Back button to return to this blog.)**

Of interest in the above is the behavior in the 2000-2015 region. Sure, the mean trends suggest a slowing, but the up-and-down single standard deviations are quite broad, and so the time-varying means don’t signify as much as they might. My point going into this is whether there is sufficient information in the observations to make proper distinctions.

We’ll see. For this is the first step of a more elaborate project ~~which I am going to place at *arXiv.org*~~, one which overlaps with some of my professional work, my work on Town of Sharon hydrology, and involves new algorithms. ~~That project will also address some of the material in a paper by Frankcombe, England, Mann, and Steinman.~~ I do not expect the project to add to the geophysical discussion, but it is an excuse to devise new methods, and to see if the skepticism regarding the purported “hiatus” I share with Tamino stands up to a deeper look. I also feel that the goal stated by Fyfe, *et al*, “Our goal here is to move beyond the purely statistical aspects of the slowdown, and to focus instead on improving process understanding and assessing whether the observed trends are consistent with our expectations based upon climate models”, deserves some response. Kalman smoother state-space models are famous for allowing the introduction of arbitrarily complicated explanatory mechanisms for a process, even if they only encapsulate superficial physics. There is also a question whether something as specific as evidence for a purported slowdown requires the entire CMIP5-like apparatus to be established. Some argue that, in some cases, model-free forecasting can outperform mechanistic models (Perretti, Munch, Sugihara). Also, there are new non-parametric methods available which permit algorithms to find their own models using Bayesian principles and stochastic search. It will be interesting to see what these can do with these data.

I would also like to see assessments of model fit done using quantitative measures, such as the many available information criteria^{(ii)}, rather than by only differential modeling. I’ll endeavor to do some of that in my sequel.

By the way, the code to do the above is included in this tarball, along with the data. Just be sure to unpack it into a single directory and *setwd* to there before trying to run.

*Update*, 20^{th} March 2016

The value of the variance for the process noise in the above was arbitrarily chosen to be the same as the empirically estimated observational variance, in the separate cases of the HadCRUT4 and GISTEMP series. Moreover, the two time series are analyzed and depicted separately, even though they are practically independent measurements of the same thing. In fact, their correlation coefficient is in excess of 0.98 over their common run. Accordingly, the true temperature anomaly should be estimable using both in combination, rather than portraying it twice, since presumably they both reflect the true anomaly series, $\theta_t$. This update provides one view of that.

In particular, the model was changed to have a bivariate observation series $\mathbf{y}_t$, the first component being from HadCRUT4, the second from GISTEMP.

As before:

$$\mathbf{y}_t = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \theta_t + \mathbf{v}_t, \qquad \mathbf{v}_t \sim \mathcal{N}(\mathbf{0}, V), \qquad \theta_t = \theta_{t-1} + w_t, \qquad w_t \sim \mathcal{N}(0, W)$$

Here $V$ is the *covariance matrix* between the two series, and is constructed from empirical measurements. Let $\sigma_1^2$ be the empirical variance estimated using *tsbootstrap* for the HadCRUT4 series on its common support with GISTEMP. Let $\sigma_2^2$ be the empirical variance estimated using *tsbootstrap* for the GISTEMP series on its common support with HadCRUT4. Let $\rho$ be the empirically derived (Kendall) correlation coefficient between the two series, that 0.98 value mentioned above. Then the covariance between the two series is obtainable through the relation:

$$\sigma_{12} = \rho \, \sigma_1 \, \sigma_2$$

substituting the bootstrap estimates for $\sigma_1$ and $\sigma_2$, and the Kendall estimate for $\rho$.

Then

$$V = \begin{pmatrix} \sigma_1^2 & \rho\,\sigma_1\,\sigma_2 \\ \rho\,\sigma_1\,\sigma_2 & \sigma_2^2 \end{pmatrix}$$
The process noise variance, $W$, is not known, so instead of picking a value for it arbitrarily, it was estimated here using *maximum likelihood*, as sketched in Petris, Petrone, and Campagnoli^{(iii)}, using their *dlmMLE* function. That produces an estimate of $W$.
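The maximum likelihood step can be sketched as follows. Here `Y` stands for the two-column matrix of anomalies on the common support, `V` is the empirical covariance matrix constructed above, and the particular build function, starting value, and variable names are illustrative assumptions, not the actual code:

```r
library(dlm)

# Build a bivariate local level model: one scalar state theta_t observed
# twice, with the empirical covariance V held fixed and W parameterized
# on the log scale so the optimizer stays in the positive domain.
buildMod <- function(parm) {
  dlm(FF = matrix(c(1, 1), nrow = 2), V = V,
      GG = matrix(1), W = matrix(exp(parm)),
      m0 = 0, C0 = matrix(1e7))
}

# Y: n x 2 matrix, columns HadCRUT4 and GISTEMP on their common support
fit   <- dlmMLE(Y, parm = log(0.01), build = buildMod)
W.hat <- exp(fit$par)          # maximum likelihood estimate of W
mod   <- buildMod(fit$par)
smoothed <- dlmSmooth(Y, mod)  # RTS-smoothed common anomaly estimate
```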

Two cases were considered. The first is the estimate of the combined temperature anomaly series with the input data restricted to the common support. The second case considered was the same, except with a surrogate 2016 temperature anomaly of 0.894 added in, something which has not yet been observed.

The result of the first is:

**(Click on figure to see larger image, and use your browser Back button to return to this blog.)**

The HadCRUT4 and GISTEMP series are shown as before, but the bold red shows the estimate for $\theta_t$ in the common model. The *second-order Akaike information criterion* for this dataset and model is 604.1, a criterion useful for model comparison^{(iv)}.
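For reference, the AICc follows directly from the *dlm* log-likelihood. A sketch, assuming `fit` is the object returned by *dlmMLE*, `k` is the number of free parameters, and `n` the number of observations:

```r
# dlmMLE minimizes the negative log-likelihood, so fit$value is -logLik.
# AICc = AIC + small-sample correction (Burnham & Anderson, 2002).
aicc <- function(fit, n, k) {
  aic <- 2 * fit$value + 2 * k
  aic + (2 * k * (k + 1)) / (n - k - 1)
}
```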

The second includes that 0.894 degree C anomaly estimate for 2016, and is shown as:

**(Click on figure to see larger image, and use your browser Back button to return to this blog.)**

The *second-order Akaike information criterion* (“AICc”) for this dataset and model is 591.4, slightly better than before.

Note that when the 2016 surrogate is provided, the common estimate ever so slightly increases its slope. With the maximum likelihood estimate, the assessment is that the most recent uptick in global temperatures is judged, at least by the RTS smoother, as if it were an outlier. More observations will, apparently, be needed before its mind is changed. Still, if the surrogate *is* included, the inferred curve leading up to its 2015 value is very nearly linear, contrasted with the gentle kink down in the solution without it. To the degree to which we accept the inference of the combined model, there is very little indication of a slowdown in warming, agreeing with Tamino, and disagreeing with Fyfe and company. But there are ways of improving this analysis.

The code to do this bit is included in this tarball, along with the data. As before, be sure to unpack it into a single directory and *setwd* to there before trying to run.

The use of the empirically-derived $V$ in the above *seems* a little silly. Maximum likelihood is being used to estimate the process variance, $W$, so why not use it to estimate $V$ as well? Because it results in a poorer fit. Here both $V$ and $W$ were estimated jointly with *dlmMLE*, but the second-order Akaike information criterion is now a whopping 982.7, so it’s a loss. With the 2016 surrogate the model gets even worse, with the AICc being 990.4.

In another update to this post, the calculation of these variances will be repeated, but using a Bayesian computational technique, Gibbs sampling.
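One candidate for that is the *dlmGibbsDIG* function of the same *dlm* package, which places independent inverse-gamma priors on the observation and evolution variances. A sketch, applied to a single series `anomalies`, with prior parameters and burn-in length that are purely illustrative assumptions:

```r
library(dlm)

# Gibbs sampling for a local level model with d-inverse-gamma priors
# on the observation variance V and evolution variance W
# (prior means and variances below are illustrative, not tuned).
mod  <- dlmModPoly(order = 1)
mcmc <- dlmGibbsDIG(anomalies, mod,
                    a.y = 1, b.y = 1000,         # prior on V
                    a.theta = 1, b.theta = 1000, # prior on W
                    n.sample = 5000, thin = 1)

# Posterior summaries after discarding burn-in
burn   <- 1000
V.post <- mean(mcmc$dV[-(1:burn)])
W.post <- mean(mcmc$dW[-(1:burn)])
```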

The code to do this bit with the MLE is included in this tarball, along with the data. As before, be sure to unpack it into a single directory and *setwd* to there before trying to run.

And, as promised, I’ll dig deeper still.

(i) To be complete, Tamino has repeatedly addressed this question in many earlier posts.

(ii) Briefer treatments are also available here and here.

(iii) See G. Petris, S. Petrone, P. Campagnoli, *Dynamic Linear Models with R*, Springer, 2009, Section 4.1, for details.

(iv)See K. P. Burnham, D. R. Anderson, *Model Selection and Multimodel Inference: A Practical Information Theoretic Approach*, 2^{nd} edition, Springer, 2002.
