Comment on “Timescales for detecting a significant acceleration in sea level rise” by Haigh, et al

Amended, 1st May 2014.

The lead author, Dr Ivan Haigh, and I have had a very friendly discussion about this paper and its context in detail. Now that I understand the context, and especially the atrocious maths of the Houston, Dean, and Watson sequence of publications (see below), I feel it is important to move forward. Accordingly, I have significantly revised the language below. I end with a new example and a general comment on how I think mistakes like the quadratic model of Houston, Dean, and Watson should be addressed.

This regards an article in Nature Communications, specifically:

I. D. Haigh, T. Wahl, E. J. Rohling, R. M. Price, C. B. Pattiaratchi, F. M. Calafat, S. Dangendorf, “Timescales for detecting a significant acceleration in sea level rise”, Nature Communications 5, Article number: 3635, published 14 April 2014.

The reader familiar with my past contributions here might guess I do not like this paper. It is riddled with significance tests and, worse, reports nothing of effect size and precision. It also splashes about 95% confidence limits and makes many decisions about intervals being sampled depending upon these limits, specifically saying:

We identified, for each of the artificial time series, the end year of the period when the lower 95% confidence limit of the linear rate, for that particular period, was first (and following that, consistently) higher than the upper 95% confidence limits of the linear rates for the historic pre-2010 period; taking into account uncertainty due to future interannual variability as described below.

(Amended, 1st May 2014.)

The paper by Haigh, et al is one in an extended discussion, cited in their references and in the list below, begun by papers of Houston, Dean, and Watson. These commanded a response by Rahmstorf and Vermeer, and eventually by Haigh, et al. Haigh, et al needed to use the terminology and context those earlier papers set, both to make the argument compelling, and to use the techniques and ideas prevalent in the field, that being coastal research and engineering.

  1. J.R. Houston, R.G. Dean, “Sea-level acceleration based on U.S. tide gauges and extensions of previous global-gauge analyses”, Journal of Coastal Research, 27(3), 409–417, 2011, ISSN 0749-0208.
  2. S. Rahmstorf, M. Vermeer, “Discussion of: ‘Houston, J.R. and Dean, R.G., 2011. Sea-Level Acceleration Based on U.S. Tide Gauges and Extensions of Previous Global-Gauge Analyses’”, Journal of Coastal Research, 27(4), 784–787, 2011, ISSN 0749-0208.
  3. J.R. Houston, R.G. Dean, “Reply to: Rahmstorf, S. and Vermeer, M., 2011. Discussion of: Houston, J.R. and Dean, R.G., 2011. Sea-Level Acceleration Based on U.S. Tide Gauges and Extensions of Previous Global-Gauge Analyses”, Journal of Coastal Research, 27(4), 2011.
  4. P.J. Watson, “Is there evidence yet of acceleration in mean sea level rise around mainland Australia?”, Journal of Coastal Research, 27(2), 368–377, 2011, ISSN 0749-0208.
  5. M. Vermeer, S. Rahmstorf, “Global sea level linked to global temperature”, PNAS, doi:10.1073/pnas.0907765106.
  6. J.R. Houston, R.G. Dean, “Effects of sea-level decadal variability on acceleration and trend difference”, Journal of Coastal Research, 29(5), 1062–1072, 2013, ISSN 0749-0208.
  7. A. C. Kemp, B. P. Horton, J. P. Donnelly, M. E. Mann, M. Vermeer, S. Rahmstorf, “Climate related sea-level variations over the past two millennia”, PNAS, 2011.
  8. A. Grinsted, J. C. Moore, S. Jevrejeva, “Reconstructing sea level from paleo and projected temperatures 200 to 2100 AD”, Climate Dynamics, 2009.

This sequence represents quite a dustup! It was also picked up by media, especially in Australia, and used to argue things like “There’s no evidence for global climate change!”

To bound what kind of timeframe is needed to detect an acceleration in SLR going into the future, Haigh, et al generated a large number of artificial test series: projections, with realistic noise added. They describe the procedure as follows:

We used the Allen and Smith [16] AR(1) model to generate the future time series of realistic interannual variability. First, Lag-1 autocorrelation and noise variance parameters were individually estimated from each of the 12 de-trended (using a linear rate estimated over the common period 1915–2009) sea level records (Table 1). For each of the 12 records in turn, we then used the AR(1) model, with the autocorrelation and variance parameters estimated from that particular historic record, to randomly generate 10,000 time series, which represent a range of realistic future (2010–2100) interannual variability (Fig. 2).
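In outline, that noise-generation step estimates a lag-1 autocorrelation and an innovation variance from each detrended record, then simulates many AR(1) futures. Here is a minimal sketch of such a procedure, in Python rather than R; the “historic” record below is synthetic, and all numbers are illustrative, not taken from the paper:

```python
import numpy as np

def fit_ar1(x):
    """Estimate lag-1 autocorrelation and innovation std. dev. of a detrended series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])  # lag-1 regression coefficient
    resid = x[1:] - phi * x[:-1]
    return phi, resid.std(ddof=1)

def simulate_ar1(phi, sigma, n, n_series, rng):
    """Generate n_series AR(1) paths of length n: x[t] = phi * x[t-1] + e[t]."""
    out = np.empty((n_series, n))
    # start from the stationary distribution so early values are not biased
    out[:, 0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), size=n_series)
    for t in range(1, n):
        out[:, t] = phi * out[:, t - 1] + rng.normal(0.0, sigma, size=n_series)
    return out

rng = np.random.default_rng(42)
# a synthetic detrended "historic" record standing in for a tide-gauge residual
historic = simulate_ar1(0.6, 1.0, 95, 1, rng)[0]
phi_hat, sigma_hat = fit_ar1(historic)
future = simulate_ar1(phi_hat, sigma_hat, 91, 10000, rng)  # 2010-2100, 10,000 series
```

Note there is nothing Allen-and-Smith-specific in this: it is plain AR(1) simulation, which is exactly my complaint below.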

The M. R. Allen and L. A. Smith “[16]” paper appeared in Geophysical Research Letters, 21(10), 883-886, May 15, 1994, and was titled “Investigating the origins and significance of low-frequency modes of climate variability”. Their point was to investigate the Singular Spectrum Analysis (“SSA”) method in the presence of “red noise”, and produce a technique called “Monte Carlo SSA”. This technique was further developed by Allen and Smith in 1996, in an article in Journal of Climate, 9, 3373-3404, December 1996.
I fully expect that Haigh, et al should be familiar with this additional paper, since it appeared well before their own. The point is, I believe, from what I can tell, that Haigh, et al did not use the Allen and Smith technique properly, since they simply model a signal using an AR(1) process and do not consider its spectrum at all. Their citation of the Allen paper was intended to cite a use, from a paper in the field, of adding AR(1) noise to make a signal more naturally realistic. This is important. The abstract of the Allen and Smith 1996 paper reads:

Singular systems (or singular spectrum) analysis (SSA) was originally proposed for noise reduction in the analysis of experimental data and is now becoming widely used to identify intermittent or modulated oscillations in geophysical and climatic time series. Progress has been hindered by a lack of effective statistical tests to discriminate between potential oscillations and anything but the simplest form of noise, that is, “white” (independent, identically distributed) noise, in which power is independent of frequency. The authors show how the basic formalism of SSA provides a natural test for modulated oscillations against an arbitrary “colored noise” null hypothesis. This test, Monte Carlo SSA, is illustrated using synthetic data in three situations: (i) where there is prior knowledge of the power-spectral characteristics of the noise, a situation expected in some laboratory and engineering applications, or when the “noise” against which the data is being tested consists of the output of an independently specified model, such as a climate model; (ii) where a simple hypothetical noise model is tested, namely, that the data consists only of white or colored noise; and (iii) where a composite hypothetical noise model is tested, assuming some deterministic components have already been found in the data, such as a trend or annual cycle, and it needs to be established whether the remainder may be attributed to noise. The authors examine two historical temperature records and show that the strength of the evidence provided by SSA for interannual and interdecadal climate oscillations in such data has been considerably overestimated ….

Since Haigh, et al do not address singular spectra or SVD or EOFs anywhere in their paper, their claim that they “… used the Allen and Smith [16] AR(1) model to generate the future time series of realistic interannual variability” is badly overstated. Simulating noise using AR(n) is not at all an Allen and Smith innovation, being a standard device from time series analysis, available, in fact, in the arima.sim function of the base R installation. See, for instance, P. S. P. Cowpertwait, A. W. Metcalfe, Introductory Time Series with R, Springer, 2009, Subsection 6.6.1. Nevertheless, AR(1) is a basic simulation of noise, called “red noise”. It was used in Allen and Smith, although not for the purpose to which Haigh, et al put it. Indeed, business and financial forecasters simulating future markets or profits add AR(1) (and other ARIMA) noise to their deterministic signals all the time, whether this is justified or not.

Indeed, from my perspective, these are flaws serious enough that I think (a) Allen and Smith have been misrepresented in their practice, (b) other students of the problem may thereby be misled by the Haigh, et al characterization, and (c), therefore, the Haigh, et al paper should be withdrawn from Nature Communications.

The above stricken statement was made without my awareness of the “dustup” alluded to above, not being familiar with the practice in the field, and, in that context, I was wrong. With respect to the authors Houston, Dean, and Watson: if any papers should be withdrawn, it is theirs, for their methods are completely indefensible, and I can understand why Haigh, et al, and Rahmstorf and Vermeer, needed to mount a quick response. Worse, I think the technique Houston, Dean, and Watson used, literally fitting a quadratic to sea level rise data, is so simple it could be used to mislead political leaders, media, and a gullible public, who, generally speaking, would not comprehend its shortcomings. There was a real danger there! Criticizing the methods of Houston and Dean from a statistical perspective is something a statistical organization should have taken on at the time, possibly the Section on Statistics and the Environment of the American Statistical Association. In my opinion, Haigh, et al had little choice but to argue on the basis of evidence.

It is entirely possible to do what they want to do with the data they have. I am simply objecting (strongly) to the methods they’ve employed. But I have no influence, not being a climate scientist, merely a lowly statistician who has used SSA with some success.

I also find their frolic with frequentist significance testing troubling, but I also know doing that is something which plagues much of climate science technical reporting, a habit shared with meteorology.

(Added 28th April 2014.)
In my view the aforementioned repair could be done either by deleting the Allen and Smith reference and simply invoking the use of AR(1) as colored noise, or by keeping the Allen and Smith reference and developing noise and trends in the forecasted sea levels in a true Allen and Smith manner, using the prediction method described in Section 5.3 of Ghil, Allen, Dettinger, Ide, Kondrashov, Mann, Robertson, Saunders, Tian, Varadi, and Yiou, “Advanced spectral methods for climatic time series”, Reviews of Geophysics, 40(1), 2002. The paper would be a lot stronger using the latter method.

To illustrate the kind of thing that is possible with SSA, although it may not be directly applicable to Haigh, et al, and is something which Wahl, Jensen, Frank, and Haigh used in another paper (cited below) but perhaps could not use here because of Nature journal page limits, consider another use of the SSA technique, one described by the following papers:

  1. D. Kondrashov, R. Denton, Y. Y. Shprits, H. J. Singer, “Reconstruction of gaps in the past history of solar wind parameters”, Geophysical Research Letters, 41, March 2014.
  2. D. H. Schoellhamer, “Singular spectrum analysis for time series with missing data”, Geophysical Research Letters, 28(16), 3187–3190, 2001.
  3. T. Wahl, J. Jensen, T. Frank, I. D. Haigh, “Improved estimates of mean sea level changes in the German Bight over the last 166 years”, Ocean Dynamics, 61, 701–715, 2011.

The examples below are not from these papers; they are my own work, but that work uses the techniques described there.

Many people know of the Keeling Curve. This is the record of atmospheric carbon dioxide recorded at Mauna Loa. SSA can be used to decompose this curve into components. Such a decomposition is shown below, where the curve, shown in black in the lower right, is broken up into a trend and two sinusoids, displayed to the left of the original data.
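In outline, basic SSA embeds the series in a trajectory (Hankel) matrix, takes its singular value decomposition, and recovers additive components by antidiagonal averaging. Here is a minimal sketch in Python with numpy (the actual Keeling data and my production code are not reproduced here; the trend-plus-sinusoid series below is a synthetic stand-in):

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic SSA: embed with window length L, SVD, return one series per singular value."""
    n = len(x)
    K = n - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])  # trajectory (Hankel) matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])             # rank-one piece of X
        # antidiagonal (Hankel) averaging maps the matrix back to a series
        comps.append(np.array([Xk[::-1].diagonal(j - L + 1).mean() for j in range(n)]))
    return comps

t = np.arange(600)
x = 0.01 * t + np.sin(2 * np.pi * t / 12)  # linear trend plus an annual-like cycle
comps = ssa_decompose(x, L=60)
# a noiseless linear trend plus one sinusoid has trajectory rank at most 4,
# so the four leading components reconstruct the series almost exactly
recon = sum(comps[:4])
```

In the noiseless case the grouping of components is easy; with real data, deciding which components form the trend and which form oscillatory pairs is the substantive part of the analysis.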


Suppose now some of the data in the curve were deleted or missing. Here, it is possible to do a controlled experiment: we can pretend that the data is missing while actually having it. So consider what can be done if the data is deleted from the portion of the Keeling Curve where the green dots are shown.


To see how close the reconstruction comes to the original data, the figure below shows the same, but with the reconstruction plotted in red atop the green dots that are missing and are being reconstructed.


Not too bad. Of course, projections or predictions of the kind Haigh, et al want to do are trickier.
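The gap-filling used above can be caricatured as iterative SSA imputation: seed the gap with a crude guess, reconstruct the series from a few leading SSA components, overwrite the guess with the reconstruction at the missing points, and repeat. The sketch below, in Python, is my own simplification, not the exact algorithms of Kondrashov, et al or Schoellhamer, and the synthetic series again stands in for the Keeling Curve:

```python
import numpy as np

def ssa_reconstruct(x, L, r):
    """Series rebuilt from the r leading SSA components (window length L)."""
    n = len(x)
    K = n - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r approximation
    # antidiagonal averaging maps the matrix back to a series
    return np.array([Xr[::-1].diagonal(j - L + 1).mean() for j in range(n)])

def ssa_fill_gaps(x, missing, L=60, r=4, iters=100):
    """Iteratively impute missing values with low-rank SSA reconstructions."""
    y = np.array(x, dtype=float)
    y[missing] = x[~missing].mean()                       # crude initial guess
    for _ in range(iters):
        y[missing] = ssa_reconstruct(y, L, r)[missing]
    return y

# controlled experiment: delete a stretch of a known trend-plus-cycle series
t = np.arange(600)
x = 0.01 * t + np.sin(2 * np.pi * t / 12)
missing = np.zeros(t.size, dtype=bool)
missing[320:332] = True
filled = ssa_fill_gaps(x, missing)
gap_error = np.max(np.abs(filled[missing] - x[missing]))
guess_error = np.max(np.abs(x[~missing].mean() - x[missing]))
```

On this clean synthetic signal the imputed values land much closer to the deleted ones than the constant initial guess does; with real, noisy data the window length L, the rank r, and convergence all need care.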

(At ch1ppos’ request)


In future, I think it would be better, in any field where there appears to be a serious statistical shortcoming in a published technique, for practicing statisticians to be approached through their organizations, such as the ASA or the Royal Statistical Society, to critique and comment upon the methods. It would be incumbent upon such statisticians to teach why the proposed techniques have shortcomings, illustrating the drawbacks with examples, and offering remedies.

About ecoquant

Retired data scientist and statistician. Now working on projects in quantitative ecology and, specifically, the phenology of Bryophyta and technical methods for their study.

8 Responses to Comment on “Timescales for detecting a significant acceleration in sea level rise” by Haigh, et al

  1. The post has been heavily revised, to be consistent with the full story.

  2. @ch1ppos I am amending the post to include code (or a link to code) which does this. It won’t precisely reproduce the Keeling curve, nor is it well documented, but it’s the best I can provide for now. See the references given in the code for more information. This code was used with a particular set of time series which have been removed to protect the client.

    • ch1ppos says:

      Thanks. It turns out that the Keeling curve SSA example is in the documentation for Rssa::ssa

      • Yes, but as far as I know, the Keeling curve is simply analyzed by SSA. There is no example there for reconstructing a deleted portion. There is a “forecast” function which projects what CO2 concentrations would be if the most recent portion of the curve were deleted. The reference for reconstructing a deleted portion is given in the comments for the code I cited. It describes the specifics of the technique, and a description of the method should be gotten from there.

  3. ch1ppos says:

    I am very intrigued by the additional information you posted about SSA. Would you mind sharing the R code you used for the Keeling Curve? Did you use a particular package?

    I am attempting to reproduce some of the Haigh study and I posted my initial code @

  4. ihaigh says:

    Hi Jan,

    I just read your critique of my paper with interest. Would you be willing to have a chat on the phone or via Skype? I would like to ask you a few things before I respond.

    All the best,

    Ivan Haigh

  5. Thanks for your comment.

    It wasn’t that the spectrum was poorly simulated. It’s that, based upon the material in the paper, NO singular spectrum was used, just the AR(1) spectrum.

    Trends are tricky. In climate observations, they are VERY tricky, in my opinion. A trend, if it is to mean anything, should have with it some scale in the size of the independent variable or predictor value, and should have some dynamic property the interpreter is expecting of it. Thus, taking the Keeling Curve of atmospheric CO2 concentration, that can be decomposed into a seasonal component, a linear trend, and a random residual. Which is the “real” trend? The seasonal? The linear? The random residual? They each have arguments in their favor.

    Surely the long term linear trend is “a trend”, but it is also the least informative regarding processes in the signal. Because it is the next-to-lowest derivative, there are more ways it can be estimated, and more ways these can disagree. (You can take lots of different temporal supports on the Keeling Curve, calculate slopes, and average them. Some of these overlap, so they need to be corrected for mutual correlation, etc.)

    The seasonal trend is important because it is the curve’s major long-term feature. It looks like it would persist whatever the linear trend did.

    The random residual is the trend with the greatest predictive power or information about this signal. Since both the linear trend and the seasonal trends are “smooth functions”, the only thing really interesting is the random residual. It’s the portion “adjusted for trend and seasonal variation”.

    So which is the real trend? Or are none of them?
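    As a concrete caricature of that three-way split, here is a minimal sketch in Python; the series below is a synthetic stand-in for the Keeling Curve, with made-up coefficients, not Mauna Loa data:

```python
import numpy as np

# synthetic stand-in for the Keeling Curve: linear trend + seasonal cycle + noise
rng = np.random.default_rng(0)
t = np.arange(0, 40, 1 / 12)                    # 40 years of monthly values
co2 = 315 + 1.5 * t + 3 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

# least-squares fit of an intercept, a linear trend, and an annual sinusoid pair
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, co2, rcond=None)

linear_trend = A[:, :2] @ coef[:2]              # "the linear trend"
seasonal = A[:, 2:] @ coef[2:]                  # "the seasonal trend"
residual = co2 - linear_trend - seasonal        # the remainder, "adjusted for
                                                # trend and seasonal variation"
```

    Each of the three pieces is a candidate answer to “which is the real trend?”, which is exactly the point.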

  6. ch1ppos says:

    Nice post. I did not pick up on the issue of poorly simulated spectrum. I suspect that I would have been one of those misled by this study.

    Do you think that the overall premise of the paper has merit? The idea being that trends can be elucidated by smoothing. This would seem to be an elementary insight.
