A response to a post on RealClimate

(Updated 2342 EDT, 28 June 2019.)

This is a response to a post on RealClimate which primarily concerned economist Ross McKitrick’s op-ed in the Financial Post condemning the geophysical community for disregarding Roger Pielke, Jr’s arguments. Pielke, in that link, recounts his crusade, joined by a few others, to argue that, in terms of a climate emergency, paraphrasing, “There’s nothing to see here, folks. Move along home.”

The post was by Michael Tobis, a retired climate scientist, working as a software developer and science writer living in Ottawa, Ontario.

Unfortunately, in my opinion, along the way Dr Tobis threw Statistics under a small bus, or at least mischaracterized it. I went on to agree with the criticism of McKitrick. But I felt that modern Statistics needed explaining, and I cautioned that a lot of the problem is that the statistics Pielke did were shoddy, using antiquated methods. I also pointed out that similar imperfections can be found in, for instance, the Journal of Climate. Separately, I’ve documented a particularly egregious case, one which contributed to the claim that there was a “hiatus” in rising global temperatures, something which was not true then, “if thou reckon right“, and turned out not to be true afterwards.

Anyway, apparently, RC isn’t going to post my Comment, so I’m placing it here, below, in its entirety. It’s up to them. The Comment may have been too far off the beaten path. I have also answered, above, a couple of my own questions about the original articles which pertain.

As of today, I note that my comment was indeed posted; the hold-up was probably simple moderation delay. This slightly revised version augments that.

I’m not sure which Pielke, Jr article McKitrick is referring to, and, whichever it is, I mean one with technical details (link?). Nevertheless, I wanted to gently disagree with part of Dr Tobis’s post above. Consider:

Statistics is a vital tool of science, but it is not the only one. It is most effective when dealing with large quantities of data. Using statistical methods to detect the effect of one factor among several amounts to proving that the other factors did not align as a matter of happenstance. The more abundant the data, the less likely such a coincidence.

To the extent that Pielke, Jr is running some kind of hypothesis test for determining attribution, the problem isn’t Statistics, it’s wrong-headed and out-of-date statistical technique.

It is not true that inference and estimation rely upon “large quantities of data”. Sometimes, in fact, relatively small amounts of well-chosen data are far more powerful: think of an experimental design with controls and balance. That’s important to remember. In many engagements I have with Big Data people, I need to emphasize that what matters isn’t the raw size of your dataset, it’s the number of replicates of each unique combination of explanatory variables you have. If most combinations in a big dataset have but one observation, that’s not a big dataset at all. And if there is a wide range in the numbers of replicates, that’s a problem with balance, one which can harm even seemingly non-parametric techniques like cross-validation.
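To make that point concrete, here is a minimal sketch in Python (pandas assumed; the column names and values are hypothetical) counting replicates per unique combination of explanatory variables, which is a better gauge of “size” than the raw row count:

```python
import pandas as pd

# Hypothetical dataset: two explanatory variables and a response.
df = pd.DataFrame({
    "site":   ["A", "A", "A", "B", "B", "C", "D", "E", "F", "G"],
    "season": ["w", "w", "s", "w", "s", "w", "s", "w", "s", "w"],
    "y":      [1.2, 1.4, 0.9, 2.1, 1.8, 0.7, 1.1, 2.4, 0.3, 1.6],
})

# Replicate counts for each unique combination of explanatory variables.
replicates = df.groupby(["site", "season"]).size()
print(replicates)

# "Effective" size: how many combinations have more than one observation?
print("raw n =", len(df))
print("combinations with > 1 replicate =", int((replicates > 1).sum()))
```

In this toy case the raw n is 10, but most combinations are observed exactly once, and the spread in replicate counts is exactly the kind of imbalance that can mislead cross-validation folds.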

In fact, there is now a rich set of methods for embodying domain knowledge in statistical models, namely the Bayesian hierarchical modeling approach, whether that knowledge derives from meteorology, climate science, or baseball statistics. Judging from the literature in, say, the Journal of Climate, few papers appear aware of these techniques, preferring to argue piecemeal from grounding in the domain. That’s okay, but large comprehensive studies are hard, and it’s unfortunate that these modern methods aren’t better understood and used.
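To show the flavor of partial pooling, here is a minimal sketch of a Bayesian hierarchical model in Python, assuming the PyMC library is available. The “regional trend” data are fabricated purely for illustration, and this is not the model of any paper cited here:

```python
import numpy as np
import pymc as pm

# Fabricated example data: 4 "regions", 10 noisy trend estimates each.
rng = np.random.default_rng(42)
region = np.repeat(np.arange(4), 10)
true_trend = np.array([0.15, 0.20, 0.25, 0.30])
y = rng.normal(true_trend[region], 0.1)

with pm.Model() as hier_model:
    # Hyperpriors: population-level mean trend and between-region spread.
    mu = pm.Normal("mu", 0.0, 1.0)
    tau = pm.HalfNormal("tau", 1.0)
    # Region-level trends, partially pooled toward the population mean.
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=4)
    sigma = pm.HalfNormal("sigma", 1.0)
    # Likelihood: each observation informs its own region's trend.
    pm.Normal("obs", mu=theta[region], sigma=sigma, observed=y)
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=1, progressbar=False)

# Posterior means of the regional trends, shrunk toward the shared mean mu.
print(trace.posterior["theta"].mean(dim=("chain", "draw")).values)
```

The point of the hierarchy is that sparsely observed regions borrow strength from the others through the shared hyperparameters, which is exactly where domain structure gets encoded.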

Deniers and doubters, in my experience, are the least likely to be aware of such methods, and I find many instances, whether in denial-oriented assessments of temperature or of sea level changes, where Statistics is practiced as if by rote, and with a lot of confusion. Time series seem to be a particular stumbling block.

Others have noted these kinds of blemishes, but such critiques often don’t go on to indicate what should be done instead. There are many sound statistical analyses demonstrating each of these claims, from the statistical significance of temperature rises, to increased damage from storms, to droughts and heatwaves, to phenological aberrations in species migrations. There is no doubt these are true.

There are many wonderful Bayesian analyses which can serve as examples. Schmittner et al. in Science in 2011 is a fine one (“Climate Sensitivity Estimated from Temperature Reconstructions of the Last Glacial Maximum”). There are several papers where Professor Mark Berliner is a co-author, like his “Uncertainty and Climate Change“, or, with Professor Levine, “Statistical Principles for Climate Change Studies“. The latter paper addresses the nuances of hypothesis testing, showing that while it seems simple, it is clouded with delicate assumptions. Smashing that all aside and oversimplifying: p-values are themselves random variables.
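That last point is easy to see by simulation. Here is a short sketch in Python (NumPy and SciPy assumed) that repeats a two-sample t-test many times when the null hypothesis is exactly true; the resulting p-values scatter uniformly over (0, 1), behaving as random variables rather than fixed verdicts:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate many two-sample t-tests with the null hypothesis exactly true:
# both samples come from the same distribution, so any "effect" is noise.
pvals = np.array([
    stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
    for _ in range(5000)
])

# Under the null, the p-value is uniform on (0, 1): roughly 5% of the
# tests land below 0.05 purely by chance.
print("fraction of p-values below 0.05:", (pvals < 0.05).mean())
```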

Statistics as a field hasn’t left these questions alone. The American Statistical Association has, like many scientific organizations, a formal statement on climate change. The status of Statistics within climate change science has also been examined, and there have been special issues of journals devoted to it.

I also urge readers to be open-minded about applying techniques from machine learning and related areas to geophysical problems. They certainly have their faults and limitations, opacity in explanation being one formidable issue. But as O’Gorman and Dwyer showed in 2018, ML techniques are beginning to make their presence felt in geophysics. See also Rasp, Pritchard, and Gentine in a 2018 PNAS paper.
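As a toy illustration of the kind of thing those papers do, here is a minimal sketch in Python, assuming scikit-learn; it is not the scheme of either paper, and the inputs and target are entirely synthetic stand-ins for “coarse-grid state variables” and a “subgrid tendency”:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for emulating a parameterization: learn a nonlinear
# map from three coarse-grid state variables to a subgrid-scale tendency.
X = rng.normal(size=(2000, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Held-out skill of the learned emulator.
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```

The opacity complaint applies here too: the fitted forest predicts well but does not, by itself, explain the physics it has absorbed.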

[Figure: spatiotemporal hierarchical Bayesian modeling of tropical ocean surface winds]

The figure above is from the paper:

Christopher K Wikle, Ralph F Milliff, Doug Nychka & L Mark Berliner (2001) “Spatiotemporal hierarchical Bayesian modeling of tropical ocean surface winds”, Journal of the American Statistical Association, 96:454, 382-397, DOI: 10.1198/016214501753168109
