I attended the 2015 edition of the Southern New England Meteorology Conference in Milton, MA, near Great Blue Hill and its Blue Hill Climatological Observatory, of which I am a member, as well as of the American Meteorological Society. I belong to these organizations because of my interest in climate, as readers of these blog posts can tell, and because of journals like the Journal of Climate, the Journal of Applied Meteorology and Climatology, and the Journal of Physical Oceanography. They do a lot with time series, which is these days my primary professional interest. Meteorologists in the National Weather Service, in broadcast media, and on the Web face a constant demand to communicate complicated technical matters concerning probability and risk to a generally innumerate public. I am always able to learn something from them.
There was much discussion of last winter’s snows in New England, and an interesting bit about hydrology from Drs Ed Capone and Linda Hutchins. Related to statistics, I learned two specific things, both from Dr Brad Panovich of WCNC-TV, Charlotte, who gave a talk titled “A perspective on the accuracy of meteorologists.”
The first pertained to an observation that, because of spatial uncertainty, a statement of risk-as-probability for a particular city might be stated as being, say, 20%. While that sounds like merely 1-in-5, in fact, given the model, plan, analysis, and uncertainty, 20% might be the maximum that can be assigned to any specific location, and that matters from the perspective of planning and risk assessment. The exciting thing about this observation is that I believe it connects with two textbooks I am studying, one by Michael Evans, Measuring Statistical Evidence Using Relative Belief, and the other by Sadanori Konishi and Genshiro Kitagawa, Information Criteria and Statistical Modeling. The first of these has received careful analysis and comment, from a classical Bayesian perspective, by the great and prolific Christian Robert. (See also.) In short, I see opportunities for re-expressing statements of risk as relative belief, that is, as changes in risk relative to what was believed before the new information was in hand. I also think the practice of assessing predictive skill in meteorology remains somewhat sloppy, even if the field is beginning to address criteria like Brier scores, and it could benefit from some of the capability of the information criteria schools.
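To make the two ideas above concrete, here is a minimal sketch, with invented numbers, of a Brier score (one of the skill criteria just mentioned) and of an Evans-style relative belief ratio applied to the 20% example, assuming a hypothetical prior risk of 5% for that location:

```python
# Minimal sketch: Brier score for probabilistic forecasts, and a
# relative belief ratio. All numbers below are invented for illustration.

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and
    binary outcomes (0 = event did not occur, 1 = it did).
    Lower is better; 0 is a perfect forecast."""
    assert len(probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def relative_belief(prior_p, posterior_p):
    """Evans-style relative belief ratio: values > 1 indicate the new
    information is evidence in favor of the event."""
    return posterior_p / prior_p

# Five hypothetical precipitation forecasts and what actually happened:
forecast = [0.2, 0.7, 0.9, 0.1, 0.5]
observed = [0, 1, 1, 0, 1]
print(round(brier_score(forecast, observed), 3))   # 0.08

# If a 5% prior risk is updated to the 20% in the talk's example,
# belief in the event has quadrupled, however modest 20% sounds:
print(round(relative_belief(0.05, 0.20), 2))       # 4.0
```

The point of the second function is the one made above: a "20% risk" reads very differently when expressed as a fourfold increase in belief relative to before the forecast was in hand.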
The second was a personal communication with Dr Panovich, during lunch, where I asked about the cone of probability which is used to depict forecasts of tropical storm positions.
Despite the accolades this representation receives, it seems to me that it depicts an overly smooth risk density, and perhaps not even that: it depicts the position of the storm center, or something like it, which is nothing like, for instance, an integrated risk of extreme rainfall. Moreover, these cones look unimodal, and I can’t imagine actual risk laydowns being unimodal. Few actual densities are, including things like that for equilibrium climate sensitivity (“ECS”):
Well, Dr Panovich explained that the representation was indeed a simplified one, and that perhaps a better representation was needed; this version, however, was a step in the improvement of conveying risk to the public. He indicated he liked the ensemble tracks from multiple models and, indeed, such an ensemble is a kind of posterior density estimate, one which does suggest multimodality.
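The point about ensemble tracks as a posterior density estimate can be sketched simply: treat the ensemble members as samples and smooth them with a Gaussian kernel. The landfall longitudes below are invented, with members clustered around two distinct tracks, and the resulting density is bimodal in a way no single smooth cone can show:

```python
# Sketch: a Gaussian kernel density estimate over hypothetical ensemble
# landfall longitudes. Two clusters of members yield a bimodal density.
import math

def kde(samples, x, bandwidth=0.5):
    """Gaussian kernel density estimate of the samples, evaluated at x."""
    n = len(samples)
    return sum(
        math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
        / (bandwidth * math.sqrt(2 * math.pi))
        for s in samples
    ) / n

# Invented ensemble landfall longitudes, clustered around two tracks:
members = [-80.1, -80.3, -79.9, -80.0, -75.2, -75.0, -74.8]

# Density near either cluster dwarfs the density between them:
print(kde(members, -80.0) > 5 * kde(members, -77.5))  # True
```

A cone drawn over these members would sweep smoothly across longitude -77.5, exactly where the estimated risk is nearly zero.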
Incidentally, regarding ECS, note that the density conditioned on being over land is starkly different from that over the oceans, or from the global one.
And that brings me to the most disappointing presentation, one by Dr Joseph D’Aleo, former Chief Meteorologist and (apparently) a co-founder of the Weather Channel. In addition to strongly suggesting solar forcing as the dominant determiner of climate, he rattled off well-known global climatological oscillations, or teleconnection drivers, like the PDO, AMO, ENSO, and NAO, as if they were new, and claimed his current employer, WeatherBell Analytics, has used them successfully to predict long-term trends in weather over the past couple of years. Surely the influence of these oscillations on weather patterns is well known, and they have the advantage of being long-lived statistics derived from stable, well-measured quantities, like sea surface temperatures. (Plus some serious linear inverse modeling by NOAA.)
But appealing to these as determiners of climate is a fallacy of oversimplification and causal retreat. The argument attributes climate to these oscillations, but does not explain their own drivers. Solar forcing is claimed by D’Aleo to be a significant determiner, yet the correlation coefficient which he in one place cited for the connection between an oscillation and solar forcing was 0.7. There are correlations between other quantities of interest which are much higher, and these prove nothing, though they may suggest something worth checking out.
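As a quick aside on that cited figure: even taking r = 0.7 at face value, the squared correlation means only about half the variance is accounted for, leaving the rest to other drivers. A minimal sketch, with a Pearson correlation computed from toy data for contrast:

```python
# Sketch: Pearson correlation from first principles, plus the variance
# explained by the cited r = 0.7. The toy series are invented.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A perfectly linear toy relationship gives r = 1:
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 3))  # 1.0

# The cited r = 0.7 explains only r^2 of the variance:
print(f"r = 0.7 explains {0.7 ** 2:.0%} of variance")   # 49%
```

Which is the point: a correlation that leaves roughly half the variance unexplained is a lead to investigate, not a demonstrated determiner.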
And, in fact, all the problems which plague any non-stationary time series, whether of temperature or of tropical storm incidence, surely plague series of these indices as well: offhand, it is not known how much of their variability comes from external drivers, and how much of their behavior is due to resonances within the systems that produce them. Emphasizing these oscillations and teleconnections, however real they may be, does not, in my limited opinion, add any clarity to the mechanisms by which excess energy from CO2 forcing is being distributed throughout the climate system, and it does not make distinguishing one forcer from another any easier.
Dr D’Aleo also suggested high latitude volcanic activity was related to solar forcing, a claim which he backed away from and characterized as “speculative” when questioned about it by a member of the audience.
It should be noted meteorologist Dr Joe Bastardi is also an employee of WeatherBell Analytics, being its Chief Forecaster.
A final note regarding my developing view of statistics: between Konishi and Kitagawa, and the work of Kevin Burnham and David Anderson in Model Selection and Multimodel Inference on the one hand, and that of James Spall in Introduction to Stochastic Search and Optimization on the other, I see myself less as a Bayesian statistician and more as a numerical analyst and engineer who views Bayesian inference as a problem of optimizing a posterior density. Moreover, model comparison in the Bayesian world has, in my limited opinion, not been cleanly worked out, so information criteria are the most sensible means for comparison. I think meteorological forecasting could also benefit from this perspective. Maybe there’s a paper downstream about all this, one which might be presented at a future Southern New England conference.
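The information-criterion view of model comparison mentioned above can be illustrated in a few lines. This is a sketch using AIC, the criterion emphasized by Burnham and Anderson (Konishi and Kitagawa treat a broader family); the log-likelihoods below are invented for illustration:

```python
# Sketch: comparing two hypothetical models of the same data with the
# Akaike Information Criterion, AIC = 2k - 2*ln(L_max), where k is the
# number of fitted parameters and L_max the maximized likelihood.

def aic(k, log_likelihood):
    """Akaike Information Criterion; lower values are preferred."""
    return 2 * k - 2 * log_likelihood

# Suppose a 3-parameter model fits slightly better than a 2-parameter
# one, but not by enough to justify the extra parameter:
simple = aic(k=2, log_likelihood=-120.0)    # 244.0
complex_ = aic(k=3, log_likelihood=-119.8)  # about 245.6
print("prefer simple model:", simple < complex_)
```

The appeal, for forecasting, is that the penalty term makes the trade between fit and complexity explicit, rather than leaving it to the unresolved machinery of Bayesian model comparison.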