I posted a response to a comment from the blog author at the ellipsis-loving … and Then There’s Physics. The figures didn’t make it into the comment, and, so, I am reproducing the intended comment in its entirety here.
ATTP, you correctly pointed out that I was partly incorrect, and certainly incomplete. Kudos to you, and apologies to you and to the readers.
I hadn’t read Armour 2017. I have now. I did read ATTP’s assessment and, yes, it does mention Armour deals with nonlinearity. And, yes, it does mention that the histogram is from CMIP runs, but I interpreted it differently than it should have been interpreted. I have not read Richardson, and probably won’t. I also assumed that the Armour figure was something Stephens was using in his “criticism of excessive certainty” but have gone back and seen that there is another parse to this post which is consistent with Stephens not mentioning Armour at all.
I also have not read Stephens, and perhaps I should before commenting, but I won’t.
The point I tried to make was essentially that a situation of uncertainty and ignorance, where a decision ought to be made and where the consequences could be enormous, is not the place to claim "It's okay to remain ignorant." In effect, this enshrines "Do nothing until someone proves you have to," which might work for some common decisions, but taking a big ship into an iceberg-strewn sea because it hasn't hit anything yet hardly seems prudent.
I am also not convinced, with all respect to Armour, that the adjustment for nonlinearity they attempt helps the argument much, and ATTP hinted at that in his previous post (beginning "… A few additional points. We don’t know that these adjustments are correct. However, we do have a situation where there is a mismatch between different climate sensitivity estimates …"). In the public discussion of climate change, highlighting these kinds of papers tends, I think, to convince people there’s more arbitrariness to this process than is correct. After all, there have been similar papers published by Meraner, Mauritsen, and Voigt, as well as Caballero and Huber, the latter focussing upon nonlinearity in ECS and having a good introduction. These echo Pierrehumbert’s comment "Here there (may) be dragons", and his assessment, as of 2013, that
…there have already been great strides in understanding the magnitude and pattern of warmth in hothouse climates, which have helped resolve some earlier modeling paradoxes, but much remains to be done. In particular, narrowing the broad error bars on past atmospheric CO2 is crucial to relating these climates to what is going on at present.
More recently there is the published work of Friedrich, Timmermann, Tigchelaar, Timm, and Ganopolski.
Consider Pierrehumbert’s equation (3.14) for temperature sensitivity (specifically mean surface temperature \(T_s\)) with respect to some parameter \(\Lambda\), where \(\Lambda\) might be, as Pierrehumbert suggests, albedo, or CO2 concentration, or the solar constant:

\[ \frac{dT_s}{d\Lambda} = -\frac{\partial G/\partial \Lambda}{\partial G/\partial T_s} \]

Here \(G(T_s, \Lambda)\) is the net top-of-atmosphere flux, and \(\mathrm{OLR}(T_s)\) is outgoing longwave radiation expressed as a function of surface temperature (*). This is pretty standard, even if it is very general, much more general than, say, Armour’s equations (1)-(3). From a statistical perspective, what’s striking about the above is that if \(\partial G/\partial \Lambda\) and \(\partial G/\partial T_s\) are each interpreted to be random variables worthy of estimation by whatever means, then \(dT_s/d\Lambda\) is a random variable which is drawn from a ratio distribution. And should the Highest Density Probability Interval for the denominator \(\partial G/\partial T_s\) include zero, whatever the physical reason, the distribution of \(dT_s/d\Lambda\) is pretty meaningless. A good physical imagination offers any number of ways this could happen, but Professor Pierrehumbert’s discussion in Section 3.4 of his book describes the possible (mathematical) range, irrespective of the geophysical details. And because what we are after is the change in \(T_s\) as a function of all relevant parameters \(\Lambda_1, \Lambda_2, \ldots\), that being a total differential, excessive variability in any one such term will dominate that of the rest. Note extreme variability is not our friend, no matter what vision of a cultural or economic future we might have.
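The ratio-distribution point is easy to see numerically. Here is a minimal Monte Carlo sketch, with purely hypothetical numbers standing in for the numerator and denominator derivatives, chosen only so that the denominator's highest-density interval straddles zero: the central mass of the ratio looks tame while the tails explode.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Numerator derivative: hypothetical, comfortably away from zero.
num = rng.normal(1.0, 0.3, n)

# Denominator derivative: hypothetical, with a 95% interval of
# roughly [-0.09, 1.09] -- i.e., its highest-density interval
# includes zero.
den = rng.normal(0.5, 0.3, n)

# The "sensitivity" is then ratio-distributed.
sens = -num / den

# The central 50% of the distribution looks well-behaved...
q25, q50, q75 = np.percentile(sens, [25, 50, 75])
# ...but the 1%-99% interval is vastly wider, because draws of the
# denominator near zero throw the ratio to huge magnitudes.
q01, q99 = np.percentile(sens, [1, 99])

print(f"median: {q50:.2f}, IQR width: {q75 - q25:.2f}")
print(f"1%-99% width: {q99 - q01:.2f}")
```

Summaries like a mean or a symmetric error bar on such a quantity are close to meaningless; the spread depends almost entirely on how much denominator mass sits near zero.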
If ECS is going to continue to be used as the basis of argument and policy, it seems to need to be made far more robust than it is. That’s the point of my argument for much more additional work. If we are to keep this troubled concept in the planning stables, we desperately need to understand the bounds on its applicability. Armour is a start, but Armour simply says there might be problems when we already know there are problems from theory. What we need are constraints. Otherwise, ECS is a “nice to have if the world were a different place.” But then we don’t really have it, except knowing that there could be “dragons” out there.
I think there are much better arguments, and there are much better problems to chase. For instance, here is the definitive plot from Fyfe, Gillett, and Zwiers:
I have noted (**; Section 7) that what’s wrong with this presentation is not that the Highest Density Probability Interval for the climate models fails to overlap the observational mean and cloud; it’s that there is such a big difference between the observational variance and that of the model ensemble. The specifics of the discrepancy, seen as a t-test based upon a difference in means, led to the later explanation by Cowtan and Way and then a rebuttal by Fyfe and Gillett. I say, rather, that the reason for the discrepancy is deep, having more to do with the difference in variances (***), and probably not something we can expect most of the public or most policymakers to understand, at least without understanding something like Leonard Smith’s Chaos: A Very Short Introduction. The climate ensemble simulates all possible futures, and Earth takes one future at a time. I have read all around this in the literature, and there seems to be confusion about what internal variability means. Yes, there’s unexplained internal variability, but there’s a lot of evidence for stochastic variability even if all the phenomena grouped under internal variability were deeply understood. That’s important, because it makes what Bret Stephens and others like Judith Curry want to do a fundamentally flawed project. This stochastic variability on top of everything could be enough to send us all over some kind of potential cliff, even if emissions were managed to some precalculated minimax loss-versus-economic-benefit point.
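The variance point in footnote (***) can be sketched in a few lines. With the same difference in means, inflating the variance attributed to the observations shrinks the t statistic and hence the apparent significance. The numbers below are hypothetical, chosen only for illustration, not taken from Fyfe, Gillett, and Zwiers.

```python
import math

def welch_t(mean_a, var_a, n_a, mean_b, var_b, n_b):
    """Welch's t statistic for a difference in means with unequal variances."""
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Hypothetical trend estimates (K/decade), for illustration only.
model_mean, model_var, n_models = 0.21, 0.004, 38
obs_mean, n_obs = 0.11, 3

# Treating the observations as nearly noise-free: the mean
# difference looks highly significant.
t_small = welch_t(model_mean, model_var, n_models, obs_mean, 0.001, n_obs)

# Granting the observational record a variance closer to that of
# a single chaotic realization: the same mean difference is far
# less significant.
t_large = welch_t(model_mean, model_var, n_models, obs_mean, 0.02, n_obs)

print(f"t with small observational variance: {t_small:.2f}")
print(f"t with large observational variance: {t_large:.2f}")
```

The mean difference never changes; only the variance assigned to the single realized trajectory does, and that assignment is exactly where the ensemble-versus-Earth confusion lives.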
Here’s a rhetorical question when dealing with the public and policymakers: Why not go back to simple conservation of energy arguments, and point out that radiative forcing from CO2 is indisputable? The excess energy from forcing is going to go somewhere, and where it’s gone in the past may not be where it continues to go, ditto CO2 itself. Sure, this frustrates people who want a cost put on the phenomenon. But making up a cost is arguably worse than saying “We don’t have one.” Will the latter produce inaction? Possibly. But that’s what’s happening now, and people are trying to produce cost estimates.
Oh, and indeed, there are but 21 single socks in the Broman climate collection, per Armour’s count of the number of GCMs used reported at the top right of the second page of their article.
(*) See Professor Ray Pierrehumbert’s book for the intimate portrait of Earth as a planet, in the manner of Arnold Ross, with associated and very fine Python code.
(**) WARNING: Not peer-reviewed.
(***) Were the observational variance to be appreciably larger, the conclusion of a statistical test would be that the difference in means was less significant.