Updated 2018-11-14: See at bottom
Professor Nic Lewis has criticized the Resplandy, Keeling, et al paper in Nature which I previously mentioned. A summary of his criticism appears in the somewhat libertarian ezine Reason. I have responded there, but their commenting policy limits a thorough response. Not everything can be answered in fewer than 150, or for that matter 2000, characters. Accordingly, I have posted the response in full here, below the horizontal line.
I apologize to the readership for the poor formatting, or rather the lack of formatting, which Reason, as ostentatious as its name sounds, is incapable of supporting in its comments. I didn’t feel it worth revising the text here, even though WordPress is perfectly capable of doing that.
I preface by saying I’ve not read the preceding comments, and so I apologize if someone has already said what I’m going to say here. I have, of course, read the article above, which claims to represent Professor Lewis’ critique of Resplandy, et al (2018) fairly. I have had a quick read of the critique, although I have not, for reasons that will become evident, invested the time to reproduce the calculations. And I have had a careful read of Resplandy, Keeling, et al (2018), the research paper in NATURE which is the subject of Professor Lewis’ critique.
In particular, being a quantitative engineer practiced in stochastic methods, in addition to the new use of atmospheric chemistry in the Resplandy, et al paper, I was also interested in the ΔAPO-observed uncertainty analysis described in their Methods section where, as is reported, they generated a million time series “with noise scaled to the random and systematic errors of APO data detailed in Extended Data Table 3”. Later, in the calculation Professor Lewis is apparently criticizing, Resplandy, et al report they computed the ΔAPO-climate trend using the standard deviation of these million realizations, arriving at the 1.16 ± 0.15 per meg per year value Professor Lewis so objects to. I can’t really tell from his mental-arithmetic report and his least squares trend report whether or not he did the million-realization reproduction, but, as that is a major feature of the calculation, I rather doubt it. That’s because there are so many ways that calculation could be set up, all of which deserve reporting, and those details are missing from his criticism. So either he did not calculate the result in the same way, or, if he did, he is not sharing the details in sufficient depth for us or for Resplandy, et al to tell whether or not he did it the same way.
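For readers who want to see the shape of such a calculation, here is a minimal sketch in Python of the kind of Monte Carlo trend estimate the Methods section describes. It is emphatically not the authors’ code: the record length, the error magnitudes, and the way the systematic error enters are placeholders of my own, chosen only to show how the standard deviation of the fitted trends across realizations, rather than a single least squares fit, supplies the reported uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder record: 26 annual values with an illustrative underlying trend.
years = np.arange(1991, 2017)
t = years - years.mean()
true_trend = 1.16                      # per meg / yr, illustrative only
nominal = true_trend * t               # idealized noise-free series

n_realizations = 100_000               # the paper reports 10^6; fewer here for speed
random_err = 0.5                       # per meg, placeholder per-year random error
drift_err = 0.05                       # per meg / yr, placeholder stand-in for a
                                       #   time-correlated systematic error

# Ordinary least squares design matrix and its pseudoinverse.
X = np.column_stack([np.ones_like(t), t])
pinv = np.linalg.pinv(X)               # shape (2, n_years)

# Each realization: nominal series + per-year noise + an uncertain linear drift.
noise = rng.normal(0.0, random_err, size=(n_realizations, t.size))
noise += rng.normal(0.0, drift_err, size=(n_realizations, 1)) * t
betas = (nominal + noise) @ pinv.T     # fitted (intercept, slope) per realization
trends = betas[:, 1]

print(f"trend = {trends.mean():.2f} +/- {trends.std(ddof=1):.2f} per meg / yr")
```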
Given that this is the origin of Professor Lewis’ critique, and setting aside the rather casual complaint about “anthropogenic aerosol deposition”, which is more prominent in the above (mis?)characterization of Lewis than in the original (it appears only in footnote 8, and by way of explanation, not as a criticism), the rest of Lewis’ pile-on founders if this calculation was done wrong.
That’s the substance.
But what is really problematic is that Lewis’ critique is improper science. The way this gets done in peer review, at NATURE or SCIENCE or any other journal, including the JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION or the JOURNAL OF THE ROYAL STATISTICAL SOCIETY, with which I assume Professor Lewis is familiar, is that a letter is sent to the editors, with full technical details, almost akin to a research paper. Generally, the original authors and the critic, in that setting, are put in contact, and either they agree to write a joint response, resolving the objection with more detail, or the critic presents in detail, far more than Professor Lewis did in his one-off PDF, why they believe the original to be mistaken, and then the original authors get a chance to respond.
This is why I don’t really take Professor Lewis’ criticism seriously. He hasn’t allowed the assembled readership, including NATURE’s technical audience, to fully criticize his own criticism, because he has failed to document essential details. He is relying solely on his authority as a “statistician”.
In fact, there are other instances where Professor Lewis’ authority is circumscribed. For example, in 2013, Professor Lewis published a paper in JOURNAL OF CLIMATE titled “An objective Bayesian improved approach for applying optimal fingerprint techniques to estimate climate sensitivity” (vol 26, pages 2414ff) wherein he insists upon using a noninformative prior for the calculation of interest. That is certainly a permissible choice, and there is nothing technically wrong with the conclusion thus derived. However, in using citations to justify the practice, Lewis misrepresents the position of Kass and Wasserman (1996), who squarely identify proper Bayesian practice with using proper, non-uniform priors, and, moreover, identify several pitfalls with using uniform ones, pitfalls which, if Professor Lewis were faithful to his self-characterization of pursuing a Bayesian approach, he should address. He does not in that paper and, so, invites the question of why. There Professor Lewis is questioning a calculation of a higher climate sensitivity from fingerprinting techniques. It appears that he is seeking a rationale for why that sensitivity might not be so high. Surely invoking a device which admits uniform priors might yield such a rationale, but it is hardly good Bayesian practice.
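To illustrate the Kass and Wasserman point for readers who have not met it, here is a toy calculation of my own, with an invented likelihood, showing how a flat (“noninformative”) prior and a proper, non-uniform prior over a sensitivity-like parameter can yield different posterior summaries. Nothing here reproduces Lewis’ actual calculation; it only shows that the choice of prior is not innocuous.

```python
import numpy as np

s = np.linspace(0.1, 10.0, 2000)       # grid of candidate sensitivity values

# Invented, right-skewed likelihood centered near 3, loosely mimicking the
# long upper tail of fingerprint-based sensitivity estimates.
lik = np.exp(-0.5 * ((np.log(s) - np.log(3.0)) / 0.4) ** 2) / s

flat_prior = np.ones_like(s)           # "noninformative" uniform prior on the grid
proper_prior = np.exp(-0.5 * ((np.log(s) - np.log(2.5)) / 0.6) ** 2) / s  # proper lognormal

def posterior_mean(prior):
    post = lik * prior
    post /= post.sum()                 # normalize over the uniform grid
    return float((s * post).sum())

print("posterior mean, flat prior  :", round(posterior_mean(flat_prior), 2))
print("posterior mean, proper prior:", round(posterior_mean(proper_prior), 2))
```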
Accordingly, I wonder, for I cannot tell given what Professor Lewis has recorded in his cited objection, whether the result of Resplandy, et al is Professor Lewis’ real problem, and whether he is exploiting the subtle difference between doing an on-the-face-of-it least squares fit on the data and doing one based upon a million-fold stochastic simulation, a difference which the readers of REASON, for example, as erudite as they are, might not catch.
In my technical opinion, until Professor Lewis does the full work of a proper scientific or statistical criticism, his opinion is not worth much, and Resplandy, et al have every right to ignore him.
Update, 2018-11-14: Dr Ralph Keeling describes the smudge in the original study, and credits Prof Lewis for setting them on the right track. The details are included in a snap from the RealClimate summary below:
The revision is being submitted to Nature. Apparently, the problem is that the errors in the ensemble realizations were correlated, and they did not account for this. I’ll reserve judgment until I see their corrected contribution.
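As a generic illustration of why that matters, and emphatically not a reconstruction of their specific error structure, the sketch below shows that serially correlated errors widen the sampling distribution of a least squares trend relative to independent errors of the same marginal variance; treating correlated errors as independent therefore understates the trend uncertainty. The AR(1) model and its parameters are assumptions of mine, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_sims = 26, 20_000
t = np.arange(n_years) - (n_years - 1) / 2.0
X = np.column_stack([np.ones(n_years), t])

def fitted_slope(y):
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

sigma, phi = 1.0, 0.7                  # marginal s.d. and AR(1) coefficient (placeholders)

iid_slopes, ar1_slopes = [], []
for _ in range(n_sims):
    # Independent errors.
    e_iid = rng.normal(0.0, sigma, n_years)
    # AR(1) errors with the same marginal variance as the independent case.
    innov = rng.normal(0.0, sigma * np.sqrt(1.0 - phi ** 2), n_years)
    e_ar1 = np.empty(n_years)
    e_ar1[0] = rng.normal(0.0, sigma)
    for k in range(1, n_years):
        e_ar1[k] = phi * e_ar1[k - 1] + innov[k]
    iid_slopes.append(fitted_slope(e_iid))
    ar1_slopes.append(fitted_slope(e_ar1))

print("s.d. of fitted trend, independent errors:", round(float(np.std(iid_slopes)), 3))
print("s.d. of fitted trend, AR(1) errors      :", round(float(np.std(ar1_slopes)), 3))
```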
One thing I’d say, however, is that if the ensemble was generated using something like a simple bootstrap, there’s no reason for the resulting errors to be correlated. I can’t say until I see the actual details. But, if I am correct, they could have used a Politis-Romano stationary bootstrap instead, and that would have taken care of the dependence. Note, in addition, the remark by Nordstrom.
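For concreteness, here is a minimal sketch of the Politis-Romano stationary bootstrap applied to residuals from a trend fit, the sort of device I have in mind. The synthetic series, the block-length parameter p, and the number of resamples are placeholders; the point is only the mechanics of resampling in blocks of geometrically distributed length so that short-range dependence is preserved.

```python
import numpy as np

rng = np.random.default_rng(1)

def stationary_bootstrap(x, p, rng):
    """One stationary-bootstrap resample of the 1-D array x (Politis & Romano, 1994)."""
    n = len(x)
    out = np.empty(n)
    idx = rng.integers(n)
    for i in range(n):
        out[i] = x[idx]
        # With probability p, start a new block at a random position;
        # otherwise continue the current block, wrapping around the series.
        idx = rng.integers(n) if rng.random() < p else (idx + 1) % n
    return out

# Placeholder series: a linear trend plus mildly dependent (MA-type) noise.
n = 26
t = np.arange(n, dtype=float)
e = rng.normal(0.0, 0.3, n + 2)
y = 1.16 * t + (e[2:] + 0.6 * e[1:-1] + 0.3 * e[:-2])

# Fit the trend, resample the residuals in blocks, refit, and summarize.
X = np.column_stack([np.ones(n), t])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

boot_trends = []
for _ in range(5000):
    y_star = X @ beta + stationary_bootstrap(resid, p=0.25, rng=rng)  # mean block length 1/p = 4
    boot_trends.append(np.linalg.lstsq(X, y_star, rcond=None)[0][1])

print(f"trend = {beta[1]:.2f} +/- {np.std(boot_trends):.2f} (stationary bootstrap s.e.)")
```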
Thanks. I’ve moved on.
Lewis’s full critique and supporting data are set forth in two posts at Judith Curry’s “Climate, Etc.” blog.
The post has been updated with the preliminary explanation by Professor Keeling regarding the crux of the underestimated uncertainty. I would also fault the authors for failing to document precisely how they generated the million estimates on which the trends were based, so that this could be examined. Without such documentation, one can only assume the calculations were done correctly. In this case, they apparently were not.
While I did not look at the extended description at Climate, Etc. of what Prof Lewis felt was the error, from the preliminary objection alone there was surely no reason to believe Prof Lewis had reproduced the calculation appropriately. I hope that Prof Lewis will submit his objection to Nature as a letter as well.