## How to achieve Carbon neutrality in Massachusetts

Claire, our home, and I are featured in the section on “Electrifying our energy supply”, under “Local households making the switch to electricity”.

## 9 Misconceptions About Solar Energy

Claire and I and our home are featured in the case study in section “7. My home is not right for solar”.

## “Bayesian replication analysis” (by John Kruschke)

### “… the ability to express [hypotheses] as distributions over parameters …”

Bayesian estimation supersedes the t-test:

## Managed Retreat

“The case for managed retreat” in Science, by Siders, Hino, and Mach, 2019.

Canal development on the north side of Roy Creek, Assawoman Bay

Homes on the cliff edge at Happisburgh in Norfolk demonstrating levels of erosion along the East Coast.

## The state of the science: “Heißzeit” … where we are heading.

If you do not change direction, you may end up where you are heading.

` ― Lao Tzu`

Professor Johan Rockström, again.

Yeah, and that makes me feel, this way …

## CBRA is awesome!

Hat tip to Professor Rob Young and Audubon for a great newsfilm.

## From the Promise Forward Department

So, I hereby promise to read, assess, and report here on the following new paper:

`Y. Zhang, C. Song, L. E. Band, G. Sun, "No proportional increase of terrestrial gross carbon sequestration from the greening Earth", Journal of Geophysical Research: Biogeosciences, 2018JG004917, 2019.`

This is scholarship along lines I have studied before, such as:

## How much CO2 your country can still emit, in three simple steps (reblog from RealClimate)

Everyone is talking about emissions budgets – what are they and what do they mean for your country? Our CO2 emissions are causing global heating. If we want to stop global warming at a given temperature level, we can emit only a limited amount of CO2. That’s our emissions budget. I explained it here at RealClimate a couple of years ago: First of all – what the heck is …

Source, RealClimate: How much CO2 your country can still emit, in three simple steps

## “Between grounded hope and radical hope, that’s what we’re going to need for climate change.”

Certainly, for me, one of the reasons to get out of bed is that we really haven’t tried everything. Having done miserably at communication, having done miserably at policy, having done miserably at market responses to climate change gives us a ton of hope, because we could do so much better.

The other thing is we’re short-sighted human beings on many counts, and yet our species has managed to build cathedrals that took 300 years apiece. So it’s not like we can’t. The future isn’t written yet. It is still open in terms of how it’s going to be shaped.

Still, what we have to realize — and what’s dawning on many people now — is that we have put a lot of CO2 in the atmosphere that won’t just come out tomorrow. That’s why we have to make space for grief, fear, and all the rest of it in public spaces and in our private lives.

We’re dealing with a global system that’s highly interconnected. We have set so many things in motion that if you tried to control it right now, you couldn’t. We have sailed a ship, and the question is, are we going to keep blowing wind into its sails and sending it off into even more troubled waters, or are we going to do what we can to smooth out the waters, and make sure the opening to the harbor is wide enough for everyone?

There is a ton of space left in terms of what we can do. We can’t just do anything we want, because of the things we have already set in motion, but we can stop making it worse, and there are so many options to deal with the challenges and to make life much less miserable for the vast majority of the world’s people.

So I think it’s a matter of priorities and values, and reckoning with what we have done. In the public sphere, it’s called political work. In the private sphere, there is deeply personal transformational work that needs to be done.

## 20 July 1969

I worked at IBM Federal Systems Division in Owego, NY from May, 1976 to March of 1994. I did a lot with the IBM System/4 Pi and its operating and support software.

## Shifting to a Sustainable Future (Professor Steven Chu)

A lecture at MIT, in 2018, as the Hoyt C Hottel Lecture in Chemical Engineering.

Notable quote: “The half-life of CO2 in atmosphere is 10,000 years.” (Professor Steven Chu)

## Unpacking and Packing (WHOI)

### “What does it take to unpack and repack R/V Neil Armstrong?”

That’s the R/V Neil Armstrong operated by Woods Hole Oceanographic Institution out of Woods Hole, MA.

A bit appropriate as the 50th anniversary of Moon Day approaches.

## In case you wondered if Carbon Dioxide increases caused climate change, here’s the latest news

In case you wondered if Carbon Dioxide (also called carbonic acid, CO2) increases caused climate change, here’s the latest news … from 1856-1896:

## Solar plus storage is now cheaper than any non-solar electrical power

And, from that Lefty Socialist rag, Forbes.

## Natural Gas Companies are Doing Themselves In

As I wrote before, there will be no Golden Age of Natural Gas. The climate periodical DeSmog Blog now reports on the industry’s outright dissolution.

But the surprise is that natural gas miners, particularly shale gas miners, are apparently destroying themselves. This comes from an article at DeSmog Blog recounting a talk by Steve Schlotterbeck, a former shale gas CEO. The following quotes the DeSmog Blog article heavily, but I have added material found in the public domain from Schlotterbeck himself and from gas and oil industry trades.

Steve Schlotterbeck, who led drilling company EQT as it expanded to become the nation’s largest producer of natural gas in 2017, arrived at a petrochemical industry conference in Pittsburgh Friday morning with a blunt message about shale gas drilling and fracking.

Actually, this was at the Northeast Petrochemical Exhibition and Conference held on 20th June 2019. Mr Schlotterbeck’s full slide deck is available.

“The shale gas revolution has frankly been an unmitigated disaster for any buy-and-hold investor in the shale gas industry with very few limited exceptions,” Schlotterbeck, who left the helm of EQT last year, continued. “In fact, I’m not aware of another case of a disruptive technological change that has done so much harm to the industry that created the change.”

“While hundreds of billions of dollars of benefits have accrued to hundreds of millions of people, the amount of shareholder value destruction registers in the hundreds of billions of dollars,” he said. “The industry is self-destructive.”

Schlotterbeck is not the first industry insider to ring alarm bells about the shale industry’s record of producing vast amounts of gas while burning through far more cash than it can earn by selling that gas. And drillers’ own numbers speak for themselves. Reported spending outweighed income for a group of 29 large public shale gas companies by \$6.7 billion in 2018, bringing the group’s 2010 to 2018 cash flow to a total of negative \$181 billion, according to a March 2019 report by the Institute for Energy Economics and Financial Analysis.

But Schlotterbeck’s remarks, delivered to petrochemical and gas industry executives at the David L. Lawrence Convention Center in Pittsburgh, come from an individual uniquely positioned to understand how major Marcellus drillers make financial decisions — because he so recently ran a major shale gas drilling firm. Schlotterbeck now serves as a member of the board of directors at the Energy Innovation Center Institute, a nonprofit that offers energy industry training programs.

His warnings on Friday were also offered in unusually stark terms.

“The technological advancements developed by the industry have been the weapon of its own suicide,” Schlotterbeck added, referring to the financial impacts of shale gas drilling on shale gas drillers. “And unfortunately, the industry still has not fully realized how it’s killing itself. Since 2015, there’s been 172 E&P company bankruptcies involving nearly a hundred billion dollars of debt.”

“In a little more than a decade, most of these companies just destroyed a very large percentage of their companies’ value that they had at the beginning of the shale revolution,” he said. “It’s frankly hard to imagine the scope of the value destruction that has occurred. And it continues.”

At the Friday conference, he displayed a slide showing the stock prices of eight major Marcellus shale gas drillers: Antero, Range Resources, Cabot Oil and Gas, Southwestern Energy, CNX Gas, Gulfport, Chesapeake Energy, and EQT, the company that Schlotterbeck ran until he resigned in March 2018. Seven of the eight companies saw their stock prices fall between 40 percent and 95 percent since 2008, the slide showed.

“Excluding capital, the big eight basin producers have destroyed on average 80 percent of the value of their companies since the beginning of the shale revolution,” Schlotterbeck said. “This is not the fall from the peak price during the shale decade, this is the drop in their share price from before the shale revolution began.”

Mr. Schlotterbeck credited the shale rush with lowering power and natural gas bills nationwide and offering significant economic benefits since 2008, when he said the shale revolution began.

“Nearly every American has benefited from shale gas, with one big exception,” he said, “the shale gas investors.”

Residents of communities where shale gas drilling and fracking have caused disruptions and health issues might take exception to Mr. Schlotterbeck’s categorical description of the beneficiaries of shale gas, as might climate scientists who have warned that the shale industry’s greenhouse gas emissions are so severe that burning gas for power may be worse for the global climate than burning coal.

Only Cabot Oil and Gas, which owns the rights to drill gas from roughly 174,000 acres, mostly in one county in the northeastern corner of Pennsylvania, saw its stock price rise since 2008, according to Schlotterbeck’s presentation.

Cabot remains at the center of disputes tied to water contamination, a gas well blow-out, and other problems in Dimock, PA. One major lawsuit in that dispute was filed against Cabot back in November 2009 and legal battles have continued since. The company has denied liability and settled on undisclosed terms with landowners along Carter Road in Dimock.

Schlotterbeck made no mention of Dimock, focusing his remarks on the economic decisions made by the shale gas industry’s corporate management and boards of directors — not just in the past, but also in the present.

“The fact is that every time they put the drill bit to the ground, they erode the value of the billions of dollars of previous investments they have made,” he said. “It’s frankly no wonder that their equity valuations continue to fall dramatically.”

Can’t happen soon enough in my book, even if they are hoping for an upside. I can’t see how that will happen, with increasing prices. Of course, they could (try to) get a subsidy from the present federal government, but it would have to be using another so-called emergency declaration from 45.

There was also coverage by oil and gas industry periodicals. One or two are claiming the overall talk was positive, emphasizing Cabot’s performance. For example,

There’s another parse: The shale gas drillers understand there may soon be a day when they cannot afford to extract gas, even at the economies of fracking, which works in part because environmental costs are imposed upon local communities, fouling water supplies and causing structural damage to homes and buildings. So they are in a panic, spending investors’ capital in hopes of recovering as much of the explosive methane as possible while they can still do something with it.

On the other hand, Schlotterbeck has continued to acknowledge that the pipeline situation, particularly in the eastern United States, is a big headache. That’s terrific.

## “… [A] new scientific paper overstates forests’ potential” (Reynolds)

##### (On 2019-07-06, repaired a typo, and on 2019-07-16 linked in a post by Professor Stefan Rahmstorf at RealClimate.)

Jesse Reynolds at Legal Planet is on this.

But, as I noted at LinkedIn, even if I accept the well-meaning paper by Bastin, et al in its entirety, it admits that planting 500 billion trees, as they propose, will only solve 25% of the atmospheric CO2 problem. Actually, I believe they miscalculated that, but we’ll get to seeing how in a moment.

Let’s say the half trillion trees are planted, emissions of CO2 from human sources and other precursors, like CH4, are completely stopped (setting aside the challenge of how to get agriculture to stop emitting, too), and deforestation is stopped. Atmospheric CO2 is now about 414 ppm. The preindustrial baseline was 288 ppm. That means 126 ppm more CO2 is in atmosphere over pre-industrial. We probably don’t need to get back to 288 ppm; 350 ppm will do. So that means we’re 64 ppm out, or so it seems. Atmospheric CO2 is increasing by about 2 ppm per year.

Bastin, et al estimate the half trillion trees will take out 200 GtC at maturity. 1 GtCO2 is 0.127 ppm. So 200 GtC is a bit more than 25 ppm. That’s 39% of 64 ppm or 20% of 126 ppm.

Setting aside that this won’t happen overnight, or what the associated emissions of planting 500 billion trees are, this has another problem, alluded to above. Atmosphere only retains about 40% of total human emissions. That means 60% of human emissions (already) either go into soils or into the oceans. (In the long run, CO2 in oceans will turn into carbonates, but this is a very slow process.)

###### (Graphic courtesy of NASA.)

Most importantly, oceans and soils are in equilibrium with atmosphere. This means if a ppm of CO2 is drawn from atmosphere, the partial pressure of CO2 in atmosphere will be lowered, and the entire climate system will come to a new equilibrium, drawing CO2 from soils and oceans. In the end, the total amount of CO2 to extract isn’t 126 ppm or 64 ppm, but 126/0.4 ppm or 64/0.4 ppm. These are 315 ppm and 160 ppm, respectively. Also, as Dr Steven Chu has pointed out, we’re not really at 414 ppm CO2 but, after considering the methane (CH4) in atmosphere, other hydrocarbon greenhouse gases which break down into CO2, and other greenhouse gases like N2O, 500 ppm CO2e. He also indicates we’re probably going to get to at least 600 ppm CO2e.

What does that mean? That means, even accepting Bastin et al uncritically, their 500 billion trees will do 7% of going from 500 ppm to 350 ppm CO2e (and that’s generous because they do little about, say, N2O), and 4% of going from 600 ppm to 350 ppm CO2e.
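For the record, the percentages above can be reproduced in a few lines. This sketch simply restates the post’s own figures (200 Gt of uptake, the 0.127 ppm-per-gigatonne conversion, the 40% airborne fraction); it is not an independent derivation:

```python
# Check of the tree-planting arithmetic, using the figures as given in the text.
PPM_PER_GT = 0.127        # conversion used in the text
AIRBORNE_FRACTION = 0.4   # share of emissions remaining in atmosphere

tree_uptake_ppm = 200 * PPM_PER_GT   # roughly 25 ppm at maturity

def share_of_job(now_ppm, target_ppm):
    """Fraction of the drawdown task the trees accomplish, once the
    ocean/soil re-equilibration (the 0.4 factor) is accounted for."""
    effective_task_ppm = (now_ppm - target_ppm) / AIRBORNE_FRACTION
    return tree_uptake_ppm / effective_task_ppm

print(round(share_of_job(500, 350), 2))   # the 7% in the text
print(round(share_of_job(600, 350), 2))   # the 4% in the text
```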

This is why Reynolds and I and others say the article is misleading. I also claim Project Drawdown is highly misleading. These are at least wishful environmentalism, if not greenwashing. If a political movement hangs its hat on the proposal, it is greenwashing.

The limitations of planting forests for this purpose are well known. For instance,

 L. Nave, G. M. Domke, K. L. Hofmeister, U. Mishra, C. H. Perry, B. F. Walters, C. W. Swanston, “Reforestation can sequester two petagrams of carbon in US topsoils in a century”, PNAS, 2018.

A petagram of Carbon is a single GtC. (That’s because 1 Gt = 1 billion of a tonne = 1 billion of 1000 kilograms each = 1 billion of 1 million grams each = 10^15 grams = 1 petagram.) So, it’s not fast, either.
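That unit chain can be spelled out in code; it is the same arithmetic as the parenthetical:

```python
# 1 gigatonne expressed in grams, step by step, as in the text.
tonnes_per_Gt = 1e9     # 1 Gt = 1 billion tonnes
kg_per_tonne = 1000
g_per_kg = 1000

grams_per_Gt = tonnes_per_Gt * kg_per_tonne * g_per_kg
print(grams_per_Gt)     # 10^15 g, i.e. one petagram, so 1 PgC = 1 GtC
```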

I’ve written about what it would take to really reduce CO2 in atmosphere at two blog posts:

and I have written about the problem of convincing greenhouse gases to remain in soils even if they are put there through afforestation.

In short, (1) it’s far better and cheaper not to put the emissions up there in the first place, and (2) if we do, we’d better be prepared to live with the consequences, because CO2 in atmosphere is very long-lived. I think, too, in our personal lives, we need to be looking at what really contributes to global emissions: consumption and its upstream emissions account for a lot! So do McMansions and the expansion of suburbs.

## “‘Why hasn’t anyone told us of this before?‘”

### 2019-07-16, update

Professor Stefan Rahmstorf has posted a piece on this matter at RealClimate.

## Letter to the MIT community: Immigration is a kind of oxygen

(The following email was sent today to the MIT community by President L. Rafael Reif.)

To the members of the MIT community,

MIT has flourished, like the United States itself, because it has been a magnet for the world’s finest talent, a global laboratory where people from every culture and background inspire each other and invent the future, together.

Today, I feel compelled to share my dismay about some circumstances painfully relevant to our fellow MIT community members of Chinese descent. And I believe that because we treasure them as friends and colleagues, their situation and its larger national context should concern us all.

The situation

As the US and China have struggled with rising tensions, the US government has raised serious concerns about incidents of alleged academic espionage conducted by individuals through what is widely understood as a systematic effort of the Chinese government to acquire high-tech IP.

As head of an institute that includes MIT Lincoln Laboratory, I could not take national security more seriously. I am well aware of the risks of academic espionage, and MIT has established prudent policies to protect against such breaches.

But in managing these risks, we must take great care not to create a toxic atmosphere of unfounded suspicion and fear. Looking at cases across the nation, small numbers of researchers of Chinese background may indeed have acted in bad faith, but they are the exception and very far from the rule. Yet faculty members, post-docs, research staff and students tell me that, in their dealings with government agencies, they now feel unfairly scrutinized, stigmatized and on edge – because of their Chinese ethnicity alone.

Nothing could be further from – or more corrosive to – our community’s collaborative strength and open-hearted ideals. To hear such reports from Chinese and Chinese-American colleagues is heartbreaking. As scholars, teachers, mentors, inventors and entrepreneurs, they have been not only exemplary members of our community but exceptional contributors to American society. I am deeply troubled that they feel themselves repaid with generalized mistrust and disrespect.

The signal to the world

For those of us who know firsthand the immense value of MIT’s global community and of the free flow of scientific ideas, it is important to understand the distress of these colleagues as part of an increasingly loud signal the US is sending to the world.

Protracted visa delays. Harsh rhetoric against most immigrants and a range of other groups, because of religion, race, ethnicity or national origin. Together, such actions and policies have turned the volume all the way up on the message that the US is closing the door – that we no longer seek to be a magnet for the world’s most driven and creative individuals. I believe this message is not consistent with how America has succeeded. I am certain it is not how the Institute has succeeded. And we should expect it to have serious long-term costs for the nation and for MIT.

For the record, let me say with warmth and enthusiasm to every member of MIT’s intensely global community: We are glad, proud and fortunate to have you with us! To our alumni around the world: We remain one community, united by our shared values and ideals! And to all the rising talent out there: If you are passionate about making a better world, and if you dream of joining our community, we welcome your creativity, we welcome your unstoppable energy and aspiration – and we hope you can find a way to join us.

* * *

In May, the world lost a brilliant creative force: architect I.M. Pei, MIT Class of 1940. Raised in Shanghai and Hong Kong, he came to the United States at 17 to seek an education. He left a legacy of iconic buildings from Boston to Paris and China to Washington, DC, as well as on our own campus. By his own account, he consciously stayed alive to his Chinese roots all his life. Yet, when he died at the age of 102, the Boston Globe described him as “the most prominent American architect of his generation.”

Thanks to the inspired American system that also made room for me as an immigrant, all of those facts can be true at the same time.

As I have discovered through 40 years in academia, the hidden strength of a university is that every fall, it is refreshed by a new tide of students. I am equally convinced that part of the genius of America is that it is continually refreshed by immigration – by the passionate energy, audacity, ingenuity and drive of people hungry for a better life.

There is certainly room for a wide range of serious positions on the actions necessary to ensure our national security and to manage and improve our nation’s immigration system. But above the noise of the current moment, the signal I believe we should be sending, loud and clear, is that the story of American immigration is essential to understanding how the US became, and remains, optimistic, open-minded, innovative and prosperous – a story of never-ending renewal.

In a nation like ours, immigration is a kind of oxygen, each fresh wave re-energizing the body as a whole. As a society, when we offer immigrants the gift of opportunity, we receive in return vital fuel for our shared future. I trust that this wisdom will always guide us in the life and work of MIT. And I hope it can continue to guide our nation.

Sincerely,

L. Rafael Reif


## “Strong First Quarter Growth and Fracked Gas Takes a Hit”

###### (This is from ILSR’s “Of New Power Generation, How Much is on the Roof? Quarterly Update — 2019 Q1”.)

It was once warned, and now it is coming true: the unsubsidized cost of utility-scale solar is now cheaper than any fossil fuel, including fracked gas. What these cost curves do not show is the closing gap between residential PV generation and the cost of electrical transmission from a utility. Residential PV is still twice as high, but eventually residential PV, especially if incentivized, will match or beat the cost of transmission. That is expected to occur by 2025. When it does, there will be no reason at all to purchase that portion of electricity from a utility. Whether storage is used or not, and whether net metering is available or not, whatever portion of a home’s electrical consumption comes from solar PV is a clear win.

Utility scale solar is cheaper than any fossil fuel, including fracked gas.

## “Ten Fatal Flaws in Data Analysis” (Charles Kufs)

Professor Kufs has a fun book, Stats with Cats, and a blog. He also has a blog post titled “Ten Fatal Flaws in Data Analysis” which, in general, I like. But the presentation has some shortcomings, too, which I note below.

1. Where’s the Beef? Essentially, there’s no analysis; there’s only a data summary.
2. Phantom Populations: Samples need to represent the population of interest, and there has to be a population of interest. Its members need to have something specific in common which, if absent, has a good chance of affecting an outcome.
3. Wow, Sham Samples: The population is real, but the samples don’t represent it well, or at all. But be careful here! I don’t think Kufs emphasizes this enough: a sample need not contain observations of population groups in the same proportions with which they occur in the population. Professor Yves Tillé makes this point strongly in his book, Sampling Algorithms.
4. Enough is Enough: No confidence and no statistical power with too few observations, and no meaning with too many. I’d add, underscoring something Kufs says, be sure each category has a fair number of observations as well.
5. Indulging Variance: Unless variance is assessed and reported, an analysis is neither statistical nor scientific. Means mean muck without accompanying reports of variability. There’s a lot to appreciate about variability: properly assessed, it can be an important tool, capable of separating out subpopulations which otherwise have common means. Ignoring heteroscedasticity can break many a standard analytical tool. The place to start dealing with variance when studying a population is the sampling plan. There’s often a tradeoff between low variance and low bias. Merely calculating and reporting a standard deviation at the end of an analysis is almost always insufficient treatment.
6. Madness to the Methods: Data inspection, cleaning, correcting, and testing for the assumptions of modeling are the unglamorous, tedious, and time-consuming parts of any statistical analysis or data science project. This portion is also hard to defend since, in industry, management is sometimes impatient to see results coming from an allocation of expensive people and resources. But these steps are totally necessary. Without the last step, testing, you cannot know whether the data are sufficiently cleaned or representative.
7. Torrents of Tests: Kufs’ treatment of the multiple testing problem is old (Bonferroni), whether addressed from a quasi-Frequentist perspective or the more modern Bayesian one. There are now techniques for controlling family-wise error rates, and false discovery rates, when large numbers of tests are conducted. Bayesian methods don’t have a problem with multiple comparisons. That’s one reason why I use them (a lot).
8. Significant Insignificance and Insignificant Significance: Ah, significance tests! I could go on and on about these, and often have. The kindest thing to say here is a quote from Jerome Cornfield (1976): “The most general and concise way of saying all this is that $p$-values depend on both the $x$ observed and on the other possible values of $x$ that might have been observed but [were not], i.e., the sample space, while the likelihood ratio depends only on the observed $x$.”
9. Extrapolation Intoxication: This is a caution against making the same mistake NASA and its subcontractors did when making the fateful decision to launch the Space Shuttle Challenger after it had been exposed to freezing temperatures.
10. Misdirected Models: Models are critical for understanding data, whether or not they derive from domain knowledge, like physical theory. There are many ways the theories or hypotheses upon which models are based can themselves be wrong. The most important criterion a theory or hypothesis must satisfy is falsifiability. Extending that, every model must have diagnostics which tell when the model is broken. This can be inherent to the application of the model, or it can be done by comparing its performance with a straw-man model which is completely nonsensical but is fit to the same data. However, in this day and age there are such things as non-mechanistic empirical dynamic modeling, which has shown success in the absence of underlying theory.
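On the multiple testing point, the modern alternatives to Bonferroni include the Benjamini-Hochberg step-up procedure, which controls the false discovery rate rather than the family-wise error rate. A minimal sketch, with made-up p-values for illustration:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return a reject flag per p-value, controlling the false discovery
    rate at level q via the Benjamini-Hochberg step-up procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Largest rank k whose sorted p-value sits under the BH line k*q/m.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k_max
    return reject

pvals = [0.005, 0.015, 0.025, 0.035, 0.5]
print(benjamini_hochberg(pvals, q=0.05))
# BH rejects the first four; a Bonferroni cut at q/m = 0.01 keeps only the first.
```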

In practice, I witness projects running aground on these sandbars way too often. Kufs’ cautions are good. But Kufs’ advice could use an update.

## “Climate Change: Information on potential economic effects could help guide Federal efforts to reduce fiscal exposure” (GAO, September 2017)

In September 2017, the U.S. Government Accountability Office (GAO) completed a report, Climate Change: Information on Potential Economic Effects Could Help Guide Federal Efforts to Reduce Fiscal Exposure. A copy is at that link.

Foremost, in case anyone doubts it, there is an appreciable fiscal risk to the federal government. Quoting:

Over the last decade, extreme weather and fire events have cost the [F]ederal government over \$350 billion, according to the Office of Management and Budget. These costs will likely rise as the climate changes, according to the U.S. Global Change Research Program. In February 2013, GAO included Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks on its High-Risk List.

GAO was asked to review the potential economic effects of climate change and risks to the federal government. This report examines (1) methods used to estimate the potential economic effects of climate change in the United States, (2) what is known about these effects, and (3) the extent to which information about these effects could inform efforts to manage climate risks across the federal government. GAO reviewed 2 national-scale studies available and 28 other studies; interviewed 26 experts knowledgeable about the strengths and limitations of the studies; compared federal efforts to manage climate risks with leading practices for risk management and economic analysis; and obtained expert views.

This fiscal risk does not reflect risk to the private sector. It also is not a cost-benefit analysis of taking action versus not.

The report summarizes results in more detailed studies, such as the American Climate Prospectus: Economic Risks in the United States, commissioned by the Risky Business Project(*) and performed by Rhodium Group, LLC, in 2014. The GAO reviewed the American Climate Prospectus among other reports.

While GAO identified substantial uncertainties in projections of economic damage from climate change (**), the reviews in these reports were sufficient to provide substantial guidance.

Some key results:

This report is useful, beyond its summary of studies of climate change economic harm and of results reported by others, because it provides citations and links to several other reports the U.S. GAO has written regarding climate change risk to both the federal government and the states, along with recommendations. One important recommendation was to provide a central clearinghouse of climate-change-related information to help states and localities plan adaptation. This is recommended in

Climate Information: A National System Could Help Federal, State, Local, and Private Sector Decision Makers Use Climate Information, GAO-16-37 (Washington, D.C.: Nov. 23, 2015).

Of course, as the report indicates in a footnote:

Specifically, Executive Order 13783 rescinded the Climate Action Plan and revoked the executive order establishing the Council on Climate Preparedness and Resilience. Although Executive Order 13693 has not been revoked, it is uncertain whether the agency adaptation plans and other strategic planning efforts it calls for will continue.

This suggests the present posture of the federal government is to accept and incur the estimated costs inflicted upon its budget indefinitely.

## For Thursday, 27th June 2019.

### From AFP, “Mercury tops 45C in France as deadly heatwave roasts Europe” (28th June 2019).

And wildfires in Catalonia:

## A response to a post on RealClimate

##### (Updated 2342 EDT, 28 June 2019.)

This is a response to a post on RealClimate which primarily concerned economist Ross McKitrick’s op-ed in the Financial Post condemning the geophysical community for disregarding Roger Pielke, Jr’s arguments. Pielke, in that link, recounts his small crusade (joined by a couple of others) arguing that, in terms of a climate emergency, paraphrasing, “There’s nothing to see here, folks. Move along home.”

The post was by Michael Tobis, a retired climate scientist, working as a software developer and science writer living in Ottawa, Ontario.

Unfortunately, in my opinion, along the way Dr Tobis threw Statistics under a small bus, or at least mischaracterized it. I went on to agree with the criticism of McKitrick. But I felt that modern Statistics needed explaining, and I cautioned that a lot of the problem is that the statistical work Pielke did was shoddy, using antiquated methods. I also pointed out that similar imperfections can be found in, for instance, The Journal of Climate. Separately, I’ve documented a particularly egregious case, one which contributed to the claim there was a “hiatus” in global warming, something which was not true then, “if thou reckon right“, and turned out not to be true afterwards.

Anyway, apparently, RC isn’t going to post my Comment, so I’m placing it here, below, in its entirety. It’s up to them. The Comment may have been too far off the beaten path. I have also answered in the above a couple of my own questions about the original articles which pertain.

### As of today, I note that my comment was indeed posted; the delay was probably simple moderation delay. This slightly revised version augments that.

I’m not sure which Pielke-Jr article McKitrick is referring to, meaning one with technical details (link?), but, nevertheless, I wanted to gently disagree with part of Dr Tobis’s post above. Consider:

> Statistics is a vital tool of science, but it is not the only one. It is most effective when dealing with large quantities of data. Using statistical methods to detect the effect of one factor among several amounts to proving that the other factors did not align as a matter of happenstance. The more abundant the data, the less likely such a coincidence.

To the extent that Pielke-Jr is running some kind of hypothesis test for determining attribution, the problem isn’t Statistics, it’s wrong-headed and out-of-date statistical technique.

It is not true that inference and estimation rely upon “large quantities of data”. Sometimes, in fact, relatively small amounts of well-chosen data are far more powerful: think of an experimental design with controls and balance. That’s important to remember. In many engagements I have with Big Data people, I need to emphasize that the effective size of a dataset isn’t its raw size, it’s the number of replicas of unique combinations of explanatory variables it contains. If most combinations in a big dataset have but one observation, that’s not a big dataset at all. If there is a wide range in the numbers of replicas, that’s a problem with balance, and it can even harm seemingly non-parametric techniques like cross-validation.
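The replica-counting point can be sketched in a few lines (the data and column meanings here are made up purely for illustration):

```python
# Sketch: the "effective" size of a dataset for estimating effects is better
# measured by the number of replicas per unique combination of explanatory
# variables than by the raw row count. (Hypothetical data for illustration.)
from collections import Counter

# Each row is a combination of explanatory variables, here (region, season).
rows = [
    ("north", "summer"), ("north", "summer"), ("north", "winter"),
    ("south", "summer"), ("south", "winter"), ("south", "winter"),
    ("east",  "summer"),  # a combination observed only once
]

counts = Counter(rows)
raw_size = len(rows)
replicated = {combo: n for combo, n in counts.items() if n > 1}

print("raw size:", raw_size)                              # 7 rows
print("unique combinations:", len(counts))                # 5
print("combinations with replication:", len(replicated))  # only 2
```

Seven rows, but only two combinations with any replication at all: the “big” dataset is much smaller than it looks.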

In fact, there is now a rich set of methods for embodying domain knowledge in statistical models, namely, the Bayesian hierarchical modeling approach, whether that knowledge derives from meteorology, climate science, or baseball statistics. Judging from the literature in, say, the Journal of Climate, few papers appear aware of these techniques, preferring to argue piecemeal from grounding in the domain. That’s okay, but large comprehensive studies are hard, and it’s unfortunate that these modern methods aren’t better understood and used.
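As a minimal sketch of the partial pooling at the heart of hierarchical modeling, with made-up data and variances assumed known for illustration (a full Bayesian analysis would estimate them): group means are shrunk toward the grand mean, most strongly for sparsely observed groups.

```python
# Partial pooling sketch: shrink each group mean toward the grand mean,
# weighting by how much data the group has. All numbers are hypothetical.
from statistics import mean

groups = {
    "station_A": [2.1, 2.3, 1.9, 2.2, 2.0, 2.4],  # well observed
    "station_B": [3.0, 2.8],
    "station_C": [1.0],        # a single observation: heavily shrunk
}

sigma2 = 0.25  # assumed within-group variance
tau2 = 0.5     # assumed between-group variance

grand = mean(x for ys in groups.values() for x in ys)
for name, ys in groups.items():
    n = len(ys)
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)  # pooling weight in [0, 1]
    estimate = w * mean(ys) + (1 - w) * grand
    print(name, round(estimate, 3))
```

The lone observation at `station_C` is pulled substantially toward the grand mean, which is exactly the borrowing of strength that piecemeal, group-by-group analyses forgo.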

Deniers and doubters, in my experience, are the least likely to be aware of such methods, and I find many instances, whether in denial-oriented assessments of temperature or sea level changes, where Statistics is practiced as if by rote, and with a lot of confusion. Time series seems to be a particular tripping point.

Others have noted these kinds of blemishes, but such critiques often don’t go on to indicate what should be done instead. There are many sound statistical analyses which demonstrate any of these claims, from the statistical significance of temperature rises, to increased damage from storms, to droughts and heatwaves, to phenological aberrations in species migrations. There is no doubt these are true.

There are many wonderful Bayesian analyses which can serve as examples. Schmittner, et al., from Science in 2011, is wonderful (“Climate Sensitivity Estimated from Temperature Reconstructions of the Last Glacial Maximum”). There are several papers where Professor Mark Berliner is a co-author, like his “Uncertainty and Climate Change“, or, with Professor Levine, “Statistical Principles for Climate Change Studies“. The latter paper addresses the nuances of hypothesis testing, showing that while it seems simple, it is clouded with delicate assumptions. Smashing all that aside and oversimplifying: p-values are themselves random variables.
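To see what “p-values are random variables” means in practice, here is a small simulation I’ve added for illustration (it is not from the Berliner and Levine paper): a one-sample z-test of “mean = 0” run repeatedly on data for which the null hypothesis is exactly true.

```python
# Even when the null hypothesis holds exactly, the p-value varies from
# sample to sample: it is itself a random variable, (approximately)
# uniform on [0, 1] under the null.
import math
import random

random.seed(1)

def z_test_pvalue(sample):
    """Two-sided p-value for H0: mean = 0, known unit variance."""
    n = len(sample)
    z = sum(sample) / math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

pvals = [z_test_pvalue([random.gauss(0, 1) for _ in range(30)])
         for _ in range(2000)]

frac_below_05 = sum(p < 0.05 for p in pvals) / len(pvals)
print("p-values range from", round(min(pvals), 4), "to", round(max(pvals), 4))
print("fraction below 0.05:", round(frac_below_05, 3))
```

About 5% of these null-true experiments report p < 0.05, by construction of the test, and the p-values themselves scatter across nearly the whole unit interval.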

Statistics as a field hasn’t left these questions alone. The American Statistical Association has, like many scientific organizations, a formal statement on climate change. But, too, the status of Statistics in climate change science has been examined. There have been special issues of journals.

I also urge readers to be open-minded about applying techniques from machine learning and related areas to geophysical problems. They certainly have their faults and limitations, opacity in explanation being one formidable issue. But as O’Gorman and Dwyer showed in 2018, ML techniques are beginning to make their presence felt in geophysics. See also Rasp, Pritchard, and Gentine in a 2018 paper in PNAS.

## “Build way more wind and solar ‘than needed'”

Many people familiar with traditional energy networks, including the electrical grids of utilities, bring strong preconceptions to considering zero Carbon energy sources. This is particularly true of experts in traditional energy, including engineers. They focus upon the intermittency of such renewable sources. Translated, they don’t really mean intermittency, since all power sources can go offline; they mean they can’t flip a switch and deliver energy, or back up a faulted energy source on demand. It is a mindset.

Some traditional energy sources, particularly large nuclear power plants, can go offline on short notice, and, because of their size, the rest of an electrical power grid needs to jump through hoops to respond and maintain load.

The preconceptions extend to favoring what’s called, in the consumer electronics business, a closed ecosystem (e.g., Apple products), where hardware and software options meet standards set by a central authority. With the advent of distributed energy, notably solar PV on home and commercial rooftops, and of demand response options, end users are becoming generators of electricity and, by withholding demand, can influence and help grids respond to loads. This changes the business relationship between “the edge” of electrical networks and their operators, something which utilities and RTOs/ISOs have only come to appreciate over several years, some more slowly than others.

Part of the problem with these attitudes is that they leak out into the mostly uneducated public and even to policymakers. The former head of EOER in Massachusetts, Matthew Beaton, refused to address, when directly questioned, why Massachusetts couldn’t be more like Texas, appealing instead to the silly aphorism that renewables can’t be relied upon when “the sun don’t shine and the wind don’t blow”. But, in fact, that is a gross simplification, grounded in expectations set by years of operating conventional fossil fuel energy in the late 20th century.

The fact of renewables is that they have zero marginal cost. This means that, once constructed, they deliver energy for two to three decades with little additional financial input, governed solely by the circumstances of sun and wind. In comparison to traditional fossil fuel plants, they are also much less expensive to build. They do require more space, and they work best if scattered over a wide area, although exploiting rich, high-wind areas like the offshore New England coast offers competing advantages.
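A back-of-the-envelope illustration of what zero marginal cost implies (all figures here are hypothetical round numbers I chose for the sketch, not actual plant economics, and discounting is ignored):

```python
# Zero marginal cost sketch: a renewable pays its costs up front, while a
# fuel-burning plant pays on every MWh, forever. Numbers are illustrative.
def lifetime_cost_per_mwh(capex, fuel_cost_per_mwh, mwh_per_year, years):
    """Undiscounted average cost per MWh over the plant's life."""
    total_energy = mwh_per_year * years
    total_cost = capex + fuel_cost_per_mwh * total_energy
    return total_cost / total_energy

# A wind farm: zero fuel cost, so lifetime cost is capex spread over output.
wind = lifetime_cost_per_mwh(capex=60e6, fuel_cost_per_mwh=0.0,
                             mwh_per_year=100_000, years=25)
# A gas plant: cheaper to build here, but fuel is paid on every MWh.
gas = lifetime_cost_per_mwh(capex=40e6, fuel_cost_per_mwh=30.0,
                            mwh_per_year=100_000, years=25)

print(f"wind: ${wind:.0f}/MWh, gas: ${gas:.0f}/MWh")
```

With these assumed inputs the fuel term dominates the gas plant’s lifetime cost, which is the point: the renewable’s cost is fixed at construction, the fossil plant’s is not.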

There is an idea that a region or a state or a town or a home or business has a certain amount of energy “it needs”. That’s true, but to the degree demand can be adjusted or shaped in time and by choice, these “needs” aren’t inflexible. It is also true that not all consumers of electrical energy need the same quality of electrical power, although as far as I know, there are presently no plans to relax that standard, possibly because doing so trades an economy for a greater headache in managing different mixes of electrical energy.

One way to deal with things like capacity factors, where a resource isn’t always available, is to massively overbuild the resource. This means building 3X to 8X the capacity typically expected based upon such “needs”. This is possible only because renewables are inexpensive to build, and they can be built rapidly. Their costs, as I’ve recently observed, and as others like Professor Tony Seba and Haegel, Atwater, and colleagues have observed, are dramatically decreasing.

But Professors Richard Perez and Karl Rabago have underscored the idea of overbuild as a new principle of operation. The idea is not new, because it’s engineering common sense. Indeed, the idea is part of the mix which Professor Mark Jacobson and colleagues proposed, and overprovisioning has long been a tactic used in the design of supercomputers.

In fact, when I worked with installers to buy a PV system for our home, I ran into this kind of counterproductive thinking. Installers wanted to maximize utilization of the PV panels across a year and, when picking the number of panels, chose the number which maximized that measure. I, on the other hand, wanted to generate enough Watt-hours across the year to offset our entire electricity demand. We have a shaded situation, partly from trees, so panels were not going to be uniformly illuminated. I finally found a terrific installer, RevoluSun, who understood and wanted to do exactly what I wanted. I ran into the same kind of thinking when I worked at IBM Federal Systems in upstate New York: managers had a hard time understanding that if the objective was to get a calculation done as quickly as possible, there were times when only a portion of the computers in a multicomputer could be used, yet all of them were needed for other parts of the calculation. They wanted to pick a number which maximized utilization, at the price of slower computation. These were otherwise sharp people, but they were trapped in a certain way of thinking.

## 50 Terawatts of Solar Photovoltaics (“PV”) is now feasible by 2050

This is from an article in Science by Haegel, et al., which was just released today. It means, documented in detail, that the projections of Professor Tony Seba are not only right on, but that Professor Seba may have underestimated the impact.

To put this in perspective, in 2013 the entire world consumed energy, for all purposes and from all sources, at an average rate of about 18 Terawatts. Accordingly, 50 Terawatts of PV capacity is nearly three times the world’s average rate of energy consumption in 2013. Naturally, there are questions of storage, converting electrical energy to usable forms, etc. But, on the other hand, if 50 Terawatts of generation is available, no matter what the form, there is also significant economic incentive to learn how to do these things.
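As a quick units check, interpreting the 2013 world figure as an average consumption rate of about 18 Terawatts; the PV capacity factor below is an assumed, illustrative value, not from the Haegel, et al. paper:

```python
# Units check: compare proposed PV nameplate capacity against the world's
# 2013 average rate of energy consumption.
world_avg_power_tw = 18.0  # average rate of world energy use, 2013
pv_capacity_tw = 50.0      # proposed PV nameplate capacity

# Nameplate ratio: 50 TW is nearly three times the 2013 world rate.
print(round(pv_capacity_tw / world_avg_power_tw, 2))

# With an assumed capacity factor, average delivered power is smaller,
# which is where the storage and conversion questions enter.
capacity_factor = 0.20
print(round(pv_capacity_tw * capacity_factor / world_avg_power_tw, 2))
```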

The actual paper from Science is:

N.M. Haegel, H. Atwater, T. Barnes, C. Breyer, A. Burrell, Y.-M. Chiang, S. De Wolf, B. Dimmler, D. Feldman, S. Glunz, et al. “Terawatt-scale photovoltaics: Transform global energy”, 2019, Science 364, 836–838.

Note this is a much more optimistic update of:

N. M. Haegel, R. Margolis, T. Buonassisi, D. Feldman, A. Froitzheim, R. Garabedian, et al., “Terawatt-scale photovoltaics: Trajectories and challenges”, 2017, Science, 356, 141–143.

from barely two years before. The authors say this themselves in the Abstract of the 2019 paper:

Solar energy has the potential to play a central role in the future global energy system because of the scale of the solar resource, its predictability, and its ubiquitous nature. Global installed solar photovoltaic (PV) capacity exceeded 500 GW at the end of 2018, and an estimated additional 500 GW of PV capacity is projected to be installed by 2022–2023, bringing us into the era of TW-scale PV. Given the speed of change in the PV industry, both in terms of continued dramatic cost decreases and manufacturing-scale increases, the growth toward TW-scale PV has caught many observers, including many of us (1), by surprise. Two years ago, we focused on the challenges of achieving 3 to 10 TW of PV by 2030. Here, we envision a future with ∼10 TW of PV by 2030 and 30 to 70 TW by 2050, providing a majority of global energy. PV would be not just a key contributor to electricity generation but also a central contributor to all segments of the global energy system. We discuss ramifications and challenges for complementary technologies (e.g., energy storage, power to gas/liquid fuels/chemicals, grid integration, and multiple sector electrification) and summarize what is needed in research in PV performance, reliability, manufacturing, and recycling.