neat stuff: new legs for de Broglie-Bohm pilot wave theory

See more at Professor John Bush’s site:

See also work by my son, Jeff, from his doctoral dissertation, not regarding de Broglie-Bohm, but on corrals and scattering.

Posted in de Broglie-Bohm pilot wave theory, John Bush, quantum mechanics | 1 Comment

Quote from Max Planck

(Hat tip to Professor Richard Kleeman of the Courant Institute for Mathematical Sciences.)

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.”

   — Max Planck



For more information, see the excellent text, highly recommended for students of Climate Science, T. S. Kuhn, Black-Body Theory and the Quantum Discontinuity, 1894-1912, University of Chicago Press, 1978.

Also recommended for the same audience: D. Archer, R. Pierrehumbert (eds.), The Warming Papers: The Scientific Foundation for the Climate Change Forecast, Wiley-Blackwell, 2011.

Posted in American Association for the Advancement of Science, Anthropocene, climate change, global warming, physics | Leave a comment

Senn’s `… never having to say you are certain’ guest post from Mayo’s blog

via S. Senn: Being a statistician means never having to say you are certain (Guest Post)

See also:

Posted in abstraction, American Association for the Advancement of Science, American Statistical Association, cancer research, data science, ecology, experimental design, generalized linear mixed models, generalized linear models, Mathematics and Climate Research Network, medicine, sampling, statistics, the right to know | Leave a comment

[reblog] David Suzuki: Consumer society no longer serves our needs

From David Suzuki, whom I’ve cited here more and more often, in his blog post, Consumer society no longer serves our needs, of 11th January 2018.

An excerpt:

But where is the indication of our real status — Earthlings — animals whose very survival and well-being depend on the state of our home, planet Earth? Do we think we can survive without the other animals and plants that share the biosphere? And does our health not reflect the condition of air, water and soil that sustain all life? It’s as if they matter only in terms of how much it will cost to maintain or protect them.

Nature, increasingly under pressure from the need for constant economic growth, is often used to spread the consumption message. Nature has long been exploited in commercials — the lean movement of lions or tigers in car ads, the cuteness of parrots or mice, the strength of crocodiles, etc. But now animals are portrayed to actively recruit consumers. I’m especially nauseated by the shot of a penguin offering a stone to a potential mate being denigrated by another penguin offering a fancy diamond necklace.

How can we have serious discussions about the ecological costs and limits to growth or the need to degrow economies when consumption is seen as the very reason the economy and society exist?

This is a matter related to a point I’m planning to close with at a Needham Lyceum talk I’m giving on 11th February 2018 (0915 EST) at First Parish Needham, Unitarian Universalist (*). That is, to the degree to which economic systems, a human invention, or political systems, also humanly invented, cannot solve a dire situation we find ourselves in, these systems will be destroyed and surpassed, hopefully through some kind of peaceful disruption. By cannot solve I mean something specific: offering a solution to a dire problem which is infeasible or horrifically expensive is no solution at all.

What’s notable about the responses of both Presidents Obama and Trump to the climate crisis is that each asserted solutions to it cannot involve significant negative impacts on the United States economy. I would suggest that, to the degree to which this is the best the United States Constitution offers, despite its remarkable construction and past triumphs, the U.S. Constitution is demonstrating this problem is beyond its capability to solve. However, I believe economics and the Constitution are separable, even if they do not seem so today, and I hope that if that separation is needed to fix climate, it will happen. If they are not, I believe the problem will be fixed, but with the loss of both, either in consequence or along the way.

Still, we could wake up:

(See Dream Catcher.)


* “Carbon Emissions and Climate: Where do we stand now, and what can be done if it all goes wrong?”
(in preparation).

Posted in Adam Smith, adaptation, affordable mass goods, Anthropocene, climate economics, climate justice, consumption, David Suzuki, ecological services, ecology, Ecology Action, economics, ethics, evidence, science, the right to be and act stupid, the right to know, the value of financial assets, tragedy of the horizon | Leave a comment

(thought of the day)

One accurate measurement is worth a thousand expert opinions.
   — Grace Murray Hopper

Hat tip to Pat’s blog.

Posted in statistics, Uncategorized | Leave a comment

wind+storage 2.1 ¢/kWh, solar+storage 3.6 ¢/kWh

Update, 2018-01-16

Vox has a widely acclaimed update to this story.

(rubbing hands gleefully)

Utility scale bids at Xcel Energy had median prices of 2.1 ¢/kWh for wind-with-storage, and 3.6 ¢/kWh for solar-with-storage.

Hat tip to Utility Dive.

In U.S. Energy Information Administration projections for 2020, natural gas advanced combined cycle comes in at 6.9 ¢/kWh, and a spot price from CenterPoint Energy for commercial applications in Minnesota has it at 6.02 ¢/kWh.

To paraphrase the late Supreme Court Justice Antonin Scalia, fossil fuels for generating electricity are dead, dead, DEAD!

And I delight in contemplating the days arriving soon when natural gas, oil, and coal, their pipelines and their shipping, are stranded assets. I’ve written about this often.

Posted in American Petroleum Institute, American Solar Energy Society, Amory Lovins, Bloomberg New Energy Finance, BNEF, bridge to somewhere, Buckminster Fuller, Cape Wind, Carbon Worshipers, clean disruption, CleanTechnica, climate economics, corporate litigation on damage from fossil fuel emissions, Cult of Carbon, decentralized electric power generation, decentralized energy, destructive economic development, distributed generation, economics, electrical energy storage, electricity, electricity markets, energy storage, energy utilities, FERC, Green Tech Media, ILSR, investment in wind and solar energy, Joseph Schumpeter, leaving fossil fuels in the ground, local generation, local self reliance, marginal energy sources, Massachusetts Clean Energy Center, microgrids, natural gas, petroleum, pipelines, public utility commissions, PUCs, rate of return regulation, regulatory capture, solar democracy, solar domination, solar energy, solar power, Spaceship Earth, stranded assets, sustainability, the energy of the people, the green century, the value of financial assets, Tony Seba, tragedy of the horizon, wind energy, wind power, zero carbon | Leave a comment

(repost) How the recent New England cold snap and nor’easter did not cause natural gas prices to spike

I wrote a piece a bit back about the volatility in natural gas prices. Those price swings were seized upon by proponents of natural gas pipelines, whether Gordon van Welie of ISO-NE, various representatives of petroleum and power generators councils, or even that recurring denizen of the Commonwealth Magazine comments, NortheasternEE, to argue, once again, that New England (read Massachusetts) needs new natural gas pipelines: cold pinches such as the one most recently experienced supposedly caused huge financial harm to residents by spiking the price of electricity, and only bringing in additional explosive methane by new pipelines could offset this. They, and even the editorial staff at Commonwealth, claimed the generators of electricity had to switch to oil because of natural gas shortages.

Well, none of that turned out to be true. It was pretty self-evident that, at the time, they could not have known, since the fuel mix used for generation is not something which is known at high aggregations of geography until a couple of days afterwards. And, as it turns out, little or no additional oil was needed; even though Pilgrim nuclear went offline, renewables picked up the slack, driven there probably by the relatively high winds of the nor’easter. Indeed, the Conservation Law Foundation (CLF) reports that, for a time, New England was getting as much electricity from renewables as it did from natural gas generation.

The details are, as I mentioned, at the blog post which has been updated a couple of times.

But I also want to take a moment to underscore how certain online media outlets are controlled by ensconced fossil fuel interests, like natural gas and pipeline companies and big utilities like Eversource, who are using heavy-handed legal threats to quash reports they do not want the public to know about. In particular, Commonwealth Magazine appears to be a favorite mouthpiece for opponents of decentralized renewables, ranging from Associated Industries of Massachusetts to the New England Petroleum Council to Eversource. And, sure, they have run op-eds by individuals in favor of renewables from time to time, I’d say to maintain the illusion of “balance”. But when their own editorial staff misrepresents matters of electrical generation as in the above, and does not get the story straight on what the Marks, Mason, Mohlin, and Zaragoza-Watkins conference paper says, taking the pipeline proponents’ line and misrepresenting the paper as a product of the Environmental Defense Fund (EDF), then there’s something wrong with that source. I will not read or follow Commonwealth Magazine any longer. They even deleted two comments I made on these matters after each had been posted for half an hour.

While I have also let my view of the recent DPU demand charge decision be known, and I have listened to and attended presentations by officers of Governor Charlie Baker’s administration regarding energy policy and climate adaptation, in fact, there is little concrete evidence that what this administration is doing is anything but fig leaves and tokenism. Beginning with Governor Patrick and continuing under Governor Baker, the Massachusetts Department of Environmental Protection has seen its staff and budget repeatedly cut. The funding of the Municipal Vulnerability Preparedness program is pathetically small, and Governor Baker shows no willingness whatsoever to increase taxes to pay for any such plans, programs, or policies. Speaker of the House DeLeo probably contributes to that reluctance as well.

So, whatever happens to Massachusetts and to Boston, in terms of flooding and the like, can be put on Governor Baker’s head, and on Speaker DeLeo. They have heard about the urgency for over a decade, even if Baker was not Governor at the time. DeLeo has been Speaker since the time of the dinosaurs.

Posted in Uncategorized | Leave a comment

2017 Arctic Report Card

From NOAA.

2017 Arctic Report Card: Summer temperatures are rising rapidly in most Arctic seas, by Tom Di Liberto.

2017 Arctic Report Card: Extreme fall warmth drove near-record annual temperatures, by Rebecca Lindsey.

Posted in American Meteorological Association, AMETSOC, Anthropocene, Arctic, climate change, climate disruption, global warming, Hyper Anthropocene, NOAA | Leave a comment

a dystopian Commonwealth

I repeat a link to a post I made in May 2016 regarding how it seemed Governor Baker and Massachusetts House Speaker DeLeo were bent on a dystopian Massachusetts. Both then, and now, by the actions of their charges, they fail to really understand the importance of a clean energy future for the Massachusetts economy.

The present circumstances are the decision on Friday, 5 January 2018, to grant Eversource/NSTAR its request to essentially bust up net metering in favor of a peak demand charge, and to permit it to eliminate time-of-use tariffs. The Acadia Center has more to say about this specific action. Not only does this have implications for decentralized energy adoption in Massachusetts, it also impedes important steps along the path of decarbonization, such as moving to electric air source heat pumps for heating and cooling, and adoption of electric vehicles. Indeed, if I were cynical, I’d say the next item on the Baker-DeLeo joint agenda is to fail to renew the Global Warming Solutions Act (GWSA) in 2020, thus relieving them of the responsibility of complying.

And why not? When I returned to work in Cambridge after the New Year, I was struck by how many things simply do not work in Massachusetts, most notably what is laughingly called our public transportation system. For example:

  • streetlights were out, being worked on by a crew of a half dozen or more Eversource employees in a trench,
  • escalators which were not working before the holiday break are still not working,
  • the Town of Falmouth is being required to tear down two wind turbines it erected in a show of support for renewable energy and to earn revenue,
  • a fire alarm at the Route 128 Amtrak-MBTA station that was signaling when I went into Cambridge was still signaling when I returned,
  • and the event of Aquarium Station on the Blue Line being flooded during the recent nor’easter is written up in the Boston Globe and Commonwealth Magazine as simply a repeat of a problem which had occurred once before. Nothing to see here. Move along home.


(Plunge, an art exhibit by Michael Pinsky, shows sea levels meters higher than those of the 20th century on famous London landmarks.)

And I noted how NOAA had reported that weather and other natural disasters cost the public in the United States a quarter of a trillion dollars in 2017, ignoring, for the moment, the cost to private businesses and individuals, both directly and through their insurance. See the details.

Posted in the tragedy of our present civilization, tragedy of the horizon, unreason, utility company death spiral | 1 Comment

FERC: No multi-billion dollar bailout for coal and nuclear generating facilities

Excerpts from statements by Richard Glick, FERC commissioner are given below. The Microgrid Knowledge (“MGK”) news article summarizes the context by writing:

The commission rejected the energy secretary’s assertion that retirement of coal and nuclear plants threatens electric resilience. Instead FERC plans to look at broader challenges that may influence the reliable flow of energy in competitive wholesale markets, among them severe weather, physical and cyber attacks, accidents and fuel supply disruptions … In rejecting the coal and nuclear subsidies, FERC doubled down on its commitment to competitive markets. Commissioner Cheryl LaFleur called the proposed tariff for coal and nuclear “far-reaching out-of-market approach” that would be “highly damaging to the ability of the market to meet customer needs.”

In its response, FERC opened a new docket, No. AD18-7-000.

Richard Glick:

I also believe that it is important to consider the advantages that newer technologies, such as distributed energy resources, energy storage, and microgrids, may offer in addressing resilience challenges to the bulk power system.

MGK continues:

He added that most power outages occur because of failures within the transmission and distribution system, and not because of a lack of power supply.

Mr Glick:

There is no evidence in the record to suggest that temporarily delaying the retirement of uncompetitive coal and nuclear generators would meaningfully improve the resilience of the grid. Rather, the record demonstrates that, if a threat to grid resilience exists, the threat lies mostly with the transmission and distribution systems, where virtually all significant disruptions occur. It is, after all, those systems that have faced the most significant challenges during extreme weather events.

(I have added emphasis here.)

FERC Commissioner Cheryl LaFleur also responded:

In effect, it sought to freeze yesterday’s resources in place indefinitely, rather than adapting resilience to the resources that the market is selecting today or toward which it is trending in the future.

Using the context provided by the MGK article, again:

Instead, FERC should guide grid operators to pursue resiliency within a system “that is likely to be cleaner, more dynamic, in some instances more distributed, and deployed by an efficient market for the benefit of customers,” LaFleur said.

Then, from and regarding FERC Commissioner Neil Chatterjee:

Commissioner Neil Chatterjee also voted to reject Perry’s proposal and open the new docket — but with some reservations.

Chatterjee expressed concern about the “staggering” change the grid is undergoing, noting that between 2014 and 2015 alone, the U.S. added about 15,800 MW of natural gas, 13,000 MW of wind, 6,200 MW of utility scale solar photovoltaic, and 3,600 MW of distributed solar. Meanwhile, nearly 42,000 MW of synchronous generating capacity (coal, nuclear, and natural gas) retired between 2011 and 2014. An additional seven nuclear units, representing 10,500 MW, are set to retire by 2025.

A separate article, from Utility Dive, reports how new natural gas, not renewables, is the culprit in beating down demand for nuclear generation. This is based on a recent MIT study. A previous study, by the Department of Energy’s Argonne National Laboratory and Lawrence Berkeley National Laboratory, arrived at the same conclusion.

Posted in American Association for the Advancement of Science, American Solar Energy Society, Amory Lovins, Berkeley, Bloomberg New Energy Finance, BNEF, CleanTechnica, climate economics, decentralized electric power generation, distributed generation, electricity markets, energy utilities, FERC, green tech, grid defection, ILSR, investment in wind and solar energy, ISO-NE, John Farrell, Joseph Schumpeter, microgrids, rate of return regulation, stranded assets, sustainability, the energy of the people, the value of financial assets, Tony Seba, wind energy, wind power | Leave a comment

Michael Bloomberg speaks on the Sustainability Accounting Standards Board

Posted in Amory Lovins, Anthropocene, Bloomberg, Bloomberg New Energy Finance, BNEF, Michael Bloomberg, Michael Osborne, planning, resiliency, Richard Branson, stranded assets, supply chains, sustainability, Tony Seba | Tagged | Leave a comment

1992 World Scientists’ Warning to Humanity

Professor David Suzuki, as ever, reminds us urgent warnings about our `collision course with Nature’ are nothing new.

This one came in 1992:

Introduction

Human beings and the natural world are on a collision course. Human activities inflict harsh and often irreversible damage on the environment and on critical resources. If not checked, many of our current practices put at serious risk the future that we wish for human society and the plant and animal kingdoms, and may so alter the living world that it will be unable to sustain life in the manner that we know. Fundamental changes are urgent if we are to avoid the collision our present course will bring about.

Read more here.

The above statement is also available as a PDF.

Professor Suzuki himself has deep insights into how to demonstrate that we must, inevitably, be having a huge impact on Earth’s ecosystems.

Posted in Anthropocene, David Suzuki, Hyper Anthropocene, scholarship, science | 1 Comment

reality of natural gas prices: volatile, undependable, and contrary to social interest

Updated, 11th January 2018

There’s been a lot written about natural gas, New England, and supposed price spikes due to constraints on pipeline capacity. I’ve had my turn a couple of times here (and here), as a matter of fact (to cite a couple).

That’s why it is refreshing to put prices of natural gas in perspective. Bloomberg did so yesterday.

There are a few things to note in these figures. The first is the striking lack of visual correlation between natural gas prices and heating degree days. For surely, if the claims of advocates of increased pipeline capacity were correct, that pipeline constraints in deep winter contribute to high prices, it would be reasonable to expect natural gas prices to increase as heating requirements increase. In fact, however, natural gas prices seem to wander all over the place, and only occasionally have spikes which coincide with deep winter requirements.

Second, despite the “sky is falling” talk of recent pipeline proponents, including ISO-NE, 2018 is not really much of a price spike, certainly not compared with, say, 2014:

Third, if anything, the actual trace of natural gas prices suggests nothing anyone can do will affect natural gas prices. They will be what they will be, and additional pipeline capacity or anything else can’t impose a lid on them, as plausible as the story-and-song from pipeline proponents are.

Indeed, if prices of energy, particularly electrical energy, are concerns, then the sensible way of moving forward is to make an even bigger investment, as a Commonwealth and as a region, in wind and solar energy, with energy storage added. Wind doesn’t really need the storage, but it then requires less thinking on ISO-NE’s part to manage the grid, since they seem to be less capable of doing it than, say, Belgium is. Only wind and solar can deliver constant-per-annum prices over 30-year spans. In fact, further, if the residents of the Commonwealth are so concerned about per kilowatt hour prices of electricity, they should get over their parochial opposition to land-based wind turbines and especially opposition to community solar farms in their neighborhoods. The former is the cheapest way to generate electricity in the world, with offshore wind being much more expensive. (Why is anyone surprised about that?) Community solar is quiet, unobtrusive, and, backed by storage, can soon offer comparable energy prices.

Postscript

Interesting postscript to this series … Entergy’s Pilgrim Nuclear plant went offline during the recent storm. That resulted in the following fuel mix for electricity, using data supplied by ISO-NE:

Note it was hydropower, not natural gas or oil, which made up for the shortfall.

Update Saturday, 6th January 2018


A model energy policy for any state in New England, proposed here for Massachusetts. I’ve done a lot of studying of energy policy in the past year, and this gleans the absolute best from the likes of Dr Amory Lovins, Professor Tony Seba (see also), the Institute for Local Self-Reliance, and Sir Richard Branson.

Update, 2018-01-11

The Conservation Law Foundation (CLF) provides a detailed recap here of the recent cold spell in New England, its effects upon the electricity grid, the propaganda put out by gas and pipeline company associations as well as utilities, and what the real story was. An excerpt:

Clean Energy and Hydropower Kicked In to Fill the Gas Gap
One of the reasons the electricity system could easily handle this significant drop in gas-fired power was the performance of renewable energy and hydropower. During the interminable cold, these clean resources represented as much as 20 percent of our power. That’s right: clean energy was matching our gas-fired power.

Among those renewables, wind was leading the way while individual solar units were powering homes and business, keeping down demand for electricity from big power plants. So, if clean renewables flourish during these periods of cold, why would we ever invest in more of the polluting gas that causes price volatility rather than clean, price stable, renewable energy?

The Grid Relied Too Much on Oil-fired Power, But It Needn’t Have
Much has been made of the fact that dirty oil was the fuel of choice for power generators when temperatures dipped to their lowest. But this spike in oil use is a direct result of our over-reliance on gas. It should concern us, but it’s important that we put it in perspective.

First, even with increased oil usage over the past couple of weeks, oil-fired power will constitute a tiny fraction of our overall power generation for the year. That means climate impacts from its overuse during the cold snap is minimal. Second, oil dominated in part due to an ISO-New England program that favors oil as a substitute for natural gas over other, cleaner alternatives. Generators switch to oil because ISO-New England gives them an incentive to do so.

Fortunately, our region is in the process of major investments in new clean energy that will be cheaper (and cleaner) than oil. Our ongoing clean energy investments will displace oil and gas polluters during future cold spells.

Electricity and Gas Prices Were High in Much of the U.S.
The extremely high price of gas resulted in a spike in electricity prices, a concern for each and every one of us when trying to pay our bills, and especially for low-income and vulnerable communities. But contrary to the gas industry’s fear-mongering, expanded gas pipelines won’t help. Indeed, gas prices were high over the last couple of weeks virtually countrywide.

Pennsylvania, New Jersey, Maryland, and a few other states that sit on top of large gas supplies and have built out their pipelines also saw their electricity prices spike during the cold weather – at times outpacing the spikes in New England. We saw the same phenomenon in 2014. So, even if we were willing to pay billions for new pipelines that will sit idle 95 percent of the time – and could stomach the high costs to our environment and climate – it’s clear that such a buildout is not the solution to high winter electricity prices. We can only achieve that by cutting our reliance on gas in favor of clean, price stable, renewable power sources.

What’s more, New England has been seeing its wholesale electricity prices decline steadily for three years, and the temporary increases associated with this recent arctic air are not likely to derail that trend.

New Pipelines Just Don’t Make Economic or Environmental Sense
If big new gas pipelines are the great solution claimed by Big Gas, then why don’t they invest their own money into the projects? Instead, they’d rather we the consumers take all the risk while they and utility companies take all the profits. We know a bad deal when we see one, though, and studies from the Massachusetts Attorney General’s Office and the Maine Public Utilities Commission, as well as from a few of the top energy consulting firms in the U.S., demonstrate that new pipelines will cost us much more than any supposed benefits.

Rather than saving New Englanders money as Big Gas likes to promise, new pipelines would end up costing those who pay a monthly electric bill as much as $277 million over the lifetime of the pipeline. This, along with serious legal issues, is why state courts and utility commissions have rejected pipeline proposals like Kinder Morgan’s Northeast Energy Direct pipeline and Spectra’s Access Northeast proposal.

Update, 2018-01-13

Even more about this discussion at CleanTechnica.

Posted in Amory Lovins, anomaly detection, Anthropocene, Bloomberg New Energy Finance, clean disruption, Cult of Carbon, decentralized electric power generation, distributed generation, electricity markets, evidence, explosive methane, financial series, fossil fuel infrastructure, fossil fuels, gas pipeline leaks, greenhouse gases, Hyper Anthropocene, investment in wind and solar energy, ISO-NE, leaving fossil fuels in the ground, local generation, local self reliance, natural gas, pipelines, public utility commissions, rate of return regulation, regulatory capture, reworking infrastructure, rights of the inhabitants of the Commonwealth, risk, stranded assets, supply chains, the stack of lies, the tragedy of our present civilization, Tony Seba, utility company death spiral, zero carbon | Leave a comment

perceptions of likelihood

That’s from this Github repository, maintained by Zoni Nation, which has this description. The original data are from a study by Sherman Kent at the U.S. CIA, and are quoted in at least one outside source discussing the problem.

In addition to the base rate fallacy (see an investment-related definition, too), which is just ignorance of Bayes rule, the other thing that’s interesting is the subjectivity of the categories above, particularly if they are thought of in the context of assessing risk.
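To make the base rate fallacy concrete, here is a minimal sketch, in R, using entirely made-up numbers (the 90% sensitive, 90% specific test and the 1% base rate are assumptions for illustration, not anything from the study above):

    # Minimal sketch with hypothetical numbers: Bayes' rule and the base rate fallacy.
    sensitivity <- 0.90   # P(test positive | condition)
    specificity <- 0.90   # P(test negative | no condition)
    base_rate   <- 0.01   # P(condition)

    p_positive <- sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    posterior  <- sensitivity * base_rate / p_positive
    posterior             # about 0.083, despite the "90% accurate" test

Ignoring the 1% base rate invites the mistaken conclusion that a positive result means the condition is probably present, when the posterior probability is under a tenth.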

Posted in anti-intellectualism, Bayes, Bayesian, economics, fear uncertainty and doubt, games of chance, reason, risk, secularism, statistics, the right to be and act stupid, the right to know, the tragedy of our present civilization, unreason | Tagged | Leave a comment

Early 2018 Nor’easter

via Early 2018 Nor’easter

The following are from GFS/NCEP/U.S. National Weather Service model runs:



Bombogenesis indeed!

The following are from the Meteocentre UQAM in Montreal, PQ, Canada, running the European Weather Model, as well as others.

Posted in American Meteorological Association, atmosphere, National Center for Atmospheric Research, NOAA | 1 Comment

Klaus Lackner: brilliant mind with a good idea

Wally Broecker’s “hat tip” to Lackner’s work:

Posted in Anthropocene, carbon dioxide, clean disruption, clear air capture of carbon dioxide, climate disruption, climate economics, climate justice, economics, emissions, evidence, fossil fuel divestment, global warming, greenhouse gases, Hyper Anthropocene, investment in wind and solar energy, investments, klaus lackner, leaving fossil fuels in the ground, Spaceship Earth, zero carbon | 1 Comment

Cloud Streets

From NASA’s Earth Observatory and MODIS, here are cloud streets due to double inversion layers, warm atop cold atop warm:

(Click image for a larger figure, and use your browser Back Button to return to blog.)
Dr Marshall Shepherd at Forbes puts the present cold snap in perspective. Dr Shepherd was previously President of the American Meteorological Society.


By the way, the AMS has a new publication, Explaining Extreme Events of 2016 from a Climate Perspective available. They did one like this for 2014, and this is the 2016 edition.

Dr Jennifer Francis once talked about how what’s going on in the Arctic could be behind Boston’s deep freeze, and why the Arctic matters.

And this is Professor Jim White talking about abrupt climate change and its relation to the Arctic:

Posted in American Meteorological Association, AMETSOC, Arctic, atmosphere, attribution, climate, Jennifer Francis, Marshall Shepherd | Leave a comment

What are the odds of net zero?

What’s the Question?

A question was posed by a colleague a couple of months ago: What are the odds of a stock closing at the same price it opened? I found the question interesting, because, at first, it appeared to be a one-dimensional version of another problem I looked at here, which was in two dimensions. Well, I have produced an estimate, and am reporting results here. My first impressions of the problem were wrong. It actually is a two dimensional problem, not a one dimensional one. And it is not the same as the earlier problem, because although one of its dimensions is discrete, the other, time, is (essentially) continuous. I’ll explain.

The Data

I obtained intraday trades, or “Time & Sales” records (as they are called), for a single stock on the NASDAQ for a 71 day period. The stock was the one my colleague asked about. This series consisted of 679,390 original records. After discarding corrections (about 1.1%), there were 679,314 records remaining. I also discarded unnecessary columns, retaining date, time, and price. The times are recorded in U.S. Central Standard Time and are available to millisecond resolution, although the source of the data assumes no responsibility for time accuracy, saying this portion of the data is what they get from the NASDAQ and they copy it. Prices are reported to cents resolution.

The data were grouped into days and, so, there was a time series of trades within each day. The price of the first trade of the day was subtracted from the prices of the remaining trades and, so, the trades of each day are referenced with respect to the opening price. The net zero condition indicated by the title of this post therefore occurs if the transformed final price of a day is zero, or, in practice, within a penny of zero. The objective of the study is to estimate the odds of that happening.
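As a concrete illustration of that transformation, here is a minimal sketch in base R. The file name and the column names (trade_date, trade_time, price) are hypothetical stand-ins, since the actual dataset layout is described above but not reproduced here:

    # Minimal sketch (hypothetical file and column names): reference each day's
    # trades to that day's opening price and flag days closing within a penny of it.
    trades <- read.csv("time_and_sales.csv")
    trades <- trades[order(trades$trade_date, trades$trade_time), ]

    # Subtract each day's opening price from that day's trades:
    trades$offset <- with(trades, ave(price, trade_date, FUN = function(p) p - p[1]))

    # Final offset of each day, and the empirical fraction of "net zero" days:
    final_offset <- tapply(trades$offset, trades$trade_date, function(o) tail(o, 1))
    mean(abs(final_offset) < 0.01)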

The Data and Code are Provided for Examination

I am providing data and code supporting this study. They are available in a Git repository on Atlassian Bitbucket. In the provided data, I have omitted the ticker symbol, the base prices, the record flags, and the date portion of timestamps, because:

  • There’s no reason to mention the publicly traded company involved.
  • I am not a financial advisor and I don’t want to run afoul of rules about seeming to give advice when I’m not.
  • I want to be able to provide the data so readers and students can reproduce what I did, but I don’t want to violate the Terms and Conditions on the use of the data from the site from which I purchased them.

Also, in the dataset provided, the dates have been replaced with a trading day number. This is all done to preserve the anonymity of the stock. For the study, all that’s needed is some label to group records together. Also, the data provided are the transformed data with the opening trade price subtracted. Again, by removing the magnitude of the price, I’m attempting to protect the stock’s anonymity. I have also provided a copy of the code which was used to perform the transformation.

Also, the size of the data, based upon 71 days, was arbitrary. On the one hand, it could be thought of as a cost constraint, that is, more data could cost more. If I’m having fun answering a question like this and writing it up, it might as well be one where some constraints typical of studying more serious questions arise. On the other hand, 71 days of intraday trading data isn’t negligible.

Approach

It’s possible to apply analytical models to the problem, and it’s almost unavoidable to use some theory for reasons that’ll be explained in the material to come. I also understand that this problem, with suitable assumptions, is a question addressed in standard financial trading studies, such as the result that the variations in stock prices intraday are t-distributed. Ultimately, and apparently, for large sets of stocks, daily fluctuations depend upon order flows. See J. C. Hull, Options, Futures, and Other Derivatives, 5th edition, Prentice-Hall, 2003, for more of this kind of theory.

For such a specific question, though, with such a limited dataset, I tend to avoid using models which depend on assumptions about the process at hand, or rely upon asymptotics. I also try to make as few distributional assumptions as I can, letting the data and its interaction with the question at hand speak for themselves. I would have liked to use a t-distribution for the variations in the model, but neither of the two Kalman filtering R packages I typically use, dlm and KFAS, offers such an option. It was important to use a package which could estimate time-varying covariances on its own, since these signals are not stationary.

That said, it is nevertheless true that no purely empirical approach will give a good answer with this dataset. The closest any close gets to the opening price in this 71 day dataset is a penny, and there are only two days when that is true. That would produce an estimate of zero odds. That not only violates Cromwell’s Rule, but it is wrong, because on a day after this dataset was compiled this stock did close at its opening. In fact, that event prompted the question.

The idea I chose was to model the movement of the stock from its open on any day to its close as a random walk, one that I’ve described before:

\mathring{s}_{t} = v_{t} + \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t})

v_{t+1} = v_{t} + \mathcal{D}_{2}(0, \sigma^{2}_{v})

Here \mathring{s}_{t} is the reported stock price at time t, an offset from the opening price of the day. The model allows for a noise process on the observation, adding \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}), which can be thought of as a distortion of the stock’s true, latent offset-from-opening-price value, v_{t}, including rounding of that price to a penny. So v_{t} undergoes steps drawn from the distribution \mathcal{D}_{2}(0, \sigma^{2}_{v}) and these form the basis for \mathring{s}_{t}, after being “smudged” by \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}).

The idea of using 71 sets of data is to characterize both \mathcal{D}_{2}(0, \sigma^{2}_{v}) and \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}) and their parameters. Once those, and their credible intervals, are in hand, they can be used in a simulation of a large number of synthetic days. Given a big enough such population, it’s possible to count the number of times \mathring{s}_{t_{\text{final}}} = 0 or, more precisely, the number of times 0.01 > |\mathring{s}_{t_{\text{final}}}|.
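Before getting to the estimation machinery, here is a minimal sketch of that counting idea, simulating the random walk model above directly with assumed, entirely hypothetical, values for the two variances. Because the sum of Gaussian steps is itself Gaussian, the end-of-day latent offset can be drawn in one shot:

    # Minimal sketch (assumed variances): simulate synthetic days under the
    # random walk model and count those ending within a penny of the open.
    set.seed(5)
    n_days  <- 100000
    n_steps <- 23400             # one latent step per trading second
    sigma_v <- 0.002             # hypothetical latent step s.d.
    sigma_s <- 0.01              # hypothetical observation noise s.d.

    v_final <- rnorm(n_days, 0, sqrt(n_steps) * sigma_v)   # latent offset at the close
    s_final <- v_final + rnorm(n_days, 0, sigma_s)         # observed closing offset

    mean(abs(s_final) < 0.01)    # estimated probability of a net zero day

The work that follows is about getting defensible values and uncertainties for those variances from the 71 days of data, rather than assuming them.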

On the Form of \mathcal{D}_{2}(0, \sigma^{2}_{v})

For the purposes here, \mathcal{D}_{2}(0, \sigma^{2}_{v}) \sim \mathcal{N}(0, \sigma^{2}_{v}). I’m not happy about that Gaussian, but it’s a start.

On the Form of \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t})

While the dependence of \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}) upon v_{t} might be modeled more completely, that problem is eclipsed by a practical one the source dataset suffers. Time & Sales records for different days don’t have trades registered to the same moments of the trading day, and in order to use these records in the manner I intend, I need to register them so. Accordingly, as will be seen below, I use a penalized smoothing spline from the R pspline package to create proxy series for each of the trading days, migrating their values onto a common time grid. When these data are used,

\mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}) = \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}})

and so that question is finessed because, despite the “chunkiness” of the trades, the result of the penalized spline is a continuous Real. Accordingly, the resulting disturbance has the same form as \mathcal{D}_{2}(0, \sigma^{2}_{v}), although with a different variance, \sigma^{2}_{\mathring{s}}.

Implementation

To the accuracy of consideration, which is a trading time stamp resolution of one second, many observations in Time & Sales records are recorded at the same moment. Accordingly, these records are first pre-processed to keep only the latest trade for any given moment, so defined. There are 23,400 seconds in the trading day.

For each day of trades, a P-spline is calculated for each of the intraday trading histories, and these are migrated onto the regular grid of trading seconds. There are 71 such days in the dataset. An example of such an interpolation, with the data overprinted, is given in the figure below. It is from day 24 of the dataset:


(Larger version of figure can be seen by clicking on the above. Use your browser Back Button to return to blog.)
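For readers who want to see the shape of that registration step, here is a minimal sketch. It uses synthetic trades and stats::smooth.spline as a stand-in for the pspline package’s penalized spline described above, so it illustrates the idea rather than reproducing the actual code:

    # Minimal sketch: migrate irregularly timed trade offsets onto the common
    # one-second grid via a smoothing spline (a stand-in for pspline here).
    set.seed(24)
    trade_secs  <- sort(sample(0:23399, 5000))                 # irregular trade times, in seconds
    offsets     <- round(cumsum(rnorm(5000, sd = 0.003)), 2)   # synthetic, penny-rounded offsets

    common_grid <- 0:23399                            # 23,400 trading seconds
    fit         <- smooth.spline(trade_secs, offsets)
    registered  <- predict(fit, common_grid)$y        # proxy series on the common grid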

Next, a filtering, smoothing Kalman filter is applied to the 71 days of trades, seen as a 71-variate response with a common state and covariance terms. The covariances are estimated from the data using maximum likelihood. They are allowed to vary throughout the trading day.
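As a simplified sketch of that step, the following fits a single-day local level (“random walk plus noise”) model with KFAS, estimating the variances by maximum likelihood. The actual analysis treats the 71 registered days jointly as a 71-variate response; this univariate version, with a synthetic series standing in for a registered day, is only meant to show the mechanics:

    # Minimal sketch: local level state-space model in KFAS with ML-estimated
    # state and observation variances, for one synthetic registered day.
    library(KFAS)

    set.seed(1)
    y <- cumsum(rnorm(23400, sd = 0.002))    # synthetic stand-in for one day's offsets

    model <- SSModel(y ~ SSMtrend(1, Q = list(matrix(NA))), H = matrix(NA))
    fit   <- fitSSM(model, inits = c(log(var(y)), log(var(y))), method = "BFGS")

    # Simulated state paths from the fitted model, as used in the next step:
    sims <- simulateSSM(fit$model, type = "states", nsim = 100)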

Given the fitted model, 100 instances of the estimated states and of the \epsilon (or \mathcal{D}_{2}(0, \sigma^{2}_{v})) noise terms for the 71 trading days are simulated using the KFAS package’s simulateSSM function. The last value of each trading day is set aside. The values of the 100 states are taken as means of 100 Gaussian distributions. Noise terms differ for each of the 71 days, so a composite variance is calculated for each of the 100 simulations. That composite variance is calculated as a stationary bootstrap of the 71 days for each simulation, with a mean block size of 5 (days) and 1000 bootstrap replicates each. A stationary bootstrap is used because it is unlikely the variance for each day is independent of the others. The tsbootstrap function of the tseries package is used for the purpose.
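A minimal sketch of that stationary bootstrap, with hypothetical per-day variance estimates standing in for the ones produced by the simulations:

    # Minimal sketch: stationary bootstrap of 71 per-day noise variances,
    # mean block length of 5 days, 1000 replicates.
    library(tseries)

    set.seed(42)
    day_var <- rexp(71, rate = 400)      # hypothetical per-day variance estimates

    boot <- tsbootstrap(day_var, nb = 1000, statistic = mean, b = 5, type = "stationary")

    mean(boot$statistic)   # composite variance estimate
    boot$se                # bootstrap standard error of that estimate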

A histogram of the 100 means from the final states is shown below:
(Larger version of figure can be seen by clicking on the above. Use your browser Back Button to return to blog.)

A histogram of the standard deviations (not variances) of the noise terms for each of the 100 simulations is shown below:

(Larger version of figure can be seen by clicking on the above. Use your browser Back Button to return to blog.)

These means and variances are treated as a mixture distribution of 100 Gaussians with the given 100 means and their corresponding 100 variances. Choices of which Gaussian is used in any instance are weighted equally, and 100,000 samples are drawn from this mixture. This is based upon a plan by Jack Baker from his vignette for the sgmcmc package, although here only univariate draws are made for each sample, and the code is a little different.
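In outline, the mixture sampling looks like the following sketch, with made-up means and standard deviations standing in for the simulated ones:

    # Minimal sketch: sample 100,000 draws from an equally weighted mixture of
    # 100 Gaussians and count draws landing within a penny of zero.
    set.seed(43)
    mu    <- rnorm(100, 0, 0.05)       # stand-ins for the 100 final-state means
    sigma <- runif(100, 0.02, 0.10)    # stand-ins for the 100 composite standard deviations

    k     <- sample.int(100, size = 100000, replace = TRUE)   # equal mixture weights
    draws <- rnorm(100000, mean = mu[k], sd = sigma[k])

    mean(abs(draws) < 0.01)            # fraction of synthetic days within a penny of the open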

The number of trading days ending at an offset trade price within a penny of zero is counted, and the fraction of the total is taken as the probability of the intraday offset price, or total net trades, being zero.

The probability so derived is about 0.015, maybe a little less.

What this means in terms of waiting time, using a negative binomial model, is that the expected number of trading days before the first net zero is \frac{1-p}{p}, where p = 0.015 and the trials are independent of one another. For that value, the expected wait is \frac{1-p}{p} \approx 66 trading days. For “contagious” runs, this could be longer or shorter, depending upon the serial correlation.

Criticisms

There are two shortcomings in the above calculation.

First, because there’s a need to register all trading days on a common time grid, by migrating stock prices using interpolation, it is possible the variance of the original dataset has been reduced. Indeed, this is suggested a bit by the figure above showing the interpolation with the data points overprinted. This is an unfortunate requirement of using the Kalman filter approach to estimation. It might be possible to correct for this effect by inflating the variance terms. However, it is also possible that the scatter observed in the figure is due to the chunkiness (“to the penny”) with which stock trades are reported when the stock price is effectively between two ticks.

Second, the distribution assumed for stock variation is Gaussian, primarily because the KFAS package does not, at present, support a t-distribution as one of its modeling options. Were that to be available, it would be interesting to repeat this calculation.

The effect of both these criticisms would be to reduce the probability of total net trades being zero. Accordingly, it seems leaving the probability at 0.01, or 1%, is a good estimate, even if it needs to be corroborated by addressing the two criticisms.

Posted in dependent data, evidence, financial series, investing, investments, model-free forecasting, numerical algorithms, state-space models, statistics, time series, trading | Leave a comment

From Xian’s blog, “drivers are not interested in maths formulas”

via drivers are not interested in maths formulas

Posted in Christian Robert, risk, statistics, Uncategorized | Tagged , | Leave a comment

Professor Kevin Anderson, from November 2017, on Democracy Now!

I have featured interviews with Professor Kevin Anderson before, one of the most direct and clear-minded authorities on the implications of continuing to drive climate change through fossil fuel emissions and a consumption-oriented Western lifestyle.
In November 2017, around the time of the COP23 meetings, he was interviewed by Amy Goodman on DEMOCRACY NOW!. Links to the two interviews are included below. This is a kind of end-of-2017 wrap-up, although I might fit in one more post.

Despite what some might think of Democracy Now!, they are one of the only major news organizations which has interviewed people like Professor Anderson and a personal hero, the late German lawmaker and solar energy revolutionary, Hermann Scheer. (Seriously consider reading the most important of Scheer’s books. I’ve written about him many times.)

By the way, Professor Kevin Anderson and Dr John Broderick of the Tyndall-Manchester Climate Change Research Center have recently produced a report titled Natural gas and climate change (17 October 2017) which is linked.

Posted in Anthropocene, capitalism, climate change, climate disruption, climate economics, Democracy Now!, global warming, Hermann Scheer, Hyper Anthropocene, Kevin Anderson | Leave a comment

490+ ppm CO2e

Former Secretary of Energy Stephen Chu at Climate One in 2016.

We’ve made progress, but it is nowhere near fast enough. The internal combustion engine is on life support. Fossil fuel energy sources and companies are stranded assets and dead men walking.

Clear air direct capture of CO2 increasingly looks like something we’re going to have to do. We don’t know how to scale it. It is horrifically expensive at the moment. And we’ll need to zero nearly all emissions of greenhouse gases to make it affordable.

Posted in Anthropocene, carbon dioxide, carbon dioxide capture, clear air capture of carbon dioxide, climate, climate change, climate data, greenhouse gases, Hyper Anthropocene, Stephen Chu | Leave a comment

Confidence intervals and that IPCC: Why climate scientists need statistical help

At Andrew Gelman’s blog (Statistical Modeling, Causal Inference, and Social Science), Ben Goodrich makes an interesting observation in a lengthy discussion about confidence intervals, how they should be interpreted, whether or not they have any socially redeeming value, und so weiter. Dr Goodrich zings the opening paragraphs of the Summary for Policymakers from the AR5 report of the Intergovernmental Panel on Climate Change for its treatment of confidence intervals:

My current favorite example of the potential damage of confidence intervals is from the Summary for Policymakers of the Intergovernmental Panel on Climate Change.

http://www.ipcc.ch/pdf/assessment-report/ar5/syr/AR5_SYR_FINAL_SPM.pdf

To take one example, the first real paragraph says:

“The period from 1983 to 2012 was _likely_ the warmest 30-year period of the last 1400 years in the Northern Hemisphere, where such assessment is possible (_medium confidence_). The globally averaged combined land and ocean surface temperature data as calculated by a linear trend show a warming of 0.85 [0.65 to 1.06] °C {^2} over the period 1880 to 2012, when multiple independently produced datasets exist.”

The second footnote actually gets the definition of a confidence interval correct, albeit in a way that only a well-trained statistician would understand:

{^2}: Ranges in square brackets or following ‘±’ are expected to have a 90% likelihood of including the value that is being estimated

So, basically they are correctly saying to statisticians “The pre-data expectation of the indicator function as to whether the estimated confidence interval includes the true average temperature change is 0.9” and incorrectly saying to everyone else “there is a 0.9 probability that the average temperature rose between 0.65 and 1.06 degrees Celsius between 1880 and 2012”. I am hesitant to say it is okay for policymakers adopt the latter misinterpretation because they would misinterpret the former interpretation.

I think your main point comes down to what the alternative is. If confidence intervals are inevitable, then I guess it would be less damaging for people to interpret them incorrectly than correctly. But if confidence intervals can be replaced by Bayesian intervals and interpreted correctly, I think that would be preferable.

The report gets more convoluted in its attempt to simultaneously be useful to policymakers and not wrong statistically. The first footnote says

{^1}: Each finding is grounded in an evaluation of underlying evidence and agreement. In many cases, a synthesis of evidence and agreement supports an assignment of confidence. The summary terms for evidence are: limited, medium or robust. For agreement, they are low, medium or high. A level of confidence is expressed using five qualifiers: very low, low, medium, high and very high, and typeset in italics, e.g.,
_medium confidence_. The following terms have been used to indicate the assessed likelihood of an outcome or a result: virtually certain 99-100% probability, very likely 90-100%, likely 66-100%, about as likely as not 33-66%, unlikely 0-33%, very unlikely 0-10%, exceptionally unlikely 0-1%. Additional terms (extremely likely 95-100%, more likely than not >50-100%, more unlikely than likely 0-<50%, extremely unlikely 0-5%) may also be used when appropriate. Assessed likelihood is typeset in italics, e.g., _very likely_.

So, they are using the word "confidence" not in the technical sense of a "confidence interval" but to describe the degree of agreement among the scientists who wrote the report on the basis of the (presumably frequentist) studies they reviewed. And then they have a seemingly Bayesian interpretation of the numerical probability of events being true but without actually using Bayesian machinery to produce posterior expectations. If they had asked me (which they didn't), I would have said to just do Bayesian calculations and justify the priors and whatnot in the footnotes instead of writing this mess.

Now, the IPCC is no more special in this kind of abuse of terminology than the American Medical Association is. And, yes, maybe if statisticians helped them out, the Policymakers would fall asleep on page three. Still, people oughtn’t get to pick their own standards of evidence.

There’s a fair bit to digest there and many pithy remarks elsewhere. For example, Corey Yanofsky quotes work by Bickel and remarks

Also, confidence procedures can be consistent with the betting operationalization you’re discussing (https://arxiv.org/abs/0907.0139). You need a diachronic Dutch book to force full Bayes.

I did not realize there was a philosophy based upon Dutch books.
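To see the “pre-data expectation” reading Goodrich describes in action, here is a minimal sketch, my own illustration rather than anything from the discussion above. Across repeated samples, about 90% of 90% confidence intervals cover the true value; any single realized interval either does or does not:

    # Minimal sketch: frequentist coverage of a 90% confidence interval
    # for a mean, under repeated sampling from a known truth.
    set.seed(1)
    true_mean <- 0.85     # hypothetical "true" value, echoing the AR5 example
    covered <- replicate(10000, {
      x  <- rnorm(30, mean = true_mean, sd = 0.5)
      ci <- mean(x) + qt(c(0.05, 0.95), df = 29) * sd(x) / sqrt(30)
      ci[1] < true_mean && true_mean < ci[2]
    })
    mean(covered)         # close to 0.90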

Posted in Bayesian, climate, IPCC, statistics | Leave a comment

Worthy of watching

https://www.democracynow.org/2017/12/25/noam_chomsky_in_conversation_with_amy

So, what say you? Why should Professor Chomsky not be believed? What evidence proves, nay, even suggests, that he’s other than spot on?

Posted in Anthropocene, climate, climate change, climate disruption, ethics, global warming, Hyper Anthropocene, Uncategorized, Unitarian Universalism | Leave a comment

What Al Gore, Paul Hawken, friends, and company laughingly call “progress”

10 years ago it was 384.26 ppm. That means it is increasing by 1.2 ppm per year.

Posted in American Meteorological Association, AMETSOC, Anthropocene, climate, climate change, climate data, climate disruption, ecology, environment, Humans have a lot to answer for, Hyper Anthropocene, Nature | 2 Comments

Our Nisse and his porridge, 24th December 2017

I celebrate a Norwegian custom, honoring the Nisse of the house and land on Christmas Eve. (The Swedish equivalent is the tomte.) While we don’t have a farm, Claire and I are avid environmentalists, my having been one since 1971. So, any being who cares for animals and seems to have a long, natural life is an ally. So …

  • Porridge. Check.
  • Butter on porridge. Check.
  • Glass of ale. Check.

Yes, I’m an atheist, a physical materialist, a member of a Unitarian Universalist congregation, and a member of the UU Humanists Society, but, still, there are the Nissen, and I applaud them! What’s rational about Nissen Culture! Also, I think panpsychism makes a bit of sense, and I am a bit of a Platonist, at least with respect to mathematical forms.

Posted in Carl Safina, Earle Wilson, environment, environmental law, Henry David Thoreau, natural philosophy, naturalism, Nature, Uncategorized | Tagged , , , | Leave a comment

Merry Newtonmas tomorrow! On finding the area of the Batman Shape using Monte Carlo integration

It’s Newtonmas 2017 tomorrow!

What better way to celebrate than talk about integration!

The Batman Shape (sometimes called the Batman Curve, somewhat erroneously, I think) looks like this:

You can find details about it at Wolfram MathWorld, including its area in closed form. What’s interesting is integrating it without algebra, numerically. Quadrature would be difficult and a lot of work. Monte Carlo integration, not so much. Also interesting is the relation between that and the use of Monte Carlo integration for Bayesian computation, such as in Markov Chain Monte Carlo and slice sampling.
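For readers new to the idea, here is a minimal hit-or-miss Monte Carlo sketch in R. It uses a unit circle as a stand-in shape, since the only thing that changes for the Batman Shape is the membership test, which is considerably messier:

    # Minimal sketch of hit-or-miss Monte Carlo integration, with a unit circle
    # standing in for the Batman Shape.
    set.seed(7)
    n <- 1000000
    x <- runif(n, -1, 1)          # sample uniformly over the bounding box [-1, 1]^2
    y <- runif(n, -1, 1)
    inside   <- x^2 + y^2 <= 1    # membership test for the shape
    area_box <- 4                 # area of the bounding box
    area_box * mean(inside)       # estimate; should be close to pi

The estimated area is the bounding box area times the fraction of uniform samples landing inside the shape, and the error shrinks like one over the square root of the sample size.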

I was going to do the area of the Batman Shape using Monte Carlo integration and compare it with the exact value, but James Schloss has already done that. I might come back and do it using slice sampling, which will be a little interesting since it’s a two-dimensional figure. I’ll append that below if I do.

Posted in Bayes, Calculus, Markov Chain Monte Carlo | Tagged , , , , | Leave a comment

The Internet was not created “because of an intelligence effort”

This post is in response to this article at Quartz.

Whether the fundamental claim of the article is correct or not, that Google was founded with research funding from the intelligence community, it is decidedly not true that:

In fact, the internet itself was created because of an intelligence effort: In the 1970s, the agency responsible for developing emerging technologies for military, intelligence, and national security purposes—the Defense Advanced Research Projects Agency (DARPA)—linked four supercomputers to handle massive data transfers.

It was not called DARPA at the time. The later focus exclusively on defense technologies was a product of the Mansfield Amendment of 1973. Before that, ARPA was involved in creating GPS as well as other technologies for computing, including the very early Internet, then called the ARPAnet.

Indeed the purpose of the ARPAnet was not to “handle massive data transfers” but simply to link computers together and figure out the best way to do that. Remember, what we call the Internet was still a long way off, beginning in the early 1990s, as was its separate follow-on, the World Wide Web. There was also an interest in figuring out the best ways of configuring large groups of computers, both civilian and military, for survival after a nuclear attack. The ARPAnet itself built on work which ARPA funded, including a Hawaiian network called ALOHAnet.

Later, this construct almost folded, because it had been assigned for care to the National Science Foundation, redubbed NSFnet, used principally for transfers of scientific data and large scale computing collaborations, and ended up sucking up all the funds NSF had. It wasn’t until former Vice President Al Gore, then a Senator, got involved, and negotiated what in effect was a large, federally backed bridge loan to transfer operation of the network into private hands, that the Internet became a reality. The account at Quartz completely misrepresents this history. If the Internet was always the “demon spawn” of the intelligence community, why didn’t it come to its rescue then?

I think people forget how intermingled defense and space funding of research was with technological development. Indeed, people forget how much funding for R&D brought us the world we live in. While federal funding for R&D has diminished, including defense funding, with the possible exception of NIH, what’s striking is that corporate funding for R&D has collapsed. While some, even many companies, fund what they call “research”, this is nearly indistinguishable from product development. Very few companies fund pure R&D, that is, research without a mission or purpose akin to scientific work. Google is one of those companies. (I know Microsoft is another.) There is a tendency for companies in high tech to “acquire technology” by buying start-ups, but few start-ups will make the breakthroughs that led to fundamental changes, whether the transistor, or Ethernet, or fiber optics.

And the federal government’s present mood — and perhaps that of a substantial chunk of the American public — is anti-Science. This has serious implications for both future economic development in the United States as well as the robustness of our military. While military R&D goes on behind the scenes in places like Lockheed-Martin, it suffers greatly from secrecy practices. While Google itself is highly secretive, it’s well known that if a culture wants to push the cutting edge of technology, imposing heavy secrecy and proprietary limits on open discussion and publication hurts more than it helps. For one, it limits the research communities’ criticism of the technology, which is always helpful. For another, it makes those developing these technologies feel they have an advantage over everyone else, a feeling which is ephemeral at best: Most fundamental developments in Science and Engineering are being pursued concurrently by several groups, because they know what the fundamental issues are and where lies the frontier. This tends to retard technological development.

No, the best way to keep ahead of your competition, whether commercial or country, is to push the best technology with the best people, drop secrecy, and always stay ahead of everyone else. There are reasons for having secrecy … to protect short-term, tactical military advantage … but I have never been convinced, in the same way the late Senator Daniel Patrick Moynihan wasn’t (see also), that in the long run it is a win.

Posted in American Association for the Advancement of Science, basic research, IEEE, R&D, science, secrecy | Leave a comment

Miami Beach

(Hat tip to Yale Climate Connections)

Posted in Anthropocene, climate, climate change, climate disruption, climate economics, flooding, floods, Florida, hydrology, Hyper Anthropocene, sea level rise | Leave a comment

The Southern California fires, courtesy of NASA Earth Observatory

These images are from the NASA Earth Observatory.

Posted in Anthropocene, Arnold Schwarzennegger, climate change, climate disruption, Humans have a lot to answer for, Hyper Anthropocene, Jerry Brown | Leave a comment

tripleplus ungood: climate models which run hot in the long run are also the most accurate at reproducing today and the recent past

Patrick Brown and Ken Caldeira dropped a bombshell into the recent (7 Dec 2017) issue of Nature, and the repercussions are echoing around the scientific world. (See, for example, the related article in MIT’s Technology Review.) To be crisp, current emissions trajectories and new understanding regarding climate sensitivity suggest Earth is on a path to reach +5°C over pre-industrial by end of century.

That emissions have not abated is no secret. It seems to need to be repeated constantly, but it should be recalled that what matters for climate (radiative) forcing is the amount of cumulative emissions. That’s because CO2 takes a long time to scrub from the atmosphere, so, if there are any emissions at all, this cumulative amount keeps building up, even if only 30% of total emissions remain in the atmosphere. (The rest are still in the climate system, but in oceans and soils.)

What Brown and Caldeira have contributed is a look at the large set of climate models that are used for climate forecasting, and which of the set were the most successful at predicting present conditions given conditions in recent times. There are a large number of these, between three dozen and five dozen, depending upon how one counts, and their vintage. Some are better at certain aspects of climate than others. When forecasts are made, they are run in an ensemble fashion, meaning that all the models get a crack at the global conditions, obtained by observations-to-date, and then are run forward to project how things will be, given concentrations of CO2 in atmosphere, a rate of volcanic eruption, and so on. The UNFCCC forecasts and the U.S. National Climate Assessment (NCA) forecasts are based upon these ensemble runs. The forecasts are obtained as a weighted average of the ensembled outputs.

Recall that these climate models have been much maligned by people and groups who doubt that climate disruption is a serious or any risk to humanity and its economies. One thing claimed is that the long run projections of the climate model runs can’t be trusted because the ensemble does not do that great a job of predicting today from recent history of observations. That much is correct.

However, in a nutshell (see their paper for more, and the figure below, taken from that paper), what Brown and Caldeira did was to emphasize the contributions to temperature prediction of the subset of models which are the most skillful at predicting the present based upon recent observations. They used a technique called multivariate partial least squares regression. What they found was that, with that weighting, the predictions ran hotter than those using the entire ensemble without such a technique.


(Click on image to see a larger figure, and use browser Back Button to return to reading blog.)
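For those curious about the mechanics, here is a minimal sketch of the kind of partial least squares regression involved, using the R pls package and entirely fabricated numbers. It is my own illustration of the technique Brown and Caldeira name, not their code, and the predictors and response here are random stand-ins for present-day skill metrics and end-of-century warming:

    # Minimal sketch: partial least squares regression relating present-day
    # "skill" predictors of ensemble members to their long-run warming.
    library(pls)

    set.seed(11)
    n_models <- 40
    skill    <- matrix(rnorm(n_models * 10), n_models, 10)    # hypothetical present-day metrics
    warming  <- 4 + skill %*% rnorm(10, sd = 0.2) + rnorm(n_models, sd = 0.3)  # hypothetical warming

    fit <- plsr(warming ~ skill, ncomp = 3, validation = "CV")
    summary(fit)           # variance explained per component
    coef(fit, ncomp = 3)   # weights placed on the present-day predictors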

This has two major implications, if this trend continues.

First, it means that the endless arguments about what is a good estimate for climate sensitivity have found a powerful resolution: Sensitivity looks high. If that is not the case, then the otherwise unreasonable skill of certain models at predicting the present needs to be explained.

Second, if we’ve really moved to the vicinity of +5°C by 2100, then:

  1. climate bifurcations are well within possibility, and the projections of University of Exeter Professor Tim Lenton, based upon his analysis of observations, look prescient.
  2. Carbon dioxide removal (CDR), perhaps using techniques like those pioneered by Professor Klaus Lackner, looks increasingly necessary, despite its outrageous expense and the multi-century timescale these operations must endure. The question of moral hazard of these technologies is rapidly being eclipsed by the fact that the world has not done enough to keep us out of serious trouble.

Note there’s No Free Lunch for fossil fuel emitters here, even granted CDR. That’s because it is so expensive to scrub CO2 from the climate system that it only makes economic sense if emissions are zeroed as rapidly as possible. Even so, given no emissions from combustion or energy for transport, production, harvesting, and the rest, just feeding people on the planet will release something like 2 GtC per year.

There is a nice Abstract of the finding by Dr Brown here:

The above is a lecture by Professor Lenton on bifurcations in the climate system, and the clues which suggest one is approaching, a trajectory which hopefully can be reversed.

These other outlets have covered this story, too, in addition to Technology Review:

In net, it looks like the IPCC may have underestimated future warming trends due to climate change.

Posted in American Association for the Advancement of Science, American Meteorological Association, American Statistical Association, AMETSOC, Anthropocene, bifurcations, clear air capture of carbon dioxide, climate, climate change, climate disruption, climate economics, climate models, critical slowing down, Cult of Carbon, destructive economic development, Global Carbon Project, global warming, Humans have a lot to answer for, Hyper Anthropocene, Kevin Anderson, leaving fossil fuels in the ground, radiative forcing, Spaceship Earth, the right to be and act stupid, the right to know, the stack of lies, the tragedy of our present civilization, the value of financial assets, Timothy Lenton, tragedy of the horizon | Leave a comment