Banner day for solar generation this early in the late Winter/early Spring season!

(Click on image to see a larger figure, and use browser Back Button to return to blog.)

Our system, and its supporting cast.

This is about energy democracy, as much as it is about other things.

Posted in American Solar Energy Society, Bloomberg New Energy Finance, green tech, RevoluSun, solar democracy, solar domination, solar energy, solar power, the energy of the people, the green century

M.G.L. 40A §3, next-to-last paragraph

“No zoning ordinance or by-law shall prohibit or unreasonably regulate the installation of solar energy systems or the building of structures that facilitate the collection of solar energy, except where necessary to protect the public health, safety or welfare.”

That’s from the Massachusetts General Laws. I added emphasis.

Posted in Bloomberg New Energy Finance, BNEF, citizenship, CleanTechnica, Commonwealth of Massachusetts, Constitution of the Commonwealth of Massachusetts, decentralized electric power generation, decentralized energy, economics, electricity, energy utilities, grid defection, local generation, local self reliance, Massachusetts, Massachusetts Clean Energy Center, solar democracy, solar domination, solar energy, solar power

Certainly not “clean coal”, but is zero emission natural gas combustion a key to a zero Carbon future?

Eli Rabett has a great idea over at Rabett Run.

And I particularly like the directions in which commenters Russell Seitz and John O’Neill are taking it. Hmmm, dimethyl ether as a fuel?

It’s been proposed.

(Click on image for a better look, and use browser Back Button to return to blog.)

Update, 2018-02-25, 00:34 ET

Key to Eli’s suggestion is the Chaudhary-Bhaskarwar paper, which I did not highlight sufficiently in the above.

Also, there already are patents declared in this space:

Update, 2018-02-27

Per David B Benson, the original proposal by Eli is apparently an instance of something called the Allam power cycle; there is an update about it from its developer, and a project trying it is nearing completion.

By the way, NetPower is the company developing the process and project.

Posted in American Association for the Advancement of Science, Anthropocene, Bloomberg New Energy Finance, bridge to somewhere, Buckminster Fuller, carbon dioxide, carbon dioxide capture, climate change, climate disruption, climate economics, dimethyl ether, global warming, Hyper Anthropocene, natural gas

Will soils hang on to their Carbon?

This is essentially not an analysis, simply an index to recent research on the matter of the soils reservoir for Carbon, and a little reaction.

To begin, here’s the part of the Carbon Cycle that’s involved:

Should this production increase, particularly if the CO2 uptake of terrestrial plants wanes, the 45% sink we’ve fortunately lived with could lessen, making our situation worse.

Here are some papers, including reports of large scale experiments. I follow with some thoughts and questions.

I am particularly intrigued by Metcalfe, and by van Groenigen, Osenberg, Luo, and Hungate. Recent studies examining options to rebalance the Carbon Cycle by means such as enhanced weathering, or afforestation by planting large numbers of plants like Jatropha curcas (see more), have revealed that the resulting albedo change and moisture capture can change the climate of entire regions. If microbial communities reorganize in a big way, whether in temperate forests, in deserts, or in tundra, could they by themselves change regional climate? Could they be bioengineered? Do we understand that ecosystem well enough to predict how it would develop? Are there nonlinear surprises lurking there?

Posted in adaptation, American Association for the Advancement of Science, Anthropocene, agroecology, bacteria, being carbon dioxide, Carbon Cycle, carbon dioxide, Carl Safina, climate, climate change, climate disruption, Global Carbon Project, global warming, microbiomes, nonlinear, nonlinear systems | 1 Comment

“It should be illegal to deceive a country’s heart”

“I didn’t mean for this to happen.”

Intentions are irrelevant, despite what the law in one or more countries says.

Outcome and results are what matter.

Guns.

As I wrote,

Oh, I am frustrated, because a lot of this discussion is pure deflection, nothing more.

The facts are that there are technologies available, difficult to defeat and detectable if defeated, which can guarantee that the only user of a gun is its authorized owner. While I am not a gun owner, nor would I be, I understand that people want guns for hunting, whether game or humans (“in self defense”). Still, public safety and public health seem adequate justifications for imposing technological controls, backed up by legal measures for incriminating those who try to defeat those controls.

Guns should be available to those who want them, for legal purposes, but the rest of us should have the right to know who they are, and the authorities we assign with responsibility should have the right to intervene when public safety is threatened.

And the rest of the “self defense against tyranny” argument is nonsense, not supported by the Constitution, although I admit one could cobble together some legal theory along those lines from circumstantial historical evidence.

Americans are not exceptional, no matter what they think. To the degree they believe they are, they should consider that quality of living here is worse than in much of the rest of the world, despite the audacious earnings per capita we so champion. Try to attract the Best And The Brightest in the world with that!

This is a completely artificially hepped and hyped and parochial issue, disconnected from reality. It is the worst shame the United States is capable of indulging, making everything in its history hypocrisy and a laughingstock. Securing the world against Nazism but allowing random empowered-with-guns nutcases to assault schools and churches, leaving surviving innocents with psychological trauma? What in the world are you defending if that’s what you want to allow?

I’ve offered my recommendations. Gun violence is a disease. It should be treated like any other disease.

Update, 2018-02-23

Dr Heather Sher explains, at The Atlantic, why AR-15 weapons and their like ought not be owned by civilians.

Posted in American Statistical Association, gun violence as public health crisis

The fate of Antarctica

That’s from NASA’s Jet Propulsion Laboratory at Caltech in Pasadena, CA.

The source article is:

A. S. Gardner, G. Moholdt, T. Scambos, M. Fahnestock, S. Ligtenberg, M. van den Broeke, J. Nilsson, “Increased West Antarctic and unchanged East Antarctic ice discharge over the last 7 years”, The Cryosphere, 12, 521–547, 2018.

How lucky do you feel, folks?

Posted in American Association for the Advancement of Science, American Meteorological Association, Anthropocene, being carbon dioxide, Boston, carbon dioxide, climate disruption, Cult of Carbon, flooding, floods, Florida, global warming, sea level rise

The global vegetative biosphere

(Click on figure to see a larger image, and use browser Back Button to return to blog)

Data are derived in part from SeaWiFS; the image is from the NASA Earth Observatory here.

Related links:

Curiously, the SeaWiFS mission site has been mothballed by NASA at the urging of the present political administration in Washington. This is why, in part, I participated in this project in advance of their taking charge.


Gun violence is a disease. It should be treated and managed as a disease.

David Hemenway spoke on this at last year’s annual meeting of the Boston Chapter of the American Statistical Association.

There are resources, as well as here.

Statistics as a field began squarely within the bounds of Epidemiology. Surely, this is a problem worthy of Statistics, Data Science, and Machine Learning.

Update, 2018-02-16, 09:54 ET

The Editors, Bloomberg: “The end of gun massacres begins with you”.

Update, 2018-02-19, 10:58 ET

And it can work.

Posted in epidemic of mass slaughter, ethics, evidence, firearms, guns | 1 Comment

Less evidence for a global warming hiatus, and urging more use of Bayesian model averaging in climate science

(This post has been significantly updated midday 15th February 2018.)

I’ve written about the supposed global warming hiatus of 2001-2014 before:

The current issue of the joint publication Significance from the Royal Statistical Society and the American Statistical Association has a nice paper by Professor Niamh Cahill of University College, Dublin. Professor Cahill is a colleague of Professor Stefan Rahmstorf, Dr Grant Foster (“Tamino”), and Professor Andrew Parnell. (Parnell is also from University College.) I’ll list a related history of their papers in a moment.

It’s good to see climate science and data treated well by statisticians, even though many geophysicists, oceanographers, and atmospheric scientists know something about statistics and data analysis (*). There is this paper by Professor Cahill, and the November 2017 issue of CHANCE was devoted to the subject. This is great, because the relationship between professional statisticians and climate scientists has been rocky at times. Notice some of the comments here and this rant. There is also some distrust of statistical methods from the geophysics side or, at least, from some atmospheric scientists. The great Jule Charney reportedly dismissed an analysis once by dubbing it “just curve fitting”, since the standard in his field was ab initio physics. And squarely within the margins of the present discussion, there is this gentle admonition from Drs Fyfe, Meehl, England, Mann, Santer, Flato, Hawkins, Gillett, Xie, Kosaka, and Swart that

The warming slowdown as a statistically robust phenomenon has also been questioned. Recent studies have assessed whether or not trends during the slowdown are statistically different from trends over some earlier period. These investigations have led to statements such as “further evidence against the notion of a recent warming hiatus” [Karl, T. R. et al. Science 348, 1469–1472 (2015)] or “claims of a hiatus in global warming lack sound scientific basis” [Rajaratnam, B., Romano, J., Tsiang, M. & Diffenbaugh, N. S. Climatic Change 133, 129–140 (2015)]. While these analyses are statistically sound, they benchmark the recent slowdown against a baseline period that includes times with a lower rate of increase in greenhouse forcing [Flato, G. et al. in Climate Change 2013: The Physical Science Basis (eds Stocker, T. F. et al.) Ch. 9 (IPCC, Cambridge Univ. Press, 2013)], as we discuss below. Our goal here is to move beyond purely statistical aspects of the slowdown, and to focus instead on improving process understanding and assessing whether the observed trends are consistent with our expectations based on climate models.

(Emphasis and references added to original text. That’s from Fyfe et al’s paper, “Making sense of the early-2000s warming slowdown”.)

Professor Cahill’s article is an entirely plausible interpretation of the datasets NOAA, GISTEMP, HadCRUT4, and BEST, from a statistician’s perspective. That perspective includes the idea that if there is no information below some level in a signal to explain, assigning an interpretation to that residual provides no support to the interpretation. In other words, if properly extracted warming trends are subtracted from warming data, it is no doubt possible to fit, say, an atmospheric model to the residual. But if the residual contains no information, there are many processes which will fit it as well, even if the processes do not have a physical science basis. It is a standard problem in Bayesian analysis to do inference or modeling using multiple model choices, each having a prior weight. In conventional presentations of Bayesian analysis, priors are typically reserved for parameters. Work has progressed on analysis using mixture models (Stephens, 2000, The Annals of Statistics), that is, where the distribution governing a likelihood function is a linear mixture of several, simpler distributions. Putting priors on M models involves M sets of parameters, \boldsymbol\theta_{j}, and a weight, \alpha_{j}, with one j for each of the models. Here 1 = \sum_{j=1}^{M} \alpha_{j}, and 0 \le \alpha_{j} \le 1. The resulting posterior of the Bayesian analysis would have an equilibrium assignment of mass to each of the \boldsymbol\theta_{j} and their corresponding \alpha_{j}. These calculations are done using Bayesian model averaging (“BMA”), known since 1999. (See also Prof Adrian Raftery’s page on the subject.)
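To make that bookkeeping concrete, here is a minimal statement of the BMA weights in the notation above, assuming each model M_{j} has a proper prior on its parameters:

p(M_{j} \mid \mathbf{y}) = \frac{\alpha_{j} \int p(\mathbf{y} \mid \boldsymbol\theta_{j}, M_{j})\, p(\boldsymbol\theta_{j} \mid M_{j})\, d\boldsymbol\theta_{j}}{\sum_{k=1}^{M} \alpha_{k} \int p(\mathbf{y} \mid \boldsymbol\theta_{k}, M_{k})\, p(\boldsymbol\theta_{k} \mid M_{k})\, d\boldsymbol\theta_{k}}

and the BMA posterior predictive for a new \tilde{y} is the correspondingly weighted combination,

p(\tilde{y} \mid \mathbf{y}) = \sum_{j=1}^{M} p(\tilde{y} \mid M_{j}, \mathbf{y})\, p(M_{j} \mid \mathbf{y}).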

Fragoso and Neto (2015) have provided a survey of relevant methods along with a conceptual classification. It is no surprise some scholars have applied these methods to climate work:

These are a good deal more than “just curve fitting”, and BMA has been available since 2000. Fang and Li have been cited just 4 times (Google Scholar), and Bhat, et al just 16, and Smith, et al have 173. But Raftery, et al has been cited 1029 times. The majority of these citations are specific applications of the techniques to particular regions. The assessment paper by Weigel, Knutti, Liniger, and Appenzeller (“Risks of model weighting in multimodel climate projections”, Journal of Climate, August 2010, 23) is odd in a couple of respects: They cite the Raftery, et al paper above, but they don’t specifically discuss it. They also seem to continue to associate Bayesian methods with subjectivism, and entertain roles for both Frequentist and Bayesian methods. (That makes no sense whatsoever.) It’s not clear if the discussion is restricted to ensembles of climate models, which I suspect, or is a criticism of a wider set of methods. I agree climate ensembles like CMIP5 share components among their members, so are not independent, but, if BMA is used, that oughtn’t matter. BMA is not bootstrapping. Knutti also wrote an odd comment in Climatic Change [2010, 102(3-4), 395-404] where he seemed to downplay a role for combinations of models. Again, I think it’s important not to equivocate. Knutti’s “rain tent” analogy

We intuitively assume that the combined information from multiple sources improves our understanding and therefore our ability to decide. Now having read one newspaper forecast already, would a second and a third one increase your confidence? That seems unlikely, because you know that all newspaper forecasts are based on one of only a few numerical weather prediction models. Now once you have decided on a set of forecasts, and irrespective of whether they agree or not, you will have to synthesize the different pieces of information and decide about the tent for the party. The optimal decision probably involves more than just the most likely prediction. If the damage without the tent is likely to be large, and if putting up the tent is easy, then you might go for the tent in a case of large prediction uncertainty even if the most likely outcome is no rain.

might apply to certain applications of multi-member climate ensembles, but certainly does not apply to uses of BMA. Here, for example, the paper of Cahill might consider alternative models to be those having different numbers and placements of breakpoints in trends. A run of BMA consisting of such alternatives would yield a weighting for each, which could be interpreted as a plausibility score. Similar things are done in Bayesian cluster analysis, where the affinity of a data point for a particular cluster is scored rather than an absolute commitment being made to its membership. Indeed, without such an approach, determining the number of breakpoints in trends is pretty much ad hoc guesswork. BMA does not mean a literal average of outcomes.
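As an illustration of that last point, here is a minimal sketch in R of weighting piecewise-linear trend models having different single-breakpoint placements, using a BIC approximation to each model’s marginal likelihood and equal prior weights. The series below is synthetic and merely stands in for an anomaly record such as GISTEMP; the BIC shortcut is my simplification, not the full Bayesian treatment Cahill and colleagues use.

# Synthetic annual anomaly series standing in for GISTEMP, HadCRUT4, etc.
set.seed(42)
t <- 1970:2017
y <- 0.017 * (t - 1970) + rnorm(length(t), sd = 0.1)

# Candidate models: a single trend, or one breakpoint at year tau
taus <- seq(1980, 2010, by = 5)
fits <- c(list(lm(y ~ t)),
          lapply(taus, function(tau) lm(y ~ t + pmax(t - tau, 0))))
names(fits) <- c("no break", paste0("break at ", taus))

# BIC approximation to the marginal likelihoods, equal prior weights alpha_j
bic <- sapply(fits, BIC)
weights <- exp(-0.5 * (bic - min(bic)))
round(weights / sum(weights), 3)   # approximate posterior model weights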

Professor Cahill, along with Rahmstorf and Foster, has responded to the Fyfe, et al critique in their “Global temperature evolution: recent trends and some pitfalls” [Environmental Research Letters, 12 (2017) 054001]:

We discuss some pitfalls of statistical analysis of global temperatures which have led to incorrect claims of an unexpected or significant warming slowdown.

Cahill’s paper is readable and approachable, as are most papers in Significance.

Other papers about this subject are listed below, most from this team:


(*) Knowledgeable, yes. Dated, also yes. According to my occasional inspections of the publications of the American Meteorological Society, people are using Bayesian methods and means of computation more frequently. That’s good. But they are not using them as much as, say, population biologists and field ecologists do. I also heard a put-down of data science and machine learning methods at a recent symposium, principally complaining about the opacity of models so derived. While surely techniques from these fields have their limitations, it’s not at all clear to me that an ensemble of climate models which have been run 1000 years in order to initialize them is any more transparent than a recurrent neural network. Moreover, the dearth of uses of Bayesian model averaging apart from the original authors and the applications discussed in the text above suggests a certain reticence in pursuing modern techniques.
Posted in American Statistical Association, Andrew Parnell, anomaly detection, Anthropocene, Bayesian, Bayesian model averaging, Berkeley Earth Surface Temperature project, BEST, climate change, David Spiegelhalter, dependent data, Dublin, GISTEMP, global warming, Grant Foster, HadCRUT4, hiatus, Hyper Anthropocene, JAGS, Markov Chain Monte Carlo, Martyn Plummer, Mathematics and Climate Research Network, MCMC, model-free forecasting, Niamh Cahill, Significance, statistics, Stefan Rahmstorf, Tamino | 2 Comments

Undo your part

From Citizens Climate Lobby. Great slogan. And there’s a Boston Metro West chapter, among others. They principally argue for a Carbon tax or Carbon fee-and-dividend program.

There are a couple of things to note, however.


(The basic slide above is due to Dr Glen Peters of CICERO. The embellishments emphasizing the $500/tonne and $1000/tonne hash marks on the ordinate, and the heading “What about a price on Carbon?”, were added by this author.)

And here are two more detailed assessments:

Posted in Carbon Tax, Carbon Worshipers, climate change, climate economics, global warming | 1 Comment

“Carbon emissions and climate: Where do we stand, and what can be done if it all goes wrong?”

On Sunday, 11th February 2018, I presented an Abstract of a 3 hour talk on the subject, “Carbon emissions and climate: Where do we stand, and what can be done if it all goes wrong?” at the Needham Lyceum, hosted at the Unitarian Universalist church, First Parish, Needham, Massachusetts, of which I am a proud member.

This talk, its slides, and a longer 3 hour exposition of the talk are now available at the First Parish site. To be clear, the links there include a recording of the talk at the Lyceum which, because of time limits, ran but 45 minutes and visited only a subset of the slides, also linked from that page. I also provided a longer recording which addresses each and every slide; that longer version is available at the same page.

The basic purpose of the talk was to set up a discussion of “climate hacking”, or geoengineering, which seems increasingly necessary, citing, for instance, the lectures and presentations of the great Wally Broecker at BU. I have linked presentations by Broecker and his colleague, Klaus Lackner, here and here before.

I am happy to answer questions about the talk, either via the email address listed on the talk at the first slide, or at this blog, in its comments.

Posted in Anthropocene, being carbon dioxide, Carbon Cycle, carbon dioxide, carbon dioxide capture, carbon dioxide sequestration, Carbon Tax, civilization, clear air capture of carbon dioxide, climate, climate change, climate disruption, COP21, Cult of Carbon, differential equations, dynamical systems, ecology, emissions, environment, exponential growth, fossil fuel divestment, fossil fuel infrastructure, fossil fuels, geoengineering, geophysics, Glen Peters, Global Carbon Project, global warming, greenhouse gases, Humans have a lot to answer for, Hyper Anthropocene, investments, James Hansen, Kerry Emanuel, liberal climate deniers, Mark Carney, Michael Bloomberg, Minsky moment, mitigation, nonlinear, nonlinear systems, oceanography, phytoplankton, population biology, population dynamics, precipitation, Principles of Planetary Climate, quantitative biology, quantitative ecology, radiative forcing, rationality, Ray Pierrehumbert, risk, sea level rise, sociology, stranded assets, supply chains, sustainability, T'kun Olam, the right to be and act stupid, the right to know, the tragedy of our present civilization, the value of financial assets, thermohaline circulation, tragedy of the horizon, unreason, UU, UU Needham, Wally Broecker, zero carbon

on nonlinear dynamics of hordes of people

I spent a bit of last week at a symposium honoring the work of Charney and Lorenz in fluid dynamics. I am no serious student of fluid dynamics. I have a friend, Klaus, an engineer, who is, and makes a living at it. I admire the people who are, principally people I read and try to understand who publish technical work on atmospheric and climate dynamics, like Professor Ray Pierrehumbert, and regarding ocean dynamics, like Dr Emily Shuckburgh, or Professor Lenny Smith.

And, today, we had a dramatic drop in the markets of the United States, notably the DJIA, which has the world abuzz with speculation. The Economist, ever the sophisticate, ponders:

The swoon set tongues to wagging, about its cause and likely effect. There can be no knowing about the former. Markets may have worried that rising wages would crimp profits or trigger a faster pace of growth-squelching interest-rate increases, but a butterfly flapping its wings in Indonesia might just as well be to blame. There is little more certainty regarding the latter. Commentators have been quick to pull out the cliches: that “the stock market is not the economy”, and that “stocks have predicted nine out of the past five recessions”. These points have merit. A big move in stock prices can signify some change in economic fundamentals, but it can just as easily signify nothing at all. For those not invested in the market, or whose investments consist mostly of retirement savings plunked into index funds, Monday’s crash matters about as much as Sunday’s Super Bowl result.


Ah, the “butterfly flapping its wings”, a Lorenz-inspired metaphor. Never mind that the basic ideas of Lorenz in fluid dynamics were anticipated by many in other fields, much earlier, such as Poincaré. I don’t mean to diss Lorenz: many fields need people to remind them of things they never noticed or have forgotten: there’s Wilson in Geology with plate tectonics, and Efron in my own field of Statistics.

(Predator-prey systems of this Lotka-Volterra kind are coupled nonlinear differential equations which are capable of surprising their students.)
The point is that these are expressions of coupled systems of nonlinear differential equations. It is entirely within the nature and capacity of solutions of such systems to surprise. And there is no causality in their expression (*). Indeed, in such a world, the very notion of causality is laughably quaint. (See J. D. Norton for more.)
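For readers who want to see that capacity to surprise for themselves, here is a minimal Lotka-Volterra sketch in R using the deSolve package; the parameter values are arbitrary, chosen only for illustration.

library(deSolve)   # install.packages("deSolve") if needed

# Classic predator-prey system: dx/dt = a*x - b*x*y, dy/dt = g*x*y - d*y
lotka_volterra <- function(time, state, parms) {
  with(as.list(c(state, parms)), {
    dx <- a * x - b * x * y   # prey
    dy <- g * x * y - d * y   # predator
    list(c(dx, dy))
  })
}

out <- ode(y = c(x = 10, y = 2),
           times = seq(0, 100, by = 0.1),
           func = lotka_volterra,
           parms = c(a = 1.1, b = 0.4, g = 0.1, d = 0.4))
matplot(out[, "time"], out[, c("x", "y")], type = "l", lty = 1,
        xlab = "time", ylab = "population")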

So, people might struggle and strive to explain why the DJIA dropped precipitously at 1500 ET on 5 February 2018. Sure, the tie to 1500 ET is curious, but, other than that, there are unbounded numbers of reasons why it might have, and why we will never and could never understand the impetus. And it necessarily follows that, if the initiation cannot be predicted or understood, the soothing words of analysts and sage traders, that there is nothing to be seen here and so nothing requiring action, cannot be taken as seriously as they might want them to be. Given the circumstances and the mechanism, who really knows? The nonlinear equations are what they are. No one has a faithful simulation of them in their pocket.


(*) I remind the audience that I am no financial advisor or counsellor, and any actions taken — or not taken — as a result of my writings here are entirely the responsibility of the reader. Thanks.

Posted in Anthropocene, bifurcations, biology, Carl Safina, causation, complex systems, dynamic generalized linear models, dynamic linear models, dynamical systems, ecological services, ecology, Emily Shuckburgh, finance, Floris Takens, fluid dynamics, fluid eddies, games of chance, Hyper Anthropocene, investments, Lenny Smith, Lorenz, nonlinear, numerical algorithms, numerical analysis, politics, population biology, population dynamics, prediction markets, Principles of Planetary Climate, public transport, Ray Pierrehumbert, risk, sampling networks, sustainability, Timothy Lenton, Yale University Statistics Department, zero carbon, ``The tide is risin'/And so are we'' | 1 Comment

neat stuff: new legs for de Broglie-Bohm pilot wave theory

See more at Professor John Bush’s site:

See also work by my son, Jeff, for his doctoral dissertation, not regarding de Broglie-Bohm, but on corrals and scattering.

Posted in de Broglie-Bohm pilot wave theory, John Bush, quantum mechanics | 1 Comment

Quote from Max Planck

(Hat tip to Professor Richard Kleeman of the Courant Institute for Mathematical Sciences.)

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.”

   — Max Planck



For more information, see the excellent text, highly recommended for students of Climate Science, T. S. Kuhn, Black-Body Theory and the Quantum Discontinuity, 1894-1912, University of Chicago Press, 1978.

Also recommended for the same audience: D. Archer, R. Pierrehumbert (eds.), The Warming Papers: The Scientific Foundation for the Climate Change Forecast, Wiley-Blackwell, 2011.

Posted in American Association for the Advancement of Science, Anthropocene, climate change, global warming, physics

Senn’s `… never having to say you are certain’ guest post from Mayo’s blog

via S. Senn: Being a statistician means never having to say you are certain (Guest Post)

See also:

Posted in abstraction, American Association for the Advancement of Science, American Statistical Association, cancer research, data science, ecology, experimental design, generalized linear mixed models, generalized linear models, Mathematics and Climate Research Network, medicine, sampling, statistics, the right to know

[reblog] David Suzuki: Consumer society no longer serves our needs

From David Suzuki, whom I’ve cited here more and more often; this comes from his blog post, Consumer society no longer serves our needs, of 11th January 2018.

An excerpt:

But where is the indication of our real status — Earthlings — animals whose very survival and well-being depend on the state of our home, planet Earth? Do we think we can survive without the other animals and plants that share the biosphere? And does our health not reflect the condition of air, water and soil that sustain all life? It’s as if they matter only in terms of how much it will cost to maintain or protect them.

Nature, increasingly under pressure from the need for constant economic growth, is often used to spread the consumption message. Nature has long been exploited in commercials — the lean movement of lions or tigers in car ads, the cuteness of parrots or mice, the strength of crocodiles, etc. But now animals are portrayed to actively recruit consumers. I’m especially nauseated by the shot of a penguin offering a stone to a potential mate being denigrated by another penguin offering a fancy diamond necklace.

How can we have serious discussions about the ecological costs and limits to growth or the need to degrow economies when consumption is seen as the very reason the economy and society exist?

This is a matter related to a point I’m planning to close with at a Needham Lyceum talk I’m giving on 11th February 2018 (0915 EST) at First Parish Needham, Unitarian Universalist (*). That is, to the degree to which economic systems, a human invention, or political systems, also humanly invented, cannot solve a dire situation we find ourselves in, these systems will be destroyed and surpassed, hopefully through some kind of peaceful disruption. By cannot solve I mean something specific: offering a solution to a dire problem which is infeasible or horrifically expensive is no solution.

What’s notable about the responses of both Presidents Obama and Trump to the climate crisis is that both asserted solutions to it cannot involve significant negative impacts on the United States economy. I would suggest that, to the degree this is the best the United States Constitution offers, despite its remarkable construction and past triumphs, it is demonstrating that this problem is beyond its capability to solve. However, I believe economics and the Constitution are separable, even if they do not seem so today, and I hope that if that separation is needed to fix climate, it will happen. If they are not separable, I believe the problem will still be fixed, but with the loss of both, either in consequence or along the way.

Still, we could wake up:

(See Dream Catcher.)


* “Carbon Emissions and Climate: Where do we stand now, and what can be done if it all goes wrong?”
(in preparation).

Posted in Adam Smith, adaptation, affordable mass goods, Anthropocene, climate economics, climate justice, consumption, David Suzuki, ecological services, ecology, Ecology Action, economics, ethics, evidence, science, the right to be and act stupid, the right to know, the value of financial assets, tragedy of the horizon

(thought of the day)

One accurate measurement is worth a thousand expert opinions.
Grace Murray Hopper

Hat tip to Pat’s blog.

Posted in statistics, Uncategorized

wind+storage 2.1 ¢/kWh, solar+storage 3.6 ¢/kWh

Update, 2018-01-16

Vox has a widely acclaimed update to this story.

(rubbing hands gleefully)

Utility scale bids at Xcel Energy had median prices of 2.1 ¢/kWh for wind-with-storage, and 3.6 ¢/kWh for solar-with-storage.

Hat tip to Utility Dive.

In U.S. Energy Information Administration projections for 2020, electricity from natural gas advanced combined cycle comes in at 6.9 ¢/kWh, and a spot price from CenterPoint Energy for commercial applications in Minnesota has it at 6.02 ¢/kWh.

To paraphrase the late Supreme Court Justice Antonin Scalia, fossil fuels for generating electricity are dead, dead, DEAD!

And I delight in contemplating the days arriving soon when natural gas, oil, and coal, their pipelines and their shipping, are stranded assets. I’ve written about this often.

Posted in American Petroleum Institute, American Solar Energy Society, Amory Lovins, Bloomberg New Energy Finance, BNEF, bridge to somewhere, Buckminster Fuller, Cape Wind, Carbon Worshipers, clean disruption, CleanTechnica, climate economics, corporate litigation on damage from fossil fuel emissions, Cult of Carbon, decentralized electric power generation, decentralized energy, destructive economic development, distributed generation, economics, electrical energy storage, electricity, electricity markets, energy storage, energy utilities, FERC, Green Tech Media, ILSR, investment in wind and solar energy, Joseph Schumpeter, leaving fossil fuels in the ground, local generation, local self reliance, marginal energy sources, Massachusetts Clean Energy Center, microgrids, natural gas, petroleum, pipelines, public utility commissions, PUCs, rate of return regulation, regulatory capture, solar democracy, solar domination, solar energy, solar power, Spaceship Earth, stranded assets, sustainability, the energy of the people, the green century, the value of financial assets, Tony Seba, tragedy of the horizon, wind energy, wind power, zero carbon

(repost) How the recent New England cold snap and nor’easter did not cause natural gas prices to spike

I wrote a piece a bit back about the volatility in natural gas prices. Those prices were seized upon by proponents of natural gas pipelines, from Gordon van Welie of ISO-NE, to various representatives of petroleum and power generators’ councils, to that recurring denizen of the Commonwealth Magazine comments, NortheasternEE, to argue, once again, that New England (read Massachusetts) needs new natural gas pipelines, because cold pinches such as the one most recently experienced cause huge financial harm to residents by spiking the price of electricity, and that only bringing in additional explosive methane by new pipelines could offset this. They, and even the editorial staff at Commonwealth, claimed the generators of electricity had to switch to oil because of natural gas shortages.

Well, none of that was true, and it turned out not to be. It was pretty self-evident that, at the least, they could not have known, since the fuel mix used for generation is not something which is known at high aggregations of geography until a couple of days afterwards. And, as it turns out, little or no additional oil was needed: even though Pilgrim nuclear went offline, renewables picked up the slack, driven there probably by the relatively high winds of the nor’easter. Indeed, the Conservation Law Foundation (CLF) reports that, for a time, New England was getting as much electricity from renewables as it did from natural gas generation.

The details are, as I mentioned, at the blog post which has been updated a couple of times.

But I also want to take a moment to underscore how certain online media outlets are controlled by ensconced fossil fuel interests, like natural gas, the pipeline companies, and big utilities like Eversource, who are using heavy-handed legal threats to quash reports they do not want the public to know about. In particular, Commonwealth Magazine appears to be a favorite mouthpiece for opponents of decentralized renewables, ranging from Associated Industries of Massachusetts to the New England Petroleum Council to Eversource. And, sure, they have run op-eds by individuals in favor of them from time to time, I’d say, to maintain the illusion of “balance”. But when their own editorial staff misrepresents matters of electrical generation as in the above, and does not get the story straight on what the Marks, Mason, Mohlin, and Zaragoza-Watkins conference paper says, taking the pipeline proponent line and misrepresenting it as a product of the Environmental Defense Fund (EDF), then there’s something wrong with that source. I will not read or follow Commonwealth Magazine any longer. They even deleted two comments I made on these matters after each had been posted for a half hour.

While I have made known my view of the recent DPU demand charge decision, and have listened to and attended presentations by officers of Governor Charlie Baker’s administration regarding energy policy and climate adaptation, in fact there is little concrete evidence that what this administration is doing is anything but fig leaves and tokenism. Beginning with Governor Patrick and continuing under Governor Baker, the Massachusetts Department of Environmental Protection has seen its staff and budget repeatedly cut. The funding of the Municipal Vulnerability Preparedness program is pathetically small, and Governor Baker shows no willingness whatsoever to increase taxes to pay for any such plans, programs, or policies. Speaker of the House DeLeo probably contributes to that reluctance as well.

So, whatever happens to Massachusetts and to Boston, in terms of flooding and the like, can be put on Governor Baker’s head, and on Speaker DeLeo. They have heard about the urgency for over a decade, even if Baker was not Governor at the time. DeLeo has been Speaker since the time of the dinosaurs.

Posted in Uncategorized

2017 Arctic Report Card

From NOAA.

2017 Arctic Report Card: Summer temperatures are rising rapidly in most Arctic seas, by Tom Di Liberto.

2017 Arctic Report Card: Extreme fall warmth drove near-record annual temperatures, by Rebecca Lindsey.

Posted in American Meteorological Association, AMETSOC, Anthropocene, Arctic, climate change, climate disruption, global warming, Hyper Anthropocene, NOAA

a dystopian Commonwealth

I repeat a link to a post I made in May 2016 regarding how it seemed Governor Baker and Massachusetts House Speaker DeLeo were bent on a dystopian Massachusetts. Both then and now, judging by the actions of their charges, they fail to really understand the importance of a clean energy future for the Massachusetts economy.

The present circumstances are the decision on Friday, 5 January 2018, to grant Eversource/NSTAR its request to essentially bust-up net metering in favor of a peak demand charge, and to permit it to eliminate time-of-use tariffs. The Acadia Center has more to say about this specific action. Not only does this have implications for decentralized energy adoption in Massachusetts, it also impedes important steps along the path of decarbonization, such as moving to electric air source heat pumps for heating and cooling, and adoption of electric vehicles. Indeed, if I were cynical, I’d say the next item on the Baker-DeLeo joint agenda is to fail to renew the Global Warming Solutions Act (GWSA) in 2020, thus relieving them of the responsibility of complying.

And why not? When I returned to work in Cambridge after the New Year, I was struck by how many things simply do not work in Massachusetts, most notably what is laughingly called our public transportation system. But

  • streetlights were out, being worked on by a crew of a half dozen or more Eversource employees in a trench,
  • escalators which were not working before the holiday break are still not working,
  • the Town of Falmouth is being required to tear down two wind turbines it erected in a show of support for renewable energy and to earn revenue,
  • a fire alarm at the Route 128 Amtrak-MBTA station was still signaling, on my return, the same alarm I saw when I went in to Cambridge,
  • and the event of Aquarium Station on the Blue Line being flooded during the recent nor’easter is written up in the Boston Globe and Commonwealth Magazine as simply a repeat of a problem which had occurred once before. Nothing to see here. Move along home.


(Plunge, an art exhibit by Michael Pinsky, marks a sea level meters higher than that of the 20th century on famous London landmarks.)

And I noted how NOAA had reported that weather and other natural disasters cost the public in the United States a quarter of a trillion dollars in 2017, ignoring, for the moment, the cost to private businesses and individuals, both directly and through their insurance. See the details.

Posted in the tragedy of our present civilization, tragedy of the horizon, unreason, utility company death spiral | 1 Comment

FERC: No multi-billion dollar bailout for coal and nuclear generating facilities

Excerpts from statements by Richard Glick, FERC commissioner, are given below. The Microgrid Knowledge (“MGK”) news article summarizes the context by writing:

The commission rejected the energy secretary’s assertion that retirement of coal and nuclear plants threatens electric resilience. Instead FERC plans to look at broader challenges that may influence the reliable flow of energy in competitive wholesale markets, among them severe weather, physical and cyber attacks, accidents and fuel supply disruptions … In rejecting the coal and nuclear subsidies, FERC doubled down on its commitment to competitive markets. Commissioner Cheryl LaFleur called the proposed tariff for coal and nuclear “far-reaching out-of-market approach” that would be “highly damaging to the ability of the market to meet customer needs.”

FERC opened a new docket, No. AD18-7-000, in its response.

Richard Glick:

I also believe that it is important to consider the advantages that newer technologies, such as distributed energy resources, energy storage, and microgrids, may offer in addressing resilience challenges to the bulk power system.

MGK continues:

He added that most power outages occur because of failures within the transmission and distribution system, and not because of a lack of power supply.

Mr Glick:

There is no evidence in the record to suggest that temporarily delaying the retirement of uncompetitive coal and nuclear generators would meaningfully improve the resilience of the grid. Rather, the record demonstrates that, if a threat to grid resilience exists, the threat lies mostly with the transmission and distribution systems, where virtually all significant disruptions occur. It is, after all, those systems that have faced the most significant challenges during extreme weather events.

(I have added emphasis here.)

FERC Commissioner Cheryl LaFleur also responded:

In effect, it sought to freeze yesterday’s resources in place indefinitely, rather than adapting resilience to the resources that the market is selecting today or toward which it is trending in the future.

Using the context provided by the MGK article, again:

Instead, FERC should guide grid operators to pursue resiliency within a system “that is likely to be cleaner, more dynamic, in some instances more distributed, and deployed by an efficient market for the benefit of customers,” LaFleur said.

Then, from and regarding FERC Commissioner Neil Chatterjee:

Commissioner Neil Chatterjee also voted to reject Perry’s proposal and open the new docket — but with some reservations.

Chatterjee expressed concern about the “staggering” change the grid is undergoing, noting that between 2014 and 2015 alone, the U.S. added about 15,800 MW of natural gas, 13,000 MW of wind, 6,200 MW of utility scale solar photovoltaic, and 3,600 MW of distributed solar. Meanwhile, nearly 42,000 MW of synchronous generating capacity (coal, nuclear, and natural gas) retired between 2011 and 2014. An additional seven nuclear units, representing 10,500 MW, are set to retire by 2025.

A separate article, from Utility Dive, reports how new natural gas, not renewables, is the culprit in beating down demand for nuclear generation. This is based on a recent MIT study. A previous study, by the Department of Energy’s Argonne National Laboratory and Lawrence Berkeley National Laboratory, arrived at the same conclusion.

Posted in American Association for the Advancement of Science, American Solar Energy Society, Amory Lovins, Berkeley, Bloomberg New Energy Finance, BNEF, CleanTechnica, climate economics, decentralized electric power generation, distributed generation, electricity markets, energy utilities, FERC, green tech, grid defection, ILSR, investment in wind and solar energy, ISO-NE, John Farrell, Joseph Schumpeter, microgrids, rate of return regulation, stranded assets, sustainability, the energy of the people, the value of financial assets, Tony Seba, wind energy, wind power

Michael Bloomberg speaks on the Sustainability Accounting Standards Board

Posted in Amory Lovins, Anthropocene, Bloomberg, Bloomberg New Energy Finance, BNEF, Michael Bloomberg, Michael Osborne, planning, resiliency, Richard Branson, stranded assets, supply chains, sustainability, Tony Seba | Tagged

1992 World Scientists’ Warning to Humanity

Professor David Suzuki, as ever, reminds us that urgent warnings about our `collision course with Nature' are nothing new.

This one came in 1992:

Introduction

Human beings and the natural world are on a collision course. Human activities inflict harsh and often irreversible damage on the environment and on critical resources. If not checked, many of our current practices put at serious risk the future that we wish for human society and the plant and animal kingdoms, and may so alter the living world that it will be unable to sustain life in the manner that we know. Fundamental changes are urgent if we are to avoid the collision our present course will bring about.

Read more here.

The above statement is also available as a PDF.

Professor Suzuki himself has deep insights regarding how to demonstrate that, inevitably, we must be having a huge impact on Earth’s ecosystems.

Posted in Anthropocene, David Suzuki, Hyper Anthropocene, scholarship, science | 1 Comment

reality of natural gas prices: volatile, undependable, and contrary to social interest

Updated, 11th January 2018

There’s been a lot written about natural gas, New England, and supposed price spikes due to constraints on pipeline capacity. I’ve had my turn a couple of times here (and here), as a matter of fact (to cite a couple).

That’s why it is refreshing to put prices of natural gas in perspective. Bloomberg did so yesterday.

There are a few things to note in these figures. The first is the striking lack of visual correlation between natural gas prices and heating degree days. For surely, if the claims of advocates of increased pipeline capacity, that pipeline constraints in deep winter contribute to high prices, were correct, it would be reasonable to expect that as heating requirements increase, natural gas prices increase. In fact, however, natural gas prices seem to wander all over the place, and only occasionally have spikes which coincide with deep winter requirements.

Second, despite the “sky is falling” talk of recent pipeline proponents, including ISO-NE, 2018 is not really much of a price spike, certainly not compared with, say, 2014:

Third, if anything, the actual trace of natural gas prices suggests nothing anyone can do will affect natural gas prices. They will be what they will be, and additional pipeline capacity or anything else can’t impose a lid on them, as plausible as the story-and-song from pipeline proponents may be.

Indeed, if prices of energy, particularly electrical energy, are concerns, then the sensible way of moving forward is to make an even bigger investment, as a Commonwealth and as a region, in wind and solar energy, with energy storage added. Wind doesn’t really need the storage, but it then requires less thinking on ISO-NE’s part to manage the grid, since they seem to be less capable of doing it than, say, Belgium is. Only wind and solar can deliver constant-per-annum prices for 30 year ranges. In fact, further, if the residents of the Commonwealth are so concerned about per kilowatt hour prices of electricity, they should get over their parochial opposition to land-based wind turbines and especially opposition to community solar farms in their neighborhoods. The former is the cheapest way to generate electricity in the world, with offshore wind being much more expensive. (Why is anyone surprised about that?) Community solar is quiet, unobtrusive, and, backed by storage, can soon offer comparable energy prices.

Postscript

Interesting postscript to these series … Entergy’s Pilgrim Nuclear plant went offline during the recent storm. That resulted in the following fuel mix for electricity, using data supplied by ISO-NE:

Note it was hydropower, not natural gas or oil, which made up for the shortfall.

Update Saturday, 6th January 2018


A model energy policy for any state in New England, proposed here for Massachusetts. I’ve done a lot of studying of energy policy in the past year, and this gleans the absolute best from the likes of Dr Amory Lovins, Professor Tony Seba (see also), the Institute for Local Self-Reliance, and Sir Richard Branson.

Update, 2018-01-11

The Conservation Law Foundation (CLF) provides a detailed recap here of the recent cold spell in New England, its effects upon the electricity grid, the propaganda put out by gas and pipeline company associations as well as utilities, and what the real story was. An excerpt:

Clean Energy and Hydropower Kicked In to Fill the Gas Gap
One of the reasons the electricity system could easily handle this significant drop in gas-fired power was the performance of renewable energy and hydropower. During the interminable cold, these clean resources represented as much as 20 percent of our power. That’s right: clean energy was matching our gas-fired power.

Among those renewables, wind was leading the way while individual solar units were powering homes and business, keeping down demand for electricity from big power plants. So, if clean renewables flourish during these period of cold, why would we ever invest in more of the polluting gas that causes price volatility rather than clean, price stable, renewable energy?

The Grid Relied Too Much on Oil-fired Power, But It Needn’t Have
Much has been made of the fact that dirty oil was the fuel of choice for power generators when temperatures dipped to their lowest. But this spike in oil use is a direct result of our over-reliance on gas. It should concern us, but it’s important that we put it in perspective.

First, even with increased oil usage over the past couple of weeks, oil-fired power will constitute a tiny fraction of our overall power generation for the year. That means climate impacts from its overuse during the cold snap is minimal. Second, oil dominated in part due to an ISO-New England program that favors oil as a substitute for natural gas over other, cleaner alternatives. Generators switch to oil because ISO-New England gives them an incentive to do so.

Fortunately, our region is in the process of major investments in new clean energy that will be cheaper (and cleaner) than oil. Our ongoing clean energy investments will displace oil and gas polluters during future cold spells.

Electricity and Gas Prices Were High in Much of the U.S.
The extremely high price of gas resulted in a spike in electricity prices, a concern for each and every one of us when trying to pay our bills, and especially for low-income and vulnerable communities. But contrary to the gas industry’s fear-mongering, expanded gas pipelines won’t help. Indeed, gas prices were high over the last couple of weeks virtually countrywide.

Pennsylvania, New Jersey, Maryland, and a few other states that sit on top of large gas supplies and have built out their pipelines also saw their electricity prices spike during the cold weather – at times outpacing the spikes in New England. We saw the same phenomenon in 2014. So, even if we were willing to pay billions for new pipelines that will sit idle 95 percent of the time – and could stomach the high costs to our environment and climate – it’s clear that such a buildout is not the solution to high winter electricity prices. We can only achieve that by cutting our reliance on gas in favor of clean, price stable, renewable power sources.

What’s more, New England has been seeing its wholesale electricity prices decline steadily for three years, and the temporary increases associated with this recent arctic air are not likely to derail that trend.

New Pipelines Just Don’t Make Economic or Environmental Sense
If big new gas pipelines are the great solution claimed by Big Gas, then why don’t they invest their own money into the projects? Instead, they’d rather we the consumers take all the risk while they and utility companies take all the profits. We know a bad deal when we see one, though, and studies from the Massachusetts Attorney General’s Office and the Maine Public Utilities Commission, as well as from a few of the top energy consulting firms in the U.S., demonstrate that new pipelines will cost us much more than any supposed benefits.

Rather than saving New Englanders money as Big Gas likes to promise, new pipelines would end up costing those who pay a monthly electric bill as much as $277 million over the lifetime of the pipeline. This, along with serious legal issues, is why state courts and utility commissions have rejected pipeline proposals like Kinder Morgan’s Northeast Energy Direct pipeline and Spectra’s Access Northeast proposal.

Update, 2018-01-13

Even more about this discussion at CleanTechnica.

Posted in Amory Lovins, anomaly detection, Anthropocene, Bloomberg New Energy Finance, clean disruption, Cult of Carbon, decentralized electric power generation, distributed generation, electricity markets, evidence, explosive methane, financial series, fossil fuel infrastructure, fossil fuels, gas pipeline leaks, greenhouse gases, Hyper Anthropocene, investment in wind and solar energy, ISO-NE, leaving fossil fuels in the ground, local generation, local self reliance, natural gas, pipelines, public utility commissions, rate of return regulation, regulatory capture, reworking infrastructure, rights of the inhabitants of the Commonwealth, risk, stranded assets, supply chains, the stack of lies, the tragedy of our present civilization, Tony Seba, utility company death spiral, zero carbon

perceptions of likelihood

That’s from this GitHub repository, maintained by Zoni Nation, having this description. The original data are from a study by Sherman Kent at the U.S. CIA, and are quoted in at least one outside source discussing the problem.

In addition to the base rate fallacy (see an investment-related definition, too), which is just ignorance of Bayes rule, the other thing that’s interesting is the subjectivity of the categories above, particularly if they are thought of in the context of assessing risk.
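As a reminder of how steep the base rate fallacy can be, here is a minimal worked example with made-up numbers: a test with 99% sensitivity and 95% specificity, applied where the condition has a 1% base rate, gives by Bayes rule

\text{Pr}(\text{condition} \mid \text{positive}) = \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.05 \times 0.99} = \frac{0.0099}{0.0594} \approx 0.17

so a result that sounds “almost certain” to a casual reader corresponds to a posterior probability of only about 17%.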

Posted in anti-intellectualism, Bayes, Bayesian, economics, fear uncertainty and doubt, games of chance, reason, risk, secularism, statistics, the right to be and act stupid, the right to know, the tragedy of our present civilization, unreason | Tagged

Early 2018 Nor’easter

via Early 2018 Nor’easter

The following are from GFS/NCEP/U.S. National Weather Service model runs:



Bombogenesis indeed!

The following are from the Meteocentre UQAM in Montreal, PQ, Canada, running the European Weather Model, as well as others.

Posted in American Meteorological Association, atmosphere, National Center for Atmospheric Research, NOAA | 1 Comment

Klaus Lackner: brilliant mind with a good idea

Wally Broecker’s “hat tip” to Lackner’s work:

Posted in Anthropocene, carbon dioxide, clean disruption, clear air capture of carbon dioxide, climate disruption, climate economics, climate justice, economics, emissions, evidence, fossil fuel divestment, global warming, greenhouse gases, Hyper Anthropocene, investment in wind and solar energy, investments, klaus lackner, leaving fossil fuels in the ground, Spaceship Earth, zero carbon | 1 Comment

Cloud Streets

From NASA’s Earth Observatory and MODIS, here are cloud streets due to double inversion layers, warm atop cold atop warm:

(Click image for a larger figure, and use your browser Back Button to return to blog.)

Dr Marshall Shepherd at Forbes puts the present cold snap in perspective. Dr Shepherd was previously President of the American Meteorological Society.


By the way, the AMS has a new publication available, Explaining Extreme Events of 2016 from a Climate Perspective. They did one like this for 2014, and this is the 2016 edition.

Dr Jennifer Francis once talked about how what’s going on in the Arctic could be behind Boston’s deep freeze, and why the Arctic matters.

And this is Professor Jim White talking about abrupt climate change and its relation to the Arctic:

Posted in American Meteorological Association, AMETSOC, Arctic, atmosphere, attribution, climate, Jennifer Francis, Marshall Shepherd

What are the odds of net zero?

What’s the Question?

A question was posed by a colleague a couple of months ago: What are the odds of a stock closing at the same price it opened? I found the question interesting, because, at first, it appeared to be a one-dimensional version of another problem I looked at here, which was in two dimensions. Well, I have produced an estimate, and am reporting results here. My first impressions of the problem were wrong. It actually is a two dimensional problem, not a one dimensional one. And it is not the same as the earlier problem, because although one of its dimensions is discrete, the other, time, is (essentially) continuous. I’ll explain.

The Data

I obtained intraday trades, or “Time & Sales” records (as they are called), for a single stock on the NASDAQ for a 71 day period. The stock was the one my colleague asked about. This series consisted of 679,390 original records. After discarding corrections (about 1.1%), there were 679,314 records remaining. I also discarded unnecessary columns, retaining date, time, and price. The times are recorded in U.S. Central Standard Time and are available to millisecond resolution, although the source of the data assumes no responsibility for time accuracy, saying this portion of the data is what they get from the NASDAQ and they copy it. Prices are reported to cents resolution.

The data were grouped into days and, so, there was a time series of trades within each day. The price of the first trade of the day was subtracted from the prices of the remaining trades and, so, the trades of each day are referenced with respect to the opening price. A net zero condition, such as is indicated by the title of this blog post, is therefore when the transformed final price of a day is zero or, in actuality, within a penny of zero. The objective of the study is to estimate the odds of that happening.
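Here is a minimal sketch of that grouping and referencing step in R. The data frame trades and its columns day, time, and price are my assumed names for the cleaned records, not necessarily those used in the repository.

# trades: a data.frame of the cleaned Time & Sales records,
# with (assumed) columns day, time, and price
by_day <- split(trades, trades$day)
deltas <- lapply(by_day, function(d) {
  d <- d[order(d$time), ]
  d$price - d$price[1]        # reference each trade to the day's opening price
})

# A "net zero" day: the day's final trade within a penny of its open
net_zero <- vapply(deltas, function(z) abs(z[length(z)]) <= 0.01, logical(1))
mean(net_zero)                # naive empirical frequency; zero for this sample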

The Data and Code are Provided for Examination

I am providing data and code supporting this study. They are available in a Git repository on Atlassian Bitbucket. In the provided data, I have omitted the ticker symbol, the base prices, the record flags, and the date portion of the timestamps, because:

  • There’s no reason to mention the publicly traded company involved.
  • I am not a financial advisor and I don’t want to run afoul of rules about seeming to give advice when I’m not.
  • I want to be able to provide the data so readers and students can reproduce what I did, but I don’t want to violate the Terms and Conditions of the site from which I purchased the data.

Also, in the dataset provided, the dates have been replaced with a trading day number. This is all done to preserve the anonymity of the stock; for the study, all that’s needed is some label to group records together. Also, the data provided are the transformed data, with the opening trade price subtracted. Again, by removing the magnitude of the price, I’m attempting to protect the stock’s anonymity. I have also provided a copy of the code which was used to perform the transformation.

The size of the dataset, 71 days, was arbitrary. On the one hand, it can be thought of as a cost constraint: more data cost more. If I’m having fun answering a question like this and writing it up, it might as well involve some of the constraints typical of studying more serious questions. On the other hand, 71 days of intraday trading data isn’t negligible.

Approach

It’s possible to apply analytical models to the problem, and it’s almost unavoidable to use some theory for reasons that’ll be explained in the material to come. I also understand that this problem, with suitable assumptions, is a question addressed in standard financial trading studies, such as the result that the variations in stock prices intraday are t-distributed. Ultimately, and apparently, for large sets of stocks, daily fluctuations depend upon order flows. See J. C. Hull, Options, Futures, and Other Derivatives, 5th edition, Prentice-Hall, 2003, for more of this kind of theory.

For such a specific question, though, with such a limited dataset, I tend to avoid using models which depend on assumptions about the process at hand, or which rely upon asymptotics. I also try to make as few distributional assumptions as I can, letting the data and its interaction with the question at hand speak for themselves. I would have liked to use a t-distribution for the variations in the model, but neither of the two Kalman filtering R packages I typically use, dlm and KFAS, offers such an option. It was important to use a package which could estimate time-varying covariances on its own, since these signals are not stationary.

That said, it is nevertheless true that no purely empirical approach will give a good answer with this dataset. The closest any close gets to the opening price in this 71 day dataset is a penny, and there are only two days when even that is true, so a purely empirical estimate of the odds would be zero. That not only violates Cromwell’s Rule, but it is also wrong, because on a day after this dataset was compiled this stock did close at its opening. In fact, that event prompted the question.

The idea I chose was to model the movement of the stock from its open on any day to its close as a random walk, one that I’ve described before:

\mathring{s}_{t} = v_{t} + \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t})

v_{t+1} = v_{t} + \mathcal{D}_{2}(0, \sigma^{2}_{v})

Here \mathring{s}_{t} is the reported stock price at time t, an offset from the opening price of the day. The model allows for a noise process on the observation, adding \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}), which can be thought of as a distortion of the stock’s true, latent offset-from-opening-price value, v_{t}, including rounding of that price to a penny. So v_{t} undergoes steps drawn from the distribution \mathcal{D}_{2}(0, \sigma^{2}_{v}) and these form the basis for \mathring{s}_{t}, after being “smudged” by \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}).

The idea of using 71 sets of data is to characterize both \mathcal{D}_{2}(0, \sigma^{2}_{v}) and \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}), their parameters, and their credible intervals. Once those are in hand, they can be used in a simulation of a large number of synthetic days. Given a big enough such population, it’s possible to count the number of times \mathring{s}_{t_{\text{final}}} = 0 or, more precisely, the number of times 0.01 > |\mathring{s}_{t_{\text{final}}}|.

On the Form of \mathcal{D}_{2}(0, \sigma^{2}_{v})

For the purposes here, \mathcal{D}_{2}(0, \sigma^{2}_{v}) \sim \mathcal{N}(0, \sigma^{2}_{v}). I’m not happy about that Gaussian, but it’s a start.
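To make this concrete, here is a minimal R sketch which simulates a single synthetic trading day under the Gaussian assumption. The two standard deviations are placeholders for illustration, not values estimated from the data.

# Sketch only: sigma.v and sigma.s are placeholder values, not estimates from the data.
set.seed(42)
nSeconds <- 23400              # seconds in a 6.5 hour trading day
sigma.v <- 0.002               # standard deviation of the latent steps, D2
sigma.s <- 0.005               # standard deviation of the observation noise, D1

v <- cumsum(rnorm(nSeconds, mean = 0, sd = sigma.v))        # latent offset-from-open price
s <- round(v + rnorm(nSeconds, mean = 0, sd = sigma.s), 2)  # observed offset, rounded to a penny

# Did this synthetic day end "net zero"?
abs(s[nSeconds]) < 0.01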

On the Form of \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t})

While the dependence of \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}) upon v_{t} might be modeled more completely, that problem is eclipsed by a practical one the source dataset suffers. Time & Sales records for different days don’t have trades registered at the same moments of the trading day, and in order to use these records in the manner I intend, I need to register them so. Accordingly, as will be seen below, I use a penalized smoothing spline from the R pspline package to create proxy series for each of the trading days, migrating their values onto a common time grid. When these data are used,

\mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}}, v_{t}) = \mathcal{D}_{1}(0, \sigma^{2}_{\mathring{s}})

and so that question is finessed because, despite the “chunkiness” of the trades, the result of the penalized spline is a continuous Real. Accordingly, the resulting disturbance has the same form as \mathcal{D}_{2}(0, \sigma^{2}_{v}), although with a different variance, \sigma^{2}_{\mathring{s}}.

Implementation

At the time resolution considered here, one second, many observations in Time & Sales records fall at the same moment. Accordingly, these records are first pre-processed to keep only the latest trade within any given second. There are 23,400 seconds in the trading day.

For each day of trades, a P-spline is fitted to the intraday trading history, and the fitted values are migrated onto the regular grid of trading seconds. There are 71 such days in the dataset. An example of such an interpolation, with the data overprinted, is given in the figure below. It is from day 24 of the dataset:


(Larger version of figure can be seen by clicking on the above. Use your browser Back Button to return to blog.)
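The following is a minimal R sketch of these two pre-processing steps, deduplication to one trade per second and migration onto the common grid of trading seconds via the pspline package. The data frame name and its columns (second, offset) are assumptions for illustration, not the names used in the repository.

# Sketch only: "day24" with columns "second" (seconds since the open, records in
# chronological order) and "offset" (price minus opening price) is assumed.
library(pspline)

# Keep only the latest trade recorded within each second.
day24 <- day24[order(day24$second), ]
day24 <- day24[!duplicated(day24$second, fromLast = TRUE), ]

# Fit a penalized smoothing spline and migrate its values onto the regular grid.
gridSeconds <- 1:23400
fit <- sm.spline(x = day24$second, y = day24$offset)
registered <- predict(fit, xarg = gridSeconds)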

Next, a Kalman filter with smoothing is applied to the 71 days of trades, treated as a 71-variate response with a common state and covariance terms. The covariances are estimated from the data using maximum likelihood, and they are allowed to vary throughout the trading day.

Given the fitted model, 100 instances of the estimated states and of the \epsilon (or \mathcal{D}_{2}(0, \sigma^{2}_{v})) noise terms for the 71 trading days are simulated using the KFAS package’s simulateSSM function. The last value of each trading day is set aside. The final values of the 100 simulated states are taken as the means of 100 Gaussian distributions. Noise terms differ for each of the 71 days, so a composite variance is calculated for each of the 100 simulations. That composite variance is calculated by a stationary bootstrap of the 71 days’ variances for each simulation, with a mean block size of 5 (days) and 1000 bootstrap replicas each. A stationary bootstrap is used because it is unlikely the variance for each day is independent of the others. The tsbootstrap function of the tseries package is used for the purpose.
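A minimal sketch of this modeling step follows. It makes simplifying assumptions: Y is taken to be a 23,400 x 71 matrix of registered offset prices, one column per trading day, and the specification shown, a single shared random-walk level with a common observation variance, is a stand-in rather than the exact model used for the results reported here.

# Sketch only: Y is assumed to be a 23400 x 71 matrix of registered offset prices,
# one column per trading day, and this state-space specification is a simplified
# stand-in: one random-walk state shared by all 71 series.
library(KFAS)
library(tseries)

nDays <- ncol(Y)
model <- SSModel(Y ~ -1 +
                   SSMcustom(Z = matrix(1, nDays, 1), T = matrix(1), R = matrix(1),
                             Q = matrix(NA_real_), P1inf = matrix(1)),
                 H = diag(NA_real_, nDays))

# Estimate the observation and state variances by maximum likelihood.
updatefn <- function(pars, model) {
  model$H[, , 1] <- diag(exp(pars[1]), nDays)   # common observation noise variance
  model$Q[, , 1] <- exp(pars[2])                # random walk (state) variance
  model
}
fit <- fitSSM(model, inits = log(c(0.01, 0.001)), updatefn = updatefn, method = "BFGS")

# Simulate 100 instances of the states and of the observation disturbances.
simStates <- simulateSSM(fit$model, type = "states",  nsim = 100)
simEps    <- simulateSSM(fit$model, type = "epsilon", nsim = 100)

# For one simulation: per-day disturbance variances, then a composite variance from a
# stationary bootstrap across the 71 days, mean block length 5 days, 1000 replicas.
dayVars      <- apply(simEps[, , 1], 2, var)
bootSamples  <- tsbootstrap(dayVars, nb = 1000, b = 5, type = "stationary")
compositeVar <- mean(colMeans(bootSamples))

In the actual calculation the composite-variance step is repeated for each of the 100 simulations; together with the 100 final state values, these give the means and variances used below.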

A histogram of the 100 means from the final states is shown below:
(Larger version of figure can be seen by clicking on the above. Use your browser Back Button to return to blog.)

A histogram of the standard deviations (not variances) of the noise terms for each of the 100 simulations is shown below:

(Larger version of figure can be seen by clicking on the above. Use your browser Back Button to return to blog.)

These means and variances are treated as a mixture distribution of 100 Gaussians with the given 100 means and their corresponding 100 variances. The choice of which Gaussian is used for any draw is weighted equally across the 100 components, and 100,000 samples are drawn from this mixture. This is based upon a plan by Jack Baker in his vignette for the sgmcmc package, although here only univariate draws are made for each sample, and the code is a little different.

The number of simulated trading days ending at an offset trade price within a penny of zero is counted, and the fraction of the total is taken as the probability of the intraday offset price, or total net trade, being zero.
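A minimal sketch of this sampling and counting, assuming mixMeans and mixVars are length-100 vectors holding the final-state means and composite variances from the simulations (the names are assumptions for illustration):

# Sketch only: mixMeans and mixVars are assumed length-100 vectors of the simulation
# means and composite variances described above.
set.seed(101)
nDraws <- 100000

# Pick a mixture component for each draw, all 100 weighted equally, then draw
# from the corresponding Gaussian.
component <- sample.int(100, size = nDraws, replace = TRUE)
draws <- rnorm(nDraws, mean = mixMeans[component], sd = sqrt(mixVars[component]))

# Fraction of synthetic trading days ending within a penny of the open.
pNetZero <- mean(abs(draws) < 0.01)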

The probability so derived is about 0.015, maybe a little less.

What this means in terms of waiting time, using a negative binomial model, is that the expected number of trading days before the first net zero close is \frac{1-p}{p}, where p = 0.015 and the trials are independent of one another. For that value, \frac{1-p}{p} \approx 66. For “contagious runs”, the wait could be longer or shorter, depending upon the serial correlation.

Criticisms

There are two shortcomings in the above calculation.

First, because there’s a need to register all trading days on a common time grid, by migrating stock prices using interpolation, it is possible the variance of the original dataset has been reduced. This is suggested by the figure above showing the interpolation with the data points overprinted. It is an unfortunate requirement of the Kalman filter approach to estimation. It might be possible to correct for the effect by inflating the variance terms. However, it is also possible that the scatter observed in the figure is due to the chunkiness (“to the penny”) with which stock trades are reported when the stock price is effectively between two ticks.

Second, the distribution assumed for stock variation is Gaussian, primarily because the KFAS package does not, at present, support a t-distribution as one of its modeling options. Were that to be available, it would be interesting to repeat this calculation.

The effect of addressing both of these criticisms would be to reduce the estimated probability of a net zero close. Accordingly, leaving the probability at 0.01, or 1%, seems a good estimate, even if it needs to be corroborated by actually addressing the two criticisms.

Posted in dependent data, evidence, financial series, investing, investments, model-free forecasting, numerical algorithms, state-space models, statistics, time series, trading