## Cathy O’Neil’s WEAPONS OF MATH DESTRUCTION: A Review

(Revised and updated Monday, 24th October 2016.)

Weapons of Math Destruction, Cathy O’Neil, published by Crown (Penguin Random House), 2016.

This is a thoughtful and very approachable introduction to, and review of, the societal and personal consequences of data mining, data science, and machine learning practices which at times seem extraordinarily successful. While others have breached the barriers of this subject, Dr O’Neil is the first to treat it in the call-to-action manner it deserves. This is a book you should definitely read this year, especially if you are a parent. It should be required reading for anyone entering practice in the field.

I have a few quibbles with the book’s observations, based on its very occasional leaps of logic and some quick interpretations of history.

For example, while I wholeheartedly deplore the pervasive use of e-scores and a financing system which confounds absence of information with higher risk (that is, fails to posit and apply proper Bayesian priors), the sentence “But framing debt as a moral issue is a mistake”, while correct, ignores the widespread historical practice of debtors’ courts and prisons in the United States. This is really not something new, only a new form of it. Perhaps it is more pervasive now.

For a few of the cases used to illustrate WMDs, there are other social changes which exacerbate matters, rather than abused algorithms being the sole cause. For instance, the idea of individual home ownership was not such a Big Deal in the past, especially for people without substantial means. These less fortunate individuals resigned themselves to renting their entire lives. Having a society and a group of banks push home ownership onto people who can barely afford it sets them up for financial hardship, loss of home, and ruined credit.

What will be interesting to see is where the movement to fix these serious problems goes. Protests are good and necessary but, eventually, engagement with the developers of actual or potential WMDs is required. An Amazon review is not the place to write more on this, nor to give some of my ideas. Accordingly, I have written a fuller review at my blog for the purpose.

My primary recommendation is a plea for rigorous testing of anything which could become a WMD. It’s apparent these systems touch the lives of many people. Just as in the case of transportation systems, it seems to me that we as a society have every right to demand these systems be similarly tested, beyond the narrow goals of the companies who are building them. This will result in fewer being built but, as Dr O’Neil has described, building fewer bad systems can only be a good thing.

(The above is the substance of a review I wrote at Amazon for the book.)

Here are some of my ideas

While a social movement may be a good way to start, and to raise consciousness, I think more specific steps are needed. In particular, codifying acceptable technical practice in an IEEE or ISO standard might be a way to identify those companies which take care in their use and application of this technology. I emphasize application because it seems to me the action side of the process needs to be constrained in addition to the data-gathering side. While some regulatory lasso needs to be thrown around the froth and foment of Web-scraping, data-dredging companies and startups that deeply affect people’s lives, I also just don’t think social pressure alone will make the exploiters of these data act more ethically. A compliance procedure for an IEEE or ISO standard would make what was being done more transparent, as well as constrain it. Of course, proposing and negotiating such a standard could take a long time, and may fall short of its ambitions. Would government agencies be willing to undergo compliance assessment under these standards? If not, is that letting a wolf into the henhouse?

This book is also a call to statisticians to do a better job educating the general public about risk and variability. Some have tried, such as David Spiegelhalter, Stephen Fienberg (who coauthored a 1980 article giving stark warnings about designing police patrol experiments), the collection edited by Joseph Gastwirth published in 2000, and others. That some education officials failed so completely to understand basic ideas about variability when assessing “value-added scores” in education means these decision makers and managers missed something very key in their quantitative educations. There were calls for considering racial bias at the Bureau of Justice Statistics back in the 1990s (e.g., Langan, 1995). There are an increasing number of complaints from the statistical community, such as in the current issue of Significance, the joint publication of the Royal Statistical Society and the American Statistical Association, regarding turnkey software which purports to help automate policing. In particular, the recent issue features an article by Kristian Lum and William Isaac called “To predict and serve?”, which not only highlights a disturbing instance of abuse of “predictive policing” software in Oakland, CA, but also suggests a technique for demonstrating where such software falls down. It also gives a number of references, including citations of articles cautioning against misuse. Alas, the authors also point out that they were able to do this with but one popular software package; the other vendors refused to cooperate. Wouldn’t it be appropriate to insist that, if such software is being used to drive as socially powerful a force as policing, it be subjected to independent review and assessment?

While there is evidence that such concerns have been expressed repeatedly, perhaps it will take something like Weapons of Math Destruction and the attendant media focus to make progress. Clearly, drawbacks cited by other experts have not prevented abuse.

Additional comments, demurrals, and quibbles

1. In the chapter “Shell Shocked”, regarding D. E. Shaw, the tendency to keep portions of the process “need to know” illustrates the limitations of any classification system when dealing with highly technical matters and systems which benefit from many eyes. It reminds me of the late Richard Feynman’s account, in Surely You’re Joking, Mr Feynman, of how he was prohibited (at least for a long while) from telling the engineers he supervised on the Manhattan Project what they were working on, which would have let them use their physics knowledge to help keep their calculations correct, and this despite the protestations of Project management that they were not progressing quickly enough.
2. Same chapter, regarding “Very few people had the expertise and the information required to know what was actually going on statistically, and most of the people who did lacked the integrity to speak up”: Those who remained silent in such circumstances, in my opinion, despite training which should have told them better, carry most of the responsibility for the consequences.
3. In the chapter regarding “stop and frisk”, regarding the statement “The Constitution, for example, presumes innocence and is engineered to value it”, I disagree that the Constitution presumes innocence. It presumes parties ought to be treated equitably. I think “innocence” is far too abstract a property for any legal system or process to determine, except when defined in the narrow sense of “found not guilty of a specific formal charge.” That’s not “innocence” in the abstract sense. Indeed, a bit farther down, “The Constitution’s implicit judgment is that freeing someone who may well have committed a crime, for lack of evidence, poses less of a danger to our society than jailing or executing an innocent person” makes that point exactly.
4. Farther down, regarding “And the concept of fairness utterly escapes them. Programmers don’t know how to code for it, and few of their bosses ask them to”, in my opinion it’s really not that hard, for fairness can be quantified as an extension of the entropy measure. I think the problem is that this is not seen as important enough to specify. I also don’t know if we’d be much better off if there were a good measure of “fairness”.
5. The problem cited in the passage “The unquestioned assumption that locking away ‘high risk’ prisoners for more time makes society safer. It is true, of course, that prisoners don’t commit crimes against society while behind bars” is not new. Norbert Wiener observed in his book Cybernetics that killing difficult people makes society safer still, yet that is too brutal, or too honest, a proposal for most to contemplate, even if it is the logical extension of the present system. He surely was not advocating that, and was, in fact, reacting most strongly against frontal lobotomy as a form of “treatment” for mental patients. His point was to highlight the hypocrisy of using convenience in managing patients to justify their treatment. Also, prisoners can commit crimes against society while behind bars, even if they only harm one another: surely society has an interest in assuring that prisoners are safe, lest additional punishments be levied upon them without due process.

6. Regarding “…for the benefit of both the prisoners and society at large”, society shows no common agreement regarding what the point of incarceration in standard prisons (not those for “white collar criminals”) is … Is it correction and rehabilitation? Or punishment? Or vengeance?
7. In the chapter “Ineligible to Serve”, regarding “If his principal online contact happened to be Google’s Sergey Brin, or Palmer Luckey, founder of the virtual reality maker Oculus VR, Pedro’s social score would no doubt shoot through the roof”, of course, not all good candidates are online, and it’s a pretty strong constraint (and problem!) to assume they are.
8. In the chapter “Sweating Bullets”, regarding Clifford’s drastic change in scores, I’m most amazed that the test administrators and interpreters don’t understand the variability proper to such scores or know how to account for it. It seems to me they could not possibly be qualified for the positions they hold if they don’t. But, again, as mentioned above, this is a failure of statistical and mathematical education, or of the appreciation of it by this society.
9. In the chapter “No Safe Zone”, regarding “We’ve already discussed how the growing reliance on credit scores across the economy …”, a lot of this practice, too, is based upon an implicit assumption and tenet of faith that “the markets” will weed out practitioners of this kind of statistical voodoo. “The markets” have no way to understand this stuff, and whatever natural selection they might apply is horribly inefficient and has little statistical power. An appeal to “the markets” and to “competition” is a fig leaf covering sloppy policy, again in my opinion.
10. In the same chapter, regarding “The model is fine-tuned to draw as much money as possible from this subgroup. Some of them, inevitably, fall too far, defaulting on their auto loans, credit cards, or rent. That further punishes their credit scores, which no doubt drops them into an even more forlorn microsegment”, well, that’s it, isn’t it? It depends upon your loss function, and the designers of this process, which can only laughably be called an optimization algorithm, did a piss poor job of that design.
11. In the same chapter, regarding “This undermines the point of insurance, and the hits will fall especially hard on those who can least afford them”, unfortunately, I just don’t buy that most insurers are that good at what they are supposed to do, with apologies to statistical actuaries. Some may indulge in the kind of statistical fallacy which Dr O’Neil describes, but it seems many don’t even properly consider the risks they know about. For example, some insurers don’t properly consider increased losses at coasts from storms and sea level rise. I don’t know if this is a product of actuarial consideration, or if the actuaries are constrained by management and the companies’ policies on what they can consider, or if their results are filtered by the same. No doubt their reinsurers do consider these risks, and some insurers rely upon generous interpretations of “flood damage” to avoid paying out. Nevertheless, these are not behaviors associated with the fiendishly clever and discriminating inference engines, human or otherwise, which are implied by Dr O’Neil’s explanation and postulated mechanism. Accordingly, I fail to see a plausible mechanism for this kind of thing happening, as nefarious as it is. Moreover, credit agencies and the like have an organic and unchecked internal error rate, and these errors work to frustrate precise predictions of risk, as well as associations of individuals with clusters, even if such errors can by themselves cause harm. I think it’s even a fair question to ask whether deterministic association of individuals with any group is ever proper statistical practice: it should be an affinity score or membership number against each group. I’ve made that observation in my own professional practice, and a common response is, “Well, that algorithm doesn’t scale.” Therein lies, I believe, a lot of the problem.
12. Same chapter, regarding the conclusion “If we don’t wrest back a measure of control, these future WMDs will feel mysterious and powerful. They’ll have their way with us, and we’ll barely know it’s happening”, there are some ways of “wresting control”, even if most people will not engage in them. (Many people seem starkly unaware of their self-interest.) One way is to “jam” the signal being fed to the systems by deliberately increasing the variance of their observations. This can be done by interfering with your location as reported through cell phones, or simply by mixing up what you do during the day, reducing the consistency of your patterns. The other way is to selectively lie. For instance, for years, in order to confound mail order catalogues and other online solicitations, I have been misrepresenting my birth date. I acknowledge this kind of practice, even if widely adopted, won’t solve most of the problem.
13. In the chapter “The Targeted Citizen”, regarding “I wouldn’t yet call Facebook or Google’s algorithms political WMDs … Still, the potential for abuse is vast. The drama occurs in code and behind imposing firewalls”, there’s nothing new in that view, long warned about by Lawrence Lessig in his book Code 2.0. In fact, some consider this a feature, keeping control of online things from governments and such. Lessig warns in his writings, however, that it is not turning out that way.
14. In the chapter “Conclusion”, regarding “Dismantling a WMD doesn’t always offer such obvious payoff … For most of them, in fact, WMDs appear to be highly effective”, how the devil can they tell? I don’t see any evidence in the research presented that these companies and organizations do anything like a comprehensive testing program, that is, one that assures the (written) objectives are met in the real world, not merely that the code implements the requirements. To use the example by Lum and Isaac in “To predict and serve?” cited above, many companies or even organizational units won’t open their algorithms to outside scrutiny. That could be because of a desire to protect something proprietary, or it could be that the algorithms really don’t work well, and they are trying to sell shoddy algorithms as if they do, even to other units of the same business.
15. In the chapter “Conclusion”, regarding the Derman and Wilmott “oath”, I respectfully but strongly disagree with it. The same could be said of all of Physics. And I don’t know what “overly impressed with Mathematics” means. Apart from lip service to a goal, people could insist that these systems undergo a comprehensive and rigorous — and necessarily expensive — testing program like many other systems which interact with the physical world do, for instance, aircraft. As a colleague observed after a discussion about this, it could be just as well said that the Mathematics was done badly and no independent check on it was available.
16. Finally, in the chapter “Conclusion”, regarding “Though economists may attempt to calculate costs for smog or agricultural runoff, or the extinction of the spotted owl, numbers can never express their value”, I have a couple of things to say. First, I agree that a one-dimensional characterization of any complicated system or process or person, like a spotted owl, is doomed to be woefully incomplete. Second, I agree that economic assessments of these, if honest, must be based upon behavioral economics, and not upon the pseudo-objective rantings of the Chicago or Austrian Schools, and, so, they are highly contingent and, being so, unsuitable for policy. But, third, I do think it is possible to quantitatively characterize such complicated things, and, if well done, these characterizations can be of great use to society in solving its problems. The placement of the new Hoover Dam bypass (chronicled by Henry Petroski) and assessments of ecosystem services are two small examples. As any casual reader of this blog will note, I continue to be very enthusiastic regarding the economic prospects of solar PV as a technology for good, not only to advance zero Carbon energy, but as a basis for a helpful and common discussion among members of this United States society who can’t seem to agree on much of anything, and also to advance the revolution championed by the late Hermann Scheer, that of bringing control of the energy supply, and thereby control of their democracy, back to the people. This is an area where putting quantitative measures on often intangible things happens systematically.
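Returning to item 4: the claim that fairness can be quantified as an extension of the entropy measure can be made concrete. Here is a minimal sketch of my own (an illustration, not anything proposed in the book), assuming a statistical-parity notion of fairness: the mutual information between a protected group label and a decision is zero exactly when the decision rate is independent of the group, and grows as decisions reveal more about group membership.

```python
from collections import Counter
from math import log2

def mutual_information(groups, decisions):
    """Mutual information (bits) between group labels and decisions.

    Zero when decision rates are identical across groups (a crude
    statistical-parity notion of fairness); larger values mean the
    decision carries more information about group membership.
    """
    n = len(groups)
    p_g = Counter(groups)                   # marginal counts of group labels
    p_d = Counter(decisions)                # marginal counts of decisions
    p_gd = Counter(zip(groups, decisions))  # joint counts
    mi = 0.0
    for (g, d), c in p_gd.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((p_g[g] / n) * (p_d[d] / n)))
    return mi

# A "fair" decision rule: the same approval rate in both groups.
fair = mutual_information(["A"] * 4 + ["B"] * 4, [1, 0, 1, 0, 1, 0, 1, 0])
# A rule that approves only group A: decisions fully determined by group.
unfair = mutual_information(["A"] * 4 + ["B"] * 4, [1, 1, 1, 1, 0, 0, 0, 0])
print(fair, unfair)  # → 0.0 1.0
```

This is only one of several candidate measures, and statistical parity is itself a contested definition of fairness; the point is that specifying some such measure is not technically difficult.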

One thing I fear when faced with these kinds of issues, and it’s something I have seen elsewhere in this society, especially among my younger colleagues, is a devolution into insidious cynicism. This is sometimes wrapped in a mantra which argues “you can only control yourself and doing anything else is engaging in an immoral act”, possibly substantiated by an appeal to Buberian ethics. And, ironically or hypocritically, the same complainants will continue to work for companies with a deep investment in facilitating this kind of WMD engineering, even if the companies don’t build WMDs themselves. (How many companies profit from the existence and operations of Facebook?) Especially given the insights of behavioral economists like Daniel Kahneman, I hope the insights Dr O’Neil offers don’t end with their merely being presented. My definition of a successful technology is one that does not depend upon people being good or morally perfect in order for it to “do no harm”. (I have been influenced a good deal towards this view by the lectures of Professor Sheila Widnall of MIT.) In fact, my standard is that every successful technology must assume people are imperfect, morally corruptible, and self-interested, and yet perform its function nonetheless. If it cannot work under those conditions, any device or technology is broken. And I continue to be heartened by both the successes of engineering and science, and the deep mathematics that underpin them, especially as exemplified in the talent and smarts of young people pursuing these to make the world a better place, for all of its beings and creatures.

## “Getting past grudging precautions: How the next President should address climate change”

Professor David Titley writes in the online newsletter DefenseOne:

Many observers think climate change deserves more attention. They might be surprised to learn that U.S. military leaders and defense planners agree. The armed forces have been studying climate change for years from a perspective that rarely is mentioned in the news: as a national security threat. And they agree that it poses serious risks.

I spent 32 years as a meteorologist in the U.S. Navy, where I initiated and led the Navy’s Task Force on Climate Change. Here is how military planners see this issue: We know that the climate is changing, we know why it’s changing and we understand that change will have large impacts on our national security. Yet as a nation we still only begrudgingly take precautions.

True, the Pentagon is a major emissions generator, and that will need to be dealt with. But the emissions from U.S. natural gas use dwarf those of the U.S. military many times over. In 2015, these were 1.5 GtCO2; for all energy uses, they were about 5.8 GtCO2. I reserve the term liberal climate deniers for people who, while they supposedly accept climate change, its human causes, and the necessary mitigation at face value, refuse to do the proper triage to see what needs to be reduced the most, and exploit the cause to further their own political agendas. That doesn’t help to fix the problem, and all engineering fixes involve tradeoffs.

## Polls, Political Forecasting, and the Plight of Five Thirty Eight

On 17th October 2016 at 7:30 p.m., Nate Silver of FiveThirtyEight.com wrote about how, as former Secretary of State Hillary Clinton’s polling numbers got better, it was more difficult for FiveThirtyEight’s models to justify increasing her probability of winning, although it did “stabilize” their predictions. Mr Silver is being a bit too harsh on their models, since the problem is fundamental, not just something which afflicts their particular model. In Mr Silver’s defense, he did write:

But there’s some truth to the notion that she’s encountering diminishing returns. And that’s for a simple reason: 88 percent and 85 percent are already fairly high probabilities. Our model is going to be stingy about assigning those last 10 or 15 percentage points of probability to Clinton as she moves from the steep, middle part of the probability distribution to the flatter part.

Well, maybe, except I’m not sure that is assignable to Secretary Clinton. It’s a mathematical phenomenon, one which Mr Silver may be aware of, but apparently did not want to comment upon, saying “Before this turns into too much of a math lesson …”. I say: why not a math lesson?

In particular, as a probability of an event, any event, gets more and more above 50% (or, symmetrically, less than 50%), the amount of information needed to “push it” the same distance it has gone grows, and as the probability (or improbability) of the event approaches certainty, the efficiency of additional information to improve the determination gets worse. It’s possible to be quantitative about all this.

Let’s have a look at this in the hypothetical case of two presidential candidates, one called T and one called H. Suppose that, with time, T‘s probability of winning, denoted here $[\mathbf{T}]$, decreases from 0.50. Since I’m only considering two candidates, $[\mathbf{H}] = 1 - [\mathbf{T}]$, so, then, $[\mathbf{H}]$ increases away from 0.50, and they sum to unity. This is a system with two components, and its entropy is equal to

$-[\mathbf{T}] \log_{2}{[\mathbf{T}]} - [\mathbf{H}] \log_{2}{[\mathbf{H}]}$.

Entropy for this system will hereafter be denoted $E(p)$, where $p = [\mathbf{H}]$. The amount of information needed to move, say, $[\mathbf{H}]$ up a unit of probability is the decrease in the entropy at the new state of affairs with respect to the old one. Adding information is kind of like doing work, although, in this case, the “work” is evidence collected from polls and other sources.
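These quantities are easy to compute directly; a minimal Python sketch (the function names are my own):

```python
from math import log2

def entropy(p):
    """Binary entropy E(p), in bits, of a two-candidate race where P(H wins) = p."""
    if p in (0.0, 1.0):
        return 0.0  # certainty carries zero entropy
    return -p * log2(p) - (1 - p) * log2(1 - p)

def log_odds(p):
    """Log odds, log2(p / (1 - p)): the magnitude of dE/dp."""
    return log2(p / (1 - p))

print(entropy(0.5))     # a tied race: exactly 1 bit
print(entropy(0.8885))  # ≈ 0.5 bits
print(entropy(0.91))    # ≈ 0.436 bits
print(log_odds(0.9))    # ≈ 3.17 bits per unit of probability
```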

So, for example, the amount of entropy when both candidates are tied is exactly 1 bit. (Entropy and information are measured in bits or nats.) When $[\mathbf{H}]$ is about 0.8885, the entropy is 0.5 bits. When $[\mathbf{H}]$ is 0.91, the entropy is about 0.436 bits. The magnitude of the rate of change of entropy with $[\mathbf{H}]$ is simply that of the derivative of $E(p)$ with respect to $p$, or $\log_{2}{(\frac{p}{1-p})}$, in other words, the log of the odds ratio, sometimes called the “log odds”. If someone were to try to assess these “diminishing returns”, they might compare the additional information needed to progress to that needed to move from $[\mathbf{H}] = 0.50$ to $[\mathbf{H}] = 0.60$. So, let’s plot that:

(Figure: additional information needed per unit of probability of winning, expressed as a multiple of the information needed to move from 0.50 to 0.60, plotted against probability of winning.)

So, again, what this shows is how much additional information is needed per unit of probability of winning, compared to the information needed to improve chances of winning from 0.50 to 0.60, plotted for various probabilities of winning. Highlighted on the figure is the 0.90 probability of winning, close to present estimates, and it shows that the amount of information needed there, per unit of probability, is about one hundred and nine times that needed to improve from a 50% chance of winning to a 60% chance of winning.

So, what does this mean in the context of political forecasts or, for that matter, any forecasts?

First, as suggested by Mr Silver, once you are at $[\mathbf{H}] = 0.9$, the additional information or evidence needed to move it higher, to $[\mathbf{H}] = 0.91$ or $[\mathbf{H}] = 0.92$ is substantial. In fact, just going from 0.90 to 0.91 requires almost six more multiples of the change in evidence needed from 0.50 to 0.60. This is true of any application. For example, to demonstrate, say, that a given engineered system has a reliability of, say, 0.995 requires a lot of testing and a lot of work, and necessarily takes a long time, simply because that 0.995 criterion is way out there on the “evidence sill”.
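The multiples quoted above follow directly from these definitions. A quick check, reusing the entropy and log-odds functions and taking as baseline the total information gained in moving from 0.50 to 0.60:

```python
from math import log2

def entropy(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def log_odds(p):
    """Magnitude of dE/dp: log2(p / (1 - p))."""
    return log2(p / (1 - p))

baseline = entropy(0.5) - entropy(0.6)  # ≈ 0.029 bits for the whole 0.50 → 0.60 move

# Information needed per unit of probability at p = 0.90, in baseline multiples:
print(round(log_odds(0.90) / baseline))  # → 109

# Additional multiples accrued just in moving from 0.90 to 0.91:
print(round((log_odds(0.91) - log_odds(0.90)) / baseline, 1))  # → 5.8
```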

Second, this mathematical fact tends to downplay the significance of changes at high probabilities of winning. Going from 0.90 to 0.91 may not sound like a lot, but the information gathered to justify it is necessarily substantial.

Third, there are limits to political forecasting. These are not because the models are poor, or the techniques are poor, but because there is only so much information available in political polls and other sources. These observations have their own variability or noise, and that limits their information content. At some point in the above figure, the information content of the polls or observations is exhausted, and whatever uncertainty remains is the best anyone can do. This isn’t to say polling could not be improved, or samples might not be larger, or more systematic surveys might not be taken to improve results, using stratified sampling and other techniques. (These are pretty standard anyway, although they can cost a lot of money.) It’s just that you cannot squeeze more out of a set of data than it has. It also means there are limits to what political forecasting can do, even for a group as talented as the one at FiveThirtyEight.com.
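One way to see why the polls themselves impose a ceiling, under the simplifying assumption of an idealized simple random sample (which real polls only approximate): the standard error of a polled proportion shrinks only as the square root of the sample size, so each additional digit of precision costs roughly a hundredfold more respondents.

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a polled proportion p from n respondents."""
    return z * sqrt(p * (1 - p) / n)

# Quadrupling the sample only halves the margin of error:
for n in (500, 2000, 8000, 32000):
    print(n, round(margin_of_error(0.5, n), 4))
```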

Nevertheless, if a particular candidate, say, H has $[\mathbf{H}] = 0.91$, that’s pretty darn good, especially when you consider the amount of information needed to establish that, and what that means, for example, about evidence for their popularity among the public. And this is an insight which I don’t believe is made available by examining variance of Bernoulli variables or coefficients of variation, measures which seem inappropriate this far out on the Bernoulli tail.

If you’d like to learn more about this kind of thing, I recommend Professor John Baez’s series of posts on information geometry. It is a little mathematical, but the investment in time and mind is decidedly worth it. There are many analogies between information, entropy, and physical processes. For example, borrowing from classical statistical mechanics in Physics, information in this instance can be thought of as the additional cooling needed to bring a two-state system into a more rigid configuration, kind of like approaching Absolute Zero, at least with respect to the entropy of perfect crystals.

## “All models are wrong. Some models are useful.” — George Box

(Image courtesy of Damien Garcia.)

As a statistician and quant, I’ve thought hard about that oft-cited Boxism. I’m not sure I agree. It’s not that there is such a thing as a perfect model, or a correct model, whatever in the world that would mean; it is that whatever is appealed to in our heads or “gut” when we look at a model and find it wanting is, well, just another model. Frankly, after a lot of practice, I think models get a bad rap: I don’t think it makes any sense at all to look at observations without having at least an informal model in mind. Observations are necessary, but rarely can they stand on their own; not and have a chance of being generalized, anyway. That’s because we really can’t see something, let alone understand it, unless it’s abstracted away, beyond details.

(Image courtesy of the Open Motion Planning Library.)

I like this other quote, from physicist Arthur Eddington:

It is also a good rule not to put overmuch confidence in the observational results that are put forward until they are confirmed by theory.

## A state that doesn’t provide zero Carbon energy is at a competitive disadvantage

As of September, 62 of the country’s largest corporations had indicated their energy priorities by endorsing the Corporate Renewable Energy Buyers Principles. Other large institutions such as universities and military bases are moving in that direction as well.

Adam Kramer, the executive vice-president of strategy for Switch, a “transformational technology idea engine,” reflected on the company’s thinking in choosing Michigan for a new facility.

He said, “Our first question was: ‘Can you get us our power needs?’ The second question was: ‘Can you get us 100 percent renewable?’ If the answer was no, Michigan wasn’t going to be part of the site selection. From our perspective, energy is our lifeblood.”

States are eager to meet these companies’ demands, seeking the economic development prizes that follow.

“If we just think about large IT companies and the next big data center, access to 100 percent renewable energy for many of them is a requirement,” said Ryan Hodum, vice president of David Gardiner & Associates, a clean-energy adviser to businesses. “So a state that doesn’t provide that is at a competitive disadvantage.”


## The Budget

Certain claims regarding contributions of health programs to the United States federal budget in a debate last night made me curious, and so I checked the figures on this from the Office of Management and Budget. Of special importance to me are the National Science Foundation (NSF), having a budget in 2015 of US$6.8 billion, and the National Oceanic and Atmospheric Administration (NOAA), having a budget in 2015 of US$5.4 billion. Contrast these with the largest claims on the budget. I’ve never thought that discrepancy wise. And, no, while I respect the work done by people in the Department of Energy and the Department of Defense on basic science, much of that is not cutting edge, since it is hidden from public peer review and, so, is inaccessible to scientific method, and is not available to the scientific community at large. There is also science related to forestry and biology buried in the Department of the Interior budget, which is US$12.9 billion overall.

It is also interesting to track the federal deficit, much touted in certain circles, against outlays for all defense-related programs. I do not know how much of the budget of various intelligence agencies is included in defense, since, at least at one time or another, these expenditures were considered too secret for the public to see. As a consequence, there is probably some additional expenditure on what can only reasonably be considered defense not shown here. These amounts, according to the Wikipedia reference, could be up to an additional US$50 billion. Partial numbers are available.

It is interesting, as well, to track the federal deficit, much touted in certain circles, against outlays for Health and Human Services:

Both series of expenditures strongly correlate with the size of the deficit. However, there’s a strong argument that deficits as accounted in their present form do not matter. Professor Daniel Shaviro, at that link, takes up Laurence Kotlikoff‘s idea of generational accounting. In the end, however, Shaviro concludes:

I find this norm unpersuasive. The real issue is the overall distribution of lifetime consumption between succeeding generations. This, in turn, depends less on fiscal policy than on present generations’ overall rate of saving and productivity of investment, along with decisions within the household concerning such matters as child care, educational investment, and the rate of divorce. There is no apparent reason why government fiscal policy, which is merely one component of everything we do that affects our descendants, should be generationally “balanced.” Even the narrower claim that reducing tax lag, by increasing national saving, would shift lifetime consumption in the right direction, may not be correct. For example, if technological advances cause people fifty years from now to be wealthier than we are—just as we are wealthier than people fifty years ago, and they are wealthier than people fifty years earlier still—then changing fiscal policy to benefit future generations would amount to playing Robin Hood in reverse. While per capita societal wealth is not certain to continue increasing, our inability to predict the future makes it hard to know what generational policy would be best.

I’d say, too, that beyond expenditures, the generations alive today and those two preceding them have unfairly burdened future generations with an environmental load and commitment they are going to have to work off. Indeed, much of the expenditure and economic success we have seen in the OECD nations to date is fundamentally tied to wasting future environmental services, services which our children and theirs will not have.

## “BlackRock Investment Fund will include climate change as risk factor for portfolio”

BlackRock, the world’s largest private investment fund, has announced that it will include climate change as an important factor in how it assigns risks to its investment portfolio …

BlackRock is not your average investment fund. With $4.9 trillion in assets, it is the biggest private investment fund in the world. Naturally, what it says, and more important, what it does, matters. In September 2016, it issued a report that, to put it mildly, may become a turning point in the annals of global investing and risk management. In unequivocal language, it said, "Investors can no longer ignore climate change. Some may question the science behind it, but all are faced with a swelling tide of climate-related regulations and technological disruption."

From BlackRock itself:

Most industries lag insurers when it comes to properly accounting for and pricing risks of climate-related events. Many equity investors ignore climate risk, and credit investors and ratings agencies do not routinely assess it. Property markets often ignore extreme weather risk, even in highly exposed coastal areas. Most asset owners do not measure their exposure to potentially stranded assets such as high-cost fossil fuel reserves that may have to be written off if their use is impaired by climate change regulation. Who can blame them? There is little evidence that assets more susceptible to climate change and related regulatory risks trade at a discount to the market. A simple analysis of monthly returns in the MSCI World Index shows low carbon-intensive equities (those with the lowest carbon emissions by revenues as of 2014) have outperformed those with the highest carbon intensity over the past 20 years. Yet this outperformance vanishes after stripping out the impact of common return factors such as size and geography, we found. In other words, we found there has been no climate change risk premium for equities. Yet this does not mean there will be no premium in the future. In fact, we think there likely will be one. Many countries are set to adopt carbon taxes or cap-and-trade (emissions trading) programs to help meet their INDC targets.
Greater transparency on climate risks and exposures will likely lead to a gradual discounting of companies and assets exposed to climate risk — and increase the value of those most resilient to these risks. Some asset owners are already divesting from carbon intensive equities, while others are 'hedging' their carbon exposure by investing in renewables, energy efficiency and clean tech. It can be costly to underestimate environmental risks. Just ask BP's equity and debt holders.

See the summary story here, and BlackRock's own explanation here and here.

(Click on image to see a larger figure, and use browser Back Button to return to blog.)

## Our uncontrolled experiment with Earth as an Astrophysics problem set

Hat tip to And then there's Physics: On climate change and Astrobiology, by Adam Frank.

## Enough Already

"If you're in a hole, stop digging."

## Uniform sampling of a disk, and implications for sampling the Internet

Suppose you want to uniformly sample from the interior of a circle of unit radius, in other words, from a unit disk. The "gut feel" way is to pick a random angle, $\theta$, in radians uniformly from 0 to $2\pi$, and then a random radius, $r$, uniformly from 0 to 1. Do this a bunch of times, and plot the result:

Oops. Something's gone wrong! The density of points in the center of the disk is higher than at the edges and, in fact, the density drops as the edge is approached.

Now, there are approximate workarounds involving more computation. Rejection sampling is one that comes to mind. In that case, instead of drawing values for parameters of a polar distribution, the idea is to generate points from a 2-by-2 square centered on the origin, and then reject instances outside of a circle with unit radius, also centered at the origin. But this is wasteful and really not necessary. The alternative is simple.
The key observation is that for any randomly chosen radius $r$, the number of points on that radius ought to be proportional to $r$ if, in fact, the disk is going to be uniformly dense in points. In other terms, the radius probability density function of points, $f(r)$, ought to be proportional to $r$, or, formally, $f(r) = k r$ for some positive constant $k$. Since $1 = \int_{0}^{1} f(r)\,\mathrm{d}r$ by definition of a probability density function, where the upper limit is the radius of the disk, we have:

$1 = \int_{0}^{1} f(r)\,\mathrm{d}r = \int_{0}^{1} k r\,\mathrm{d}r = [\frac{k}{2} r^2]_{0}^{1} = \frac{k}{2}$,

so $k = 2$. That also gives the cumulative distribution function, which is really what we want for a particular simulated choice of $r$: it is $F(r) = r^2$. So, given a uniform draw $u$, the corresponding $r$ is found using the inverse cumulative distribution function. Specifically, $u = r^2$, so $r = \sqrt{u}$. And when that's done we get:

Okay, so what does this have to do with the Internet? A lot of present day assessment of the Internet is done using two basic tools, ping and traceroute. Both have as a key element the idea that a packet is sent towards some target. An engineering feature of such packets on the Internet is that they contain a piece of control information called time to live, or TTL. This is woven into the fundamental fabric of the Internet so packets don't just flood it and make it useless. The basic idea is that when a node on the Internet receives a packet, and it is not intended for it, it decrements the TTL by one, overwriting that field with the decremented value, and then sends the revised packet on its merry way towards the target. Should the decrement result in a TTL value of zero, however, rather than sending the packet on, the node crafts a letter to the original sender of the packet saying, in effect, "You did not affix sufficient postage".
That letter is called a TTL-exceeded ICMP message, and it contains, among other things, the address of the node at which the TTL-exceeded event occurred. That's good, because it tells the sender (us!) how far the packet went, and that's exactly how ping and traceroute are used to explore the Internet. They don't know these addresses in advance, so to explore what addresses are out there and who's connected to whom, traceroute sends packets with successively greater TTLs out towards the target, until it is reached. The transition of a packet from one node to another along its way to a target is called a hop. Traceroutes are devices for elucidating the hops taken to arrive at a target.

Now, a ping is like a traceroute except that it involves sending a packet to specific recipients who will respond back with the time the packet was received. This lets engineers do things like measure latency. But, as you might imagine, ping packets also have TTLs, and in fact you can think of the interior of the loop of a traceroute as doing a ping.

If an engineer wants to explore the structure of the Internet, traceroutes are good for getting basic structure. (There are offline sources as well, not important for this post.) But if the engineer wants to obtain a representative set of addresses across the Internet for a study, say, a representative sample of all addresses within some number of hops of an origin or vantage point, the arguments about the disk above say that tabulating all the addresses seen within that number of hops and their frequency is going to give a biased representation of those addresses. If all the addresses at a TTL value of $j$ are collected, and all the addresses at $j+1$ are collected, and so on, this amounts to uniform sampling in TTL.
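To make the recipe concrete, here is a minimal simulation sketch (my own illustration; the function names and the 40-hop maximum are assumptions for the example, not anything from ping or traceroute themselves) of the disk sampler and its TTL analogue:

```python
import math
import random
from collections import Counter

def naive_disk_point(rng):
    # "Gut feel" sampling: uniform angle, uniform radius -- clusters near the center.
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = rng.uniform(0.0, 1.0)
    return (r * math.cos(theta), r * math.sin(theta))

def uniform_disk_point(rng):
    # Corrected sampling: r = sqrt(u), from inverting the CDF F(r) = r^2.
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(rng.uniform(0.0, 1.0))
    return (r * math.cos(theta), r * math.sin(theta))

def inner_fraction(points, radius=0.5):
    # Fraction of points landing within `radius` of the origin.
    return sum(1 for x, y in points if x * x + y * y <= radius * radius) / len(points)

MAX_TTL = 40  # assumed maximum hop depth for a measurement campaign

def draw_ttl(rng):
    # The TTL analogue of the disk fix: distance ~ MAX_TTL * sqrt(u).
    return max(1, round(MAX_TTL * math.sqrt(rng.random())))

rng = random.Random(42)
naive = [naive_disk_point(rng) for _ in range(100_000)]
corrected = [uniform_disk_point(rng) for _ in range(100_000)]

# Under uniform density, a fraction r^2 = 0.25 of points lies within radius 0.5;
# the naive scheme puts about half of them there instead.
print(round(inner_fraction(naive), 2), round(inner_fraction(corrected), 2))

# TTLs drawn this way occur roughly in proportion to their distance,
# so distant "rings" of the network are sampled more heavily than near ones.
ttl_counts = Counter(draw_ttl(rng) for _ in range(200_000))
print(ttl_counts[35], ttl_counts[5])
```

The point of the `draw_ttl` sketch is simply that the same square-root transform that fixed the disk also weights the TTL draws in proportion to distance.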
After all, TTL is just a kind of distance and, so, given what was shown about disks above, if this is the sampling of TTLs done or kept after a traceroute or ping campaign, the nodes closer to the vantage point are overrepresented in comparison with ones farther away. Accordingly, whatever statistics are collected are heavily biased by the vantage point, more than they would be if visibility were the only concern.

So, the argument above suggests a remedy. Rather than doing or keeping addresses associated with every TTL, the addresses associated with TTLs up to some maximum (say, 40) should be retained only when the TTL equals the rounded value of $40 \sqrt{u}$ for a uniform random draw $u \sim \mathcal{U}(0,1)$. Otherwise, the annular bias shown in the first figure will afflict the measurements taken. This can be done by postprocessing, or it could be incorporated into the sampling plan.

In fact, there are advantages to incorporating this into the sampling plan. Nodes, and the paths leading to them, excessively close to the vantage point are oversampled in the original plan, and unnecessarily so. This costs in unwarranted network load, and in complaints of abuse by the nearest network neighbors. If the sampling plan can incorporate the square root factor, then the effort and load applied to that section of the network can be reallocated more usefully, at addresses farther away, with no additional cost. See? Math rules.

## Just a lil' bit o' a drought …

Nothing to be alarmed about … (!)

## Generation: Westwood Studios, September 2016

(Click on image to see a larger figure, and use browser Back Button to return to blog.)

As mentioned before, you can watch the generation yourself.

## Who paved the roads?

Professor Tony Seba of Stanford University is a great leader, visionary, speaker, and business expert.
He often starts his talks with two successive public domain images to illustrate technological and business disruption. These are shown below. One is a photograph of Fifth Avenue in New York City on Easter morning in 1900. The second is a photograph from almost the same place on Easter morning in 1913. Professor Seba's point, and in part mine, is that in one, transportation by the relatively wealthy is dominated by horse-and-buggy. In the second, a mere 13 years later, it is dominated by the automobile.

My point and question relate to a complaint made by some, who apparently ignore or disregard the tremendous subsidies provided to fossil fuels via tax incentives, direct subsidies, permission to drill on public lands, and the grant of eminent domain to their distribution networks: the complaint that zero Carbon energy, principally wind and solar, is an unfair competitor because it is heavily subsidized by local, state, and federal governments. To that point and complaint, I refer to these pictures and note that the road in 1913 is paved, in contrast with the dirt road of 1900. My question is: Who built and paid for the paved road?

The people who owned the cars were relatively wealthy, and were not in the majority. They did not pay for the paved roads out of their own pockets. The paved roads were key to the spread of the automobile, because the rough, bumpy roads literally shook early models apart. So, in order for automobiles to spread, something had to be done about roads, and that was expensive. Facts are, governments did something about it. In this case, it was New York City. But note this was done in the same span of time in which the automobile was adopted, obsolescing the horse-and-buggy, and changing forever the way that a city like New York would think about transport. And, to my mind, there is no difference between that and the subsidies given to wind, solar energy, and energy storage.

## Republican Governor Charles D. Baker, The Commonwealth of Massachusetts: On CLIMATE

An Executive Order, No. 569: ESTABLISHING AN INTEGRATED CLIMATE CHANGE STRATEGY FOR THE COMMONWEALTH

WHEREAS, climate change presents a serious threat to the environment and the Commonwealth's residents, communities, and economy;

WHEREAS, extreme weather events associated with climate change present a serious threat to public safety, and the lives and property of our residents;

WHEREAS, the Global Warming Solutions Act (the "GWSA") directs the Secretary of Energy and Environmental Affairs and the Department of Environmental Protection to take certain steps to reduce greenhouse gas emissions and prepare for the impacts of climate change, including setting statewide greenhouse gas emissions limits for 2020, 2030, 2040 and 2050;

WHEREAS, the statewide greenhouse gas emissions limit for 2020 is 25% below the 1990 level of emissions and the corresponding limit for 2050 is 80% below the 1990 level of emissions, but no interim limits have yet been set for 2030 or 2040;

WHEREAS, the Commonwealth can provide leadership by reducing its own emissions from state operations, planning and preparing for impending climate change, and enhancing the resilience of government investments;

WHEREAS, the transportation sector continues to be a significant contributor to greenhouse gas emissions in the Commonwealth, and is the only sector identified through the GWSA with a volumetric increase in greenhouse gas emissions;

WHEREAS, the generation and consumption of energy continues to be a significant contributor to greenhouse gas emissions in the Commonwealth, and there is significant potential for reducing emissions through continued diversification of our energy portfolio and adoption of a comprehensive energy plan;

WHEREAS, on May 17, 2016, the Supreme Judicial Court ruled that the steps mandated by the GWSA include promulgation of regulations by the Department of Environmental Protection "that establish volumetric limits on multiple greenhouse gas emissions sources, expressed in carbon dioxide equivalents, and that such limits must decline on an annual basis";

WHEREAS, while the ambitious goals for greenhouse gas emissions established by the GWSA will help to mitigate future climate change, strong and prompt action beyond emission reductions is required to meet the serious threats presented by climate change and associated extreme weather events;

WHEREAS, our state agencies and authorities, as well as our cities and towns, must prepare for the impacts of climate change by assessing vulnerability and adopting strategies to increase the adaptive capacity and resiliency of infrastructure and other assets;

WHEREAS, the Executive Office of Public Safety and Security and its constituent agencies, including the Massachusetts Emergency Management Agency, have deep institutional expertise in preparing for, responding to, and mitigating damage from natural hazards; and

WHEREAS, only through an integrated strategy bringing together all parts of state and local government will we be able to address these threats effectively;

NOW, THEREFORE, I, CHARLES D. BAKER, Governor of the Commonwealth of Massachusetts, by virtue of the authority vested in me by the Constitution, Part 2, c. 2, Section 1, Art. 1, do hereby order as follows:

• Section 1. The Secretary of Energy and Environmental Affairs shall coordinate and make consistent new and existing efforts to mitigate and reduce greenhouse gas emissions and to build resilience and adapt to the impacts of climate change. To achieve these objectives the Secretary shall lead the efforts set out in this Executive Order, and shall:
• a. continue to consult the GWSA Implementation Advisory Committee for advice on greenhouse gas emission reduction measures, including recommendations on establishing statewide greenhouse gas emissions limits for 2030 and 2040 pursuant to Section 3(b) of Chapter 21N of the General Laws by December 31, 2020 and December 31, 2030, respectively;
• b. expand upon existing strategies for the Commonwealth to lead by example in making new, additional reductions in greenhouse gas emissions from Government operations;
• c. work, in consultation with the Secretary of Transportation, with New England and Northeastern state transportation, environment and energy agencies to develop regional policies to reduce greenhouse gas emissions from the transportation sector consistent with meeting the GWSA's 2050 and interim emissions limits;
• d. continue to lead on reform of regional wholesale electric energy and capacity markets to ensure that state mandates for clean energy are achieved in the most cost-effective manner;
• e. publish, within two years of this Order, and update every five years thereafter, a comprehensive energy plan which shall include and be based upon reasonable projections of the Commonwealth's energy demands for electricity, transportation, and thermal conditioning, and include strategies for meeting these demands in a regional context, prioritizing meeting energy demand through conservation, energy efficiency, and other demand-reduction resources in a manner that contributes to the Commonwealth meeting each of these limits; and
• f. ensure that efforts to meet greenhouse gas emissions limits are consistent with and supportive of efforts to prepare for and adapt to the impacts of climate change and extreme weather events as detailed in Section 3 of this order.

• Section 2. The Department of Environmental Protection shall promulgate final regulations that satisfy the mandate of Section 3(d) of Chapter 21N of the General Laws by August 11, 2017, having designed such regulations to ensure that the Commonwealth meets the 2020 statewide emissions limit mandated by the GWSA. In order to ensure that the Department's regulations meet this requirement on this schedule, the Department of Environmental Protection shall:
• a. establish an internet portal through which interested parties, including affected businesses and members of the public, may propose regulatory approaches for the Department's consideration;
• b. revise the Global Warming Solutions Act requirements for the Massachusetts Department of Transportation set forth in 310 C.M.R. 60.05 to establish declining annual aggregate emissions limits;
• c. consider limits on emissions from, among other sources or categories of sources, the following:
• (i) leaks from the natural gas distribution system;
• (ii) new, expanded, or renewed emissions permits or approvals;
• (iii) the transportation sector or subsets of the transportation sector, including the Commonwealth's vehicle fleet; and
• (iv) gas insulated switchgear;
• d. publish, no later than December 16, 2016, the notice associated with these regulations as required by Section 5 of Chapter 30A of the General Laws; and
• e. hold, no later than February 24, 2017, a public hearing associated with these regulations as required by Section 5 of Chapter 30A of the General Laws.

• Section 3. The Secretary of Energy and Environmental Affairs and the Secretary of Public Safety shall coordinate efforts across the Commonwealth to strengthen the resilience of our communities, prepare for the impacts of climate change, and to prepare for and mitigate damage from extreme weather events. In order to facilitate this coordination, the Secretaries shall:
• a. within two years of this Order, publish a Climate Adaptation Plan that includes a statewide adaptation strategy incorporating:
• (i) observed and projected climate trends based on the best available data, including but not limited to, extreme weather events, drought, coastal and inland flooding, sea level rise and increased storm surge, wildfire, and extreme temperatures;
• (ii) guidance and strategies for state agencies and authorities, municipalities and regional planning agencies to proactively address these impacts through adaptation and resiliency measures, including guidance regarding changes to plans, by-laws, regulations, and policies;
• (iii) clear goals, expected outcomes, and a path to achieving results;
• (iv) approaches for the Commonwealth to lead by example to increase the resiliency of Government operations;
• (v) policies and strategies for ensuring that adaptation and resiliency efforts complement efforts to reduce greenhouse gas emissions and contribute towards the Commonwealth meeting the statewide emission limits established pursuant to the GWSA; and
• (vi) strategies that conserve and sustainably employ the natural resources of the Commonwealth to enhance climate adaptation, build resilience and mitigate climate change;
• b. within one year of this Order, establish a framework for each Executive Office to assess its and its agencies' vulnerability to climate change and extreme weather events, and to identify adaptation options for its and its agencies' assets;
• c. within one year of this Order, establish a framework for each City and Town in the Commonwealth to assess its vulnerability to climate change and extreme weather events, and to identify adaptation options for its assets;
• d. provide technical assistance to Cities and Towns to complete vulnerability assessments, identify adaptation strategies, and begin implementation of these strategies;
• e. implement the Climate Adaptation Plan upon its completion; and
• f. update the Climate Adaptation Plan at least every five years, incorporating information learned from implementing the Plan and the experiences of agencies, and Cities and Towns in assessing and responding to climate change vulnerability.

• Section 4. The Secretary of each Executive Office shall designate an existing employee to serve as the Secretariat's Climate Change Coordinator. Each Climate Change Coordinator shall:
• a. serve as the Secretariat's point person regarding climate change mitigation, adaptation and resiliency efforts;
• b. meet under the leadership of personnel from the Executive Office of Energy and Environmental Affairs and the Executive Office of Public Safety and Security to assist in the development and implementation of the Climate Adaptation Plan;
• c. within two years of this Order, assess the vulnerability to climate change and extreme weather events for the Coordinator's Executive Office and for each agency within the Coordinator's Executive Office and identify adaptation options for the assets of such Executive Office and agencies; and
• d. incorporate results from vulnerability assessments into existing policies and plans for the Executive Office and its agencies.

• Section 5. This Executive Order shall be reviewed no later than December 31, 2019, and every five years thereafter.

Given at the Executive Chamber in Boston this 16th day of September in the year of our Lord two thousand sixteen and of the Independence of the United States of America two hundred forty-one.

CHARLES D. BAKER
GOVERNOR
Commonwealth of Massachusetts

(Emphasis added.)

## "Predicting annual temperatures a year ahead" (Dr Gavin Schmidt at REALCLIMATE)

Dr Schmidt is essentially betting that the trend, seen as a random variable, will regress towards the smooth mean.
I have a post at Nate Silver's 538 site on how we can predict annual surface temperature anomalies based on El Niño and persistence – including a (by now unsurprising) prediction for a new record in 2016 and a slightly cooler, but still very warm, 2017. The key results are summarized in the figures that show how residual variations in the global temperatures (after detrending) related to the ENSO phase at the beginning …

## XKCD tells it all

Alerted to the existence of the image by Tamino. The figure is due to the irrepressible Randall Munroe.

## Bastardi's Bust

Famous climate denialist Joe Bastardi of WeatherBELL Analytics LLC, formerly of AccuWeather.com, made a prediction on Arctic ice recovery back in 2010 (when at AccuWeather), and observations have since made his "studies" laughable. I have heard his colleague, Joseph D'Aleo, speak at the Southern New England Meteorology Conference in 2015. Notably, he is also associated with the Heartland Institute, where he was/is a "resident expert".

Update, 2016-10-07: I did not realize that Tamino had had encounters with Joe Bastardi at his blog at the time I wrote this post. Here are the links:

## 'A Time To Choose'

Charles Ferguson and 'A Time To Choose'. (Much larger image available by clicking on photo. Use browser Back Button to return to blog.) Trailer:

## Hermine, 'Unique among Storms'

Post-tropical storm Hermine is the story of the emergence of weather chimeras. Simple. The forecasting precedents have changed. We cannot look to the past to anticipate the future any longer. We're playing by different rules. And we don't know what their implications are, because the experiment has never been run before. We're running it. And we have no idea what will happen. But we're continuing it nonetheless. This hasn't been anticipated. This hasn't been war-gamed.

See also Eric Holthaus' opinion, and another view he has, about risk. Welcome to the Hyper Anthropocene.

Hermine still developing.
Predictions are for it to hold in place off the East Coast for several days, due to a blocking pattern known as a "Rex Block". This and many gems come from Eric Holthaus' update, excerpted here, unusually placed, for a weather/climate piece, at election/polling guru Nate Silver's FiveThirtyEight.com.

Eric Holthaus at FiveThirtyEight:

Based on the current forecasts, Post-Tropical Cyclone Hermine is a storm without a good historical comparison. Hermine was once a tropical cyclone that made landfall in Florida, but that seems like ages ago. It has now transitioned to its post-tropical stage after moving northeast across land, off the coast of North Carolina, where it's partially drawing energy from the jet stream. Hermine is forecast to affect the Mid-Atlantic over the next several days as a hurricane-strength storm, with a potentially historic coastal flood. Of the 10 or so meteorologists I've talked to in the last …

## Once more, with feeling: Responding to Kostrzewa in The Providence Journal

It's making the rounds. Today it's John Kostrzewa, Assistant Managing Editor of The Providence Journal, arguing the necessity of natural gas and its pipelines with his "Why R.I.'s economy needs a natural-gas pipeline". And my response, below, which allowed me to dig a little deeper into these matters than I had time to do yesterday with the same kind of response for Massachusetts. The thing about the response for Rhode Island was that the character count for a response is severely constrained, meaning I could not document as many of my assertions as I would have liked. I have included them in the post below.

Mr Kostrzewa piles on to the usual arguments supporting the expansion of natural gas in New England, this time focussing upon Rhode Island: Prices are high because there's insufficient energy. Electricity prices are high, especially in winter, because there's insufficient natural gas. Businesses need energy for growth and, most importantly, for creating jobs. Natural gas produces jobs. All these are myths.

The fundamental fact about prices for a kilowatt-hour ("KWh") of electricity in New England is that, per person, we use less electricity. The expenses of the inefficient and old grids dating from the 20th century are spread over fewer KWh, so cost per KWh is higher. If the total cost of electricity paid per month is compared with other states, efficient New England and Rhode Island ride pretty low. The cost for Rhode Island is $107/month, the 11th cheapest in the entire country, compared with D.C. and New Mexico, which pay $82/month and $88/month, respectively, and South Carolina and Hawaii, which are tied for a whopping $177/month. Massachusetts pays $115/month and is the 16th cheapest. (These figures are available here.) Natural-gas-friendly Wyoming also pays $107/month for electricity, and that's not because electricity per KWh is expensive there. It's not. The U.S. Energy Information Administration ("EIA") gives it at $0.115 per KWh, compared with Rhode Island's $0.18 per KWh; Wyoming simply uses more per person. (Massachusetts electricity costs $0.1906 per KWh.)
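The distinction between rates and bills can be made concrete with the figures just cited; a back-of-the-envelope sketch (my own arithmetic, using the numbers above):

```python
# Implied monthly usage = monthly bill / price per kWh, using the cited figures.
ri_usage = 107 / 0.18      # Rhode Island: about 594 kWh/month
wy_usage = 107 / 0.115     # Wyoming: about 930 kWh/month

# Same monthly bill, but Wyoming households consume roughly half again as much
# electricity; New England's higher per-kWh price reflects lower usage spread
# over fixed grid costs, not a higher total cost of electricity.
print(round(ri_usage), round(wy_usage))
```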

One might as well argue that natural gas is responsible for the high electric rates, since it provided 94% of Rhode Island's electricity in 2014 and 95% in 2015. Oil actually increased its share from 2014 to 2015, from 1.4% to 1.5%. Waste-to-energy facilities produce 3%, and renewables a mere 0.4%. How much more gas can Rhode Island use? At most 6%. (See EIA data for all these.) Think building pipelines in Rhode Island is to help Rhode Island? No. This is Spectra/Algonquin madly trying to make up for the setbacks they've received on their Access Northeast pipeline project, before FERC shuts them down.

It's all about Joseph Schumpeter, people. (See also.)

## Gustin and companies lack technological and business imagination

Carl Gustin, a consultant to the New England Coalition for Affordable Energy, which “includes many of New England’s major business and industry organizations and labor representatives”, wrote an op-ed in favor of additional natural gas and pipelines for Massachusetts in Commonwealth‘s online magazine today. I posted a detailed rebuttal and, after an hour or so online, it was removed. I am reproducing it below. And here, on my blog, I can say more of what I think.

Mr Gustin and his New England Coalition are shills for fossil fuel energy companies that find themselves threatened with the surge in renewable energy. Clearly, like some presidential candidates, they have really thin skins.

Mr Gustin’s depiction of the electricity generation situation in Massachusetts and Texas is misleading at best, and, judging by the actual numbers at the U.S. Energy Information Administration, consists of cherry-picking from sources and years which make his case. Of course, that does not depict reality.

Let's take Texas, for example. Its electrical energy needs in 2015 were 14 times larger than Massachusetts's. Texas got 10% of its energy from wind, and a minuscule amount from solar (0.09%). It got 53% of its energy from natural gas and 9% from nuclear. Yet despite the large contribution from wind and large commitment to natural gas, over 2015 there was little correlation between use of natural gas and availability of wind (-0.3). That means, no, there is no offsetting of energy with natural gas when winds don't blow. Clearly the Wall Street Journal article was incorrect in its 16%-of-energy claim. And while Texas might be expecting a "huge surge in solar capacity", that's because it presently has essentially none, so of course if you start from a baseline of near zero, any growth seems like a lot. And one wonders what that false emphasis means about the perspective and motivations of the author.

In fact, in 2015, and in contrast, despite the difference in latitude and weather, Massachusetts got a full 2% of its energy from solar. Indeed, it got three times as much energy from solar as from wind, despite all the ballyhooing about offshore wind and onshore turbines. Since 2014, the amount of electrical energy Massachusetts generated from solar has doubled. In the first 6 months of 2016, not the sunniest season, it STILL got 2% of its electricity from solar. Wind generation, in contrast, remained flat from 2014 to 2015, at 0.7%. Solar could keep doubling or, at least, there’s no technical reason it could not. The feared grid-instability effects do not arise until solar reaches about 14% of total generation, and if it doubles each year, that’s 3-5 years away, probably 5 years, since Beacon Hill has it in a stranglehold.
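The doubling arithmetic behind that 3-to-5-year estimate is simple to check:

```python
import math

# If solar's share of Massachusetts generation starts at 2% and doubles each
# year, how many doublings until the ~14% level at which grid-stability
# questions are said to arise? (Both figures as given above.)
doublings = math.log2(14 / 2)
print(round(doublings, 2))  # about 2.81, i.e. roughly 3 strict doublings
```

So roughly 3 years at strict annual doubling; the 5-year end of the range allows for the policy drag mentioned above.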

And what of the offsetting of nuclear and coal with renewable energy in Massachusetts? Compared to Texas, we don’t have any to speak of. Certainly, there is no evidence that even the doubling of solar caused the decrease of Massachusetts electrical generation by nuclear from 18% to 15%, or by coal from 9% to 7%. In fact, it is mathematically impossible that renewables affected nuclear and coal generation AT ALL. In contrast, electrical generation using natural gas increased from 58% in 2014 to 64% in 2015. There’s your nuclear- and coal-killer.
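The share accounting behind that conclusion can be laid out explicitly, using the generation shares quoted above and taking solar’s 2014 share as 1% (inferred from its doubling to 2%):

```python
# Massachusetts generation shares by source, in percent (figures from the
# text; solar's 2014 share of 1% is inferred from its doubling to 2%).
shares_2014 = {"natural_gas": 58, "nuclear": 18, "coal": 9, "solar": 1}
shares_2015 = {"natural_gas": 64, "nuclear": 15, "coal": 7, "solar": 2}

changes = {k: shares_2015[k] - shares_2014[k] for k in shares_2014}
print(changes)
# Solar gained only about 1 point, while nuclear and coal together lost 5;
# natural gas alone gained 6, more than covering the nuclear/coal decline.
```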

And it is no wonder that GE is not a leader in renewable energy generation, at least not any longer. That field is dominated by Vestas, Siemens, Alstom Wind, Hitachi, and the surging Chinese producers CNR, CSIC, and Ming Yang. Placing bets on coal, against the outlooks of financial analysts like Bloomberg, is foolhardy, especially if, as Mark Carney, the Bank of England governor, suggests, these assets are likely to be stranded by regulation and insurance costs. If the op-ed is correct — and there’s plenty of evidence in its sloppy use of numbers elsewhere that it is not — GE may be turning to “clean coal” because it does not know how to compete in any other market.

The citizens of the Commonwealth ought not to be fooled. Companies deeply invested in fossil fuels are terrified that their markets and industries will encounter “Minsky moments”, causing their asset prices to suddenly plummet. Why else, despite natural gas’s stranglehold on a full 64% of electrical generation in Massachusetts, are the utilities and those companies, like Mr Gustin, crying that the sky is falling? Building generation and pipelines is for them an existential struggle, and they are trying to force governments into “sunk cost buy-ins” of their assets so that, no matter what disasters unfold, those governments will be stuck with their long term investments. They are, despite their pleas, no friends to renewable energy, and they are no “bridges to the future”. Their business plans contain no renewables-accelerated depreciation schedules or phase-out timetables. They want continued revenues.

In any case, as IBM, Kodak, and Barnes & Noble have painfully learned, if technology is on your competitors’ side, there is no winning against it, even if you, as a company, own the government. And it’s silly for citizens to bet on the wrong side, even if some will.

Late breaking: Why renewables are good for business, despite some claims otherwise.

Update, 2016-09-01, 22:21 EDT

I had a look at GE’s 10-K for 2015. After Mr Gustin and, presumably, his source, Rakesh Sharma at Investopedia, quoting the Wall Street Journal, gushed about GE’s pursuit of coal, both neglected to read that 10-K, which clearly states what GE wants from Alstom:

A new segment named Renewable Energy was created that includes GE’s legacy onshore wind business and the wind and hydro businesses acquired from Alstom.

GE Renewable Energy makes renewable power sources affordable, accessible, and reliable for the benefit of people everywhere. With one of the broadest technology portfolios in the industry, Renewable Energy creates value for customers with solutions from onshore and offshore wind, hydro, and emerging low carbon technologies. With operations in 40+ countries around the world, Renewable Energy can deliver solutions to where its customers need them most.

• Onshore Wind – provides technology and services for the onshore wind power industry by providing wind turbine platforms and hardware and software to optimize wind resources. Wind services help customers improve availability and value of their assets over the lifetime of the fleet.
• Digital Wind Farm is a site level solution, creating a dynamic, connected and adaptable ecosystem that improves our customers’ fleet operations.
• Offshore Wind – offers its high-yield offshore wind turbine, Haliade 150-6MW, which is compatible with bottom fixed and floating foundations. It uses the innovative pure torque design and the Advanced High Density direct-drive Permanent Magnet Generator. Wind services support customers over the lifetime of their fleet.
• Hydro – provides full range of solutions, products and services to serve the hydropower industry from initial design to final commissioning, from Low Head / Medium / High Head hydropower plants to pumped storage hydropower plants, small hydropower plants, concentrated solar power plants, geothermal power plants and biomass power plants.

Renewable energy is now mainstream, able to compete with conventional options on an unsubsidized basis in many locations today. New innovations such as the digitization of renewable energy will continue to drive down costs. Worldwide competition for power generation products and services is intense. Demand for power generation is global and, as a result, is sensitive to the economic and political environments of each country in which we do business. Our Wind business is subject to certain global policies and regulation including the U.S. Production Tax Credit and incentive structures in China and various European countries. Changes in such policies may create unknown impacts or opportunities for the business.

Yep, cherry-pickin’ at its worst.

About the only business GE is not in is solar PV.

What a shame. It was a good company, that GE, in its day.

## NextGen VOICES: ‘On data’, ‘On setbacks’, and ‘On discovery’

Science Magazine has a periodic column called Science in brief, and occasionally that column features a set of what they call “NextGen VOICES”, meaning young scientists. They gather the survey using Twitter (of course) via the hashtag #NextGenSci. For the week of 1st July 2016, Science reported:

In April, we asked young scientists to use exactly six words to create a story about the life of a scientist in your field. We received almost 400 responses, some frustrated, some inspiring, some humorous, and all describing a life unique to a scientist. We have printed some of the most interesting responses here.

Here are some excerpts, from topics of interest to me.

On data

Big data! Clean: No statistical power.
Abhishek Noroula, Bioinformatics, Sweden

Data overload: Juggling balls, many fall.
Noa Sher, Cell Therapy, Israel

P equals 0.051? Repeat? Abandon? Bayes?
Rosa Li, Psychology and Neurosciences, USA

On setbacks

Mice eaten by cats, graduation delayed.
Chenggang Yan, Intelligent Information Processing, China

Exciting new result! No … coding mistake.
Frank X. Vasquez, Chemistry, USA

Results were promising, until they weren’t.
David Edward Gilbert, Energy and Environmental Genomics, USA

On discovery

Scientist, looking closely, mistakenly finds truth.
Joshua Isaac James, Digital Forensic Science, South Korea

## ‘The Future of Energy’

Writing in Newsweek, Kevin Maney talks about the Future of Energy and Elon Musk’s long term plan to kill Big Oil.
Hat tip to Peter Sinclair at Climate Denial Crock of the Week, where I first found mention of the article, and to Rawstory, which reprinted Maney’s article from Newsweek with permission.

I am not posting an excerpt because Newsweek wants their income, but it’s good to read this, since I have learned and argued similar things elsewhere on this blog.

The largest rooftop PV installation in the Hannover region, and one of the largest installations in all of Niedersachsen at the time of its completion.

## “Sharon’s Water Problem” (by Paul Lauenstein)


The town of Sharon, MA, has a water problem. Click on the link and see Paul’s presentation about it.

Many places have water problems, but Sharon’s is severe, and emblematic of the poor planning and mismanagement of natural resources which characterizes local, regional, state, and federal governance.

As climate changes, it will get worse.

After all, there isn’t that much water to be had.

## “Getting our heads out of the sand: The facts about sea level rise” (Robert Young)

If current luck holds, North Carolina may well escape the 2013 hurricane season without the widespread damage that has so frequently plagued the fragile coastal region in recent years. Unfortunately, this brief respite is almost certainly only that — a temporary breather.

Experts assure us that the impacts of climate change (including rising oceans and frequent, damaging storms) are sure to remake the coast in myriad ways over the decades to come and will, quite likely, permanently submerge large tracts of real estate.

So, what does our best science predict? And what can and should we do — especially in a state in which policymakers have actually passed a law denying that sea level rise is even occurring?

Dr. Robert Young of Western Carolina University, professor of geology, an accomplished author, and a nationally recognized expert on the future of our developed shorelines, explores answers to these and related questions.

NC Policy Watch presents — a Crucial Conversation Featuring Dr. Robert S. Young, professor of geology and Director of the Program for the Study of Developed Shorelines at Western Carolina University.

See their Storm Surge Viewer, especially if you are interested in buying or developing shoreline property.

## Time to turn page on natural gas – CommonWealth Magazine

Related CommonWealth coverage: “Gas pipeline firm says it’s full-speed ahead”, by John Flynn, Lee Olivier, and Bill Yardley; “SJC nixes ‘pipeline tax’”, by Bruce Mohl; and “Energy bill a solid step (…)”.

Also see this, and this.

## Eversource withdraws from the Spectra-Algonquin “Access Northeast” pipeline project


Yes!

Now let’s hope the remaining customers for Spectra’s Access Northeast pull out, and FERC denies permission to proceed. Their next meeting is 22nd September 2016.

Update, 2016-08-24

National Grid and the rest of the utilities have pulled out of Spectra-Algonquin’s Access Northeast.

More.

## “Naïve empiricism and what theory suggests about errors in observed global warming”

A post from one of my favorite statistics-oriented bloggers, Variable Variability, dealing with a subject too casually passed over.

## “Understanding Climate Change with Bill Nye”, on Dr Neil deGrasse Tyson’s “Star Talk”

Bill Nye hosts Dr Neil deGrasse Tyson‘s Star Talk Radio, featuring climate change and NASA’s Dr Gavin Schmidt. (See also RealClimate.)

## ECS2x, land, sea, and all that

from http://dx.doi.org/10.1126/science.1203513

P.S. I wrote more here. Reproduced below …

Practical likelihood functions are very flat-topped, so the idea that a maximum likelihood estimate (MLE) can be confined to a point is a theoretical mirage. See Chapter 3 of S. Konishi, G. Kitagawa, Information Criteria and Statistical Modeling, Springer, 2008. Even if you want to set aside Bayesian considerations, whose priors tend to sharpen the posteriors, the best you can do is expected likelihoods, because likelihoods in practice, just like p-values, are random variables. Accordingly, the MLE is a neighborhood, because a point has probability mass zero.
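To see how flat such a top can be even in a textbook case, here is a toy sketch: a Gaussian sample of modest size, with the log-likelihood profiled over the mean. All numbers are invented for illustration:

```python
import numpy as np

# Simulated data: 25 draws from N(3, 1). The seed and parameters are
# arbitrary; the point is the shape of the likelihood, not the values.
rng = np.random.default_rng(42)
x = rng.normal(3.0, 1.0, size=25)

# Profile the log-likelihood of N(mu, 1) over a grid of mu (up to a constant)
mu_grid = np.linspace(1.0, 5.0, 2001)
ll = np.array([-0.5 * np.sum((x - mu) ** 2) for mu in mu_grid])

# The set of mu within 0.5 log-units of the maximum is an interval, not a
# point -- a "neighborhood" MLE, in the sense above.
near_top = mu_grid[ll >= ll.max() - 0.5]
width = near_top.max() - near_top.min()
print(round(width, 2))  # roughly 0.4 wide for n = 25
```

Since the width of that neighborhood shrinks only as 1/sqrt(n), even respectable sample sizes leave a visibly flat top.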

Besides, … the question of multimodality [wasn’t addressed]. The actual Expected Climate Sensitivity is a combination of the densities over oceans and over land, each of which has different distributions and modes. (See https://goo.gl/pB7H24 which is from http://dx.doi.org/10.1126/science.1203513) Accordingly, their combination is (at least) bimodal. Ocean ECS has 4 modes. Land ECS has 2 modes, one slightly higher than the other, the higher being at +3.4°C and the second at about +3°C. Worse, the variance of land ECS is over twice that of the oceans.
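The bimodality of the combination is easy to illustrate. The sketch below uses a single normal component for each domain (a deliberate simplification, since the ocean and land densities are themselves multimodal, as noted above), with invented parameters keyed only loosely to the figures in the text: a land mode near +3.4°C and a land variance more than twice the ocean’s.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2), written out to keep the sketch self-contained."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Invented stand-ins: ocean ECS ~ N(2.0, 0.4^2), land ECS ~ N(3.4, 0.7^2),
# mixed 50/50. Land variance (0.49) is over twice ocean variance (0.16).
x = np.linspace(0.0, 7.0, 1401)
mix = 0.5 * normal_pdf(x, 2.0, 0.4) + 0.5 * normal_pdf(x, 3.4, 0.7)

# Count strict interior local maxima of the combined density
interior = mix[1:-1]
peaks = int(np.sum((interior > mix[:-2]) & (interior > mix[2:])))
print(peaks)  # 2: the combination is bimodal
```

Even with unimodal components, the mixture shows two distinct modes so long as the component modes are well separated relative to their spreads.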

Finally, what you should be looking at is the ECS2x over land, not the combined figure. Even granting that one wants to go with the location of the highest mode, that’s +3.4°C.

Posted in climate | Leave a comment