Why smooth?

I’ve encountered a number of blog posts this week which seem not to understand the Bias-Variance Tradeoff in regard to mean squared error. These arose in connection with smoothing splines, which I was studying in connection with multivariate adaptive regression splines; the latter is actually something different from smoothing splines. (I will have a post here soon on multivariate adaptive regression splines, or the earth procedure, as it’s called.)

The general notion some people seem to have is that smoothing splines throw away information, introduce correlation where there isn’t any, and distort scientific data. A particularly obnoxious example of this is at science denier William Briggs’ blog. Another, milder instance is a blog post by a blogger called “Joseph” who specializes, he says, in “A closer look at scientific data and claims, with an emphasis on anthropogenic global warming.” I was going to put in a comment at the blog, but apparently comments there are closed, or at least no longer work. (Some links to data from that post no longer work, either.) So, instead, I’m putting it here. I already answered a question at Stats Stackexchange which invoked Briggs.

Smoothing is not about making a picture nicer or losing information. It is about the bias-variance tradeoff. Given that minimizing mean squared error is important when fitting data with a non-parametric (or, for that matter, any) model, introducing a bias into the model, such as the smoothing in a spline, can reduce variability and, so, reduce the overall mean squared error of the fit.
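To make the tradeoff concrete, here is a toy simulation (Python with numpy; a simple moving-average smoother of my own devising stands in for the spline machinery, with the window width playing the role of the smoothing parameter). The smoothed estimate is biased, yet its average squared error against the true curve beats that of the raw, unbiased data:

```python
import numpy as np

rng = np.random.default_rng(42)

def moving_average(y, w):
    """Crude smoother: mean over a window of half-width w (w = 0 means no smoothing)."""
    n = len(y)
    return np.array([y[max(0, i - w):min(n, i + w + 1)].mean() for i in range(n)])

x = np.linspace(0.0, 2.0 * np.pi, 100)
truth = np.sin(x)                      # the underlying signal we never observe directly

def average_mse(w, reps=200, sigma=0.5):
    """Mean squared error versus truth, averaged over many noisy realizations."""
    return np.mean([np.mean((moving_average(truth + rng.normal(0.0, sigma, x.size), w)
                             - truth) ** 2) for _ in range(reps)])

mse_raw = average_mse(0)       # unbiased but noisy: MSE is about sigma^2 = 0.25
mse_smoothed = average_mse(5)  # biased but far less variable: much smaller MSE
print(mse_raw, mse_smoothed)
```

The raw series is an unbiased estimate of the truth at every point, yet the deliberately biased smooth has the smaller mean squared error, which is the whole point of smoothing.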

The Wikipedia page on the bias-variance tradeoff shows the connection of mean squared error with bias and variance, and gives a proof of their relationship.
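Concretely, for an estimator $\hat{\theta}$ of a quantity $\theta$, adding and subtracting $\mathbb{E}[\hat{\theta}]$ inside the square and noting that the cross term has zero expectation gives

$\mathbb{E}[(\hat{\theta} - \theta)^{2}] = \underbrace{(\mathbb{E}[\hat{\theta}] - \theta)^{2}}_{\text{bias}^{2}} + \underbrace{\mathbb{E}[(\hat{\theta} - \mathbb{E}[\hat{\theta}])^{2}]}_{\text{variance}}$

so an estimator carrying a little bias can still win on mean squared error, provided it cuts variance by more.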

This was an important finding by Stein in 1955, and it gave rise to deliberately introducing some bias, via devices like James-Stein estimators, in order to improve overall performance. Prior to Stein’s insight, classical statistics considered only unbiased estimators, and that insight showed that procedures like maximum likelihood estimation are not optimal, even if they work well a lot of the time.
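A quick simulation sketch of the James-Stein effect (Python with numpy; the dimension, noise level, and true mean vector are arbitrary choices of mine, not from any particular source): shrinking the maximum likelihood estimate toward the origin introduces bias, yet lowers total mean squared error whenever three or more means are estimated jointly.

```python
import numpy as np

rng = np.random.default_rng(7)

p, reps, sigma = 10, 2000, 1.0
theta = rng.normal(0.0, 1.0, p)          # a fixed, arbitrary true mean vector

sse_mle, sse_js = 0.0, 0.0
for _ in range(reps):
    x = theta + rng.normal(0.0, sigma, p)              # one draw of N(theta, sigma^2 I)
    # Positive-part James-Stein: shrink x toward the origin.
    shrink = max(0.0, 1.0 - (p - 2) * sigma**2 / np.sum(x**2))
    sse_mle += np.sum((x - theta) ** 2)                # the MLE just uses x itself
    sse_js += np.sum((shrink * x - theta) ** 2)

risk_mle = sse_mle / reps   # close to p * sigma^2 = 10, as theory says
risk_js = sse_js / reps     # strictly smaller, despite the deliberate bias
print(risk_mle, risk_js)
```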

And, accordingly, “Joseph”’s criticism of the Law Dome CO2 data is not well founded. I bring his and the reader’s attention to a paper co-authored by Etheridge, one of the co-authors of the Law Dome work, about why smoothing splines are used.

Note that mean squared error is disguised within various powerful measures of model fit, like the Akaike Information Criterion.
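For instance, for a model with $k$ parameters fit by least squares under Gaussian errors, the criterion reduces, up to an additive constant, to

$\text{AIC} = n \ln(\text{RSS}/n) + 2k$

where $\text{RSS}/n$ is just the in-sample mean squared error; the $2k$ term charges for the extra variance that comes with additional parameters.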

Update, 2016-12-27: Smooth, yes, but don’t ever expect to see the smoothed curve realized

While the smoothed version of a series can, and often does, provide the estimate with the least mean squared error, if properly chosen, it is a different question whether presenting such a smoothed curve is the best way to convey the series, especially when communicating with the statistically uninitiated. The smoothed version of a curve is an idealization, intended for purposes of forecasting or prediction (they are not the same), and sometimes for helping to tease out the physical mechanisms giving rise to the observed phenomenon.

For one thing, the smoothed or idealized curve has zero probability of actually being realized, even on the span of support for which it is calculated. Actual realizations of the observed series will have excursions from the smooth, guided by the distribution of its residuals. These excursions are an intrinsic part of the series and, moreover, were it possible to draw another realization of the series, we should expect a different set of excursions to apply.

For another, the general public does not seem to get the idea of a data series with random excursions atop a pattern, and appears to approach these matters as if they were entirely deterministic. That’s a very classical kind of notion: the Watchmaker’s Universe. In this view, the only reason phenomena are not perfectly predicted is that we have but imperfect knowledge of the science involved, or of Nature, or something, and only a Deity knows these fully (the Deity also knowing what all individuals will choose, if Free Will is posited). A different, more modern view is that even a Deity cannot predict perfectly how another realization of these stochastic phenomena will play out.

So, to my mind, the best way to communicate this variability is to present the observed data from the series, present the smoothed realization, and then present a cloud or ensemble of draws from the smoothed curve, with excursions governed by the residuals atop it. For example,

(Click on image to see larger figure, and use browser Back Button to return to blog.)

By the way, the example above shows two competing models for the smooth to the data.
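That recipe, observed data, a smooth, and an ensemble of alternative realizations built from the residuals, can be sketched as follows (Python with numpy; the data are entirely synthetic, and a plain least-squares line stands in for whatever smoother is actually used):

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical observed series: linear trend plus noise.
x = np.arange(120)
observed = 0.02 * x + rng.normal(0.0, 0.4, x.size)

# A stand-in smooth; any smoother (spline, loess, ...) could replace the line.
smooth = np.polyval(np.polyfit(x, observed, 1), x)
residuals = observed - smooth

# The ensemble: each member is the smooth plus a bootstrap resample of the
# residuals, i.e., one plausible alternative realization of the series.
ensemble = np.array([smooth + rng.choice(residuals, size=x.size, replace=True)
                     for _ in range(50)])
print(ensemble.shape)
```

Plotting the observed points, the smooth, and the fifty ensemble members as a faint cloud conveys both the estimate and the variability around it.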

If dependent data are to be emphasized, then ensembles of tracks, such as the reasonably famous hurricane forecast tracks, are useful:

(Click on image to see larger figure, and use browser Back Button to return to blog.)

93% of year is free of cost, with heat, cooling, hot water, powered by free solar PHOTONS

A retrospective, something we all now need. Remember:

Well, today, we reached a landmark with our 10 kW solar array. The numbers aren’t completely in yet (*), since we don’t have a total for electrical energy consumption, but 93% of the days this year were powered by the energy generated by our 10 kW rooftop solar installation consisting of SunPower panels installed by RevoluSun.

This, admittedly, involves using the facilities of Eversource as a big energy storage facility, via net metering. It’s possible that, in the future, that role will be provided by someone or something else.

In addition, our nearly 10 MWh of generation this year will produce nearly 10 SRECs, which earn, per the Renewable Portfolio Standard incentives, an additional $2600 of income. And, note, we are heating, cooling, and heating hot water, all with high efficiency, zero Carbon energy. The overage of energy we take for the additional 25 days comes from wind farms, by our choice, paying a premium above the base rate from Eversource. Not too shabby. Not too shabby at all. And, yeah, we’re crazy about doin’ this. It’s awesome. Beat that, fossil fuels. You don’t need a Carbon Tax to be incentivized to do this. This is profitable right here and now. And this is snowy, cold Massachusetts. Think of what an Oklahoma could do.

(*) I will update these results in a later post when we have the full year’s energy needs complete.

Indigenous peoples win a reprieve, and, no matter what comes later, a solid victory

“The Army has determined that additional discussion and analysis are warranted in light of the history of the Great Sioux Nation’s dispossessions of lands, the importance of Lake Oahe to the Tribe, our government-to-government relationship, and the statute governing easements through government property.”

http://player.theplatform.com/p/7wvmTC/MSNBCEmbeddedOffSite?guid=n_msnbc_pipeline_161204

November 14, 2016
Moira Kelley (DOA), 703-614-3992, moira.l.kelley.civ@mail.mil
Jessica Kershaw (DOI), interior_press@ios.doi.gov

Washington, D.C. — Today, the Army informed the Standing Rock Sioux Tribe, Energy Transfer Partners, and Dakota Access, LLC, that it has completed the review that it launched on September 9, 2016. The Army has determined that additional discussion and analysis are warranted in light of the history of the Great Sioux Nation’s dispossessions of lands, the importance of Lake Oahe to the Tribe, our government-to-government relationship, and the statute governing easements through government property.
The Army invites the Standing Rock Sioux Tribe to engage in discussion regarding potential conditions on an easement for the pipeline crossing that would reduce the risk of a spill or rupture, hasten detection and response to any possible spill, or otherwise enhance the protection of Lake Oahe and the Tribe’s water supplies. The Army invites discussion of the risk of a spill in light of such conditions, and whether to grant an easement for the pipeline to cross Lake Oahe at the proposed location. The Army continues to welcome any input that the Tribe believes is relevant to the proposed pipeline crossing or the granting of an easement.

While these discussions are ongoing, construction on or under Corps land bordering Lake Oahe cannot occur because the Army has not made a final decision on whether to grant an easement. The Army will work with the Tribe on a timeline that allows for robust discussion and analysis to be completed expeditiously. We fully support the rights of all Americans to assemble and speak freely, and urge everyone involved in protest or pipeline activities to adhere to the principles of nonviolence.

Army POC: Moira Kelley, (703) 614-3992, moira.l.kelley.civ@mail.mil

The Department of the Army will not approve an easement that would allow the proposed Dakota Access Pipeline to cross under Lake Oahe in North Dakota, the Army’s Assistant Secretary for Civil Works announced today.

Jo-Ellen Darcy said she based her decision on a need to explore alternate routes for the Dakota Access Pipeline crossing. Her office had announced on November 14, 2016 that it was delaying the decision on the easement to allow for discussions with the Standing Rock Sioux Tribe, whose reservation lies 0.5 miles south of the proposed crossing. Tribal officials have expressed repeated concerns over the risk that a pipeline rupture or spill could pose to its water supply and treaty rights.
“Although we have had continuing discussion and exchanges of new information with the Standing Rock Sioux and Dakota Access, it’s clear that there’s more work to do,” Darcy said. “The best way to complete that work responsibly and expeditiously is to explore alternate routes for the pipeline crossing.”

Darcy said that the consideration of alternative routes would be best accomplished through an Environmental Impact Statement with full public input and analysis.

The Dakota Access Pipeline is an approximately 1,172 mile pipeline that would connect the Bakken and Three Forks oil production areas in North Dakota to an existing crude oil terminal near Patoka, Illinois. The pipeline is 30 inches in diameter and is projected to transport approximately 470,000 barrels of oil per day, with a capacity as high as 570,000 barrels. The current proposed pipeline route would cross Lake Oahe, an Army Corps of Engineers project on the Missouri River.

Statement from Attorney General Loretta Lynch:

A little history, from The Daily Show: I love the rhetorical question, “What are we gonna do, just not use oil?”, from Trevor Noah.

Update, 2016-12-06

For Energy Transfer Partners, which says the 1,170-mile pipeline is 92 percent complete, the move smacked of politics. In a statement Sunday night, the company said the “further delay is just the latest in a series of overt and transparent political actions by an administration which has abandoned the rule of law in favor of currying favor with a narrow and extreme political constituency.”

The “rule of law.” What law? A law that has bought and sold jurists to decide against the indigenous peoples of North America for two centuries? A law that has enabled stripping of forests and justified destruction of animals, of habitats, of systems upon which we, collectively and ultimately, depend? What constituency?
One that is hurling itself headlong off a cliff, so it can get to work five minutes faster, to earn money to Buy More Junk, in a season rationalized by appeal to a mere story about a deity who justifies that constituency’s mistreatment of other peoples, the land, and Nature because, well, the deity wouldn’t come here otherwise, would he? Extreme? Energy companies who haphazardly subject their employees and their families to cancer? And to dangers of mine collapse? And risks of explosion? Who put delivery of energy above life? Energy forms we don’t even need?

Am I angry? You bet. And I will delight in the day when the enablers, the stockholders of these companies, lose their shirts when their assets are stranded because they’ve been beaten by technology in the open market. That will happen, no matter what party is in control or who is in the White House. (It may not happen fast enough to save our grandchildren, but, hey, a big chunk of the American public has shown they don’t give a flea’s ass about that.) And the leaders of these companies? They’ll go off to some off-the-Florida-coast island and live out their days on the legal profits of their scam, at the stockholders’ expense. Do I care? No. The stockholders deserve every moment of fear and discomfort.

Cape Cod National Seashore: Testament to how fragile our collective hold is on any land

(Click on photo to see larger image, and use browser Back Button to return to blog.)

How Cape Cod changes.

(Click on photo to see larger image, and use browser Back Button to return to blog.)

“Climate Change and the Post-Election Blues” (a reblog of a post by Meredith Fowlie at The Energy Institute, BerkeleyHAAS)

Re: Meredith Fowlie, “Climate change and the post-election blues”, from The Energy Institute, BerkeleyHAAS

Some direction. My only comments regard Dr Fowlie’s LCoE analysis. While correct from its perspective, LCoE depends upon the viewpoint of the cost efficiency.
For example, because residential PV is generated close to the consumption point, it avoids Sankey inefficiencies from upstream, primarily conversion losses from stepping voltage up and down. So, from the perspective of cost of energy, there is a benefit to local generation. Note most wind generation does not have this efficiency either.

The other efficiency which an “at delivery point” LCoE fails to see is use of capital. In particular, private capital is being deployed to construct residential PV and, to some extent, wind. Now, one can argue that capital costs of wind are recovered from ratepayers, but in the case of solar PV, unless some of those incentives, like the ITC, are factored into the local cost of energy, seen as rewards for putting up capital, the price to the consumer using the PV is exaggerated. If they are not included, it seems that the social benefit of not having to raise or bear the cost of capital for that portion of generation ought to be reflected as well.

I am living in a very blue state. The graph below charts Google searches for “stages of grief”. The spike in grief-stricken web/soul searching corresponds with, you guessed it, the 2016 election. The map shows where, in the days following the election, these searches were happening. Not surprisingly, post-election blues show up disproportionately in blue states.

Graph: Generated by Google Trends (search term = “stages of grief”, region = United States). The numbers represent search interest (by week) relative to the highest point over the past 5 years. A value of 100 is the peak popularity for the term.

Map: Also generated by Google Trends; measures search term popularity as a fraction of total searches in that state. Deeper blue indicates higher popularity of Trump grief in the week following the election.
Many of us who are feeling blue about what a Trump presidency could usher in (or throw…

“… The Green-Feminine stereotype and its effect on sustainable consumption”

Updated, 2016-11-28

I heard about this study earlier this year, and queued it up for a careful examination. I got to that today. The article is:

A R Brough, J E B Wilkie, J Ma, M S Isaac, D Gal, “Is Eco-Friendly unmanly? The Green-Feminine stereotype and its effect on sustainable consumption”, Journal of Consumer Research, December 2016, 43(4), 567-582.

It’s not a study I would draw deep conclusions from, and I find their generalizations unwarranted. Sample smudges:

• Based upon 7 studies. One study had 127 university students, using a hypothesis drawn from the literature; 52% were male. A second had “… 194 students (45.9% male; mean age = 23.05) simultaneously recruited from two private universities to participate in an online survey.” (Emphasis added.) A third had “… 131 individuals (58.0% male; mean age = 35.21) recruited on Mechanical Turk to participate in an online study session.” A fourth had 403 “… American men (mean age = 32.68) were recruited from Mechanical Turk to participate in the study.” (Note that on the fourth study, the scholars qualified the results with “Although only males had been recruited to participate in this study, fourteen participants reported their gender as female and were therefore excluded. This left 389 participants for the analysis.”) A fifth had 472 “… participants (49.4% male; mean age = 35.29) recruited from Mechanical Turk completed the study.” There were also studies 6A and 6B, whose descriptions add no insight.

• For the interpretation of the first study: “There was no difference in IAT D-score by participant gender; F(1, 57) = .11, p = .74, ηp2 < .01, suggesting that both men and women cognitively associate the concepts of greenness and femininity.” I need not say more.
They are erroneously interpreting a high p-value as meaning the null is confirmed. Later: “… the target’s gender and environmental behavior remained significant predictors of femininity (p’s .37).”

• The litany of woes continues in later summaries: “Because the two dependent measures were highly correlated (α = .87), we averaged them to form a composite evaluation measure.”

The scholars draw overly strong conclusions from the limited data in hand. For example, they write:

… [W]e provide the first experimental evidence of the implicit cognitive association between the concepts of greenness and femininity (study 1), and show that this association can affect both social judgments (study 2) and self-perception (study 3) among both men and women. Focusing on the downstream consequences of this green-feminine stereotype, studies 4-6 suggest that as a result of gender identity maintenance, gender cues (e.g., those that threaten or affirm a consumer’s gender identity or that influence a brand’s gender associations) are more likely to affect men’s (vs. women’s) preferences for green products and willingness to engage in green behaviors.

Further, they have the audacity to claim “More generally, our findings also add to a growing body of research pointing to a link between identity and consumers’ tendency to engage in sustainable behavior.” What “growing body of research”? A correlation-only-based study like J A Lee, S J S Holden, “Understanding the determinants of environmentally conscious behavior”, Psychology and Marketing, 1999, 16(5), 373-392?

Apart from the sampling issues (Mechanical Turk? Really?), the wholesale neglect of repeated uses of the same population for successive tests with no corrections (even if this passes a ‘smell test’ in their field and, obviously, satisfies their peer reviewers), and the failure to estimate in-sample versus out-of-sample effects through some kind of bootstrap or cross-validation mean that, for all we know, these conclusions are limited to the samples the scholars took. Since Turk was used in most of the tests, it could not have hurt to repeat the same study with another draw from the general population for each, or to see how much their p-values varied with matched subsets of the samples they had. In the very first study, the scholars reported a p-value less than 0.001. That is extraordinary with a sample size of only 127.

Getting back to 350 ppm CO2: You can’t go home again

(Major update of this piece included below.)

You can’t. It’ll cost much more than 23 times (revised below: 40 times) the Gross World Product to do it. And, in any case, you need to go to where you need to be to avoid the problem in the first place. But I get ahead of myself …

[A qualification: The techniques described here are limited to those where the technology is identified well enough to be able to assign a cost estimate per tonne of CO2 for removal. I did not treat any speculative, as yet undeveloped techniques.]

Update, 2016-11-27

J Hansen, M Sato, P Kharecha, K von Schuckmann, D J Beerling, J Cao, S Marcott, V Masson-Delmotte, M J Prather, E J Rohling, J Shakun, P Smith, “Young People’s Burden: Requirement of Negative CO2 Emissions”, Earth System Dynamics Discussions, 2016, 1-40.

Update, 2016-11-28

Updated with footnote (*) addressing other geoengineering techniques like terraforming.

From time to time I’ve posted on technical efforts to reduce CO2 concentrations after having exceeded some viscerally objectionable level. I put up another post here.
However, despite discussions of this available on blogs, in technical literature, and in peer-reviewed journals, there is little discussion of the sheer cost and magnitude of doing this. There has been some discussion of the pitfalls of the cheaper solar radiation management, which limits warming but does nothing for ocean acidification, which, as the recent episode of Years of Living Dangerously testifies, will deprive at least hundreds of millions of people around Earth of their food supply.

I won’t repeat here why CO2 cannot reasonably be scrubbed by natural processes on any time scale which matters to people, or why the only way to stop the increase in concentration in atmosphere is to zero all CO2 emissions and related ones, like methane (CH4), which decomposes into CO2. This is purely a look at the economic feasibility of doing something after the fact, should people decide, collectively, that the consequences of emitting greenhouse gases at a rate faster than at any time in a hundred million years or so were a bad idea. And I won’t address how long it would take Earth to get sane again once such a project succeeded. Needless to say there are time lags involved, and anyone with experience shooting skeet well knows what happens if time lags are not considered during the exercise.

To begin with, the idea of clear air capture, or direct capture of carbon dioxide, is explained and argued by the great oceanographer and climate scientist, Professor Wally Broecker, in an article titled “Does air capture constitute a viable backstop against a bad CO2 trip?” Broecker concludes in that article:

Because of this very wide range, it is widely believed that the cost would lie somewhere in the middle, leading to a consensus cost of about 600 dollars a ton of CO2 (American Physical Society, 2011). If this proves to be the price, then air capture of CO2 is unlikely to be viable.
Professor Broecker does go on to urge research and development in such a massive global apparatus, concluding:

As much of the world’s GNP goes into producing CO2, reversing the trend by air capture will be a very expensive proposition. But looked at in a positive way, the capture and storage of CO2 would create an industry 10 to 20 percent the size of the energy industry (i.e., lots of jobs). Once implemented, it would raise the price of fossil fuel energy, supplying an additional edge for renewable sources.

But let’s see what’s meant here in terms of investment, using the American Physical Society price of US$600/tonne (2010 dollars) as a start, and how low the price per captured tonne of CO2 needs to be in order to be plausible.

We are currently at 404 ppm CO2:

(Click on image to see a larger figure, and use browser Back Button to return to blog.)
Depending upon success with curtailing emissions, represented by concentration pathways, these are the concentrations we might see:

(Click on image to see a larger figure, and use browser Back Button to return to blog.)
What this means in terms of forcings is summarized at Wikipedia with the conservative values** presented in the table below:

(Click on image to see a larger figure, and use browser Back Button to return to blog.)
(Details about RCPs are available here.)

To complete the picture, here are the latest forcing estimates, from Potsdam:

(Click on image to see a larger figure, and use browser Back Button to return to blog.)
The sea level rise impacts are probably understated in the Wikipedia table, due to underestimates of ice sheet effects, and poor constraints on process.

The figures suggest that if RCP 8.5 (“business as usual”) is pursued, 1220 ppm CO2 by 2100 is completely within reach. But to show how expensive clear air capture is, I’ll use RCP 6.0, which, by 2100, is emitting only half as much per year as RCP 8.5. Overall, RCP 6.0 ends up with 55% of the total cumulative CO2 emissions that RCP 8.5 does, and reaches 730 ppm at 2100. I’ll assume no negative emissions technology has been deployed at that point, and then assume it becomes instantaneously operational at 2100. I’ll further assume that the target of direct air capture is to reduce CO2 concentrations to the relatively benign, but still not completely safe, 350 ppm that the hard-working proponents of 350.org espouse. (If 350 ppm had never been exceeded, we’d still witness the eventual melt of a lot of ice sheets, although it would be slower.)

The first thing to realize is that direct air capture necessarily assumes emissions of CO2 have stopped, and the job is to draw down the concentrations that are there. While there is a natural decline in concentration, about 200 ppm in 400 years, and 250 ppm in 1000 years, it plateaus and decreases very slowly afterwards. The rule of thumb is that 40% of cumulative carbon dioxide emissions remains in atmosphere after 1000 years. If direct air capture were deployed while emissions continue, it would need to counter those ongoing emissions as well as work to draw down preexisting concentrations of CO2. Worse, to the degree that, for instance, fugitive CH4 and other species which decompose into CO2 are released, these would not be available for removal immediately, but would continue to contribute over their decay cycles.

Accordingly, deployment of direct air capture means that the entire economic cost of going to zero Carbon emissions is borne at the outset.

Then, assuming the climatic conditions associated with 730 ppm at 2100 for RCP 6.0 are intolerable, I assume the globe deploys direct air capture at US$600/tonne CO2. Note that such scrubbing of the atmosphere will not reverse sea level rise, since heat in oceans (and, in general, in water) is released only on time scales of tens of thousands of years. Moreover, there is a slow outgassing of CO2 from oceans once atmospheric concentrations diminish, and this outgassing proceeds only at a natural rate, one which may not be consistent with engineering targets. So, 730 ppm to 350 ppm involves direct capture and permanent sequestration of 380 ppm of CO2. Each 0.127 ppm corresponds to a billion tonnes (1 GtCO2) of CO2. Accordingly, $\frac{380\,\text{ppm}}{0.127\,\text{ppm/GtCO2}} = 2992\,\text{GtCO2}$, or about 3 trillion tonnes of CO2. At US$600 per tonne, that’s US$1800 trillion in 2010 dollars. To give you an idea of the size of this number, the entire gross world product in 2014 was $78 trillion. Accordingly, the cost of coming down 380 ppm after we zero CO2 emissions is $\frac{1800}{78} \approx 23$ times the gross world product of 2014. That’s simply not feasible in any scenario.
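The arithmetic can be checked directly (the constants are exactly the ones quoted above):

```python
# Cost of drawing 730 ppm down to 350 ppm at the APS consensus price.
ppm_to_remove = 730 - 350            # ppm of CO2 to capture and sequester
gtco2_per_ppm = 1.0 / 0.127          # each 0.127 ppm corresponds to 1 GtCO2
total_gtco2 = ppm_to_remove * gtco2_per_ppm        # about 2992 GtCO2
cost_usd = total_gtco2 * 1e9 * 600.0               # at US$600/tonne, 2010 dollars
gwp_2014_usd = 78e12                               # gross world product, 2014
print(total_gtco2, cost_usd / 1e12, cost_usd / gwp_2014_usd)
```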

How much cheaper must direct air capture get in order for it to be feasible? Well, let’s take a megaproject, like the construction of the Chunnel across the English Channel. This cost about US$7 billion in 1994 dollars, or about US$10.3 billion in 2010 dollars. So, suppose we are willing to spend the equivalent of 100 Chunnel projects, about US$1 trillion in 2010 dollars, to make civilization viable on Earth again. Assuming this measure of feasibility and plausibility, direct air capture of CO2 with sequestration needs to cost about $\frac{\$600\,\text{per tonne}}{1800}$, or US$0.33 per tonne, in 2010 dollars.
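Again, just checking the arithmetic of that feasibility bound:

```python
# If 100 Chunnel-sized projects (~US$1 trillion) is the most the world would spend,
# what must direct air capture cost per tonne?
chunnel_2010_usd = 10.3e9
budget_usd = 100 * chunnel_2010_usd        # about US$1 trillion in 2010 dollars
tonnes_co2 = 2992e9                        # ~3 trillion tonnes, from the calculation above
price_per_tonne = budget_usd / tonnes_co2  # roughly US$0.33-0.34 per tonne
print(price_per_tonne)
```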

I don’t care what technology you have in mind, that ain’t gonna happen.

Direct air capture of CO2 is tough because there are so few molecules per unit volume to catch.
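Some rough numbers of my own (a back-of-envelope estimate, not from any cited source) show why: at roughly 404 ppm, only about one molecule in 2500 in a parcel of air is CO2, so a plant must process on the order of a million cubic metres of air to capture a single tonne.

```python
# How much air must be processed to capture one tonne of CO2?
n_air = 2.5e25                       # molecules of air per m^3 at ~1 atm (approximate)
frac_co2 = 404e-6                    # 404 ppm by volume
kg_per_molecule = 44e-3 / 6.022e23   # mass of one CO2 molecule, from 44 g/mol
co2_kg_per_m3 = n_air * frac_co2 * kg_per_molecule   # roughly 7e-4 kg of CO2 per m^3 of air
air_m3_per_tonne = 1000.0 / co2_kg_per_m3            # cubic metres of air per tonne captured
print(co2_kg_per_m3, air_m3_per_tonne)
```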

Update, 2017-01-13

One important aspect the above neglects is CO2 dissolved in the oceans and captured by the soils. The point is made most directly in a 2015 paper by Tokarska and Zickfeld and in its supplement. The same idea was discussed earlier, not in the context of geoengineering through CO2 capture, but in terms of the lifetime of atmospheric CO2 and its effects after human emissions were zeroed. See Archer, et al, 2009, and Solomon, et al. The implications for global containment policy were described in a 2012 paper by Matthews, Solomon, and Pierrehumbert, where they argue (a) atmospheric concentrations and emissions intensity are, for physical reasons, not really useful gauges of progress in containing the effects of human-created climate change, and (b) there should be a renewed emphasis upon cumulative Carbon emissions. Their arguments seem to have been missed by many who think that if emission rates plateau we’ll see some useful response from the climate system.

In short, oceans and soils are, in the long run, in equilibrium with atmosphere with respect to any particular gaseous species like CO2. In the short run, they are not, because it takes time (decades) for oceans to take up their share of free CO2 because of complicated mixing processes. Similarly with soils, although I’ve never seen a time constant for that process. Not sure there is one. But in the end, oceans pick up 30% of human CO2 emissions. (Eli Rabett does a nice review of the chemistry here.) Soils and trees and things, primarily old growth forests and other terrestrial ecosystems, pick up another 25%. These figures are the result of careful Carbon accounting and measurements. (See also.) Assuming these continue picking excess CO2 up at these rates, should emissions stop, and then reverse with negative emissions technology, what would happen?

As concentrations of CO2 in atmosphere decrease, oceans and terrestrial ecosystems are out-of-equilibrium, so the process reverses: CO2 there eventually begins coming back out into the atmosphere. The net effect of this, and why my calculations here understate the cost of clear air capture, is that what needs to be removed is not just the concentration of CO2 in atmosphere, but essentially all that people have ever released to the climate system!

If these additional reservoirs are included, then the total cost of extraction and sequestration nearly doubles, giving the 40 times number quoted at the revised outset. There’s nothing special about that. This simply reflects the fossil fuel carbon dioxide that has been so far stored in oceans and forest soils of the total amount emitted, which is at least 50% of emissions.

And if that isn’t bad enough, there is some evidence (see Section 2.7) that these sinks of CO2 are slowing in their ability to temporarily hold CO2, meaning that more of it, as a fraction, will go into the atmosphere.

There’s an amazing group of people who keep track of Carbon accounts year over year. (Thanks to Glen Peters for pointing me to this.)

Now, it’s clear who is at fault and who, properly speaking, should bear the burden of doing these crazy things if they were needed and if they were feasible.

(Click on image to see a larger figure, and use browser Back Button to return to blog.)

(Click on image to see a larger figure, and use browser Back Button to return to blog.)

(Click on image to see a larger figure, and use browser Back Button to return to blog.)

Pick on China all you want, but since radiative forcing of atmosphere and oceans is due to cumulative emissions, not instantaneous emissions, Europe and the United States carry the greatest responsibility, including contributing their share of deforestation. Moreover, if China were excessively penalized, effectively this would be a tariff on products manufactured there, and, in the end, the consumers of North America and Europe would end up paying.

(Click on image to see a larger figure, and use browser Back Button to return to blog.)

Note that OCO-2, the satellite system*** which produced these figures (data products which support state efforts to manage their fossil fuel emissions), may be on President-elect Trump’s “hit list” of systems to be terminated, because of his commitment to shut down “the politicized science” of climate. (I’ll have more to say about that soon.)

* This post does not address terraforming techniques like those proposed by Ornstein, Aleinov, and Rind in the 2009 paper “Irrigated afforestation of the Sahara and Australian Outback to end global warming”, Climatic Change, 97, 409–437, or that by Becker, Wulfmeyer, Berger, Gebel, and Munch in 2013, “Carbon farming in hot, dry coastal areas: an option for climate change mitigation”, Earth System Dynamics, 4, 237–251. For a discussion of these proposals along with solar radiation management and other proposals like iron fertilization of oceans, see Keller, Feng, and Oschlies, “Potential climate engineering effectiveness and side effects during a high carbon dioxide-emission scenario”, Nature Communications, 5, 3304 (2014).

** As mentioned a bit later in the post, these are conservative because they do not reflect the full contribution of ice sheet disintegration and our scientific failure to account for all contributors.

*** A satellite does not suffice for producing such informative data. There are ground stations, previous calibration data, registration sets, and, of course, lots of talented people who operate such equipment and process data to produce usable products. This is the same with any set of scientific or, for that matter, intelligence community products.

“Negative emissions” (from ATTP)

Key quote-within-a-quote from article at And Then There’s Physics:

If the expected negative emissions cannot ultimately be achieved, the decades in which society had allowed itself a slower, softer transition would turn out to be a dangerous delay of much-needed rapid emission reductions. Saddled with a fossil fuel-dependent energy infrastructure, society would face a much more abrupt and disruptive transition than the one it had sought to avoid. Having exceeded its available carbon budget, and unable to compensate with negative emissions, it could also face more severe climate change impacts than it had prepared for.

I went to some Departmental talks recently and discovered that some of my colleagues are researching possible carbon sequestration technologies. This could be very important, but appealing to negative emission technologies is often quite strongly criticised. The basic argument (which has some merit) is that providing this as a possibility can provide policy makers with an argument for delaying action that might reduce emissions sooner.

Although I have some sympathy with these criticisms, I do have some issues with them. One is that it often involves criticising climate models that include negative emission pathways. The problem I have with this is that they seem to use “climate model” as a catch all for any kind of model associated with climate change. However, there are a large number of different models. Some are trying to understand how our climate responds to changes, and – typically – use concentration pathways. Others try…


the Pale Blue Dot, once again

No other perspective matters, however disenfranchised some may feel. For, ignoring that perspective, they will fight Something they cannot beat.

Remember, Nature Bats Last.

Obviously, sea level rise and climate change are a hoax …

(Click on image to see a larger picture, and use browser Back Button to return to blog.)

The seawater in that parking lot is a foot deep.

People can deny what’s happening in any of several varied ways. They can claim it does not affect them.

In the end, Nature will speak and, as Richard Feynman insisted, the “truth will out.”

(Photo credit and story to the Scituate Wicked Local of 15th November 2016.)

Update, 2016-11-17

More photos and a story from the Capital Weather Gang at the Washington Post.

Should choice belong to those who contribute the most?

When running a corporation there are various kinds of productivity measures that can be used. There are bizarre ones, like return on controllable assets (ROCA), and typical ones, like overall revenue or overall profit. When judging the productivity of employees and divisions, measures like revenue per division or revenue per employee are used.

If the United States were a corporation, and we further imagine states are comparable to divisions, the most valuable states are those with a high gross state product per person. This means that, in terms of the overall United States domestic product, these individuals are contributing more to overall wealth than are citizens in other states. Gross state product itself is interesting, but it is a less fair comparison, since states with simply large populations win over smaller ones.

What’s interesting is that, while there are exceptions, most of the high GSP-per-capita states voted Democratic in the recent 2016 Presidential election. In fact 70% did. These are the top 20, from the most productive per capita to the least:

1. Washington, DC (D)
2. Delaware (D)
3. Alaska (R)
4. North Dakota (R)
5. Connecticut (D)
6. Wyoming (R)
7. Massachusetts (D)
8. New York (D)
9. New Jersey (D)
10. Oregon (D)
11. Washington (D)
12. Virginia (D)
13. Maryland (D)
14. Texas (R)
15. Colorado (D)
16. Illinois (D)
17. California (D)
18. Nebraska (R)
19. Hawaii (D)
20. South Dakota (R)

It is interesting that D.C., while not a state, ranks above them all.
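The ranking is nothing more than gross state product divided by population, sorted in descending order. A minimal sketch of the computation, with made-up placeholder figures rather than the actual BEA data:

```python
# Rank states by gross state product per capita.
# GSP and population figures here are hypothetical placeholders.
states = {
    # name: (gsp_in_billions_usd, population_in_millions, party)
    "State A": (500.0, 5.0, "D"),
    "State B": (300.0, 10.0, "R"),
    "State C": (80.0, 0.7, "D"),
}

# billions / millions = thousands of dollars per person
ranked = sorted(
    ((name, gsp / pop, party) for name, (gsp, pop, party) in states.items()),
    key=lambda t: t[1],
    reverse=True,
)

for name, gsp_per_capita, party in ranked:
    print(f"{name}: {gsp_per_capita:.1f} thousand $/person ({party})")
```

Note that a small, rich state (State C above) can outrank a much larger one on this measure, which is exactly why D.C. tops the real list.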

So, in all the explaining that Trump won because people feel left out of the discourse, there are two points to be made.

First, it’s possible that those who have influence do so because they are, in fact, more productive. There is an argument, from a capitalist mindset, that those who contribute the most to the common enterprise should indeed have the most say.

And, second, people who feel left out could choose to move to a state where they contribute and, on average, earn more. That they don’t is actually their own choice.

So, ironically, it is not that states which vote Democratic have a lock on politics at the expense of those who do not. They also contribute to the common wealth the most. Shareholders in corporations with more shares have more say. Oughtn’t voters in such states do the same?

(Figure deleted, 4 December 2016.)

Update, 2016-11-20

I added something related to this in a comment at FT today:

I also think that the “elitism” of which some Trump supporters complain is itself a myth and a mirage. A possible response, noting that a substantial chunk of the GDP of the USA comes from those states accused of being elitist, is to, for a change, actually exert their power and influence as elites rather than trying to get along with everyone else.

If some people are going to claim they don’t want to have certain kinds of people in their states and their communities, some of those same people should understand very very clearly that their behaviors, attitudes, and expressed opinions will not be tolerated in communities and states which try to be open to all nationalities and outlooks from everywhere, whether Muslim, gay, Jewish, atheist, trans, bisexual, queer, or otherwise.

And, should there be an attempt at some kind of punitive response from the new Presidency and Congress, they need us more than we need them. We can find other customers. And they cannot replace the skills and capabilities of these “elitist” regions without, ironically, buying those skills from outside the United States. They can develop them elsewhere in the country, but that would take years and years, and could not, without diversity in origin and outlook, be successful. Just look at the makeup of any major university: MIT, Harvard, UC Berkeley, Duke, CalTech, University of Massachusetts, University of Texas, University of Chicago, and so on.

Moreover, corporations affected by such measures can work against them.

Hopefully it won’t come to that, but, as the reason for my FT comment shows, the Trump entourage has a really thin skin, and puts “winning” and its own image ahead of sense. They could really misfire in a big way, say, in cybersecurity.

Update, 2016-12-04

From The New York Times, 3rd December 2016.


Overreaction to the Trumpistas?

Anyone who thinks the reaction of people in the streets against the election of Donald Trump to be President is an overreaction, or, by extension, that the fierce opposition to the voters who chose to elect him is somehow uncomprehending or unfair, needs to have a good look at the responses of many other people unfiltered by what’s called U.S. media journalism.

I am shooting snaps of these sources and placing them here, partly because they are volatile and are likely to change in a day, and partly because some of them are behind paywalls.

In my review, some of the most startling coverage can be found on the international page of Der Spiegel/Spiegel ONLINE:

Accordingly, the reaction of people in the streets is not simply the response of some hotheads, or sore losers. A large chunk of the world population is just as concerned, even if they are not demonstrating. Some governments are reacting and posturing, too.

It’s also interesting to see Der Spiegel blame new media in part for this development, in their article on Google and Facebook.

It is self-delusion to take any comfort from the technical majority being in support of Clinton, since however you slice it, almost half voted for Trump.

Trump’s supporters cannot change natural reality, even through Trump; we can and should continue to act, in hope

I like his emphasis upon remembering residual damage, and that there’s no rewind button.

That’s Alex Steffen, who has the most optimistic post-Trump spiel I’ve read.

And there’s always the optimism that comes with China’s warning Trump about not taking action on climate.

Another Secretary of State showing “poor judgment” regarding classified information

Henry Kissinger was caught at the UN with a Top Secret document on his desk, improperly and carelessly, in a photo which appeared in Newsweek on 22 November 1971:

(Click on photo for a larger image, and use browser Back Button to return to blog.)
Where were Congress and the FBI then?

The illegal we do immediately. The unconstitutional takes a little longer.

— Secretary of State and UN Ambassador Henry Kissinger

“Trump can’t stop the energy revolution” (Bloomberg)

Excerpt:

Onshore wind is already cost-competitive with natural gas in the U.S. and solar costs are falling rapidly. This chart shows the levelized cost of electricity (LCOE) for various generating technologies entering service in 2022, taking into account capital, fuel, financing, maintenance and other costs. It’s subject to a lot of assumptions, but you’ll get the general idea.
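For readers unfamiliar with the metric, LCOE is roughly the ratio of discounted lifetime costs to discounted lifetime generation. A simplified sketch with invented numbers (real LCOE calculations also handle financing structure, taxes, and output degradation):

```python
def lcoe(capital, annual_cost, annual_mwh, years, discount_rate):
    """Simplified levelized cost of electricity, in $/MWh.

    capital: upfront cost; annual_cost: yearly O&M + fuel;
    annual_mwh: yearly energy output. Costs and output are both
    discounted at discount_rate over the plant's life.
    """
    disc = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    total_cost = capital + annual_cost * sum(disc)
    total_mwh = annual_mwh * sum(disc)
    return total_cost / total_mwh

# Hypothetical comparison: capital-heavy solar vs fuel-heavy gas,
# same output and lifetime. Figures are illustrative only.
print(round(lcoe(1_200_000, 15_000, 2_000, 25, 0.06), 2))  # "solar"
print(round(lcoe(900_000, 60_000, 2_000, 25, 0.06), 2))    # "gas"
```

The point of the chart in the excerpt is that, under reasonable assumptions, these ratios are converging, with wind already at or below gas.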

Prescience and Response

I have a foreboding of an America in my children’s or grandchildren’s time — when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what’s true, we slide, almost without noticing, back into superstition and darkness…

The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30 second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance.

Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark, 1997 (Emphasis added)

I’ve often thought that Sagan’s Science as a Candle ought to be required reading for UUs, if anything could be made required reading. 😉

On climate, our task now is not to put our heads down, save what little we can.
Our task is to raise our vision, and fight for everything.

Alex Steffen, 12:42 EST, 9 November 2016

On the rise of the Trumpistas …

Just a couple of things to write about The Obvious. I have written a couple of longer thoughts as Comments, here and here, at … And Then There’s Physics.

I reiterate that I don’t believe any voter was hoodwinked, that they knew exactly what they were getting with the Trump junta, and that it represents a significant portion of America becoming unhinged from reality. It’s possible to speculate why, but that is probably not really constructive, just consoling. The United States seems more and more an empire in decline*. It is not that it isn’t a great country (of course it is); it’s that when you’ve been an empire, a certain kind of self-image and set of expectations comes with that, and when the country no longer seems “on top of the hill,” whether economically or internationally, history (notably Great Britain’s) suggests the public does not like it. I think people want the lifestyle and economy they came to expect, and it is not in hand. I say this has nothing to do with cultural changes or immigration or increased numbers of guaranteed social goods or government spending. Rather, we, as a country, have used up the technological and creative capital which came with a generation of kids, now upper-middle-aged adults, who sought after and were trained in engineering and science, and gave us the best productivity ever seen. That time is past. And we are paying for it.

I entirely expect Science in the United States will suffer, and that’s all the more reason to support your favorite institution yourself (see the Donate link) and to support organizations which defend Science, like AAAS and UCS. It’s only logical that federal support for climate science will be hobbled, now that Smith and Inhofe have free rein. It might even be made illegal for a federal worker to utter “climate change”, as it was in Florida. I expect a flight of scientists from the federal establishment to other places, and probably to other countries where they can.

I entirely expect the line that scientists are “climate alarmists” will get tremendous play, from leaders and in media, because, for the most part, the media are cowards.

And, unfortunately, the Trumpistas will assure that serious action on climate mitigation is delayed yet another 8 years, even if Democrats regain power in 2020, and that a bunch of fossil fuel infrastructure with half-century lifetimes gets built. The United States already has a lot to answer for in creating the climate emergency. It will have a great deal more.

But, no matter, reality is reality, and Nature will respond, eventually.

There are some things which won’t change.

While the solar energy stocks today are taking a hit (but, by the way, as is the entire energy sector, like Exxon-Mobil, although they aren’t taking as big a hit), and I’m sure the solar ETF TAN will sell off substantially, the facts are that unsubsidized solar PV and unsubsidized solar PV+storage are on track to dominate fossil fuels and conventional energy generation, based purely on cost. Adoption will slow, but the areas which have the wisdom to invest in these technologies, even in the absence of incentives, will come out ahead after these Dark Times.

And, especially in a world where people are encouraged to pursue the RCP 8.5 scenario, it’s entirely possible we’ll hear “bigly” from Nature sooner rather than later. One thing that has changed with the ascendancy of the Trumpistas is that this could, unfortunately, in the end be a very good thing, especially if the Midwest United States suffers from it. That’s cruel, but I don’t know how else to knock sense into people. Discussion and politics and other compassionate means obviously do not work. The Trumpistas will no doubt label such events as unfortunate natural occurrences.

Personal choice and action are more important than ever.

Welcome, by the way, to the zombie apocalypse.

* This is a new view on my part.

Westwood Solar & Energy Fair

Today.

Position yourself to ride the Energy Revolution. Adapt to warming due to human-caused climate change in the Northeast U.S. by changing over your heating and cooling sources. Make Money. Increase the value of your home. Move towards your residential independence from the grid. Have a softer imprint on ecosystems.

How?

Come and find out.

Update, 2016-11-06

Update, 2016-11-07

Why natural gas is a problem for the Massachusetts GWSA

The Massachusetts Global Warming Solutions Act (“GWSA”) requires that Massachusetts reduce its emissions in four important sectors to at least 80% below their 1990 levels by 2050, and that its emissions decrease year after year, beginning from when the law went into effect in 2009. The figure below, based upon just-released data from the U.S. Energy Information Administration through December 2014, shows why increasing use of natural gas is a problem for compliance with the GWSA, even if it provides temporary relief by weaning Massachusetts off coal and oil:

(Click on graph to see a bigger picture, and use browser Back Button to return to blog.)
The data are shown in dots, and a smoothing spline is used to interpolate and then extrapolate the trends. The spar (or regularizing coefficient) for the smoothing splines used was always 0.7.
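The fits above were done in R with smooth.spline and spar = 0.7. For readers who want to experiment, here is a rough Python analogue using SciPy; the data below are synthetic stand-ins, not the EIA series, and SciPy’s smoothing parameter s is on a different scale than R’s spar:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic stand-in for annual emissions data: a noisy declining trend.
rng = np.random.default_rng(7)
years = np.arange(1980, 2015)
emissions = 90.0 - 0.8 * (years - 1980) + rng.normal(0, 3.0, years.size)

# Smoothing spline: s trades fidelity (variance) for smoothness (bias);
# larger s gives a smoother, more biased fit. Here s is set near
# n * noise_variance, a common rule of thumb.
fit = UnivariateSpline(years, emissions, s=len(years) * 9.0)

# Interpolate within the record and extrapolate the trend a few years out.
grid = np.arange(1980, 2020)
trend = fit(grid)
print(trend[:3])
```

The bias introduced by smoothing is deliberate: as argued at the top of this post, it buys a reduction in variance and hence in overall mean squared error.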

Point update, 2016-11-05, 10:39 EDT
[Due to a question from Paul Lauenstein, I am adding that, apparently, the emissions above include methane emissions from leaks, as the cited page states “These leaks were the source of about 29% of total U.S. methane emissions, but only about 2% of total U.S. greenhouse gas emissions in 2013”. The latter is a little disingenuous, since in 2013 natural gas was a relatively small part of the energy mix and long term it will be larger. But, nevertheless, the chart apparently includes these leaks in its trending for natural gas. Note that if these leaks were to be fixed, to the extent they can be, then the natural gas trendline will be shallower.]

Massachusetts seems to be doing well on its targets, as long as it can keep reductions going, and there isn’t too much uncertainty (“variability behind the scenes”) in the CO2 emissions. Unfortunately, natural gas use is increasing, and at some point emissions from burning natural gas will predominate in the Commonwealth’s emissions. At that point, natural gas infrastructure will need to be retired if the trend is to continue. None of the existing natural gas infrastructure built in the last decade has early retirement or accelerated depreciation built into its price. I made that point in my testimony from yesterday. Some time around 2030, natural gas is going to have to begin to rapidly go away, to be replaced by zero Carbon sources.

We might as well transition rapidly to zero Carbon sources of energy now. Even allocating just a significant fraction of the investments proposed for natural gas infrastructure would yield a great deal of efficiency measures, energy storage, and zero Carbon generation, allowing the Northeast United States, and Massachusetts in particular, to continue as a champion of clean energy.

Update, 2016-11-04

The choice of spar in the above was somewhat arbitrary, so I re-did the calculation using an evidence-based metric, and penalized spline regression via the R package pspline. The message turns out the same:
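pspline’s sm.spline chooses its penalty by cross-validation rather than an arbitrary spar. A minimal sketch of the same idea in Python, scoring a handful of candidate smoothing levels by leave-one-out cross-validation on synthetic data (the candidate grid and noise level are made up for illustration):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic annual series, a stand-in for the real emissions data.
rng = np.random.default_rng(11)
x = np.arange(1980, 2015, dtype=float)
y = 90.0 - 0.8 * (x - 1980) + rng.normal(0, 3.0, x.size)

def loo_cv_score(s):
    """Leave-one-out cross-validation error for smoothing level s."""
    errs = []
    for i in range(x.size):
        mask = np.ones(x.size, dtype=bool)
        mask[i] = False
        fit = UnivariateSpline(x[mask], y[mask], s=s)
        errs.append((float(fit(x[i])) - y[i]) ** 2)
    return float(np.mean(errs))

# Evidence-based choice: pick the candidate minimizing out-of-sample error.
candidates = [50.0, 150.0, 300.0, 600.0, 1200.0]
best_s = min(candidates, key=loo_cv_score)
print(best_s)
```

This is the “evidence-based metric” idea in miniature: the data, not the analyst, pick the degree of smoothing.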

(Click on graph to see a bigger picture, and use browser Back Button to return to blog.)

The code, data, and the above figure are available.

Update, 2016-11-06

(Click on graph to see a bigger picture, and use browser Back Button to return to blog.)

Confidence and prediction intervals derived from the U.S. EIA data for Massachusetts overall emissions and natural gas emissions. Intervals were obtained by 3-fold validation, sometimes called 3-fold jackknifing. Essentially, all combinations of the 35 data points in each set taken 35 − 3, or 32, at a time were formed, and splines were constructed in each case. Each of these was then plotted. The draws from the natural gas set were independent of those from the MA emissions set.

A surprise is that the data themselves suggest that MA emissions could increase and that natural gas emissions could decrease. Also, the prediction interval densities are strikingly multimodal.
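The delete-3 procedure described above can be sketched as follows; again the data are synthetic stand-ins, and only a single extrapolation target is collected rather than whole curves:

```python
import itertools
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic stand-in for 35 annual observations.
rng = np.random.default_rng(3)
x = np.arange(1980, 2015, dtype=float)
y = 90.0 - 0.8 * (x - 1980) + rng.normal(0, 3.0, x.size)

# Delete-3 jackknife: refit a smoothing spline on every 32-point subset,
# collecting each fit's extrapolated value at a target year.
target_year = 2020.0
predictions = []
for drop in itertools.combinations(range(x.size), 3):
    mask = np.ones(x.size, dtype=bool)
    mask[list(drop)] = False
    fit = UnivariateSpline(x[mask], y[mask], s=9.0 * mask.sum())
    predictions.append(float(fit(target_year)))

# A crude 90% prediction band from the spread of the refits.
lo, hi = np.percentile(predictions, [5, 95])
print(len(predictions), round(lo, 1), round(hi, 1))
```

With 35 points there are C(35, 3) = 6545 refits per series, which is why the plotted bands are dense enough to reveal multimodality.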

Testimony to MassDEP, M.G.L. chapter 21N, section 3(d) (from the Global Warming Solutions Act)

I testified to the Massachusetts Department of Environmental Protection (“MassDEP”) yesterday regarding means of enforcing limits as required by the Massachusetts General Laws, Chapter 21N, Section 3(d), otherwise known as (a portion of) the Massachusetts Global Warming Solutions Act, as recently interpreted by the Massachusetts Supreme Judicial Court, in Kain & Others vs the Massachusetts Department of Environmental Protection. Below are my verbal comments, although I presented them ad libitum to keep them shorter, so others would have time to speak. (I was the first one to testify.) I have also submitted written comments which are available as a PDF document here.

Thank you for this opportunity to testify regarding the Global Warming Solutions Act (“GWSA”) and the roles of MassDEP and EER.

I would like to make three points, and I will leave further details and documentation to the
written version of my testimony.

1. Measure emissions, don’t merely accumulate self-reported numbers and project trends.
2. Compliance with GWSA is a large challenge for the Commonwealth. Its management and administration deserves additional people and additional funding. The Department and its allied teams in DPU and DOER should propose such in the next budget cycle.
3. Use the markets, and stop getting in their way when they can help achieve the purposes of the GWSA.

On the first point, the 2020 Plan for implementing the GWSA, as recently clarified by the Supreme Judicial Court, continues to make sense only if goals were being pursued, but not limits. I mean these terms narrowly, in the manner they were used in the SJC case. To assure annual reductions in emissions, assessments of their point-in-time volumes must necessarily take much less than a year to complete. I urge the Department to pursue a campaign of scientifically monitoring emissions independently of its established system of reporting, even if such reports are based upon ANSI standards. Language to support such monitoring activities is in the GWSA itself and in the SJC’s decision. There are several ways this can be done, which I have detailed in my written statement. For example, such a system is in place in California, operated by that state’s EPA and its Air Resources Board. Such measurement is cheaper than onerous life cycle inventories of greenhouse gas emissions and reporting.

On the second point, compliance with GWSA limits is a big job. It is bigger than compliance with mercury reduction two decades ago, when the MassDEP staff was double what it is presently. The legislature cannot seriously expect such enforcement without providing adequate staffing and funding to the Department and its allied teams, DPU and DOER. This administration can begin by proposing additional staff in its budget. The means by which these funds are raised might help nudge us collectively towards the GWSA’s limits. Staff can be thinned as GWSA limits are met.

On the third point, in my opinion, the legislature and the administration are indirectly making the achievement of GWSA limits more difficult. By leaving, and sometimes creating, obstacles in the way of the markets and technological innovators, they are costing the Department and the Commonwealth more time and treasure than otherwise would be needed to achieve GWSA limits. I speak of the energy revolution which attends the dramatic improvements exemplified by the experience curves for solar PV and storage. While the administration and legislature have done much to introduce the Commonwealth to these technologies, and they should be thanked for their efforts, they also support and introduce roadblocks, such as caps on incentivized participation. These impede free market competitive challenges and aggressive innovation, primarily by small companies. The road to the 2050 GWSA limits is made all the harder by equivocating about how we can continue to use fossil fuels and still decarbonize. No company has the early retirement of its fossil fuel infrastructure on its depreciation schedules.

Thank you for listening.

An example of technology the future will bring … Solpad.

In a recent interview, Professor Tony Seba of Stanford University predicted that solar+storage would achieve parity with average grid transmission costs by 2022. This is what he called “god parity”: even if utilities generated at zero cents per kWh, they could not compete, because transmission and distribution costs alone would exceed the cost of on-site solar+storage.

Well, today I learned of the second revolutionary technology, after SonnenCommunity, that comes packaged and disruptive, ready to tell utilities and regulatory agencies to Go Away when they become, uh, difficult.

I’m not saying it works. I’m not saying it will win. I don’t know how efficient it is. But it is an example of the kind of high technology competition electric and gas utilities will be facing in the next several years. They are, as I’ve written before, the walking dead. If they want a future, they need to get with the program, and start supporting solar PV+storage in a big way. Otherwise, they will get the oblivion they deserve, as will all who continue to support them.

Introducing Solpad, the residential microgrid, by SunCulture Solar.

Take that, Elon Musk.

Note that a modular system like Solpad, with its trivial interconnections, threatens to disrupt not only fossil fuel energy, but existing solar installation companies. The simplicity of interconnect might be worth some efficiency loss.

If Donald Trump WERE to get elected …

… Yeah, think the unthinkable: Suppose Donald Trump were to get elected. From recent behaviors of markets, it’s plausible that having him as President, or his reversing moves to zero Carbon energy, or banning research on climate change would be the least of our collective troubles, at least initially. In fact, it would bring new meaning to the term “dump Trump”.

In particular, the U.S. national debt is sustainable only to the extent that others are willing to extend us credit. Our country has been, until recently, considered the safest of safe havens. Countries have, in the recent past, threatened to liquidate their investments here for a variety of reasons. Saudi Arabia is the third largest owner of such debt (petrodollars), and has threatened to dump it over the Congressional probes of its role in the 11 Sept 2001 attacks. It could liquidate for other reasons. China threatened to dump Treasuries back in 2007 and actually liquidated a bunch in 2015. My idea and concern are hardly original:

Any suggestion that he might promote populist fiscal policies would point towards deficit spending and, therefore, higher yields. Another important consideration would be his administration’s attitude towards the Fed. Even a hint that the central bank’s independence might be attacked would spook bond markets.

Further afield, an antagonistic foreign policy could compound this situation. The US relies on foreign investors to fund its current account deficit and a significant share of the Treasury market.

Any wrong move internationally might risk the willingness of foreign central banks to maintain the high levels of reserves they currently hold in Treasuries. Equally, foreign investors own a significant chunk of the US corporate bond market; recent estimates put the figure at about 35 per cent and rising after a period of sturdy growth in overseas appetite. International relations would need to be highly mindful of this domestic cornerstone.

Mr Trump’s mooted policy on redeeming Treasury debt under par could also have a radical impact if it were followed through. In a television interview this year, the Republican nominee outlined the approach as a solution to the US government’s dependence on low interest rates to keep its debt sustainable.

Of course, there is the question: if investors dump dollars, where will they place their assets that is safer? But I think that’s precisely the point: if Trump were elected, the safety of the United States as a haven for wealth would be greatly undercut. The expatriation of U.S. wealth would be a crisis, and investors need to consider the possibility that a mercurial President Trump would take direct action and prevent currency from leaving by fiat. And China found a place to put its US$200 billion when it dumped in 2015.

If Treasuries and stocks were liquidated, not only would there be a 1929-style crash, but the currency would move into deflation, jobs would be lost by the millions, companies would go bankrupt, and tangible assets and production independent of currency would be financial king. People would lose their retirements, invested in the stock and bond markets since the 1980s Republican conversion of retirement to self-management, and the numbers on public assistance would go way up. Ironically, a President Trump would face a situation as bad as, and possibly worse than, the one President Obama faced when he came into office, but in this case it would have been instigated by the threatened isolationist and protectionist policies Trump championed.

Interesting …

In such an environment, revenue and production from home solar panels priced in terms of deflated dollars with minimum price guaranteed by the state would be handsome. Just trying to look on the bright side of a horrible possibility …

What’s going on in the ocean off the Northeast United States

Hint: Climate change has somethin’ to do with it.

Schematic diagram illustrating the component parts of the AMOC and the 26° N observing system. Black arrows represent the Ekman transport (predominantly northward). Red arrows illustrate the circulation of warm waters in the upper 1100 m, and blue arrows indicate the main southward flow of colder deep waters. The array of moorings used to measure the interior geostrophic transport is illustrated too.

(Above is from Figure 1 of “Observed decline of the Atlantic meridional overturning circulation, 2004–2012”, D. A. Smeed, G. D. McCarthy, S. A. Cunningham, E. Frajka-Williams, D. Rayner, W. E. Johns, C. S. Meinen, M. O. Baringer, B. I. Moat, A. Duchez, and H. L. Bryden, from Ocean Science, 10, 2014, 29–38.)

AMOC slowdown: Connecting the dots, by Stefan Rahmstorf, on RealClimate.

“Hurricanes, Sea Level, and Baloney” (from Tamino)

WUWT has a post in which Neil Frank proclaims that Hillary Clinton is no hurricane expert but he is. (Frank’s post was originally published on The Daily Caller, but was reprinted on WUWT with permission.) He objects to Clinton having recently said that “Hurricane Matthew was likely more destructive because of climate change.”


Evidence this form of government and Constitution does not know how to address climate disruption

Evidence this form of government and Constitution does not know how to address let alone solve climate disruption …

Look at the topsy-turvy plight of Washington State’s Initiative 732 Carbon Fee-and-Dividend bill: Even people who think it’s a good idea cannot get out of each other’s way. Even James Hansen has a different take on it.

And, of course, there is also the matter of the previous blog post.

On failing to learn important lessons

As previously posted here, people along coasts, and their governments, are failing to learn the lessons of both climate-induced sea level rise and storms like Extratropical Sandy.

Now, it’s startlingly clear how ignorant people are of these necessary lessons. The New Jersey coast, where Sandy hit the hardest, has been rebuilt, utilities and all. Acknowledged, the homes are on stilts, and some of the local officials doubt the wisdom of this rebuild, even if they went along. But here, in great detail, is a record of how people of some means are going to try to deal with the inevitability of climate change. It is an article from Inside Climate News, by Leslie Kaufman, which details not only this reconstruction, but the federal subsidies and incentives which implicitly refuse to admit climate change, whether because federal agencies are hamstrung by legislative constraints, or because citizens simply insist upon their right to occupy their property, no matter what the evidence otherwise shows.

It is ironic to me that Texas, a state much maligned by the environmentalists and liberals of the Northeast, not only is beating California in zero Carbon energy generation (excluding hydropower) and has three cities on the Environmental Protection Agency‘s 2016 list of top 100 green power organizations (Dallas, Houston, and Austin), but has also pioneered regulatory takings due to flooding, as well as the legal obligation of governments to mitigate environmental damage when it is foreseen, as in the case of floods.

I have again linked the talk by Professor Robert Young below:

Cathy O’Neil’s WEAPONS OF MATH DESTRUCTION: A Review

(Revised and updated Monday, 24th October 2016.)

Weapons of Math Destruction, Cathy O’Neil, published by Crown Random House, 2016.

This is a thoughtful and very approachable introduction to, and review of, the societal and personal consequences of data mining, data science, and machine learning practices which seem at times extraordinarily successful. While others have breached the barriers of this subject, Professor O’Neil is the first to deal with it in the call-to-action manner it deserves. This is a book you should definitely read this year, especially if you are a parent. It should be required reading, before beginning work, for anyone who practices in the field.

I have a few quibbles about the book’s observations based on its very occasional leaps of logic and some quick interpretations of history.

For example, while I wholeheartedly deplore the pervasive use of e-scores and a financing system which confounds absence of information with higher risk (that is, fails to posit and apply proper Bayesian priors), the sentence “But framing debt as a moral issue is a mistake”, while correct, ignores the widespread practice of debtors’ courts and prisons in the history of the United States. This is really not something new, only a new form of it. Perhaps it is more pervasive.

For a few of the cases used to illustrate WMDs, there are other social changes which exacerbate matters, rather than abused algorithms being a cause. For instance, the idea of individual home ownership was not such a Big Deal in the past, especially for people without substantial means. These less fortunate individuals resigned themselves to renting their entire lives. Having a society and a group of banks push home ownership onto people who can barely afford it sets them up for financial hardship and for loss of home and credit.

What will be interesting to see is where the movement to fix these serious problems will go. Protests are good and necessary but, eventually, engagement with the developers of actual or potential WMDs is required. An Amazon review is not the place to write more of this, nor to give some of my ideas. Accordingly, I have written a fuller review at my blog for the purpose.

My primary recommendation is a plea for rigorous testing of anything which could become a WMD. It’s apparent these systems touch the lives of many people. Just as in the case of transportation systems, it seems to me that we as a society have every right to demand these systems be similarly tested, beyond the narrow goals of the companies who are building them. This will result in fewer being built but, as Dr O’Neil has described, building fewer bad systems can only be a good thing.

(The above is the substance of a review I wrote at Amazon for the book.)

Here are some of my ideas

While a social movement may be a good way to start, and to raise consciousness, I think more specific steps are needed. In particular, codifying acceptable technical practice in an IEEE or ISO standard might be a way to identify those companies which take care in their use and application of this technology. I emphasize application because it seems to me the action side of the process needs to be constrained in addition to the data-gathering side. While some regulatory lasso needs to be thrown around the froth and foment of Web-scraping, data-dredging companies and startups that deeply affect people’s lives, I also just don’t think social pressure on the exploiters of these technologies to act more ethically will do it. A compliance procedure for an IEEE or ISO standard would make what was being done more transparent, as well as constrain it. Of course, proposing and negotiating such a standard could take a long time, and it may fall short of its ambitions. Would government agencies be willing to undergo compliance assessment under these standards? If not, is that letting a wolf into the henhouse?

This book is also a call to statisticians to do a better job educating the general public about risk and variability. Some have tried, such as David Spiegelhalter, Stephen Fienberg (who coauthored an article in 1980 which gave stark warnings about designing police patrol experiments), the collection edited by Joseph Gastwirth published in 2000, and others. That some education officials failed so completely to understand basic ideas about variability when assessing “value-added scores” in education means these decision makers and managers missed something very key in their quantitative educations. There were calls for considering racial bias at the Bureau of Justice Statistics back in the 1990s (e.g., Langan, 1995). There are an increasing number of complaints by the statistical community, such as in the current issue of Significance, the joint publication of the Royal Statistical Society and the American Statistical Association, regarding turnkey software which purports to help automate policing. In particular, the recent issue features an article by Kristian Lum and William Isaac called “To predict and serve?” which not only highlights a disturbing instance of abuse of “predictive policing” software in Oakland, CA, but also suggests a technique for demonstrating where such software falls down. It also gives a number of references, including citations of articles cautioning against misuse. Alas, they also point out that they were able to do this with but one popular software package, the other vendors having refused to cooperate. Wouldn’t it be appropriate to insist that, if such software is being used to drive as socially powerful a force as policing, it be subjected to independent review and assessment?

While there is evidence that such concerns have been expressed repeatedly, perhaps it will take something like Weapons of Math Destruction and the attendant media focus to make progress. Clearly, the drawbacks cited by other experts have not prevented abuse.

1. In the chapter “Shell Shocked”, regarding D. E. Shaw, the tendency to keep portions of process “need to know” illustrates the limitations of any classification system when dealing with highly technical matters and systems which benefit from many eyes. It reminds me of the report by the late Richard Feynman in his Surely You’re Joking, Mr Feynman on how he was prohibited (at least for a long while) from telling the engineers he supervised on the Manhattan Project what they were working on so they could use their physics knowledge to help keep their calculations correct, despite the protestations of Project management that they were not progressing quickly enough.
2. Same chapter, regarding “Very few people had the expertise and the information required to know what was actually going on statistically, and most of the people who did lacked the integrity to speak up”: Those who remained silent in such circumstances, in my opinion, despite training which told them to know better, carry most of the responsibility for the consequences.
3. In the chapter regarding “stop and frisk”, regarding the statement “The Constitution, for example, presumes innocence and is engineered to value it”, I disagree the Constitution presumes innocence. It presumes parties ought to be treated equitably. I think “innocence” is far too abstract a property for any legal system or process to determine, except when defined in the narrow sense of “Found not guilty of a specific formal charge.” That’s not “innocence” in the abstract sense. Indeed, a bit farther down, “The Constitution’s implicit judgment is that freeing someone who may well have committed a crime, for lack of evidence, poses less of a danger to our society than jailing or executing an innocent person” is that point exactly.
4. Farther down, regarding “And the concept of fairness utterly escapes them. Programmers don’t know how to code for it, and few of their bosses ask them to”, in my opinion, it’s really not that hard, for it is an extension of the entropy measure. I think the problem is that this is not seen as important to specify. I also don’t know if we’d be much better off if there were a good measure of “fairness”.
5. The problem cited in
The unquestioned assumption that locking away ‘high risk’ prisoners for more time makes society safer. It is true, of course, that prisoners don’t commit crimes against society while behind bars.

is not new. Norbert Wiener observed in his book Cybernetics that killing difficult people makes society safer still, yet that is too brutal or honest a proposal for most to contemplate, even if it is the logical extension of the present system. He surely was not advocating that, and was, in fact, reacting most strongly against frontal lobotomy as a form of “treatment” for mental patients. His point was to highlight the hypocrisy of using convenience in managing them to justify treatment. Also, prisoners can commit crimes against society while behind bars, even if they only harm one another: Surely society has an interest in assuring that prisoners are safe, lest additional punishments be levied upon them without due process.

6. Regarding “…for the benefit of both the prisoners and society at large”, society shows no common agreement regarding what the point of incarceration in standard prisons (not those for “white collar criminals”) is … Is it correction and rehabilitation? Or punishment? Or vengeance?
7. In the chapter “Ineligible to Serve”, regarding “If his principal online contact happened to be Google’s Sergey Brin, or Palmer Luckey, founder of the virtual reality maker Oculus VR, Pedro’s social score would no doubt shoot through the roof”, of course, not all good candidates are online, and it’s a pretty strong constraint (and problem!) to assume they are.
8. In the chapter “Sweating Bullets”, regarding Clifford’s drastic change in scores, I’m most amazed that the test administrators and interpreters don’t know about proper variability or how to consider it. It seems to me they could not possibly be qualified for the positions they have if they don’t. But, again, as mentioned above, this is a failure of statistical and mathematical education, or of the appreciation of it by this society.
9. In the chapter “No Safe Zone”, regarding “We’ve already discussed how the growing reliance on credit scores across the economy …”, a lot of this practice, too, is based upon an implicit assumption and tenet of faith that “the markets” will weed out practitioners of this kind of statistical voodoo. “The markets” have no way to understand this stuff, and whatever natural selection they might apply is horribly inefficient and has little statistical power. An appeal to “the markets” and to “competition” is a fig leaf covering sloppy policy, again in my opinion.
10. In the same chapter, regarding “The model is fine-tuned to draw as much money as possible from this subgroup. Some of them, inevitably, fall too far, defaulting on their auto loans, credit cards, or rent. That further punishes their credit scores, which no doubt drops them into an even more forlorn microsegment”, well, that’s it, isn’t it? It depends upon your loss function, and the designers of this process, which can only be laughably called an optimization algorithm, did a piss-poor job of that design.
11. In the same chapter, regarding “This undermines the point of insurance, and the hits will fall especially hard on those who can least afford them”, unfortunately, I just don’t buy that most insurers are that good at what they are supposed to do, with apologies to statistical actuaries. Some may indulge in the kind of statistical fallacy which Dr O’Neil describes, but it seems many don’t even properly consider the risks they know about. For example, some insurers don’t properly consider increased losses at coasts from storms and sea level rise. I don’t know if this is a product of actuarial consideration, or if the actuaries are constrained by management and the companies’ policies on what they can consider, or if their results are filtered by the same. No doubt their reinsurers do consider these, and some rely upon generous interpretations of “flood damage” to avoid paying out. Nevertheless, these are not behaviors associated with the fiendishly clever and discriminating inference engines, human or otherwise, which are implied by Dr O’Neil’s explanation and postulated mechanism. Accordingly, I fail to see a plausible mechanism for this kind of thing happening, as nefarious as it is. Moreover, credit agencies and the like have an organic and unchecked internal error rate, and these errors work to frustrate precise predictions of risk, as well as associations of individuals with clusters, even if such errors can by themselves cause harm. I think it’s even a fair question to ask whether deterministic association of individuals with any group is ever proper statistical practice: It should be an affinity score or membership weight against each group. I’ve made that observation in my own professional practice, and a common response is, “Well, that algorithm doesn’t scale.” Therein lies, I believe, a lot of the problem.
12. Same chapter, regarding the conclusion “If we don’t wrest back a measure of control, these future WMDs will feel mysterious and powerful. They’ll have their way with us, and we’ll barely know it’s happening”, there are some ways of “wresting control”, even if most people won’t engage in them. (Many people seem starkly unaware of their self-interest.) One way is to “jam” the signal being fed to the systems, deliberately increasing the variance of their observations. This can be done by interfering with your location as reported through cell phones, or simply by mixing up what you do during the day, reducing the consistency of your patterns. The other way is to selectively lie. For instance, for years, in order to confound mail order catalogues and other online solicitations, I have been misrepresenting my birth date. I acknowledge this kind of practice, even if widely adopted, won’t solve most of the problem.
13. In the chapter “The Targeted Citizen”, regarding “I wouldn’t yet call Facebook or Google’s algorithms political WMDs … Still, the potential for abuse is vast. The drama occurs in code and behind imposing firewalls”, there’s nothing new in that view, long warned about by Lawrence Lessig in his book Code 2.0. In fact, some consider this a feature, keeping control of online things from governments and such. Lessig warns in his writings, however, that it is not turning out that way.
14. In the chapter “Conclusion”, regarding “Dismantling a WMD doesn’t always offer such obvious payoff … For most of them, in fact, WMDs appear to be highly effective”, how the devil can they tell? I don’t see any evidence in the research presented that these companies and organizations do anything like a comprehensive testing program, that is, one that assures the (written) objectives are met (in the real world), not merely that the code implements the requirements. To use the example by Lum and Isaac in “To predict and serve?” cited above, many companies or even organizational units won’t open their algorithms to outside scrutiny. That could be because of a desire to protect something proprietary, or it could be that the algorithms really don’t work well, and they are trying to sell shoddy algorithms as if they do, even to other units of the same business.
15. In the chapter “Conclusion”, regarding the Derman and Wilmott “oath”, I respectfully but strongly disagree with it. The same could be said of all of Physics. And I don’t know what “overly impressed with Mathematics” means. Apart from lip service to a goal, people could insist that these systems undergo a comprehensive and rigorous — and necessarily expensive — testing program like many other systems which interact with the physical world do, for instance, aircraft. As a colleague observed after a discussion about this, it could be just as well said that the Mathematics was done badly and no independent check on it was available.
16. Finally, in the chapter “Conclusion”, regarding “Though economists may attempt to calculate costs for smog or agricultural runoff, or the extinction of the spotted owl, numbers can never express their value”, I have a couple of things to say. First, I agree that a one-dimensional characterization of any complicated system or process or person, like a spotted owl, is doomed to be woefully incomplete. Second, I agree that economic assessments of these, if honest, must be based upon behavioral economics, and not upon the pseudo-objective rantings of the Chicago or Austrian Schools, and, so, they are highly contingent and unsuitable for policy. But, third, I do think it is possible to quantitatively characterize such complicated things, and, if well done, these characterizations can be of great use to society and in solving its problems. The placement of the new Hoover Dam bypass (chronicled by Henry Petroski) and assessments of ecosystem services are two small examples. As any casual reader of this blog will note, I continue to be very enthusiastic regarding the economic prospects of solar PV as a technology for good, not only to advance zero Carbon energy, but as a basis for a helpful and common discussion among members of this United States society who can’t seem to agree on much of anything, and also to advance the revolution championed by the late Hermann Scheer, that of bringing control of the energy supply back to the people and, thereby, control of their democracy. This is an area where putting quantitative measures on often intangible things happens systematically.

One thing I fear when faced with these kinds of issues, and it’s something I have seen elsewhere in this society, especially among my younger colleagues, is a devolution into insidious cynicism. This is sometimes wrapped in a mantra which argues “you can only control yourself and doing anything else is engaging in an immoral act”, possibly substantiated by an appeal to Buberian ethics. And, ironically or hypocritically, the same complainants will continue to work for companies with a deep investment in facilitating this kind of WMD engineering, even if the companies don’t build WMDs themselves. (How many companies profit from the existence and operations of Facebook?) Especially given the insights of behavioral economists like Daniel Kahneman, I hope the insights Dr O’Neil has don’t end with their merely being presented. My definition of a successful technology is one that does not depend upon people being good or morally perfect in order for it to “do no harm”. (I have been influenced a good deal towards this view by the lectures of Professor Sheila Widnall of MIT.) In fact, my standard is that every successful technology must assume people are imperfect, morally corruptible, and self-interested, and yet perform its function nonetheless. If it cannot work under those conditions, any device or technology is broken. And I continue to be heartened by both the successes of engineering and science, and the deep mathematics that underpin them, especially as exemplified in the talent and smarts of young people pursuing these to make the world a better place, for all of its beings and creatures.

“Getting past grudging precautions: How the next President should address climate change”

Professor David Titley (see also, and here) writes in the online newsletter DefenseOne:

Many observers think climate change deserves more attention. They might be surprised to learn that U.S. military leaders and defense planners agree. The armed forces have been studying climate change for years from a perspective that rarely is mentioned in the news: as a national security threat. And they agree that it poses serious risks.

I spent 32 years as a meteorologist in the U.S. Navy, where I initiated and led the Navy’s Task Force on Climate Change. Here is how military planners see this issue: We know that the climate is changing, we know why it’s changing and we understand that change will have large impacts on our national security. Yet as a nation we still only begrudgingly take precautions.

True, the Pentagon is a major emissions generator, and that will need to be dealt with. But the emissions from U.S. natural gas use dwarf those of the U.S. military many times over. In 2015, these were 1.5 GtCO2. For all energy uses, they were about 5.8 GtCO2. I reserve the term liberal climate deniers for people who, while they supposedly accept climate change, its human causes, and the necessary mitigation at face value, refuse to do the proper triage to see what needs to be reduced the most, and who exploit the cause to further their own political agendas. That doesn’t help to fix the problem, and all engineering fixes involve tradeoffs.

Polls, Political Forecasting, and the Plight of Five Thirty Eight

On 17th October 2016 at 7:30 p.m., Nate Silver of FiveThirtyEight.com wrote about how, as former Secretary of State Hillary Clinton’s polling numbers got better, it was more difficult for FiveThirtyEight‘s models to justify increasing her probability of winning, although it did “stabilize” their predictions. Mr Silver is being a bit too harsh on their models, since the problem is fundamental, not just something which afflicts their particular model. In Mr Silver’s defense, he did write:

But there’s some truth to the notion that she’s encountering diminishing returns. And that’s for a simple reason: 88 percent and 85 percent are already fairly high probabilities. Our model is going to be stingy about assigning those last 10 or 15 percentage points of probability to Clinton as she moves from the steep, middle part of the probability distribution to the flatter part.

Well, maybe, except I’m not sure that is assignable to Secretary Clinton. It’s a mathematical phenomenon, one which Mr Silver may be aware of, but apparently did not want to comment upon, saying “Before this turns into too much of a math lesson …”. I say: why not a math lesson?

In particular, as a probability of an event, any event, gets more and more above 50% (or, symmetrically, less than 50%), the amount of information needed to “push it” the same distance it has gone grows, and as the probability (or improbability) of the event approaches certainty, the efficiency of additional information to improve the determination gets worse. It’s possible to be quantitative about all this.

Let’s have a look at this in the hypothetical case of two presidential candidates, one called T and one called H. Suppose that, with time, T‘s probability of winning, denoted here $[\mathbf{T}]$, decreases from 0.50. Since I’m only considering two candidates, $[\mathbf{H}] = 1 - [\mathbf{T}]$, so, then, $[\mathbf{H}]$ increases away from 0.50, and the two sum to unity. This is a system with two components, and its entropy is equal to

$-[\mathbf{T}] \log_{2}{[\mathbf{T}]} - [\mathbf{H}] \log_{2}{[\mathbf{H}]}$.

Entropy for this system will hereafter be denoted $E(p)$, where $p = [\mathbf{H}]$. The amount of information needed to move, say, $[\mathbf{H}]$ up a unit of probability is the decrease in the entropy at the new state of affairs with respect to the old one. Adding information is kind of doing work, although, in this case, the “work” is evidence collected from polls and other sources.
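For readers who want to poke at the numbers themselves, here is a minimal sketch of $E(p)$ (in Python, my choice purely for illustration; the function is exactly the formula above):

```python
import math

def entropy_bits(p):
    """Two-outcome Shannon entropy E(p), in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    q = 1.0 - p
    return -p * math.log2(p) - q * math.log2(q)

print(entropy_bits(0.50))    # a tie: exactly 1 bit
print(entropy_bits(0.8885))  # about 0.5 bits
print(entropy_bits(0.91))    # about 0.436 bits
```

Note that $E(p)$ is symmetric about 0.5, so the same curve describes $[\mathbf{T}]$.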

So, for example, the amount of entropy when both candidates are tied is exactly 1 bit. (Entropy and information are measured in bits or nats; bits here, since the logarithms are base 2.) When $[\mathbf{H}]$ is about 0.8885, the entropy is 0.5 bits. When $[\mathbf{H}]$ is 0.91, the entropy is about 0.436 bits. The rate at which entropy falls as $[\mathbf{H}]$ grows, which is the rate at which information must be supplied, is $-\frac{dE}{dp} = \log_{2}{(\frac{p}{1-p})}$, or, in other words, the log of the odds ratio, sometimes called the “log odds”. If someone were to try to assess these “diminishing returns”, they might compare the additional information needed to progress to that needed to move from $[\mathbf{H}] = 0.50$ to $[\mathbf{H}] = 0.60$. So, let’s plot that:

(Click on image for a larger figure, and use browser Back Button to return to blog.)

So, again, what this shows is how much additional information is needed per unit of probability of winning, compared to the total information needed to improve the chances of winning from 0.50 to 0.60, plotted for various probabilities of winning. Highlighted on the figure is the 0.90 probability of winning, close to present estimates, and it shows that the amount of information needed there, per unit of probability, is one hundred and nine times that needed to improve from a 50% chance of winning to a 60% chance of winning.
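In case it helps, the quantity in the figure can be reconstructed as follows. This is my reading of what is plotted, stated as an assumption: the log odds at $p$ (the information rate, in bits per unit probability) divided by the total entropy drop for the move from 0.50 to 0.60.

```python
import math

def entropy_bits(p):
    """Two-outcome Shannon entropy E(p), in bits."""
    q = 1.0 - p
    return -p * math.log2(p) - q * math.log2(q)

# Total information (entropy drop) for the 0.50 -> 0.60 move: about 0.029 bits.
baseline = entropy_bits(0.50) - entropy_bits(0.60)

def info_ratio(p):
    """Information needed per unit probability at p, as a multiple of `baseline`."""
    return math.log2(p / (1.0 - p)) / baseline

print(info_ratio(0.90))                     # about 109
print(info_ratio(0.91) - info_ratio(0.90))  # about 5.8 more multiples going to 0.91
```

Under that reading, the highlighted value of one hundred and nine at $[\mathbf{H}] = 0.90$ falls right out of the computation.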

So, what does this mean in the context of political forecasts or, for that matter, any forecasts?

First, as suggested by Mr Silver, once you are at $[\mathbf{H}] = 0.9$, the additional information or evidence needed to move it higher, to $[\mathbf{H}] = 0.91$ or $[\mathbf{H}] = 0.92$, is substantial. In fact, just going from 0.90 to 0.91 requires almost six more multiples of the change in evidence needed to go from 0.50 to 0.60. This is true of any application. For example, to demonstrate that a given engineered system has a reliability of, say, 0.995 requires a lot of testing and a lot of work, and necessarily takes a long time, simply because that 0.995 criterion is way out there on the “evidence sill”.
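To put a number on the reliability example, consider the classical zero-failure (“success run”) demonstration: how many failure-free trials $n$ are needed before $r^{n}$ falls below the allowed risk $1 - c$? The 95% confidence level below is my own assumption for illustration, not a figure from anything discussed here.

```python
import math

def zero_failure_trials(reliability, confidence):
    """Smallest n such that reliability**n <= 1 - confidence:
    n consecutive failure-free trials demonstrate the stated
    reliability at the stated confidence level."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(zero_failure_trials(0.995, 0.95))  # 598 failure-free trials
print(zero_failure_trials(0.90, 0.95))   # only 29 for a much looser criterion
```

Nearly six hundred consecutive failure-free trials, with any failure restarting the argument: that is what being far out on the evidence curve means in practice.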

Second, this mathematical fact tends to downplay the significance of changes at high probabilities of winning. Going from 0.90 to 0.91 may not sound like a lot, but the information gathered to justify it is necessarily substantial.

Third, there are limits to political forecasting. These are not because the models are poor, or the techniques are poor, but because there is only so much information available in political polls and other sources. These observations have their own variability or noise, and that limits their information content. At some point in the above figure, the information content of the polls or observations is exhausted, and whatever uncertainty remains is the best anyone can do. This isn’t to say polling could not be improved, or samples might not be larger, or more systematic surveys might not be taken to improve results, using stratified sampling and other techniques. (These are pretty standard anyway, although they can cost a lot of money.) It’s just that you cannot squeeze more out of a set of data than it has. It also means there are limits to what political forecasting can do, even for a group as talented as FiveThirtyEight.com.

Nevertheless, if a particular candidate, say, H has $[\mathbf{H}] = 0.91$, that’s pretty darn good, especially when you consider the amount of information needed to establish that, and what that means, for example, about evidence for their popularity among the public. And this is an insight which I don’t believe is made available by examining variance of Bernoulli variables or coefficients of variation, measures which seem inappropriate this far out on the Bernoulli tail.

If you’d like to learn more about this kind of thing, I recommend Professor John Baez’s series of posts on information geometry. It is a little mathematical, but the investment of time and mind is decidedly worth it. There are many analogies between information and entropy and physical processes. For example, borrowing from classical statistical mechanics in Physics, information in this instance can be thought of as the additional cooling needed to bring a two-state system into a more rigid configuration, kind of like approaching Absolute Zero, at least with respect to the entropy of perfect crystals.