## Result of our own fiddling: Bob Watson and climate risk

https://sms.cam.ac.uk/media/746045

Professor Bob Watson, University of East Anglia, presents a summary of the risks of climate change:

The question is not whether the Earth’s climate will change in response to human activities, but when, where and by how much. Human activities are changing the Earth’s climate and further human-induced climate change is inevitable. Indeed, the climate of the next few decades will be governed by past emissions.

The most adverse consequences of human-induced climate change will be in developing countries and for poor people within them. Climate change threatens to bring more suffering to the one billion people who already go to bed hungry every night and the approximately two billion people exposed to insect-borne diseases and water scarcity. Sea level rise threatens to displace tens of millions of people in deltaic areas and low-lying small island states. Climate change will undermine the ability of many poor people to escape poverty and the long-term sustainable economic development of some countries. Hence, climate change is not only an environmental issue, but a development and security issue.

The challenge is to limit the magnitude and rate of human-induced climate change, and simultaneously reduce the vulnerability of socio-economic sectors, ecological systems and human health to current and projected climate variability by integrating climate concerns into local and national economic planning. Technological options for reducing greenhouse gas emissions cost-effectively over the next few decades already exist. However, the required transition to a very low carbon economy (a reduction in global emissions by at least 50% by 2050) will require a technological evolution in the production and use of energy, energy sector reform, appropriate pricing policies and behavior change, coupled with a more sustainable agricultural sector and reduced deforestation.

This transition to a low-carbon economy must be achieved while improving access to affordable energy in developing countries, which is critical for economic growth and poverty alleviation, and while ensuring adequate, affordable and nutritious food. The challenge is to negotiate a long-term (up to 2050) global regulatory framework that is equitable, with common but differentiated responsibilities, and has intermediate targets that can reduce greenhouse emissions to a level that limits the increase in global mean surface temperature to 2°C above pre-industrial levels. While this goal has been widely accepted, the current rate of growth in emissions globally, coupled with a failure in Copenhagen to agree to stringent targets to reduce emissions, makes this goal extremely difficult; hence the world needs to be prepared to adapt to a 4°C warmer world.

## Welcome to snowy New England … Bad place for solar PV, right?

And this is ISO-NE which, as little as three years ago, was highly sceptical that anything other than additional natural gas generation could supply the ever-increasing electrical power needs of the region, particularly with the withdrawal of generation from oil, coal, and nuclear sources scheduled for the period.

Oh. So, perhaps, maybe, Professor Tony Seba nailed it right on all along. What a concept.

Welcome to New England. Bad place for solar PV, right? So why can’t you make it work, Texas, or South Carolina, or Florida, or Georgia, or North Carolina, or Arizona? What, are you dumb or something?

Hat tip to S&P Global for the original article.

## One of the happiest two hours I’ve spent in months: A Professor Tony Seba update

From the end of 2018:

(Video via alianza FiiDEMAC.)

And, indeed, it was one of the most uplifting two hours I’ve recently spent. I have long been an admirer of Professor Tony Seba. I have read his books. This was an update on how he now sees the world.

As someone who embraces the legal logic of the Juliana v United States lawsuit, I do not have much confidence in politics being able to mitigate climate disruption. Both political parties in the United States have been repeatedly warned of the consequences of continuing the policy of mining and emitting, and of its inevitable disruptions of the climate. And, while, technically, United States emissions have plateaued, this is a result of our collectively exporting our manufacturing emissions to China.

So, politically, efforts to mitigate climate change, not only in the United States but across the OECD, have been an abysmal failure. How depressing. And the death throes of the so-called Green New Deal do not inspire.

I have stated my problems with matters as they are. (Context.) I am pessimistic that the last branch of the United States government will intervene appropriately. They haven’t shown enthusiasm.

And, as I made clear in my statement, this is not a cause for despair. There will be a response. Unfortunately, given the abrogation of interest and concern on the part of the general public, the displacements in jobs, social equity, and wealth which will inevitably follow from that collective lack of engagement will be painful. Nevertheless, this disruption will happen, since economics, at least in OECD countries, is primary.

Climate change will be mitigated, perhaps a bit late, and probably with an incredible loss of present wealth, because of bad bets on the part of the wealthy. I really do not find any reason to sympathize with them. I believe the less privileged won’t be impacted any more than they usually are, and, in the dissolution of wealth which will inevitably occur, they may have opportunities they did not have previously. In any case, the presumed omniscience on the part of the Haves over the Have Nots in United States society should be destroyed in concept, although the ignorance of some publics regarding our present leadership gives me some pause in this conclusion.

In any event, I feel this change is inexorable, not, as Professor Tony Seba repeatedly emphasizes, because of do-good environmental policies, but because the time of zero Carbon energy and smart distribution of it via computation has arrived.

And, frankly, as uncharitable as the opinion might seem, I have zero commiseration for those who opposed the advance of such zero Carbon energy, whether that means they lose their jobs, lose their investments, or cannot provide for their offspring. For they are the reason why, after more than 20 years of knowing about climate change, we have collectively done nothing and, in the process, thrown doubt at Science and Engineering and Mathematics. They deserve no sympathy, and no consideration. Let them be a lesson.

It is also notable that the electorate should be highly cautious of urgings on the part of fossil fuel interests, including extraction companies as well as their supporters, to reimburse them for losses relating to this disruption. There is ample evidence they saw what was coming and chose to oppose it rather than adapting to it. That was a choice. That was their right. But they should not be given a penny because they chose wrongly. There is nothing more fundamental to free market capitalism than the principle that those who make bad bets should bear the full cost of making those bad bets.

## “Ridiculously well-designed rockets”, not to mention some seriously awesome Mathematics

I’m just amazed at the quality of their control systems, understated in the video, but absolutely critical to success.

For more technical details, see:

B. Açıkmeşe, J. M. Carson III, L. Blackmore, “Lossless convexification of nonconvex control bound and pointing constraints of the soft landing optimal control problem”, IEEE Transactions on Control Systems Technology, 21(6), November 2013.

## Macros in R

### via Macros in R

• The gtools package of R which enables these.
• There’s a description and motivation beginning on page 11 of an (old: 2001) R News issue.

They have been around a long time, but I haven’t tried them.

I will.

## Temperatures, Summers, Germany, ≈ 50.5N to 57.5N latitude

###### (Click on figure for larger image and use browser Back Button to return to blog.)

Hat tip to Gregor Aisch, Adam Pearce, and Steve Hoey, and sourced from the mashup dataset and visuals by Lisa Charlotte Rost.

Mr Aisch’s innovation was to use loess regression for the display. Loess is one of a set of local regression methods. I personally prefer p-spline smoothing (penalized spline regression).
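For concreteness, here is a minimal smoothing sketch. My own analyses are in R, where `loess()` and penalized splines would be the natural tools, but the same idea in Python, using statsmodels’ `lowess` on purely synthetic data (not the Rost/Aisch dataset), looks like:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Synthetic noisy annual series: a gentle warming trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1880, 2019)
temps = 0.01 * (years - 1880) + rng.normal(scale=0.5, size=years.size)

# frac controls the local-regression bandwidth, the key tuning knob.
smoothed = lowess(temps, years, frac=0.3, return_sorted=True)
print(smoothed.shape)  # one (year, fitted value) pair per input point
```

A p-spline fit would replace the many local fits with a single penalized basis-expansion regression; both are smoothers, differing mainly in how smoothness is controlled.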

In contrast, Boston is 42.4N, Bangor, Maine is 44.8N, and Montreal, Quebec, Canada is 45.5N.

## “Rising seas erode $15.8 billion in home value from Maine to Mississippi”

From the First Street Foundation’s press release, with selected figures below. This is based upon the methods described in:

S. A. McAlpine, J. R. Porter, “Estimating recent local impacts of Sea-Level Rise on current real-estate losses: A housing market case study in Miami-Dade, Florida”, (open access) Population Research and Policy Review, December 2018, 37(6), 871–895.

###### (Click on image to see larger figure and use browser Back Button to return to blog.)

###### (Click on image to see larger figure and use browser Back Button to return to blog.)

## Procrustes tangent distance is better than SNCD

I’ve written two posts here on using a Symmetrized Normalized Compression Divergence or SNCD for comparing time series. One introduced the SNCD and described its relationship to compression distance, and the other applied the SNCD to clustering days at a high school based upon patterns of electricity consumption. Having good tools for making such comparisons is important, because such bases for clustering and exploration are useful when examining large datasets, like the hydrological datasets I’ve previously described. I am also finally getting around to doing something with these datasets, a project I put off because of my commitments to climate activism over the last few years.

Despite my earlier enthusiasm for SNCD as a tool for series comparisons, it turns out there is a better measure, something called Procrustes tangent distance (“PTD”). I discovered this in the second edition of a book by I. L. Dryden and K. V. Mardia, called Statistical Shape Analysis, with Applications in R (2016), and through related literature and scholarship. A key paper is J. T. Kent, K. V. Mardia, “Shape, Procrustes tangent projections and bilateral symmetry”, Biometrika, 2001, 88(2), 469–485 (with correction).
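For readers new to the compression-based measure under comparison, a minimal sketch of a normalized compression distance, symmetrized over concatenation order, can be written with zlib. This is only the general NCD family idea; the exact SNCD definition in my earlier posts differs in detail:

```python
import zlib

def C(b: bytes) -> int:
    """Length in bytes of the zlib-compressed input."""
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance (Cilibrasi & Vitányi)."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

def sncd(x: bytes, y: bytes) -> float:
    """Symmetrize over concatenation order, since real compressors
    generally give C(x + y) != C(y + x)."""
    return 0.5 * (ncd(x, y) + ncd(y, x))

periodic = b"0123456789" * 100
shifted  = b"5678901234" * 100     # same ripple, different phase
noisy    = bytes(range(256)) * 4   # much less regular material
print(sncd(periodic, shifted), sncd(periodic, noisy))
```

A shared compressible structure drives the joint compressed size down, so similar series score closer than dissimilar ones.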
PTD is superior because it and related efforts reduce shape comparisons, like that of two time series, to ordinary multivariate analysis. (See the pertinent book by Mardia, J. Kent, and J. Bibby as well.) For purposes of statistical analysis, it’s difficult to get better than that. This is an outcome of a problem area dubbed Generalized Procrustes Analysis (“GPA”), which arises in applications where biological shapes need to be matched, such as bivalve shells. It also arises in archaeological work where automated methods for matching shards of pottery are engaged. These techniques and problems have deep connections to differential geometry and have engaged other great minds besides Mardia, Dryden, and Kent.

PTD may not be the last word. In particular, C. P. Klingenberg, L. R. Monteiro, “Distances and directions in multidimensional shape spaces: Implications for morphometric applications”, Systematic Biology, 54(4), 1 August 2005, 678–688 reviewed some criticisms of PTD, along with discussion by Dryden and Mardia, with others. My application is more modest than the general multidimensional shapes problem, being limited strictly to two dimensions, where some of these complications do not arise.

Unfortunately, the details of defining the Procrustes tangent distance are involved. Procrustes analysis begins with the consideration of $k$ $m$-dimensional landmarks and proceeds to the recovery of a rotationally invariant shape, obtained by maximizing the trace of a product, $\text{tr}(\mathbf{A} \mathbf{Q})$, involving a symmetric landmarks distance matrix, $\mathbf{A}$, and a rotation matrix, $\mathbf{Q}$, over all $\mathbf{Q}$. The value of the trace and the maximizing rotation are found using the SVD, and that is also used in the practical construction of the PTD. The next step is a linearization by constructing a tangent space, namely, the Procrustes tangent space, and an associated tangent matrix, $\mathbf{T}$, which is constructed as follows.
Let $\mathbf{A}_{1}, \mathbf{A}_{2}$ be two $k$-by-$m$ landmarks matrices. Recall these are landmark coordinates in $m$ dimensions and there are $k$ of them. Find the maximum over rotation matrices $\mathbf{Q}$ of $\text{tr}(\mathbf{A}_{2}^{\top}\mathbf{A}_{1}\mathbf{Q})$, call that maximum value $\alpha$, and call the maximizing rotation $\hat{\mathbf{Q}}$. Then

$\mathbf{T} = \mathbf{A}_{1} \hat{\mathbf{Q}} - \alpha \mathbf{A}_{2}$

and this can be re-expressed, after some algebra, as

$\mathbf{A}_{1} = (\cos(\rho)\, \mathbf{A}_{2} + \mathbf{T})\hat{\mathbf{Q}}^{\top}$

where $\alpha = \cos(\rho)$. Because of an implicit constraint on $\alpha$, $\rho$ turns out to be a bounded, non-negative Riemannian distance between $\mathbf{A}_{1}$ and $\mathbf{A}_{2}$ and their shapes. While the equation above could be solved using non-linear minimization, there are more direct approaches sketched in Kent and Mardia. Moreover, my calculations of PTD are obtained by calls to the function procGPA from the shapes package offered by I. L. Dryden. The article by Klingenberg and Monteiro cited above also gives a qualitative overview.

The insight for applicability to time series comes from this sketch:

Applying the PTD to unique pairs of edges results in:

Note, however, that the traces in the picture could just as well be three different time series. Accordingly, the PTD for shapes also yields distances between time series.

Does this generalize, however? Do the distances continue to make sense even when the series differ in other ways? Consider

###### (Click image to see a larger figure, and use browser Back Button to return to blog.)

In the labeling atop each, the “L” factor is inversely proportional to slope, except for the zero case, which denotes zero slope. In the same, the “W” factor is inversely proportional to frequency. What does the PTD produce as distances among these?
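Numerically, the maximization over rotations is delivered by the SVD. Here is a minimal sketch in Python with NumPy, not the shapes package’s procGPA: it centers and scales configurations to unit-norm preshapes and, for simplicity, optimizes over all orthogonal matrices, reflections included:

```python
import numpy as np

def procrustes_rho(A1: np.ndarray, A2: np.ndarray) -> float:
    """Riemannian (Procrustes) distance between two k-by-m landmark
    configurations: center, scale to unit-norm preshapes, then take
    rho = arccos(alpha), where alpha maximizes tr(A2' A1 Q) over Q.
    The maximum is the sum of singular values of A2' A1."""
    Z1 = A1 - A1.mean(axis=0)
    Z1 = Z1 / np.linalg.norm(Z1)
    Z2 = A2 - A2.mean(axis=0)
    Z2 = Z2 / np.linalg.norm(Z2)
    s = np.linalg.svd(Z2.T @ Z1, compute_uv=False)
    alpha = np.clip(s.sum(), -1.0, 1.0)   # alpha = cos(rho)
    return float(np.arccos(alpha))

# A rotated copy of a configuration has (numerically) zero shape distance.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.random.default_rng(1).normal(size=(10, 2))
print(procrustes_rho(A, A @ R))  # ~0
```

Restricting to proper rotations would require flipping the sign of the smallest singular value when the optimal orthogonal matrix has negative determinant; for the two-dimensional time series application that refinement rarely matters.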
Note that the larger the number in the following figure, the farther apart the cases are:

###### (Click image to see a larger figure, and use browser Back Button to return to blog.)

The distances show that, irrespective of slope, the PTD is picking up ripple trains with the same frequency. Some are annotated. Note that these distances have been multiplied by 100 to get them into a range where they register well in the plot. What this means is that PTD considers all the cases pretty close to one another in shape. Nevertheless, it is capable of good discriminations.

What does SNCD do with the same 16 cases?

###### (Click image to see a larger figure, and use browser Back Button to return to blog.)

In short, the divergences are very difficult to reconcile with any pattern of similarity. Even shorter, SNCD butchered it.

Code for calculating these figures and results is available in my Google repository. Finally, I have repeated the analysis of high school electricity consumption clustering with PTD and found it gave nearly identical results to the use of SNCD.

## “Unpleasant surprises in the greenhouse” (in memoriam, Professor Wallace Broecker)

These are excerpts from a 1987 paper by Professor Wallace Broecker, widely acknowledged to be one of the greatest climate scientists and oceanographers of the last century.

. . .

## One possible way to do small, modular nuclear power

Featured in Science Magazine today, NuScale Power, a spinout from Oregon State University, is planning simpler, smaller, safer, gang-lashable nuclear reactors, with a trial in the early 2020s. A schematic is shown below.

As I’ve noted here elsewhere, the reason why conventional nuclear reactor designs have a negative learning curve is that the industry did not turn nuclear reactors into commodities, taking advantage of large scale replication.
Despite the unhappiness some have with nuclear power, it is clear that a good solution to most of its ills, including cost and rollout time, would be a godsend for providing the massive amounts of electrical power we need to electrify the entire United States and the world. I continue to argue that those who oppose such developments on some kind of principle do not understand or appreciate the desperate situation with respect to climate change in which we have placed ourselves, and the soon-to-be-realized consequences.

## Legacy

It should be noted that exponential growth is a plank in the theoretical framework of modern Ecology. See L. Pásztor, Z. Botta-Dukát, G. Magyar, T. Czárán, G. Meszéna, Theory-Based Ecology: A Darwinian Approach, 2016.

Dr Suzuki points out that, objectively, people are big animals, and the total biomass on Earth due to human beings is quite large. We are also large in terms of our demands upon the natural world, and, in fact, each one of us consumes many times more than the world’s natural carrying capacity for us. This is possible because of technology, and fossil fuels.

## Professor Kevin Anderson: “Climate’s holy trinity”

## 24th January 2019, Oxford, England, UK

Appalling failure. Who is responsible? Yeah, it’s us.

## On bag bans and sampling plans

Plastic bag bans are all the rage. It’s not the purpose of this post to take a position on the matter. Before you do, however, I’d recommend checking out this:

and especially this:

Good modern governance means having evidence-based decisions. So, if a bag ban of any kind, or a bag tax of any kind, is going to be imposed, it makes sense to assess how much and what kinds of use of bags are prevalent before the ban or tax, and how this changes after the ban or tax. This kind of thing used to need to be done with professional surveyors and statisticians.
But with the availability of online datasets, access to the experience of others, widely available and open-source computing, and new survey technology and methods, expensive professional options aren’t the only way this can be done. Professional surveyors tend to argue otherwise. But, facts are, you can learn a lot by using Google Earth and Google Maps these days.

Surveys are designed around answering specific questions. If the objective is to estimate how many bags of one kind or another are being consumed per week in a town or county, that’s one question. If the objective is to estimate how many people regularly choose paper over plastic, or bring their own bags, that’s an entirely different question. The governance and the group need to choose what’s important to them.

Surveys are also designed around the skillsets of the people involved in conducting them. With a volunteer organization, it is important that the procedure be something they can readily be trained in, and I say “trained” because no survey can do without training, however simple.

Surveys also ought to be easy on the surveyors, especially if they are volunteers. The requirements of when they need to be on station oughtn’t be so onerous that they might not arrive on time, or not show up at all, and, worse, misrepresent to the group what happened. So, for instance, even if there are shoppers using bags in a store at 6:00 a.m., it’s probably not going to get covered well if a sampling plan were to require it.

Surveys also ought to be easy to explain to those who want to know how they were done. Along with this, it is critically important that, as part of an analysis of the primary quantities of interest, like plastic bags used per week, the survey’s contribution to overall uncertainty is quantified.

All that noted, there is a lot interested and committed citizens can do to gather data like this, and interpret results.
This can be important, whether or not a town or county governance considers its findings as inputs. It can serve as a check on their results. It can also serve as a check on their budget, meaning: why did they pay some expensive professional organization to do something when something good enough for the purpose could have been had much cheaper?

That said, any old surveying or sampling technique which appears to be good enough isn’t good enough. That is, there is some training and learning involved.

Returning to the bag ban matter, use of bags is key to the project. As with any policy, if a regulation is imposed and there is no evidence it helps, or it has untoward consequences, it ought to be revoked. To do that means measuring a baseline, and then measuring after the regulation is in place. It probably is a good idea to measure a couple of times after the regulation is in place. In statistics and engineering, this general approach is called A/B testing, which is explained better here.

As mentioned above, how one gets counts — they are nearly always counts — and then analyzes them depends very much on the question being asked. But, in the case of bags, there’s the really important question of where and when to sample. In this case, I’m setting aside bags given out in stores other than grocery stores. And, in this case, I’m using the example of my home town, Westwood, Massachusetts.

Counting noses or bags assumes there’s a sampling frame in hand. In Westwood’s case, the concern is the population of residents or visitors frequenting local grocery stores. And in Westwood’s case, there’s a desire to count people and their preferences for bags, whether plastic, paper, some mix, or whether they bring their own bags and some mix, as well as size of order. So this means counting people.

There are three grocery stores in Westwood: Roche Brothers in Islington, Wegman’s at University Station, and Lambert’s.
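The before-versus-after comparison described above, A/B testing on counts, can be sketched as a simple contingency-table test. The counts below are entirely hypothetical, purely for illustration:

```python
from scipy.stats import chi2_contingency

# Hypothetical shopper counts, plastic vs. non-plastic bag choices,
# from a pre-ban survey and a post-ban survey (illustrative only).
before = [132, 68]   # plastic, non-plastic
after = [71, 129]
chi2, p, dof, expected = chi2_contingency([before, after])
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```

A small p-value says the plastic-bag share changed between the two surveys; it does not, by itself, quantify the survey's own contribution to uncertainty, which still has to be assessed as discussed above.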
There are convenience and other stores which sell small amounts of groceries, but these were assumed to show behaviors which would be exhibited by the populations of the three majors. But, still, surveying these stores either demands deep cooperation on the part of their owners, an outrageous commitment on the part of volunteers, or a sampling scheme that is constructed with knowledge of who goes where when. Where to find such a thing? Google.

Most substantial grocery store entries on Google Maps now present a bar chart of when they are most frequently visited. It looks like this:

Now, I’ve discovered that that dashed line is a fixture marking a certain number of visits per hour. It is constant for a given store across days of the week. And it is at least roughly consistent across stores in an area. This is great.

This was helped, in part, by a visit to one of the stores by a volunteer to take data for a half hour. She was collecting data for me, and was also trying out a data collection form to see how difficult or easy it was to get the kind of data that was pertinent. Doing this is an excellent idea.

But, wait, you say: These aren’t numbers. It’s a bar chart.

Digitizing. I learned this when I took courses in Geology. An amazing amount of data is recovered by digitizing figures in scientific journals. Why not Google? There are several digitizing applications out there. I’ve tried a couple and, so far, I like WebPlotDigitizer best.

So, I did. Digitize, that is. How? Here I’ve marked, by hand, two points on the bar chart, attempting to ascertain the height of the dashed line in pixels from the baseline. Note the original images aren’t produced to the same resolution or size, so it’s important to calibrate each one. In the upper right you can see WebPlotDigitizer’s close-up of the place where the cursor is.
That’s a little hard to see, since there’s so much real estate there, but here’s a close-up of the bottom:

And here’s a close-up of the upper right, a close-up of a close-up:

The completed digitization looks like this:

and results in a .csv file which looks partly like:

There look to be extra points in the digitization, which I’ll explain. It is important to note that the code I reference later, which is available to the public, demands digitization be done in this style. That code has no other documentation. I don’t give a recipe. That said, it’s not difficult to figure out.

The first point I take is the baseline, not in any of the bars of the bar plot. The next point I take is on the dashed upper score. I then do two bars, taking the baseline of the bar, and the upper horizontal of the bar. The rest of the bars have only their upper horizontal marked. The point is to get a good estimate of the baseline, obtained as an average of three baseline observations: the initial one outside of bars, and then one from each of the two bars. There is one observation on the upper score, and then there are observations of the upper horizontals for the rest of the bars.

The heights of the bars can be estimated from the difference between the reading for their upper bars and the estimate of the baseline as the mean of three observations. These can be divided by the distance between the estimate of baseline and the upper score in order to calculate a portion of the range to upper score. Note that because these are pixel coordinates, the ordinate values of the observations higher up on the bar plot are lower in coordinates than, say, the baselines. This is because distances on the ordinate are measured (ultimately) as pixels from the top of the image. Accordingly, some distance calculations need to have their signs reversed.

The accompanying R code reads in these .csv files and then extracts the heights for each of the hours in a day.
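The arithmetic just described, bar-top readings differenced against the averaged baseline and scaled by the baseline-to-score distance, with signs handled for downward-growing pixel coordinates, can be sketched as follows. The pixel readings here are hypothetical, and the actual R code accompanying the post differs in detail:

```python
def bar_heights(baseline_px, score_px, bar_tops_px):
    """Convert digitized pixel ordinates into fractions of the dashed
    'upper score' line. Pixel y grows downward from the image top, so
    heights are baseline-minus-top, scaled by the baseline-to-score span."""
    span = baseline_px - score_px              # pixels from baseline up to score
    return [(baseline_px - t) / span for t in bar_tops_px]

# Hypothetical pixel readings (not from an actual digitization):
base = (412.0 + 411.5 + 412.5) / 3.0           # mean of three baseline marks
score = 180.0                                  # the dashed upper score line
tops = [300.0, 240.0, 412.0]                   # bar tops; last bar is empty
heights = bar_heights(base, score, tops)
print(heights)
```

The outputs are fractions of the score line, so a value of 1.0 would mean a bar reaching exactly the dashed fixture.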
There is a system for the Westwood case, which you can understand by reading the code, where each of the separate stores’ files is assimilated into a single corresponding matrix of scores versus hour of day and day of week. In the end what’s in hand is a matrix of values proportional to numbers of visits to the stores. Calibration and actual counts have indicated that a value of unity corresponds to about 140 visitors.

Now that traffic to stores is available, or at least something proportional to traffic, it is a matter of constructing a sampling plan. A plan which is proportional to traffic makes the most sense. This is equivalent to sampling time intervals where the probability of electing an interval is proportional to the estimated traffic in the interval.

For this study, the surveyors expressed a desire not to be surveying more than 60-90 minutes at a time. I settled for 60 minutes. So the question became one of finding a set of samples of individual hours for a store weighted by the probability of traffic. The melt.array function of the R reshape2 package was handy here, and I was able to use the sampling-without-replacement of the R built-in sample to achieve the appropriate election. The volunteers had a strict constraint on the total number of times they wanted to visit stores. They also did not want to survey before 9:00 a.m. or after 10:00 p.m. The code in the R file generateSamplingPlan.R produces several options, based upon the setting of the N.stage1 variable. The result is a sampling plan which looks a little like this:

The code and data supporting this post are available in a repository. Note that it is live, and exists to support an ongoing project, so there is no promise of stability. Note, however, that it is subject to Google’s version control system.

So, what happens after the regulation or ordinance is adopted? What’s the sampling plan to find out how things are going? At first it seems that simply repeating the days and times would be best.
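The traffic-weighted, without-replacement election of survey hours can be sketched as a Python analogue of the R sample() call mentioned above. The hours, traffic scores, and N_stage1 value here are hypothetical:

```python
import numpy as np

# Hypothetical per-hour traffic scores for one store, for starts from
# 9:00 through 21:00 (the volunteers would not survey before 9 a.m.
# or after 10 p.m.). Scores are proportional to estimated visitors.
hours = list(range(9, 22))
traffic = np.array([2., 3., 5., 8., 9., 7., 6., 6., 8., 10., 9., 5., 3.])
p = traffic / traffic.sum()

N_stage1 = 5                      # number of one-hour visits to elect
rng = np.random.default_rng(42)
plan = rng.choice(hours, size=N_stage1, replace=False, p=p)
print(sorted(plan.tolist()))
```

Busier hours are more likely to be elected, so the resulting plan exposes the surveyors to a roughly traffic-representative slice of shoppers, exactly the property argued for above.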
On the other hand, remember that the sampling plan designed was intended to expose data collection to as representative a set of people as could be had, given the constraints on that sampling. So, in principle, it should do no harm at all to generate new sampling plans with the same constraints, ones which, invariably, will give other times and days. They are all vehicles for getting at what the population prefers.

## A lagomorph has an idea which might save the world

Eli, who offers a clever and consistent consumption-based accounting scheme.

## “Renewables are set to penetrate the global energy system more quickly than any fuel in history” (BP, 2019 Energy Outlook)

Selections from BP Energy Outlook: 2019 edition:

In the ET scenario, the costs of wind and solar power continue to decline significantly, broadly in line with their past learning curves. To give a sense of the importance of technology gains in supporting renewables, if the speed of technological progress was twice as fast as assumed in the ET scenario, other things equal, this would increase the share of renewables in global power by around 7 percentage points by 2040 relative to the ET scenario, and reduce the level of CO2 emissions by around 2 Gt. The impact of these faster technology gains is partly limited by the speed at which existing power stations are retired, especially in the OECD. If, in addition to faster technological gains, policies or taxes double the rate at which existing thermal power stations are retired relative to the ET scenario, the reduction in emissions is doubled. This suggests that technological progress without other policy intervention is unlikely to be sufficient to decarbonize the power sector over the Outlook. The ‘Lower carbon power’ scenario described below considers a package of policy measures aimed at substantially decarbonizing the global power sector.
The extent to which the global power sector decarbonizes over the next 20 years has an important bearing on the speed of transition to a lower-carbon energy system. In the ET scenario, the carbon intensity of the power sector declines by around 30% by 2040. The alternative ‘Lower-carbon power’ (LCP) scenario considers a more pronounced decarbonization of the power sector. This is achieved via a combination of policies. Most importantly, carbon prices are increased to $200 per tonne of CO2 in the OECD by 2040 and $100 in the non-OECD – compared with $35-50 in OECD and China (and lower elsewhere) in the ET scenario.

Carbon prices in the LCP scenario are raised only gradually to avoid premature scrapping of productive assets.

There is one gloomy projection. Despite the progress on the world scene,

The share of renewables in the US fuel mix grows from 6% today to 18% by 2040.

If that were to come true, in the context of these other changes, it is possible the United States would be regarded a pariah state and have economic sanctions imposed upon it. But … these projections have several built-in assumptions. Recall, BP is a bit like the U.S. Energy Information Administration and the International Energy Agency (IEA) in that they are established bureaucracies of forecasters. Both EIA and IEA have systematically underestimated the acceleration in solar and wind adoption over the last decade.

Also, it is telling that BP’s assessment regarding the slowness with which wind and solar displace fossil fuel generation rests on the capital costs of retiring existing generation and replacing it. There are two points here.

First, the incremental capital costs for substituting solar+wind+storage for the same unit of fossil fuel energy are much smaller, as long as the accounting is done correctly. In particular, the costs to society are not just the generating plant, but the capital infrastructure needed to mine and bring the fuel to the point of combustion. There are also tremendous Sankey losses associated with Carnot cycle energy production. (See also.) That’s wasted money.

Second, the BP analysis clearly assumes the market and business structure for providing such energy remains intact. That assumption is big, one akin to assuming that there will always be a Sears and always be a Kodak. If, in fact, there are energy sources available at much lower costs per kWh or BTU, the market isn’t going to care about the sunk costs of existing players. It will go around them, and they will either seek government subsidies to remain intact, or economically die.

So, the “pariah state” outcome for the United States is too gloomy. I, instead, see a United States whose economic productivity might be increasingly assaulted by challenges from climate change: impacts to personal wealth and, so, unwillingness to consume at rates comparable to before; direct damage to productive capacity, including extensive damage to supply chains within the country and to the basic infrastructure that permits people to get to their jobs; and costs in insurance and of doing business. But I also see a hunger for cheaper everything, especially energy, and a thriving market willing to supply that with wind and solar and storage, widely distributed, overcoming zoning and other objections, because many people will have abandoned suburbs due to affordability and proximity to work, and because the cost of energy from zero Carbon sources is so much lower, a tenth of the comparable cost from fossil fuel sources.

It’s one thing to be a zealot for fossil fuels. It’s something else to pass up paying but 10% of the cost of something in pursuit of that zealotry.

## Tit-For-Tat in Repeated Prisoner’s Dilemma: President Donald Trump creates the Green New Deal

Jonathan Zasloff at Legal Planet offers “Donald Trump creates the Green New Deal”. The closing excerpt:

But what goes around comes around. A President Harris, or Warren, or Booker, etc. etc. can just as easily declare a National Emergency on Climate Change — one that would have a far better factual predicate than Trump’s patently false border emergency — and he or she will have a lot more money to move around. After all, a lot of the climate crisis is about infrastructure, and if the relevant statute allows the President to move money from one project to another, then it is very easy to do that. Or the $100 billion that DOD has for national security emergencies: given that both the Pentagon and the heads of the national intelligence agencies have already said that climate represents a serious national security challenge, it’s not a hard legal lift (assuming intellectually honest and consistent judges, which of course we cannot). This fund must be for a military purpose, and a smarter, more energy efficient energy grid could do the trick.

It’s no way to run a democracy. But Trump and the GOP have made it clear that they do not believe in democracy, and as Robert Axelrod demonstrated years ago in his classic book The Evolution of Cooperation, the best strategy in repeat-player games to facilitate cooperation is playing Tit-For-Tat. See also Generous Tit-For-Tat.

### Update, 2019-02-18

Dan Farber writes on “National Security, Climate Change, and Emergency Declarations” at Legal Planet that:

If the Supreme Court upholds Trump, it will have to uphold an emergency declaration for climate change. One reason why it would be hard for the Supreme Court to overturn a climate change declaration is that some attributes of climate change and immigration are similar. Both issues involve the country’s relations with the outside world, an area where presidential powers are strong. But it isn’t as if we suddenly found out about border crossings or climate change.
Given these similarities, it would be very difficult for the conservative majority to explain why it was deferring to the President in one case but not the other. The only major difference actually cuts strongly in favor of an emergency declaration for climate change: The U.S. government has already classified climate change as a serious threat to national security, and it is a threat that is getting stronger daily. Recent science indicates that climate action is even more urgent than we thought. Trump’s stated justification in his proclamation is that “the problem of large-scale unlawful migration through the southern border is long-standing, and despite the executive branch’s exercise of existing statutory authorities, the situation has worsened in certain respects in recent years.” Climate change, too, is a “longstanding problem,” and it certainly has gotten worse despite the effort of the executive branch (Obama) to address the problem. Federal agencies, as well as Congress, have made it clear that climate is a serious threat to our nation.

## “What’s new with recycling”

I spoke in Norwell, at the South Shore Natural Science Center, a couple of weeks ago:

## “Is the Green New Deal’s ambition smart policy?”

Ann Carlson is the Shirley Shapiro Professor of Environmental Law and the co-Faculty Director of the Emmett Institute on Climate Change and the Environment at UCLA School of Law. Writing at Legal Planet, she takes on assessing the Green New Deal, admitting she is “conflicted about a proposal that seems untethered to what is actually achievable.” She begins:

At the heart of the Green New Deal — which demands slashing U.S.
carbon emissions by 2030 by shifting to 100 percent clean energy — is a major conundrum. Even the most enthusiastic proponents of ambitious climate policy don’t believe the goals are achievable, technologically let alone politically. Stanford Professor Mark Z. Jacobson, for example, among the most ardent advocates for decarbonizing the electricity grid completely, believes that we can achieve 100 percent renewable energy by 2050, three decades after the Green New Deal’s target date. Ernest Moniz, the former Secretary of Energy under President Obama, laments that he “cannot see how we could possibly go to zero carbon in a 10-year time frame.” A number of columnists have noted that the Green New Deal will never become law because of its expense, its political impracticability and its technological infeasibility. And yet, the Green New Deal has attracted huge public support, the endorsement of all of the 2020 Democratic candidates for President, and a large number of Senators and members of Congress. It promises to mobilize a generation of young activists to work to solve the existential crisis of their lives.

Read on. She’s more optimistic than it sounds, although, I think, Professor Carlson is realistic. I remarked in a comment:

I wish the GND proponents well, too, although I worry about a couple of things.

First, the comparison with other environmental programs, while inspiring, is a little inappropriate. There has never been a problem of this scale, nor one whose amplification is so thoroughly integrated with the daily comforts of affluent humans. Fossil fuels do have high energy densities, and that can be convenient. Also, related to this, benefits do not accrue if we simply cease emitting. We have a timetable, and Nature will not scrub the harmful materials on any reasonable human timetable, and conditions at the moment we succeed at achieving zero emissions will persist for centuries.
The alternative, artificial removal of atmospheric CO2, is both horrifically expensive (multiples of 2014 Gross World Product at present prices) and pursuit of the technology has been explicitly rejected by GND proponents. (They’ve ruled out advanced nuclear technologies, too.)

Second, without policy which is “tethered to what is actually achievable”, the GND suggests the bar is lower than it actually is and could, in itself, both present a moral hazard and make people think climate change is not being mitigated purely for reasons of politics and greed. (This is in bounds because the rejection of negative emissions technology is done because it, too, could be a kind of moral hazard.) Sure, those are involved, but it is also true people don’t like the things that a GND-style solution, or a Professor Mark Z. Jacobson solution, entail. In my opinion, their choice is silly, but people are people.

Third, aspirational, engineering-free solutions to big, big problems are likely to founder, because they won’t assess and contain their own complications, particularly if they are rushed. Uncoordinated rollout of zero Carbon energy won’t only trash pieces of the grid, which will have repercussions for the less well off and people of color, but could also exacerbate climate conditions and regional weather. Large scale plantings, for example, of Jatropha curcas, thought to be a way of doing rapid CO2 drawdown and producing biodiesel oils, could change albedo in the wrong direction for the arid regions it loves, and, indeed, could do itself in if those same regions transform into tropics. Uncoordinated rollouts of wind farms will affect weather system energies. That’s no reason not to do it, but it needs to be studied and thought through.

Fourth, there is (still) a substantial education component needed, one done in a manner that evades the impression climate change-fixing proponents are pulling their punches.
For if byproducts of climate change are severe enough to move people into action, and to get them to accept sacrifices needed to do so, then they probably will expect to see improvements once these changes are made. The science says that expectation is unreasonable, because of the inertia of the climate system and because the human emissions impact is a perturbation on a geological scale in a geological moment. The political ramifications of this realization are both difficult to assess and could be damaging to the long term health of the collective project.

I did not mention other things, such as the intrinsic greenhouse gas emissions from agriculture, even if planting, harvesting, fertilization, transport, and processing are all decarbonized. Cement production is a big piece of emissions, too. The troubling thing is that the GND doesn’t mention these: It focuses almost exclusively upon energy.

### Update, 2019-02-11, 23:45 ET

## From the YEARS Project: How Climate Impacts Mental Health (#climatefacts)

Also the magnificent “We should never have called it Earth“, also from Dr Marvel. In “Hope, despair and transformation: Climate change and the promotion of mental health and wellbeing“, Fritze, Blashki, Burke, and Wiseman [International Journal of Mental Health Systems, 2008, 2(13)] note in a section titled “Emotional distress arising from awareness of climate change as global environmental threat”:

The question that McKibben raises is how psychologically, emotionally and politically should we as human beings respond to this fundamental change in the relationship between the human species and the world we inhabit? . . . For many people, the resulting emotions are commonly distress and anxiety. People may feel scared, sad, depressed, numb, helpless and hopeless, frustrated or angry. Sometimes, if the information is too unsettling, and the solutions seem too difficult, people can cope by minimising or denying that there is a problem, or avoiding thinking about the problems.
They may become desensitised, resigned, cynical, skeptical or fed up with the topic. The caution expressed by climate change skeptics could be a form of denial, where it involves minimising the weight of scientific evidence/consensus on the subject. Alternatively, it could indicate that they perceive the risks of change to be greater than the risks of not changing, for themselves or their interests … . . . Notwithstanding the enormity of the climate change challenge, we know what many of the solutions are, and there are many actions that citizens can take individually and collectively to make a difference at household, local, national and global level. When people have something to do to solve a problem, they are better able to move from despair and hopelessness to a sense of empowerment.

Blashki, et al include a table from the Australian Psychological Society about how individuals can respond to the stress of being aware of climate change and its impacts. Finally, there is the tongue-in-cheek yet serious work by Nye and Schwarzenegger.

## Alright! I’m tired of all this serious shtuff … It’s time for some CLIMATE ADAM!

## Status of Solar PV in Massachusetts

At Solar Power Northeast, the DOER of Massachusetts noted that with the mandated 400 MW of qualified projects program review upcoming, and heavy volume deployed in National Grid territory, there is strong consideration to expand and evolve the SMART program.

## “Applications of Deep Learning to ocean data inference and subgrid parameterization”

This is another nail in the coffin of the claim I heard at last year’s Lorenz-Charney Symposium at MIT that machine learning methods would not make a serious contribution to advancements in the geophysical sciences.

### T. Bolton, L. Zanna, “Applications of Deep Learning to ocean data inference and subgrid parameterization“, Journal of Advances in Modeling Earth Systems, 2019, 11.

## The shelf-break front, fisheries, climate change, and finding things out

Claire and I do.
## Wake up, Massachusetts! Especially, Green Massachusetts!

I’ve been looking over the set of bills proposed for the current Massachusetts legislative session. There are many of them, all dealing with aspects of greening energy supply and transport. And Governor Baker’s S.10 is very welcome. (By the way, I don’t see any counter-proposals from those who don’t like the Governor politically, so, I’d say, they have no right to complain.)

Adaptation to climate in Massachusetts is a serious thing, and there will be many uncomfortable choices we’ll be facing soon, both pocketbook choices and choices of social equity. Indeed, many of the bills have environmental justice and social justice aspects. I’m all for that, as long as these are put in perspective.

It’s 2019. While Massachusetts has a Global Warming Solutions Act, it’s far from perfect, putting up an imperfect target of 2050 and, even then, deliberately excluding whole classes of emissions, such as waste-to-energy facilities. Even accepting it as a great goal, even if the impacts upon Massachusetts are controlled by many and varied parties all over the world, the Commonwealth currently has no believable roadmap for achieving those goals which are, after all, a law. This is especially true relating to transportation and to heating of homes. The world’s bullseye for containing emissions — a long shot — is 2030. Some say even that’s too late, given we’ve made so little progress, and governments and communities are faced with buying fossil fuel infrastructure and retiring it early, well ahead of the end of its depreciation lifetime. All the evidence, year after year, is that the rate of impact from climate change is accelerating.
What Massachusetts faces is the discomfort and significant cost of purchasing homes — at a substantial loss to their owners, and a loss in tax base for their towns — on the coasts and inland, homes which are too risky for their inhabitants, their towns, and the Commonwealth to permit their owners to continue to live there. This is called managed retreat (see also). And I see nothing, other than S.10, which begins to address this. And S.10 is modest.

I also don’t see on the energy side a developed appreciation for What’s Happening Out There. Climate change is important. It is the issue. Environmental justice or not, social justice or not, if this problem is not solved, none of the progress that has been made in 150 years of social advancement will matter: “All the good you’ve done, all the good you can imagine doing will be wiped out, just wiped out ….” (Van Jones). But, and these aspects are good, that’s not the only dynamic for which Massachusetts needs to plan.

Have you looked at solar and wind costs to generate a kWh of electricity recently? They are tearing through the floor, especially onshore wind, soon to be followed by solar. Why? Because Mr Market is seeing that their plummeting costs are not fantasies — Forbes writes about this all the time these days — they are a result of a differentiating technology, and that, yeah, there’s a pony in the barn. Solar and wind, supported by and supporting expanding energy storage, are going to Eat the Lunch of everyone in the energy industry. And this is happening with the fiercest antagonist to these technologies occupying the United States White House, with many supporting opponents numbered among the Republicans of Congress. Imagine what they will do with tailwinds.

But there’s a problem. Massachusetts residents do not like to live near wind turbines or even large solar farms.
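Part of that reaction is about the land solar and wind need. As a rough illustration of the area involved, here is an assumption-laden sketch: the annual consumption, the delivered solar power density, and the state area below are all ballpark figures I am assuming for the arithmetic, not sourced numbers:

```r
# Rough land-requirement sketch for supplying Massachusetts electricity with
# solar. Every input below is a ballpark assumption for illustration only.
annual_consumption_twh  <- 55     # assumed MA electricity consumption, TWh per year
solar_power_density_wm2 <- 7      # assumed average delivered power per m^2 of solar farm
ma_area_km2             <- 27000  # approximate area of Massachusetts

hours_per_year  <- 8766                                        # 365.25 * 24
average_power_w <- annual_consumption_twh * 1e12 / hours_per_year
area_km2        <- (average_power_w / solar_power_density_wm2) / 1e6  # m^2 -> km^2
share_of_state  <- area_km2 / ma_area_km2

round(c(area.km2 = area_km2, share = share_of_state), 3)
```

On these assumptions the requirement is on the order of 900 square kilometers, a few percent of the state: feasible, but not invisible, which is why siting near where people live is where the argument lands.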
Some complain that solar farms cause leveling of new growth forest — even if new growth forest does little or nothing to sequester CO2 — and impact habitat. And they just don’t like the looks. Massachusetts residents who say these things are really complaining about the low energy density per unit area which solar and wind have. That’s true. Fossil fuels have a high energy density. Nuclear power has a high energy density. Hydropower has a reasonably high energy density, but you can’t just find it anywhere. If you want to supply energy needs with wind and solar, you need a lot of land. Massachusetts isn’t a big state. Accordingly, if you want to supply energy needs with wind and solar, you need to build them close to where people live. That’s better, in fact, because then you don’t need to run ecosystem-destroying transmission lines through forests.

If this is unacceptable, and you don’t want CO2 emissions, there is no choice but nuclear power or hydropower. As I noted, there’s only so much hydropower, and there needs to be cooperation from the people who live in the states the transmission needs to cut through in order to get access to it. Nuclear power, as presently practiced, has a large cost problem. There are measures being pursued to fix that, but it’s not clear how soon these will be available. We need nuclear power that’s modular, with small units that can be combined into arbitrary sizes, that can be toggled on and off as needed, that’s air-cooled, and where each of the units is portable. We need nuclear power in commodity chunks. The industry chose not to do that in the 1960s and they have suffered with their choice ever since. Modular units can just be trucked away intact if they are broken or need their wastes scrubbed. And if a unit fails, the generation doesn’t all go down, because there are many more companions generating.
Having cooling water is an ecological and climate problem — many reactors need to go offline if their nearby cooling rivers dry up in droughts — so air cooling is a natural response. But nuclear power isn’t popular. Facts are, unless Massachusetts residents opt for onshore wind turbines and big solar, both backed by substantial storage, all located near residentially zoned areas, they are going to end up with natural gas as their energy supply. It’s dense. It can be hidden.

But, if they do, the future of Massachusetts not only lacks a clean energy future, it is also the future of a rustbelt. That’s because natural gas will eventually be the most expensive energy source. Coal and oil will be long gone. Conventional nuclear power is too expensive even now, because its plants suffer from a negative learning curve. Everyone will be using wind and solar, backed in places by storage, but as everyone adopts these, the storage will be needed less and less.

What will be Massachusetts’ fate? With expensive electrical energy, not only will companies not want to do business in Massachusetts because their energy supply isn’t clean, an increasingly important criterion over time, due to shareholders and customers, but it will be the most expensive energy anywhere. It will get worse. The companies supplying Massachusetts don’t live in isolation. Selling natural gas anywhere will become more and more difficult, and some, and eventually all, of those companies will go bankrupt. To maintain energy, Massachusetts will need to buy those assets and run them, perhaps by giving them to someone else to run, but this will be expensive, and this will go on the tax base. That will be an additional disincentive for companies to build and work in Massachusetts, and for people to live in Massachusetts. In addition, there will be the inevitable costs and charges from climate change.
Over these, Massachusetts does not have complete control, but to the degree it doesn’t champion means for zeroing emissions and using 100% zero Carbon energy, it will stifle its significant voice encouraging others that this is a feasible model. That voice can do more to nudge the rest of the world in the zero Carbon direction, much more than anything Massachusetts will do by zeroing its own emissions. These costs will ultimately fall on the Commonwealth’s books and, so, upon the taxpayers, whether they like it or not, whether or not the ability of the Commonwealth to pay is supposedly constrained by law. Solvency is a powerful reason for overturning laws.

So, from what I see, either Massachusetts residents learn to live next to onshore wind and big solar farms, or they choose new nuclear power — and we don’t know how long that’ll take — or they choose natural gas, with the economic downsides I have just described. I don’t think many in the progressive and environmental movements in Massachusetts have thought about these tradeoffs. They somehow think demand can be reduced so these tradeoffs are not necessary. They are not thinking quantitatively, or, for that matter, factually. It appears to me many of them have an agenda to pursue, and evidence just gets in the way. This is not serving the Commonwealth.

Climate reality is an elixir which exposes the truth. Whether it’s Thwaites Glacier, or the slowdown of the Gulf Stream, or excessive precipitation, Massachusetts will need to deal with these. Fortunately, should Massachusetts residents change their minds, onshore wind turbines are very easy and inexpensive to construct, as are big solar farms. And flooded properties are cheap to buy up.

What kind of future do you want, Massachusetts? Do you want to plan, and help it be a good one? Or do you want to bury your head in the ever eroding sand?

## “Climate change is coming for your assets”

## Repeating Bullshit

# Yeah, how much was it? And was it different?
I mean, not based on how Curry or Tisdale feel, but by the numbers.

Question: How does a dumb claim go from just a dumb claim to accepted canon by the climate change denialati?

Answer: Repetition. Yes, keep repeating it. If it’s contradicted by evidence, ignore that or insult that. Repeat it again. If you’re asked for evidence, ignore that or insult that, just keep repeating it. That’s how things get burned into brains.

## Stream flow and P-splines: Using built-in estimates for smoothing

Mother Brook in Dedham, Massachusetts was the first man-made canal in the United States. Dug in 1639, it connects the Charles River at Dedham to the Neponset River in the Hyde Park section of Boston. It was originally an important source of water for Dedham’s mills. Today it serves as an important tool for flood control on the Charles River. Like several major river features, Mother Brook is monitored by flow gauges maintained by the U.S. Geological Survey, with careful eyes kept on their data both by agencies of the Commonwealth of Massachusetts, like its Division of Ecological Restoration, and by interested private organizations, like the Neponset River Watershed Association and the Charles River Watershed Association. (I am a member of the Neponset River Watershed Association.) The data from these gauges are publicly available. Such a dataset is a good basis for talking about a non-parametric time series smoothing technique using P-splines (penalized B-splines), an example of local regression, and taking advantage of the pspline package to do it.
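The built-in smoothing-parameter selection that motivates this choice can be sketched with base R's smooth.spline, which also picks its parameter by generalized cross validation. The data below are synthetic, not the Mother Brook series, and the post's actual fits use pspline::smooth.Pspline; this is just the principle in miniature:

```r
# Sketch of GCV-driven choice of the smoothing parameter, using base R's
# smooth.spline on synthetic data (a noisy sine wave).
set.seed(42)
x <- seq(0, 4*pi, length.out=400)
y <- sin(x) + rnorm(length(x), sd=0.3)

fit <- smooth.spline(x=x, y=y, cv=FALSE)  # cv == FALSE selects spar by generalized cross validation
resids <- y - predict(fit, x)$y           # residuals are not returned directly; compute them
SEE <- sd(resids)/sqrt(length(y))         # standard error of the estimate

fit$spar  # the automatically chosen smoothing parameter
SEE
```

The same pattern, fit automatically, then compute residuals and the standard error of the estimate yourself, is what the code below does with smooth.Pspline.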
Since this, like most local regression techniques, demands a choice of a smoothing parameter, this post strongly advocates for pspline as a canonical technique because:

• it features a built-in facility for choosing the smoothing parameter, one based upon generalized cross validation,
• like loess and unlike lowess in R, it permits multiple response vectors and fits all of them simultaneously, and
• with the appropriate choice in its norder parameter, it permits the estimation of derivatives of the fitted curve as well as the curve itself.

Finally, note that while residuals are not provided directly, they are easy to calculate, as will be shown here. In fairness, note that loess allows an R formula interface, but both smooth.Pspline and lowess do not. Also, smooth.Pspline is:

• intolerant of NA values, and
• demands the covariates each be in ascending order.

##### Note from 2019-01-30

Note that the lack of support by the pspline package for the multivariate case has thrown, so to speak, the gauntlet down, in order to find a replacement. Since I’m the one who, in the moment, is complaining the loudest, the responsibility falls to me. So, accordingly, I commit to devising a suitable replacement. I don’t feel constrained by the P-spline approach or package, although I think it foolish not to use it if possible. Such a facility will be the subject of a future blog post. Also, I’m a little joyful because this will permit me reacquaintance with some of the current FORTRAN language definition, using the vehicle of Simply Fortran, and its calling from R. This is sentimental, since my first programming language was FORTRAN IV on an IBM 1620.

### References

For completeness, consider the AdaptFit package and related SemiPar package which also offer penalized spline smoothing but are limited in their support for multiple responses.
##### (Update, 2019-01-29)

I re-encountered this paper by Professor Michael Mann from 2004 which addresses many of these issues: Incidentally, Professor Mann is in part responding to a paper by Soon, Legates, and Baliunas (2004) criticizing estimators of long term temperature trends. The Dr Soon of that trio is the famous one from the Heartland Institute who has been mentioned at this blog before.

### The dataset

What does stream flow on Mother Brook look like? Here’s eight years of it:

##### (Click on image for a larger figure, and use browser Back Button to return to blog.)

### Smoothing with P-splines, Generalized Cross Validation

Using a cubic spline model, the package pspline finds a smoothing parameter (“spar“) of 0.007 is best, giving a Standard Error of the Estimate (“SEE”) of 0.021:

##### (Click on image for a larger figure, and use browser Back Button to return to blog.)

Forcing the spline fit to use spar values which are larger, one of 0.5, and one of 0.7, produces a worse fit. This can also be seen in their larger G.C.V. criteria, of 228 and of 237, compared with the automatic 185:

##### (Click on image for a larger figure, and use browser Back Button to return to blog.)

##### (Click on image for a larger figure, and use browser Back Button to return to blog.)

### Code

The code for generating these results is shown below.

#
# Mother Brook, P-spline smoothing, with automatic parameter selection.
# Jan Galkowski, bayesianlogic.1@gmail.com, 27th January 2019.
# Last changed 28th January 2019.
#
library(random)    # For external source of random numbers
library(FRACTION)  # For is.wholenumber
library(tseries)   # For tsbootstrap
library(pspline)

source("c:/builds/R/plottableSVG.R")

randomizeSeed<- function(external=FALSE)
{
  #set.seed(31415)
  # Futz with the random seed
  if (!external)
  {
    E<- proc.time()["elapsed"]
    names(E)<- NULL
    rf<- E - trunc(E)
    set.seed(round(10000*rf))
  } else
  {
    set.seed(randomNumbers(n=1, min=1, max=10000, col=1, base=10, check=TRUE))
  }
  return( sample.int(2000000, size=sample.int(2000, size=1), replace=TRUE)[1] )
}

wonkyRandom<- randomizeSeed(external=TRUE)

stopifnot( exists("MotherBrookDedham") )

seFromPspline<- function(psplineFittingObject, originalResponses, nb=1000, b=NA)
{
  stopifnot( "ysmth" %in% names(psplineFittingObject) )
  #
  ysmth<- psplineFittingObject$ysmth
  #
  if (is.null(dim(originalResponses)))
  {
    N<- length(which(!is.na(ysmth)))
    stopifnot( length(originalResponses) == N )
  } else
  {
    stopifnot( all( dim(originalResponses) == dim(ysmth) ) )
    N<- nrow(ysmth)
  }
  #
  if (is.na(b))
  {
    b<- round(N/3)
  } else
  {
    stopifnot( is.wholenumber(b) && (4 < b) && ((N/100) < b) )
  }
  #
  R<- originalResponses - ysmth
  #
  # Don't assume errors are not correlated. Use the Politis and Romano stationary
  # bootstrap to obtain estimates of standard deviation(s) and Mean Absolute Deviation(s),
  # where these are plural if there is more than one response.
  #
  # The standard error of the estimate is then just adjusted for the number of non-NA
  # observations.
  #
  if (is.null(dim(originalResponses)))
  {
    Ny<- 1
    booted.sd<- tsbootstrap(x=R, nb=nb, statistic=function(x) sd(x, na.rm=TRUE), m=1, b=b, type="stationary")
    SD<- mean(booted.sd$statistic)
    SEE<- SD/sqrt(N)
    booted.mad<- tsbootstrap(x=R, nb=nb, statistic=function(x) mad(x, constant=1, na.rm=TRUE), m=1, b=b, type="stationary")
    MAD<- mean(booted.mad$statistic)
  } else
  {
    Ny<- ncol(ysmth)
    SD<- rep(NA, Ny)
    SEE<- rep(NA, Ny)
    MAD<- rep(NA, Ny)
    for (j in (1:Ny))
    {
      nonNA<- which(!is.na(R[,j]))
      booted.sd<- tsbootstrap(x=R[nonNA,j], nb=nb, statistic=function(x) sd(x, na.rm=TRUE), m=1, b=b, type="stationary")
      SD[j]<- mean(booted.sd$statistic)
      SEE[j]<- SD[j]/sqrt(length(nonNA))
      booted.mad<- tsbootstrap(x=R[nonNA,j], nb=nb, statistic=function(x) mad(x, constant=1, na.rm=TRUE), m=1, b=b, type="stationary")
      MAD[j]<- mean(booted.mad$statistic)
    }
  }
  return(list(multivariate.response=!is.null(dim(originalResponses)), number.of.responses=Ny,
              SD=SD, SEE=SEE, MAD=MAD))
}

MotherBrookDedham.nonNA<- which(!is.na(MotherBrookDedham$gauge))

# Note method == 3 is Generalized Cross Validation (Craven and Wahba, 1979), and
# the value of spar is an initial estimate. The choice of norder == 2 is arbitrary.
MotherBrookDedham.fitting<- smooth.Pspline( x=MotherBrookDedham.nonNA, y=MotherBrookDedham$gauge[MotherBrookDedham.nonNA],
                                            norder=2, spar=0.3, method=3)

# Using 90 days as mean block length, about a quarter of a year
MotherBrookDedham.estimate.bounds<- seFromPspline(psplineFittingObject=MotherBrookDedham.fitting,
                                                  originalResponses=MotherBrookDedham$gauge[MotherBrookDedham.nonNA],
                                                  nb=1000, b=91)

fx<- openSVG(root="MotherBrookDedham-RawFlowData-Daily-withSmooth", width=24, height=round(24/2), pointsize=8)

plot(MotherBrookDedham$gauge, type="n", xaxt="n", ylab="mean (over day) cubic feet per second", main="",
     xlab="", cex.lab=2, cex.axis=2, ylim=c(-80, 650))
title(main=sprintf("Raw flow data, Mother Brook at Dedham, agency %s, site %s, fit with cubic smoothing spline",
                   MotherBrookDedham$agency_cd[1], MotherBrookDedham$site_no[1]),
      cex.main=3, font.main=2, family="Times")
N<- nrow(MotherBrookDedham)
S<- seq(1, N, 30)
axis(side=1, at=S, line=-13, labels=MotherBrookDedham$datetime[S], las=2, cex.axis=2, font.axis=2, cex.lab=1.5, tick=FALSE)
abline(v=S, lty=6, col="grey")
points(1:N, MotherBrookDedham$gauge, pch=21, cex=1.2, col="blue", bg="blue")
lines(MotherBrookDedham.nonNA, MotherBrookDedham.fitting$ysmth, lwd=1, lty=1, col="green")
text(which.max(MotherBrookDedham.fitting$ysmth), max(MotherBrookDedham.fitting$ysmth), pos=2, offset=2,
     font=2, cex=2, labels=sprintf("Found smoothing SPAR = %.3f, and G.C.V. value = %.1f",
                                   MotherBrookDedham.fitting$spar, MotherBrookDedham.fitting$gcv), family="Helvetica")
text(which.max(MotherBrookDedham.fitting$ysmth), 0.95*max(MotherBrookDedham.fitting$ysmth), pos=2, offset=2,
     font=2, cex=2, labels=sprintf("SD = %.3f, MAD = %.3f, SEE = %.3f",
                                   MotherBrookDedham.estimate.bounds$SD, MotherBrookDedham.estimate.bounds$MAD,
                                   MotherBrookDedham.estimate.bounds$SEE), family="Helvetica")
closeSVG(fx)

# Force the same P-spline to use an arbitrary smoother SPAR by electing method == 1, and setting SPAR = 0.5.
MotherBrookDedham.fitting.p5<- smooth.Pspline(x=MotherBrookDedham.nonNA, y=MotherBrookDedham$gauge[MotherBrookDedham.nonNA],
                                              norder=2, spar=0.5, method=1)
# Using 90 days as mean block length, about a quarter of a year
MotherBrookDedham.estimate.bounds.p5<- seFromPspline(psplineFittingObject=MotherBrookDedham.fitting.p5,
                                                     originalResponses=MotherBrookDedham$gauge[MotherBrookDedham.nonNA], nb=1000, b=91)

fx<- openSVG(root="MotherBrookDedham-RawFlowData-Daily-withSmooth-with-SPARp5", width=24, height=round(24/2), pointsize=8)

plot(MotherBrookDedham$gauge, type="n", xaxt="n", ylab="mean (over day) cubic feet per second", main="",
     xlab="", cex.lab=2, cex.axis=2, ylim=c(-80, 650))
title(main=sprintf("Raw flow data, Mother Brook at Dedham, agency %s, site %s, fit with cubic smoothing spline",
                   MotherBrookDedham$agency_cd[1], MotherBrookDedham$site_no[1]),
      cex.main=3, font.main=2, family="Times")
N<- nrow(MotherBrookDedham)
S<- seq(1, N, 30)
axis(side=1, at=S, line=-13, labels=MotherBrookDedham$datetime[S], las=2, cex.axis=2, font.axis=2, cex.lab=1.5, tick=FALSE)
abline(v=S, lty=6, col="grey")
points(1:N, MotherBrookDedham$gauge, pch=21, cex=1.2, col="blue", bg="blue")
lines(MotherBrookDedham.nonNA, MotherBrookDedham.fitting.p5$ysmth, lwd=1, lty=1, col="green")
text(which.max(MotherBrookDedham.fitting.p5$ysmth), max(MotherBrookDedham.fitting.p5$ysmth), pos=2, offset=2, font=2, cex=2,
     labels=sprintf("Found smoothing SPAR = %.3f, and G.C.V. value = %.1f",
                    MotherBrookDedham.fitting.p5$spar, MotherBrookDedham.fitting.p5$gcv),
     family="Helvetica")
text(which.max(MotherBrookDedham.fitting.p5$ysmth), 0.95*max(MotherBrookDedham.fitting.p5$ysmth), pos=2, offset=2, font=2, cex=2,
     labels=sprintf("SD = %.3f, MAD = %.3f, SEE = %.3f",
                    MotherBrookDedham.estimate.bounds.p5$SD, MotherBrookDedham.estimate.bounds.p5$MAD,
                    MotherBrookDedham.estimate.bounds.p5$SEE),
     family="Helvetica")
closeSVG(fx)

# Force the same P-spline to use an arbitrary smoother SPAR by electing method == 1, and setting SPAR = 0.7.
MotherBrookDedham.fitting.p7<- smooth.Pspline(x=MotherBrookDedham.nonNA, y=MotherBrookDedham$gauge[MotherBrookDedham.nonNA],
                                              norder=2, spar=0.7, method=1)
# Using 90 days as mean block length, about a quarter of a year
MotherBrookDedham.estimate.bounds.p7<- seFromPspline(psplineFittingObject=MotherBrookDedham.fitting.p7,
                                                     originalResponses=MotherBrookDedham$gauge[MotherBrookDedham.nonNA], nb=1000, b=91)

fx<- openSVG(root="MotherBrookDedham-RawFlowData-Daily-withSmooth-with-SPARp7", width=24, height=round(24/2), pointsize=8)

plot(MotherBrookDedham$gauge, type="n", xaxt="n", ylab="mean (over day) cubic feet per second", main="",
     xlab="", cex.lab=2, cex.axis=2, ylim=c(-80, 650))
title(main=sprintf("Raw flow data, Mother Brook at Dedham, agency %s, site %s, fit with cubic smoothing spline",
                   MotherBrookDedham$agency_cd[1], MotherBrookDedham$site_no[1]),
      cex.main=3, font.main=2, family="Times")
N<- nrow(MotherBrookDedham)
S<- seq(1, N, 30)
axis(side=1, at=S, line=-13, labels=MotherBrookDedham$datetime[S], las=2, cex.axis=2, font.axis=2, cex.lab=1.5, tick=FALSE)
abline(v=S, lty=6, col="grey")
points(1:N, MotherBrookDedham$gauge, pch=21, cex=1.2, col="blue", bg="blue")
lines(MotherBrookDedham.nonNA, MotherBrookDedham.fitting.p7$ysmth, lwd=1, lty=1, col="green")
text(which.max(MotherBrookDedham.fitting.p7$ysmth), max(MotherBrookDedham.fitting.p7$ysmth), pos=2, offset=2, font=2, cex=2,
     labels=sprintf("Found smoothing SPAR = %.3f, and G.C.V. value = %.1f",
                    MotherBrookDedham.fitting.p7$spar, MotherBrookDedham.fitting.p7$gcv),
     family="Helvetica")
text(which.max(MotherBrookDedham.fitting.p7$ysmth), 0.95*max(MotherBrookDedham.fitting.p7$ysmth), pos=2, offset=2, font=2, cex=2,
     labels=sprintf("SD = %.3f, MAD = %.3f, SEE = %.3f",
                    MotherBrookDedham.estimate.bounds.p7$SD, MotherBrookDedham.estimate.bounds.p7$MAD,
                    MotherBrookDedham.estimate.bounds.p7$SEE),
     family="Helvetica")
closeSVG(fx)
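
The standard errors above come from `seFromPspline`, part of the utility linked below; its arguments (`nb=1000` resamples, block length `b=91`, with the comment about 90-day blocks) suggest a block bootstrap over roughly quarter-year blocks. As an illustration only, here is a minimal moving-block bootstrap sketch in Python (the post's code is R; the synthetic residual series, block length, and resample count here are arbitrary):

```python
import numpy as np

def block_bootstrap_se(residuals, nb=1000, b=91, seed=7):
    """Moving-block bootstrap: resample contiguous length-b blocks
    (preserving short-range serial dependence), recompute the residual
    SD on each of nb resamples, and return the spread of those SDs."""
    rng = np.random.default_rng(seed)
    n = len(residuals)
    n_blocks = -(-n // b)                     # ceiling division: blocks to cover n
    sds = np.empty(nb)
    for k in range(nb):
        starts = rng.integers(0, n - b + 1, size=n_blocks)
        sample = np.concatenate([residuals[s:s + b] for s in starts])[:n]
        sds[k] = sample.std(ddof=1)
    return sds.std(ddof=1)

# Synthetic AR(1) "residuals", serially dependent like daily flows:
rng = np.random.default_rng(0)
e = np.empty(3000)
e[0] = rng.normal()
for t in range(1, 3000):
    e[t] = 0.6 * e[t - 1] + rng.normal()
print(block_bootstrap_se(e, nb=200, b=91))
```

The block length matters: blocks shorter than the dependence scale understate uncertainty, which is presumably why a seasonal-scale 91 days was chosen.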


The code is available online here and requires a utility from here.

### So, what’s the point?

Having a spline model for a dataset actually offers a lot. First, the estimates of SEE and MAD give some idea of how accurate prediction using the model might be. With eight years of data, such models are in hand.
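
For concreteness, here is a small Python sketch of computing those residual summaries (SD, MAD, SEE) from a generic smoothing spline — an illustration only, since the post's actual fits use R's `smooth.Pspline`; the seasonal signal, noise level, and smoothing factor below are all invented:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.arange(730.0)                                 # two years, daily
signal = 200 + 150 * np.sin(2 * np.pi * x / 365.25)  # seasonal "flow"
y = signal + rng.normal(0, 25, size=x.size)          # noisy observations

# Cubic smoothing spline; s targets the total squared residual n * sigma^2.
fit = UnivariateSpline(x, y, k=3, s=x.size * 25.0**2)
resid = y - fit(x)

SD  = resid.std(ddof=1)                              # residual standard deviation
MAD = np.median(np.abs(resid - np.median(resid)))    # robust spread
SEE = np.sqrt(np.sum(resid**2) / (x.size - 2))       # standard error of estimate
print(SD, MAD, SEE)
```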

Also, having a spline model is the basis for detecting changes in stream flow rates over time. Mother Brook might not be the best example of long run stream flow rates, since the Army Corps can change their policies in how they manage it, but the same kinds of flow time series are available for many other flows in the region.

To the point about changes in flow rates, having a spline model permits estimating derivatives, which, in this case, are exactly those rates of change.
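
A quick illustration of the idea with invented data, using scipy's generic smoothing spline (the post's P-spline fits expose the same capability on the R side):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 4 * np.pi, 500)
y = np.sin(x) + rng.normal(0, 0.05, x.size)    # noisy signal

fit  = UnivariateSpline(x, y, k=3, s=x.size * 0.05**2)
dfit = fit.derivative()                        # spline for dy/dx

# Away from the edges, the estimated derivative should track cos(x):
err = np.max(np.abs(dfit(x[50:-50]) - np.cos(x[50:-50])))
print(err)
```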

Moving on, once several such flows have been modeled using splines, these can serve as the basis for various kinds of regressions, whether on the response side or on the covariates side. For example, is there statistical evidence for a link between stream flows and temperature? The Clausius-Clapeyron relation suggests there should be, at least at the regional and global scale. It would be interesting to examine whether it can be seen here.
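
As a toy illustration of what such a regression might look for (entirely synthetic numbers; Clausius-Clapeyron scaling of saturation vapor pressure is roughly 7% per degree):

```python
import numpy as np

# Synthetic check: if log(flow) responded to temperature at a
# Clausius-Clapeyron-like rate of ~7% per degree, ordinary least
# squares of log(flow) on temperature should recover a slope near 0.07.
rng = np.random.default_rng(4)
temp = rng.normal(15.0, 5.0, size=300)                  # degrees C
logflow = 4.0 + 0.07 * temp + rng.normal(0, 0.2, 300)   # invented relation

slope, intercept = np.polyfit(temp, logflow, 1)
print(round(slope, 3))
```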

To me, it would also be interesting to see if some of the riverine connections in the region could be inferred from examination of flow rates alone. Downstream flows see a pulse of water from precipitation and melt, but their pulses are lagged with respect to earlier ones upstream. Sure, one could examine such connections simply by looking at a map, or Google Earth, but there are other hydrological applications where these connections are latent. In particular, connections between subterranean water sources and surface flows might be revealed if these kinds of inferences are applied to them.
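
One simple way to look for such latent connections is to find the lag that maximizes cross-correlation between two flow series. A hypothetical sketch with synthetic data, where the "downstream" series is the "upstream" one delayed by three days:

```python
import numpy as np

def best_lag(upstream, downstream, max_lag=30):
    """Return the lag (in samples) at which downstream best
    correlates with earlier upstream values."""
    def corr_at(lag):
        if lag == 0:
            a, b = upstream, downstream
        else:
            a, b = upstream[:-lag], downstream[lag:]
        return np.corrcoef(a, b)[0, 1]
    return max(range(max_lag + 1), key=corr_at)

rng = np.random.default_rng(3)
up = rng.normal(size=400).cumsum()            # a wandering "flow"
down = np.roll(up, 3) + rng.normal(0, 0.1, 400)
down[:3] = up[0]                              # patch the wrapped edge
print(best_lag(up, down))
```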

##### (Update, 2019-01-29)

The scholarly literature, such as the paper by Professor Mann cited above, which critiques and explains the one by Soon, Legates, and Baliunas (2004), shows that careful consideration of these techniques matters.

## 50,000+ golf balls, along a coast

KQED carried a story about 16-year-old free diver Alex Weber, who discovered not only a new source of plastic pollution, but another testament to the casual, careless sloppiness of people.

And Ms Weber has converted it into a crusade against marine pollution, and a technical article in a scientific publication. Writing with Professor Matt Savoca of Stanford University, Weber and her dad, Michael Weber, also a co-author of that paper, found over 50,000 balls just offshore of a California golf course, with new ones arriving every day. See her golf ball project page.

A number of the balls are in usable condition:

Quoting from the Conclusion of their article:

In central California, the Pebble Beach Golf Links host 62,000 rounds of golf per year and has been in operation since 1919 (Dunbar, 2018). The average golfer loses 1–3 balls per round (Hansson and Persson, 2012), which implies that between 62,000 and 186,000 golf balls are lost to the environment each year at the Pebble Beach Golf Links. This translates to 3.14–9.42 tons of debris annually. While a portion of these balls is lost to non-oceanic regions adjacent to the course, the coast and intertidal environments still have a high likelihood of accumulating mishit balls. Using a conservative estimate of 10,000–50,000 balls lost to sea annually gives a range of 1–5 million golf balls lost to the coastal environment during the century that this course has been in operation. These projected numbers indicate that this issue has been overlooked for decades.
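
The quoted tonnage is consistent with the regulation mass of a golf ball. A quick arithmetic check, assuming the maximum regulation ball mass of 45.93 g and U.S. short tons:

```python
# Check of the quoted figures, assuming 45.93 g per ball (the maximum
# regulation mass) and U.S. short tons of 907.185 kg.
BALL_KG = 0.04593
SHORT_TON_KG = 907.185

low  = 62_000 * BALL_KG / SHORT_TON_KG     # ~3.14 tons
high = 186_000 * BALL_KG / SHORT_TON_KG    # ~9.42 tons
print(round(low, 2), round(high, 2))
```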

I salute Ms Weber, her dad, and Professor Savoca. And look forward to reading their paper.

### Update, 27th January 2019

Accolades to authors Weber, Weber, and Savoca, and collection colleagues Johnston, Sammet, and Matthews for a most impressive piece of work!

Conditions on the dives are cold and sometimes treacherous. Representative collections take planning and working around environmental and safety constraints. Revisits offered a glimpse of golf ball pollutant dynamics.

And it didn’t stop there: the huge population of golf balls needed to be characterized by age and wear.

The sampling areas and processes needed documentation.

This is a substantial body of field research, backed up by background scholarship.

## “Pelosi won, Trump lost”

From Alex Wagner, contributing editor at The Atlantic and CBS News correspondent. Excerpt from “Pelosi won, Trump Lost“:

“Nancy’s Prerogative” might be the name of an Irish bar, but in this case it signaled the waving of the presidential white flag, a fairly shocking thing to see on any war front. Trump’s pugilistic impulses, after all, have been virtually unchecked—especially these days, when he is without administration minders. But Pelosi has rendered Trump unable to employ his traditional weaponry. He couldn’t even muster the juju necessary to formulate that most Trumpian of Trump battle strategies, a demeaning nickname. “Nancy Pelosi, or Nancy, as I call her,” Trump said on Wednesday, “doesn’t want to hear the truth.”

…

Trump has intersected with powerful women before — Hillary Clinton, most notably — and showed little hesitation to diminish and demean. But Pelosi, who once joked to me she eats nails for breakfast, is a ready warrior. She is happy to meet the demands of war, whereas Clinton was reluctant, semi-disgusted, and annoyed to be dragged to the depths that running against Trump demanded. The speaker of the House is, technically, a coastal elite from San Francisco, but she was trained in the hurly-burly of machine politics of Baltimore by her father, Mayor Thomas D’Alesandro. It is not a coincidence that Pelosi has managed, over and over, to vanquish her rivals in the challenges for Democratic leadership: she flocks to the fight, not just because she usually wins, but apparently because she likes it.

Read it, particularly the quote from former Trump Organization executive Barbara Res, repeated from The New York Times.

There is a similar article at The Washington Post by Jennifer Rubin titled “Trump lost. Period.”

## “Collective reflection” and working together on climate issues in Massachusetts

This is an excerpt from an article which appeared at RealClimate. That, in turn, is a translation of the same article which appeared in Le Monde on 11th January 2019.

Recent discussions at climate-related blogs and among environmental activists make the portions of the excerpt which I have highlighted in bold especially pertinent.

What if the focus on the moods of climate scientists was a way to disengage emotionally from the choices of risk or solutions to global warming? Since the experts are worrying about it for us (it’s their daily life, isn’t it?), let’s continue our lives in peace. If feelings and expressing emotions – fear, anger, anguish, feelings of helplessness, guilt, depression – in the face of risks are legitimate, even necessary, to take action demands that we go beyond that. Catastrophism often leads to denial, a well-known psychic mechanism for protecting oneself from anxiety. Managing risk is part of our daily lives and supposes that we are not in such denial (active or passive) as it prevents clear and responsible action. Because we know that many hazards carry predictable risks, human societies have learned to anticipate and cope, for example, to limit the damage of storms or epidemics. The challenge of climate change is to build a strategy not in response to an acute and clearly identified risk, but in anticipation of a gradual, chronic increase in climate risks.

The climate scientists are alright (mostly), but that’s not the important question. The dispassionate management of climate risk will require that everyone – citizens, decision makers, teachers, intermediate bodies, companies, civil society, media, scientists – in their place and according to their means, take the time for a collective reflection, first of all through mutual listening. The news shows it every day: this process is hobbling along, too slowly for some, too fast for others. It will need to overcome emotional reactions, vested interests, and false information from the merchants of doubt. Those who are unable to review their strategy and have everything to lose from the exit from fossil-fuel based energies will use nit-picks, manipulation, short-termism, and promote binary and divisive visions, all of which undermine trust and pollute the debate. But despite that…

Every degree of warming matters, every year counts, every choice counts. The challenge is immense because of the nature and magnitude of the unprecedented risk. It requires doing everything to overcome indifference and fatalism.

And, in this regard, though obviously without any endorsement from the authors of the above piece, one of the most constructive things the climate-concerned of Massachusetts can do right now, whatever their political background and stripe, is to throw their support behind Governor Baker’s proposal to tax real estate transfers as a funding source for climate mitigation and adaptation. The Globe quoted ELM and other environmental groups as having cautious support for the Governor’s proposal, but to stand on the sidelines and fail to back him against the likes of the Massachusetts Association of Realtors, quoted in the article, and probably Speaker Robert DeLeo, means they are more interested in their side winning than in making progress towards the common goal of mitigating, adapting to, and preventing climate change. I have criticized Governor Baker, too. But this proposal and his Executive Order 569 are really welcome, and I walk back what I said there: the Governor has either learned, or I was wrong in the first place.

I’m not the only one supporting him: Foley-Hoag thinks this is a good idea, but wants the Governor to do more.

The risks are here. The risks are now. There is already a 1-chance-in-100 per year of an 8 inch rain or more in 24 hours. No Massachusetts stormwater infrastructure is capable of dealing with half of that. You think that risk small? There’s a 10% chance of that happening one or more times in 10 years. There’s a 4+% chance of it happening in 5 years. The chance of a 7 inch rain or more in 24 hours is 2% each year. Yet Massachusetts codes still allow 1960s standards for diurnal rain projections. These are no longer the 1960s.
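
Those multi-year odds follow from the annual exceedance probability p via 1 − (1 − p)^n. Checking the figures above:

```python
# Chance of at least one exceedance in n years, given annual probability p.
def at_least_once(p, n):
    return 1.0 - (1.0 - p) ** n

print(round(at_least_once(0.01, 10), 3))   # about 10% over 10 years
print(round(at_least_once(0.01, 5), 3))    # the "4+%" over 5 years
```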

So, which is it, all you people who say you want to fix climate change? Support or not? And if you don’t support this, where is your specific counterproposal? And if you don’t have one, you don’t deserve the label “climate activist” or “environmental activist”. Just settle for “politician”.

### Update, 2019-01-23

A measure and program I find highly constructive is the Ceres Commit to Climate program for corporations.