## Professor Tony Seba, of late

I love it.

Professor Tony Seba, Stanford, 1 week ago.

It means anyone who continues to invest in or support the fossil fuels hegemony will be fundamentally disappointed by the markets. And it serves them right. By efficiency, or momentum, there is no beating energy that has a marginal cost of zero.

As someone in a movie once said, to those who would oppose solar energy: “Go ahead. Make my day.”

Gasoline-powered autos won’t be sidelined because gasoline costs too much; they’ll be sidelined because gasoline costs too little. No one will want it, so service stations won’t be able to cover their overheads, and they’ll close: it won’t be available, because no one will care about it any more.

And, as for the homes and businesses that continue to buy into the “presently wise choice” of natural gas? Hah! What happens when they can no longer get it, their pipeline companies having shut down flows?

It’s a beautiful thing.

Oh, sure, they’ll try to socialize their losses. Hopefully the electorate isn’t foolish enough to accept that.

But, then, they are the electorate and are highly gullible.

## This flooding can’t be stopped. What about the rest?

##### Tamino is writing about this subject, too. That makes complete sense, as it is the biggest geophysical and environmental story out there right now. I’ve included an update at this post’s end discussing the possible economic impacts.

It’s been known for a couple of years that the West Antarctic ice sheet was destabilizing and that this would result in appreciable sea-level rise. What wasn’t known was how widespread this was in Antarctica, and how fast it might proceed.

Well, we’re beginning to find out, and the news isn’t good.

Data below from “Mass balance of the Antarctic ice sheet from 1992 to 2017”, The IMBIE Team, Nature, 2018, 558, 219-222.

Abstract

The Antarctic Ice Sheet is an important indicator of climate change and driver of sea-level rise. Here we combine satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that it lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, which corresponds to an increase in mean sea level of 7.6 ± 3.9 millimetres (errors are one standard deviation). Over this period, ocean-driven melting has caused rates of ice loss from West Antarctica to increase from 53 ± 29 billion to 159 ± 26 billion tonnes per year; ice-shelf collapse has increased the rate of ice loss from the Antarctic Peninsula from 7 ± 13 billion to 33 ± 16 billion tonnes per year. We find large variations in and among model estimates of surface mass balance and glacial isostatic adjustment for East Antarctica, with its average rate of mass gain over the period 1992–2017 (5 ± 46 billion tonnes per year) being the least certain.

## For every centimeter [of sea-level rise] from West Antarctica, Boston feels one and a quarter centimeters. And that extends down the East Coast.

##### — Professor Robert M DeConto, University of Massachusetts, Amherst, Geosciences, as quoted in The Atlantic, 13th June 2018, “After decades of losing ice, Antarctica is now hemorrhaging it”.

S. Kruel, “The impacts of sea-level rise on tidal flooding in Boston, Massachusetts”, Journal of Coastal Research, 2016, 32(6), 1302-1309.

It is important to understand that it is too late to stop this part of the effects of climate change: Boston and other coasts will flood. We can hope that, if the world cuts back on emissions, the rise might slow. But reversing it is out of the question. And, to the degree the world is not keeping on schedule, even the slowing looks out of reach.

But, seriously, it’s unrealistic to think anything else. We have important groups of people (like those who elect Congress and the President) who don’t consider these risks serious, even doubt them, or think that Archangel Michael will come riding down on a big white horse and save us collectively because of Manifest Destiny or some other pious rubbish.

Unfortunately, we did not fund the research to ascertain how fast this could go until very late, and we’ve done essentially nothing so far on a serious scale to try to stop it, setting the impossible condition of having to maintain an American economic boom. That might prove to be the most expensive economic expansion the world has ever seen.

##### Update, 2018-06-17

Beyond the geophysical impact of impending ice sheet collapse, there’s the economic one: If insurance prices don’t head upwards quickly, and real estate prices for expensive homes on the coasts don’t come under downward pressure, the only reason can be that owners expect the U.S. federal government to continue to fund their rebuilding through the Biggert-Waters Act (2012), the Homeowner Flood Insurance Affordability Act (2014), the Stafford Disaster Relief and Emergency Assistance Act (1988), the Disaster Mitigation Act (2000), and the Pets Evacuation and Transportation Standards Act (2006). The wisdom of continuing these in the face of increasing storm costs is being questioned with greater ferocity. (See also.) It’s not difficult to see why:

And, courtesy of the New York Times:

And, courtesy of NOAA:

While some critics (e.g., Pielke, et al, “Normalized hurricane damage in the United States — 1900-2005”) have claimed the increase in losses is because more expensive property is being damaged by otherwise ordinary storms, correcting losses by the Consumer Price Index (CPI) controls for some of that, and the rates of inflation in damage exceed the rates of even the most appreciating real estate values. Moreover, if that’s the reason for the losses, nothing is being done to discourage the practice. Sure, inflation might not be able to be controlled, but (a) it has been very low in recent years and, (b), the CPI-adjusted values from NOAA show that’s not the explanation. The amount lost to disasters is climbing, and the claim that appreciation is all it’s about is disingenuous at best.

At some point the federal government will stop or significantly limit and curtail bailouts of rebuilds, like those of the affluent homes on Alabama’s Dauphin Island. At that point the value of coastal real estate will crest, and it may well plummet: a classic Minsky moment. It would be inadvisable to own coastal real estate when that happens, particularly in towns like Falmouth, Massachusetts. See an article from Forbes which reports climate change is already depressing coastal real estate values by 7%.

## “Will climate change bring benefits from reduced cold-related mortality? Insights from the latest epidemiological research”

### From RealClimate, referring to an article in The Lancet:

Guest post by Veronika Huber

Climate skeptics sometimes like to claim that although global warming will lead to more deaths from heat, it will overall save lives due to fewer deaths from cold. But is this true? Epidemiological studies suggest the opposite. Mortality statistics generally show a distinct seasonality. More people die in the colder winter months than in the warmer summer months. In European countries, for example, the difference between the average number of …

## When linear systems can’t be solved by linear means

Linear systems of equations and their solution form the cornerstone of much Engineering and Science. Linear algebra is a paragon of Mathematics in the sense that its theory is what mathematicians try to emulate when they develop theory for many other less neat subjects. I think Linear Algebra ought to be required mathematics for any scientist or engineer. (For example, I think Quantum Mechanics makes a lot more sense when taught in terms of inner products than just some magic which drops from the sky.) Unfortunately, in many schools, it is not. You can learn it online, and Professor Gilbert Strang’s lectures and books are the best. (I actually prefer the second edition of his Linear Algebra and Its Applications, but I confess I haven’t looked at the fourth edition of the text, just the third, and I haven’t looked at his fifth edition of Introduction to Linear Algebra.)

There’s a lot to learn about numerical methods for linear systems, too, and Strang’s Applications teaches a lot that’s very good there, including the SVD, of which Professor Strang writes “it is not nearly as famous as it should be.” I very much agree. You’ll see it used everywhere, from dealing with some of the linear systems I’ll mention below, to support for Principal Components Analysis in Statistics, to singular-spectrum analysis of time series, to Recommender Systems, a keystone algorithm in so-called Machine Learning work.

The study of numerical linear algebra is widespread and taught in several excellent books. My favorites are Golub and Van Loan’s Matrix Computations, Björck’s Numerical Methods for Least Squares Problems, and Trefethen and Bau’s Numerical Linear Algebra. But it’s interesting how fragile these solution methods are, and how quickly one needs to appeal to Calculus directly with but small changes in these problems. That’s what this post is about.

So what am I talking about? I’ll use small systems of linear equations as examples, despite it being entirely possible and even common to work with systems which have thousands or millions of variables and equations. Here’s a basic one:

$(1)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

written for brevity

$(2)\,\,\,\mathbf{b} = \mathbf{A} \mathbf{x}$

Of course, in any application the equation looks more like:

$(3)\,\,\,\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 3 & 4 & 1 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

In R or MATLAB the result is easily obtained. I work in and evangelize R, so any computation here will be recorded in it. Doing

```r
solve(A, b)
```

or

```r
lm(b ~ A + 0)
```

will produce

$(4)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]$
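For concreteness, here is the whole exchange as a minimal R sketch; the bindings for A and b are mine, entered by hand from (3):

```r
# enter the system (3), then solve it
A <- matrix(c(1, 2, 3,
              2, 1, 4,
              3, 4, 1), nrow=3, ncol=3, byrow=TRUE)
b <- c(12, 4, 16)
solve(A, b)   # returns -3 6 1, matching (4)
```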

It’s also possible to solve several at once, for example, from:

$(5)\,\,\,\left[ \begin{array} {cccc} 12 & 20 & 101 & 200 \\ 4 & 11 & -1 & 3 \\ 16 & 99 & 10 & 9 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 3 & 4 & 1 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

$(6)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right], \left[ \begin{array} {c} 15.25 \\ 15.50 \\ -8.75 \end{array} \right], \left[ \begin{array} {c} -73.75 \\ 51.90 \\ 23.65 \end{array} \right], \left[ \begin{array} {c} -146.25 \\ 99.70 \\ 48.95 \end{array} \right]$
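No loop is needed for the several-at-once case: solve() accepts a matrix right-hand side and returns one solution column per column of it. A sketch of (5)-(6), reusing the A above:

```r
# four right-hand sides stacked as columns, solved in one call
B <- matrix(c(12, 20, 101, 200,
               4, 11,  -1,   3,
              16, 99,  10,   9), nrow=3, ncol=4, byrow=TRUE)
solve(A, B)   # columns match the four solution vectors in (6)
```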

And, of course, having an unknown $\mathbf{b}$ but a known $\mathbf{x}$ is direct, just using matrix multiplication:

$(7)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 3 & 4 & 1 \end{array} \right] \left[ \begin{array} {c} -73.75 \\ 51.90 \\ 23.65 \end{array} \right]$

yielding:

$(8)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {c} 101 \\ -1 \\ 10 \end{array} \right]$
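In R this direction is a one-liner with the %*% operator:

```r
# recovering b from a known x, per (7)-(8)
x <- c(-73.75, 51.90, 23.65)
A %*% x   # returns 101, -1, 10
```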

Linear Algebra gives us sensible ways to interpret inconsistent systems like:

$(9)\,\,\,\left[ \begin{array} {c} 12 \\ 4 \\ 16 \\ 23 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 3 & 4 & 1 \\ 17 & -2 & 11 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

by making reasonable assumptions about what the solution to such a system should mean. R via lm(.) gives:

$(10)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} 1.46655646 \\ 3.00534079 \\ 0.34193795 \end{array} \right]$
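A sketch of how such a least-squares answer is obtained, with A4 and b4 (names mine) holding the 4-by-3 system of (9); the “+ 0” suppresses the intercept term:

```r
# least-squares solution of the inconsistent system (9)
A4 <- matrix(c( 1,  2,  3,
                2,  1,  4,
                3,  4,  1,
               17, -2, 11), nrow=4, ncol=3, byrow=TRUE)
b4 <- c(12, 4, 16, 23)
coef(lm(b4 ~ A4 + 0))   # about 1.467, 3.005, 0.342, as in (10)
```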

Sets of solutions to things like

$(11)\,\,\,\left[ \begin{array} {c} 12 \\ 4 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

can be countenanced and there is even a way which I’ll talk about below for picking out a unique one: the minimum norm solution. This is where the SVD comes in. To learn about all the ways these things can be thought about and done, I recommend:

D. D. Jackson, “Interpretation of inaccurate, insufficient and inconsistent data”, Geophysical Journal International, 1972, 28(2), 97-109.

(That’s an awesome title for a paper, by the way.)
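As a taste of what’s coming, here is one way the minimum norm solution of (11) can be computed via the SVD’s Moore-Penrose pseudoinverse; a sketch, with names of my choosing:

```r
# minimum-norm solution of the underdetermined system (11):
# x = V diag(1/d) U^T b, the pseudoinverse of A2 applied to b2
A2 <- matrix(c(1, 2, 3,
               2, 1, 4), nrow=2, ncol=3, byrow=TRUE)
b2 <- c(12, 4)
s  <- svd(A2)
x.min <- s$v %*% diag(1/s$d) %*% t(s$u) %*% b2
# of all x satisfying A2 %*% x == b2, x.min has the smallest L2 length
```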

### What if there are holes?

Going back to (3), however, suppose instead it looks something like this:

$(12)\,\,\,\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & a_{23} \\ 3 & 4 & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]$

and we don’t know what $a_{23}$ is. Can it be calculated?

Well, it has to be able to be calculated: It’s the only unknown in this system, with the rules of matrix multiplication just being a shorthand for combining things. So, it’s entirely correct to think that the constants could be manipulated algebraically so they all show up on one side of equals, and $a_{23}$ on the other. That’s a lot of algebra, though.

We might guess that $\mathbf{A}$ was symmetric so think $a_{23} = 4$. But what about the following case, in (12′)?

$(12')\,\,\,\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] = \left[ \begin{array} {ccc} a_{11} & 2 & 3 \\ 2 & 1 & a_{23} \\ a_{31} & a_{23} & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]$

Now there are 3 unknowns, $a_{11}$, $a_{23}$, and $a_{31}$. The answer is available in (3), but suppose that wasn’t known?

This problem is one of finding those parameters, searching for them if you like. To search, it helps to have a measure of how far away from a goal one is, that is, some kind of score. (14) is what I propose as a score, obtained by taking (12′) and rewriting it as (13) below:

$(13)\,\,\,0 = \left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] - \left[ \begin{array} {ccc} a_{11} & 2 & 3 \\ 2 & 1 & a_{23} \\ a_{31} & a_{23} & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]$

$(14)\,\,\,\left|\left|\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] - \left[ \begin{array} {ccc} a_{11} & 2 & 3 \\ 2 & 1 & a_{23} \\ a_{31} & a_{23} & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]\right|\right|_{2}$

$(15)\,\,\,||\mathbf{z}||_{2}$ is an $L_{2}$ norm, and $||\mathbf{z}||_{2} = \sqrt{(\sum_{i=1}^{n} z_{i}^2)}$.

In other words, $||\mathbf{z}||_{2}$ is the length of the vector $\mathbf{z}$. It’s non-negative. Accordingly, making (14) as small as possible means pushing the left and right sides of (12′) towards each other. When (14) is zero the left and right sides are equal.

Now, there are many possible values for $a_{11}$, $a_{23}$, and $a_{31}$. In most applications, considering all flonum values for these is not necessary. Typically, the application suggests a reasonable range for each of them, from a low value to a high value. Let

$(\alpha_{11}, \beta_{11})$

be the range of values for $a_{11}$,

$(\alpha_{23}, \beta_{23})$

be the range of values for $a_{23}$, and

$(\alpha_{31}, \beta_{31})$

be the range of values for $a_{31}$, each dictated by the application. If $\sigma_{11}$, $\sigma_{23}$, and $\sigma_{31}$ are each randomly and independently chosen from the unit interval, then a particular value of (14) can be expressed

$(16)\,\,\,\left|\left|\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] - \left[ \begin{array} {ccc} r(\sigma_{11}, \alpha_{11}, \beta_{11}) & 2 & 3 \\ 2 & 1 & r(\sigma_{23}, \alpha_{23}, \beta_{23}) \\ r(\sigma_{31}, \alpha_{31}, \beta_{31}) & r(\sigma_{23}, \alpha_{23}, \beta_{23}) & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]\right|\right|_{2}$

where

$(17)\,\,\,r(\sigma, v_{\text{low}}, v_{\text{high}}) \triangleq v_{\text{low}}(1 - \sigma) + \sigma v_{\text{high}}$

So, this is an optimization problem where what’s wanted is to make (16) as small as possible, searching among triplets of values for $a_{11}$, $a_{23}$, and $a_{31}$. How does that get done? With the R package nloptr. This package from CRAN offers a rich set of numerical nonlinear optimization algorithms, allowing the user to choose the algorithm and other controls, like ranges of search and constraints upon the control parameters.

Another reason these techniques are interesting: it is intriguing and fun to see how far one can get knowing very little. And when little is known, letting algorithms run for a while to make up for that ignorance doesn’t seem like such a bad trade.

### An illustration

In order to illustrate the “I don’t know much” case, I’m opting for:

$\alpha_{11} = -2$
$\beta_{11} = 2$
$\alpha_{23} = -1$
$\beta_{23} = 8$
$\alpha_{31} = -6$
$\beta_{31} = 6$

What a run produces is:
```
Call:
nloptr(x0 = rep(0.5, 3), eval_f = objective1, lb = rep(0, 3),
       ub = rep(1, 3), opts = nloptr.options1, alpha.beta = alpha.beta)

Minimization using NLopt version 2.4.2

NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because
maxeval (above) was reached. )

Number of Iterations....: 100000
Termination conditions:  xtol_rel: 1e-04  maxeval: 1e+05
Number of inequality constraints:  0
Number of equality constraints:    0
Current value of objective function:  0.000734026668840609
Current value of controls: 0.74997066329 0.5556247383 0.75010835335

Y1 resulting estimates for a_{11}, a_{23}, and a_{31} are: 1.00, 4.00, 3.00
```

That’s nloptr-speak for reporting on the call, the termination conditions, and the result. The last line tells what was expected, that $a_{11} = 1$, $a_{23} = 4$, and $a_{31} = 3$.

What about the code? The pertinent portion is shown below; all the code is downloadable as a single R script from here. A trace of the execution of that script is also available.

```r
library(nloptr)   # nonlinear optimization, from CRAN

L2norm <- function(x)
{
  # Euclidean (L2) length of a vector
  sqrt( sum(x*x) )
}

r <- function(sigma, alpha, beta)
{
  stopifnot( (0 <= sigma) && (sigma <= 1) )
  stopifnot( alpha < beta )
  alpha*(1 - sigma) + beta*sigma
}

# Recall original was:
#
# A <- matrix(c(1, 2, 3, 2, 1, 4, 3, 4, 1), 3, 3, byrow=TRUE)

P1.func <- function(x, alpha.beta)
{
  stopifnot( is.vector(x) )
  stopifnot( 3 == length(x) )
  #
  sigma11 <- x[1]
  sigma23 <- x[2]
  sigma31 <- x[3]
  alpha11 <- alpha.beta[1]
  beta11  <- alpha.beta[2]
  alpha23 <- alpha.beta[3]
  beta23  <- alpha.beta[4]
  alpha31 <- alpha.beta[5]
  beta31  <- alpha.beta[6]
  #
  P1 <- matrix( c( r(sigma11,alpha11,beta11), 2, 3,
                   2, 1, r(sigma23,alpha23,beta23),
                   r(sigma31,alpha31,beta31), r(sigma23,alpha23,beta23), 1
                 ),
                nrow=3, ncol=3, byrow=TRUE )
  return(P1)
}

objective1 <- function(x, alpha.beta)
{
  stopifnot( is.vector(x) )
  stopifnot( 3 == length(x) )
  b       <- matrix(c(12,4,16), 3, 1)
  x.right <- matrix(c(-3,6,1), 3, 1)
  P1      <- P1.func(x, alpha.beta)
  d       <- b - P1 %*% x.right
  # L2 norm of the residual is the score (14)
  return( L2norm(d) )
}

nloptr.options1 <- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-6,
                        "print_level"=0, "maxeval"=100000, "population"=1000)

alpha.beta <- c(-2, 2, -1, 8, -6, 6)

Y1 <- nloptr(x0=rep(0.5,3),
             eval_f=objective1,
             lb=rep(0,3), ub=rep(1,3),
             opts=nloptr.options1,
             alpha.beta=alpha.beta )

print(Y1)

cat(sprintf("Y1 resulting estimates for a_{11}, a_{23}, and a_{31} are: %.2f, %.2f, %.2f\n",
            r(Y1$solution[1], alpha.beta[1], alpha.beta[2]),
            r(Y1$solution[2], alpha.beta[3], alpha.beta[4]),
            r(Y1$solution[3], alpha.beta[5], alpha.beta[6])))
```

### But what is it good for? Case 1: Markov chain transition matrices

Consider again (1):

$(1')\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

This happens also to be the template for a 3-state Markov chain, with its many applications. The following example is taken from the famous paper by Rabiner, as presented by Resch:

• L. R. Rabiner, “A tutorial on Hidden Markov Models and selected applications in speech recognition”, Proceedings of the IEEE, February 1989, 77(2), DOI:10.1109/5.18626.
• B. Resch, “Hidden Markov Models”, notes for the course Computational Intelligence, Graz University of Technology, 2011.

They begin with the transition diagram:

which, if cast into the form of (1′) and (2), looks like:

$(18)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} 0.8 & 0.05 & 0.15 \\ 0.2 & 0.6 & 0.2 \\ 0.2 & 0.3 & 0.5 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

The rows, top-to-bottom, are labeled sunny, rainy, and foggy, as are the columns, left-to-right. Cell $(i,j)$ gives the probability of going from state $i$ to state $j$. For example, the probability of going from sunny to foggy is 0.15. Here’s a prettier rendition from Resch:

Resch and Rabiner go on to teach Hidden Markov Models (“HMMs”), where $\mathbf{A}$ is not known and, moreover, the weather is not directly observed. Instead, information about the weather is obtained by observing whether or not a third party takes an umbrella to work. Here, however, suppose the weather is directly known. And suppose $\mathbf{A}$ is known, except that nothing is known about what happens after foggy, other than the probability that foggy remains foggy.
Symbolically:

$(19)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} 0.8 & 0.05 & 0.15 \\ 0.2 & 0.6 & 0.2 \\ a_{31} & a_{32} & 0.5 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

Note in (18), or Resch’s tableau, how the rows each sum to one. This is a characteristic of first order Markov models: Once in a state, the transition has to go somewhere, even if to stay in that state. Transitions can’t just cause the system to disappear, so all the outgoing probabilities need to sum to one. This means, however, that when what happens when it is foggy is introduced, there aren’t two unconstrained parameters, there is only one. Accordingly, rather than introducing $a_{32}$, I could write $0.5 - a_{31}$. As it turns out, in my experience with nloptr, it is often better to specify this constraint explicitly, so the optimizer knows about it, rather than building it implicitly into the objective function, even at the price of introducing another parameter and its space to explore.

The challenge I’ll pose here is somewhat tougher than that faced by HMMs. The data in hand is not a series of sunny, rainy, or foggy weather records but, because, say, the records were jumbled, all that’s in hand is a count of how many sunny, rainy, and foggy days there were, and what the counts of the days following them were. In particular:

$(20)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} 1020 \\ 301 \\ 155 \end{array} \right]$

meaning that, among the pairs of consecutive days, the first day was sunny 1020 times, rainy 301 times, and foggy 155 times. Statistical spidey sense wonders how many observations are needed to pin down transition probabilities well, but let’s set that aside for now. (At least it’s plausible that if ordering information is given up, there might be a need for more count information.) And the count of what the weather was on the second days is:

$(21)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {c} 854 \\ 416 \\ 372 \end{array} \right]$

or 854 sunny days, 416 rainy days, and 372 foggy days. Note that, unlike in (16), here in (19) there is no need to pick upper and lower bounds on the unknowns: these are probabilities, so they are by definition limited to the unit interval. But $a_{31} + a_{32} + 0.5 = 1$ always, so that constraint needs to be stated.
Here’s the code:

```r
P2.func <- function(x)
{
  # Sunny, Rainy, Foggy
  stopifnot( is.vector(x) )
  stopifnot( 2 == length(x) )
  #
  a.31 <- x[1]
  a.32 <- x[2]
  #
  P2 <- matrix( c( 0.8,  0.05, 0.15,
                   0.2,  0.6,  0.2,
                   a.31, a.32, 0.5
                 ),
                nrow=3, ncol=3, byrow=TRUE )
  return(P2)
}

objective2 <- function(x)
{
  stopifnot( is.vector(x) )
  stopifnot( 2 == length(x) )
  x.right <- matrix(c(1020, 301, 155), 3, 1)
  b       <- matrix(c(854, 416, 372), 3, 1)
  P2      <- P2.func(x)
  d       <- b - P2 %*% x.right
  # L2 norm
  return( L2norm(d) )
}

constraint2 <- function(x)
{
  # the third row must sum to one: a.31 + a.32 + 0.5 = 1
  return( (x[1] + x[2] - 0.5) )
}

nloptr.options2 <- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-4,
                        "print_level"=0, "maxeval"=100000, "population"=1000)

Y2 <- nloptr(x0=rep(0.5,2),
             eval_f=objective2,
             eval_g_eq=constraint2,
             lb=rep(0,2), ub=rep(1,2),
             opts=nloptr.options2 )

print(Y2)

cat(sprintf("Y2 resulting estimates for a_{31}, a_{32} are: %.2f, %.2f\n",
            Y2$solution[1], Y2$solution[2]))
```

This run results in:

```
Call:
nloptr(x0 = rep(0.5, 2), eval_f = objective2, lb = rep(0, 2),
       ub = rep(1, 2), eval_g_eq = constraint2, opts = nloptr.options2)

Minimization using NLopt version 2.4.2

NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because
maxeval (above) was reached. )

Number of Iterations....: 100000
Termination conditions:  xtol_rel: 1e-04  maxeval: 1e+05
Number of inequality constraints:  0
Number of equality constraints:    1
Current value of objective function:  0.500013288564363
Current value of controls: 0.20027284199 0.29972776012

Y2 resulting estimates for a_{31}, a_{32} are: 0.20, 0.30
```

Suppose some of the data is missing? In particular, suppose instead:

$(20a)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} 1020 \\ r(\eta, 155, 1020) \\ 155 \end{array} \right]$

where $\eta$ is on the unit interval and so all that’s known is that $x_{2}$ is between 155 and 1020, that is, bounded by the other two terms in $\mathbf{x}$. Now there are three parameters to search: the constrained pair $a_{31}$ and $a_{32}$ as before, and $\eta$, which is unconstrained apart from lying on the unit interval. The code for this is:

```r
P3.func <- function(x)
{
  # Sunny, Rainy, Foggy
  stopifnot( is.vector(x) )
  stopifnot( 3 == length(x) )
  #
  a.31 <- x[1]
  a.32 <- x[2]
  # There's an x[3] but it isn't used in P3.func. See objective3.
  #
  P3 <- matrix( c( 0.8,  0.05, 0.15,
                   0.2,  0.6,  0.2,
                   a.31, a.32, 0.5
                 ),
                nrow=3, ncol=3, byrow=TRUE )
  return(P3)
}

objective3 <- function(x)
{
  stopifnot( is.vector(x) )
  stopifnot( 3 == length(x) )
  x.right <- matrix(c(1020, r(x[3], 155, 1020), 155), 3, 1)
  b       <- matrix(c(854, 416, 372), 3, 1)
  P3      <- P3.func(x)
  d       <- b - P3 %*% x.right
  # L2 norm
  return( L2norm(d) )
}

constraint3 <- function(x)
{
  stopifnot( 3 == length(x) )
  return( (x[1] + x[2] - 0.5) )
}

nloptr.options3 <- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-4,
                        "print_level"=0, "maxeval"=100000, "population"=1000)

Y3 <- nloptr(x0=rep(0.5,3),
             eval_f=objective3,
             eval_g_eq=constraint3,
             lb=rep(0,3), ub=rep(1,3),
             opts=nloptr.options3 )

print(Y3)

cat(sprintf("Y3 resulting estimates for a_{31}, a_{32}, and eta are: %.2f, %.2f, %.2f\n",
            Y3$solution[1], Y3$solution[2], Y3$solution[3]))
```

The results are:

```
Call:
nloptr(x0 = rep(0.5, 3), eval_f = objective3, lb = rep(0, 3),
       ub = rep(1, 3), eval_g_eq = constraint3, opts = nloptr.options3)

Minimization using NLopt version 2.4.2

NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because
maxeval (above) was reached. )

Number of Iterations....: 100000
Termination conditions:  xtol_rel: 1e-04  maxeval: 1e+05
Number of inequality constraints:  0
Number of equality constraints:    1
Current value of objective function:  0.639962390444759
Current value of controls: 0.20055501795 0.29944464945 0.16847867543

Y3 resulting estimates for a_{31}, a_{32}, and eta are: 0.20, 0.30, 0.17
```

with that $\eta$ corresponding to an $x_{2}$ of about 301.

That recovered 301 matches the true $x_{2} = 301$ of (20) almost exactly.

For an example of where this kind of estimation is done more generally, see:

### But what is it good for? Case 2: Learning prediction matrices

When systems like (2) arise in cases of statistical regression, the matrix $\mathbf{A}$ is called a prediction or design matrix. The idea is that its columns represent sequences of predictions for the response, represented by the column vector $\mathbf{b}$, and the purpose of regression is to find the best weights, represented by column vector $\mathbf{x}$, for predicting the response.

Consider (2) again but instead of $\mathbf{b}$ and $\mathbf{x}$ being column vectors, as in (5), they are matrices, $\mathbf{B}$ and $\mathbf{X}$, respectively. In other words, the situation is that there are lots of $(\mathbf{b}_{k}, \mathbf{x}_{k})$ pairs available. And then suppose nothing is known about $\mathbf{A}$, that is, it just contains nine unknown parameters:

$(22)\,\,\,\mathbf{A} = \left[ \begin{array} {ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right]$

There are, of course, questions about how many $(\mathbf{b}_{k}, \mathbf{x}_{k})$ pairs are needed, in tandem with the choice of number of iterations (see the maxeval discussion in the Miscellaneous notes below). Here, 8 pairs were used for purposes of illustration.

$(23)\,\,\,\mathbf{X} = \left[\begin{array}{cccccccc} 1356 & 7505 & 4299 & 3419 & 7132 & 1965 & 8365 & 8031 \\ 5689 & 8065 & 7001 & 638 & 8977 & 1088 & 3229 & 1016 \\ 3777 & 8135 & 3689 & 1993 & 3635 & 9776 & 8967 & 7039 \end{array} \right]$

and

$(24)\,\,\,\mathbf{B} = \left[\begin{array}{cccccccc} 5215 & 13693 & 7265 & 4217 & 9367 & 10588 & 14372 & 12043 \\ 7528 & 17825 & 11024 & 4989 & 14860 & 9447 & 16162 & 13087 \\ 6161 & 12798 & 7702 & 3023 & 9551 & 8908 & 11429 & 8734 \end{array} \right]$

The code for this case is:

```r
# L2normForMatrix is used below but is not shown in this excerpt of the
# script; a plausible definition (an assumption on my part) is the
# Frobenius norm of the matrix, with entries first divided by `scaling`:
L2normForMatrix <- function(X, scaling=1) { sqrt( sum( (X/scaling)^2 ) ) }

objective4 <- function(x)
{
  stopifnot( is.vector(x) )
  stopifnot( 9 == length(x) )
  B <- matrix(c(5215, 13693,  7265, 4217,  9367, 10588, 14372, 12043,
                7528, 17825, 11024, 4989, 14860,  9447, 16162, 13087,
                6161, 12798,  7702, 3023,  9551,  8908, 11429,  8734
               ), 3, 8, byrow=TRUE)
  X.right <- matrix(c(1356, 7505, 4299, 3419, 7132, 1965, 8365, 8031,
                      5689, 8065, 7001,  638, 8977, 1088, 3229, 1016,
                      3777, 8135, 3689, 1993, 3635, 9776, 8967, 7039
                     ), 3, 8, byrow=TRUE)
  P4 <- matrix(x, nrow=3, ncol=3, byrow=TRUE)
  d  <- B - P4 %*% X.right
  # L2 norm for matrix
  return( L2normForMatrix(d, scaling=1000) )
}

nloptr.options4 <- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-6,
                        "print_level"=0, "maxeval"=300000, "population"=1000)

Y4 <- nloptr(x0=rep(0.5,9),
             eval_f=objective4,
             lb=rep(0,9), ub=rep(1,9),
             opts=nloptr.options4 )

print(Y4)

cat("Y4 resulting estimates for A:\n")
print(matrix(Y4$solution, 3, 3, byrow=TRUE))
```

The run results are:

```
Call:
nloptr(x0 = rep(0.5, 9), eval_f = objective4, lb = rep(0, 9),
       ub = rep(1, 9), opts = nloptr.options4)

Minimization using NLopt version 2.4.2

NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because
maxeval (above) was reached. )

Number of Iterations....: 300000
Termination conditions:  xtol_rel: 1e-06  maxeval: 3e+05
Number of inequality constraints:  0
Number of equality constraints:    0
Current value of objective function:  0.0013835300300619
Current value of controls: 0.66308125177 0.13825982301 0.93439957114
0.92775614187 0.63095968859 0.70967190127 0.3338899268 0.47841968691
0.79082981177

Y4 resulting estimates for A:
              [,1]          [,2]          [,3]
[1,] 0.66308125177 0.13825982301 0.93439957114
[2,] 0.92775614187 0.63095968859 0.70967190127
[3,] 0.33388992680 0.47841968691 0.79082981177
```

In fact, the held-back version of $\mathbf{A}$ used to generate these test data sets was:

$(25)\,\,\,\mathbf{A} = \left[\begin{array}{ccc} 0.663 & 0.138 & 0.934 \\ 0.928 & 0.631 & 0.710 \\ 0.334 & 0.478 & 0.791 \end{array} \right]$

and that matches the result rather well. So, in a sense, the algorithm has “learned” $\mathbf{A}$ from the 8 data pairs presented.

### Miscellaneous notes

##### All the runs of nloptr here were done with the following settings. The algorithm is always ISRES. The parameter xtol_rel was 1.0e-6 or 1.0e-4, and population = 1000. maxeval, the number of iterations, varied depending upon the problem: for Y1, Y2, Y3, and Y4 it was 100000, 100000, 100000, and 300000, respectively. In all instances, the appropriate optimization controls are given by the nloptr.optionsn variable, where $n \in \{1,2,3,4\}$.

Per the NLopt description, ISRES is an acronym for Improved Stochastic Ranking Evolution Strategy:

The evolution strategy is based on a combination of a mutation rule (with a log-normal step-size update and exponential smoothing) and differential variation (a Nelder–Mead-like update rule). The fitness ranking is simply via the objective function for problems without nonlinear constraints, but when nonlinear constraints are included the stochastic ranking proposed by Runarsson and Yao is employed. The population size for ISRES defaults to 20×(n+1) in n dimensions, but this can be changed with the nlopt_set_population function. This method supports arbitrary nonlinear inequality and equality constraints in addition to the bound constraints, and is specified within NLopt as NLOPT_GN_ISRES.

Further notes are available in:

• T. P. Runarsson, X. Yao, “Search biases in constrained evolutionary optimization”, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2005, 35(2), 233-243.
• T. P. Runarsson, X. Yao, “Stochastic ranking for constrained evolutionary optimization”, IEEE Transactions on Evolutionary Computation, 2000, 4(3), 284-294.

## censorship isn’t tolerated here, so …

Editorial cartoonist Rob Rogers’s recent editorial cartoons have been deleted from the Pittsburgh Post-Gazette. Accordingly …

## Aldo Leopold

We end, I think, at what might be called the standard paradox of the twentieth century: our tools are better than we are, and grow better faster than we do. They suffice to crack the atom, to command the tides. But they do not suffice for the oldest task in human history: to live on a piece of land without spoiling it.
From Aldo Leopold, The River of the Mother of God and other essays, University of Wisconsin Press, 1991: 254.

From a modern perspective, Leopold, although insightful and having contributed enormously to the development of ecological ethics and sensibilities, also claimed:

Examine each question in terms of what is ethically and aesthetically right, as well as what is economically expedient. A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise.

That’s from his Sand County Almanac (page 262). Read literally, it suggests that biotic communities are capable of stability in the human meaning of the word. But Leopold introduced notions like the trophic cascade, which is biological dynamics at its most essential, and, so, instead of biotic communities he should be read as meaning biocoenosis. Accordingly, oscillation in species abundances, and even replacement of one species by another, as in forest succession or even invasion, would be considered stable. An outline of a modern view is available here. To his great and practical credit, Leopold also struggled to reconcile Ecology and Economics.

## The elephant in the room: a case for producer responsibility

### This is a guest post by Claire Galkowski, Executive Director, South Shore Recycling Cooperative.

With so much focus on the recycling crisis, we tend to overlook the root cause of the problem: the glut of short-lived consumer products and packaging. Rather than looking for new places to dispose of it, it is imperative that we look at where it is coming from, and stem the flow. Mining, harvesting, processing and transport are where the biggest environmental footprints land.

In the current system, manufacturers who profit from the sale of their wares have little incentive to make durable products or minimal, easily recycled packaging, or to incorporate recycled feedstock in their packaging. Thankfully, a few corporations such as Unilever and P&G are stepping up. Many more need a nudge to follow suit. Neither are consumers incented to reduce their use and disposal of unnecessary “stuff”. The proliferation of convenient single-use products and unrecyclable packaging is clogging our waterways, contaminating our recycling plants and filling our landfills. Add to that the diminishing disposal capacity in Massachusetts as most of our remaining landfills face closure within the decade, and we are facing a day when the massive amount of stuff that we blithely buy, use once and toss will have no place to go.

Consumers who pay for their trash by the bag have some skin in the game to reduce their disposal footprint. While this may encourage the use of less single-use “stuff”, it can also result in “wishful recycling”, which is clearly hurting our recycling industry, and is one cause of China’s embargo on our recyclables.

Producers of non-bottle-bill products are selling us millions of tons of products for billions of dollars. Most will be disposed of within 6 months. Packaging alone accounts for about 30% of our waste, and about 60% of our recycling stream. Once products and packaging leave the warehouse, producers are free of responsibility for what happens to them. A few exceptions are carbonated beverages that are redeemed for deposit, and rechargeable batteries and mercury thermostats that are recycled through manufacturer-sponsored programs, which are good examples of product stewardship (*).
Municipalities, haulers and recycling processors are left holding the plastic bag, the dressing-coated takeout container, the plastic-embedded paper cup, and the glass bottle that currently has no local recycling market. It’s time for that to change. We need the packaging industry to partner with those of us that manage their discards to help solve this massive problem.

There is a bill in Massachusetts House Ways and Means, H447, An act to reduce packaging waste, that assigns a fee to packaging sold in Massachusetts. The fee is based on the recyclability, recycled content, and cost to manage at end of life. It provides an incentive for leaner and more thoughtful packaging design, and to create domestic markets for our recyclables. The proceeds provide funding for improved recycling infrastructure development.

With help from MassDEP, the SSRC and many municipalities are working hard to adjust the habits of our residents, an uphill climb. Recycling companies are struggling to navigate this massive market contraction, and wondering if they can continue to operate until viable domestic outlets are established. Municipal recycling costs are skyrocketing, straining budgets with no clear end in sight. With help from the consumer product manufacturers that helped to create this crisis, it will be possible to resurrect and revitalize our recycling industry, create domestic markets for its products, and make our disposal system more sustainable.

#### States look at EPR, funding cuts, mandates

##### by Jared Paben and Colin Staub, February 6, 2018, from Resource Recycling

California: The Golden State is advancing a bill calling for mandates on the use of recycled content in beverage containers. The legislation, Senate Bill 168, requires the California Department of Resources Recycling and Recovery (CalRecycle) by 2023 to establish minimum recycled-content standards for metal, glass or plastic containers (state law already requires glass bottles to contain 35 percent recycled content). The bill also requires that CalRecycle by 2020 provide a report to lawmakers about the potential to implement an extended producer responsibility (EPR) program to replace the current container redemption program. The state Senate on Jan. 29 voted 28-6 to pass the bill, which is now awaiting action in the Assembly.

Connecticut: A workgroup convened by the Senate Environment Committee has been meeting for more than a year to consider policies, including EPR, that would reduce packaging waste and boost diversion. The group includes industry representatives, environmental advocates, MRF operators, government regulators and more. It most recently met in December and discussed what should be included in its final recommendations to state lawmakers. EPR, which is on the table for packaging and printed paper, was discussed at length. The group is working to finalize its recommendations and could present them to lawmakers during the current legislative session.

##### (*) The U.S. Solar Energy Industries Association (SEIA) established a PV panel recycling program in 2016 with the help of SunPower.

## “The path to US$0.015/kWh solar power, and lower” (PV Magazine and GTM Research)

The headline and a page with lots of graphics and associated worksheets come from this PV Magazine article. The underpinning assessment is from GTM Research and their report Trends in Solar Technology and System Prices.

Recall that Natural Gas Combined Cycle, the most efficient natural gas generation process, produces electricity at about US$0.055/kWh. The quoted US$0.015/kWh is significantly lower than previous projections. Of course, you’ll get different numbers from the usual suspects. Just to note, nearly every established quasi-governmental organization, e.g., the U.S. EIA, the IEA, ISO-NE, etc., has missed the mark on both projecting cost per kWh and penetration of solar PV, both behind-the-meter residential and utility scale.

## Sidney, NY: The lead example of how the USA will deal with future coastal and riverine flooding?

From Bloomberg, the story of Sidney, NY, not that far from where I used to live in Endicott, NY.

More than 400 homes and businesses ended up underwater in Sidney, affecting more than 2,000 people. It was months before Spry and her neighbors could move back in. It was also the second time in five years that the Susquehanna had wrecked half the village. People had just finished rebuilding. When Spry walked back into her soggy house, the street outside reeking with the rancid smell of garbage and fuel, she was hit first, she remembers, by the sight of her brand-new hardwood floor, completely buckled.

Spry didn’t want to rebuild again, and neither did local officials; everyone knew the river would keep flooding homes and businesses. So Sidney decided to try something else: It would use federal and state money to demolish Spry’s neighborhood while creating a new one away from the flood plain for displaced residents. Sidney would be on the forefront of U.S. disaster policy, a case study in what’s known as managed retreat—and the many ways it can go wrong.

Until recently, the guiding philosophy behind attempts to protect U.S. homes and cities against the effects of climate change was to build more defenses. Houses can be perched on stilts, surrounded by barriers, buttressed with stormproof windows and roofs. Neighborhoods can be buffered by seawalls for storm surges, levees for floods, firebreaks for wildfires. Defenses are an instinctive response for a species that’s evolved by taming the natural world.

But sometimes the natural world won’t be tamed. Or, more precisely, sometimes engineered solutions can no longer withstand the unrelenting force of more water, more rain, more fires, more wind. Within 20 years, says the Union of Concerned Scientists, 170 cities and towns along the U.S. coast will be “chronically inundated,” which the group defines as flooding of at least 10 percent of a land area, on average, twice a month. By the end of the century, that category will grow to include more than half of the communities along the Eastern Seaboard and Gulf Coast—and that’s if the rate of climate change doesn’t accelerate. In their less guarded moments, officials in charge of this country’s disaster programs have begun to acknowledge the previously unthinkable: Sometimes the only effective way to protect people from climate change is to give up. Let nature reclaim the land and move a neighborhood out of harm’s way while it still is one.

## What gives me hope … And it ain’t the small stuff

As Arman Oganisian of Stable Markets writes, “There are no solutions, only trade-offs.” That is a fundamentally engineering attitude.

It is fundamentally about the economics, and, in particular, the dramatic drop in levelized cost of energy for wind and renewables, as well as take-up rates.

But what’s significant is that players in the marketplace who ignore, shun, and oppose arguments, in court and in the public forum of ideas, in favor of regulating Carbon emissions, or in favor of assigning producer responsibility to fossil fuel companies for the downstream harm their products do, will rapidly be drawn, by a kind of econotaxis, towards these kinds of motivations.

That the values of these companies have, as the above video notes, dropped precipitously of late is highly gratifying.

And, oh, by the way, it serves them right.

But, also, a warning to the environmental community, whether in the United States or elsewhere, if you are really, honestly concerned with this problem, you would do well to set aside your personal and collective egos and work with these organizations, some of which have been your long time enemies, to achieve these goals. These trade-offs are necessary to get there.

And what is more important, getting there? Or achieving your side conditions and side goals, however significant they might be in isolation? If climate change approaches an existential threat, then it should be treated like an existential threat. The safe continuance of a particular species in an isolated ecosystem does not rise to the level of worthiness of measures designed to stop a much bigger calamity.

Grow up.

Oh, and what will happen to fossil fuels?

## Sustainable Landscaping

### Update: 2018-05-26

It’s not about plants, not entirely. But it seems that, in one agricultural area, pollinators (bees) under stress have ceded their pollinating responsibility to a couple of species of exotic (read invasive) flies. See: J. R. Stavert, D. E. Pattemore, I. Bartomeus, A. C. Gaskett, J. R. Beggs, “Exotic flies maintain pollination services as native pollinators decline with agricultural expansion”, Journal of Applied Ecology (British Ecological Society), 22 January 2018. The only thing surprising about that is that people consider it surprising.

##### Update, 2018-04-29: While my first thoughts and reasons for this post were simply to collect together a number of links pertaining to an interesting subject, regarding which there appeared to be some controversy, I have received several reactions to the material, many supportive and positive, others strongly adverse. This indicated to me that this is an area worth knowing more about, and, so, I have pulled quite a number of technical articles from the fields of Ecology, Forest Management, and Invasive Species Studies, which I am currently reading. I intend to at least supplement the links below with additional ones explaining present states of knowledge. I may include some comments summarizing what I have read. In other posts, in the future, I may do some modeling along these lines, since diffusion processes modeled by differential equations are of significant interest to me, whether for biological and physical systems, or for diffusion of product innovations via, for instance, the Bass diffusion model. Those results won’t be posted here, though.

Sustainable landscaping as described by Wikipedia, and by Harvard University. See also the Sustainable Sites Initiative. It’s a lot more than eradicating invasive species. In fact, that might be harmful. There’s a lot of questionable information out there, even from otherwise reputable sources like The Trustees of Reservations. See also their brochure on the subject, where they recommend various control measures, including chemical ones, even if that is not their preferred option. There is evidence Roundup (glyphosate) is indeed effective against at least Alliaria petiolata, with little harm to common, commingled biocoenoses.

Dandelions

###### (Above from M. Rejmánek, “What makes a species invasive?”, Ecology, September 1996, 3-13.)

Four inspirational books:

I dove into reading Professor del Tredici’s book as soon as I got my copy. Here is part of what he has to say from pages 1-3:

Perhaps the most well-known example of a “spontaneous” plant is Ailanthus altissima or tree-of-heaven, introduced from China. Widely planted in the Northeast in the first half of the nineteenth century, Ailanthus was later rejected by urban tree planters as uncouth and weedy. Despite concerted efforts at eradication, the tree managed to persist by sprouting from its roots and spread by scattering its wind-dispersed seeds …

Although it is ubiquitous in the urban landscape, Ailanthus is never counted in street tree inventories because no one planted it — and consequently its contribution to making the city a more livable place goes completely unrecognized. When the mayor of New York City promised in 2007 to plant a million trees to fight global warming, he failed to realize … that if the Ailanthus trees already growing throughout the city were counted he would be halfway toward his goal without doing anything. And that, of course, is the larger purpose of this book: to open people’s eyes to the ecological reality of our cities and appreciate it for what it is without passing judgment on it. Ailanthus is just as good at sequestering carbon and creating shade as our beloved native species or showy horticultural selections. Indeed, if one were to ask whether our cities would be better or worse without Ailanthus, the answer would clearly be the latter, given that the tree typically grows where few other plants can survive.

There is no denying the fact that many — if not most — of the plants covered in this book suffer from image problems associated with the label “weeds” — or, to use a more recent term, “invasive species.” From the plant’s perspective, invasiveness is just another word for successful reproduction — the ultimate goal of all organisms, including humans. From a utilitarian perspective, a weed is any plant that grows by itself in a place where people do not want it to grow. The term is a value judgment that humans apply to plants we do not like, not a biological characteristic. Calling a plant a weed gives us license to eradicate it. In a similar vein, calling a plant invasive allows us to blame it for ruining the environment when really it is humans who are actually to blame. From the biological perspective, weeds are plants that are adapted to disturbance in all its myriad forms, from bulldozers to acid rain. Their pervasiveness in the urban environment is simply a reflection of the continual disruption that characterizes this habitat. Weeds are the symptoms of environmental degradation, not its cause, and as such they are poised to become increasingly abundant within our lifetimes.

###### (Slight emphasis added by blog post author in a couple of places.)

The fact that ‘r-strategists’ are the best invaders is not surprising because the overwhelming majority of biological invasions take place in human- and/or naturally-disturbed habitats. Our modern landscape is mainly disturbed landscape.

###### (Above from M. Rejmánek, “What makes a species invasive?”, Ecology, September 1996, 3-13.)

Links with some quotes and discussion:

S. L. Flory, K. Clay, “Invasive shrub distribution varies with distance to roads and stand age in eastern deciduous forests in Indiana, USA”, Plant Ecology, 2006, 184:131-141.

Some quotes:

If roads are important corridors for exotic plants or if roadside edges provide good habitat for exotic plant growth, then one would predict decreased exotic plant density with increased distance to roads. In support, the prevalence and cover of exotic plants has been shown to decline with increasing distance to road in a number of ecosystems.

Independent of distance to road, successional age might determine susceptibility of a community to exotic plant invasions. Young forests typically have higher light levels (Levine and Feller 2004), fewer competitors, and less litter than older forests (Leuschner 2002) while mature forest interiors are known to have lower light availability, cooler temperatures, and higher humidity than forest edges (Brothers and Spingarn 1992). We would therefore expect, based on levels of light penetration and microclimatic conditions, that older forests would have higher densities of invasive shrubs near the forest edge than in forest interiors and fewer invasive shrubs overall due to less recent disturbance events and less favourable environmental conditions. We would also expect that younger forests would show weaker correlations of densities of invasive shrubs with increasing distance to road since light levels are higher throughout young forests. This would result in an interaction between distance to road and forest age.

The goal of this study was to quantify the density of invasive exotic shrubs along roads in eastern deciduous forests of varying successional ages in Indiana. Eastern deciduous forests cover much of the landscape east of the Mississippi River. Most of this region has been fragmented by urban and suburban development and roads such that ninety percent of all ecosystem areas in the eastern US are within 1061 m of a road (Riitters and Wickham 2003). We specifically addressed the following questions (1) Does the density of invasive exotic shrubs decline as the distance to a road increases? (2) Does the relationship between density and distance to road differ among exotic shrub species? and (3) Are invasive exotic shrubs less common in mature forests than in young successional forests? Answers to these questions will help develop a predictive framework for plant invasions and better inform management strategies.

Successional age has been shown to affect exotic plant establishment in old fields in Minnesota with younger successional aged communities more susceptible to invasions and older communities more resistant (Inouye et al. 1987). Our results show that forest successional age plays a similar role in the distribution of invasive shrubs in eastern deciduous forests with invasive shrubs found in greater densities in young and mid-successional forests than mature forests. This is likely due to a combination of factors including differences in light regimes … Exotic shrubs would have survived and grown much more successfully where they did not have to compete with existing trees or intact forests. This hypothesis could help to explain why we found fewer shrubs near the road in mature forests than young and mid-successional forests.

S. L. Flory, K. Clay, “Effects of roads and forest successional age on experimental plant invasions”, Biological Conservation, 2009, 142, 2531-2537.

## LLNL Sankey diagram of U.S. national energy flows in 2017: What’s possible, what’s not, and who’s responsible

###### (Updated, 2018-05-02. See below.)

I love Sankey diagrams, and have written about them with respect to influence of Big Oil on U.S. climate policy, and in connection with what it takes to power a light bulb, providing a Sankey-based explanation for something Professor Kevin Anderson has spoken and written about. Indeed, there’s a wealth of computational capability in R and otherwise, for constructing Sankey diagrams and the like. Here’s a new one from Lawrence Livermore National Laboratory:

###### (Click on image to see much larger version, for inspection or saving. Use your browser Back Button to return to this blog.)
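(An aside on tooling: among the many R options, the networkD3 package will draw such a diagram from a links-and-nodes table. A toy sketch follows, with flows that are illustrative only, not LLNL’s numbers.)

```r
# a toy Sankey diagram in R via networkD3; values are made up for illustration
library(networkD3)
nodes <- data.frame(name = c("Solar", "Natural Gas", "Electricity Generation",
                             "Rejected Energy", "Energy Services"))
links <- data.frame(source = c(0, 1, 2, 2),   # zero-based node indices
                    target = c(2, 2, 3, 4),
                    value  = c(1, 10, 7, 4))
sankeyNetwork(Links = links, Nodes = nodes, Source = "source",
              Target = "target", Value = "value", NodeID = "name")
```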

That’s a lot of energy consumption, and renewables have a long way to go before overtaking it. But, maybe not so much.

First of all, if the solution is, hypothetically, all wind and solar, setting aside storage, nearly all that rejected energy won’t be there. So the actual need isn’t about 98 quads, it’s closer to 31 quads. Call it 35 quads for jollies.

Second, note that wind and solar energy technology are presently on the middle part of the logistic S-curve of growth. (See also diffusion of innovations.) This is a super-exponential region. Call it an exponential region to be conservative. Current estimates place cost cuts in technology for these at 30%-40% per year. Actual adoption meets the resistance of regulatory capture and other impediments, and during the last year it was 11%. Clearly, with the cost advantages, the motivation is to go faster and, one can argue, the greater the spread between present sources of energy and wind and solar, the lower the “energy barrier” to the jump to wind and solar despite other impediments. Translating into time, an 11% per year growth rate is a doubling time of about 6.5 years ($\ln 2 / \ln 1.11$). But a 30% growth rate is a doubling time of about 2.6 years. Say the doubling time is 4 years.

Third, to get from 3.1 quads to 35 quads is about 3.5 doublings ($\log_2(35/3.1) \approx 3.5$), and it's certainly fewer than 4 doublings.

Fourth, the really bad news for fossil fuels, and for all businesses and people that depend upon them to make a living, is that (see the short R check after this list):

• If the doubling time is 4 years, wind and solar will get to 35 quads in about 14 years
• If the doubling time is 3 years, wind and solar will get to 35 quads in about 10 and a half years
• If the doubling time is 5 years, wind and solar will get to 35 quads in about 17 and a half years
• And if the doubling time is 9 years, which by all accounts is unduly pessimistic, wind and solar will get to 35 quads in about 31 years
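And here is that bullet arithmetic, checked in R (3.1 quads and 35 quads being the figures from above):

```r
# Years to grow from 3.1 quads to 35 quads at an assumed doubling time Td:
# years = Td * log2(35 / 3.1), where log2(35 / 3.1) ~ 3.5 doublings.
years_to_35_quads <- function(Td) Td * log2(35 / 3.1)

sapply(c(3, 4, 5, 9), years_to_35_quads)
# ~10.5  ~14.0  ~17.5  ~31.5 years
```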

Will that be enough to keep us from a +3C world?

Probably not: Too little, too late.

But it’ll happen anyway and it is, as I say, why fossil fuel energy and the utilities which depend upon it, natural gas and all, are dead on their feet, and stranded. And any government which puts ratepayer and taxpayer dollars into building out new fossil fuel infrastructure is not only being foolish, it is making the mistake of the century.

As for the +3C climate change outcome? Clearly, that is such an emergency that the only option to address it in time is degrowth, not merely cutting back on additional growth. However, there’s no evidence that that is even considered as an option. To the approvers of additional development in suburbs like the Town of Westwood and elsewhere, Conservation Commissions and all, I simply say:

You have a choice: Either manage a restrained and then negative growth plan yourself, or Nature will do it for you.

Simple. The officials responsible for these decisions know and have been warned repeatedly about these outcomes. They are ignoring them. They own the long term results. Remember them.

### Update, 2018-05-02

The Trump/Perry Department of Energy, through its EIA, reports solar energy in the United States has grown 32% per annum since 2000. So, per the reasoning above, that’s a doubling time of under 3 years ($\ln 2 / \ln 1.32 \approx 2.5$ years). Moreover, this is based upon a long sampling period, since 2000, so it is not only stable, it is probably conservative.

## (reblog) Bill Ritter, Jr, Colorado State University: “Market forces are driving a clean energy revolution in the U.S.”

Transforming U.S. energy systems away from coal and toward clean renewable energy was once a vision touted mainly by environmentalists. Now it is shared by market purists.

Today, renewable energy resources like wind and solar power are so affordable that they’re driving coal production and coal-fired generation out of business. Lower-cost natural gas is helping, too.

I direct Colorado State University’s Center for the New Energy Economy, which works with states to facilitate the transition toward a clean energy economy. In my view, today’s energy market reflects years of federal and state support for clean energy research, development and deployment.

And, despite the Trump administration’s support of coal, a recent survey of industry leaders shows that utilities are not changing their plans significantly.

## on turbulent eddies in oceans

Oceanic eddies are not negligible, especially in climate modeling. There’s the work of Dr Emily Shuckburgh of the BAS on this, but more specifically there’s section 6.3.3 of Gettelman and Rood, Demystifying Climate Models: A Users Guide to Earth System Models, 2016, an open source book. Generally speaking, oceans transport otherwise un-transportable heat energy from the tropics northward. As Gettelman and Rood say, “It is as if the large scales (think of the highway or the concrete drainage ditch) require the small scales to handle the flow (or energy) of the circulation” (6.3.3, on page 97).

And while climate models are the best we’ve got, they need a lot more work, and Gettelman and Rood cover areas for improvement.

This is not to say model projections are useless. As many say, Uncertainty is not our friend, and that’s what I mean when I write here and elsewhere that forecasts based upon climate models might well underestimate rates of climate change.

The facts are that numerical climate models, especially when used in ensembles, have to be fast enough to be run many times, simulating Earth’s climate over many years. Even with the fastest computers and specialized hardware, this means compromises are inevitably necessary. For example, although variable grids are common, these are static, whereas the best dynamical models of fluids (e.g., FVCOM) use variable mesh grids in which the grid is refined wherever gradients grow large in magnitude. On the large specialized hardware, such software architectures are presently unworkable, because they mean cores have to communicate with one another too much.

Then there’s the interaction with ice sheets, and various simplifications of oceans.

So, my point is, while I certainly would not feel assured that outcomes could be better than they are projected by CMIP5 going on CMIP6, I wouldn’t feel assured either that they are no worse than the models say.

## What a piece of the Internet really looks like: Hurricane Electric (AS6939)

###### (For a larger view, click on the image, and use your browser Back Button to return to the blog.)

To see more, go to Hurricane Electric’s manipulable 3D map here.

## storage

And, an aside on PV:

## Crocus tommasinianus via Google Pixel 2

Crocus tommasinianus are out, and are glorious.

Here are two photos of blooms in our yard taken with my new Google Pixel 2:

Some reviews of the Pixel 2:

## a microgrid with dynamic boundaries

###### (Updated 2018-04-05, 23:53 EDT.)

Now here’s a thought: A microgrid with dynamic boundaries.

Basic ideas were conceived by Nassar and Salama, “Adaptive self-adequate microgrids with dynamic boundaries”:

Abstract

Intensive research is being directed at microgrids because of their numerous benefits, such as their ability to enhance the reliability of a power system and reduce its environmental impact. Past research has focused on microgrids that have predefined boundaries. However, a recently suggested methodology enables the determination of fictitious boundaries that divide existing bulky grids into smaller microgrids, thereby facilitating the use of a smart grid paradigm in large-scale systems. These boundaries are fixed and do not change with the power system operating conditions. In this paper, we propose a new microgrid concept that incorporates flexible fictitious boundaries: “dynamic microgrids.” The proposed method is based on the allocation and coordination of agents in order to achieve boundary mobility. The stochastic behavior of loads and renewable-based generators are considered, and a novel model that represents wind, solar, and load power based on historical data has been developed. The PG&E 69-bus system has been used for testing and validating the proposed concept. Compared with the fixed boundary microgrids, our results show the superior effectiveness of the dynamic microgrid concept for addressing the self-adequacy of microgrids in the presence of stochastically varying loads and generation.

Applications? Jasper, a sonnenCommunity. That’s an islanded microgrid.

This is encouraging. Because it means, at least, that if people want grid defection to pursue energy democracy, one option is moving to a community which is already islanded from the grid. With dynamic boundaries, it’s possible one could dynamically assemble groups of communities during the day. And, in fact, there might be a new business model there: an intelligent, rapidly switchable transmission network that could group, regroup, drop, and reconfigure a number of microgrids, for fees.
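To make the boundary idea concrete, here is a toy sketch in R, emphatically not Nassar and Salama’s agent-based method: treat microgrids as nodes in a graph and switchable tie-lines as edges, and read the current “boundaries” off as connected components whenever switch states change. The network and switch states are hypothetical:

```r
# Toy illustration only (not Nassar and Salama's algorithm): microgrids are
# nodes, switchable tie-lines are edges, and the "dynamic boundaries" at any
# moment are the connected components of the currently closed tie-lines.
library(igraph)

ties <- data.frame(
  from   = c("A", "B", "C", "D"),
  to     = c("B", "C", "D", "E"),
  closed = c(TRUE, FALSE, TRUE, TRUE)   # hypothetical switch states
)

g <- graph_from_data_frame(ties[ties$closed, c("from", "to")],
                           directed = FALSE,
                           vertices = data.frame(name = LETTERS[1:5]))

components(g)$membership
# A B C D E
# 1 1 2 2 2   -> two groups; close the B-C switch and they merge into one
```

Their actual proposal coordinates agents to move boundaries in response to stochastic load and generation; the graph view only shows why regrouping is cheap once a switching fabric exists.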

Oncor’s microgrid. (Oncor is a Texas utility!)

I love the litany of limits of the supergrid vision in the following talk by Dr Chris Marnay of Lawrence Berkeley National Laboratory.

I also like the designation of renewable energies as being grid hostile.

### Update, 2018-04-05

Siemens, in cooperation with Chicago-based ComEd and the Illinois Institute of Technology (IIT), is working to support microgrids as integral operating components within a 20th century-style command-and-control grid. Note:

The dual-pronged pilot is focused on developing next-generation microgrid controller logic that can handle complex, grid-supportive situations as well as a parallel effort to explore the potential for large volumes of photovoltaic solar on microgrids, when paired with stationary storage. To get the inside scoop on the project, I spoke with vice president of strategy for Siemens Digital Grid, Ken Geisler.

Ultimately, Mr. Geisler envisions the grid of the future as a patchwork of smaller microgrids that all intelligently, intuitively work together, saying “These clusters of resources in a coherent area might be able to support more of what I would call a patchwork quilt of microgrids such that the utility has some level of control that helps offset their costs and manage their rates as well as allows third parties to come in and provide resources at some level like rooftop solar or other microgrids that could pop up for the sole purpose for the C&I industry in the area.”

## fossil fuels are done

There’s a discussion way off at the Energy Institute at Haas about how unfair solar owners are, under current government policies, to electricity customers who do not have access to PV.

It’s irrelevant.

Fossil fuels are done, stranded, the walking dead. Boss Trump or no Trump. Clinton or no Clinton.

This is an irreversible, nonlinear phenomenon, an observable of a coupled set of differential equations. It’s inexorable.

And it makes sense. Technological innovations at this scale are part of Nature.

### Update, 2018-03-30

From Bloomberg New Energy Finance, “Fossil Fuels Squeezed by Plunge in Cost of Renewables, BNEF Says”.

### Update, 2018-04-01: Solar Surprise: Small-Scale Solar a Better Deal than Big

The relevant findings are published by ILSR in an article from 29th March 2018.

## forecast for 27th March 2018

Today is the 21st of March, 2018. Tomorrow we are supposed to get our fourth nor’easter of this late Winter, and the third in nearly as many weeks.

ECMWF, hosted in this incarnation at the Meteocentre UQAM in Montreal, created this projection for next Tuesday, the 27th of March:

That’s a low pressure cell off the East Coast, another nor’easter. Except this one has a central pressure about 10 millibars lower than any we’ve experienced so far. Nothing can be said at this range about the nature of the precipitation, or about the timing of closest approach with respect to tide levels.

But, still, this recurrence is getting pretty interesting.

By the way, ECMWF is sometimes referred to as “the European model”. It has a reputation of being pretty good, in large measure, because its ab initio physics are very good.

## Remember. But remember, too, we are no longer in the 19th century. Our risks today are much bigger.

###### Hat tip to Tamino.

Thoreau’s “Slavery in Massachusetts”.

But, recall, the stakes we gamble upon today are much bigger than those of that era, as big as they were.

See here for further details. But watch the episode if you really want to understand what may be at stake.

The response should be proportionate.

## Sea-level report cards, contingency upon model character, and ensemble methods

Done by the Virginia Institute of Marine Science, new sea-level report cards offer a look at current sea-level rise rates, and projections. What’s interesting to me is making the projections conditional upon the character of the model used to project. In particular, the “character” here is simple (they show the difference between linear and quadratic projections), but the 2050 projection is in most cases markedly different depending upon which model is chosen.

This is very good, because it shows how modeling matters, and how, as Tamino and others have noted elsewhere, proper model criticism and treatment of uncertainties are key.

I think the VIMS presentation is exactly right for public consumption.

For a more technical audience, one familiar with, say, the “advanced” level of presentation at SkepticalScience, I am increasingly fond of ensemble methods, like spaghetti plots. These are very flexible, and can even support a model-averaged version of, say, linear and quadratic projections, even if, I think, neither is necessarily defensible on its own.
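As a sketch of why the model’s character matters so much, here is the linear-versus-quadratic comparison in R on synthetic annual mean sea levels (a quadratic trend plus noise, standing in for a tide-gauge record), with the 2050 projections side by side:

```r
# Sketch: fit linear and quadratic trends to annual mean sea level and
# compare their 2050 projections. Data are synthetic (quadratic trend plus
# noise), standing in for a tide-gauge record; units are mm.
set.seed(42)
year    <- 1950:2017
elapsed <- year - 1950
msl     <- 2.0 * elapsed + 0.02 * elapsed^2 + rnorm(length(year), 0, 15)

fit_lin  <- lm(msl ~ year)
fit_quad <- lm(msl ~ year + I(year^2))

new <- data.frame(year = 2050)
c(linear    = as.numeric(predict(fit_lin, new)),
  quadratic = as.numeric(predict(fit_quad, new)),
  averaged  = mean(c(predict(fit_lin, new), predict(fit_quad, new))))
# With curvature in the record, the quadratic projection lands well above
# the linear one: the choice of model dominates the 2050 number.
```

A spaghetti plot is then just many such fits, over resampled data and varied model forms, drawn together.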

## “Eon and RWE just killed the utility as we know it”

By the way, I often post smaller things and comment upon them, typically items having to do with economic, financial, business, or solid waste management matters, at my site on LinkedIn.


## and Then There’s Physics does “Talking solutions and motivating action”

And Then There’s Physics does a fine post about scientists “talking about solutions and motivating action”.

But I felt the figure from Dr Glen Peters needed to be updated a bit, with a status briefing. So, below:

###### (Click on image to see a larger figure and use your browser Back Button to return to blog.)

The problem isn’t that there aren’t solutions. The problem is that we are rapidly running out of time, and some people still think we have time to consider using, for instance, natural gas as a “bridge fuel”.

And I just discussed what begins to happen if we miss the temperature limit.

## Uh, oh: Loss of control ahead …

From the technical summary by the NASA Jet Propulsion Laboratory, based at the California Institute of Technology, titled “Far northern permafrost may unleash Carbon within decades”, an excerpt:

Permafrost in the coldest northern Arctic — formerly thought to be at least temporarily shielded from global warming by its extreme environment — will thaw enough to become a permanent source of carbon to the atmosphere in this century, with the peak transition occurring in 40 to 60 years, according to a new NASA-led study.

The study calculated that as thawing continues, by the year 2300, total carbon emissions from this region will be 10 times as much as all human-produced fossil fuel emissions in 2016.

This paper, one of a long series of studies of the permafrost, including field assays of emissions using NASA and other aircraft, is titled:

N. C. Parazoo, C. D. Koven, D. M. Lawrence, V. Romanovsky, C. E. Miller, “Detecting the permafrost carbon feedback: talik formation and increased cold-season respiration as precursors to sink-to-source transitions”, The Cryosphere, 2018, 12, 123-144.

It is the first definitive example of a long-held fear apparent to anyone familiar with climate science who thinks deeply about it. While the initial forcings off the climate optimum within which civilization has developed are caused entirely by human emissions and actions, the effects of such emissions are, increasingly, amplified by natural feedbacks. The most direct of these is water vapor: due to the Clausius-Clapeyron relation, and the fact that water vapor is itself a greenhouse gas, roughly, for each part of warming due to CO2 emissions there’s a comparable part due to water vapor, resulting in a doubling of effect. As warming progresses, there are stores of organic and other Carbon which are frozen out of circulation and microbial activity because of temperature, principally in polar regions. As the Earth warms, and noting that polar regions, percentage-wise, warm more than temperate or tropical ones, these stores are challenged and, eventually, begin to ferment once more. This results in additional emissions of greenhouse gases, CO2 and sometimes CH4.
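For scale on the water vapor piece: the Clausius-Clapeyron relation gives the fractional increase of saturation vapor pressure with temperature. With standard textbook values, latent heat of vaporization $L_v \approx 2.5 \times 10^6\,\mathrm{J\,kg^{-1}}$, water vapor gas constant $R_v \approx 461.5\,\mathrm{J\,kg^{-1}\,K^{-1}}$, and a near-surface $T \approx 288\,\mathrm{K}$:

$$\frac{d\,\ln e_s}{dT} \;=\; \frac{L_v}{R_v T^2} \;\approx\; \frac{2.5 \times 10^6}{461.5 \times 288^2} \;\approx\; 0.065\ \mathrm{K^{-1}},$$

that is, roughly 6% to 7% more saturation vapor pressure for each kelvin of warming, which is the sense in which the water vapor amplification tracks the CO2-driven warming.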

The implication of all this is that while, at present, we control nearly all emissions and could, in principle, reverse them by our choice of energy and other deliberate designs, in time an increasing fraction of emissions will come from natural, temperature-dependent sources over which we have no control whatsoever. Accordingly, if, someday, humanity wished to contain climate change by reducing emissions, it could zero its own contributions, but, as time goes on, there is no way to zero these natural ones, because they respond to increased temperatures, and that is all. Even Solar Radiation Management, which, in my opinion, is a really bad idea, basically, in its designs, maintains temperatures at whatever they are, preventing increases. It does not cool Earth’s surface.

What’s worse is that even if humanity decided, because of the consequences, to try to scrub the atmosphere of emissions, this project is only feasible, albeit astronomically expensive, if human emissions are nearly zeroed. It is not possible to keep up with human emissions as they are at present. And human emissions cannot be zero, because of those inherent in agriculture and food production, even if these were managed with vehicles and processes which themselves were zero emitting. Accordingly, to the degree to which natural sources increase and dominate, they might, at some point, render any project for direct capture of CO2 futile, closing the door on our climate fate, even if we wanted to spend huge amounts of financial and human capital to make it happen. This is well beyond the ability of any market incentives or technology or engineering to contain.

###### Researchers Ben Jones, left, Laurence Plug and Guido Grosse pierce and ignite bubbles of methane gas that are frozen near the surface of a tundra lake on Alaska’s Seward Peninsula. Methane has 72 times the heating effect of carbon dioxide (CO2), and its emission from arctic lakes was a major contributor to a period of global warming more than 11,000 years ago. Nowhere is the evidence of a heating planet more dramatic than in the polar regions. Over the past 50 years, the arctic has warmed twice as fast as the rest of the globe. (Photo by Luis Sinco/Los Angeles Times via Getty Images)

Thus, humanity could, in these circumstances, lose control. Certainly, such natural sources of emissions will make it increasingly more expensive to manage the effects of our emissions. At some point, we tip into a world which is hurtling headlong into a climate destiny we cannot even imagine.

This is another, perhaps dominant, reason why keeping mean surface temperature change to just +2C is so important.

We’re failing to do that.

## Boston, and nearby, 2nd March 2018

That’s Atlantic Avenue near the Aquarium.

That’s Essex, in Cape Ann.

That’s the Sargent’s Wharf parking lot.

That’s where General Electric wants to build their new headquarters (!).

That’s Columbus Park, near the Aquarium.

That’s Neponset Circle.

That’s Plymouth Rock in Plymouth.

That’s Sandwich.

That’s Quincy.

Yeah, developing on Atlantic Avenue is really smahhht!

And this is not a hurricane.