## “All of Monsanto’s problems just landed on Bayer” (by Chris Hughes at Bloomberg)

Monsanto has touted Roundup (its active ingredient is glyphosate, more properly $\textbf{\texttt{N-(phosphonomethyl)glycine}}$) as a safe remedy for weed control, often in the taming of so-called “invasive species”. It’s used on playing fields where children are exposed to it, including, apparently, in my home town of Westwood, Massachusetts.

### But what is it good for? Case 1: Markov chain transition matrices

Consider again (1):

$(1')\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

This also happens to be the template for the transition matrix of a 3-state Markov chain, a structure with many applications.

The following example is taken from the famous paper by Rabiner, as presented by Resch:

• L. R. Rabiner, “A tutorial on Hidden Markov Models and selected applications in speech recognition”, Proceedings of the IEEE, February 1989, 77(2), DOI:10.1109/5.18626.
• B. Resch, “Hidden Markov Models”, notes for the course Computational Intelligence, Graz University of Technology, 2011.
They begin with the transition diagram:

which if cast into the form of (1′) and (2) looks like:

$(18)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} 0.8 & 0.05 & 0.15 \\ 0.2 & 0.6 & 0.2 \\ 0.2 & 0.3 & 0.5 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

The rows, top-to-bottom, are labeled sunny, rainy, and foggy, as are the columns, left-to-right. Cell $(i,j)$ gives the probability for going from state $i$ to state $j$. For example, the probability of going from sunny to foggy is 0.15. Here’s a prettier rendition from Resch:

Resch and Rabiner go on to teach Hidden Markov Models (“HMMs”), where $\mathbf{A}$ is not known and, moreover, the weather is not directly observed. Instead, information about the weather is obtained by observing whether or not a third party takes an umbrella to work. Here, however, suppose the weather is directly known. And suppose $\mathbf{A}$ is known except that nothing is known about what happens after a foggy day, apart from the probability that it remains foggy. Symbolically:

$(19)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} 0.8 & 0.05 & 0.15 \\ 0.2 & 0.6 & 0.2 \\ a_{31} & a_{32} & 0.5 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$

Note in (18), or Resch’s tableau, how the rows each sum to one. This is a characteristic of first-order Markov models: once in a state, the transition has to go somewhere, even if that somewhere is the same state. Transitions can’t simply cause the system to disappear, so the outgoing probabilities from each state must sum to one. This means, however, that when the unknown foggy-day transitions are introduced, there aren’t two unconstrained parameters, only one. Accordingly, rather than introducing $a_{32}$, I could write $1 - a_{31}$. As it turns out, in my experience with nloptr, it is often better to state this constraint explicitly so the optimizer knows about it, rather than building it implicitly into the objective function, even at the price of introducing another parameter and its search space.
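For concreteness, here is a minimal R sketch (my own, not code from the post) of the transition matrix in (18) and its row-sum property:

```R
# The transition matrix of (18); the dimnames are mine, for readability.
A <- matrix(c(0.80, 0.05, 0.15,
              0.20, 0.60, 0.20,
              0.20, 0.30, 0.50),
            nrow=3, ncol=3, byrow=TRUE,
            dimnames=list(c("sunny", "rainy", "foggy"),
                          c("sunny", "rainy", "foggy")))

stopifnot(all(abs(rowSums(A) - 1) < 1e-12))  # each row sums to one

A["sunny", "foggy"]  # probability of going from sunny to foggy: 0.15

# With the foggy row unknown except for its last entry, a_31 + a_32 = 1 - 0.5,
# so only one of the two is free once the row-sum constraint is imposed.
```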

The challenge I’ll pose here is somewhat tougher than the one faced by HMMs. The data in hand is not a series of sunny, rainy, or foggy weather records. Because, say, the records were jumbled, all that’s available is a count of how many sunny, rainy, and foggy days there were, and a count of the days that followed them. In particular:

$(20)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} 1020 \\ 301 \\ 155 \end{array} \right]$

meaning that, among the set of day pairs, the first day was sunny 1020 times, rainy 301 times, and foggy 155 times. Statistical spidey sense wonders how many observations are needed to pin down transition probabilities well, but let’s set that aside for now. (At least it’s plausible that, if ordering information is given up, more count information might be needed.) And the count of what the weather was on the second days is:

$(21)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {c} 854 \\ 416 \\ 372 \end{array} \right]$

or 854 sunny days, 416 rainy days, and 372 foggy days.

Note that, unlike in (16), here in (19) there is no need to pick upper and lower bounds on the unknowns: each is a probability, so by definition it is confined to the unit interval. But $a_{31} + a_{32} + 0.5 = 1$ must always hold, so that constraint needs to be stated.
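The code below relies on two helpers, L2norm and (in Case 2) L2normForMatrix, defined earlier in the post, and it assumes the nloptr package is loaded. For readers starting here, minimal stand-ins consistent with how they are used would be:

```R
library(nloptr)   # the optimizer used throughout; assumed loaded earlier in the post

# Minimal stand-ins for helpers defined earlier in the post; treat these as
# sketches consistent with their use below, not the author's exact code.
L2norm <- function(d) {
  # Euclidean (L2) norm of a vector or single-column matrix
  sqrt(sum(d^2))
}

L2normForMatrix <- function(d, scaling=1) {
  # Frobenius-style L2 norm of a matrix, with optional scaling to keep
  # the objective numerically well-behaved
  sqrt(sum((d / scaling)^2))
}
```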

Here’s the code:
```R
P2.func<- function(x) {
  # Sunny, Rainy, Foggy
  stopifnot( is.vector(x) )
  stopifnot( 2 == length(x) )
  #
  a.31<- x[1]
  a.32<- x[2]
  #
  P2<- matrix( c( 0.8,  0.05, 0.15,
                  0.2,  0.6,  0.2,
                  a.31, a.32, 0.5 ),
               nrow=3, ncol=3, byrow=TRUE )
  return(P2)
}
```

```R
objective2<- function(x) {
  stopifnot( is.vector(x) )
  stopifnot( 2 == length(x) )
  x.right<- matrix(c(1020, 301, 155), 3, 1)
  b<- matrix(c(854, 416, 372), 3, 1)
  P2<- P2.func(x)
  d<- b - P2 %*% x.right
  # L2 norm
  return( L2norm(d) )
}

constraint2<- function(x) {
  return( (x[1] + x[2] - 0.5) )
}

nloptr.options2<- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-4,
                       "print_level"=0, "maxeval"=100000, "population"=1000)

Y2<- nloptr(x0=rep(0.5,2),
            eval_f=objective2,
            eval_g_eq=constraint2,
            lb=rep(0,2), ub=rep(1,2),
            opts=nloptr.options2 )

print(Y2)

cat(sprintf("Y2 resulting estimates for a_{31}, a_{32} are: %.2f, %.2f\n",
            Y2$solution[1], Y2$solution[2]))
```

This run results in:

```
Call:
nloptr(x0 = rep(0.5, 2), eval_f = objective2, lb = rep(0, 2),
       ub = rep(1, 2), eval_g_eq = constraint2, opts = nloptr.options2)

Minimization using NLopt version 2.4.2

NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because
maxeval (above) was reached. )

Number of Iterations....: 100000
Termination conditions:  xtol_rel: 1e-04   maxeval: 1e+05
Number of inequality constraints:  0
Number of equality constraints:    1
Current value of objective function:  0.500013288564363
Current value of controls: 0.20027284199 0.29972776012

Y2 resulting estimates for a_{31}, a_{32} are: 0.20, 0.30
```

Suppose some of the data are missing. In particular, suppose instead:

$(20a)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} 1020 \\ r(\eta, 155, 1020) \\ 155 \end{array} \right]$

where $\eta$ is on the unit interval and so all that’s known is that $x_{2}$ is between 155 and 1020, that is, bounded by the other two terms in $\mathbf{x}$.
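The function $r(\eta, a, b)$ is defined earlier in the post; the natural reading, and a minimal stand-in consistent with its use here and in the code below, is a linear map of $\eta \in [0, 1]$ onto $[a, b]$:

```R
# Stand-in for the post's r() helper: map eta in [0,1] linearly onto [a, b].
# (Assumed form; the original definition appears earlier in the post.)
r <- function(eta, a, b) {
  a + eta * (b - a)
}
```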

Now there is a third parameter to search, $\eta$; it is unconstrained apart from lying on the unit interval, while $a_{31}$ and $a_{32}$ remain tied by the row-sum constraint. The code for this is:

```R
P3.func<- function(x) {
  # Sunny, Rainy, Foggy
  stopifnot( is.vector(x) )
  stopifnot( 3 == length(x) )
  #
  a.31<- x[1]
  a.32<- x[2]
  # There's an x[3] but it isn't used in the P3.func. See
  # the objective3.
  #
  P3<- matrix( c( 0.8,  0.05, 0.15,
                  0.2,  0.6,  0.2,
                  a.31, a.32, 0.5 ),
               nrow=3, ncol=3, byrow=TRUE )
  return(P3)
}
```

```R
objective3<- function(x) {
  stopifnot( is.vector(x) )
  stopifnot( 3 == length(x) )
  x.right<- matrix(c(1020, r(x[3], 155, 1020), 155), 3, 1)
  b<- matrix(c(854, 416, 372), 3, 1)
  P3<- P3.func(x)
  d<- b - P3 %*% x.right
  # L2 norm
  return( L2norm(d) )
}

constraint3<- function(x) {
  stopifnot( 3 == length(x) )
  return( (x[1] + x[2] - 0.5) )
}

nloptr.options3<- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-4,
                       "print_level"=0, "maxeval"=100000, "population"=1000)

Y3<- nloptr(x0=rep(0.5,3),
            eval_f=objective3,
            eval_g_eq=constraint3,
            lb=rep(0,3), ub=rep(1,3),
            opts=nloptr.options3 )

print(Y3)

cat(sprintf("Y3 resulting estimates for a_{31}, a_{32}, and eta are: %.2f, %.2f, %.2f\n",
            Y3$solution[1], Y3$solution[2], Y3$solution[3]))
```

The results are:

```
Call:
nloptr(x0 = rep(0.5, 3), eval_f = objective3, lb = rep(0, 3),
       ub = rep(1, 3), eval_g_eq = constraint3, opts = nloptr.options3)

Minimization using NLopt version 2.4.2

NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because
maxeval (above) was reached. )

Number of Iterations....: 100000
Termination conditions:  xtol_rel: 1e-04   maxeval: 1e+05
Number of inequality constraints:  0
Number of equality constraints:    1
Current value of objective function:  0.639962390444759
Current value of controls: 0.20055501795 0.29944464945 0.16847867543

Y3 resulting estimates for a_{31}, a_{32}, and eta are: 0.20, 0.30, 0.17
```

with that $\eta$ corresponding to $x_{2} \approx 301$. That recovered 301, compared with the true value of 301 for $x_{2}$ in (20), is essentially spot on. For an example of where this kind of estimation is done more generally, see:

### But what is it good for? Case 2: Learning prediction matrices

When systems like (2) arise in cases of statistical regression, the matrix $\mathbf{A}$ is called a prediction or design matrix. The idea is that its columns represent sequences of predictions for the response, represented by the column vector $\mathbf{b}$, and the purpose of regression is to find the best weights, represented by column vector $\mathbf{x}$, for predicting the response.

Consider (2) again, but instead of $\mathbf{b}$ and $\mathbf{x}$ being column vectors, as in (5), they are matrices, $\mathbf{B}$ and $\mathbf{X}$, respectively. In other words, the situation is that there are lots of $(\mathbf{b}_{k}, \mathbf{x}_{l})$ pairs available. And then suppose nothing is known about $\mathbf{A}$, that is, it just contains nine unknown parameters:

$(22)\,\,\,\mathbf{A} = \left[ \begin{array} {ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right]$

There are, of course, questions about how many $(\mathbf{b}_{k}, \mathbf{x}_{l})$ pairs are needed, in tandem with the choice of number of iterations (see the maxeval discussion in Miscellaneous Notes below). Here, 8 pairs were used for purposes of illustration.
$(23)\,\,\,\mathbf{X} = \left[\begin{array}{cccccccc} 1356 & 7505 & 4299 & 3419 & 7132 & 1965 & 8365 & 8031 \\ 5689 & 8065 & 7001 & 638 & 8977 & 1088 & 3229 & 1016 \\ 3777 & 8135 & 3689 & 1993 & 3635 & 9776 & 8967 & 7039 \end{array} \right]$

and

$(24)\,\,\,\mathbf{B} = \left[\begin{array}{cccccccc} 5215 & 13693 & 7265 & 4217 & 9367 & 10588 & 14372 & 12043 \\ 7528 & 17825 & 11024 & 4989 & 14860 & 9447 & 16162 & 13087 \\ 6161 & 12798 & 7702 & 3023 & 9551 & 8908 & 11429 & 8734 \end{array} \right]$

The code for this case is:

```R
objective4<- function(x) {
  stopifnot( is.vector(x) )
  stopifnot( 9 == length(x) )
  B<- matrix(c(5215, 13693,  7265, 4217,  9367, 10588, 14372, 12043,
               7528, 17825, 11024, 4989, 14860,  9447, 16162, 13087,
               6161, 12798,  7702, 3023,  9551,  8908, 11429,  8734 ),
             3, 8, byrow=TRUE)
  X.right<- matrix(c(1356, 7505, 4299, 3419, 7132, 1965, 8365, 8031,
                     5689, 8065, 7001,  638, 8977, 1088, 3229, 1016,
                     3777, 8135, 3689, 1993, 3635, 9776, 8967, 7039 ),
                   3, 8, byrow=TRUE)
  P4<- matrix(x, nrow=3, ncol=3, byrow=TRUE)
  d<- B - P4 %*% X.right
  # L2 norm for matrix
  return( L2normForMatrix(d, scaling=1000) )
}

nloptr.options4<- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-6,
                       "print_level"=0, "maxeval"=300000, "population"=1000)

Y4<- nloptr(x0=rep(0.5,9),
            eval_f=objective4,
            lb=rep(0,9), ub=rep(1,9),
            opts=nloptr.options4 )

print(Y4)

cat("Y4 resulting estimates for A:\n")
print(matrix(Y4$solution, 3, 3, byrow=TRUE))
```

The run results are:
```
Call:
nloptr(x0 = rep(0.5, 9), eval_f = objective4, lb = rep(0, 9),
       ub = rep(1, 9), opts = nloptr.options4)

Minimization using NLopt version 2.4.2

NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because
maxeval (above) was reached. )

Number of Iterations....: 300000
Termination conditions:  xtol_rel: 1e-06   maxeval: 3e+05
Number of inequality constraints:  0
Number of equality constraints:    0
Current value of objective function:  0.0013835300300619
Current value of controls:
    0.66308125177 0.13825982301 0.93439957114
    0.92775614187 0.63095968859 0.70967190127
    0.33388992680 0.47841968691 0.79082981177

Y4 resulting estimates for A:
              [,1]          [,2]          [,3]
[1,] 0.66308125177 0.13825982301 0.93439957114
[2,] 0.92775614187 0.63095968859 0.70967190127
[3,] 0.33388992680 0.47841968691 0.79082981177
```

In fact, the held back version of $\mathbf{A}$ used to generate these test data sets was:

$(25)\,\,\,\mathbf{A} = \left[\begin{array}{ccc} 0.663 & 0.138 & 0.934 \\ 0.928 & 0.631 & 0.710 \\ 0.334 & 0.478 & 0.791 \end{array} \right]$

and that matches the result rather well. So, in a sense, the algorithm has “learned” $\mathbf{A}$ from the 8 data pairs presented.
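As a quick sanity check (my own sketch, not from the original post), the recovered matrix can be compared elementwise against the held-back $\mathbf{A}$ of (25):

```R
# My own check: compare the recovered matrix against the held-back A of (25).
# Y4 is the nloptr result from the run above.
A.true <- matrix(c(0.663, 0.138, 0.934,
                   0.928, 0.631, 0.710,
                   0.334, 0.478, 0.791),
                 3, 3, byrow=TRUE)
A.est <- matrix(Y4$solution, 3, 3, byrow=TRUE)
max(abs(A.est - A.true))   # largest elementwise discrepancy; well under 0.001 here
```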

## censorship isn’t tolerated here, so …

Editorial cartoonist Rob Rogers’ recent editorial cartoons have been deleted from the Pittsburgh Post-Gazette. Accordingly …


## Aldo Leopold

We end, I think, at what might be called the standard paradox of the twentieth century: our tools are better than we are, and grow better faster than we do. They suffice to crack the atom, to command the tides. But they do not suffice for the oldest task in human history: to live on a piece of land without spoiling it.

From Aldo Leopold, The River of the Mother of God and other essays, University of Wisconsin Press, 1991: 254.

From a modern perspective, Leopold, although insightful and having contributed enormously to the development of ecological ethics and sensibilities, also claimed:

Examine each question in terms of what is ethically and aesthetically right, as well as what is economically expedient. A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise.

That’s from his Sand County Almanac (page 262). Read literally, it suggests that biotic communities are capable of stability in the human meaning of the word. But Leopold introduced notions like the trophic cascade, which is biological dynamics at its most essential, and so, instead of biotic communities, he should be read as meaning biocoenosis. Accordingly, oscillation in species abundances, and even the replacement of one species by another, as in forest succession or even invasion, would be considered stable. An outline of a modern view is available here.

To his great and practical credit, Leopold also struggled to reconcile Ecology and Economics.

## The elephant in the room: a case for producer responsibility

### This is a guest post by Claire Galkowski, Executive Director, South Shore Recycling Cooperative.

With so much focus on the recycling crisis, we tend to overlook the root cause of the problem: the glut of short-lived consumer products and packaging. Rather than looking for new places to dispose of it, it is imperative that we look at where it is coming from, and stem the flow. Mining, harvesting, processing and transport are where the biggest environmental footprints land.

In the current system, manufacturers who profit from the sale of their wares have little incentive to make durable products or minimal, easily recycled packaging, or to incorporate recycled feedstock into their packaging. Thankfully, a few corporations such as Unilever and P&G are stepping up. Many more need a nudge to follow suit. Nor are consumers incentivized to reduce their use and disposal of unnecessary “stuff”. The proliferation of convenient single-use products and unrecyclable packaging is clogging our waterways, contaminating our recycling plants and filling our landfills.

Add to that the diminishing disposal capacity in Massachusetts as most of our remaining landfills face closure within the decade, and we are facing a day when the massive amount of stuff that we blithely buy, use once and toss will have no place to go. Consumers who pay for their trash by the bag have some skin in the game to reduce their disposal footprint. While this may encourage the use of less single use “stuff”, this can also result in “wishful recycling”, which is clearly hurting our recycling industry, and is one cause of China’s embargo on our recyclables.

Producers of non-bottle-bill products are selling us millions of tons of products for billions of dollars. Most will be disposed of within 6 months. Packaging alone accounts for about 30% of our waste, and about 60% of our recycling stream. Once products and packaging leave the warehouse, producers are free of responsibility for what happens to them. A few exceptions are carbonated beverages that are redeemed for deposit, and rechargeable batteries and mercury thermostats that are recycled through manufacturer-sponsored programs, which are good examples of product stewardship (*). Municipalities, haulers and recycling processors are left holding the plastic bag, the dressing-coated take-out container, the plastic-embedded paper cup, and the glass bottle that currently has no local recycling market.

It’s time for that to change. We need the packaging industry to partner with those of us that manage their discards to help solve this massive problem.

There is a bill in Massachusetts House Ways and Means, H447, An act to reduce packaging waste, that assigns a fee to packaging sold in Massachusetts. The fee is based on recyclability, recycled content, and the cost to manage the packaging at end of life. It provides an incentive for leaner, more thoughtful packaging design and for creating domestic markets for our recyclables. The proceeds provide funding for improved recycling infrastructure development.

With help from MassDEP, the SSRC and many municipalities are working hard to adjust the habits of our residents, an uphill climb. Recycling companies are struggling to navigate this massive market contraction, and wondering if they can continue to operate until viable domestic outlets are established. Municipal recycling costs are skyrocketing, straining budgets with no clear end in sight.

With help from the consumer product manufacturers that helped to create this crisis, it will be possible to resurrect and revitalize our recycling industry, create domestic markets for its products, and make our disposal system more sustainable.

#### States look at EPR, funding cuts, mandates

##### by Jared Paben and Colin Staub, February 6, 2018, from Resource Recycling

California: The Golden State is advancing a bill calling for mandates on the use of recycled content in beverage containers. The legislation, Senate Bill 168, requires the California Department of Resources Recycling and Recovery (CalRecycle) by 2023 to establish minimum recycled-content standards for metal, glass or plastic containers (state law already requires glass bottles contain 35 percent recycled content). The bill also requires that CalRecycle by 2020 provide a report to lawmakers about the potential to implement an extended producer responsibility (EPR) program to replace the current container redemption program. The state Senate on Jan. 29 voted 28-6 to pass the bill, which is now awaiting action in the Assembly.

Connecticut: A workgroup convened by the Senate Environment Committee has been meeting for more than a year to consider policies, including EPR, that would reduce packaging waste and boost diversion. The group includes industry representatives, environmental advocates, MRF operators, government regulators and more. It most recently met in December and discussed what should be included in its final recommendations to state lawmakers. EPR, which is on the table for packaging and printed paper, was discussed at length. The group is working to finalize its recommendations and could present them to lawmakers during the current legislative session.

## Sidney, NY: The lead example of how the USA will deal with future coastal and riverine flooding?

From Bloomberg, the story of Sidney, NY, not that far from where I used to live in Endicott, NY.

More than 400 homes and businesses ended up underwater in Sidney, affecting more than 2,000 people. It was months before Spry and her neighbors could move back in. It was also the second time in five years that the Susquehanna had wrecked half the village. People had just finished rebuilding. When Spry walked back into her soggy house, the street outside reeking with the rancid smell of garbage and fuel, she was hit first, she remembers, by the sight of her brand-new hardwood floor, completely buckled.

Spry didn’t want to rebuild again, and neither did local officials; everyone knew the river would keep flooding homes and businesses. So Sidney decided to try something else: It would use federal and state money to demolish Spry’s neighborhood while creating a new one away from the flood plain for displaced residents. Sidney would be on the forefront of U.S. disaster policy, a case study in what’s known as managed retreat—and the many ways it can go wrong.

Until recently, the guiding philosophy behind attempts to protect U.S. homes and cities against the effects of climate change was to build more defenses. Houses can be perched on stilts, surrounded by barriers, buttressed with stormproof windows and roofs. Neighborhoods can be buffered by seawalls for storm surges, levees for floods, firebreaks for wildfires. Defenses are an instinctive response for a species that’s evolved by taming the natural world.

But sometimes the natural world won’t be tamed. Or, more precisely, sometimes engineered solutions can no longer withstand the unrelenting force of more water, more rain, more fires, more wind. Within 20 years, says the Union of Concerned Scientists, 170 cities and towns along the U.S. coast will be “chronically inundated,” which the group defines as flooding of at least 10 percent of a land area, on average, twice a month. By the end of the century, that category will grow to include more than half of the communities along the Eastern Seaboard and Gulf Coast—and that’s if the rate of climate change doesn’t accelerate. In their less guarded moments, officials in charge of this country’s disaster programs have begun to acknowledge the previously unthinkable: Sometimes the only effective way to protect people from climate change is to give up. Let nature reclaim the land and move a neighborhood out of harm’s way while it still is one.

## What gives me hope … And it ain’t the small stuff

As Arman Oganisian of Stable Markets writes, “There are no solutions, only trade-offs.” That is a fundamentally engineering attitude.

It is fundamentally about the economics, and, in particular, the dramatic drop in levelized cost of energy for wind and renewables, as well as take-up rates.

But what’s significant is that players in the marketplace who now ignore, shun, and oppose, in court and in the public forum of ideas, arguments in favor of regulating carbon emissions, or of assigning producer responsibility to fossil fuel companies for the downstream harm their products do, will rapidly exhibit econotaxis towards these kinds of motivations.

That, as the above video notes, the values of these companies have dropped precipitously in the recent term is highly gratifying.

And, oh, by the way, it serves them right.

But, also, a warning to the environmental community, whether in the United States or elsewhere: if you are really, honestly concerned with this problem, you would do well to set aside your personal and collective egos and work with these organizations, some of which have been your long-time enemies, to achieve these goals. These trade-offs are necessary to get there.

And what is more important, getting there? Or achieving your side conditions and side goals, however significant they might be in isolation? If climate change approaches an existential threat, then it should be treated like an existential threat. The safe continuance of a particular species in an isolated ecosystem does not rise to the same level of worthiness as measures designed to stop a much bigger calamity.

Grow up.

Oh, and what will happen to fossil fuels?

## Sustainable Landscaping

### Update: 2018-05-26

It’s not about plants, not entirely. But it seems that, in one agricultural area, pollinators (bees) under stress have ceded their pollinating responsibility to a couple of species of exotic (read invasive) flies. See: J. R. Stavert, D. E. Pattemore, I. Bartomeus, A. C. Gaskett, J. R. Beggs, “Exotic flies maintain pollination services as native pollinators decline with agricultural expansion”, Journal of Applied Ecology (British Ecological Society), 22 January 2018. The only thing surprising about that is that people consider it surprising.

##### Update, 2018-04-29

While my first thoughts and reasons for this post were simply to collect together a number of links pertaining to an interesting subject, regarding which there appeared to be some controversy, I have received several reactions to the material, many supportive and positive, others strongly adverse. This indicated to me that this is an area worth knowing more about, and, so, I have pulled quite a number of technical articles from the fields of Ecology, Forest Management, and Invasive Species Studies which I am currently reading. I intend to at least supplement the links below with additional ones explaining the present state of knowledge. I may include some comments summarizing what I have read. In other posts, in the future, I may do some modeling along these lines, since diffusion processes modeled by differential equations are of significant interest to me, whether for biological and physical systems or for the diffusion of product innovations via, for instance, the Bass diffusion model. Those results won’t be posted here, though.

Sustainable landscaping as described by Wikipedia, and by Harvard University. See also the Sustainable Sites Initiative. It’s a lot more than eradicating invasive species. In fact, that might be harmful. There’s a lot of questionable information out there, even from otherwise reputable sources like The Trustees of Reservations. See also their brochure on the subject, where they recommend various control measures, including chemical ones, even if that is not their preferred option. There is evidence Roundup (glyphosate) is indeed effective against at least Alliaria petiolata, with little harm to common, commingled biocoenoses.

Dandelions

###### (Above from M. Rejmánek, “What makes a species invasive?”, Ecology, September 1996, 3-13.)

Four inspirational books:

I dove into reading Professor del Tredici’s book as soon as I got my copy. Here is part of what he has to say from pages 1-3:

Perhaps the most well-known example of a “spontaneous” plant is Ailanthus altissima or tree-of-heaven, introduced from China. Widely planted in the Northeast in the first half of the nineteenth century, Ailanthus was later rejected by urban tree planters as uncouth and weedy. Despite concerted efforts at eradication, the tree managed to persist by sprouting from its roots and spread by scattering its wind-dispersed seeds …

Although it is ubiquitous in the urban landscape, Ailanthus is never counted in street tree inventories because no one planted it — and consequently its contribution to making the city a more livable place goes completely unrecognized. When the mayor of New York City promised in 2007 to plant a million trees to fight global warming, he failed to realize … that if the Ailanthus trees already growing throughout the city were counted he would be halfway toward his goal without doing anything. And that, of course, is the larger purpose of this book: to open people’s eyes to the ecological reality of our cities and appreciate it for what it is without passing judgment on it. Ailanthus is just as good at sequestering carbon and creating shade as our beloved native species or showy horticultural selections. Indeed, if one were to ask whether our cities would be better or worse without Ailanthus, the answer would clearly be the latter, given that the tree typically grows where few other plants can survive.

There is no denying the fact that many — if not most — of the plants covered in this book suffer from image problems associated with the label “weeds” — or, to use a more recent term, “invasive species.” From the plant’s perspective, invasiveness is just another word for successful reproduction — the ultimate goal of all organisms, including humans. From a utilitarian perspective, a weed is any plant that grows by itself in a place where people do not want it to grow. The term is a value judgment that humans apply to plants we do not like, not a biological characteristic. Calling a plant a weed gives us license to eradicate it. In a similar vein, calling a plant invasive allows us to blame it for ruining the environment when really it is humans who are actually to blame. From the biological perspective, weeds are plants that are adapted to disturbance in all its myriad forms, from bulldozers to acid rain. Their pervasiveness in the urban environment is simply a reflection of the continual disruption that characterizes this habitat. Weeds are the symptoms of environmental degradation, not its cause, and as such they are poised to become increasingly abundant within our lifetimes.

###### (Slight emphasis added by blog post author in a couple of places.)

The fact that ‘r-strategists’ are the best invaders is not surprising because the overwhelming majority of biological invasions take place in human- and/or naturally-disturbed habitats. Our modern landscape is mainly disturbed landscape.

###### (Above from M. Rejmánek, “What makes a species invasive?”, Ecology, September 1996, 3-13.)

Links with some quotes and discussion:

S. L. Flory, K. Clay, “Invasive shrub distribution varies with distance to roads and stand age in eastern deciduous forests in Indiana, USA”, Plant Ecology, 2006, 184:131-141.

Some quotes:

If roads are important corridors for exotic plants or if roadside edges provide good habitat for exotic plant growth, then one would predict decreased exotic plant density with increased distance to roads. In support, the prevalence and cover of exotic plants has been shown to decline with increasing distance to road in a number of ecosystems.

Independent of distance to road, successional age might determine susceptibility of a community to exotic plant invasions. Young forests typically have higher light levels (Levine and Feller 2004), fewer competitors, and less litter than older forests (Leuschner 2002) while mature forest interiors are known to have lower light availability, cooler temperatures, and higher humidity than forest edges (Brothers and Spingarn 1992). We would therefore expect, based on levels of light penetration and microclimatic conditions, that older forests would have higher densities of invasive shrubs near the forest edge than in forest interiors and fewer invasive shrubs overall due to less recent disturbance events and less favourable environmental conditions. We would also expect that younger forests would show weaker correlations of densities of invasive shrubs with increasing distance to road since light levels are higher throughout young forests. This would result in an interaction between distance to road and forest age.

The goal of this study was to quantify the density of invasive exotic shrubs along roads in eastern deciduous forests of varying successional ages in Indiana. Eastern deciduous forests cover much of the landscape east of the Mississippi River. Most of this region has been fragmented by urban and suburban development and roads such that ninety percent of all ecosystem areas in the eastern US are within 1061 m of a road (Riitters and Wickham 2003). We specifically addressed the following questions (1) Does the density of invasive exotic shrubs decline as the distance to a road increases? (2) Does the relationship between density and distance to road differ among exotic shrub species? and (3) Are invasive exotic shrubs less common in mature forests than in young successional forests? Answers to these questions will help develop a predictive framework for plant invasions and better inform management strategies.

Successional age has been shown to affect exotic plant establishment in old fields in Minnesota with younger successional aged communities more susceptible to invasions and older communities more resistant (Inouye et al. 1987). Our results show that forest successional age plays a similar role in the distribution of invasive shrubs in eastern deciduous forests with invasive shrubs found in greater densities in young and mid-successional forests than mature forests. This is likely due to a combination of factors including differences in light regimes … Exotic shrubs would have survived and grown much more successfully where they did not have to compete with existing trees or intact forests. This hypothesis could help to explain why we found fewer shrubs near the road in mature forests than young and mid-successional forests.

S. L. Flory, K. Clay, “Effects of roads and forest successional age on experimental plant invasions”, Biological Conservation, 2009, 142, 2531-2537.

## LLNL Sankey diagram of U.S. national energy flows in 2017: What’s possible, what’s not, and who’s responsible

###### (Updated, 2018-05-02. See below.)

I love Sankey diagrams, and have written about them with respect to influence of Big Oil on U.S. climate policy, and in connection with what it takes to power a light bulb, providing a Sankey-based explanation for something Professor Kevin Anderson has spoken and written about. Indeed, there’s a wealth of computational capability in R and otherwise, for constructing Sankey diagrams and the like. Here’s a new one from Lawrence Livermore National Laboratory:

###### (Click on image to see much larger version, for inspection or saving. Use your browser Back Button to return to this blog.)

That’s a lot of energy consumption, and renewables have a long way to go before overtaking it. But, maybe not so much.

First of all, if the solution is, hypothetically, all wind and solar, setting aside storage, nearly all that rejected energy won’t be there. So the actual need isn’t about 98 quads, it’s closer to 31 quads. Call it 35 quads for jollies.

Second, note that wind and solar energy technology are presently on the middle part of the logistic S-curve of growth. (See also diffusion of innovations.) This is a super-exponential region; call it an exponential region to be conservative. Current estimates place cost cuts in these technologies at 30%-40% per year. Actual adoption meets the resistance of regulatory capture and other impediments, and during the last year it was 11%. Clearly, given the cost advantages, the motivation is to go faster, and, one can argue, the greater the spread between present sources of energy and wind and solar, the lower the “energy barrier” to jumping to wind and solar despite other impediments. Translating into time, an 11% per year growth rate corresponds to a doubling time of about 6.5 years, and a 30% growth rate to a doubling time of about 2.6 years. Say the doubling time is 4 years.

Third, to get from 3.1 quads to 35 quads takes about 3.5 doublings, since $\log_{2}(35/3.1) \approx 3.5$ (a bit more than $\pi$), and certainly fewer than 4 doublings. (A short R check of this arithmetic follows the list below.)

Fourth, the really bad news for fossil fuels, and for all the businesses and people that depend upon them to make a living, is that:

• If the doubling time is 4 years, wind and solar will get to 35 quads in about 14 years
• If the doubling time is 3 years, wind and solar will get to 35 quads in about 10 years
• If the doubling time is 5 years, wind and solar will get to 35 quads in about 17 years
• And even if the doubling time is 6.5 years, the value implied by the recent 11% growth rate and by all accounts unduly pessimistic, wind and solar will get to 35 quads in under 25 years
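Here is a small R sketch of that arithmetic (my own check, not code from the original post):

```R
# Doubling time implied by a constant annual growth rate r
doubling.time <- function(r) log(2) / log(1 + r)

doubling.time(0.11)             # about 6.6 years at 11% per year
doubling.time(0.30)             # about 2.6 years at 30% per year

# Doublings needed to go from roughly 3.1 quads of wind and solar to 35 quads
n.doublings <- log2(35 / 3.1)   # about 3.5

# Years to reach 35 quads under various assumed doubling times
data.frame(doubling.years    = c(3, 4, 5, 6.5),
           years.to.35.quads = round(c(3, 4, 5, 6.5) * n.doublings, 1))
```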

Will that be enough to keep us from a +3C world?

Probably not: Too little, too late.

But it’ll happen anyway and it is, as I say, why fossil fuel energy and utilities which depend upon them, natural gas and all, are dead on their feet, and stranded. And any government which puts ratepayer and taxpayer dollars into building out new fossil fuel infrastructure is not only being foolish, they are making the mistake of the century.

As for the +3C climate change outcome? Clearly, that is such an emergency that the only option to address it in time is degrowth, not merely cutting back on additional growth. However, there’s no evidence that degrowth is even being considered as an option. To the approvers of additional development in suburbs like the Town of Westwood and elsewhere, Conservation Commissions and all, I simply say:

You have a choice: Either manage a restrained and then negative growth plan yourself, or Nature will do it for you.

Simple. The officials responsible for these decisions know and have been warned repeatedly about these outcomes. They are ignoring them. They own the long term results. Remember them.

### Update, 2018-05-02

The Trump/Perry Department of Energy, via its EIA, reports that solar energy in the United States has grown 32% per annum since 2000. So, per the reasoning above, that’s a doubling time of under 3 years ($\ln 2 / \ln 1.32 \approx 2.5$ years). Moreover, this estimate is based upon a long sampling period, since 2000, so it is not only stable, it is probably conservative.