“All of Monsanto’s problems just landed on Bayer” (by Chris Hughes at Bloomberg)

See Chris Hughes’ article.

Monsanto has touted Roundup (its active ingredient glyphosate, more properly N-(phosphonomethyl)glycine) as a safe remedy for weed control, often in the taming of so-called “invasive species”. It’s used on playing fields where children are exposed to it, including, apparently, in my home town of Westwood, Massachusetts.

There are more than 450 court cases in progress alleging harm from the product, and the jury in one, DEWAYNE JOHNSON VS. MONSANTO COMPANY ET AL (Case Number: CGC16550128), has found Monsanto et al liable, with a US$289 million award. The product has long been known to affect fish and amphibians, and recently physicians have grown concerned, particularly about its connection with cancer in humans.

Image by Benjah-bmm27, own work, Public Domain.

This has repercussions for Bayer, as Hughes explains.

But it is perhaps most foolish to think wishful environmental management justifies releasing such toxins where kids, adults, pets, and wildlife are exposed.

For more, check out Beyond the War on Invasive Species: A Permaculture Approach to Ecosystem Restoration by Orion and Holmgren, 2015.

Posted in agroecology, an uncaring American public, business, corporate responsibility, ecology, Ecology Action, environment, environmental law, epidemiology, evidence, invasive species, open data, Peter del Tredici, quantitative biology, quantitative ecology, rights of the inhabitants of the Commonwealth, risk, statistics, sustainability, sustainable landscaping, the right to know, Uncategorized, unreason, Westwood | Leave a comment

Local Energy Rules!

As John Farrell says, Keep your energy local. If you want to take back control of your democracy, a priority is taking back control of your energy supply. Centralized energy centralizes political power and influence.

Listen to more from a recent podcast:

There are now 52 podcasts about the primacy of local energy at ILSR.

Posted in adaptation, bridge to somewhere, Buckminster Fuller, clean disruption, climate economics, decentralized electric power generation, decentralized energy, demand-side solutions, efficiency, electricity markets, energy efficiency, energy utilities, feed-in tariff, force multiplier, fossil fuel divestment, grid defection, ILSR, investment in wind and solar energy, local generation, local self reliance, public utility commissions, solar democracy, solar domination, solar energy, solar power, Spaceship Earth, stranded assets, sustainability, the energy of the people, the green century, utility company death spiral, wind energy, wind power, zero carbon | Leave a comment

Erin Gallagher’s “#QAnon network visualizations”

See her most excellent blog post, a deep dive into true Data Science.

(Click on figure to see a full-size image. It is large. Use your browser Back Button to return to this blog afterwards.)

Hat tip to Bob Calder and J Berg.

Posted in data science, jibber jabber, networks | Leave a comment

On lamenting the state of the Internet or Web

From time to time, people complain about the state of the Internet or of the World Wide Web. They are sometimes parts of governments charged with mitigating crime, sometimes privacy advocates, sometimes local governments or retailers lamenting loss of tax revenues, sometimes social crusaders charging it with fostering personal isolation, bullying, vice, and other communal maladies.

Certain people have made pointing out the Web’s ills their principal theme. Jaron Lanier has long done so, and has written many books on the matter. Cathy O’Neil is a more recent critic, not only of the Web but of businesses which employ it and other data-collecting mechanisms to mine, and abuse, their intrinsically imperfect pictures for profit.

Others have underscored the effects of what is predominantly sampling bias. The thing is, this should be no surprise. What is a surprise is that the companies involved don’t see and respond to it as the statistical problem it is. How representative a sample actually is of a population of interest is perhaps the key question in any statistical study. That these companies settle for samples of convenience rather than validated ones shows they are practicing very weak data methods, no matter how many people with doctorates are associated with these projects.
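As a toy illustration of the point, my own sketch in R with made-up numbers: if the sampling mechanism favours part of the population, taking more samples does not repair the estimate.

set.seed(7)
population <- rnorm(1e5, mean = 50, sd = 10)      # the population of interest
random.sample <- sample(population, 1000)         # a properly randomized sample
convenience <- sample(population, 1000, replace = TRUE,
                      prob = plogis((population - 50) / 5))  # easier-to-reach units over-represented
c(truth = mean(population),
  random = mean(random.sample),
  convenience = mean(convenience))                # the convenience estimate comes out biased high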

There is also the criticism from Professor Lawrence Lessig, who understood early the social and legal ramifications of how the Internet and Web are built, particularly in such incisive books as Code and Other Laws of Cyberspace, Code: Version 2.0, and Remix. In Code: Version 2.0 Lessig continued and reemphasized the warning of Code and Other Laws of Cyberspace: that, given the way the Internet and Web were technically structured and funded, the idea of a free market of ideas was slipping away; the network was becoming more regulable, more subject to influence by a few big groups, and prone to the user-as-product problem with which Powazek, among others, has taken issue.

Perhaps Lessig has come closest to it, but what the audience of these critics should understand is that the shortcomings they articulate are inherent in the technical design of the Internet and Web as they are. At the experiential level, the Internet and Web are constructed using the Big Ball Of Mud design anti-pattern. Accordingly, as when any ball of wet mud is subjected to sufficiently large outside forces, it deforms and takes on arbitrary shapes. Given their present size, however, such deformation is having big social implications, whether China’s aggressive censorship, foreign influence on United States elections (probably from Russia), or sales of private user data, witting or not, by large Internet presences.

The thing of it is, there were people who thought carefully and long about how such all-connecting networks should operate, and devised specific and careful design principles for them. While there were several (see history), one of the least known, particularly today, is Theodor Holm Nelson. “Ted” Nelson conceived of a non-anonymous network of producers and consumers of content whereby, through a technical device he termed transclusion (coined in Literary Machines), readers would make micropayments to read or otherwise access content produced by others, with the bulk of these going as compensation to producers and some used for managing the network apparatus. This was termed Xanadu, and Nelson and colleagues made several attempts to realize it and several technically related and useful ideas.

This is a difficult problem. Such a structure, if it is not to be defeated or subjugated, needs these mechanisms built into its technical fabric, along with strong authentication (public key cryptography?) to both prevent theft and identify the parties sending and accepting payments and content. The Internet and Web grew up, and continue to grow, through a combination of deliberate, careful crafting and haphazard, business-driven choices. Just study how the companies operating their innards are paid, and how it got that way. Imposing a rigorous design would have made growth expensive, slow, and difficult, demanding a large base of producers and consumers before there was anything worth reading. Accordingly, Xanadu not only didn’t happen, it couldn’t happen.

However, look where the Internet and Web are now. Spam, malicious attacks, election interference, theft of credit card information, identity theft, viruses, cryptocurrency-driven consumption of excess electrical energy, tracking of individuals by their phones, and targeted advertising are some of the places we’ve gone.

What’s intriguing to me is the possibility that Ted Nelson was right all along, and the kind of careful design he had in mind for Xanadu may one day become necessary if the Internet and Web are to survive, and not just splinter into a hundred subsidiary networks each controlled by the biggest Local Thug, whether that is a government or telecommunications giant. Nelson himself believes we can still learn many things from Xanadu.

So, in many senses, the Internet and Web did not have to be the way they are. There were other, better ideas. In fact, considering that, and considering what we’re doing to Earth’s climate through our unmitigated worship of Carbon and growth, if humanity ever needs an epitaph, I think it ought to be:

We did a lot of things, but unfortunately we didn’t think them through.

Posted in American Association for the Advancement of Science, an ignorant American public, an uncaring American public, Anthropocene, being carbon dioxide, bollocks, Boston Ethical Society, bridge to nowhere, Buckminster Fuller, capricious gods, Carbon Worshipers, card games, civilization, climate change, consumption, corporate responsibility, Cult of Carbon, Daniel Kahneman, data centers, David Suzuki, denial, design science, ethical ideals, Faster Forward, Hyper Anthropocene, hypertext, ignorance, Internet, Joseph Schumpeter, making money, Mathbabe, networks, organizational failures, superstition, Ted Nelson, the right to know, the tragedy of our present civilization, transclusion, Xanadu, ZigZag | Leave a comment

“Space, climate change, and the real meaning of theory”

(From The New Yorker, 17th August 2016, by the late former astronaut Dr Piers Sellers)

Excerpt from “Space, climate change, and the real meaning of theory”:

…
The facts of climate change are straightforward: there’s been a warming surge over the past hundred years, with a dramatic uptick in this new century. We are seeing the effects in the shrinking of the summer Arctic sea ice and the melting of the Greenland glaciers. That melt, in turn, has been partly responsible for the three-inch rise in sea levels since 1992. The Earth is warming, the ice is melting, and sea level is rising. These are observed facts.

Are we humans the cause of these changes? The answer is an emphatic yes. Many climate-research groups around the world have calculated the various contributions to climate change, including those not related to humans, like volcanic ash. It has been shown repeatedly that it is just not possible to explain the recent warming without factoring in the rise in anthropogenic greenhouse gases. If you left the increase in carbon dioxide out of your calculations, you would see a wobbly but, on average, level temperature trend from the eighteen-nineties to today. But the record—the reality—shows a steeply rising temperature curve which closely matches the observed rise in carbon dioxide. The global community of climate scientists, endorsed by their respective National Academies of Science or equivalents, is solid in attributing the warming to fossil-fuel emissions. Humans are the cause of the accelerating warming. You can bet your life—or, more accurately, your descendants’ lives—on it.
…
Newton’s ideas, and those of his successors, are all-pervasive in our modern culture. When you walk down a street, the buildings you see are concrete and steel molded to match theory; the same is true of bridges and machines. We don’t build different designs of buildings, wait to see if they fall down, and then try another design anymore. Engineering theory, based on Newton’s work, is so accepted and reliable that we can get it right the first time, almost every time. The theory of aerodynamics is another perfect example: the Boeing 747 jumbo-jet prototype flew the first time it took to the air—that’s confidence for you. So every time you get on a commercial aircraft, you are entrusting your life to a set of equations, albeit supported by a lot of experimental observations. A jetliner is just aluminum wrapped around a theory.

Climate models are made out of theory. They are huge assemblies of equations that describe how sunlight warms the Earth, and how that absorbed energy influences the motion of the winds and oceans, the formation and dissipation of clouds, the melting of ice sheets, and many other things besides. These equations are all turned into computer code, linked to one another, and loaded into a supercomputer, where they calculate the time-evolution of the Earth system, typically in time steps of a few minutes. On time scales of a few days, we use these models for weather prediction (which works very well these days), and, on longer time scales, we can explore how the climate of the next few decades will develop depending on how much carbon dioxide we release into the atmosphere. There are three items of good news about this modelling enterprise: one, we can check on how well the models perform against the historical record, including the satellite data archive; two, we can calculate the uncertainty into the predictions; and three, there are more than twenty of these models worldwide, so we can use them as checks on one another. The conclusions drawn from a careful study of thousands of model runs are clear: the Earth is rapidly warming, and fossil-fuel burning is the principal driver.

But theories are abstract, after all, so it’s easy for people to get tricked into thinking that because something is based on theory, it could very likely be wrong or is debatable in the same way that a social issue is debatable. This is incorrect. Almost all the accepted theories that we use in the physical and biological sciences are not open to different interpretations depending on someone’s opinion, internal beliefs, gut feelings, or lobbying. In the science world, two and two make four. To change or modify a theory, as Einstein’s theories modified Newton’s, takes tremendous effort and a huge weight of experimental evidence.
…

Dr Piers Sellers’ space flight experience.

Dr Piers Sellers in Greenland:

We need more knowledge in and of Space and Earth, not more force.

Posted in American Association for the Advancement of Science, Anthropocene, climate change, climate data, climate education, climate models, Eaarth, Earth Day, environment, global warming, Hyper Anthropocene, NASA, oceanic eddies, Piers Sellers, Principles of Planetary Climate | Leave a comment

“What follows is not a set of predictions of what will happen …”

Posted in adaptation, Anthropocene, anti-intellectualism, Carbon Worshipers, climate change, global blinding, global warming | 2 Comments

Love means nothing, without understanding, and action

Can’t get enough of this video. It may be a corporate Ørsted promotion, but it is beautiful.

And I continue to believe that, as the original sense of the corporation, or benefit society, suggests, and contrary to popular (U.S.) progressive belief, corporations can be agencies for good.

The place we all call home needs love, but love means nothing — without action.

Posted in Aldo Leopold, American Solar Energy Society, American Statistical Association, Ørsted, Bloomberg, Bloomberg New Energy Finance, bridge to somewhere, Canettes Blues Band, climate, climate business, climate economics, corporate citizenship, corporate litigation on damage from fossil fuel emissions, corporate responsibility, corporate supply chains, corporations, destructive economic development, distributed generation, economics, emergent organization, fossil fuel divestment, global warming, green tech, Green Tech Media, Humans have a lot to answer for, Hyper Anthropocene, investing, investment in wind and solar energy, investments, Joseph Schumpeter, liberal climate deniers, reasonableness, Sankey diagram, solar democracy, solar domination, Spaceship Earth, stranded assets, the energy of the people, the green century, the value of financial assets, wind energy, wind power, wishful environmentalism | Leave a comment

big generation day … first complete with WSS II online

Our additional 3.45 kW solar PV is up and generating today, collecting substantial numbers of photons (500 kWh) by 0800 ET.

(Click on figure to see a larger image and use browser Back Button to return to blog.)

(Click on figure to see a larger image and use browser Back Button to return to blog.)

It’s a good thing! Today is a peak day! At present we are using 1.3 kW for cooling and general house stuff, and generating 9.5 kW, so we are pushing a net 8.2 kW to help our non-solar neighbors cool their homes without Eversource needing to supply them.

See here and here to follow these data series yourself.

Posted in electricity, engineering, RevoluSun, solar democracy, solar domination, solar energy, solar power, SolarPV.tv, SunPower, sustainability, the energy of the people, the green century, the value of financial assets, Tony Seba, utility company death spiral | Leave a comment

+10 PV panels! Now at 13.45 kW nameplate capacity

In addition to our 10.0 kW PV generation, we just added an additional 3.45 kW, via 10 additional SunPower X21-345 panels. The new panels are tied to a separate SolarEdge inverter, an SE3800H-US. (The older inverter is an SE10000A-US. The old panels are also X21-345s.) These were all designed and installed by RevoluSun of Burlington, MA. (They are great.)

Here are some photographs of the panels, with the new ones marked:

We also got consumption monitoring with the new inverter, although that’s not yet set up in my software.

Overall, the layout now looks like:

Inside, the inverters look like:

This additional increment is intended to offset our air source heat pump hot water heater and especially the charging of our Chevy Volt.

Posted in resiliency, RevoluSun, solar democracy, solar domination, solar energy, solar power, SolarPV.tv, SunPower, sustainability, the right to know, the value of financial assets, Tony Seba, utility company death spiral, Westwood, zero carbon | 2 Comments

typical streamflow series, and immersive Science

What is that impulse in streamflow actually about? How typical is it? How frequently has it happened in the past? How often will it recur? What are its implications for floodplain planning?

There’s been a bit of discussion, here and there, about what we should or can expect the electorate of a representative democracy to know about Science. Surely there’s “school learnin’”, and that’s valuable, but some of the most meaningful interactions with Science come from an individual’s experience of it, in vivo if you will. I recently described, in a comment at a blog, how certain experiments as an undergraduate Physics student meant an awful lot to me, even if I had mastered the theory in a book. These were emotional connections. Sure, I had been prepared for this, and had already exhibited some kind of emotional commitment in my desire to remain up late at night, out in winter, in the cold, in order to observe various stellar things as part of a local Astronomy club. It’s hand in hand: you can’t do decent amateur Astronomy in New England except in frigid winter, because of the Summer humidity and the associated skyglow from places like Providence and Boston. I’m sure it’s worse now. Going deep north in New England is a help, and I’ve sometimes wondered why people there haven’t tried to capitalize on that.

But, I digress.

There’s something about this, whether it’s streamflow measurements, or taking your own weather measurements at home, or amateur Astronomy which bonds a body to the phenomena and to the process of knowing.

The Web and Internet interactions, despite offering superior measurement technology, never quite replace this experience. There is, I think, something to be argued for this kind of immersive experience in Science.

Posted in American Association for the Advancement of Science, science

Leaders (say they) Don’t Know About Lags

Maybe they don’t. Most people don’t. On the other hand, there’s little more to them than understanding skeet: realizing that aiming where the clay pigeon is now is a useless tactic for hitting it. Aiming where the pigeon will be is more useful.

Consider one Massachusetts state Representative, Thomas Golden, present House Chair of the Telecommunications, Energy and Utilities Committee. During an SOS Climate Disaster: Vigil & Rally on 26th July 2018, he addressed the assembled and the public, saying “The Climate Crisis is not an emergency, only a ‘situation.’”

Perhaps he believes that.

Then again, perhaps he is being disingenuous and simply saying, instead, This is not my problem. This is someone else’s problem. I can only respond to what I see. Why might that be disingenuous? Because any child knows there are optimal points to push a swing to kick it higher. If a forest fire is going to consume an island, it’s not effective or even useful to wait until the island is half consumed to act.

Even a rudimentary understanding of causation implies that delays are important, and that they are a consequence of it.
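A minimal sketch of the point, in R, with purely illustrative numbers: in a simple first-order lagged system, the response keeps building long after the cause is applied, and lingers after the cause is removed, so acting only when the response already looks alarming means acting far too late.

tau <- 20                           # response time constant, in time steps
x <- c(rep(1, 50), rep(0, 50))      # a cause switched on for 50 steps, then off
y <- numeric(length(x))             # the lagged response
for (t in 1:(length(x) - 1)) {
  y[t + 1] <- y[t] + (x[t] - y[t]) / tau
}
round(y[c(10, 50, 60, 100)], 2)     # builds slowly while forced; still elevated after the cause stops at t = 50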

The University of Massachusetts, Boston, and its Sustainable Solutions Laboratory have been commissioned to tell leadership what they see. Leadership is apparently not listening, choosing to go with the short-term, easy road of insubstantial, pretended actions.

This is not anything new. If a leader doesn’t know how to deal with it, they should ask someone who does, unless, of course, Representative Golden, you really want to be a Champion of Ignorance.

“Praised Be the Flood.”

Addendum

(Hat tip to Tamino.)

Posted in adaptation, American Association for the Advancement of Science, American Meteorological Association, AMETSOC, anti-intellectualism, anti-science, atmosphere, attribution, Boston Ethical Society, carbon dioxide, Carbon Worshipers, climate change, climate disruption, climate economics, climate education, Commonwealth of Massachusetts, Cult of Carbon, dynamical systems, environment, ethics, evidence, forecasting, geophysics, global warming, Massachusetts Interfaith Coalition for Climate Action, moral leadership, Our Children's Trust, Principles of Planetary Climate, rights of the inhabitants of the Commonwealth, the right to be and act stupid, the tragedy of our present civilization, tragedy of the horizon, unreason, UU, UU Humanists

“The Unchained Goddess”, Bell Science Hour, 1958

A tad nostalgic, for the days when humanity could have stopped a bunch of harm from climate change. Although the entire show is worthwhile from a STEM perspective, only the last seven minutes are pertinent to climate change, and they address it in a flippant kind of way. I’d say this is not a definitive presentation.

Still, collectively, maybe we deserve what’s going to happen, even if those most responsible are likely to dodge the harm of the early repercussions. But, eventually, they or their children or their grandchildren will feel the full wrath of their poor choices.

The physical, material world does not read or believe in that misleading, pathetic thing, called the Bible.

Yeah, I’m an atheist. Better (or worse?), a physical materialist and a humanist. And all I’ve learned and been taught about Christianity suggests to me it is a fundamental charade. Buddhism is better, but it has its problems.

But I mostly abhor the sense of domination people have had over other species, something I think is at root representative of all that is ill with humanity, in its ignorance of its tiny place in the Universe. If you want to hear more of that perspective, I recommend Carl Sagan’s dramatic, narrated video.

Posted in atheism, climate change, Cosmos, global warming

“On Records”

This is a reblog from Eli Rabett of the post On Records, with additional comments and material from the author-moderator of this blog, 667-per-cm.net:

A distinguishing mark of a new record in a time series is that it exceeds all previous values; another is that the first value in a time series is always a record.

Given a stationary situation with nothing except chance, aka natural variation, the number of new records should decline to zero, or pretty close, as the series extends in time.
…
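That declining-records claim is easy to check by simulation (a minimal sketch of mine, not part of the reblogged material; for an i.i.d. series the chance that the n-th value is a record is 1/n, so the expected record count grows only like the harmonic number):

set.seed(42)
n <- 1000
x <- rnorm(n)                  # a stationary series: nothing but chance
records <- (x == cummax(x))    # TRUE wherever a value exceeds all previous values
sum(records)                   # observed number of records
sum(1 / (1:n))                 # expected number of records, about log(n) + 0.577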


(Click on figure to get a larger image, and use browser Back Button to return to blog.)

The above is from:

S. I. Seneviratne, R. Wartenburger, B. P. Guillod, A. L. Hirsch, M. M. Vogel, V. Brovkin, D. P. van Vuuren, N. Schaller, L. Boysen, K. V. Calvin, J. Doelman, P. Greve, P. Havlik, F. Humpenöder, T. Krisztin, D. Mitchell, A. Popp, K. Riahi, J. Rogelj, C.-F. Schleussner, J. Sillmann, E. Stehfest, “Climate extremes, land–climate feedbacks and land-use forcing at 1.5°C”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2nd April 2018, DOI: 10.1098/rsta.2016.0450.

While Seneviratne et al. approach the estimates with a deeper and far more nuanced analysis, it’s long been known that as the globe warms, land will warm faster than the oceans:

And the travesty and tragedy are that we’ve known about this for a damn long time and have done nothing:

That’s from 1958.

See also:

H. D. Matthews, K. Zickfeld, R. Knutti, M. R. Allen, “Focus on cumulative emissions, global carbon budgets and the implications for climate mitigation targets”, Environmental Research Letters, January 2018, 13(1).


Welcome to your choice:

Posted in Anthropocene, being carbon dioxide, carbon dioxide, Carbon Worshipers, civilization, climate change, climate disruption, Cult of Carbon, ecology, Eli Rabett, ethics, global warming, greenhouse gases, Humans have a lot to answer for, Hyper Anthropocene, liberal climate deniers, Massachusetts Interfaith Coalition for Climate Action, meteorology, Our Children's Trust, planning, pollution, quantitative ecology, radiative forcing, rights of the inhabitants of the Commonwealth, Spaceship Earth, temporal myopia, the right to be and act stupid, the right to know, the tragedy of our present civilization, the value of financial assets, tragedy of the horizon, Victor Brovkin, wishful environmentalism

CO2 efficiency as a policy concept

I listened to the following talk, featuring Professor Kevin Anderson, who I have mentioned many times here before:

While I continue to be hugely supportive of distributed PV as an energetic and democratic solution, as inspired by John Farrell at ILSR, there is something to be said, in my thinking, for migrating a version of electrical energy efficiency to the CO2 realm. What does that mean?

What it means is choosing not to use a kWh of electrical energy, or a kJ (kilojoule), when one could. It also means being sensitive to energy intensity. Sure, whether I drive to the store up the street or walk to it, the point is that I get from here to there and back. But the overhead of using the automobile, in contrast with walking, using a bicycle, or taking the electric streetcar that runs down the route, is so much higher than any of those that I, as a responsible member of a climate-sensitive society, need to properly weigh the value of my personal time against polluting The Commons.
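A back-of-envelope sketch of that comparison, in R, using round numbers I am assuming for illustration (roughly 8 L of gasoline per 100 km for a typical car, and roughly 2.3 kg of CO2 released per litre burned):

km <- 2                                   # a short round trip to the store
litres.per.km <- 8 / 100                  # assumed typical gasoline consumption
kg.co2.per.litre <- 2.3                   # approximate CO2 from burning a litre of gasoline
kg.co2.car <- km * litres.per.km * kg.co2.per.litre
kg.co2.walk <- 0                          # walking or cycling: essentially no direct emissions
c(car = kg.co2.car, walk.or.bike = kg.co2.walk)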

Unfortunately, the technological Zeitgeist is to fix all these problems by slathering on additional techno-fixes, and justifying them with techno-casuistry, so that lifestyles are preserved. I’m suggesting that a full systems analysis shows little beats simply choosing to slow down and not use the energy to save time or feed a personal impatience in the first place.

This is worth a look, both for practical and personal reasons. I, for instance, as a matter of personal discipline, now cross streets at corners only when the crossing signals permit me to do so.

Posted in Bloomberg New Energy Finance, Boston Ethical Society, climate disruption, ILSR, John Farrell, Kevin Anderson, lifestyle changes, local self reliance, moral leadership, naturalism, personal discipline, Spaceship Earth, Unitarian Universalism, UU, UU Humanists

One of the most interesting things about the MIP ensembles is that the mean of all the models generally has higher skill than any individual model.

We hold these truths to be self-evident, that all models are created equal, that they are endowed by their Creators with certain unalienable Rights, that among these are a DOI, Runability and Inclusion in the CMIP ensemble mean. Well, not quite. But it is Independence Day in the US, and coincidentally there is a new discussion paper (Abramowitz et al., DOI: 10.5194/esd-2018-51) (direct link) on model independence just posted at Earth System Dynamics. …

Source: Model Independence Day
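A minimal sketch (mine, not from the linked post) of why an ensemble mean can beat its members: if each model’s error is independent noise around the truth, averaging shrinks the error roughly by the square root of the number of models.

set.seed(1)
truth <- sin(seq(0, 2 * pi, length.out = 100))          # stand-in for the observed record
models <- replicate(20, truth + rnorm(100, sd = 0.5))   # twenty "models" with independent errors
rmse <- function(x) sqrt(mean((x - truth)^2))
mean(apply(models, 2, rmse))   # typical skill (RMSE) of an individual model, about 0.5
rmse(rowMeans(models))         # RMSE of the ensemble mean, roughly 0.5 / sqrt(20)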


These are ethical “AI Principles” from Google, but they might as well be ‘technological principles’

This is entirely adapted from this link, courtesy of Google and Alphabet.

Objectives

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles. (See important additional explanation at the primary source.)

Verboten

  1. Technologies that cause or are likely to cause overall harm.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Google does qualify:

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

It’s clear this is dancing on a fence, but that uncomfortable position is inevitable in any optimization problem.

It’s curious, but these wouldn’t be bad principles for governments and polities to follow, either.

Posted in American Statistical Association, artificial intelligence, basic research, Bayesian, Boston Ethical Society, complex systems, computation, corporate citizenship, corporate responsibility, deep recurrent neural networks, emergent organization, ethical ideals, ethics, extended producer responsibility, friends and colleagues, Google, Google Pixel 2, humanism, investments, machine learning, mathematics, moral leadership, natural philosophy, politics, risk, science, secularism, technology, The Demon Haunted World, the right to know, Unitarian Universalism, UU, UU Humanists

National Academies Statement on Harmful Consequences of Separating Families at the U.S. Border

(Updated.)

“We urge the U.S. Department of Homeland Security to immediately stop separating migrant children from their families, based on the body of scientific evidence that underscores the potential for lifelong, harmful consequences for these children and based on human rights considerations.

“Reports from the National Academies of Sciences, Engineering, and Medicine contain an extensive body of evidence on the factors that affect the welfare of children – evidence that points to the danger of current immigration enforcement actions that separate children from their parents. Research indicates that these family separations jeopardize the short- and long-term health and well-being of the children involved. In addition, the Committee on Human Rights of the National Academies, which has a long history of addressing issues at the intersection of human rights, science, and health, stresses that the practice of separating parents from their children at the border is inconsistent with U.S. obligations under the International Covenant on Civil and Political Rights.

“Parents’ impact on their children’s well-being may never be greater than during the earliest years of life, when a child’s brain is developing rapidly and when nearly all of her or his experiences are shaped by parents and the family environment (NASEM, 2016, p. 1). Young children who are separated from their primary caregivers may potentially suffer mental health disorders and other adverse outcomes over the course of their lives (NASEM, 2016, p. 21-22). Child development involves complex interactions among genetic, biological, psychological, and social processes (NRC and IOM, 2009, p. 74), and a disruption in any of these – such as family disruption – hinders healthy development and increases the risk for future disorders (NRC and IOM, 2009, p.102-104). Young children are capable of deep and lasting sadness, grief, and disorganization in response to trauma and loss (NRC and IOM, 2000, p. 387). Indeed, most mental, emotional, and behavioral disorders have their roots in childhood and adolescence (NRC and IOM, 2009, p. 1), and childhood trauma has emerged as a strong risk factor for later suicidal behavior (IOM, 2002, p. 3).

“Decades of research have demonstrated that the parent-child relationship and the family environment are at the foundation of children’s well-being and healthy development. We call upon the Department of Homeland Security to stop family separations immediately based on this evidence.”

Marcia McNutt
President, National Academy of Sciences

C. D. Mote, Jr.
President, National Academy of Engineering

Victor J. Dzau
President, National Academy of Medicine

Source: Statement on Harmful Consequences of Separating Families at the U.S. Border

Posted in an ignorant American public, an uncaring American public, anti-intellectualism, anti-science, children as political casualties, compassion, Donald Trump, humanism, Humans have a lot to answer for, immigration, military inferiority, moral leadership, sadism, the right to be and act stupid, the right to know, torture, Unitarian Universalism, United States Government

Professor Tony Seba, of late

I love it.

Professor Tony Seba, Stanford, 1 week ago.

It means anyone who continues to invest in or support the fossil-fuel hegemony will be fundamentally disappointed by the markets. And it serves them right. Whether by efficiency or by momentum, there is no beating energy that has a marginal cost of zero.

As someone once said in a movie, to those who would oppose solar energy: “Go ahead. Make my day.”

Gasoline-powered autos won’t be sidelined because gasoline costs too much; they’ll be sidelined because gasoline costs too little. No one will want it, so service stations won’t be able to cover their overheads, and they’ll close: it won’t be available, because no one will care about it any more.

And, as for the homes and businesses who continue to buy into the “presently wise choice” of natural gas? Hah! What happens when they can no longer get it, their pipeline companies shutting down flows?

It’s a beautiful thing.

Oh, sure, they’ll try to socialize their losses. Hopefully the electorate isn’t foolish enough to accept that.

But, then, they are the electorate and are highly gullible.

Posted in American Statistical Association, anti-intellectualism, anti-science, Bloomberg New Energy Finance, BNEF, bridge to nowhere, Buckminster Fuller, Carbon Tax, Carbon Worshipers, causation, central banks, children as political casualties, citizen science, citizenship, clean disruption, climate, climate business, climate change, climate data, climate disruption, climate economics, Climate Lab Book, Climate Science Legal Defense Fund, coastal communities, coastal investment risks, coasts, Commonwealth of Massachusetts, Constitution of the Commonwealth of Massachusetts, consumption, corporate responsibility, corporations, corruption, critical slowing down, ctDNA, Cult of Carbon, David Archer, David Spiegelhalter, decentralized electric power generation

This flooding can’t be stopped. What about the rest?

Tamino is writing about this subject, too. That makes complete sense, as it is the biggest geophysical and environmental story out there right now. I’ve included an update at this post’s end discussing the possible economic impacts.

It’s been known for a couple of years that the West Antarctic ice sheet was destabilizing and that this would result in appreciable sea-level rise. What wasn’t known was how widespread this was in Antarctica, and how fast it might proceed.

Well, we’re beginning to find out, and the news isn’t good.

Data below from “Mass balance of the Antarctic ice sheet from 1992 to 2017”, The IMBIE Team, Nature, 2018, 558, 219-222.

Abstract

The Antarctic Ice Sheet is an important indicator of climate change and driver of sea-level rise. Here we combine satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that it lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, which corresponds to an increase in mean sea level of 7.6 ± 3.9 millimetres (errors are one standard deviation). Over this period, ocean-driven melting has caused rates of ice loss from West Antarctica to increase from 53 ± 29 billion to 159 ± 26 billion tonnes per year; ice-shelf collapse has increased the rate of ice loss from the Antarctic Peninsula from 7 ± 13 billion to 33 ± 16 billion tonnes per year. We find large variations in and among model estimates of surface mass balance and glacial isostatic adjustment for East Antarctica, with its average rate of mass gain over the period 1992–2017 (5 ± 46 billion tonnes per year) being the least certain.
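As a rough cross-check of those numbers (a sketch assuming the commonly used conversion of roughly 360 Gt of land ice per millimetre of global mean sea level):

ice.loss.Gt <- 2720        # Antarctic ice loss 1992-2017, from the abstract above
Gt.per.mm <- 360           # approximate Gt of ice per mm of global mean sea-level rise
ice.loss.Gt / Gt.per.mm    # about 7.6 mm, matching the quoted 7.6 +/- 3.9 mm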

For every centimeter [of sea-level rise] from West Antarctica, Boston feels one and a quarter centimeters. And that extends down the East Coast.

Professor Robert M DeConto, University of Massachusetts, Amherst, Geosciences, as quoted in The Atlantic, 13th June 2018, “After decades of losing ice, Antarctica is now hemorrhaging it”.

See also

S. Kruel, “The impacts of sea-level rise on tidal flooding in Boston, Massachusetts”, Journal of Coastal Research, 2016, 32(6), 1302-1309.

which I have already written about here.

It is important to understand that it is too late to stop this part of the effects of climate change: Boston and coasts will flood. We can hope that by the world cutting back on emissions it might slow. But reversing it is out of the question. And, to the degree the world is not keeping on schedule, even the slowing looks out of reach.

But, seriously, it’s unrealistic to think anything else. We have important groups of people (like those who elect Congress and the President) who don’t consider these risks serious, even doubt them, or think that Archangel Michael will come riding down on a big white horse and save us collectively because of Manifest Destiny or some other pious rubbish.

Unfortunately, we did not fund the research to ascertain how fast this could go until late, and we’ve done essentially nothing so far on a serious scale to try to stop it, setting the impossible condition of having to maintain an American economic boom. That might prove to be the most expensive economic expansion the world has ever seen.

Update, 2018-06-17

Beyond the geophysical impact of impending ice sheet collapse, there’s the economic one: if insurance prices don’t head upwards quickly, and real estate prices for expensive homes on the coasts don’t come under downward pressure, the only reason can be that owners expect the U.S. federal government to continue to fund their rebuilding through the Biggert-Waters Act (2012), the Homeowner Flood Insurance Affordability Act (2014), the Stafford Disaster Relief and Emergency Assistance Act (1988), the Disaster Mitigation Act (2000), and the Pets Evacuation and Transportation Standards Act (2006). The wisdom of continuing these in the face of increasing storm costs is being questioned with greater ferocity. (See also.) It’s not difficult to see why:

And, courtesy of the New York Times:

And, courtesy of NOAA:

While some critics (e.g., Pielke, et al, “Normalized hurricane damage in the United States: 1900-2005”) have claimed the increase in losses is because there is more expensive property being damaged by otherwise ordinary storms, correcting losses by the Consumer Price Index (CPI) controls for some of that, and the rates of inflation in damage exceed the appreciation rates of even the most rapidly appreciating real estate. Moreover, if that’s the reason for the losses, nothing is being done to discourage the practice. Sure, inflation might not be controllable, but (a) it has been very low in recent years, and (b) the CPI-adjusted values from NOAA show that’s not the explanation. The amount lost to disasters is climbing, and the claim that rising property values explain it all is disingenuous at best.
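For what it’s worth, the CPI correction mentioned above is just a restatement of each year’s loss in a common year’s dollars; a sketch with hypothetical index values (the real ones would be looked up from published CPI tables):

loss.nominal <- 10e9        # hypothetical loss, in dollars of the event year
cpi.event.year <- 172.2     # hypothetical CPI for the event year
cpi.reference <- 251.1      # hypothetical CPI for the reference year
loss.nominal * (cpi.reference / cpi.event.year)   # loss restated in reference-year dollars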

At some point the federal government will stop, or significantly limit and curtail, bailouts of rebuilds, like those of the affluent homes on Alabama’s Dauphin Island. At that point the value of coastal real estate will crest, and it may well plummet: a classic Minsky moment. It would be inadvisable to own coastal real estate when that happens, particularly in towns like Falmouth, Massachusetts. See an article from Forbes which reports climate change is already depressing coastal real estate values by 7%.

Posted in adaptation, Antarctica, Anthropocene, bridge to nowhere, Carbon Worshipers, citizenship, civilization, climate, climate change, climate disruption, climate economics, climate justice, coastal communities, coastal investment risks, coasts, Commonwealth of Massachusetts, corporate litigation on damage from fossil fuel emissions, corporate responsibility, Cult of Carbon, environment, Eric Rignot, flooding, floods, glaciers, glaciology, global warming, greenhouse gases, hydrology, Hyper Anthropocene, ice sheet dynamics, icesheets, investing, investments, John Englander, living shorelines, Massachusetts, New England, real estate values, rights of the inhabitants of the Commonwealth, Robert M DeConto, Scituate, sea level rise, seawalls, shorelines, Stefan Rahmstorf, the right to be and act stupid, the right to know, the tragedy of our present civilization, wishful environmentalism, ``The tide is risin'/And so are we''

“Will climate change bring benefits from reduced cold-related mortality? Insights from the latest epidemiological research”

From RealClimate, and referring to an article in The Lancet:

Guest post by Veronika Huber Climate skeptics sometimes like to claim that although global warming will lead to more deaths from heat, it will overall save lives due to fewer deaths from cold. But is this true? Epidemiological studies suggest the opposite. Mortality statistics generally show a distinct seasonality. More people die in the colder winter months than in the warmer summer months. In European countries, for example, the difference between the average number of …

Source: Will climate change bring benefits from reduced cold-related mortality? Insights from the latest epidemiological research

Posted in Anthropocene, climate, climate change, climate disruption, epidemiology, evidence, global warming

When linear systems can’t be solved by linear means

Linear systems of equations and their solution form the cornerstone of much Engineering and Science. Linear algebra is a paragon of Mathematics in the sense that its theory is what mathematicians try to emulate when they develop theory for many other less neat subjects. I think Linear Algebra ought to be required mathematics for any scientist or engineer. (For example, I think Quantum Mechanics makes a lot more sense when taught in terms of inner products than just some magic which drops from the sky.) Unfortunately, in many schools, it is not. You can learn it online, and Professor Gilbert Strang’s lectures and books are the best. (I actually prefer the second edition of his Linear Algebra and Its Applications, but I confess I haven’t looked at the fourth edition of the text, just the third, and I haven’t looked at his fifth edition of Introduction to Linear Algebra.)

There’s a lot to learn about numerical methods for linear systems, too, and Strang’s Applications teaches a lot that’s very good there, including the SVD, of which Professor Strang writes “it is not nearly as famous as it should be.” I very much agree. You’ll see it used everywhere, from dealing with some of the linear systems I’ll mention below, to support for Principal Components Analysis in Statistics, to singular-spectrum analysis of time series, to Recommender Systems, a keystone algorithm in so-called Machine Learning work.

The study of numerical linear algebra is widespread and taught in several excellent books. My favorites are Golub and Van Loan’s Matrix Computations, Björck’s Numerical Methods for Least Squares Problems, and Trefethen and Bau’s Numerical Linear Algebra. But it’s interesting how fragile these solution methods are, and how quickly one needs to appeal to Calculus directly with but small changes in these problems. That’s what this post is about.

So what am I talking about? I’ll use small systems of linear equations as examples, despite it being entirely possible and even common to work with systems which have thousands or millions of variables and equations. Here’s a basic one:

(1)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]

written for brevity

(2)\,\,\,\mathbf{b} = \mathbf{A} \mathbf{x}

Of course, in any application the equation looks more like:

(3)\,\,\,\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 3 & 4 & 1 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]
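For concreteness, the system in (3) can be entered directly (a small setup sketch matching the matrix and right-hand side shown above):

A <- matrix(c(1, 2, 3,
              2, 1, 4,
              3, 4, 1), nrow = 3, byrow = TRUE)
b <- c(12, 4, 16)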

In R or MATLAB the result is easily obtained. I work and evangelize R, so any computation here will be recorded in it. Doing


solve(A,b)



or


lm(b ~ A + 0)


will produce

(4)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]

It’s also possible to solve several at once, for example, from:

(5)\,\,\,\left[ \begin{array} {cccc} 12 & 20 & 101 & 200 \\ 4 & 11 & -1 & 3 \\ 16 & 99 & 10 & 9 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 3 & 4 & 1 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]

(6)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right], \left[ \begin{array} {c} 15.25 \\ 15.50 \\ -8.75 \end{array} \right], \left[ \begin{array} {c} -73.75 \\ 51.90 \\ 23.65 \end{array} \right], \left[ \begin{array} {c} -146.25 \\ 99.70 \\ 48.95 \end{array} \right]
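In R this is just solve with a matrix right-hand side (a small sketch, with A as in (3)):

A <- matrix(c(1, 2, 3,
              2, 1, 4,
              3, 4, 1), nrow = 3, byrow = TRUE)
B <- matrix(c(12, 20, 101, 200,
               4, 11,  -1,   3,
              16, 99,  10,   9), nrow = 3, byrow = TRUE)
solve(A, B)    # one column of solutions per right-hand side, as in (6)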

And, of course, having an unknown \mathbf{b} but a known \mathbf{x} is direct, just using matrix multiplication:

(7)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 3 & 4 & 1 \end{array} \right] \left[ \begin{array} {c} -73.75 \\ 51.90 \\ 23.65 \end{array} \right]

yielding:

(8)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {c} -101 \\ -1 \\ 10 \end{array} \right]

Linear Algebra gives us sensible ways to interpret inconsistent systems like:

(9)\,\,\,\left[ \begin{array} {c} 12 \\ 4 \\ 16 \\ 23 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 3 & 4 & 1 \\    17 & -2 & 11 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]

by making reasonable assumptions about what the solution to such a system should mean. R via lm(.) gives:

(10)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} 1.46655646 \\ 3.00534079  \\ 0.34193795 \end{array} \right]
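A small sketch of how (10) can be reproduced; both calls below give the least-squares answer to the inconsistent system (9):

A4 <- matrix(c( 1,  2,  3,
                2,  1,  4,
                3,  4,  1,
               17, -2, 11), nrow = 4, byrow = TRUE)
b4 <- c(12, 4, 16, 23)
coef(lm(b4 ~ A4 + 0))    # ordinary least squares, as in the text
qr.solve(A4, b4)         # the same solution via the QR decomposition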

Sets of solutions to things like

(11)\,\,\,\left[ \begin{array} {c} 12 \\ 4 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & 4 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]

can be countenanced and there is even a way which I’ll talk about below for picking out a unique one: the minimum norm solution. This is where the SVD comes in. To learn about all the ways these things can be thought about and done, I recommend:

D. D. Jackson, “Interpretation of inaccurate, insufficient and inconsistent data”, Geophysical Journal International, 1972, 28(2), 97-109.

(That’s an awesome title for a paper, by the way.)
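And here is what the minimum-norm choice looks like for the underdetermined system (11), a small sketch using the Moore-Penrose pseudoinverse from MASS (which is computed via the SVD):

library(MASS)    # for ginv(), the Moore-Penrose pseudoinverse

A2 <- matrix(c(1, 2, 3,
               2, 1, 4), nrow = 2, byrow = TRUE)
b2 <- c(12, 4)
x.mn <- ginv(A2) %*% b2   # the minimum L2-norm solution among all solutions of (11)
x.mn
A2 %*% x.mn               # reproduces b2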

What if there are holes?

Going back to (3), however, suppose instead it looks something like this:

(12)\,\,\,\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] = \left[ \begin{array} {ccc} 1 & 2 & 3 \\ 2 & 1 & a_{23} \\ 3 & 4 & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]

and we don’t know what a_{23} is. Can it be calculated?

Well, it has to be able to be calculated: It’s the only unknown in this system, with the rules of matrix multiplication just being a shorthand for combining things. So, it’s entirely correct to think that the constants could be manipulated algebraically so they all show up on one side of equals, and a_{23} on the other. That’s a lot of algebra, though.
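In this small case, though, the bookkeeping collapses to a single line, since the second row of (12) is the only one involving a_{23}:

4 = 2 \cdot (-3) + 1 \cdot 6 + a_{23} \cdot 1, \quad \text{so } a_{23} = 4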

We might also simply guess that \mathbf{A} was symmetric and so think a_{23} = 4. But what about the following case, (12′)?

(12')\,\,\,\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] = \left[ \begin{array} {ccc} a_{11} & 2 & 3 \\ 2 & 1 & a_{23} \\ a_{31} & a_{23} & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]

Now there are 3 unknowns, a_{11}, a_{23}, and a_{31}. The answer is available in (3), but suppose that wasn’t known?

This problem is one of finding those parameters, searching for them if you like. To search, it helps to have a measure of how far away from the goal one is, that is, some kind of score. (14) is what I propose as a score, obtained by taking (12′) and rewriting it as (13) below:

(13)\,\,\,0 = \left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] - \left[ \begin{array} {ccc} a_{11} & 2 & 3 \\ 2 & 1 & a_{23} \\ a_{31} & a_{23} & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]

(14)\,\,\,\left|\left|\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] - \left[ \begin{array} {ccc} a_{11} & 2 & 3 \\ 2 & 1 & a_{23} \\ a_{31} & a_{23} & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]\right|\right|_{2}

Here ||\mathbf{z}||_{2} denotes the L_{2} norm,

(15)\,\,\,||\mathbf{z}||_{2} = \sqrt{\sum_{i=1}^{n} z_{i}^{2}}.

In other words, ||\mathbf{z}||_{2} is the length of the vector \mathbf{z}. It’s non-negative. Accordingly, making (14) as small as possible means pushing the left and right sides of (12′) towards each other. When (14) is zero the left and right sides are equal.

Now, there are many possible values for a_{11}, a_{23}, and a_{31}. In most applications, considering all flonum values for these is not necessary. Typically, the application suggests a reasonable range for each of them, from a low value to a high value. Let

(\alpha_{11}, \beta_{11})

be the range of values for a_{11},

(\alpha_{23}, \beta_{23})

be the range of values for a_{23}, and

(\alpha_{31}, \beta_{31})

be the range of values for a_{31}, each dictated by the application. If \sigma_{11}, \sigma_{23}, and \sigma_{31} are each randomly but independently chosen from the unit interval, then a particular value of (14) can be expressed

(16)\,\,\,\left|\left|\left[ \begin{array} {c} 12 \\ 4 \\ 16 \end{array} \right] - \left[ \begin{array} {ccc} r(\sigma_{11}, \alpha_{11}, \beta_{11}) & 2 & 3 \\ 2 & 1 & r(\sigma_{23}, \alpha_{23}, \beta_{23}) \\ r(\sigma_{31}, \alpha_{31}, \beta_{31}) & r(\sigma_{23}, \alpha_{23}, \beta_{23}) & 1 \end{array} \right] \left[ \begin{array} {c} -3 \\ 6 \\ 1 \end{array} \right]\right|\right|_{2}

where

(17)\,\,\,r(\sigma, v_{\text{low}}, v_{\text{high}}) \triangleq v_{low}(1 - \sigma) + \sigma v_{\text{high}}

So, this is an optimization problem where what’s wanted is to make (16) as small as possible, searching among triplets of values for a_{11}, a_{23}, and a_{31}. How does that get done? R package nloptr. This is a package from CRAN which does a rich set of numerical nonlinear optimizations, allowing the user to choose the algorithm and other controls, like ranges of search and constraints upon the control parameters.

Another reason these techniques are interesting is that it is intriguing and fun to see how far one can get knowing very little. And when little is known, letting algorithms run for a while to make up for that ignorance doesn’t seem like such a bad trade.

An illustration

In order to illustrate the “I don’t know much” case, I’m opting for:

\alpha_{11} = -2
\beta_{11} = 2
\alpha_{23} = -1
\beta_{23} = 8
\alpha_{31} = -6
\beta_{31} = 6

What a run produces is:

Call:
nloptr(x0 = rep(0.5, 3), eval_f = objective1, lb = rep(0, 3), ub = rep(1, 3), opts = nloptr.options1, alpha.beta = alpha.beta)

Minimization using NLopt version 2.4.2

NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because maxeval (above) was reached. )

Number of Iterations....: 100000
Termination conditions: xtol_rel: 1e-04 maxeval: 1e+05
Number of inequality constraints: 0
Number of equality constraints: 0
Current value of objective function: 0.000734026668840609
Current value of controls: 0.74997066329 0.5556247383 0.75010835335

Y1 resulting estimates for a_{11}, a_{23}, and a_{31} are: 1.00, 4.00, 3

That’s nloptr-speak for reporting on the call, the termination conditions and result. The bottom line in bold tells what was expected, that a_{11} = 1, a_{23} = 4, a_{31} = 3.

What about the code? The pertinent portion is shown below, and all the code is downloadable as a single R script from here. A trace of the execution of that script is also available.


library(nloptr)   # needed below for nloptr()

# Euclidean (L2) norm of a vector
L2norm<- function(x)
{
sqrt( sum(x*x) )
}

# Map sigma in [0, 1] linearly onto [alpha, beta], as in (17)
r<- function(sigma, alpha, beta)
{
stopifnot( (0 <= sigma) && (sigma <= 1) )
stopifnot( alpha < beta )
alpha*(1 - sigma) + beta*sigma
}

# Recall original was:
#
# A<- matrix(c(1, 2, 3, 2, 1, 4, 3, 4, 1), 3, 3, byrow=TRUE)

# Build the candidate matrix of (12'), filling its unknown cells from the sigma values in x via r()
P1.func<- function(x, alpha.beta)
{
stopifnot( is.vector(x) )
stopifnot( 3 == length(x) )
#
sigma11<- x[1]
sigma23<- x[2]
sigma31<- x[3]
alpha11<- alpha.beta[1]
beta11<- alpha.beta[2]
alpha23<- alpha.beta[3]
beta23<- alpha.beta[4]
alpha31<- alpha.beta[5]
beta31<- alpha.beta[6]
#
P1<- matrix( c( r(sigma11,alpha11,beta11), 2, 3,
2, 1, r(sigma23,alpha23,beta23),
r(sigma31,alpha31,beta31), r(sigma23,alpha23,beta23), 1
),
nrow=3, ncol=3, byrow=TRUE )
return(P1)
}

# The score (16): L2 norm of the residual for the candidate sigma values in x
objective1<- function(x, alpha.beta)
{
stopifnot( is.vector(x) )
stopifnot( 3 == length(x) )
b<- matrix(c(12,4,16),3,1)
x.right<- matrix(c(-3,6,1),3,1)
P1<- P1.func(x, alpha.beta)
d<- b - P1 %*% x.right
# L2 norm
return( L2norm(d) )
}

nloptr.options1<- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-6, "print_level"=0, "maxeval"=100000, "population"=1000)

alpha.beta<- c(-2, 2, -1, 8, -6, 6)

Y1<- nloptr(x0=rep(0.5,3),
eval_f=objective1,
lb=rep(0,3), ub=rep(1,3),
opts=nloptr.options1,
alpha.beta=alpha.beta
)

print(Y1)

cat(sprintf("Y1 resulting estimates for a_{11}, a_{23}, and a_{31} are: %.2f, %.2f, %2.f\n",
r(Y1$solution[1], alpha.beta[1], alpha.beta[2]), r(Y1$solution[2], alpha.beta[3], alpha.beta[4]),
r(Y1$solution[3], alpha.beta[5], alpha.beta[6])))

But what is it good for? Case 1: Markov chain transition matrices

Consider again (1):

(1')\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]

This also happens to be the template for a 3-state Markov chain, with its many applications.

The following example is taken from the famous paper by Rabiner, as presented by Resch:

  • L. R. Rabiner, “A tutorial on Hidden Markov Models and selected applications in speech recognition”, Proceedings of the IEEE, February 1989, 77(2), DOI:10.1109/5.18626.
  • B. Resch, “Hidden Markov Models”, notes for the course Computational Intelligence, Graz University of Technology, 2011.
    They begin with the transition diagram:

    which if cast into the form of (1′) and (2) looks like:

    (18)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} 0.8 & 0.05 & 0.15 \\ 0.2 & 0.6  & 0.2 \\ 0.2 & 0.3 & 0.5 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]

    The rows, top-to-bottom, are labeled sunny, rainy, and foggy, as are the columns, left-to-right. Cell (i,j) gives the probability for going from state i to state j. For example, the probability of going from sunny to foggy is 0.15. Here’s a prettier rendition from Resch:

    Resch and Rabiner go on to teach Hidden Markov Models (“HMMs”), where \mathbf{A} is not known and, moreover, the weather is not directly observed. Instead, information about the weather is obtained by observing whether or not a third party takes an umbrella to work. Here, however, suppose the weather is directly known. And suppose \mathbf{A} is known except that nothing is known about what happens after foggy, except when it remains foggy. Symbolically:

    (19)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {ccc} 0.8 & 0.05 & 0.15 \\ 0.2 & 0.6  & 0.2 \\ a_{31} & a_{32} & 0.5 \end{array} \right] \left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]

    Note in (18), or in Resch’s tableau, how the rows each sum to one. This is a characteristic of first-order Markov models: once in a state, the transition has to go somewhere, even if that somewhere is the same state. Transitions can’t just cause the system to disappear, so all the outgoing probabilities need to sum to one. This means, however, that when the unknown transitions out of foggy are introduced, there aren’t two unconstrained parameters, there is only one. Accordingly, rather than introducing a_{32}, I could write 0.5 - a_{31}, since the probability of staying foggy is already fixed at 0.5. As it turns out, in my experience with nloptr, it is often better to specify this constraint explicitly so the optimizer knows about it, rather than building it implicitly into the objective function, even at the price of introducing another parameter and its space to explore.
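
    To make that trade-off concrete, here is a minimal sketch, not from the original script (P2.implicit is a name invented for illustration), of the two ways the foggy row could be handled:

    # Implicit form: search over a_{31} alone and derive a_{32} from the
    # row-sum requirement inside the function that builds the matrix.
    P2.implicit<- function(a.31)
    {
      a.32<- 0.5 - a.31   # the row must sum to one: a_{31} + a_{32} + 0.5 = 1
      matrix( c( 0.8,  0.05, 0.15,
                 0.2,  0.6,  0.2,
                 a.31, a.32, 0.5 ),
              nrow=3, ncol=3, byrow=TRUE )
    }

    # Explicit form (the one used below): search over both a_{31} and a_{32},
    # and hand nloptr the equality constraint directly, via
    #   eval_g_eq = function(x) x[1] + x[2] - 0.5

    The explicit form is what constraint2 implements in the code that follows.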

    The challenge I’ll pose here is somewhat tougher than the one faced by HMMs. The data in hand are not a series of sunny, rainy, or foggy weather records. Instead, because, say, the records were jumbled, all that’s available is a count of how many sunny, rainy, and foggy days there were, together with counts of what the weather was on the days that followed. In particular:

    (20)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} 1020 \\ 301 \\ 155 \end{array} \right]

    meaning that, across the set of day pairs, the first day was sunny 1020 times, rainy 301 times, and foggy 155 times. Statistical spidey sense wonders how many observations are needed to pin down transition probabilities well, but let’s set that aside for now. (At least it’s plausible that if ordering information is given up, more count information might be needed.) And the count of what the weather was on the second days is:

    (21)\,\,\,\left[ \begin{array} {c} b_{1} \\ b_{2} \\ b_{3} \end{array} \right] = \left[ \begin{array} {c} 854 \\ 416 \\ 372 \end{array} \right]

    or 854 sunny days, 416 rainy days, and 372 foggy days.

    Note that, unlike in (16), here in (19) there is no need to pick upper and lower bounds on the unknown values: these are probabilities, so by definition they are confined to the unit interval. But a_{31} + a_{32} + 0.5 = 1 must always hold, so that constraint needs to be stated.

    Here’s the code:

P2.func<- function(x)
{
  # Sunny, Rainy, Foggy
  stopifnot( is.vector(x) )
  stopifnot( 2 == length(x) )
  #
  a.31<- x[1]
  a.32<- x[2]
  #
  P2<- matrix( c( 0.8,  0.05, 0.15,
                  0.2,  0.6,  0.2,
                  a.31, a.32, 0.5
                ),
               nrow=3, ncol=3, byrow=TRUE )
  return(P2)
}

objective2<- function(x)
{
  stopifnot( is.vector(x) )
  stopifnot( 2 == length(x) )
  x.right<- matrix(c(1020, 301, 155), 3, 1)
  b<- matrix(c(854, 416, 372),3,1)
  P2<- P2.func(x)
  d<- b - P2 %*% x.right
  # L2 norm
  return( L2norm(d) )
}

# Equality constraint: a_{31} + a_{32} - 0.5 == 0, so the foggy row sums to one.
constraint2<- function(x)
{
  return( (x[1] + x[2] - 0.5 ))
}

nloptr.options2<- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-4, "print_level"=0, "maxeval"=100000, "population"=1000)

Y2<- nloptr(x0=rep(0.5,2),
            eval_f=objective2,
            eval_g_eq=constraint2,
            lb=rep(0,2), ub=rep(1,2),
            opts=nloptr.options2
           )

print(Y2)

cat(sprintf("Y2 resulting estimates for a_{31}, a_{32} are: %.2f, %.2f\n",
            Y2$solution[1], Y2$solution[2]))

    This run results in:


    Call:
    nloptr(x0 = rep(0.5, 2), eval_f = objective2, lb = rep(0, 2), ub = rep(1, 2), eval_g_eq = constraint2, opts = nloptr.options2)

    Minimization using NLopt version 2.4.2

    NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because maxeval (above) was reached. )

    Number of Iterations....: 100000
    Termination conditions: xtol_rel: 1e-04 maxeval: 1e+05
    Number of inequality constraints: 0
    Number of equality constraints: 1
    Current value of objective function: 0.500013288564363
    Current value of controls: 0.20027284199 0.29972776012

    Y2 resulting estimates for a_{31}, a_{32} are: 0.20, 0.30
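
    A quick sanity check (a sketch, not part of the original script; P2.hat is a name of my choosing) confirms the estimate behaves like a transition matrix and reproduces the observed counts:

    # Rebuild the estimated transition matrix and check it against (20) and (21).
    P2.hat<- P2.func(Y2$solution)
    rowSums(P2.hat)                              # each row should sum to (about) one
    P2.hat %*% matrix(c(1020, 301, 155), 3, 1)   # compare with c(854, 416, 372)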

    Suppose some of the data are missing. In particular, suppose instead:

    (20a)\,\,\,\left[ \begin{array} {c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array} {c} 1020 \\ r(\eta, 155, 1020) \\ 155 \end{array} \right]

    where \eta is on the unit interval and so all that’s known is that x_{2} is between 155 and 1020, that is, bounded by the other two terms in \mathbf{x}.

    Now there is an additional parameter to search, \eta, which is unconstrained apart from lying on the unit interval; a_{31} and a_{32} are still tied by the row-sum constraint. The code for this is:


P3.func<- function(x)
{
  # Sunny, Rainy, Foggy
  stopifnot( is.vector(x) )
  stopifnot( 3 == length(x) )
  #
  a.31<- x[1]
  a.32<- x[2]
  # There's an x[3] but it isn't used in P3.func. See objective3.
  #
  P3<- matrix( c( 0.8,  0.05, 0.15,
                  0.2,  0.6,  0.2,
                  a.31, a.32, 0.5
                ),
               nrow=3, ncol=3, byrow=TRUE )
  return(P3)
}

objective3<- function(x)
{
  stopifnot( is.vector(x) )
  stopifnot( 3 == length(x) )
  x.right<- matrix(c(1020, r(x[3], 155, 1020), 155), 3, 1)
  b<- matrix(c(854, 416, 372),3,1)
  P3<- P3.func(x)
  d<- b - P3 %*% x.right
  # L2 norm
  return( L2norm(d) )
}

constraint3<- function(x)
{
  stopifnot( 3 == length(x) )
  return( (x[1] + x[2] - 0.5 ))
}

nloptr.options3<- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-4, "print_level"=0, "maxeval"=100000, "population"=1000)

Y3<- nloptr(x0=rep(0.5,3),
            eval_f=objective3,
            eval_g_eq=constraint3,
            lb=rep(0,3), ub=rep(1,3),
            opts=nloptr.options3
           )

print(Y3)

cat(sprintf("Y3 resulting estimates for a_{31}, a_{32}, and eta are: %.2f, %.2f, %.2f\n",
            Y3$solution[1], Y3$solution[2], Y3$solution[3]))

    The results are:


    Call:
    nloptr(x0 = rep(0.5, 3), eval_f = objective3, lb = rep(0, 3), ub = rep(1, 3), eval_g_eq = constraint3, opts = nloptr.options3)

    Minimization using NLopt version 2.4.2

    NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because maxeval (above) was reached. )

    Number of Iterations....: 100000
    Termination conditions: xtol_rel: 1e-04 maxeval: 1e+05
    Number of inequality constraints: 0
    Number of equality constraints: 1
    Current value of objective function: 0.639962390444759
    Current value of controls: 0.20055501795 0.29944464945 0.16847867543

    Y3 resulting estimates for a_{31}, a_{32}, and \eta are: 0.20, 0.30, 0.17, with that \eta corresponding to 301

    That 301, compared with the true x_{2} = 301 in (20), is essentially exact.
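
    The corresponding check (a sketch, not part of the original script; x2.hat is a name of my choosing):

    # Implied x_{2} from the estimated eta, and the reconstructed right-hand side.
    x2.hat<- r(Y3$solution[3], 155, 1020)        # about 301
    P3.func(Y3$solution) %*% matrix(c(1020, x2.hat, 155), 3, 1)   # compare with c(854, 416, 372)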

    For an example of where this kind of estimation is done more generally, see:

    But what is it good for? Case 2: Learning prediction matrices

    When systems like (2) arise in cases of statistical regression, the matrix \mathbf{A} is called a prediction or design matrix. The idea is that its columns represent sequences of predictions for the response, represented by the column vector \mathbf{b}, and the purpose of regression is to find the best weights, represented by column vector \mathbf{x}, for predicting the response.

    Consider (2) again, but instead of \mathbf{b} and \mathbf{x} being column vectors, as in (5), they are matrices, \mathbf{B} and \mathbf{X}, respectively. In other words, the situation is that there are lots of (\mathbf{b}_{k}, \mathbf{x}_{k}) pairs available. And then suppose nothing is known about \mathbf{A}, that is, it just contains nine unknown parameters:

    (22)\,\,\,\mathbf{A} = \left[ \begin{array} {ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right]

    There are, of course, questions about how many (\mathbf{b}_{k}, \mathbf{x}_{k}) pairs are needed, in tandem with the choice of the number of iterations (see the maxeval discussion in the Miscellaneous notes below). Here, 8 pairs were used for purposes of illustration.

    (23)\,\,\,\mathbf{X} = \left[\begin{array}{cccccccc}   1356 & 7505 & 4299 & 3419 & 7132 & 1965 & 8365 & 8031 \\   5689 & 8065 & 7001 & 638  & 8977 & 1088 & 3229 & 1016 \\   3777 & 8135 & 3689 & 1993 & 3635 & 9776 & 8967 & 7039   \end{array}   \right]

    and

    (24)\,\,\,\mathbf{B} = \left[\begin{array}{cccccccc}   5215 & 13693 & 7265 &  4217 &  9367 & 10588 & 14372 & 12043 \\  7528 & 17825 & 11024 & 4989 & 14860 &  9447 & 16162 & 13087 \\   6161 & 12798 & 7702 & 3023  &  9551 &  8908 & 11429 &  8734   \end{array}   \right]

    The code for this case is:

objective4<- function(x)
{
  stopifnot( is.vector(x) )
  stopifnot( 9 == length(x) )
  B<- matrix(c(5215, 13693,  7265, 4217,  9367, 10588, 14372, 12043,
               7528, 17825, 11024, 4989, 14860,  9447, 16162, 13087,
               6161, 12798,  7702, 3023,  9551,  8908, 11429,  8734
              ), 3, 8, byrow=TRUE)
  X.right<- matrix(c(1356, 7505, 4299, 3419, 7132, 1965, 8365, 8031,
                     5689, 8065, 7001,  638, 8977, 1088, 3229, 1016,
                     3777, 8135, 3689, 1993, 3635, 9776, 8967, 7039
                    ), 3, 8, byrow=TRUE)
  P4<- matrix(x, nrow=3, ncol=3, byrow=TRUE)
  d<- B - P4 %*% X.right
  # L2 norm for matrix
  return( L2normForMatrix(d, scaling=1000) )
}

nloptr.options4<- list("algorithm"="NLOPT_GN_ISRES", "xtol_rel"=1.0e-6, "print_level"=0, "maxeval"=300000, "population"=1000)

Y4<- nloptr(x0=rep(0.5,9),
            eval_f=objective4,
            lb=rep(0,9), ub=rep(1,9),
            opts=nloptr.options4
           )

print(Y4)

# Note: "\m" is not a legal escape in an R string, so write "mathbf{A}" plainly.
cat("Y4 resulting estimates for mathbf{A}:\n")
print(matrix(Y4$solution, 3, 3, byrow=TRUE))
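
    One helper, L2normForMatrix, is not shown in this excerpt; it is defined in the full downloadable script. A minimal sketch consistent with how it is called above, with the caveat that this is my assumption and the script's actual definition may differ in detail:

    # Assumed sketch of the helper: Frobenius (L2) norm of a residual matrix,
    # divided through by a scaling constant so the objective stays numerically small.
    L2normForMatrix<- function(D, scaling=1)
    {
      sqrt( sum( (D/scaling)^2 ) )
    }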

    The run results are:

    Call:
    nloptr(x0 = rep(0.5, 9), eval_f = objective4, lb = rep(0, 9), ub = rep(1, 9), opts = nloptr.options4)

    Minimization using NLopt version 2.4.2

    NLopt solver status: 5 ( NLOPT_MAXEVAL_REACHED: Optimization stopped because maxeval (above) was reached. )

    Number of Iterations....: 300000
    Termination conditions: xtol_rel: 1e-06 maxeval: 3e+05
    Number of inequality constraints: 0
    Number of equality constraints: 0
    Current value of objective function: 0.0013835300300619
    Current value of controls: 0.66308125177 0.13825982301 0.93439957114 0.92775614187 0.63095968859 0.70967190127 0.3338899268 0.47841968691 0.79082981177

    Y4 resulting estimates for mathbf{A}:
    [,1] [,2] [,3]
    [1,] 0.66308125177 0.13825982301 0.93439957114
    [2,] 0.92775614187 0.63095968859 0.70967190127
    [3,] 0.33388992680 0.47841968691 0.79082981177

    In fact, the held back version of \mathbf{A} used to generate these test data sets was:

    (25)\,\,\,\mathbf{A} = \left[\begin{array}{ccc} 0.663 & 0.138 & 0.934 \\                                                         0.928 & 0.631 & 0.710 \\                                                         0.334 & 0.478 & 0.791  \end{array} \right]

    and that matches the result rather well. So, in a sense, the algorithm has “learned” \mathbf{A} from the 8 data pairs presented.
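
    As a cross-check (a sketch, not part of the original post; the X, B, A.true, and A.ls names are mine), the held-back matrix in (25) reproduces \mathbf{B} from \mathbf{X} up to rounding, and, because all eight pairs are available and the [0,1] bounds are inactive here, an ordinary least-squares solve recovers essentially the same matrix:

    # Cross-check of (25) against the data, plus a classical least-squares solve.
    X<- matrix(c(1356, 7505, 4299, 3419, 7132, 1965, 8365, 8031,
                 5689, 8065, 7001,  638, 8977, 1088, 3229, 1016,
                 3777, 8135, 3689, 1993, 3635, 9776, 8967, 7039), 3, 8, byrow=TRUE)
    B<- matrix(c(5215, 13693,  7265, 4217,  9367, 10588, 14372, 12043,
                 7528, 17825, 11024, 4989, 14860,  9447, 16162, 13087,
                 6161, 12798,  7702, 3023,  9551,  8908, 11429,  8734), 3, 8, byrow=TRUE)
    A.true<- matrix(c(0.663, 0.138, 0.934,
                      0.928, 0.631, 0.710,
                      0.334, 0.478, 0.791), 3, 3, byrow=TRUE)
    max(abs(B - A.true %*% X))    # small relative to B's entries, reflecting rounding of A
    # B = A X transposes to t(X) t(A) = t(B), a standard least-squares problem:
    A.ls<- t( qr.solve(t(X), t(B)) )
    round(A.ls, 3)                # compare with (25) and with the Y4 estimates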


    Miscellaneous notes

    All the runs of nloptr here were done with the following settings. The algorithm is always ISRES. The population parameter was 1000 throughout, and xtol_rel was 1.0e-6 for Y1 and Y4, and 1.0e-4 for Y2 and Y3. maxeval, the number of iterations, varied depending upon the problem: for Y1, Y2, Y3, and Y4 it was 100000, 100000, 100000, and 300000, respectively. In all instances, the appropriate optimization controls are given by the nloptr.optionsn variable, where n \in \{1,2,3,4\}.

    Per the description, ISRES, which is an acronym for Improved Stochastic Ranking Evolution Strategy:

    The evolution strategy is based on a combination of a mutation rule (with a log-normal step-size update and exponential smoothing) and differential variation (a Nelder–Mead-like update rule). The fitness ranking is simply via the objective function for problems without nonlinear constraints, but when nonlinear constraints are included the stochastic ranking proposed by Runarsson and Yao is employed. The population size for ISRES defaults to 20×(n+1) in n dimensions, but this can be changed with the nlopt_set_population function.

    This method supports arbitrary nonlinear inequality and equality constraints in addition to the bound constraints, and is specified within NLopt as NLOPT_GN_ISRES.

    Further notes are available in:

    • T. P. Runarsson, X. Yao, “Search biases in constrained evolutionary optimization”, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2005, 35(2), 233-243.
    • T. P. Runarsson, X. Yao, “Stochastic ranking for constrained evolutionary optimization,” IEEE Transactions on Evolutionary Computation, 2000, 4(3), 284-294.
Posted in Calculus, dynamic linear models, mathematics, maths, nloptr, numerical algorithms, numerical analysis, numerical linear algebra, numerics, SVD

censorship isn’t tolerated here, so …

Editorial cartoonist Rob Rogers’ recent editorial cartoons have been deleted from the Pittsburgh Post-Gazette. Accordingly …

Posted in censorship, humor, satire

Aldo Leopold

We end, I think, at what might be called the standard paradox of the twentieth century: our tools are better than we are, and grow better faster than we do. They suffice to crack the atom, to command the tides. But they do not suffice for the oldest task in human history: to live on a piece of land without spoiling it.

From Aldo Leopold, The River of the Mother of God and other essays, University of Wisconsin Press, 1991: 254.

From a modern perspective, Leopold, although insightful and having contributed enormously to the development of ecological ethics and sensibilities, also claimed:

Examine each question in terms of what is ethically and aesthetically right, as well as what is economically expedient. A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise

That’s from his A Sand County Almanac (page 262). Read literally, it suggests that biotic communities are capable of stability in the human meaning of the word. But Leopold introduced notions like the trophic cascade, which is biological dynamics at its most essential, and so, instead of biotic communities, he should be read as meaning biocoenosis. Accordingly, oscillation in species abundances, and even replacement of one species by another, as in forest succession or even invasion, would be considered stable. An outline of a modern view is available here.

To his great and practical credit, Leopold also struggled to reconcile Ecology and Economics.

Posted in Aldo Leopold, dynamical systems, ecological services, Ecological Society of America, ecology, Ecology Action, fragmentation of ecosystems, Lotka-Volterra systems, population biology, population dynamics

The elephant in the room: a case for producer responsibility

This is a guest post by Claire Galkowski, Executive Director, South Shore Recycling Cooperative.

With so much focus on the recycling crisis, we tend to overlook the root cause of the problem: the glut of short-lived consumer products and packaging. Rather than looking for new places to dispose of it, it is imperative that we look at where it is coming from, and stem the flow. Mining, harvesting, processing and transport are where the biggest environmental footprints land.

In the current system, manufacturers who profit from the sale of their wares have little incentive to make durable products or minimal, easily recycled packaging, or to incorporate recycled feedstock in their packaging. Thankfully, a few corporations such as Unilever and P&G are stepping up. Many more need a nudge to follow suit. Nor are consumers incentivized to reduce their use and disposal of unnecessary “stuff”. The proliferation of convenient single-use products and unrecyclable packaging is clogging our waterways, contaminating our recycling plants and filling our landfills.

Add to that the diminishing disposal capacity in Massachusetts as most of our remaining landfills face closure within the decade, and we are facing a day when the massive amount of stuff that we blithely buy, use once and toss will have no place to go. Consumers who pay for their trash by the bag have some skin in the game to reduce their disposal footprint. While this may encourage the use of less single use “stuff”, this can also result in “wishful recycling”, which is clearly hurting our recycling industry, and is one cause of China’s embargo on our recyclables.

Producers of non-bottle-bill products are selling us millions of tons of products for billions of dollars. Most will be disposed of within 6 months. Packaging alone accounts for about 30% of our waste, and about 60% of our recycling stream. Once products and packaging leave the warehouse, producers are free of responsibility for what happens to them. A few exceptions are carbonated beverages that are redeemed for deposit, rechargeable batteries and mercury thermostats that are recycled through manufacturer-sponsored programs, which are good examples of product stewardship (*). Municipalities, haulers and recycling processors are left holding the plastic bag, the dressing-coated take-out container, the plastic-embedded paper cup, and the glass bottle that currently has no local recycling market.

It’s time for that to change. We need the packaging industry to partner with those of us that manage their discards to help solve this massive problem.

There is a bill in Massachusetts House Ways and Means, H447, An act to reduce packaging waste, that assigns a fee to packaging sold in Massachusetts. The fee is based on the recyclability, recycled content, and cost to manage at end of life. It provides an incentive for more lean and thoughtful packaging design, and to create domestic markets for our recyclables. The proceeds provide funding for improved recycling infrastructure development.

With help from MassDEP, the SSRC and many municipalities are working hard to adjust the habits of our residents, an uphill climb. Recycling companies are struggling to navigate this massive market contraction, and wondering if they can continue to operate until viable domestic outlets are established. Municipal recycling costs are skyrocketing, straining budgets with no clear end in sight.

With help from the consumer product manufacturers that helped to create this crisis, it will be possible to resurrect and revitalize our recycling industry, create domestic markets for its products, and make our disposal system more sustainable.

(Figure: the waste hierarchy.)


States look at EPR, funding cuts, mandates

by Jared Paben and Colin Staub, February 6, 2018, from Resource Recycling

California: The Golden State is advancing a bill calling for mandates on the use of recycled content in beverage containers. The legislation, Senate Bill 168, requires the California Department of Resources Recycling and Recovery (CalRecycle) by 2023 to establish minimum recycled-content standards for metal, glass or plastic containers (state law already requires glass bottles contain 35 percent recycled content). The bill also requires that CalRecycle by 2020 provide a report to lawmakers about the potential to implement an extended producer responsibility (EPR) program to replace the current container redemption program. The state Senate on Jan. 29 voted 28-6 to pass the bill, which is now awaiting action in the Assembly.

Connecticut: A workgroup convened by the Senate Environment Committee has been meeting for more than a year to consider policies, including EPR, that would reduce packaging waste and boost diversion. The group includes industry representatives, environmental advocates, MRF operators, government regulators and more. It most recently met in December and discussed what should be included in its final recommendations to state lawmakers. EPR, which is on the table for packaging and printed paper, was discussed at length. The group is working to finalize its recommendations and could present them to lawmakers during the current legislative session.


(*) The U.S. Solar Energy Industries Association (SEIA) established a PV panel recycling program in 2016 with the help of SunPower.
Posted in affordable mass goods, Anthropocene, chemistry, citizenship, civilization, Claire Galkowski, CleanTechnica, climate economics, consumption, corporate citizenship, corporate responsibility, corporate supply chains, demand-side solutions, design science, ecological services, ecology, Ecology Action, economics, environment, ethics, extended producer responsibility, extended supply chains, greenwashing, Hyper Anthropocene, local self reliance, materials science, municipal solid waste, rebound effect, resource producitivity, shop, solid waste management, sustainability, temporal myopia, the green century, the tragedy of our present civilization, tragedy of the horizon, wishful environmentalism | Tagged , , , ,

“The path to US$0.015/kWh solar power, and lower” (PV Magazine and GTM Research)

The headline and a page with lots of graphics and associated worksheets come from this PV Magazine article. The underpinning assessment is from GTM Research and their report Trends in Solar Technology and System Prices.

Recall that Natural Gas Combined Cycle, the most efficient natural gas generation process, produces electricity at about US$0.055/kWh. The quoted US$0.015/kWh is significantly lower than previous projections. Of course, you’ll get different numbers from the usual suspects. Just to note, nearly every established quasi-governmental organization, e.g., the U.S. EIA, the IEA, ISO-NE, etc., has missed the mark on both projecting cost per kWh and penetration of solar PV, both behind-the-meter residential and utility scale.

Posted in American Solar Energy Society, Bloomberg New Energy Finance, BNEF, bridge to somewhere, clean disruption, CleanTechnica, decentralized electric power generation, decentralized energy, demand-side solutions, distributed generation, electricity, electricity markets, fossil fuel divestment, green tech, Green Tech Media, investing, investment in wind and solar energy, investments, ISO-NE, local generation, local self reliance, marginal energy sources, microgrids, natural gas, solar democracy, solar domination, solar energy, solar power, Sonnen community, stranded assets, the energy of the people, the green century, utility company death spiral, wind energy, wind power, zero carbon

Not often thought on … geothermal energy

U.S. Geothermal Energy Technologies Office.

And what about in Massachusetts?

Posted in the energy of the people, the green century

Sidney, NY: The lead example of how the USA will deal with future coastal and riverine flooding?

From Bloomberg, the story of Sidney, NY, not that far from where I used to live in Endicott, NY.

More than 400 homes and businesses ended up underwater in Sidney, affecting more than 2,000 people. It was months before Spry and her neighbors could move back in. It was also the second time in five years that the Susquehanna had wrecked half the village. People had just finished rebuilding. When Spry walked back into her soggy house, the street outside reeking with the rancid smell of garbage and fuel, she was hit first, she remembers, by the sight of her brand-new hardwood floor, completely buckled.

Spry didn’t want to rebuild again, and neither did local officials; everyone knew the river would keep flooding homes and businesses. So Sidney decided to try something else: It would use federal and state money to demolish Spry’s neighborhood while creating a new one away from the flood plain for displaced residents. Sidney would be on the forefront of U.S. disaster policy, a case study in what’s known as managed retreat—and the many ways it can go wrong.

Until recently, the guiding philosophy behind attempts to protect U.S. homes and cities against the effects of climate change was to build more defenses. Houses can be perched on stilts, surrounded by barriers, buttressed with stormproof windows and roofs. Neighborhoods can be buffered by seawalls for storm surges, levees for floods, firebreaks for wildfires. Defenses are an instinctive response for a species that’s evolved by taming the natural world.

But sometimes the natural world won’t be tamed. Or, more precisely, sometimes engineered solutions can no longer withstand the unrelenting force of more water, more rain, more fires, more wind. Within 20 years, says the Union of Concerned Scientists, 170 cities and towns along the U.S. coast will be “chronically inundated,” which the group defines as flooding of at least 10 percent of a land area, on average, twice a month. By the end of the century, that category will grow to include more than half of the communities along the Eastern Seaboard and Gulf Coast—and that’s if the rate of climate change doesn’t accelerate. In their less guarded moments, officials in charge of this country’s disaster programs have begun to acknowledge the previously unthinkable: Sometimes the only effective way to protect people from climate change is to give up. Let nature reclaim the land and move a neighborhood out of harm’s way while it still is one.

Posted in adaptation, Anthropocene, bridge to somewhere, climate, climate change, climate data, climate disruption, climate economics, climate education, coastal communities, global warming, hydrology, Hyper Anthropocene, John Englander, living shorelines, riverine flooding

What gives me hope … And it ain’t the small stuff

As Arman Oganisian of Stable Markets writes, “There are no solutions, only trade-offs.” That is a fundamentally engineering attitude.

It is fundamentally about the economics, and, in particular, the dramatic drop in levelized cost of energy for wind and renewables, as well as take-up rates.

But what’s significant is that players in the marketplace who ignore, shun, and oppose arguments, in court and in the public forum of ideas, in favor of regulating Carbon emissions, or of assigning producer responsibility to fossil fuel companies for the downstream harm their products do, will rapidly exhibit econotaxis towards these kinds of motivations.

That, as the above video notes, the values of these companies have dropped precipitously in the recent term is highly gratifying.

And, oh, by the way, it serves them right.

But, also, a warning to the environmental community, whether in the United States or elsewhere, if you are really, honestly concerned with this problem, you would do well to set aside your personal and collective egos and work with these organizations, some of which have been your long time enemies, to achieve these goals. These trade-offs are necessary to get there.

And what is more important, getting there? Or achieving your side conditions and side goals, however significant they might be in isolation? If climate change approaches an existential threat, then it should be treated like one. The safe continuance of a particular species in an isolated ecosystem does not rise to the level of worthiness of measures designed to stop a much bigger calamity.

Grow up.

Oh, and what will happen to fossil fuels?

Posted in Anthropocene, being carbon dioxide, biology, bridge to somewhere, Carbon Cycle, carbon dioxide, Carbon Worshipers, clean disruption, climate business, climate change, climate disruption, climate economics, climate justice, corporate litigation on damage from fossil fuel emissions, Cult of Carbon, decentralized electric power generation, decentralized energy, destructive economic development, disruption, distributed generation, ecological services, ecology, Ecology Action, economic trade, economics, engineering, environment, fossil fuel divestment, fossil fuels, global warming, Green Tech Media, greenhouse gases, grid defection, Humans have a lot to answer for, Hyper Anthropocene, investing, investment in wind and solar energy, investments, leaving fossil fuels in the ground, local generation, local self reliance, Sankey diagram, smart data, solar democracy, solar domination, solar energy, solar power, Sonnen community, the energy of the people, the green century, the tragedy of our present civilization, the value of financial assets, tragedy of the horizon, wind energy, wind power, wishful environmentalism, zero carbon

Sustainable Landscaping

Update: 2018-05-26

It’s not about plants, not entirely. But it seems that, in one agricultural area, pollinators (bees) under stress have ceded their pollinating responsibility to a couple of species of exotic (read invasive) flies. See: J. R. Stavert, D. E. Pattemore, I. Bartomeus, A. C. Gaskett, J. R. Beggs, “Exotic flies maintain pollination services as native pollinators decline with agricultural expansion”, Journal of Applied Ecology (British Ecological Society), 22 January 2018. The only thing surprising about that is that people consider it surprising.

Added references

Updated again below, `Plants of the future’, 2018-05-03

Update, 2018-04-29:

While my first thoughts and reasons for this post were simply to collect together a number of links pertaining to an interesting subject, regarding which there appeared to be some controversy, I have received several reactions to the material, many supportive and positive, others strongly adverse. This indicated to me that this is an area worth knowing more about, and, so, I have pulled quite a number of technical articles from the fields of Ecology, Forest Management, and Invasive Species Studies which I am currently reading. I intend to at least supplement the links below with additional ones explaining states of knowledge at present. I may include some comments summarizing what I have read. In other posts, in the future, I may do some modeling along these lines, since diffusion processes modeled by differential equations are of significant interest to me, whether for biological and physical systems, or diffusion of product innovations, via, for instance, the Bass diffusion model. Those results won’t be posted here, though.

Sustainable landscaping as described by Wikipedia, and by Harvard University. See also the Sustainable Sites Initiative. It’s a lot more than eradicating invasive species. In fact, that might be harmful. There’s a lot of questionable information out there, even by otherwise reputable sources like The Trustees of Reservations. See also their brochure on the subject where they recommend various control measures, including chemical, even if it is not their preferred option. There is evidence Roundup (glyphosate) is indeed effective against at least Alliaria petiolata, with little harm to common, commingled biocoenoses.

Dandelions

(Above from M. Rejmánek, “What makes a species invasive?”, Ecology, September 1996, 3-13.)

Four inspirational books:

I dove into reading Professor del Tredici’s book as soon as I got my copy. Here is part of what he has to say from pages 1-3:

Perhaps the most well-known example of a “spontaneous” plant is Ailanthus altissima or tree-of-heaven, introduced from China. Widely planted in the Northeast in the first half of the nineteenth century, Ailanthus was later rejected by urban tree planters as uncouth and weedy. Despite concerted efforts at eradication, the tree managed to persist by sprouting from its roots and spread by scattering its wind-dispersed seeds …

Although it is ubiquitous in the urban landscape, Ailanthus is never counted in street tree inventories because no one planted it — and consequently its contribution to making the city a more livable place goes completely unrecognized. When the mayor of New York City promised in 2007 to plant a million trees to fight global warming, he failed to realize … that if the Ailanthus trees already growing throughout the city were counted he would be halfway toward his goal without doing anything. And that, of course, is the larger purpose of this book: to open people’s eyes to the ecological reality of our cities and appreciate it for what it is without passing judgment on it. Ailanthus is just as good at sequestering carbon and creating shade as our beloved native species or showy horticultural selections. Indeed, if one were to ask whether our cities would be better or worse without Ailanthus, the answer would clearly be the latter, given that the tree typically grows where few other plants can survive.

There is no denying the fact that many — if not most — of the plants covered in this book suffer from image problems associated with the label “weeds” — or, to use a more recent term, “invasive species.” From the plant’s perspective, invasiveness is just another word for successful reproduction — the ultimate goal of all organisms, including humans. From a utilitarian perspective, a weed is any plant that grows by itself in a place where people do not want it to grow. The term is a value judgment that humans apply to plants we do not like, not a biological characteristic. Calling a plant a weed gives us license to eradicate it. In a similar vein, calling a plant invasive allows us to blame it for ruining the environment when really it is humans who are actually to blame. From the biological perspective, weeds are plants that are adapted to disturbance in all its myriad forms, from bulldozers to acid rain. Their pervasiveness in the urban environment is simply a reflection of the continual disruption that characterizes this habitat. Weeds are the symptoms of environmental degradation, not its cause, and as such they are poised to become increasingly abundant within our lifetimes.

(Slight emphasis added by blog post author in a couple of places.)



The fact that ‘r-strategists’ are the best invaders is not surprising because the overwhelming majority of biological invasions take place in human- and/or naturally-disturbed habitats. Our modern landscape is mainly disturbed landscape.

(Above from M. Rejmánek, “What makes a species invasive?”, Ecology, September 1996, 3-13.)

Links:


Links with some quotes and discussion:

S. L. Flory, K. Clay, “Invasive shrub distribution varies with distance to roads and stand age in eastern deciduous forests in Indiana, USA”, Plant Ecology, 2006, 184:131-141.

Some quotes:

If roads are important corridors for exotic plants or if roadside edges provide good habitat for exotic plant growth, then one would predict decreased exotic plant density with increased distance to roads. In support, the prevalence and cover of exotic plants has been shown to decline with increasing distance to road in a number of ecosystems.

Independent of distance to road, successional age might determine susceptibility of a community to exotic plant invasions. Young forests typically have higher light levels (Levine and Feller 2004), fewer competitors, and less litter than older forests (Leuschner 2002) while mature forest interiors are known to have lower light availability, cooler temperatures, and higher humidity than forest edges (Brothers and Spingarn 1992). We would therefore expect, based on levels of light penetration and microclimatic conditions, that older forests would have higher densities of invasive shrubs near the forest edge than in forest interiors and fewer invasive shrubs overall due to less recent disturbance events and less favourable environmental conditions. We would also expect that younger forests would show weaker correlations of densities of invasive shrubs with increasing distance to road since light levels are higher throughout young forests. This would result in an interaction between distance to road and forest age.

The goal of this study was to quantify the density of invasive exotic shrubs along roads in eastern deciduous forests of varying successional ages in Indiana. Eastern deciduous forests cover much of the landscape east of the Mississippi River. Most of this region has been fragmented by urban and suburban development and roads such that ninety percent of all ecosystem areas in the eastern US are within 1061 m of a road (Riitters and Wickham 2003). We specifically addressed the following questions (1) Does the density of invasive exotic shrubs decline as the distance to a road increases? (2) Does the relationship between density and distance to road differ among exotic shrub species? and (3) Are invasive exotic shrubs less common in mature forests than in young successional forests? Answers to these questions will help develop a predictive framework for plant invasions and better inform management strategies.

This study suggests that roads may contribute to the spread of invasive plants in eastern deciduous forests. We found a highly significant effect of distance to road over all species and for four of seven individual species … One possible mechanism for high densities of invasive shrubs along roads is that exotic shrub propagules are distributed evenly by birds with respect to distance to road and simply survived at a greater rate near the road due to better growth conditions. These conditions might include higher light conditions or increased nutrient or water availability … Better survival and growth of exotic shrubs might also be due to decreased competition with native understory species. Native species may not survive as well along roadsides where runoff from pollutants and exposure to herbivores is greater … A second possible mechanism is that exotic shrub seeds are distributed by birds and other animals in a pattern that parallels the distribution of shrubs that we found. This would mean that the density of dispersed seeds declines with increasing distance to the nearest road but that survival is unaffected by distance to road. A third possible mechanism is that exotic shrub propagules were initially distributed along roads by animals and vehicles and are invading the forest from the roadside edge.

Successional age has been shown to affect exotic plant establishment in old fields in Minnesota with younger successional aged communities more susceptible to invasions and older communities more resistant (Inouye et al. 1987). Our results show that forest successional age plays a similar role in the distribution of invasive shrubs in eastern deciduous forests with invasive shrubs found in greater densities in young and mid-successional forests than mature forests. This is likely due to a combination of factors including differences in light regimes … Exotic shrubs would have survived and grown much more successfully where they did not have to compete with existing trees or intact forests. This hypothesis could help to explain why we found fewer shrubs near the road in mature forests than young and mid-successional forests.

S. L. Flory, K. Clay, “Effects of roads and forest successional age on experimental plant invasions”, Biological Conservation, 2009, 142, 2531-2537.

Posted in adaptation, American Association for the Advancement of Science, argoecology, biology, Botany, Carl Safina, complex systems, conservation, ecological services, Ecological Society of America, ecology, Ecology Action, environment, fragmentation of ecosystems, invasive species, land use to fight, living shorelines, New England, population biology, population dynamics, quantitative biology, quantitative ecology, sustainability, sustainable landscaping, water as a resource | 1 Comment

LLNL Sankey diagram of U.S. national energy flows in 2017: What’s possible, what’s not, and who’s responsible

(Updated, 2018-05-02. See below.)

I love Sankey diagrams, and have written about them with respect to influence of Big Oil on U.S. climate policy, and in connection with what it takes to power a light bulb, providing a Sankey-based explanation for something Professor Kevin Anderson has spoken and written about. Indeed, there’s a wealth of computational capability in R and otherwise, for constructing Sankey diagrams and the like. Here’s a new one from Lawrence Livermore National Laboratory:

(Click on image to see much larger version, for inspection or saving. Use your browser Back Button to return to this blog.)

That’s a lot of energy consumption, and renewables have a long way to go before overtaking it. But, maybe not so much.

First of all, if the solution is, hypothetically, all wind and solar, setting aside storage, nearly all that rejected energy won’t be there. So the actual need isn’t about 98 quads, it’s closer to 31 quads. Call it 35 quads for jollies.

Second, note that wind and solar energy technology are presently on the middle part of the logistic S-curve of growth. (See also diffusion of innovations.) This is a super-exponential region. Call it an exponential region to be conservative. Current estimates place cost cuts in technology for these at 30%-40% per year. Actual adoption meets resistance from regulatory capture and other impediments, and during the last year it was 11%. Clearly, with the cost advantages, the motivation is to go faster and, one can argue, the greater the spread between present sources of energy and wind and solar, the lower the “energy barrier” to jumping to wind and solar despite other impediments. Translating into time, an 11% per year growth rate corresponds to a doubling time of about 6.6 years, and a 30% growth rate to a doubling time of about 2.6 years. Say the doubling time is 4 years.

Third, to get from 3.1 quads to 35 quads is about \pi doublings, and it’s certainly less than 4 doublings.

Fourth, so the really bad news for fossil fuels and all business and people that depend upon them to make a living it that:

  • If the doubling time is 4 years, wind and solar will get to 35 quads in 13 years
  • If the doubling time is 3 years, wind and solar will get to 35 quads in under 10 years
  • If the doubling time is 5 years, wind and solar will get to 35 quads in 16 years
  • And if the doubling time is 9 years, which by all accounts is unduly pessimistic, wind and solar will get to 35 quads in under 30 years
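
A quick back-of-the-envelope check of this arithmetic (a sketch, not in the original post), using the roughly 31 quads identified above as the real need:

# Doubling times implied by annual growth rates, and years to reach ~31 quads.
doubling.time<- function(g) log(2) / log(1 + g)
doubling.time(c(0.11, 0.30))    # about 6.6 and 2.6 years
doublings<- log2(31 / 3.1)      # about 3.3 doublings from today's ~3.1 quads
c(3, 4, 5, 9) * doublings       # years to reach the target for each assumed doubling time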

Will that be enough to keep us from a +3C world?

Probably not: Too little, too late.

But it’ll happen anyway and it is, as I say, why fossil fuel energy and utilities which depend upon them, natural gas and all, are dead on their feet, and stranded. And any government which puts ratepayer and taxpayer dollars into building out new fossil fuel infrastructure is not only being foolish, they are making the mistake of the century.

As for the +3C climate change outcome? Clearly, that is such an emergency that the only option to address it in time is degrowth, not merely cutting back on additional growth. However, there’s no evidence that that is even being considered as an option. To the approvers of additional development in suburbs like the Town of Westwood and elsewhere, Conservation Commissions and all, I simply say:

You have a choice: Either manage a restrained and then negative growth plan yourself, or Nature will do it for you.

Simple. The officials responsible for these decisions know and have been warned repeatedly about these outcomes. They are ignoring them. They own the long term results. Remember them.

Update, 2018-05-02

The Trump/Perry Department of Energy, through its EIA, reports that solar energy in the United States has grown 32% per annum since 2000. So, per the reasoning above, that's a doubling time of under 3 years. Moreover, this is based upon a long sampling period, since 2000, so it is not only stable, it is probably conservative.

Posted in American Association for the Advancement of Science, American Meteorological Association, American Solar Energy Society, AMETSOC, AMOC, Amory Lovins, Anthropocene, being carbon dioxide, Bloomberg New Energy Finance, BNEF, bridge to somewhere, clean disruption, climate change, climate disruption, climate economics, coastal communities, Commonwealth of Massachusetts, Cult of Carbon, decentralized energy, demand-side solutions, denial, ecological services, energy utilities, environmental law, exponential growth, fossil fuel divestment, fossil fuel infrastructure, fossil fuels, global warming, Hermann Scheer, Humans have a lot to answer for, Hyper Anthropocene, investment in wind and solar energy, Joseph Schumpeter, Kevin Anderson, local generation, local self reliance, rationality, reasonableness, regulatory capture, solar democracy, solar domination, solar energy, solar power, Sonnen community, the green century, the right to be and act stupid, the right to know, the tragedy of our present civilization, the value of financial assets, Tony Seba, tragedy of the horizon, utility company death spiral, wind energy, wind power, wishful environmentalism, zero carbon | 1 Comment