Trending wetter with time: weather never moves in a straight line, but data from NOAA NCDC shows a steady increase in the percentage of the USA experiencing extreme 1-day rainfall amounts since the first half of the 20th century. Photograph: NOAA NCDC
My Apology to Paul Douglas
I admit that I do a lot of Republican bashing. I’m a Democrat, and more than that, I’m a partisan. I understand that a political party is a tool for grassroots influence on policy, if you care to use it. The Democratic Party platform, at the state and national level, reflects my policy-related values reasonably well, and the Republican approach is largely defined as supporting the opposite of whatever the Democrats say, even when Democrats come up with a policy that is closely based on a previously developed Republican policy. So, my hope is to see the Democratic caucus in the majority in both houses of my state legislature and both houses of the US Congress. And a Democratic President. This is the only way that the policies I see as appropriate and important are advanced, and the anti-policies put forth by the reactionary party, the Republicans, are not.
So, with respect to elected officials, I will always oppose Republicans and always support Democrats. That includes opposing “Reasonable Republicans” (an endangered species) and, not happily, supporting Red Dog Democrats. This is necessary because of the necessity of a majority caucus in each legislative branch. (You probably know this, but the majority party gets to call the shots, run committees, etc.) At some future date, when Democratic majorities are not as tenuous, I may change that approach, but not now.
If key policy orientations for key issues tended to find cross-party support, I would not be so much of a partisan. But that is not what happens these days in government. My partisanship is not a choice, but a necessity required by Republican reactionary philosophy among elected officials.
So, that is my explanation — not excuse, but explanation — for my Republican bashing, a behavior that is one side of a coin. The obverse is, obviously, Democratic cheerleading.
And, with that as background, I sincerely apologize to my friend Paul Douglas.
Minnesota Nice Weather
Paul is one of the country’s top meteorologists.
When I was about to move to Minnesota, I flew out to find an apartment for my family and get the feel of the landscape. I stayed in a hotel in the near western suburbs, and spent each day looking at apartments and checking out driving times between various neighborhoods and the University of Minnesota. Every evening I picked up the local papers to peruse while watching the local news, because that is a good way to get to know a place.
One day I was out driving around, lost, somewhere near downtown on this mess of highway that made no sense to me. The sky had been filled since early morning with enormous thunderheads, the kind I had seen previously in the Congo, but rare in Boston, where I was living at the time. Suddenly, a huge thunderstorm passed overhead, with hail, and the road filled with water, forcing me to pull off for a few minutes to avoid hydroplaning. After the storm had passed, I drove back out onto the highway, and witnessed an amazing sight.
First, I should note that in Minnesota, you can see the sky for great distances because it is relatively flat here. Minnesotans don’t think of Minnesota as flat, and compared to Kansas, it isn’t. But it is compared to my previous homes in Boston or upstate New York. I remember thinking that day that Minnesota counted as “Big Sky Country” in its own way. Minus the Rocky Mountains.
Anyway, the sky was being big, and the view was filled with more thunderheads. But off to the northeast was a huge, horizontally elongated cloud. It was at about the same elevation as the lower parts of the nearby thunderclouds, longer in its longest dimension than a good-sized thunderstorm, but shaped more like a giant cigar. And it was rotating, rapidly, like a log rolling downhill. (Except it wasn’t really going anywhere.)
I thought to myself, “This is amazing. I wonder if the people of Minnesota appreciate how spectacular and beautiful their sky and weather are, which they observe every day!”
Later that evening, I got back to my motel and switched on the news. The top news story that day, it turns out, happened to be the day’s thunderstorms, so the anchor handed off the mic to the meteorologist.
I had made an error in thinking that the people of Minnesota might be inured to spectacular thunderstorms and giant rotating cigar-shaped clouds. The weather reporter was showing news footage of the sky, including the rotating cigar-shaped cloud I had witnessed. He told the viewers that the storms that day were especially spectacular, and that this giant rotating cigar thing was a special, highly unusual weather event. He named it, calling it an arcus cloud, and noted that it was effectively similar to a tornado in terms of wind speed and destructive potential, but that this sort of cloud rarely touched down anywhere.
(This sort of arcus cloud is a roll cloud, very rare in continental interiors, though somewhat more common in coastal areas.)
That year there were many thunderstorms in the Twin Cities. The following year as well. There were also a lot of tornadoes. All of the tornadoes I’ve ever seen with my own eyes (small ones only) were during that two year period, including one that passed directly overhead and eventually damaged a tree on the property of a house we had just made an offer on, subsequently moving along a bit farther and menacing my daughter’s daycare.
An Albino Unicorn Observes Weather Whiplash
I’m pretty sure, if memory serves, that some time between my observation of the arcus cloud and the Saint Peter tornado, Paul Douglas moved from Chicago back to the Twin Cities, where he had previously been reporting the weather.
Paul Douglas will tell you that during this period he, as a meteorologist covering the midwest and plains, started to notice severe weather coming on more frequently than before. When such a thing happens a few years in a row, one can write that off as a combination of long term oscillations in weather patterns and random chance. But when the fundamental nature of the weather in a region shifts and such normally rare events become typical, then one might seek other explanations. Climate change, caused by the human release of greenhouse gasses into the atmosphere, is ultimately the explanation one is forced to land on when considering widespread, global (and Minnesotan), changes in weather patterns.
Paul describes himself as an “albino unicorn.” This is not a reference to a horn sticking out of his nose, or atypical pigmentation. Rather, he recognizes that as a Republican who fully accepts science, and in particular, the science of climate change, he is an odd beast. It is worth noting that Paul is also an Evangelical Christian. There are not many Evangelical Christian Republicans who understand and accept science. There are probably more than the average liberal or progressive Democrat thinks there are, because such rare beasts need to keep their heads down in many contexts. But Paul is the rarer subspecies of albino unicorn that simply refuses to do that. He speaks openly and often about climate change, giving talks, frequent interviews (like this one with me), and regular appearances on various news and commentary shows.
Paul currently runs this company, and writes an excellent weather blog here. His weather blog focuses on Minnesota weather, but it should be of interest to everyone in the US and beyond, because he also catalogues current extreme weather events globally, and summarizes current scientific research on climate change.
The graph at the top of this post is featured in Paul’s writeup, so go there and read the background. If you happen to know Donald Trump, suggest to him that there is an interesting write-up on climate change by an Evangelical Christian Republican, which he should read in order to get the Evangelical Christian Republican view on the topic!
In a day and age of scammers, hackers, hucksters and special interests it’s good to be skeptical. You should be skeptical about everything. Some of the biggest skeptics on the planet are scientists. In fact, science is organized skepticism. Climate and weather are flip-sides of the same coin; everything is interconnected. Climate scientists tell us the climate is warming and meteorologists are tracking the symptoms: freakish weather showing up with unsettling regularity. Even if you don’t believe the climate scientists or your local meteorologist, do yourself and your kids a favor. Believe your own eyes.
Paul saw the signature of anthropogenic climate change in the weather he was analyzing and reporting on long before climate scientists began to connect the dots with their research. Many of the dots remain unconnected, but the association between observable changes in the climate system and changes in the weather is now understood well enough to say that it is real. I believe that the recent uptick in acceptance of climate science by Americans is partly a result of the impossible-to-ignore increase in severe weather events, especially flooding and major storms. The most severe heat waves have, so far, occurred in other countries, but we do get the news and we do know about them.
Check out Paul’s Guardian writeup where he connects the dots for you, and makes a strong case that we need to put aside denialism of the science.
This is a press release that just came across my desk that I thought would be of interest to you.
High drama at low-key follow-up to UN Climate Change Agreement in Paris
Bonn, Germany – United Nations climate change negotiations today concluded their first session since the adoption of the Paris Agreement in December last year, including the first session of a new body called the Ad Hoc Working Group on the Paris Agreement (APA) tasked with carrying out activities related to the implementation of the Agreement. What was touted as a “housekeeping meeting” following the high-drama of COP21 turned out in fact to be more eventful than expected.
On the closing day, controversy flared around proposed techno-fixes involving bioenergy carbon capture and storage (BECCS) with several dozen African social organisations and networks issuing a joint statement entitled “Sacrificing the global South in the name of the global South: Why the 1.5°C goal must not be met with land grabs.”
This was followed by an announcement of plans to launch a new renewable energy initiative for Least Developed Countries (LDCs) following on from last year’s breakthrough African Renewable Energy Initiative which has attracted $10 billion in pledges. The announcement was made during a press conference with Ambassadors from Sweden and Mali on behalf of the African Group, alongside the chief negotiator of the Alliance of Small Island States and the Chair of the LDCs.
Tense exchanges also took place throughout the week and boiled over in the closing plenary of the Subsidiary Body for Implementation around the issue of “conflicts of interest” with over 75 developing countries and many NGOs calling for climate talks to adopt measures which would limit the ability of fossil fuel corporations to advance their agenda, which runs contrary to the objectives of the negotiations.
As the negotiations wrapped up ahead of the next Conference in Marrakech, November 7-18, representatives from a diverse range of civil society groups expressed their views:
“We spent another precious week engaged in very procedural discussions, but the hard work had to be done. As Gandhi said, ‘speed is irrelevant if you are going in the wrong direction’. Even as negotiations took place this week, the world has been dealing with record-breaking temperatures, and climate impacts. It’s clear that vulnerable communities around the world urgently need support. They need help when they are displaced, and they need strategies to cope with extreme weather events and slow onset impacts. The climate change agenda going forward must reflect these realities” said Harjeet Singh of Action Aid.
“As we move forward to Marrakech, we hope that developed countries will not re-negotiate what they have agreed in Paris for the Agreement to be implemented in a balanced manner, on all the elements, including mitigation, adaptation, loss and damage and means of implementation for developing countries. They should not resort to tactics in the process which lead to mitigation-centric outcomes that will not be just and equitable. We hope that now the process can carry on in an open and transparent way to ensure that a balanced outcome results in Morocco without a re-negotiation of the Paris Agreement and in implementing pre-2020 commitments with urgency” said Meena Raman of Third World Network.
“If Paris is to be more than just a diplomatic success, catalysing the urgent transformation of our global energy systems must be the cornerstone to meeting the planetary goal of 1.5°C. An important first step was the successful launch in Paris of the African Renewable Energy Initiative – now Marrakesh must build on that by broadening this initiative to other vulnerable countries. By becoming the COP for Renewable Energy, it would be genuinely deserving of global applause, for concretely tackling climate pollution as well as delivering energy to the millions of people who have none” said Asad Rehman, Friends of the Earth England, Wales, and Northern Ireland.
“If Marrakech is to live up to its billing as an implementation COP, developed countries must come to the ministerial dialogue on climate finance with clear commitments – with amounts and timeframe – to meeting the $100 billion promised in 2010 and reaffirmed in Paris. This $100 billion is a floor – the real needs of tackling climate change and addressing the impacts are much greater” said Lidy Nacpil of the Asian Peoples Movement on Debt and Development.
“For many years we demanded a 1.5°C goal, which for Africa means significantly more warming and severe impacts on food security. For many years we were told it was not politically possible. Now that we have a 1.5°C in the Paris Agreement, we are being told that the measures to achieve it are not politically possible. Instead of changing the mode of production and consumption in the global North, we in the South are being asked to sacrifice our land and food security on the assumption that technologies such as BECCS will work. Let me be clear: they will not work for us. We cannot sacrifice our food security and land. Instead we need urgent and serious mitigation to keep to 1.5°C. The next 5 years are critical – we hope countries come to Marrakech ready to increase their pre-2020 ambition in line with their fair shares” said Augustine Njamnshi of Pan-African Climate Justice Alliance.
“The Paris Agreement swings the door wide open to non-state actors, including to the private sector, not only to enhance climate action but also to engage in the policymaking process. But no process currently exists to address the perceived, potential, or actual conflicts of interests that could result from that engagement. If we are serious about keeping warming below 1.5 degrees Celsius, Parties must overcome opposition from the US and others and ensure this process has safeguards in place to maintain the integrity of the UNFCCC, its Parties and its outcomes” said Tamar Lawrence-Samuel of Corporate Accountability International.
The probability of this disturbance turning into a tropical storm has been upgraded to 90%, and this transformation is expected to happen some time this evening or on Saturday.
Once that happens, the NWS will probably start issuing maps and probability information for where the storm will go and how strong it will be. For now, the NWS is indicating that “all interests along the southeast coast from Georgia through North Carolina should monitor the progress of this low.”
There is not much to look at yet, but here is an animated GIF of the area. The low pressure system is centered to the east of Florida and north of the Lesser Antilles.
Thursday, May 26, 8 pm CST
The chance of a tropical storm forming by Saturday evening is now 80%, according to the NWS.
The storm is still likely headed for the US East Coast.
Thursday, May 26, midday:
Last year’s Atlantic hurricane season was a bit odd, and produced a storm that formed after the technical end of that year’s season. So, they put it in this year’s season. That was Hurricane Alex.
Yesterday, the first important-looking disturbance of the 2016 hurricane season, which technically starts on June 1st, showed up. And it is possible that this disturbance will turn into something that will hit the US East Coast in time for Memorial Day.
The storm is not yet organized enough to be named. It is located between Bermuda and the Bahamas, and is becoming better defined and strengthening. According to the National Hurricane Center of the National Weather Service:
Environmental conditions are expected to become more conducive for tropical or subtropical cyclone formation on Friday while the system moves west-northwestward or northwestward toward the southeastern United States coast. With the Memorial Day weekend approaching, all interests along the southeast coast from Georgia through North Carolina should monitor the progress of this low.
There is an even chance that this disturbance will form up into a tropical storm by midday Saturday, and a high chance, about 70%, that it will do so some time between now and Tuesday.
Paul Douglas informs me that there is an increasing chance that this storm will brush the coast of South Carolina within the next three or four days, with moderate winds and some storm surge flooding, and minor to moderate flash flooding between late Saturday and early Monday. This may affect the Outer Banks.
He also notes that if the storm does develop sufficient strength to require an evacuation of the barrier islands, this would get complicated, with extra-large numbers of people visiting due to the holiday weekend.
You will recall that the forecasts for this year’s Atlantic Hurricane season suggest that while this will not be a record breaking year, it may be more active (more storms, bigger storms) than average. Since the last couple of years were below average, this will make this year’s season seem above-above average.
Most likely, this storm will mainly be a bunch of rain that falls on the US southeastern plain, but if you are in the region, keep an eye on it. Consider having a backup plan for Memorial Day that involves a roof.
I want to tell you about a great new book that has one forgivable flaw, which I’ll mention at the end. But first, a word from Bizarro Land. This is about the Grand Canyon.
I would think that the Grand Canyon would be the last thing that creationists would point to as proof of a young earth (several thousands of years old). Just go look at the Grand Canyon. One of the top major layers, the Kaibab Formation, is around 300 to 400 feet thick and made mostly of limestone. That would take a long time to form. But wait, there’s more. Within the Kaibab limestone there are also different sorts of rocks, evaporites, which indicate prolonged dry periods. How can an environment that is forming a thick limestone layer, but occasionally drying out for prolonged periods, be accommodated in a short chronology like that required by Young Earth Creationists? This formation also contains fossils of organisms that do not exist today. Certainly, more time than is possible in a world that began in 4004 BC is required to produce the Kaibab Formation. And that is just one relatively thin layer exposed by the Grand Canyon, and nearly at the top.
Down lower than that is a thick series of deposits that reflect major changes in Earth’s climate and ecology. These are the rocks that contribute most to giving the Grand Canyon its glorious redness and depth. They contain fossil footprints of organisms that don’t exist today. They contain alternating layers with evidence of marine environments and dry land terrestrial environments. Any reasonable understanding of how long it would take for these layers to form requires tens or hundreds of millions of years, even without dating, and one can only estimate that the formation of these sediments was finished long before anything like modern life forms existed.
The rock at the base of the Grand Canyon is separated from the rest by a long disconformity (a period of erosion that wiped out an unknown thickness of rock), so this rock is way, way older than everything else. These rocks are highly deformed and contain no evidence of multicellular life. Laying this rock down and subsequently mushing it all up, then eroding the heck out of it, took more than 6,000 years! Probably closer to 600 million years!
On top of all this, many of the formations we see exposed in the Grand Canyon are known to be represented a great distance away in other areas, and in some places those rocks form the guts of mountains. How long does it take for continents to squeeze together and move about with such force to form the American Great Basin and Range system of mountains, in Utah, Nevada, and nearby areas? More than 6,000 years! For those mountains to have formed from flatness fast enough to accommodate a young Earth, there would have to be mountains somewhere forming fast enough that you’d need to set the handbrake on your car if you parked there for a day, in case the parking lot went vertical on you.
If I were a Young Earth Creationist, I’d try to ignore the Grand Canyon and pretend it isn’t there. But it is there. And everybody knows about it.
One alternative to pretending that the Grand Canyon doesn’t exist is to explain how it got there within a time frame of a few thousand years. But that requires speeding up processes to an unbelievable extent.
So, obviously, the only possible way for Young Earth Creationists to deal with the Grand Canyon is to fully depart reality and claim that it formed in a very short period of time by processes never before or since observed.
According to the Young Earth version of the Bible, dry land appeared in 4004 BC. Then, the Garden of Eden and all that stuff happened, and then the Noachian Diluvian event happened, the great flood, in 2348 BC. If we assume that the flood created the canyon itself, then all of the rock we see now exposed in the Grand Canyon was laid down over the course of 1,656 years. But that would be way too reasonable for Young Earth Creationists, who seem to claim that the sediments seen in the Grand Canyon were actually laid down by the great flood itself. The canyon was then exposed by a single, later, flooding event when a big lake let out all its water at once.
It turns out that the Young Earth creationists have a lousy argument to explain the sediments exposed by the Grand Canyon, and the formation of the canyon itself. If geologists try to explain the Grand Canyon, however, they end up with an amazing and quite plausible story full of exciting geological and geographic adventure and intrigue. The Grand Canyon turns out to be really cool.
It includes several chapters by eleven experts, all fascinating, all informative, all amazing, talking about various aspects of both the creationist view of the Grand Canyon, and about the real geology of this amazing feature.
Great illustrations abound within this volume.
It turns out that the Young Earth Creationists are wrong, in case you were wondering.
As an aside, I don’t actually think the Young Earth Creationists have to be right, or even believable by non-scientists, to have succeeded in explaining the Grand Canyon. From the point of view of a Christian who wants to take the Bible literally, all you need to know is that there is an explanation. You don’t even have to know what the explanation is. By simply knowing that somewhere out there a team of Creation Scientists have explained away the annoying claims of great antiquity and such, you can go on believing in the literal truth of the Bible. In fact, better to not explore the Creationist explanation, really. You wouldn’t believe it.
It isn’t just that the Young Earth version of the Grand Canyon is wrong from a scientific perspective. It is also the case that the Young Earth “facts” from the Bible are themselves wrong. This book also covers that set of problems. And, of course, the Grand Canyon is way more Grand from a geological perspective than it is from a Biblical perspective. The Young Earth version is dumb and uninteresting. The real version is big, giant, wonderful science.
The book outlines the basic arguments about the Grand Canyon and how they differ. Then, the authors explore some basic geology needed to understand the Grand Canyon, looking at how sediments form, how the Earth moves, what fossils can tell us, how dating works, etc.
Especially interesting to me are the chapters on the canyon’s formation. This is a very interesting aspect of both canyons and mountains that I ran into when developing tourism and educational materials for geological sites in South Africa. Get a bunch of regular people who are not very science savvy. Bring them to a mountain. Then, discuss how old the mountain is.
If the rocks the mountain is made of are 500,000,000 years old, then the mountain is 500,000,000 years old, right? I’ve seen public info documents that use that logic, so it must be true! But clearly the mountain you are looking at was not a mountain five hundred million years ago. It was an inland sea or something. The mountain itself rose up between 20 and 5 million years ago. So that is how old the mountain is, right? Same with canyons. It isn’t actually hard to understand that the rocks a particular geological feature is made from would be of one age, but that the processes that expose those rocks (erosion or uplift) come later, and that the ages of the two things can be entirely different.
It is probably a lot easier to date the rise of a mountain system than it is to date the erosion of a surface or the cutting of a canyon. This is because after mountain building slows down, datable sediments may form in clearly identifiable environments that did not exist before the mountain was formed. But a hole is a bit harder to grok. When the Grand Canyon formed, and how long it took, are actually active and open scientific questions. This fascinating subject, which relates, as you might imagine, to the creationist story in important ways, is well and fully addressed in this volume.
I asked one of the book’s editors, Tim Helble, what the current open questions and areas of active research are for the Grand Canyon. He told me that one “hot topic continues to be how and when the Grand Canyon was carved. The current Colorado River appears to have integrated multiple drainages and proto-canyons, and how and when they were integrated has attracted a lot of research.” He noted that one of the book’s other editors, Carol Hill, “continues to present evidence that there was a karst (limestone/sinkhole/cave) connection between the eastern and western proto-drainages.”
Also, Tim told me that “the Grand Canyon National Park hydrologist is leading a lot of research on the highly complex groundwater system in the canyon area. This is especially timely with all the recent controversy about uranium mining in the greater Grand Canyon area (which actually goes back many decades).”
There really isn’t a problem with this book, but there is a problem with our collective conversation about creationism vs. science. This book addresses a central point in Young Earth Creationism and resoundingly refutes it. But, this is also an excellent book about the Grand Canyon. Personally, I would love to see a book like this that doesn’t waste a page on the creationist story. I want the geology of the Grand Canyon untainted by reference to the yammering of YECs.
I do fully appreciate the role this book will play, and for this reason I recommend it for all science teachers and others who interface with the public in matters of science. No matter what your area of science is, the creationist argument based on the Grand Canyon has become central dogma for that school of non-thought, and you need to know about it. This volume lets you do that in a way that is also rich in real science and very rewarding.
I hereby encourage the team that put this book together to also write a post-creationist version that does the excellent science and description, and pretends like the Young Earth Creationists never existed. Who knows, maybe they’ll do it!
As noted, this is a nice looking book, almost a coffee-table book but rich in information, suitable as a gift.
I’m not anti-government. I’m pro civilization. But I’m also an anarchist, of a sort. I think institutions should be dissolved and reformed regularly. What really happens is that institutions add bits and pieces over time, in response to things that happen, as solutions to interim problems, until finally the bits and pieces take over and nobody can move.
… a doomed lord, a scheming underling, an ancient royal family plagued by madness and intrigue – these are the denizens of ancient, sprawling, tumbledown Gormenghast Castle. Within its vast halls and serpentine corridors, the members of the Groan dynasty and their master Lord Sepulchrave grow increasingly out of touch with a changing world as they pass their days in unending devotion to meaningless rituals and arcane traditions. Meanwhile, an ambitious kitchen boy named Steerpike rises by devious means to the post of Master of the Ritual while he maneuvers to bring down the Groans.
A subtext of the story is that over time, in the kingdom of Gormenghast, ritual after ritual has been added to the daily life of the royal family, to the extent that there is barely enough time in the day for the Lord to do anything but serve those rituals, and in fact, the Master of the Ritual is ultimately in charge. This fantastical depiction of a fantasy kingdom is the future of all institutions that are not occasionally rebuilt.
There are other elements to this problem. Consider technology. Back when the Year 2000 problem happened, people learned that a good portion of the critical computing technology, such as that used in banking, was based on mainframe computers using ancient programming languages like COBOL, where values were hard coded rather than represented as variables, and data was stored on ancient media. That is actually a good thing in a way, because those systems were proven to work. Shifting a system to the most current and advanced technologies virtually guarantees unforeseen bugs and opportunities for exploits by nefarious crackers. In critical technology, traditional and proven is good. But there are limits. In the video below, Rachel Maddow points out that key data used in the US nuclear defense systems are stored on 8-inch floppies. Where do they even get those floppies?
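The hard-coding problem behind Y2K can be sketched in a few lines. This is a purely illustrative toy (the function and values are hypothetical, not drawn from any real banking system): when a year is stored as two hard-coded digits, the rollover from 1999 to 2000 makes simple date arithmetic go negative.

```python
# Sketch of the classic Y2K two-digit-year bug (illustrative only, not from
# any real system): storing years as two digits makes 2000 look like 1900.

def years_elapsed_two_digit(start_yy, end_yy):
    """Naive elapsed-years calculation on two-digit years, as many
    legacy COBOL-era systems effectively did."""
    return end_yy - start_yy

# An account opened in 1975 ('75'), evaluated in 2000 ('00'):
age = years_elapsed_two_digit(75, 0)
print(age)  # -75: the system thinks the account opens 75 years in the future
```

With four-digit years the same subtraction gives the correct 25; the fix is trivial in isolation, but Y2K required finding every such hard-coded assumption across millions of lines of legacy code.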
In a way this seems the opposite of adding rituals over time, but it actually isn’t. It can create new rituals, and stupid rituals.
The intersection of ancient technologies that were once new and a modern context that demands new rules (such as documentation of communications or transactions) results in bizarre outcomes even more troubling than the use of 8-inch floppies to hold the data needed to run and control the nuclear arsenal.
By now I’m sure you know that we’re talking about emails. Rachel also talks about the official government method of dealing with emails.
When you get an email, or send an email, you print out a copy of it and put it in a box. All of the emails. There are no exceptions.
If everyone printed out every email, there would be about six billion emails printed out, at least one page, often many more, per email. I estimate that if this policy were generally applied across all email uses, 2 or 3% of all paper use would be dedicated to this purpose, not counting storage boxes.
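A back-of-the-envelope version of that estimate might look like the sketch below. Every input here is an assumption chosen for illustration (the daily email count from the text, a guessed pages-per-email average, and a round-number figure for total annual paper consumption), not a sourced statistic:

```python
# Back-of-the-envelope estimate of paper consumed by printing every email.
# All inputs are illustrative assumptions, not sourced figures.

emails_per_day = 6e9        # emails printed per day (figure from the text)
pages_per_email = 1.5       # assumed average pages per printed email
sheets_per_year = emails_per_day * pages_per_email * 365

# Assumed total annual paper consumption, in sheets (round number).
total_paper_sheets_per_year = 1e14

share = sheets_per_year / total_paper_sheets_per_year
print(f"Printed-email share of paper use: {share:.1%}")  # → 3.3%
```

Under these assumptions the policy would eat roughly 3% of all paper, in line with the 2 or 3% guess above; the point of the sketch is that the conclusion is robust even if each input is off by a factor of two.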
How do State Department officials and employees handle this problem? Simple. They ignore it. But how many things that we do, especially in government and other institutions, can’t be ignored, and thus serve as glue poured into the precision gearboxes of our administrative institutions? A lot of them, I suspect.
There is a new technology that can convert both solar and wind energy into electricity in such a way that it is suitable for use on urban rooftops.
Here’s the abstract from the paper describing this work:
To realize the sustainable energy supply in a smart city, it is essential to maximize energy scavenging from the city environments for achieving the self-powered functions of some intelligent devices and sensors. Although the solar energy can be well harvested by using existing technologies, the large amounts of wasted wind energy in the city cannot be effectively utilized since conventional wind turbine generators can only be installed in remote areas due to their large volumes and safety issues. Here, we rationally design a hybridized nanogenerator, including a solar cell (SC) and a triboelectric nanogenerator (TENG), that can individually/simultaneously scavenge solar and wind energies, which can be extensively installed on the roofs of the city buildings. Under the same device area of about 120 mm × 22 mm, the SC can deliver a largest output power of about 8 mW, while the output power of the TENG can be up to 26 mW. Impedance matching between the SC and TENG has been achieved by using a transformer to decrease the impedance of the TENG. The hybridized nanogenerator has a larger output current and a better charging performance than that of the individual SC or TENG. This research presents a feasible approach to maximize solar and wind energies scavenging from the city environments with the aim to realize some self-powered functions in smart city.
The Democratic National Convention Committee has announced who will be on the all important Platform Drafting Committee. The committee will include an impressive mix of Clinton and Sanders supporters, as well as a key member associated with climate change activism.
The committee is assembled by the DNC Chair, who this time around is Debbie Wasserman Schultz.
The members that will represent the presidential campaigns (75% of the members) were chosen to proportionately represent the candidates according to the current vote tally from the primaries.
The platform committee will include, as voting members, Hon. Howard Berman, Paul Booth, Hon. Carol Browner, Rep. Keith Ellison, Rep. Luis Gutiérrez, Rep. Barbara Lee, 350.org founder Bill McKibben, Deborah Parker, State Rep. Alicia Reece, Bonnie Schaefer, Ambassador Wendy Sherman, Neera Tanden, Dr. Cornel West, and James Zogby. Maya Harris from the Clinton campaign and Warren Gunnels from the Sanders campaign will be non-voting but official members of the Committee as well.
The Platform Executive Director will be Andrew Grossman.
As you know, I’ve been running a model to predict the outcomes of upcoming Democratic Primary contests. The model has changed over time, as described below, but has always been pretty accurate. Here, I present the final, last, ultimate version of the model, covering the final contests coming up in June.
Why predict primaries and caucuses?
Predicting primaries and caucuses is annoying to some people. Why not just let people vote? Polls predict primaries and caucuses, and people get annoyed at polls.
But there are good reasons to make these predictions. Campaign managers might want to have some idea of what to expect, in order to better deploy resources, or to control expectations. But why would a voter who is not involved in a campaign care?
I had a very particular reason for working on this project, of predicting primaries and, ultimately, the course of the Democratic race for the Democratic nomination as a whole. When this campaign started, there were several candidates, and they all had positive and negative features. Very early in the process, all but two candidates dropped out, and I found myself liking both of them, though for different reasons. I would have been happy supporting either Hillary Clinton or Bernie Sanders.
Personally I believe that it is good to vote, during a primary, for the person you like best in direct comparison with the other candidates. But at some point, it may be wise to support the one you feel is most likely to win. There are two closely related reasons to do this, and I think most observers of the current campaign can easily understand them. One is to help build momentum for the candidate that is going to win anyway. The other is to limit the damage that is inevitable during a primary campaign as the candidates fight it out.
So, early on in the process, I decided to see if I could produce a reliable method to predict the final outcome of the primary process, in order to know if and when I should get behind one of the candidates. That is the main reason I did this. In order for this method to meet this and other goals, it had to be more accurate than polls.
There are other reasons. One is that it is fun. I’ve been doing this in primaries and general election campaigns for quite a few elections. I like data, I like analyzing data, I like politics, I like trying to understand what is going on in a given political scenario. So, obviously, I’m going to do this.
Another reason is to test the idea that the voters are changing their minds over time. In order to do this one might use all the primaries and caucuses to date to predict future primaries and caucuses, and then, if the predictions go out of whack, you can probably figure that something new is going on. This relates to overall feelings among the electorate as sampled by each state, but it also relates specifically to ideas about why a particular state reacted to the campaigns the way it did.
An example of this came up recently when Bernie Sanders won in West Virginia. My model had predicted a Sanders win there, and the actual vote count was very close to the prediction. Since that prediction was based on voter behavior across the country to date, I was confident that nothing unusual happened in West Virginia. But, something unusual should have happened there, according to some conceptions of this campaign.
The economy of West Virginia is based largely on coal mining, and there are a lot of Democrats there. (Democrats in local elections; they tend to vote for Republicans in the general.) So, it was thought that the voters would pick a candidate based on a perceived position on climate change and coal. Clinton went so far as to pander to the West Virginians with a rather mealy-mouthed comment about how we could still keep mining coal as long as we figured out a way to have it not harm the environment. That was the Clinton campaign doing something about the coal mining vote. Others thought that a Sanders win there would indicate that he somehow managed to get a strong climate change message across to coal miners. That idea is a bit weak because when it comes down to it, Clinton and Sanders are not different enough on climate change to be distinguished by most voters, let alone coal-supporting voters. In any event, the win there by Sanders was touted as a special case of a certain candidate bringing a certain message to certain voters. But he then lost in the next coal mining state over, Kentucky, and in both states the percentage of voters that picked Clinton and Sanders was almost exactly what my model predicted, and that model was not based on climate change, coal, or perceptions or strategies related to these things, but rather, on what voters had been doing all along.
So, nothing interesting actually happened in West Virginia. Or, two interesting things happened that cancelled each other out perfectly. Which is not likely.
In short, the closeness of my model to actual results, and the lack of significant outliers in the overall pattern (see below), seems to indicate that the voters have been behaving the same way during the entire primary season, by and large. This is a bit surprising when considered in light of the assumption that Sanders would take some time to get his message across, and pick up steam (or, I suppose, drive people over to Clinton) over time. That did not happen. Democratic voters became aware of Sanders and what he represents right away, and probably already had a sense of Clinton, and that has not changed measurably since Iowa.
How does this model work?
For the first few weeks of this campaign I used one model, then switched to an entirely different one. Then I stuck with the second model until now, but with a major refinement that I introduce today. The reason for using different models has to do with the availability of data.
All the models use the same basic assumption. Simply put, what happened will continue to happen. This is why I sometimes refer to this approach as a “status quo model.” I don’t use polling data at all, but rather, I assume that whatever voters were doing in states already done, their compatriots will do in states not yet done. But, I also break the voters down into major ethnic groups based on census data. So, for each state, I have data dividing the voting populace into White, Black, Hispanic and Asian. These racial categories are, of course, bogus in many ways (click on the “race and racism” category in the sidebar if you want to explore that). But as far as American voters go, these categories tend to be meaningful.
The first version of the model used exit polling (ok, so I did use that kind of polling for a while) to estimate the percentage of black voters who would prefer Sanders vs. Clinton. I used the simple fact that in non-favorite-son states that were nearly all white, Clinton and Sanders essentially tied, to estimate the ratio of preferences among white voters at about even. I ignored Hispanic and Asian voters because the data were unavailable or unclear.
This model simply simulated voters’ behavior (in the simplest way, no randomization or multiple iterations or anything like that). I also used some guesses (sort of based on data) of the ethnic mix for Democrats specifically in so doing. That somewhat clumsy model worked well for the first several primaries, but then, after Super Tuesday there were (sort of) enough data points to use a different, superior method.
This method simply regressed the outcome of the primary (in terms of one candidate’s percentage of the vote) against the available ethnic variables by state. Early on, the percentage of Hispanic or Asian did not factor in as meaningful at all, and White and Black together or White on its own did not work too well. What gave the best results was simply the percent of African Americans per state.
“Best results,” by the way, is simply measured as the r-squared value of the regression analysis, which can be thought of as the percentage of variation (in voting) explained by variation in the independent variable(s) of ethnicity.
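For readers who want to see the mechanics, here is a minimal sketch of that regression step. The `fit_line` helper is just textbook ordinary least squares, not the actual code behind the model, and the data rows are invented for illustration; the real model used actual primary results and census figures.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + b; returns (m, b, r_squared)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = my - m * mx
    # r-squared: share of variation in y explained by the line.
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return m, b, r2

# Hypothetical (percent Black, Clinton share of two-way vote) pairs, one per state:
pct_black = [2, 5, 15, 30, 50]
clinton_share = [45, 48, 55, 65, 80]
m, b, r2 = fit_line(pct_black, clinton_share)
```

With real data, `r2` is exactly the "percentage of variation explained" figure quoted throughout this post.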
Primaries vs. Caucuses and Open vs. Closed
Many things have been said about how each of the two candidates does in various kinds of contests. We heard many say that Sanders does better in caucuses, or that Clinton does better in closed primaries. During the middle of the primary season, I tested that idea and found it wanting. Yes, Sanders does well in caucuses, but the ethnic model predicts Sanders’ performance much better than the caucus/no-caucus difference. It turns out that caucusing is a white people thing. There are no high-diversity states where caucusing happens. It is not the caucus, but rather the Caucasian, that gives Sanders the edge.
This graph shows how Sanders vs. Clinton over-performed in caucuses vs. primaries.
The value plotted is the residual of each contest in relation to the model, or how far off a theoretical straight line approximating the pattern of results each contest was. Two things are apparent. One is that caucuses are less predictable than primaries. The other is that while Sanders did over-perform in several caucuses, this was not a fixed pattern.
This graph shows the residuals divided on the basis of whether the contest was open (so people could switch parties, or engage as an independent) vs closed (more restricted).
Open contests were more variable than closed contests, but it is not clear that either candidate did generally better in one or the other.
After many primaries and caucuses were finished, there were finally enough data to use the kind of contest as a factor in the regression analysis. There are a lot of ways to do this, but I chose the simplified brute force method because it actually gives cleaner, and more understandable, results.
I simply divided the sample into the kind of contest, and then ran a multivariable regression analysis with each group, with the percent of Sanders plus Clinton votes cast for Clinton as the dependent variable, and the percentage of each of the four ethnic categories as the independent variables. There are some combinations of caucus-primary and open-closed/semi-open/semi-closed that are too infrequent to allow this. For those contests, I simply developed a regression model based on all the data to use to make a prediction in each of those states. The results, shown below, use this method of developing the most accurate possible model.
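The brute force split can be sketched as follows, using a single predictor for brevity (the actual model used all four ethnic variables). The contest rows and the minimum-sample cutoff here are invented for illustration:

```python
from collections import defaultdict

def fit_line(xs, ys):
    """Textbook ordinary least squares for y = m*x + b; returns (m, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return m, my - m * mx

MIN_CONTESTS = 4  # assumed cutoff below which a contest type is too rare

# Made-up (contest type, percent Black, Clinton share) rows:
contests = [
    ("closed_primary", 30, 65), ("closed_primary", 5, 48),
    ("closed_primary", 15, 55), ("closed_primary", 50, 80),
    ("open_caucus", 2, 40),
]

by_type = defaultdict(list)
for ctype, x, y in contests:
    by_type[ctype].append((x, y))

# Pooled model from all the data, used as the fallback for rare types:
pooled = fit_line([x for _, x, _ in contests], [y for _, _, y in contests])

models = {}
for ctype, rows in by_type.items():
    xs, ys = zip(*rows)
    models[ctype] = fit_line(list(xs), list(ys)) if len(rows) >= MIN_CONTESTS else pooled

def predict(ctype, pct_black):
    m, b = models.get(ctype, pooled)
    return m * pct_black + b
```

The structure is the point: enough contests of a type get their own fitted coefficients, and everything else falls back to the all-data model.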
How does this sort of model actually make a prediction?
The actual method is simple, and most of you either know this or don’t care, but for those who would like a refresher or do care…
The regression model, using multiple variables, produces a series of coefficients and an intercept. You will remember from High School algebra that the formula for a line is
Y = mX + b
X is the independent variable, along the x axis, and Y is what you are trying to predict. m is the slope of the line (a higher positive number is a steeply upward sloping line, for example) and b is the point where the line crosses the Y axis.
For multiple variables, the formula looks like this:
Y = m1(X1) + m2(X2) + … + mn(Xn) + b
Here, each coefficient (m1, m2, up to mn) is a different number that you multiply by each corresponding variable (percent White, Black, etc.) and then you add on the intercept value (b). So, the regression gives the “m’s” and the ethnic data gives the “X’s” and you don’t forget the “b” and you can calculate Y (percent of voters casting a vote for Clinton) for any given state.
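In code, the prediction is just that formula applied to one state's census numbers. The coefficients and percentages below are hypothetical placeholders, not the model's fitted values:

```python
# Hypothetical regression coefficients (the m's) and intercept (b);
# the real values come out of the fitted regression, not reproduced here.
coeffs = {"white": -0.10, "black": 0.55, "hispanic": 0.20, "asian": 0.15}
intercept = 48.0

def predict_clinton_share(ethnic_pcts):
    """Y = m1*X1 + m2*X2 + ... + mn*Xn + b."""
    return intercept + sum(coeffs[k] * ethnic_pcts[k] for k in coeffs)

# Made-up census shares for one state (the X's):
state = {"white": 60.0, "black": 25.0, "hispanic": 10.0, "asian": 5.0}
prediction = predict_clinton_share(state)  # -> 58.5 with these made-up numbers
```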
So, enough already, who is going to win what primary when?
Not so fast, I have more to say about my wonderful model.
How have the public opinion polls done in predicting the contests?
Everybody hates polls, but like train wrecks, you can’t look away from them.
Actually, I love polls, because they are data, and they are data about what people are thinking. The idea that polls are inaccurate, misleading, or otherwise bogus is an unsubstantiated and generally false meme. Naturally, there are bad polls, biased polls, and so on, but for the most part polls are carried out by professionals who know what they are doing, and I promise that those professionals are aware of the things you feel make polls wrong, such as the shift from landlines to cell phones.
Anyway, polls can be expected to be reasonable predictors of election outcomes, but just how good are they?
Looking at a number of races today, excluding only a few because there were no polls, I got the Real Clear Politics web site averages for polls across the states, transformed those numbers to get a percentage of the Sanders + Clinton vote that went to Clinton, and plotted that with the similarly transformed data from the actual primaries and caucuses. The r-squared value is 0.52443, which is not terrible, and the graphic shows that there is a clear correlation between the two numbers, though the spread is rather messy.
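For the curious, that comparison amounts to transforming each poll average and each actual result into Clinton's share of the two-candidate vote, then squaring the correlation between the two series. The poll and result numbers below are invented, not the real RCP averages:

```python
def two_way_share(clinton, sanders):
    """Clinton's percentage of the Clinton + Sanders vote."""
    return 100 * clinton / (clinton + sanders)

def r_squared(xs, ys):
    """Squared Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Invented (Clinton, Sanders) poll averages and actual results for three states:
polls = [two_way_share(c, s) for c, s in [(48, 44), (55, 40), (40, 52)]]
results = [two_way_share(c, s) for c, s in [(51, 49), (58, 42), (43, 57)]]
fit = r_squared(polls, results)
```

Run against the real numbers, this is the calculation that produced the 0.52 figure above.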
The ethnic status quo model outperformed polls
My model is actually many models, as mentioned. I have a separate regression model for each of several kinds of primary, including Closed Caucuses, Closed Primaries, Semi-Closed Primaries, and Open primaries. I did not create separate models for the much rarer Semi-Open Primary, Semi-Open Caucus or Open Caucus style contests, as each of these categories had only one or a few states. Rather, the model used to calculate values for these states is derived from all the data, so addressing specific quirkiness of each kind of contest is sacrificed for large sample size.
I also generated models that included White, Black, Hispanic, and Asian; each of these separately; and various combinations of them. As noted above, the best single predictor was Black. Hispanic and Asian were very poor predictors. White was OK but not as good as Black. But, combining all the variables worked best. That is not what usually happens when throwing together variables. It is more like mixing water colors, you end up with muddy grayish brown most of the time. But this worked because, I think, diversity matters but in different ways when it comes in different flavors.
When the total data set was analyzed with the all-ethnicity model, that worked well. But when the major categories of contest type were analyzed with the all-ethnicity model, some of the data really popped, producing some very nice r-squared values. Closed caucuses cannot be predicted well at all (r-squared = 0.2577) while open caucuses perform very well (over 0.90, but there are only a few). The most helpful and useful results, though, were for the closed primary, open primary, and semi-closed primary, which had r-squared values of 0.69, 0.61, and 0.74, respectively.
What this means is that the percentage of the major ethnic groups across states, which varies, explains between about 61 and 74% of the variation in what percentage of voters or caucusers chose Clinton vs. Sanders.
Polls did not do as well, “explaining” only about half the variation.
So, the following graph is based on all that. This is a composite of the several different models (the same basic model recalculated separately for some of the major categories of contest), using nominal ethnic categories. The model retrodicts, in this case, the percentage of the vote that would be given to Clinton across races. Notice that this works very well. The few outliers both above and below the line are mainly caucuses, but they are also mainly smaller states, which may be a factor.
Who will win the California, New Jersey, Montana, New Mexico, North Dakota, South Dakota, and D.C. primaries?
Clinton will win the California, New Jersey, New Mexico and D.C. Primaries. Sanders will win the Montana, North Dakota, and South Dakota primaries. According to this model.
The distribution of votes and delegates will be as shown here:
This will leave Sanders 576 pledged delegates short of a lock on the convention, and Clinton 212 pledged delegates short of a lock on the convention. If Super Delegates do what Sanders has asked them to do, to respect the will of the voters in their own states, then the final count will be Sanders with 2131 delegates, and Clinton with 2560 delegates. Clinton would then have enough delegates to take the nomination on the first ballot.
In the end, Clinton will win the nomination on the first ballot, and she will win it with more delegates than Obama did in 2008, most likely.
The amount of carbon dioxide in the Earth’s atmosphere directly and indirectly determines the sea level. The more CO2 the higher the sea level. The details matter, the mechanism is complex, and as CO2 levels change, it takes an unknown amount of time for the sea level to catch up.
The present day level of CO2 is just over 400ppm (parts per million). For thousands of years prior to humans having a large effect on this number, the level of atmospheric CO2 was closer to 250ppm. Human release of CO2 into the atmosphere by the burning of fossil fuels, and other human activities, is responsible for this difference. We expect the atmospheric concentration of CO2 to rise considerably by the end of the century. It is remotely possible that by 2100, CO2 will be about where it is now, but only if a significant effort is made to curtail its release. If nothing is done about the release of CO2 by human burning, the number will exceed 1000ppm by 2100. Reasonable estimates assuming the most likely level of effort to change the energy system put CO2 at somewhere around 600 to 700ppm by the end of the century.
So, it is reasonable to ask the question, what is the ultimate sea level likely to be with atmospheric concentrations of CO2 between 500 and 700ppm?
How long will sea level rise take?
Once the CO2 is in the atmosphere, it stays there for a very long time. So, if we curtail emissions and manage to keep atmospheric CO2 between 500 and 700ppm, that number will not drop for centuries. So, again, we have to expect sea levels to rise to whatever level is “normal” for an atmospheric concentration of CO2 between 500 and 700ppm. That is a conservative and perhaps even hopeful estimate.
A fair amount of research (but not nearly enough) has been produced over the last two or three years with the aim of estimating how fast and to what degree the major glaciers of the earth (in Greenland and Antarctic) will melt with global warming. In the long view, all of this research is irrelevant. The simple fact is that with higher CO2 levels, a lot of that ice will ultimately melt, and sea levels will ultimately go up. But in the short and medium term, that research is some of the most important research being done in climate change today, because it will lead to an understanding of the time frame for this rise in sea level.
None of the research I have seen satisfactorily estimates this rate, but with each new research project, we have a better idea of the process of deterioration of major glaciers. As this research progresses, the glaciers themselves are actually deteriorating, but they are only beginning to do so. As the research advances, we will get a better theoretical model for glacial deterioration. As the glaciers deteriorate we will have increasing opportunity to calibrate and test the models. I expect that a few years from now (ten or twenty?) there will be active competition in the research world between theoretically based models and empirical observations to provide rate estimates for sea level rise. But at present we have mainly theory (the observational data is important but insufficient) and the theory is too vague.
A new paper came out this week that explores the process of deterioration of a major part of the Antarctic glacial mass. I’ll summarize this research below, but the main point of this post is to put all of this recent research in the context outlined in the first few paragraphs above. How much sea level rise can we ultimately expect, even if we have no good idea of when it will happen?
It may not matter how fast sea level rises
Uncertainty about the time frame for glacial melt is important for all sorts of practical reasons, but an interesting aspect of human culture and economy obviates that uncertainty, and does so rather ironically. In our economic system, we value things in many different ways. There are things that have great value in part because they are fleeting, rare, or ephemeral. People pay a lot for a great meal that is gone in 20 minutes, a random act of erotic pleasure, two minutes of terror on a carnival ride, or a small pile of white powder.
But we also pay good money for things because of their long term value. A classic problem in economics asks why a man (it’s always a man) in, say, Egypt, is willing to plant and tend hundreds of date tree saplings, knowing full well that the first fruit will not be provided until long after his own death. The reason, of course, is that his son will inherit these trees. Of course, his son is not likely to gain much from these trees because they will still be young and small. So the value of this grove of trees to his son is based mostly on the value to the son’s son. And so on.
This is how we place value on real estate. The main reason that a home you might consider buying today is of a certain value is that you can sell it for a similar or greater value in the future. The fact that short term fluctuations may destroy a good part of that value over the next decade does not obviate the longer term value of the property.
Some of the most valuable property in the US and in many other countries is within spitting distance of the ocean. The ocean itself adds this extra value either as commercial or industrial space, or as high-end domestic or tourist space.
If sea level rise sufficient to destroy that property was imminent, so the property would be destroyed this year, then the value of that property would be zero. But considering that the value of the property is always based in large part on the future sale value, then sea level rise sufficient to destroy it, but that won’t happen for a century, is sufficient to destroy that long term component. If you acquire property today that will eventually be flooded by the sea, you might think you own it. But really, you are renting it. When the sea rises up and inundates the lot, the lease is up.
So, in a way, it does not matter how long it will take the sea to rise, say, 10 meters. Any property that would be destroyed with sea level rise is, right now, worth less than a market ignorant of this inevitability would price it.
Why is it hard to estimate the rate of sea level rise?
Most of the ice in Antarctica is sitting on the interior of the continent, well away from the sea. But much of this ice is held in large catchments, or valleys, that have outlets to the sea. Those outlets are plugged with huge masses of ice, and that ice is, in turn, held in place by grounding lines, where part of the ice sits tenuously on the bedrock below sea level.
Behind the grounding lines, upstream, are valleys of various depths, but deeper than the point of grounding itself. It is thought that if the ice sitting on the grounding line falls apart, the plug of ice will deteriorate fairly quickly until a new grounding line, perhaps many kilometers upstream, is established. But that grounding line may be subject to deterioration as well, and eventually, the outlet valley that connects the interior catchment to the sea becomes open water, and the ice in that catchment can also deteriorate, and fall into the sea mainly in the form of icebergs regularly calved off the glacier, like we see today in Greenland.
The main cause of global warming induced melting of the ice near the grounding line is warm water. The surface of the Antarctic glaciers does not melt very much from warm air, because the air over the southern continent is rarely above freezing. However, with global warming, we expect air temperatures to go above freezing more commonly. This would contribute to thinning of the ice over the grounding lines, and thus, more rapid breakdown of the plug holding most of the ice in place.
It is thought that when a grounding line fails, and the plug of ice begins to move into the ocean, steep cliffs are formed alongside the deteriorating ice. This would cause the ice behind the cliffs to destabilize, causing icebergs to form at a very high rate. Also, the interior ice, in the large catchments, is generally thought to be unstable, so when Antarctic glaciers reach a certain point of deterioration, those glacial masses may deteriorate fairly quickly.
Each of these steps in glacial deterioration is very difficult to model or predict, as these phenomena have never been directly observed and the process involves so many difficult to measure mechanical and catastrophic events.
Upstream from the grounding line, through the horizontally flowing glacial masses in the plug, and up into the catchments, the sub-ice topography is complex and will likely control, by speeding up or slowing down, glacial deterioration. It is thought that many of the glacial masses that make up Antarctica’s ice have melted and refrozen numerous times, and glacial ice has moved towards the sea again and again, over the last several million years. As glacial ice moves along it carves out valleys or deposits sediments in a complex pattern, which then determines subsequent patterns of ice formation or deterioration. It is reasonable to assume that each time the glaciers melt and reform, the terrain under the ice becomes, on average, more efficient in allowing the movement of ice towards the sea. Thus, any estimate of the rate of glacial movement and deterioration based on past events is probably something of an underestimate of future events.
This has all happened before
We know that the world’s glaciers have melted and reformed numerous times from several sources of evidence, and that this has been the major control of global sea level, as water alternates between being trapped in glacial ice and being in the oceans.
We know that global sea levels have gone up and down numerous times, because we see direct evidence of ancient shorelines above current shorelines, and we have direct evidence that vast areas of the sea have been exposed when glaciers were at maximum size.
We can also track glacial growth and melting by using oxygen isotopes that differ in mass. Glaciers tend to be made of water that contains a relatively high fraction of light oxygen, while the ocean water tends to have relatively more heavy oxygen. This is because water molecules with heavy oxygen are slightly less likely to evaporate, so precipitation tends to be light. Glaciers are ultimately made of precipitation. There are organisms that live in the sea that incorporate oxygen from sea water into their structures, which are then preserved, and can be recovered from drilling in the deep sea. By measuring the relative amount of heavy vs. light oxygen in these fossils, controlling for depth in the sea cores, and dating the cores at various depths, we can generate a “delta–18” (short for “Difference between Oxygen–18 and Oxygen–16”) curve. This is an indirect but very accurate measure of how much of the world’s free water is stuck in glaciers at any given moment in time.
The Delta–18 curve for Earth for the last ca 800,000 years looks like this:
That curve is made from deep sea cores, and unfortunately, the deep sea cores don’t go back far enough in time to get a good idea of glacial change, at this level of detail, for enough time to really get a handle on the full history of glaciation. But by piecing together data from many sources, and careful use of dating techniques like paleomagnetism, it is possible to get a long view of Earth history. The following graphic (from The Phanerozoic Record of Global Sea-Level Change by Miller et al, Science 310(1293)) shows the general pattern.
The Earth was warmer many tens of millions of years ago (before the modern ecology, flora, and fauna evolved), fluctuating between warm and very warm over long periods of time. Then, in recent millions of years, things cooled down quite a bit. It is during this cooler period that the modern plants and animals became established, and that humans came on the scene.
Here’s what I want you to get out of this graphic. Look at the purple line. These are global sea levels after about 7 million years ago. Note that sea levels were often tens of meters higher than they are today (relative to the zero line on this graph).
Now have a look at this graph.
This is a very complicated graph and in order to understand all of the details you’ll have to carefully read the original paper. But I can give you a rough idea of what it all means.
Each of the four graphics is a different way that paleoclimatologists can look at the relationship between atmospheric CO2 and sea level, and compile a large number of data points. Think of each graphic as a metastudy of CO2 and sea level using four independent approaches and all of the data available through about 2012.
In each graph, note the dotted lines. The vertical dotted line is CO2 at 280ppm, taken as the pre-industrial value (though we know that is probably an overestimate, since lots of CO2 had already been released into the environment by human land use and burning prior to the presumed beginning of the “pre-industrial” period). The horizontal line is the present day sea level.
Each of the little symbols on the graph is a different observation of CO2 and sea level from ancient contexts, using various techniques and with various paleoclimate data sets over many millions of years.
Note that at the point where the pre-industrial CO2 and the modern sea level intersect, there are many points above and below the line. This is partly error or uncertainty, and partly because of the time lag between a reduction in CO2 and the subsequent growth of glaciers. It takes many thousands of years for these glaciers to grow (they melt much more quickly).
As we go from pre-industrial levels of CO2 through the values of interest here, getting up to 600 or more parts per million, past sea levels are generally higher than the present. In fact, at 400ppm, where we are now, sea levels are substantially higher than the present for almost all of the data points. This probably means the following, and this is one of the most important sentences in this post, so I’ll give it its own paragraph:
Present day carbon dioxide levels are associated with sea levels many meters above the present sea level; the current Earth’s atmosphere is incompatible with the current Earth’s glaciers, and those glaciers will therefore become much smaller, and the sea level much higher, even if we stop adding CO2 to the atmosphere this afternoon.
The other interesting thing about this graph is that above about 600ppm atmospheric CO2, the sea levels observed in the past are even higher, way, way higher. These are time periods when there was virtually no glacial ice on either pole, or anywhere else. This is what the Earth looks like when virtually all the ice is melted. This is the Earth we are likely to be creating if we allow CO2 levels to approach 1000ppm, and the track we are on now virtually guarantees we reach that level by the end of the century.
One might assume that we would never let that happen, that we would solve the problem of where to get energy without burning fossil fuel before that time. But at this very moment there is about a 50–50 chance that the next president of the United States will be a man who believes that climate change is a hoax. In other words, it is distinctly possible that one of the largest industrial economies in the world, and a globally influential government, will ignore climate change and forestall the transition to clean energy until 2024.
Look at this graph. The upper line, “High Emissions Pathway (RCP 8.5),” I now officially rename the “Trump Line.”
So, how high will the sea levels rise?
There really is little doubt that we are looking at several meters of sea level rise given our current CO2 levels. How many meters?
From the paper cited above: “…our results imply that acceptance of a long-term 2 degrees C warming [CO2 between 500 and 450 ppm] would mean acceptance of likely (68% confidence) long-term sea level rise by more than 9 m above the present…”
Personally, I think this is a low estimate, and the actual value may be more like 14 meters. Or more. There is no doubt that we are going to add many tens of ppm of CO2 to the atmosphere over the next few decades even if we act quickly on changing our energy system, and the chances are good that we will end up close to 600ppm. This is at the threshold, based on the paleodata, between lots of glacial ice melting to produce 9 to 15 or so meters of sea level rise, and nearly all of the glacial ice melting, to produce sea level rise of over 30 meters.
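The rough relationship described in the last few paragraphs can be sketched as a back-of-envelope interpolation. The anchor points below come from the discussion in this post, not from the underlying paper: pre-industrial CO2 (~280ppm) maps to today’s sea level, ~400ppm to roughly +9 meters, ~600ppm to the ~30 meter regime, and an essentially ice-free Earth (which full deglaciation would put at roughly +65 meters) at the high end. This is an illustration of the thresholds, not a climate model:

```python
# Illustrative anchor points (CO2 in ppm, long-term sea level rise in meters),
# taken from the thresholds discussed above; the 65 m figure is the rough
# total if essentially all glacial ice were to melt.
ANCHORS = [(280, 0.0), (400, 9.0), (600, 30.0), (1000, 65.0)]

def long_term_sea_level(co2_ppm):
    """Crude linear interpolation between the anchor points above."""
    if co2_ppm <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    for (x0, y0), (x1, y1) in zip(ANCHORS, ANCHORS[1:]):
        if co2_ppm <= x1:
            return y0 + (co2_ppm - x0) * (y1 - y0) / (x1 - x0)
    return ANCHORS[-1][1]  # beyond 1000 ppm: essentially all ice gone

print(long_term_sea_level(500))  # midway between the 400 and 600 anchors
```

Again, the point of the sketch is only that the committed sea level rise climbs steeply once CO2 passes the values we are already at.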
And, if we follow the Trump Line we’ll reach that level of CO2 well before the end of the century.
Again, the question emerges: how long will it take for sea level to rise in response to the added CO2? The answer, again, is that we don’t know, but in important ways, as noted above, it matters less than one might think.
We are probably going to be stupid about sea level rise
There is another aspect of this problem that is underscored by the most recent research, as well as by several earlier research projects. For the most part, the glaciers will not melt evenly and steadily. This is not a situation where we can measure how much ice melts off every decade and extrapolate that into the future. What we now know about the big glaciers is that they will almost certainly deteriorate a little here and a little there, then suddenly and catastrophically break down, losing a huge amount of their ice to the sea, then for a period of time continue to fall apart at a lower but still accelerated rate, which will slow down after a while until some level of stability is reached. Then, that stability will remain threatened for a period of time until the next catastrophic collapse of that particular glacier.
Also, as noted in the research project I’ll report below, some of these catastrophic steps that happen later in the process may in some cases be the largest.
Here’s why this is important. Honestly, do you even believe that we have already added enough CO2 to the atmosphere to flood all of the planet’s major coastal cities, and major areas of cropland, and that this can’t be stopped? Isn’t that a bit extreme, alarmist, even crazy? Of course it seems that way, and even if you accept the science, a significant part of you will have a very hard time accepting this conclusion.
Those in charge of policy, the people who can actually do something about this, are not immune to this sort of cognitive dissonance. So, as long as the glaciers are only adding a foot a century instead of a foot a decade, the massive melting scenario will be on the back burner. Then, of course, one or two or more of these glaciers are going to lose their grounding lines within a few years of each other, and start to add huge amounts of water to the sea, and everyone will freak out and catastrophic coastal flooding will happen, and then the whole thing will slow down and more or less stop for a period of time, and once again, the prospect of sudden and major sea level rise will return to the back burner.
Then it will happen again.
New research on glacial collapse
OK, about that new research. Here is the abstract:
Climate variations cause ice sheets to retreat and advance, raising or lowering sea level by metres to decametres. The basic relationship is unambiguous, but the timing, magnitude and sources of sea-level change remain unclear; in particular, the contribution of the East Antarctic Ice Sheet (EAIS) is ill defined, restricting our appreciation of potential future change. Several lines of evidence suggest possible collapse of the Totten Glacier into interior basins during past warm periods, most notably the Pliocene epoch, … causing several metres of sea-level rise. However, the structure and long-term evolution of the ice sheet in this region have been understood insufficiently to constrain past ice-sheet extents. Here we show that deep ice-sheet erosion—enough to expose basement rocks—has occurred in two regions: the head of the Totten Glacier, within 150 kilometres of today’s grounding line; and deep within the Sabrina Subglacial Basin, 350–550 kilometres from this grounding line. Our results, based on ICECAP aerogeophysical data, demarcate the marginal zones of two distinct quasi-stable EAIS configurations, corresponding to the ‘modern-scale’ ice sheet (with a marginal zone near the present ice-sheet margin) and the retreated ice sheet (with the marginal zone located far inland). The transitional region of 200–250 kilometres in width is less eroded, suggesting shorter-lived exposure to eroding conditions during repeated retreat–advance events, which are probably driven by ocean-forced instabilities. Representative ice-sheet models indicate that the global sea-level increase resulting from retreat in this sector can be up to 0.9 metres in the modern-scale configuration, and exceeds 2 metres in the retreated configuration.
Chris Mooney has written up a detailed description of the research including information gleaned from interviews with the researchers, and you can read that here. In that writeup, Chris notes:
Scientists believe that Totten Glacier has collapsed, and ice has retreated deep into the inland Sabrina and Aurora subglacial basins, numerous times since the original formation of the Antarctic ice sheet over 30 million years ago. In particular, they believe one of these retreats could have happened during the middle Pliocene epoch, some 3 million years ago, when seas are believed to have been 10 or more meters higher (over 30 feet) than they are now.
“This paper presents solid evidence that there has been rapid retreat here in the past, in fact, throughout the history of the ice sheet,” Greenbaum says. “And because of that, we can say it’s likely to happen again in the future, and there will be substantial sea level implications if it happens again.”
And, from the original paper (refer to the graphic below):
The influence of Totten Glacier on past sea level is clearly notable, but for any particular warm period it is also highly uncertain, because the system is subject to progressive instability. Our results suggest that the first discriminant is the development of sufficient retreat to breach the A/B-boundary ridge. This causes an instability-driven transition from the modern-scale configuration to the retreated configuration. Under ongoing ice-sheet loss, the breaching of Highland B causes further retreat into the ASB. Each of these changes in state is associated with a substantial increase in both the absolute and the proportional contribution of this sector to global sea level.
This is a video by Peter Sinclair that dates to a bit earlier than this research was published, but covers the same issue:
Many of the interesting natural areas, like national parks or preserves, have a museum. In the museum there is often a geology exhibit, showing the changes in the landscape over long periods of time. Almost always, there was a period of time when the place you are standing, looking at the exhibit, was part of a “great inland sea.”
Let me introduce you to my little friend … the Great Inland Sea. Because it is coming back.
I have not updated my model for predicting primary outcomes in the Democratic contest, but since the last few predictions were very accurate, I don’t feel the need to do so. However, I will before the California primary, just in case.
Meanwhile, my model suggests the following for today’s primaries.
Kentucky should be nearly a tie, though my model suggests that Sanders will get one more delegate than Clinton (Clinton: 27, Sanders: 28).
The model also suggests that Sanders will win in Oregon, Clinton: 24 and Sanders: 37.
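Predictions like these turn a projected vote share into a delegate count, because Democratic pledged delegates are awarded roughly in proportion to the vote. The real DNC rules are more complicated (most delegates are allocated at the congressional-district level, with a 15% viability threshold), so here is only a simplified statewide sketch, with made-up vote totals, showing how a near 50–50 vote produces a near-even delegate split:

```python
def allocate(delegates, votes):
    """Proportionally allocate delegates by the largest-remainder method.

    Simplified sketch: no 15% viability threshold, no district-level splits.
    votes maps candidate name -> vote count.
    """
    total = sum(votes.values())
    quotas = {c: delegates * v / total for c, v in votes.items()}
    seats = {c: int(q) for c, q in quotas.items()}
    # Hand leftover seats to the largest fractional remainders.
    leftover = delegates - sum(seats.values())
    for c in sorted(quotas, key=lambda c: quotas[c] - seats[c], reverse=True)[:leftover]:
        seats[c] += 1
    return seats

# Hypothetical near-tie vote over Kentucky's 55 pledged delegates:
print(allocate(55, {"Clinton": 212550, "Sanders": 210750}))
```

Even a few thousand votes of separation only moves a single delegate in a state this size, which is why the model can call Kentucky a near-tie.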
There is very little polling in Kentucky, but the latest poll from early March has Clinton slightly ahead. I expect that to be wrong.
Kentucky will be interesting. Clinton has been campaigning fairly strongly there, with TV ads and a lot of hand shaking. Sanders has been campaigning very little there lately, but he was campaigning heavily up until just a couple of days ago.
In Oregon, there is also very little in the way of polls, but the one poll I’ve seen, from just a week ago, has Clinton winning handily.
You will remember that my model is based mainly on ethnicity, and Oregon is a white state, thus the predicted Sanders win. But Oregon is also way different than other states. Politically it is more liberal, and they vote by mail. Every resident of the state is automatically registered. So, Oregon may be the best state in the US to represent a truly democratic and open process. Some say this arrangement favors Sanders, but in fact, it seems to favor neither candidate. So, Oregon will be interesting.
Results will be posted here when they are available. The Oregon results will not be available until really late, maybe Wednesday some time.
With 99.8% of the vote counted, Hillary Clinton is being called the winner of the Kentucky Primary, by about 1,800 votes.
Meanwhile, in Oregon …
My model, which is generally pretty accurate, predicted a Sanders win, and that happened.
But Sanders did not do as well as he should have. Here are the numbers with about 77% counted.
This may change as more votes are counted. And, the numbers are not really all that far off. But, Sanders, in the end, will take fewer delegates from Oregon to the national convention than I was thinking he would, and he needed more than I was thinking to have any kind of chance of closing the gap.
It will be interesting to see if, when I re-run the model with the latest info, the Oregon gap between expected and actual closes up. (Since the Oregon data will be in the model, it will close up, but by how much?)
Oregon might have some explaining to do, and that will probably be in the framework of their new and unique way of voting. This could be quite interesting.
Although chewing gum is designed to be chewed and not swallowed, it generally isn’t harmful if swallowed. Folklore suggests that swallowed gum sits in your stomach for seven years before it can be digested. But this isn’t true. If you swallow gum, it’s true that your body can’t digest it. But the gum doesn’t stay in your stomach. It moves relatively intact through your digestive system and is excreted in your stool.
On rare occasions, large amounts of swallowed gum combined with constipation have blocked intestines in children. It’s for this reason that frequent swallowing of chewing gum should be discouraged, especially in children.
The American Chemical Society has a nice video explaining what happens when you swallow gum: