When there is a major climate disaster in the US, people move. Because the US is big and has large gaps in population, the result looks different than it does when a disaster strikes elsewhere. Five million (or more) Syrians leaving the Levant left a major mark across the globe. A half million leaving the Katrina-hit zone was barely noticed on a global, or even national, scale, not just because it was one tenth the number, but because of our size and space as well.
Something close to half of the 400K or so displaced by Katrina (over half of them from NOLA) have returned to the vicinity where they formerly lived, and only a third to their original location. The others are all over the place, distributed along a rapidly decreasing distance-decay function. So these displacements, in the US, tend to be very long term and can thus affect demography and politics far afield.
An exodus from Puerto Rico will likely have a different decay function than Katrina's because Puerto Rico is, and apparently few people know this, an island! In any event, an exodus from Puerto Rico is likely, and it is starting to look like it will be sufficient to make Florida less Purple and more Blue, and specifically, more anti-Trump.
Note that in the past, New York was the most likely destination for a person moving from Puerto Rico, which is funny given Trump’s statements about all his Puerto Rican friends. For those not from that region, Puerto Ricans have long been hated by white supremacists in the greater NY metro area. But I digress. Anyway, over recent years, Florida has become a growing center of the mainland US Puerto Rican community.
For context: There are about 3.5 million people living in Puerto Rico who identify as Puerto Rican, and about 5.3 million self-identified Puerto Ricans in the lower 48. Currently there are somewhat under one million in Florida and somewhat over one million in New York, but Puerto Ricans are everywhere in the US, with the fewest in the upper plains and the most in the greater NY area (as far out as Pennsylvania) and Florida.
We are concerned that cholera will spread in Puerto Rico. You may remember the ca. 2011 epidemic that mainly struck Haiti (see chart above); there was another ten years earlier. There is some interesting research linking cholera to climate change. The pathogen, Vibrio cholerae, lives in coastal waters, where it has a keystone commensal relationship with copepods and other microinvertebrates. We think of cholera as a highly contagious pathogen among humans, but it starts from its natural reservoirs in water. In some areas of South Asia, cholera was significantly attenuated by the discovery that simply passing well water through common cotton cloth filtered out enough of the pathogen to make a difference, at least in some contexts.
For historical context, there was a huge cholera epidemic in the Caribbean in the 19th century, and I understand this event, which killed something like 30,000 in Puerto Rico alone, is still a traumatic memory in the region. From a 2011 summary of the historic epidemic, written I suspect in response to the re-emergence of the problem about six years ago:
The Caribbean region experienced cholera in 3 major waves… The 3 periods of cholera in the Caribbean that we have identified are 1833–1834 (with, according to Kiple, possible lingering cholera in outlying areas until late 1837 or early 1838) in Cuba; 1850–1856 in Jamaica, Cuba, Puerto Rico, St. Thomas, St. Lucia, St. Kitts, Nevis, Trinidad, the Bahamas, St. Vincent, Granada, Anguilla, St. John, Tortola, the Turks and Caicos, the Grenadines (Carriacou and Petite Martinique), and possibly Antigua; and 1865–1872 in Guadeloupe, Cuba, St. Thomas, the Dominican Republic, Dominica, Martinique, and Marie Galante.
It is thought that cholera is more likely to be abundant, and to spread into human populations, where waters are warmer, and the range over which cholera is a lingering constant threat in coastal waters is likely increasing. Increased air temperatures and rainfall can also increase the growth or spread of cholera in the wild. This relationship was first identified in the 1990s and has since been demonstrated through several studies. The next few weeks and months in Puerto Rico are an accidental and potentially horrific experimental laboratory for the science that has been percolating along over the last 20 years.
It isn’t. Well, it is a little, but not totally. OK, it is, but actually, it is complicated.
First, you are probably asking about the Atlantic hurricane season, not the global issue of hurricanes, typhoons, and such. If you are asking world-wide, recent prior years were worse as counted by how many humans were killed and how much damage was done.
With respect to the Atlantic, this was a bad year and there are special features of this year that were bad in a way that is best accounted for by global warming. But looking at the Atlantic hurricanes from a somewhat different but valid perspective, last year was worse (so far) and this year is ordinary, within the context of global warming. So, let’s talk about the global warming question first.
How Global Warming Makes Hurricane Seasons Worse
The effects of global warming on hurricanes in the Atlantic have two interesting features that must be understood to place this discussion in proper context.
First, we are having a bunch of bad decades in a row, probably because of global warming. If we compare any decade before 1980 with any decade after, or pre- vs. post-1990, or anything similar, the more recent years have had more hurricanes than the earlier years. Comparing to even earlier time periods is tricky because of differences in available data (satellites make a difference, probably, even with giant weather features like hurricanes). This is mainly due to increasing sea surface temperatures, but there are other factors as well.
Hurricanes are more likely to form when sea surface temperatures are higher. Higher sea surface temperatures can make a hurricane larger or stronger. Hurricanes will last longer if there is more hurricane-hot sea for them to travel over. If sea surface temperatures are high enough to spawn hurricanes earlier or later in the year, the hurricane season can be longer. Possibly, storms that in a non-warmed world would not have made it to “named storm” status are pushed to that level of strength and organization by elevated sea surface temperature.
Small increases in sea surface temperature cause large changes in hurricanes, and large changes in hurricanes cause larger changes in potential damage. The increase in Atlantic sea surface temperatures over recent decades has probably been sufficient, according to my thumb-suck estimate that I strongly suspect is close to correct, to make about half the hurricanes that would have existed anyway jump up one category. Then, when hurricanes get stronger, the amount of damage they can do goes up exponentially. So the sea surface temperature increases we’ve seen with global warming easily explain the fact that we’ve had more hurricanes overall, and stronger ones, over the last twenty or thirty years than during the previous years back to when the data are still pretty good.
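To get a feel for that nonlinearity: damage from wind is often modeled as scaling with a high power of wind speed, with the cube as a common lower-bound assumption and some empirical loss studies fitting much higher exponents. This toy sketch uses that assumption; the exponents and wind speeds are illustrative, not figures from this post:

```python
def relative_damage(v_new_mph, v_old_mph, exponent=3.0):
    """Ratio of damage potential, assuming damage scales as wind speed ** exponent.

    The cubic exponent is a common lower-bound assumption; empirical studies
    of economic losses often fit considerably higher exponents.
    """
    return (v_new_mph / v_old_mph) ** exponent

# A storm strengthening from ~100 mph (Cat 2) to ~115 mph (Cat 3):
# a 15% wind increase yields roughly 1.5x the damage potential at exponent 3,
# and more than 2x at exponent 6.
print(relative_damage(115, 100))
print(relative_damage(115, 100, exponent=6.0))
```

The point of the exercise is that a modest-sounding bump in intensity, the kind a small sea surface temperature increase can produce, does not translate into a modest-sounding bump in damage.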
Second, the science says this will get worse. There is one 2007 study (by Vecchi and Soden, in Geophysical Research Letters) suggesting that maybe in the Atlantic, smaller hurricanes will be less likely to form because of increased vertical wind shear, but that study does not mean much for larger or stronger hurricanes. This decade-old study is constantly cited as evidence that global warming will not increase hurricanes in the Atlantic. Other studies show that the overall amount of hurricane activity, the potential higher end of hurricane strength, the size, the speed at which they form, the amount of water they can carry, and possibly the likelihood of a hurricane stalling right after landfall all go up. Up. Up. Up. One study says down, and that word, “down,” resonates across the land like a sonic boom. The other studies say we can expect, and to varying degrees already see, up, up, up, up, and up, and denial makes words like “up” and “more” and “worse” and “exacerbated” dangerously quiet. Please don’t fall into that trap. Oh, by the way, the one study that says “down” has not been replicated, and though experts feel it has some merit, it is far from proven and there are reasons to suggest it may be problematic.
Comparing the 2017 Atlantic Hurricane Season to Other Years
Funny thing about hurricanes: They exist whether or not they menace you. Every year a certain number of hurricanes (usually) form and wander about in the Atlantic ocean for a while, maybe hitting some boats, but otherwise doing little more than causing some big waves to eventually reach beaches in the Caribbean or the eastern US.
This year, we’ve had four major hurricanes so far. Harvey, which maxed out as a Cat 4, ravaged and flooded Texas and Louisiana. Irma, maxing out at Cat 5, ravaged Florida after wiping out islands in the Leewards and doing great damage to Cuba and elsewhere in the Caribbean. Maria, also maxing out at Cat 5, did major damage in the Leewards and notably wiped out Puerto Rico. So, three major hurricanes formed in the Atlantic and hit something major.
Meanwhile, Jose, another major hurricane at Cat 4 status, still spinning about in the North Atlantic, is one of those that hit nothing. And that’s all so far this year.
Last year, there were almost exactly the same number of named storms in total (so far) and just like 2017, 2016 had four major hurricanes.
You remember Matthew, which scraped the Atlantic coast and was rather damaging. But do you remember Gaston (Cat 3)? Nicole (Cat 4)? Otto (Cat 3)?
Gaston and Nicole wandered about in the Atlantic and hit nothing. Otto was for real; it hit Central America, but not the US, so from the US perspective it counts as a non-hitting hurricane. Also, it was only barely Cat 3 and weakened quickly.
From 2000 to 2016, inclusive, we have had an average of 15 named storms per year, with a minimum of 8 and a maximum of 28, and with most years being between 10 and 16. So far 2017 has had 13 named storms. We may have a couple more. So, likely, we will be right in the middle.
For the same period, the number of hurricanes has ranged from 2 to 15, with an average of about 7. This year, we have had … wait for it … 7. We may or may not get another one, and two more is not very likely. In other words, this is an average year for the number of hurricanes.
For the same period, the number of major hurricanes ranges from 0 (though only one year had zero; it is more typical to have 2 in a low year) to 7, but again, 7 is extreme. It is usually from 2 to 5. The average is just over 3. This year, we have four. That’s pretty typical.
So, within the context that the last couple of decades have had a somewhat higher than average frequency of hurricanes, and probably more strong ones than previous decades, this was a typical year.
Why does it feel different? Why is it in fact different, with respect to the horror of it all? Because we had more landfalls, and more serious landfalls.
Keep in mind that Harvey could have hit Houston differently and done more damage. Keep in mind that Cuba beat up Irma, then Irma failed to strike Florida in just the right way to do maximum damage. Keep in mind that after wiping out Puerto Rico, Maria swerved quickly out to sea. In other words, keep in mind that this year could have been much worse than it was.
This is the point that you must understand: Any year can be like this year, or worse. And, with increasing sea surface temperatures and other global warming related factors, worse still.
This book should be on the shelf or in the classroom of every teacher of science, or even social science. It is essentially the highly digestible (and illustration-rich) version of the IPCC report on the scientific basis for climate change, written by one of that report’s famous authors: Dire Predictions, 2nd Edition: Understanding Climate Change
And now for the fun part, the toys. Amazon is having a huge sale on refurbished devices that you may want to have. I assume they are getting ready for the holidays or something. Go to this link to see what they are.
I myself got a Kindle Paperwhite e-reader a while back, and I love it. Then, for her birthday, I got one for Julia. I recommend starting out with the version with “special offers,” which are basically ads that are not shown while you are reading. The device is cheaper this way, and if the ads really annoy you, you can pay them off later to upgrade to the no-ad version.
I’m seriously thinking about getting Amanda one of these refurb-Kindle paperwhites. She likes the Kindle just enough for a refurbished one, maybe not enough for a new one…
At the very least, when you meet your teacher at the beginning of the school year, say to them what I say or something like it. “If you ever get hassled by anyone — parent, administration, other teachers — about teaching real science, let me know, I’ll be your best ally. Of course, if you are a science denier or a creationist so the situation is turned around, let me know, I’ll be your worst nightmare …” Then kind of pat them on the shoulder, flip your cape to one side, get on your motorcycle, and drive off.
Study Finds Top Fossil Fuel Producers’ Emissions Responsible for as Much as Half of Global Surface Temperature Increase, Roughly 30 Percent of Global Sea Level Rise
Findings Provide New Data to Hold Companies Responsible for Climate Change
WASHINGTON (September 7, 2017)—A first-of-its-kind study published today in the scientific journal Climatic Change links global climate changes to the product-related emissions of specific fossil fuel producers, including ExxonMobil and Chevron. Focusing on the largest gas, oil and coal producers and cement manufacturers, the study calculated the amount of sea level rise and global temperature increase resulting from the carbon dioxide and methane emissions from their products as well as their extraction and production processes.
The study quantified climate change impacts of each company’s carbon and methane emissions during two time periods: 1880 to 2010 and 1980 to 2010. By 1980, investor-owned fossil fuel companies were aware of the threat posed by their products and could have taken steps to reduce their risks and share them with their shareholders and the general public.
“We’ve known for a long time that fossil fuels are the largest contributor to climate change,” said Brenda Ekwurzel, lead author and director of climate science at the Union of Concerned Scientists (UCS). “What’s new here is that we’ve verified just how much specific companies’ products have caused the Earth to warm and the seas to rise.”
The study builds on a landmark 2014 study by Richard Heede of the Climate Accountability Institute, one of the co-authors of the study published today. Heede’s study, which also was published in Climatic Change, determined the amount of carbon dioxide and methane emissions that resulted from the burning of products sold by the 90 largest investor- and state-owned fossil fuel companies and cement manufacturers.
Ekwurzel and her co-authors inputted Heede’s 2014 data into a simple, well-established climate model that captures how the concentration of carbon emissions increases in the atmosphere, trapping heat and driving up global surface temperature and sea level. The model allowed Ekwurzel et al. to ascertain what happens when natural and human contributions to climate change, including those linked to the companies’ products, are included or excluded.
The study found that:
<li>Emissions traced to the 90 largest carbon producers contributed approximately 57 percent of the observed rise in atmospheric carbon dioxide, nearly 50 percent of the rise in global average temperature, and around 30 percent of global sea level rise since 1880.</li>
<li>Emissions linked to 50 investor-owned carbon producers, including BP, Chevron, ConocoPhillips, ExxonMobil, Peabody, Shell and Total, were responsible for roughly 16 percent of the global average temperature increase from 1880 to 2010, and around 11 percent of the global sea level rise during the same time frame.</li>
<li>Emissions tied to the same 50 companies from 1980 to 2010, a time when fossil fuel companies were aware their products were causing global warming, contributed approximately 10 percent of the global average temperature increase and about 4 percent sea level rise since 1880.</li>
<li>Emissions traced to 31 majority state-owned companies, including Coal India, Gazprom, Kuwait Petroleum, Pemex, Petroleos de Venezuela, National Iranian Oil Company and Saudi Aramco, were responsible for about 15 percent of the global temperature increase and approximately 7 percent of the sea level rise between 1880 and 2010.</li>
“Until a decade or two ago, no corporation could be held accountable for the consequences of their products’ emissions because we simply didn’t know enough about what their impacts were,” said Myles Allen, a study co-author and professor of geosystem science at the University of Oxford in England. “This study provides a framework for linking fossil fuel companies’ product-related emissions to a range of impacts, including increases in ocean acidification and deaths caused by heat waves, wildfires and other extreme weather-related events. We hope that the results of this study will inform policy and civil society debates over how best to hold major carbon producers accountable for their contributions to the problem.”
The question of who is responsible for climate change and who should pay for its related costs has taken on growing urgency as climate impacts worsen and become costlier. In New York City alone, officials estimate that it will cost more than $19 billion to adapt to climate change. Globally, adaptation cost projections are equally astronomical. The U.N. Environment Programme estimates that developing countries will need $140 billion to $300 billion annually by 2030 and $280 billion to $500 billion annually by 2050 to adapt.
The debate over responsibility for climate mitigation and adaptation has long focused on the “common but differentiated responsibilities” of nations, a framework used for the Paris climate negotiations. Attention has increasingly turned to non-state actors, particularly the major fossil fuel producers.
“At the start of the Industrial Revolution, very few people understood that carbon dioxide emissions progressively undermine the stability of the climate as they accumulate in the atmosphere, so there was nothing blameworthy about selling fossil fuels to those who wanted to buy them,” said Henry Shue, professor of politics and international relations at the University of Oxford and author of a commentary on the ethical implications of the Ekwurzel et al. paper that was published simultaneously in Climatic Change. “But circumstances have changed radically in light of evidence that a number of investor-owned companies have long understood the harm of their products, yet carried out a decades-long campaign to sow doubts about those harms in order to ensure fossil fuels would remain central to global energy production. Companies knowingly violated the most basic moral principle of ‘do no harm,’ and now they must remedy the harm they caused by paying damages and their proportion of adaptation costs.”
Had ExxonMobil, for example, acted on its own scientists’ research about the risks of its products, climate change likely would be far more manageable today.
“Fossil fuel companies could have taken any number of steps, such as investing in clean energy or carbon capture and storage, but many chose instead to spend millions of dollars to try to deceive the public about climate science to block sensible limits on carbon emissions,” said Peter Frumhoff, a study co-author and director of science and policy at UCS. “Taxpayers, especially those living in vulnerable coastal communities, should not have to bear the high costs of these companies’ irresponsible decisions by themselves.”
Ekwurzel et al.’s study may inform approaches for juries and judges to calculate damages in such lawsuits as ones filed by two California counties and the city of Imperial Beach in July against 37 oil, gas and coal companies, claiming they should pay for damages from sea level rise. Likewise, the study should bolster investor campaigns to force fossil fuel companies to disclose their legal vulnerabilities and the risks that climate change poses to their finances and material assets.
Three statisticians go hunting for rabbit. They see a rabbit. The first statistician fires and misses, her bullet striking the ground below the beast. The second statistician fires and misses, their bullet striking a branch above the lagomorph. The third statistician, a lazy frequentist, says, “We got it!”
OK, that joke was not 1/5th as funny as any of XKCD’s excellent jabs at the frequentist-bayesian debate, but hopefully this will warm you up for a somewhat technical discussion on how to decide if observations about the weather are at all explainable with reference to climate change.
We are having this discussion here and now for two reasons. One is that Hurricane Harvey was (is) a very serious weather event in Texas and Louisiana that may have been made worse by the effects of anthropogenic global warming, and there may be another really nasty hurricane coming (Irma). The other is that Michael Mann, Elisabeth Lloyd and Naomi Oreskes have just published a paper that examines so-called frequentist vs so-called Bayesian statistical approaches to the question of attributing weather observations to climate change.
First, I’ll give you the abstract of the paper then I’ll give you my version of how these approaches are different, and why I’m sure the authors are correct.
The conventional approach to detecting and attributing climate change impacts on extreme weather events is generally based on frequentist statistical inference wherein a null hypothesis of no influence is assumed, and the alternative hypothesis of an influence is accepted only when the null hypothesis can be rejected at a sufficiently high (e.g., 95% or “p = 0.05”) level of confidence. Using a simple conceptual model for the occurrence of extreme weather events, we show that if the objective is to minimize forecast error, an alternative approach wherein likelihoods of impact are continually updated as data become available is preferable. Using a simple proof-of-concept, we show that such an approach will, under rather general assumptions, yield more accurate forecasts. We also argue that such an approach will better serve society, in providing a more effective means to alert decision-makers to potential and unfolding harms and avoid opportunity costs. In short, a Bayesian approach is preferable, both empirically and ethically.
Frequentist statistics is what you learned in your statistics class, if you are not an actual statistician. I want to know if using Magic Plant Dust on my tomatoes produces more tomatoes. So, I divide my tomato patch in half and put a certain amount of Magic Plant Dust on one half. I then keep records of how many tomatoes, and of what mass, the plants yield. I can calculate the number and mass of the tomatoes for each plant, and use the average and variation I observe for each group to get two sets of numbers. My “null hypothesis” is that adding the magic dust has no effect. Therefore, the resulting tomato yield from the treated plants should be statistically the same as from the untreated plants. I can pick any of a small number of statistical tools, all of which are doing about the same thing, to come up with a test statistic and a “p-value” that allows me to make some kind of standard statement like “the treated plants produced more tomatoes” and to claim that the result is statistically significant.
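The tomato experiment can be sketched in a few lines. Everything here is invented for illustration (the yields, the sample sizes, and my choice of Welch's t-test as the "small number of statistical tools"); it just shows the frequentist workflow: compute a test statistic, compare it to a threshold, declare significance or not.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / ((va / na + vb / nb) ** 0.5)

# Hypothetical yields in kg per plant; the numbers are made up.
treated   = [4.1, 3.8, 4.4, 4.0, 3.9, 4.3, 4.2, 3.7]
untreated = [3.9, 3.6, 4.1, 3.8, 3.7, 4.0, 4.0, 3.5]

t = welch_t(treated, untreated)
# With ~14 degrees of freedom, |t| must exceed roughly 2.1 to reject the null
# at 95% confidence. Here t comes out just under 2, so the frequentist fails
# to reject "no effect" even though every treated mean is nudged upward.
print(f"t = {t:.2f}")
```

That borderline outcome is exactly the situation described next: a small real effect that a single season's worth of data cannot confirm.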
If the difference, though, is very small, I might not get a good statistical result. So, maybe I do the same thing for ten years in a row. Then, I have repeated the experiment ten times, so my statistics will be more powerful and I can be more certain of an inference. Over time, I get sufficient sample sizes. Eventually I conclude that Magic Plant Dust might have a small effect on the plants, but not every year, maybe because other factors are more important, like how much water they get or the effects of tomato moth caterpillars.
In an alternative Bayesian universe, prior to collecting any data on plant growth, I do something very non-statistical. I read the product label. The label says, “This product contains no active ingredients. Will not affect tomato plants. This product is only for use as a party favor and has no purpose.”
Now, I have what a Bayesian statistician would call a “prior.” I have information that could be used, if I am clever, to produce a statistical model of the likely outcome of the planned experiments. In this case, the likely outcome is that there won’t be a change.
Part of the Bayesian approach is to employ a statistical technique based on Bayes Theorem to incorporate a priori assumptions or belief and new observations to reach towards a conclusion.
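As a toy illustration of how a prior enters the arithmetic (the numbers are invented, not from the post): suppose I score each of 20 hypothetical growing seasons by whether the dusted half out-yields the undusted half, and let p be the probability of a "dusted wins" season. With a Beta prior and binomial data, Bayes' theorem reduces to two additions:

```python
def beta_update(prior_a, prior_b, wins, trials):
    """Conjugate update: Beta(a, b) prior + binomial data -> Beta(a+wins, b+losses)."""
    return prior_a + wins, prior_b + (trials - wins)

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

wins, trials = 14, 20  # hypothetical: the dusted side out-yields in 14 of 20 seasons

# Skeptical prior, motivated by the product label ("no active ingredients"):
# a Beta(50, 50) is strongly concentrated around p = 0.5, i.e. no effect.
a, b = beta_update(50, 50, wins, trials)
print(beta_mean(a, b))  # ~0.53: the skeptical prior barely budges

# Flat prior (no label, no opinion): the same data is far more persuasive.
a, b = beta_update(1, 1, wins, trials)
print(beta_mean(a, b))  # ~0.68
```

The label does real work here: the same 14-of-20 result moves a skeptic only slightly, which is the Bayesian way of saying that party-favor dust probably got lucky.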
In my view, the Bayesian approach is very useful in situations where we have well understood and hopefully multiple links between one or more systems and the system we are interested in. We may not know all the details that relate observed variation in one system and observed variation in another, but we know that there is a link, that it should be observable, and perhaps we know the directionality or magnitude of the effect.
The relationship between climate change and floods serves as an example. Anthropogenic climate change has resulted in warmer sea surface temperatures and warmer air. It would be very hard to make an argument from the physics of the atmosphere that this does not mean that more water vapor will be carried by the air. If there is more water vapor in the air, there is likely to be more rain. Taken as a Bayesian prior, the heating of the Earth’s surface means more of the conditions that would result in floods, even if the details of when, how much, and where are vague at this level.
A less certain but increasingly appreciated effect of climate change is the way trade winds and the jet stream move around the planet. Without going into details, climate change over the last decade or two has probably made it more likely that large storm systems stall. Storms that would have moved quickly through an area are now observed to slow down. If a storm would normally drop one inch of rain on the landscape over which it passes, but now slows down while raining at the same rate, perhaps 3 inches of rain will be dropped (over a shorter distance). What would have been a good watering of all the lawns is now a localized flood.
That is also potentially a Bayesian prior. Of special importance is that these two Bayesian priors imply change in the same direction. Since in this thought experiment we are thinking about floods, we can see that these two prior assumptions together suggest that a post-climate change weather would include more rain falling from the sky in specific areas.
There are other climate change related factors that suggest increased activity of storms. The atmosphere should have more energy, thus more energetic storms. In some places there should be more of the kind of wind patterns that spin up certain kinds of storms. It is possible that the relationship between the temperature of the air at different altitudes, up through the troposphere and into the lower stratosphere, has changed so that large storms are likely to get larger than they otherwise might.
There is very little about climate change that implies the reverse. Though there may be a few subsets of storm-related weather that would be reduced with global warming, most changes are expected to result in more storminess, more storms, more severe storms, or something.
So now we have the question, has climate change caused any kind of increase in storminess?
I’d like to stipulate that there was a kind of turning point in our climate around 1979, before which we had a couple of decades of storminess at a certain level, and after which, we have a potentially different level. This is also a turning point in measured surface heat. In, say, 1970 plus or minus a decade, it was possible to argue that global warming is likely, but given the observations and data at the time, it was hard to point to much change (though we now know, looking back with better data for the previous centuries, that it was actually observable). But, in 2008, plus or minus a decade, it was possible to point to widespread if anecdotal evidence of changes in storm frequency, patterns, and effects, as well as other climate change effects, not the least of which was simply heat.
I recently watched the documentary “An Inconvenient Sequel.” This is a fairly misunderstood film. It is not really part two of Al Gore’s original “An Inconvenient Truth.” The latter was really Al Gore’s argument about climate change, essentially presented by him. “An Inconvenient Sequel” was made by independent filmmakers with no direct input from Gore with respect to contents and production, though it is mostly about him, him talking, him making his point, etc. But I digress. Here is the salient fact associated with these two movies. An Inconvenient Truth came out in May 2006, so it is based mainly on information available in 2005 and before. In it, there are examples of major climate change effects, including Katrina, but it seems like the total range of effects is more or less explicated almost completely. When An Inconvenient Sequel came out a few weeks ago, a solid 10+ years had passed, and the list of actual climate effects noted in the movie was a sampling, not anything close to a full explication, of the things that had happened over recent years. Dozens of major flooding, storming, drying, and deadly heat events had occurred, of which only a few of each were mentioned, because there was just so much stuff.
My point is that there is a reasonable hypothesis based on anecdotal observation (at least) that many aspects of weather in the current decade, or the last 20 years, or since 1979 as I prefer, are different in frequency and/or severity than before, because of climate change.
A frequentist approach does not care why I think a certain hypothesis is workable. I could say “I hypothesize that flies can spontaneously vanish with a half life of 29 minutes,” and I could say “I hypothesize that if a fly lays eggs on a strawberry there will later be an average of 112 maggots.” The same statistical tests will be usable, and the same philosophy of statistics will be applied.
A Bayesian approach doesn’t technically care what I think either, but what I think a priori is actually relevant to the analysis. I might for example know that the average fly lays 11 percent of her body mass in one laying of eggs, and that is enough egg mass to produce about 90-130 maggots (I am totally making this up) so that observational results that are really small (like five maggots) or really large (like 1 million maggots) are very unlikely a priori, and, results between 90 and 130 are a priori very likely.
So, technically, a Bayesian approach is different because it includes something that might be called common sense but really is an observationally derived statistical parameter that is taken very seriously by the statistical machinery itself. But, philosophically, it is a little like the pitcher of beer test.
I’ve mentioned this before but I’ll refresh your memory. Consider an observation that makes total sense based on reasonable prior thinking, but for which the standard frequentist approach fails to reject the null hypothesis of no change. The hypothesis of interest is that there were more tornadoes from, say, 1970 to the present than there were between 1950 and 1970. This graph suggests this is true…
… but because the techniques for observing and measuring tornado frequency have changed over time, nobody believes the graph to be good data. But it may not be bad data. In other words, the questions hanging over the graph keep it from settling the hypothesis, but the graph is suggestive.
So, I take a half dozen meteorologists who are over 55 years old (so they’ve seen things, done things) out for a beer. The server is about to take our order, and I interrupt. I ask all the meteorologists to answer the question … using this graph and whatever else you know, are there more tornadoes in the later time interval or not? Write your answer down on this piece of paper, I say, and don’t share your results. But, when we tally them up, if and only if you all have the same exact answer (all “yes” or all “no”) then this pitcher of beer is on me.
Those are quasi-Bayesian conditions (given that these potential beer drinkers have priors in their heads already, and that the graph is suggestive if not conclusive), but more importantly, there is free beer at stake.
They will all say “yes” and there will be free beer.
OK, back to the paper.
Following the basic contrast between frequentist and Bayesian approaches, the authors produce competing models, one based on the former, the other on the latter. “In the conventional, frequentist approach to detection and attribution, we adopt a null hypothesis of an equal probability of active and inactive years … We reject it in favor of the alternative hypothesis of a bias toward more active years … only when we are able to achieve rejection of H0 at a high… level of confidence”
In the Bayesian version, a probability distribution that assumes a positive (one-directional) effect on the weather is incorporated, as noted above, using Bayes’ theorem.
Both methods eventually show the link between climate change and effect in this modeled scenario, but the frequentist approach is much more conservative and thus, until the process is loaded up with a lot of data, more likely to miss the link, while the Bayesian approach correctly identifies the relationship, and does so more efficiently.
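A toy simulation can illustrate that asymmetry. What follows is my own sketch of the general idea, not the authors’ model: each “year” is active with a true probability of 0.65, the frequentist tests against the null hypothesis p = 0.5 (equal odds of active and inactive years), and the Bayesian starts from an assumed prior tilted toward more active years (a Beta(6, 4), standing in for physical knowledge) and tracks the posterior probability that p > 0.5:

```python
import random
from math import comb

random.seed(42)

TRUE_P = 0.65   # assumed true chance a year is "active"
YEARS = 40
flips = [random.random() < TRUE_P for _ in range(YEARS)]

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Frequentist: reject H0 (p = 0.5) only when the p-value is small.
# Bayesian: Beta(6, 4) prior (mean 0.6) updated by the observed counts;
# report the posterior probability that p > 0.5, estimated by sampling.
for n in (10, 20, 40):
    k = sum(flips[:n])
    pval = binom_sf(k, n)
    a, b = 6 + k, 4 + (n - k)                       # conjugate update
    draws = [random.betavariate(a, b) for _ in range(20000)]
    post = sum(d > 0.5 for d in draws) / len(draws)
    print(f"n={n:2d}  active={k:2d}  p-value={pval:.3f}  P(p>0.5|data)={post:.3f}")
```

In runs like this, the posterior probability that p > 0.5 tends to climb past 0.95 well before the one-sided p-value drops below 0.05, which is the efficiency point the paper is making.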
The authors argue that the Bayesian method is more likely to accurately detect the link between cause and effect, and this is almost certainly correct.
This is what this looks like: Frank Frequency, weather commenter on CNN says, “We can’t attribute Hurricane Harvey, or really, any hurricane, to climate change until we have much more data and that may take 100 years because the average number of Atlantic hurricanes to make landfall is only about two per year.”
Barbara Bayes, weather commenter on MSNBC, says, “What we know about the physics of the atmosphere tells us to expect increased rainfall, and increased energy in storms, because of global warming, so when we see a hurricane like Harvey it is really impossible to set aside this prior knowledge when we are explaining the storm’s heavy rainfall and rapid strengthening. The fact that everywhere we can measure possible climate change effects on storms, the storms seem to be acting as expected under climate change, makes this link very likely.”
I hasten to add that this paper is not about hurricanes, or severe weather per se, but rather, on what statistical philosophy is better for investigating claims linking climate change and weather. I asked the paper’s lead author, Michael Mann (author of The Madhouse Effect: How Climate Change Denial Is Threatening Our Planet, Destroying Our Politics, and Driving Us Crazy, The Hockey Stick and the Climate Wars: Dispatches from the Front Lines, and Dire Predictions, 2nd Edition: Understanding Climate Change), about Hurricane Harvey specifically. He told me, “As I’ve pointed out elsewhere, I’m not particularly fond of the standard detection & attribution approach for an event like Hurricane Harvey for a number of reasons. First of all, the question isn’t whether or not climate change made Harvey happen, but how it modified the impacts of Harvey. For one thing, climate change-related Sea Level Rise was an important factor here, increasing the storm surge by at least half a foot.” Mann recalls the approach taken by climate scientist Kevin Trenberth, who “talks about how warmer sea surface temperatures mean more moisture in the atmosphere (about 7% per degree C) and more rainfall. That’s basic physics and thermodynamics we can be quite certain of.”
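The roughly 7% per degree C that Trenberth cites is the Clausius-Clapeyron scaling of the atmosphere’s water-holding capacity with temperature. A back-of-envelope sketch (my arithmetic, using Trenberth’s rate and the roughly half-degree of regional warming Mann mentions):

```python
# Back-of-envelope Clausius-Clapeyron scaling: saturation vapor
# pressure, and hence potential atmospheric moisture, rises roughly
# 7% for each degree C of sea surface warming.
RATE = 0.07  # ~7% per deg C (Trenberth's figure)

def moisture_increase(delta_t_c):
    """Fractional increase in potential moisture for delta_t_c of warming."""
    return (1 + RATE) ** delta_t_c - 1

# The ~0.5 C of recent warming cited for the Gulf region, and a full degree:
print(f"{moisture_increase(0.5):.1%} more moisture for +0.5 C")
print(f"{moisture_increase(1.0):.1%} more moisture for +1.0 C")
```

A half degree of sea surface warming works out to roughly 3-4% more potential moisture, which is consistent with the 3% per 0.5 C figure quoted later from Mann’s Guardian piece.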
The authors go a step further, in that they argue there is an ethical consideration at hand. In a sense, an observer or commenter can decide to become a frequentist, even one with a penchant for very low p-values, for the purpose of writing off the effects of climate change. (They don’t say that, but to me it is a clear implication.) We see this all the time, and it is in fact a common theme in the nefarious politicization of the climate change crisis.
Or, an observer can choose to pay attention to the rather well developed priors, the science that provides several pathways linking climate change to severe weather and other effects, and then, using the appropriate statistical approach (the one you use when you know stuff), be more likely to make a reasonable and intelligent evaluation, and to get on with the business of finding out in more detail how, when, where, and how much each of these effects has taken hold or will take hold.
The authors state that one “… might therefore argue that scientists should err on the side of caution and take steps to ensure that we are not underestimating climate risk and/or underestimating the human component of observed changes. Yet, as several workers have shown …the opposite is the case in prevailing practice. Available evidence shows a tendency among climate scientists to underestimate key parameters of anthropogenic climate change, and thus, implicitly, to understate the risks related to that change”
While I was in contact with Dr. Mann, I asked him another question. His group at Penn State makes an annual prediction of the Atlantic hurricane season, and of the several different annual stabs at this problem, the PSU group’s tends to do pretty well. So I asked him how this season seemed to be going, which partly requires reference to the Pacific weather pattern ENSO (El Nino and company). He told me:
We are ENSO neutral but have very warm conditions in the main development region of the Tropics (which is a major reason that Irma is currently intensifying so rapidly). Based on those attributes, we predicted before the start of the season (in May) that there would be between 11 and 20 storms, with a best estimate of 15 named storms. We are currently near the half-way point of the Atlantic hurricane season, and with Irma have reached 9 named storms, with another potentially to form in the Gulf over the next several days. So I suspect when all is said and done, the total will be toward the upper end of our predicted range.
I should point out that Bayesian statistics are not new, just not as standard as one might expect, partly because, historically, this method has been hard to compute. So, frequency based methods have decades of a head start, and statistical methodology tends to evolve slowly.
This is a picture of some men.
Since they are men, they have some abilities. They can, for example, knock each other over, and they can play with balls. This is what men do, and this is what these men can do.
This is a picture of some professional NFL football players.
They are also men. They can also knock each other over, and they can also play with balls. But the NFL football players are much better at knocking each other over, and you wouldn’t believe how great they are at playing with balls.
They are NFL enhanced. They are trained, embiggened with special diets, and they are clad with armor and vibrant, often scary, colors.
This is a picture of a hurricane from 1938.
It was a big one; it did lots of damage when it slammed into New England and New York.
A hurricane is a large storm that forms in the tropics and sometimes hits land. The energy in a hurricane comes from a combination of the earth’s spin, trade winds, and so on, but mainly from the heat at the surface of the sea. The rain that falls from a hurricane also comes, indirectly, mainly from the sea surface, along with whatever other water evaporates into the atmosphere.
This is a picture of Harvey the Hurricane, the remnants of which are still circulating around in Texas.
Harvey is a lot like the 1938 hurricane, in that it formed in the tropics, in the Atlantic, and was a big spinny thing. It got its energy in the same way, and formed in the same way, and both slammed into land and scared the crap out of everybody.
But they are different, the 1938 Hurricane and Harvey the Hurricane. How are they different? Have a look at this map:
The pairs of photos above show “then” and “now” for two different things (men and hurricanes). This map shows both then and now in the same graphic. It represents current sea surface temperature anomalies, meaning how much warmer or cooler current sea temperatures are compared to the same time of year in the past, averaged over a long baseline period, in this case 1971-2000. Global warming was already well underway during that period, so present sea surface temperature readings above that baseline are not merely high; they are very high, because the baseline itself is high.
In this map, red is more, blue is less. Look at all the nearly ubiquitous more-ness in sea surface temperatures around the world. It means the atmosphere across the entire globe can potentially contain much more water vapor than it could have during that baseline period. Look at the sea surface temperature anomalies for the Gulf of Mexico, where Harvey formed. They are high. This means that any hurricane that forms over that extra warm water will be stronger, and any tropical storm system that occurs pretty much anywhere on this map (or around the other side of the Earth, for that matter) will contain more water, than it would have if it had existed, all else being equal, several decades ago.
This is a picture of a Unicorn.
A unicorn poops rainbows and pees mimosas. Or so I’m told. This is another view of Harvey the Hurricane.
What is the difference between the unicorn and Harvey? Harvey is real, and the unicorn is not.
I won’t quote you or give you links. Why? Because I find this whole thing a bit too embarrassing. But here is the thing. Otherwise intelligent and well informed individuals have stated in various outlets, including major media, and including twitter, that it is simply inappropriate to claim that Harvey the Hurricane is in any way global warming enhanced.
This is wrong. There is no such thing as a storm of any kind that is not a function of the current climatology. The current climatology has widespread, persistent, and in many cases alarmingly high sea surface temperature anomalies. There will not be a tropical storm, hurricanes included, that escapes the physics to poop out rainbows and pee mimosas. They will all be real. They will all have greater power and more moisture than they otherwise would have, had they formed decades ago, before the extreme global warming we have experienced so far.
There was a time when Harvey was a rabbit, an invisible rabbit only seen by a delusional character in a movie, played by Jimmy Stewart. Today, we have Harvey the Unenhanced Storm, playing that role. It is a fiction, something seen by a few but that is no more real than the above depicted unicorn.
As I was writing this post, Michael Mann posted an item in the Guardian that makes this case.
Sea level rise attributable to climate change – some of which is due to coastal subsidence caused by human disturbance such as oil drilling – is more than half a foot (15cm) over the past few decades … That means the storm surge was half a foot higher than it would have been just decades ago, meaning far more flooding and destruction.
… sea surface temperatures in the region have risen about 0.5C (close to 1F) over the past few decades from roughly 30C (86F) to 30.5C (87F), which contributed to the very warm sea surface temperatures (30.5-31C, or 87-88F).
… there is a roughly 3% increase in average atmospheric moisture content for each 0.5C of warming. Sea surface temperatures in the area where Harvey intensified were 0.5-1C warmer than current-day average … That means 3-5% more moisture in the atmosphere.
That large amount of moisture creates the potential for much greater rainfalls and greater flooding. The combination of coastal flooding and heavy rainfall is responsible for the devastating flooding that Houston is experiencing.
… there is a deep layer of warm water that Harvey was able to feed upon when it intensified at near record pace as it neared the coast….
Harvey was almost certainly more intense than it would have been in the absence of human-caused warming, which means stronger winds, more wind damage and a larger storm surge…
Mann mentions other effects as well, but I’ll let you go read them.
The extra heat at depth Mann mentions is now recognized as responsible for the extra bigness and badness of some other famous hurricanes as well, such as Katrina and Haiyan. Harvey might be a member of a small but growing class of hurricanes, deep-heat hurricanes I’ll call them for now, that simply did not exist prior to global warming of recent decades. Further research is needed on this, but that’s the direction we are heading.
Climate scientist Kevin Trenberth recently noted that “the human contribution can be up to 30 percent or so of the total rainfall coming out of the storm.”
Aside from his Guardian article, Michael Mann has a Facebook post making the same argument.
Harvey the Hurricane is real, and so was the 1938 Hurricane. Climate change enhancement of Harvey is real, but unicorns are not. Sadly.
I really thought we had stopped hearing this meme, that “you can never attribute a given weather event to climate change.” But, apparently not. That is a statement that is technically true in the same way that we can’t really attribute an Alberta Clipper (a kind of snow storm) to the spin of the Earth. Yet, somehow, the spin of the Earth is why Alberta Clippers come from Alberta. In other words, the statement is a falsehood that can never be evaluated because it is framed incorrectly. Here is the correct framing:
Climate is weather long term, and weather is climate here and now. The climate has changed. Ergo … you fill in the blank. Hint: Unicorns are not involved.
Remember the revelation, back a year or so ago, that Exxon Mobil knew all about the likely effects of the global warming it contributed to, followed by Exxon’s denials that any such thing was true, yada yada yada?
A paper has just come out that confirms what we all said then. From the abstract:
This paper assesses whether ExxonMobil Corporation has in the past misled the general public about climate change. We present an empirical document-by-document textual content analysis and comparison of 187 climate change communications from ExxonMobil, including peer-reviewed and non-peer-reviewed publications, internal company documents, and paid, editorial-style advertisements (‘advertorials’) in The New York Times. We examine whether these communications sent consistent messages about the state of climate science and its implications—specifically, we compare their positions on climate change as real, human-caused, serious, and solvable. In all four cases, we find that as documents become more publicly accessible, they increasingly communicate doubt. This discrepancy is most pronounced between advertorials and all other documents. For example, accounting for expressions of reasonable doubt, 83% of peer-reviewed papers and 80% of internal documents acknowledge that climate change is real and human-caused, yet only 12% of advertorials do so, with 81% instead expressing doubt. We conclude that ExxonMobil contributed to advancing climate science—by way of its scientists’ academic publications—but promoted doubt about it in advertorials. Given this discrepancy, we conclude that ExxonMobil misled the public. Our content analysis also examines ExxonMobil’s discussion of the risks of stranded fossil fuel assets. We find the topic discussed and sometimes quantified in 24 documents of various types, but absent from advertorials. Finally, based on the available documents, we outline ExxonMobil’s strategic approach to climate change research and communication, which helps to contextualize our findings.
The article is free and open access so you can read the whole thing!
Posting this with no comment, because it is both expected and so obviously bone-headed Trump:
US President Donald Trump’s administration has disbanded a government advisory committee that was intended to help the country prepare for a changing climate.
The US National Oceanic and Atmospheric Administration established the committee in 2015 to help businesses and state and local governments make use of the next national climate assessment. The legally mandated report, due in 2018, will lay out the latest climate-change science and describe how global warming is likely to affect the United States, now and in coming decades.
The advisory group’s charter expired on 20 August, and Trump administration officials informed members late last week that it would not be renewed. “It really makes me worried and deeply sad,” says Richard Moss, a climate scientist at the University of Maryland in College Park and co-chair of the committee. “It’s another thing that is just part of the political football game.”
First, there is the annual up and down cycle that happens because there is more land in the Northern Hemisphere. I won’t explain that to you now because I know you can figure out why that happens.
Second, there is natural variation up and down aside from that annual cycle that has to do with things like volcanoes and such. This includes the rate of forest fires, which increase greenhouse gases by turning some of the Carbon trapped in plant tissue into gas form as CO2. (That was a hint for the answer to the first reason!)
There was a big spike in CO2 concentration this year, and it was caused by El Nino increasing forest fire output, which in turn freed up some of that CO2. Also, regional drought in some places simply slowed plant growth, leaving some Carbon stranded in the atmosphere.
So was that natural? Not at all. ENSO cycles, which cause El Nino and La Nina, constitute an oscillation in rainfall patterns, and part of that results in extra forest fires and the other effects mentioned. But these effects are caused directly by weather disruption, and human-caused global warming was already doing that. The severe El Nino of 2014-2016 was more severe (and probably longer) than any, or almost any, ever observed, precisely because it was a big climatological monster sitting on top of a big hill made by anthropogenic global warming.
But there is also another, subtler but very important lesson in this event. At any given time we could have what would normally be a “natural” shift to bad conditions. But under global warming, such a shift can be transformed from a disaster into a much bigger disaster. In this way, think of climate change as the steepening of the drop-off alongside the road, from a two foot ditch to a ten foot embankment. When we drive off the road due to natural forces (some ice, for example) without global warming, we get bounced around a bit. With global warming, we get to rely on our airbags to save us, but the airbag deployment will probably break both our arms and mess up our face.
Anyway, the confirmation of the role of El Nino comes from new research discussed here.