This graphic, by Boggis Makes Videos and posted to YouTube just a few days ago, breaks all the rules of how to make effective, understandable graphs for the general public. However, if you follow all those rules, it is difficult or impossible to get certain messages across. Therefore, this graphic is necessary, if a bit difficult. I would like you to watch it several times, with a prompt before each viewing, so that you fully appreciate it. This will only take you six or seven minutes, and I’m sure you weren’t doing anything else important.
Pass 1: How to read the graph
This graph’s basic data are temperature anomalies, not temperatures: the difference between the observed temperature, averaged over a month, and a 1961-1990 baseline. Global warming was already underway during that period, but it still works as a baseline. Anyway, notice the scale shown at the beginning of the presentation.
The graph shows the temperature anomaly across latitudes, using a circle meant to represent the Earth, so the north pole is on top, the south pole on the bottom, the equator halfway between, etc.
The height of the graph’s bars, as well as their color, shows the anomaly, and the beginning of the graphic shows you how far out, in standard deviations, the values are.
The graphic starts at 1900. The values are shown for each month, but they are 12-month moving averages; otherwise this graphic would give you a seizure.
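If you are curious how that smoothing works, here is a minimal sketch in Python; the monthly anomaly values are made up for illustration, not taken from the graphic’s data.

```python
# A 12-month moving average: each plotted point is the mean of a
# 12-month span of values, which smooths out month-to-month noise.
def moving_average(values, window=12):
    """Return the moving average of `values` over `window`-month spans."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Sixteen months of invented anomaly values, in degrees C:
monthly = [0.1, 0.4, -0.2, 0.3, 0.0, 0.5, -0.1, 0.2,
           0.6, -0.3, 0.4, 0.1, 0.2, 0.5, -0.1, 0.4]
smoothed = moving_average(monthly)  # five overlapping 12-month means
```

The jagged monthly series becomes a gently drifting curve, which is what makes the animation watchable.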
So watch the first 20 seconds or so as many times as you need to, to fully understand these details.
Pass 2: It is getting warmer and weirder
On the first pass, just note that the Earth gets warmer, at sea and on land (see the two graphics at the bottom). Notice that the variation from year to year, as well as the increase in temperature, really takes off in the 1980s. Notice that the surface warmth values increase dramatically starting in the 1990s. Notice that things get really wild over just the last ten years or less.
Pass 3: Ends and middles
On your third pass, and this may take a few passes, notice the difference between the equatorial, temperate, and polar regions, as well as the difference between the two poles.
Consider that the increased warming in arctic regions compared to other regions affects many aspects of our weather.
Consider that the increases in temperate and tropical regions mean that, over some periods of time, an increasingly larger area of the Earth becomes uninhabitable without air conditioning.
Notice that the northern and southern hemispheres don’t have the same exact pattern.
When there is a major climate disaster in the US, people move. Since the US is big and has large gaps in population, it looks different than when a disaster happens in some other places. Five million (or more) Syrians leaving the Levant left a major mark across the globe. A half million leaving the Katrina hit zone was barely noticed on a global, or even national, scale, not just because it was one tenth the amount, but because of our size and space as well.
Something close to half the 400K or so displaced by Katrina (over half of them from NOLA) have returned to the vicinity they formerly lived in, and only a third to the same original location. The others are all over the place, distributed with a rapidly decreasing distance decay function. So these displacements, in the US, tend to be very long term and can thus affect demography and politics far afield.
An exodus from Puerto Rico will likely have a different decay function than seen for Katrina because it is, and apparently few people know this, an island! But anyway, it is likely that there will be an exodus from Puerto Rico, and it is starting to look like it will be sufficient to make Florida less purple and more blue, and specifically, more anti-Trump.
Note that in the past, New York was the most likely destination for a person moving from Puerto Rico, which is funny given Trump’s statements about all his Puerto Rican friends. For those not from that region, Puerto Ricans have long been hated by white supremacists in the greater NY metro area. But I digress. Anyway, over recent years, Florida has become a growing center of the US mainland Puerto Rican community.
For context: there are about 3.5 million people living in Puerto Rico who identify as Puerto Rican, and about 5.3 million self-identified Puerto Ricans in the lower 48. Currently there are somewhat under one million in Florida and somewhat over one million in NY, but Puerto Ricans are everywhere in the US, with the fewest in the upper plains and the most in the greater NY area (as far out as Pennsylvania) and Florida.
We are concerned that cholera will spread in Puerto Rico. You may remember the ca. 2011 epidemic that mainly struck Haiti (see chart above). There was another ten years earlier. There is some interesting research out there linking cholera to climate change. The pathogen, Vibrio cholerae, lives in coastal waters, where it has a keystone commensal relationship with copepods and other microinvertebrates. We think of cholera as a highly contagious pathogen among humans, but it starts from its natural reservoirs in water. In some areas of South Asia, cholera was significantly attenuated by the discovery that simply passing well water through common cotton cloth filtered out the disease enough to make a difference, at least in some contexts.
For historical context, there was a huge cholera epidemic in the Caribbean in the 19th century, and I understand this event, which killed something like 30,000 in Puerto Rico alone, is still a traumatic memory in the region. From a 2011 summary of the historic epidemic, written I suspect in response to the re-emergence of the problem about six years ago:
The Caribbean region experienced cholera in 3 major waves… The 3 periods of cholera in the Caribbean that we have identified are 1833–1834 (with, according to Kiple, possible lingering cholera in outlying areas until late 1837 or early 1838) in Cuba; 1850–1856 in Jamaica, Cuba, Puerto Rico, St. Thomas, St. Lucia, St. Kitts, Nevis, Trinidad, the Bahamas, St. Vincent, Granada, Anguilla, St. John, Tortola, the Turks and Caicos, the Grenadines (Carriacou and Petite Martinique), and possibly Antigua; and 1865–1872 in Guadeloupe, Cuba, St. Thomas, the Dominican Republic, Dominica, Martinique, and Marie Galante.
It is thought that cholera is more likely to be abundant, and to spread into human populations, where waters are warmer, and the range over which cholera is a lingering constant threat in coastal waters is likely increasing. Also, increased air temperatures and rainfall can increase the growth or spread of cholera in the wild. This is a relationship first identified in the 1990s and since demonstrated in several studies. The next few weeks and months in Puerto Rico are an accidental and potentially horrific experimental laboratory for testing the science that has been percolating along over the last 20 years.
It isn’t. Well, it is a little, but not totally. OK, it is, but actually, it is complicated.
First, you are probably asking about the Atlantic hurricane season, not the global picture of hurricanes, typhoons, and the like. If you are asking worldwide, recent prior years were worse as counted by how many humans were killed and how much damage was done.
With respect to the Atlantic, this was a bad year and there are special features of this year that were bad in a way that is best accounted for by global warming. But looking at the Atlantic hurricanes from a somewhat different but valid perspective, last year was worse (so far) and this year is ordinary, within the context of global warming. So, let’s talk about the global warming question first.
How Global Warming Makes Hurricane Seasons Worse
The effects of global warming on hurricanes in the Atlantic have two interesting features that must be understood to place this discussion in proper context.
First, we are having a bunch of bad decades in a row, probably because of global warming. If we compare the decades before 1980 with those after, or pre- vs. post-1990, or anything similar, the more recent years have had more hurricanes than the earlier years. Comparing to even earlier time periods is tricky because of differences in available data (satellites make a difference, probably, even with giant weather features like hurricanes). This is mainly due to increasing sea surface temperatures, but there are other factors as well.
Hurricanes are more likely to form when sea surface temperatures are higher. Higher sea surface temperatures can make a hurricane larger or stronger. Hurricanes will last longer if there is more hurricane-warm sea to travel over. If sea surface temperatures are high enough to spawn hurricanes earlier or later in the year, the hurricane season can be longer. Possibly, storms that in a non-warmed world would not have made it to “named storm” status are pushed to that level of strength and organization by the elevated sea surface temperature.
Small increases in sea surface temperature cause large changes in hurricanes, and large changes in hurricanes cause even larger changes in potential damage. The increase in Atlantic sea surface temperatures over recent decades has probably been sufficient, according to my thumb-suck estimate that I strongly suspect is close to correct, to make about half the hurricanes that would have existed anyway jump up one category. Then, when hurricanes get stronger, the amount of damage they can do goes up exponentially. So the sea surface temperature increases we’ve seen with global warming easily explain the fact that we’ve had more hurricanes overall, and stronger ones, over the last twenty or thirty years than during the previous years, back to when the data are still pretty good.
Second, the science says this will get worse. There is one 2007 study (by Vecchi and Soden, in Geophysical Research Letters) suggesting that in the Atlantic, smaller hurricanes may be less likely to form because of increased vertical wind shear, but that study does not mean much for larger or stronger hurricanes. This decade-old study is constantly cited as evidence that global warming will not increase hurricanes in the Atlantic. Other studies show that the overall amount of hurricane activity, the potential higher end of hurricane strength, the size, the speed at which they form, the amount of water they can contain, and possibly the likelihood of a hurricane stalling right after landfall all go up. Up. Up. Up. One study says down, and that word, “down,” resonates across the land like a sonic boom. The other studies say we can expect, and to varying degrees already see, up, up, up, up, and up, while denial makes words like “up” and “more” and “worse” and “exacerbated” dangerously quiet. Please don’t fall into that trap. Oh, and by the way, the one study that says “down” has not been replicated, and though experts feel it has some merit, it is far from proven and there are reasons to suggest it may be problematic.
Comparing the 2017 Atlantic Hurricane Season to Other Years
Funny thing about hurricanes: They exist whether or not they menace you. Every year a certain number of hurricanes (usually) form and wander about in the Atlantic ocean for a while, maybe hitting some boats, but otherwise doing little more than causing some big waves to eventually reach beaches in the Caribbean or the eastern US.
This year, we’ve had four major hurricanes so far. Harvey, which maxed out as a Cat 4, ravaged and flooded Texas and Louisiana. Irma, maxing out at Cat 5, ravaged Florida after wiping out islands in the Leewards and doing great damage to Cuba and elsewhere in the Caribbean. Maria, also maxing out at Cat 5, did major damage in the Leewards and notably wiped out Puerto Rico. So, three major hurricanes formed in the Atlantic and hit something major.
Meanwhile, Jose, another major hurricane at Cat 4 status, still spinning about in the North Atlantic, is one of those that hit nothing. And that’s all so far this year.
Last year, there were almost exactly the same number of named storms in total (so far) and just like 2017, 2016 had four major hurricanes.
You remember Matthew, which scraped the Atlantic coast and was rather damaging. But do you remember Gaston (Cat 3)? Nicole (Cat 4)? Otto (Cat 3)?
Gaston and Nicole wandered about in the Atlantic and hit nothing. Otto was for real; it hit Central America, but not the US, so from the US perspective it counts as a non-hitting hurricane. Also, it was only barely a Cat 3 and weakened quickly.
From 2000 to 2016, inclusively, we have had an average of 15 named storms per year, with a minimum of 8 and a maximum of 28, with most years being between 10 and 16. So far 2017 has had 13 named storms. We may have a couple more. So, likely, we will be right in the middle.
For the same period, the number of hurricanes has ranged from 2 to 15, with an average of about 7. This year, we have had … wait for it … 7. We may or may not get another one, and two more is not very likely. In other words, this is an average year for the number of hurricanes.
For the same period, the number of major hurricanes ranges from 0 (though only one year had zero; it is more typical to have 2 in a low year) to 7, but again, 7 is extreme. It is usually from 2 to 5. The average is just over 3. This year, we have had four. That’s pretty typical.
So, within the context that the last couple of decades have had a somewhat higher than average frequency of hurricanes, and probably more strong ones than previous decades, we had a typical year this year.
Why does it feel different? Why is it in fact different, with respect to the horror of it all? Because we had more landfalls, and more serious landfalls.
Keep in mind that Harvey could have hit Houston differently and done more damage. Keep in mind that Cuba beat up Irma, then Irma failed to strike Florida in just the right way to do maximum damage. Keep in mind that after wiping out Puerto Rico, Maria swerved quickly out to sea. In other words, keep in mind that this year could have been much worse than it was.
This is the point that you must understand: Any year can be like this year, or worse. And, with increasing sea surface temperatures and other global warming related factors, worse still.
As you already know, Hurricane Maria is a Category 5 storm menacing the Leewards, and heading, likely, for Puerto Rico.
Please avoid making the mistakes that were made in talking about Irma. There will probably be no Category 5 storm hitting Puerto Rico. The storm will probably be a Category 4 before it hits. So, reporters will sloppily declare that “a category 5 storm is heading for Puerto Rico” then later Rush Limberger will say “Look there was never no such storm, see?” and so on.
But a Category 4 storm is still nothing to sneeze at, and Puerto Rico and the other islands in this storm’s path are in big trouble.
As we wait for that to develop further, let’s talk a bit about predicting hurricane seasons. A lot of people are arguing about whether or not global warming means more, or bigger, or whatever, Atlantic hurricanes. (Short answer: there are probably already more hurricanes in the Atlantic, and bigger ones, but they are hard to count because they are in fact rare events, and science says that there will likely be more in the future). One of the dumber counter arguments to science suggesting that there may be bigger storms, or more of them, is this: You can’t even predict the weather for next weekend, so what the heck, right?
The counter argument to that is this: Ok, fine, we don’t know very accurately what the weather is going to be like next weekend, but what would you say if I told you that we can do a pretty good job of telling, before the hurricane season starts, how many named storms there will be? Huh? Wouldn’t that be amazing?
Turns out we can. And the fact that we can suggests that we should be trusting the models, generally, and therefore, expecting more and bigger hurricanes.
I looked at the predictions made in several recent years by several groups that do this kind of forecasting, and found that the average error, across all of them, is down near one storm, with the range of errors running from -8 to 4, but with most predictions being off by just a few in one direction or the other.
First, a quick look at the number of named tropical storms in the Atlantic per year:
People will tell you there is no trend here, but as you can see, about 44% of the variation seen in the number of storms over time is accounted for by year, so there is a good argument that there is an increasing trend. One might argue that back in the 70s we missed some hurricanes. That, I do not buy, but if you need to believe that, you can see there is still a trend from 1980 on. So there is an increase.
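That “44% of the variation” figure is an R-squared: the share of variance explained by a least-squares trend line. Here is a hedged sketch of the computation in Python, using invented storm counts rather than the real record:

```python
# R^2 of a least-squares linear trend: the fraction of year-to-year
# variation in storm counts that the trend line accounts for.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot  # 1 = perfect trend, 0 = no trend

years = list(range(1970, 1980))
storms = [8, 7, 10, 9, 11, 10, 13, 12, 14, 15]  # invented counts
trend_strength = r_squared(years, storms)
```

An R-squared near 0.44 on the real data means the trend is real but noisy, which is exactly why single years tell you little.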
But I digress. Here’s the point I wanted to make with this graph. The number of named storms in a given year varies quite a bit, from 4 to 28 over this time period (with less variation over the most recent years). So, a method of prediction that gets within two or three in either direction is pretty good.
The number of named storms (many, usually most, of which will be hurricanes) that will occur in a given season in the Atlantic is predicted with reasonable accuracy by several groups. Here’s a chart showing several different prediction groups compared to reality.
The light blue line is the actual number, and you can see that except for 2011 and 2012, the number of storms predicted by various groups, and the number that occur, are very similar. Let’s assume 2011 and 2012 are strange years and arbitrarily ignore them (I know, this would normally be cheating but we’ll come back to that in a minute).
Looking only at those years, one prediction undershot by 4, one prediction undershot by 3, and 7 overshot by 3 or 4. The other 20 predictions were off by no more than two storms.
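The over/undershoot tally above boils down to simple differencing. A sketch with invented numbers (one preseason prediction per group, compared with the observed count):

```python
# Prediction error for one season: prediction minus observed count.
# Positive errors are overshoots, negative ones are undershoots.
predicted = [14, 16, 12, 15, 13, 17, 14]  # invented preseason predictions
observed = 15                             # invented actual storm count

errors = [p - observed for p in predicted]
mean_abs_error = sum(abs(e) for e in errors) / len(errors)  # typical miss
bias = sum(errors) / len(errors)  # signed mean: net over- or undershoot
```

Tracking both the absolute error and the signed bias is useful: a group can be close on average yet consistently undershoot.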
So, why is it OK to fudge the data like that? Well, it isn’t really, but the last two years of predictions have been off by one or fewer storms on average, and I’m assuming the predictions are getting better and better. In other words, if I were to lay odds on predicting three years in a row a few years in the future, I’d bet that the average difference between all the predictions and the actual observations would be less than one named storm, and I’d win that bet. For this reason I don’t care so much about older data.
Notice that I’m only using predictions made prior to the start of the season, not later updates which some groups do provide.
For this year, we’ve had 13 named storms so far, and all the various groups predicted 14. There is plenty of time to have a couple more storms, so likely, this year will be a bit more active than expected, but just by a couple of storms.
Back to Maria for a moment, you may be wondering if this storm will hit the coast along the lower 48. It is possible, it is too early to tell, but history and the models that exist so far both suggest that it probably will not, but stay tuned.
Study Finds Top Fossil Fuel Producers’ Emissions Responsible for as Much as Half of Global Surface Temperature Increase, Roughly 30 Percent of Global Sea Level Rise
Findings Provide New Data to Hold Companies Responsible for Climate Change
WASHINGTON (September 7, 2017)—A first-of-its-kind study published today in the scientific journal Climatic Change links global climate changes to the product-related emissions of specific fossil fuel producers, including ExxonMobil and Chevron. Focusing on the largest gas, oil and coal producers and cement manufacturers, the study calculated the amount of sea level rise and global temperature increase resulting from the carbon dioxide and methane emissions from their products as well as their extraction and production processes.
The study quantified climate change impacts of each company’s carbon and methane emissions during two time periods: 1880 to 2010 and 1980 to 2010. By 1980, investor-owned fossil fuel companies were aware of the threat posed by their products and could have taken steps to reduce their risks and share them with their shareholders and the general public.
“We’ve known for a long time that fossil fuels are the largest contributor to climate change,” said Brenda Ekwurzel, lead author and director of climate science at the Union of Concerned Scientists (UCS). “What’s new here is that we’ve verified just how much specific companies’ products have caused the Earth to warm and the seas to rise.”
The study builds on a landmark 2014 study by Richard Heede of the Climate Accountability Institute, one of the co-authors of the study published today. Heede’s study, which also was published in Climatic Change, determined the amount of carbon dioxide and methane emissions that resulted from the burning of products sold by the 90 largest investor- and state-owned fossil fuel companies and cement manufacturers.
Ekwurzel and her co-authors inputted Heede’s 2014 data into a simple, well-established climate model that captures how the concentration of carbon emissions increases in the atmosphere, trapping heat and driving up global surface temperature and sea level. The model allowed Ekwurzel et al. to ascertain what happens when natural and human contributions to climate change, including those linked to the companies’ products, are included or excluded.
The study found that:
- Emissions traced to the 90 largest carbon producers contributed approximately 57 percent of the observed rise in atmospheric carbon dioxide, nearly 50 percent of the rise in global average temperature, and around 30 percent of global sea level rise since 1880.
- Emissions linked to 50 investor-owned carbon producers, including BP, Chevron, ConocoPhillips, ExxonMobil, Peabody, Shell and Total, were responsible for roughly 16 percent of the global average temperature increase from 1880 to 2010, and around 11 percent of the global sea level rise during the same time frame.
- Emissions tied to the same 50 companies from 1980 to 2010, a time when fossil fuel companies were aware their products were causing global warming, contributed approximately 10 percent of the global average temperature increase and about 4 percent of the sea level rise since 1880.
- Emissions traced to 31 majority state-owned companies, including Coal India, Gazprom, Kuwait Petroleum, Pemex, Petroleos de Venezuela, National Iranian Oil Company and Saudi Aramco, were responsible for about 15 percent of the global temperature increase and approximately 7 percent of the sea level rise between 1880 and 2010.
“Until a decade or two ago, no corporation could be held accountable for the consequences of their products’ emissions because we simply didn’t know enough about what their impacts were,” said Myles Allen, a study co-author and professor of geosystem science at the University of Oxford in England. “This study provides a framework for linking fossil fuel companies’ product-related emissions to a range of impacts, including increases in ocean acidification and deaths caused by heat waves, wildfires and other extreme weather-related events. We hope that the results of this study will inform policy and civil society debates over how best to hold major carbon producers accountable for their contributions to the problem.”
The question of who is responsible for climate change and who should pay for its related costs has taken on growing urgency as climate impacts worsen and become costlier. In New York City alone, officials estimate that it will cost more than $19 billion to adapt to climate change. Globally, adaptation cost projections are equally astronomical. The U.N. Environment Programme estimates that developing countries will need $140 billion to $300 billion annually by 2030 and $280 billion to $500 billion annually by 2050 to adapt.
The debate over responsibility for climate mitigation and adaptation has long focused on the “common but differentiated responsibilities” of nations, a framework used for the Paris climate negotiations. Attention has increasingly turned to non-state actors, particularly the major fossil fuel producers.
“At the start of the Industrial Revolution, very few people understood that carbon dioxide emissions progressively undermine the stability of the climate as they accumulate in the atmosphere, so there was nothing blameworthy about selling fossil fuels to those who wanted to buy them,” said Henry Shue, professor of politics and international relations at the University of Oxford and author of a commentary on the ethical implications of the Ekwurzel et al. paper that was published simultaneously in Climatic Change. “But circumstances have changed radically in light of evidence that a number of investor-owned companies have long understood the harm of their products, yet carried out a decades-long campaign to sow doubts about those harms in order to ensure fossil fuels would remain central to global energy production. Companies knowingly violated the most basic moral principle of ‘do no harm,’ and now they must remedy the harm they caused by paying damages and their proportion of adaptation costs.”
Had ExxonMobil, for example, acted on its own scientists’ research about the risks of its products, climate change likely would be far more manageable today.
“Fossil fuel companies could have taken any number of steps, such as investing in clean energy or carbon capture and storage, but many chose instead to spend millions of dollars to try to deceive the public about climate science to block sensible limits on carbon emissions,” said Peter Frumhoff, a study co-author and director of science and policy at UCS. “Taxpayers, especially those living in vulnerable coastal communities, should not have to bear the high costs of these companies’ irresponsible decisions by themselves.”
Ekwurzel et al.’s study may inform approaches for juries and judges to calculate damages in such lawsuits as ones filed by two California counties and the city of Imperial Beach in July against 37 oil, gas and coal companies, claiming they should pay for damages from sea level rise. Likewise, the study should bolster investor campaigns to force fossil fuel companies to disclose their legal vulnerabilities and the risks that climate change poses to their finances and material assets.
Three statisticians go hunting for rabbit. They see a rabbit. The first statistician fires and misses, her bullet striking the ground below the beast. The second statistician fires and misses, their bullet striking a branch above the lagomorph. The third statistician, a lazy frequentist, says, “We got it!”
OK, that joke was not 1/5th as funny as any of XKCD’s excellent jabs at the frequentist-bayesian debate, but hopefully this will warm you up for a somewhat technical discussion on how to decide if observations about the weather are at all explainable with reference to climate change.
We are having this discussion here and now for two reasons. One is that Hurricane Harvey was (is) a very serious weather event in Texas and Louisiana that may have been made worse by the effects of anthropogenic global warming, and there may be another really nasty hurricane coming (Irma). The other is that Michael Mann, Elisabeth Lloyd and Naomi Oreskes have just published a paper that examines so-called frequentist vs so-called Bayesian statistical approaches to the question of attributing weather observations to climate change.
First, I’ll give you the abstract of the paper then I’ll give you my version of how these approaches are different, and why I’m sure the authors are correct.
The conventional approach to detecting and attributing climate change impacts on extreme weather events is generally based on frequentist statistical inference, wherein a null hypothesis of no influence is assumed, and the alternative hypothesis of an influence is accepted only when the null hypothesis can be rejected at a sufficiently high (e.g., 95%, or p = 0.05) level of confidence. Using a simple conceptual model for the occurrence of extreme weather events, we show that if the objective is to minimize forecast error, an alternative approach wherein likelihoods of impact are continually updated as data become available is preferable. Using a simple proof-of-concept, we show that such an approach will, under rather general assumptions, yield more accurate forecasts. We also argue that such an approach will better serve society, in providing a more effective means to alert decision-makers to potential and unfolding harms and avoid opportunity costs. In short, a Bayesian approach is preferable, both empirically and ethically.
Frequentist statistics is what you learned in your statistics class, if you are not an actual statistician. I want to know if using Magic Plant Dust on my tomatoes produces more tomatoes. So, I divide my tomato patch in half, and put a certain amount of Magic Plant Dust on one half. I then keep records of how many tomatoes, and of what mass, the plants yield. I can calculate the number of tomatoes and the mass of the tomatoes for each plant, and use the average and variation I observe for each group to get two sets of numbers. My ‘null hypothesis’ is that adding the magic dust has no effect. Therefore, the resulting tomato yield from the treated plants should be statistically the same as from the untreated plants. I can pick any of a small number of statistical tools, all of which are doing about the same thing, to come up with a test statistic and a “p-value” that allows me to make some kind of standard statement like “the treated plants produced more tomatoes” and to claim that the result is statistically significant.
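To make that workflow concrete, here is a hedged sketch of the frequentist test in Python. The yields are invented, and Welch’s two-sample t statistic stands in for whichever of those “small number of statistical tools” you might pick:

```python
import statistics

# Welch's two-sample t statistic: the difference in group means scaled
# by the combined standard error, without assuming equal variances.
def welch_t(a, b):
    va, vb = statistics.variance(a), statistics.variance(b)
    return ((statistics.mean(a) - statistics.mean(b))
            / (va / len(a) + vb / len(b)) ** 0.5)

treated = [24, 27, 25, 30, 28, 26]    # tomatoes per dusted plant (invented)
untreated = [22, 25, 23, 26, 24, 25]  # tomatoes per control plant (invented)
t = welch_t(treated, untreated)
# A large enough |t| rejects the null hypothesis of "no effect" at the
# chosen confidence level (e.g., p < 0.05).
```

Note that nothing here uses what we might already know about the dust; the test starts from scratch every time, which is the point of contrast with what follows.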
If the difference, though, is very small, I might not get a good statistical result. So, maybe I do the same thing for ten years in a row. Then, I have repeated the experiment ten times, so my statistics will be more powerful and I can be more certain of an inference. Over time, I get sufficient sample sizes. Eventually I conclude that Magic Plant Dust might have a small effect on the plants, but not every year, maybe because other factors are more important, like how much water they get or the effects of tomato moth caterpillars.
In an alternative Bayesian universe, prior to collecting any data on plant growth, I do something very non-statistical. I read the product label. The label says, “This product contains no active ingredients. Will not affect tomato plants. This product is only for use as a party favor and has no purpose.”
Now, I have what a Bayesian statistician would call a “prior.” I have information that could be used, if I am clever, to produce a statistical model of the likely outcome of the planned experiments. In this case, the likely outcome is that there won’t be a change.
Part of the Bayesian approach is to employ a statistical technique based on Bayes Theorem to incorporate a priori assumptions or belief and new observations to reach towards a conclusion.
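A minimal sketch of that updating step, with every probability invented for illustration: start from a skeptical prior (the label says the dust is inert) and fold in each season’s observation via Bayes’ theorem.

```python
# Bayes' theorem: P(H | data) = P(data | H) * P(H) / P(data).
def update(prior, p_data_if_true, p_data_if_false):
    numerator = p_data_if_true * prior
    return numerator / (numerator + p_data_if_false * (1 - prior))

belief = 0.05  # prior probability the dust works, informed by the label
# Five seasons in a row show a yield bump that is likely if the dust
# works (p = 0.8) but unlikely under the null (p = 0.3):
for _ in range(5):
    belief = update(belief, 0.8, 0.3)
# The posterior climbs with each consistent observation.
```

The prior keeps one fluke season from swinging the conclusion, but consistent evidence eventually overwhelms even a skeptical starting point.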
In my view, the Bayesian approach is very useful in situations where we have well understood and hopefully multiple links between one or more systems and the system we are interested in. We may not know all the details that relate observed variation in one system and observed variation in another, but we know that there is a link, that it should be observable, and perhaps we know the directionality or magnitude of the effect.
The relationship between climate change and floods serves as an example. Anthropogenic climate change has resulted in warmer sea surface temperatures and warmer air. It would be very hard to make an argument from the physics of the atmosphere that this does not mean that more water vapor will be carried by the air. If there is more water vapor in the air, there is likely to be more rain. Taken as a Bayesian prior, the heating of the Earth’s surface means more of the conditions that would result in floods, even if the details of when, how much, and where are vague at this level.
A less certain but increasingly appreciated effect of climate change is the way trade winds and the jet stream move around the planet. Without going into details, climate change over the last decade or two has probably made it more likely that large storm systems stall. Storms that once moved quickly through an area are now observed to slow down. If a storm that would normally drop one inch of rain on the landscape over which it passes instead slows down while raining at the same rate, perhaps 3 inches of rain will be dropped (over a shorter distance). What would have been a good watering of all the lawns is now a localized flood.
That is also potentially a Bayesian prior. Of special importance is that these two Bayesian priors imply change in the same direction. Since in this thought experiment we are thinking about floods, we can see that these two prior assumptions together suggest that post-climate-change weather would include more rain falling from the sky in specific areas.
There are other climate change related factors that suggest increased activity of storms. The atmosphere should have more energy, thus more energetic storms. In some places there should be more of the kind of wind patterns that spin up certain kinds of storms. It is possible that the relationship between temperature of the air at different altitudes, up through the troposphere and into the lower stratosphere, has changed so that large storms are likely to get larger than they otherwise might.
There is very little about climate change that implies the reverse. Though there may be a few subsets of storm related weather that would be reduced with global warming, most changes are expected to result in more storminess, more storms, more severe storms, or something.
So now we have the question, has climate change caused any kind of increase in storminess?
I’d like to stipulate that there was a kind of turning point in our climate around 1979, before which we had a couple of decades of storminess being at a certain level, and after which, we have a potentially different level. This is also a turning point in measured surface heat. In, say, 1970 plus or minus a decade, it was possible to argue that global warming is likely but given the observations and data at the time, it was hard to point to much change (though we now know, looking back with better data for the previous centuries, that it was actually observable). But, in 2008, plus or minus a decade, it was possible to point to widespread if anecdotal evidence of changes in storm frequency, patterns, effects, as well as other climate change effects, not the least of which was simply heat.
I recently watched the documentary, “An Inconvenient Sequel.” This is a fairly misunderstood film. It is not really part two of Al Gore’s original “An Inconvenient Truth.” The latter was really Al Gore’s argument about climate change, essentially presented by him. “An Inconvenient Sequel” was made by independent filmmakers with no direct input by Gore with respect to contents and production, though it is mostly about him, him talking, him making his point, etc. But I digress. Here is the salient fact associated with these two movies. An Inconvenient Truth came out in May 2006, so it is based mainly on information available in 2005 and before. In it, there are examples of major climate change effects, including Katrina, but it seems like the total range of effects is more or less explicated almost completely. When An Inconvenient Sequel came out a few weeks ago, a solid 10+ years had passed, and the list of actual climate effects noted in the movie was a sampling, not anything close to a full explication, of the things that had happened over recent years. Dozens of major flooding, storming, drying, and deadly heat events had occurred, of which only a few of each were mentioned, because there was just so much stuff.
My point is that there is a reasonable hypothesis based on anecdotal observation (at least) that many aspects of weather in the current decade, or the last 20 years, or since 1979 as I prefer, are different in frequency and/or severity than before, because of climate change.
A frequentist approach does not care why I think a certain hypothesis is workable. I could say “I hypothesize that flies can spontaneously vanish with a half life of 29 minutes” and I could say “I hypothesize that if a fly lays eggs on a strawberry there will later be an average of 112 maggots.” The same statistical tests will be usable, the same philosophy of statistics will be applied.
A Bayesian approach doesn’t technically care what I think either, but what I think a priori is actually relevant to the analysis. I might for example know that the average fly lays 11 percent of her body mass in one laying of eggs, and that is enough egg mass to produce about 90-130 maggots (I am totally making this up) so that observational results that are really small (like five maggots) or really large (like 1 million maggots) are very unlikely a priori, and, results between 90 and 130 are a priori very likely.
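One standard way to formalize that kind of prior is a conjugate normal-normal update, in which the posterior estimate is a precision-weighted average of the prior mean and the sample mean. As in the text, every number here is made up for illustration:

```python
# Conjugate normal-normal sketch of the maggot example: a prior centered
# where biology says counts should land (roughly 90-130 maggots) pulls a
# noisy, small-sample estimate toward plausible values. All numbers are
# invented, as in the text ("I am totally making this up").

def posterior_mean(prior_mean, prior_var, data_mean, data_var, n):
    """Precision-weighted average of the prior mean and the sample mean."""
    prior_prec = 1.0 / prior_var
    data_prec = n / data_var
    return (prior_prec * prior_mean + data_prec * data_mean) / (prior_prec + data_prec)

# Prior: mean 110, sd 10 (so ~90-130 is the plausible range).
# A tiny, noisy sample of 2 strawberries averaged only 60 maggots.
est = posterior_mean(110, 100, 60, 900, 2)
print(round(est))  # → 101: the prior dominates the tiny sample
```

With only two noisy observations, the biologically informed prior dominates; a larger sample would progressively pull the estimate toward the data.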
So, technically, a Bayesian approach is different because it includes something that might be called common sense, but really, is an observationally derived statistical parameter that is taken very seriously by the statistic itself. But, philosophically, it is a little like the pitcher of beer test.
I’ve mentioned this before but I’ll refresh your memory. Consider an observation that makes total sense based on reasonable prior thinking, but where the standard frequentist approach fails to reject the null hypothesis. Say the hypothesis is that there were more tornadoes from, say, 1970 to the present than there were between 1950 and 1970, the null hypothesis being that there was no change. This graph suggests the increase is real…
… but because the techniques of observation and measuring tornado frequency have changed over time, nobody believes the graph to be good data. But, it may not be bad data. In other words, the questions about the graph do not inform us of the hypothesis, but the graph is suggestive.
So, I take a half dozen meteorologists who are over 55 years old (so they’ve seen things, done things) out for a beer. The server is about to take our order, and I interrupt. I ask all the meteorologists to answer the question … using this graph and whatever else you know, are there more tornadoes in the later time interval or not? Write your answer down on this piece of paper, I say, and don’t share your results. But, when we tally them up, if and only if you all have the same exact answer (all “yes” or all “no”) then this pitcher of beer is on me.
Those are quasi-Bayesian conditions (given that these potential beer drinkers have priors in their heads already, and that the graph is suggestive if not conclusive), but more importantly, there is free beer at stake.
They will all say “yes” and there will be free beer.
OK, back to the paper.
Following the basic contrast between frequentist and Bayesian approaches, the authors produce competing models, one based on the former, the other on the latter. “In the conventional, frequentist approach to detection and attribution, we adopt a null hypothesis of an equal probability of active and inactive years … We reject it in favor of the alternative hypothesis of a bias toward more active years … only when we are able to achieve rejection of H0 at a high… level of confidence”
In the Bayesian version, a probability distribution that assumes a positive (one directional) effect on the weather is incorporated, as noted above, using Bayes’ theorem.
Both methods eventually show that there is a link between climate change and effect in this modeled scenario, but the frequentist approach is much more conservative and thus, until the process is loaded up with a lot of data, more likely to be wrong, while the Bayesian approach correctly identifies the relationship and does so more efficiently.
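A toy simulation makes the contrast concrete. This is not the paper’s actual model; the 14-of-20 data, the thresholds, and the Beta(2, 1) prior are all my illustrative assumptions, chosen to show how the same record can fail a conventional significance test while leaving a Bayesian quite confident of a one-directional effect:

```python
import math

# Toy version of the paper's contrast: years are "active" with some true
# probability above 0.5 (a one-directional climate-change bias). The
# frequentist detects the bias only when a one-sided binomial test rejects
# p = 0.5 at 95% confidence; the Bayesian starts from a prior leaning
# toward a positive effect and declares detection when
# P(p > 0.5 | data) > 0.95. Numbers and thresholds are illustrative.

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def beta_prob_above_half(a, b, steps=2000):
    """P(p > 0.5) under a Beta(a, b) posterior, by midpoint integration."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    dx = 0.5 / steps
    total = 0.0
    for i in range(steps):
        x = 0.5 + (i + 0.5) * dx
        total += norm * x ** (a - 1) * (1 - x) ** (b - 1) * dx
    return total

n, active = 20, 14           # 14 active years out of 20 observed
frequentist_detects = binom_tail(active, n) < 0.05
# Prior Beta(2, 1) leans toward p > 0.5 (the one-directional assumption).
bayesian_detects = beta_prob_above_half(2 + active, 1 + n - active) > 0.95

print(frequentist_detects, bayesian_detects)  # → False True
```

With these numbers, the one-sided binomial p-value is about 0.058, just missing the 0.05 bar, while the posterior probability of a bias toward active years is about 0.97: the same data, two different verdicts.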
The authors argue that the Bayesian method is more likely to accurately detect the link between cause and effect, and this is almost certainly correct.
This is what this looks like: Frank Frequency, weather commenter on CNN says, “We can’t attribute Hurricane Harvey, or really, any hurricane, to climate change until we have much more data and that may take 100 years because the average number of Atlantic hurricanes to make landfall is only about two per year.”
Barbara Bayes, weather commenter on MSNBC, says, “What we know about the physics of the atmosphere tells us to expect increased rainfall, and increased energy in storms, because of global warming, so when we see a hurricane like Harvey it is really impossible to separate out this prior knowledge when we are explaining the storm’s heavy rainfall and rapid strengthening. The fact that everywhere we can measure possible climate change effects on storms, the storms seem to be acting as expected under climate change, makes this link very likely.”
I hasten to add that this paper is not about hurricanes, or severe weather per se, but rather, on what statistical philosophy is better for investigating claims linking climate change and weather. I asked the paper’s lead author, Michael Mann (author of The Madhouse Effect: How Climate Change Denial Is Threatening Our Planet, Destroying Our Politics, and Driving Us Crazy, The Hockey Stick and the Climate Wars: Dispatches from the Front Lines, and Dire Predictions, 2nd Edition: Understanding Climate Change), about Hurricane Harvey specifically. He told me, “As I’ve pointed out elsewhere, I’m not particularly fond of the standard detection & attribution approach for an event like Hurricane Harvey for a number of reasons. First of all, the question isn’t whether or not climate change made Harvey happen, but how it modified the impacts of Harvey. For one thing, climate change-related Sea Level Rise was an important factor here, increasing the storm surge by at least half a foot.” Mann recalls the approach taken by climate scientist Kevin Trenberth, who “talks about how warmer sea surface temperatures mean more moisture in the atmosphere (about 7% per degree C) and more rainfall. That’s basic physics and thermodynamics we can be quite certain of.”
The authors go a step farther, in that they argue that there is an ethical consideration at hand. In a sense, an observer or commenter can decide to become a frequentist, and even one with a penchant for very low p-values, with the purpose of writing off the effects of climate change. (They don’t say that but this is a clear implication, to me.) We see this all the time, and it is in fact a common theme in the nefarious politicization of the climate change crisis.
Or, an observer can choose to pay attention to the rather well developed priors, the science that provides several pathways linking climate change and severe weather or other effects, and then, using an appropriate statistical approach … the one you use when you know stuff … be more likely to make a reasonable and intelligent evaluation, and to get on to the business of finding out in more detail how, when, where, and how much each of these effects has taken hold or will take hold.
The authors state that one “… might therefore argue that scientists should err on the side of caution and take steps to ensure that we are not underestimating climate risk and/or underestimating the human component of observed changes. Yet, as several workers have shown …the opposite is the case in prevailing practice. Available evidence shows a tendency among climate scientists to underestimate key parameters of anthropogenic climate change, and thus, implicitly, to understate the risks related to that change”
While I was in contact with Dr. Mann, I asked him another question. His group at Penn State makes an annual prediction of the Atlantic Hurricane Season, and of the several different such annual stabs at this problem, the PSU group tends to do pretty well. So, I asked him how this season seemed to be going, which partly requires reference to the Pacific weather pattern ENSO (El Nino etc). He told me:
We are ENSO neutral but have very warm conditions in the main development region of the Tropics (which is a major reason that Irma is currently intensifying so rapidly). Based on those attributes, we predicted before the start of the season (in May) that there would be between 11 and 20 storms with a best estimate of 15 named storms. We are currently near the half-way point of the Atlantic hurricane season, and with Irma have reached 9 named storms, with another potentially to form in the Gulf over the next several days. So I suspect when all is said and done, the total will be toward the upper end of our predicted range.
I should point out that Bayesian statistics are not new, just not as standard as one might expect, partly because, historically, this method has been hard to compute. So, frequency based methods have decades of a head start, and statistical methodology tends to evolve slowly.
Remember the revelation back a year or so ago that Exxon Mobil knew all about the likely effects of the global warming they contributed to, and the subsequent denials by Exxon that any of this was so, yada yada yada?
A paper has just come out that confirms what we all said then. From the abstract:
This paper assesses whether ExxonMobil Corporation has in the past misled the general public about climate change. We present an empirical document-by-document textual content analysis and comparison of 187 climate change communications from ExxonMobil, including peer-reviewed and non-peer-reviewed publications, internal company documents, and paid, editorial-style advertisements (‘advertorials’) in The New York Times. We examine whether these communications sent consistent messages about the state of climate science and its implications—specifically, we compare their positions on climate change as real, human-caused, serious, and solvable. In all four cases, we find that as documents become more publicly accessible, they increasingly communicate doubt. This discrepancy is most pronounced between advertorials and all other documents. For example, accounting for expressions of reasonable doubt, 83% of peer-reviewed papers and 80% of internal documents acknowledge that climate change is real and human-caused, yet only 12% of advertorials do so, with 81% instead expressing doubt. We conclude that ExxonMobil contributed to advancing climate science—by way of its scientists’ academic publications—but promoted doubt about it in advertorials. Given this discrepancy, we conclude that ExxonMobil misled the public. Our content analysis also examines ExxonMobil’s discussion of the risks of stranded fossil fuel assets. We find the topic discussed and sometimes quantified in 24 documents of various types, but absent from advertorials. Finally, based on the available documents, we outline ExxonMobil’s strategic approach to climate change research and communication, which helps to contextualize our findings.
The article is free and open access so you can read the whole thing!
Every now and then, I hear someone giving the Republican Party credit for finally starting to get on board with 20th (or even 19th) century science, and 21st century eyeballs, to accept the idea of climate change. That is annoying whenever it happens because it simply isn’t ever true and never will be.
Media Matters for America has a piece critiquing a recent Politico assertion that the tide is turning. Here is some of what they say, click through to the rest.
Politico’s story…offers two main examples to support its argument: First, the bipartisan House Climate Solutions Caucus “has more than tripled in size since January” and now includes 26 of the House’s 240 Republicans. Second, 46 House Republicans voted in July against lifting a requirement that the Defense Department study climate change’s impacts on the military.
But these House members are hardly going out on a limb. The climate caucus does not promote any specific legislation or policies. And military leaders, including Defense Secretary James Mattis, have long been concerned about climate change and have voiced no objections to studying it. Indeed, the Politico article notes, “If the Republican Party is undergoing a shift on climate, it is at its earliest, most incremental stage.”
What the article missed was a timely and dramatic counterexample: In California, where a handful of GOP state legislators recently provided the decisive votes in favor of actual climate legislation, they have come under brutal fire from other Republicans.
California Gov. Jerry Brown, a Democrat, signed a bill on July 25 to extend the state’s cap-and-trade system until 2030. He had negotiated with a handful of Republican legislators and with business lobbies, among others, to craft a relatively corporate-friendly bill, not as strong as many environmental justice advocates and other progressives wanted. In the end, three Democrats in the Assembly voted against it, so it was passed only because seven of their Republican colleagues voted for it. One Republican in the state Senate also voted in favor of the bill.
The blowback against those Republicans was immediate and intense. GOP leaders throughout California are now pushing for the ouster of Republican Assembly Leader Chad Mayes, who played a key role in negotiating the bill and rounding up other Republican votes for it.
And the blowback has gone national: Powerful D.C.-based anti-tax zealot Grover Norquist declared open season on Mayes and the seven other Republicans who voted “yes,” co-authoring an op-ed in the Los Angeles Times last week that accused Mayes of “treachery” and argued that the California legislature is a “big fat target for taxpayers who wish to go after Republicans behaving badly.”
If this kind of backlash happens in the Golden State, just imagine what would happen in D.C. if the House Climate Solutions Caucus did anything more than gently gesture at the possibility of climate action. Conservative groups in D.C. aren’t even satisfied with an administration that’s been aggressively rolling back environmental protections; they are pushing the EPA to debate and undermine basic climate science.
National media should be reporting on the drama unfolding in California when they write about Republicans and climate change. It’s been covered by newspapers in the state but missed by virtually all outlets beyond California’s borders.
Politico is far from alone in pushing the idea that Republicans might be nearing a tipping point on climate change. Reporters and columnists at national outlets keep publishing versions of this seemingly counterintuitive story and glossing over a key truth: The base and the establishment of the Republican Party will enact harsh retribution on elected officials who endorse policies designed to cut greenhouse gas emissions.
Vice published a piece on August 17 titled “The Republicans Trying to Fight Climate Denial in Their Own Party,” which focused on the Climate Leadership Council…
Time ran an article in May headlined “Meet the Republicans Taking On Climate Change,” which mentioned both the Climate Solutions Caucus and the Climate Leadership Council. The Guardian ran one in April under the headline “The Republicans who care about climate change: ‘They are done with the denial.'”…
Go all the way back to 2010 for a classic of the genre, a Thomas Friedman opinion column in The New York Times titled “How the G.O.P. Goes Green,” which praised Sen. Lindsey Graham (R-SC) for “courageously” trying to craft a bipartisan climate bill. Less than four months later, Graham bailed from the whole enterprise and helped to ensure that no climate legislation would pass during the Obama presidency.
It’s nice that a handful of congressional Republicans are taking baby steps toward acknowledging that climate change is a big problem that demands big solutions. But their moves are far from courageous, and the media adulation they get is all out of proportion to their clout. Norquist is more influential on this issue than all of the climate-concerned congressional Republicans combined, a fact most journalists are not acknowledging, and Norquist reiterated his die-hard opposition to a carbon tax just last week.
Many of the articles about Republicans turning over a new leaf on climate cite Bob Inglis or the group he runs, RepublicEN, which promotes conservative climate solutions. Inglis was a U.S. representative from South Carolina until he got primaried out in 2010, in part because he called for a carbon tax. Norquist’s organization, Americans for Tax Reform, gave a boost to Inglis’ primary challenger. In the years since, Inglis has been working doggedly to get other Republicans to take climate change seriously, but if they followed his advice at this point, they’d likely get booted out in a primary too.
Just like there’s no Donald Trump pivot, there’s no Republican climate pivot. We’ll know we’re seeing real change when more than a handful of GOP lawmakers take a risky vote for actual policy to reduce carbon emissions. Until then, journalists should avoid writing trend stories about this nonexistent trend.
Posting this with no comment because it is expected and so obviously bone-headed Trump:
US President Donald Trump’s administration has disbanded a government advisory committee that was intended to help the country prepare for a changing climate.
The US National Oceanic and Atmospheric Administration established the committee in 2015 to help businesses and state and local governments make use of the next national climate assessment. The legally mandated report, due in 2018, will lay out the latest climate-change science and describe how global warming is likely to affect the United States, now and in coming decades.
The advisory group’s charter expired on 20 August, and Trump administration officials informed members late last week that it would not be renewed. “It really makes me worried and deeply sad,” says Richard Moss, a climate scientist at the University of Maryland in College Park and co-chair of the committee. “It’s another thing that is just part of the political football game.”
Harvey the Hurricane will hit Texas roughly between Corpus Christi and Victoria (but stay tuned for exact details).
Harvey is passing over water that is significantly warmer than usual, owing to global warming. Just a day or so ago, this storm was too disorganized even to have a name. But, when this storm hits Texas late this week (maybe by the time you are reading this) it is likely to be a Category 3 storm.
Then, after landfall, the storm will hang around that area for a while dumping huge amounts of rain on the Texas flatness.
The target area may have 15 inches of rain or more over fairly large areas. There may be spots with more than 25 inches. This is one of those storms that requires the weather forecasters to add new colors to their usual maps.
The last “major hurricane” (Category 3 or larger) to hit the US was Wilma in 2005.
This is an area with abundant oil extraction and processing facilities which are subject to damage from large storms.
Because the ocean has risen since the last major storm surge in this area, local residents and businesses need to adjust their expectations. If you are in the area, look at the National Weather Service’s science based information on storm surges. They have some new tools available. Good thing they have not been removed yet!
For more on the link between this storm and climate change, see THIS.
We don’t know how strong this storm is going to be, but a lot of experts are saying they are above average worried.
UPDATE: Friday AM
Despite rumors of weakening, the storm continues to strengthen. The main change in forecast is that the center of the storm’s expected landfall is farther south than expected, away from Houston, but Houston will still receive a great deal of rain, maybe most of the high rainfall amounts.
Storm surges of up to 9 feet or more are possible around Victoria and Corpus Christi.
First, there is the annual up and down cycle that happens because there is more land in the Northern Hemisphere. I won’t explain that to you now because I know you can figure out why that happens.
Second, there is natural variation up and down aside from that annual cycle that has to do with things like volcanoes and such. This includes the rate of forest fires, which increase greenhouse gases by turning some of the Carbon trapped in plant tissue into gas form as CO2. (That was a hint for the answer to the first reason!)
There was a big spike in CO2 concentration this year, and it was caused by El Nino increasing forest fire output, which in turn, freed up some of that CO2. Also, regional drought in some places simply slowed down plant growth, leaving some Carbon stranded in the atmosphere.
So was that natural? Not at all. ENSO cycles, which cause El Nino and La Nina, constitute an oscillation in rainfall patterns, and part of that results in extra forest fires or other effects as mentioned. But these effects are caused directly by weather disruption, and human caused global warming was already doing that. The severe El Nino of 2014-2016 was more severe (and probably longer) than any, or almost any, ever observed, precisely because it was a big climatological monster sitting on top of a big hill made by anthropogenic global warming.
But there is also another, subtler but very important lesson in this event. At any given time we could have what would normally be a “natural” shift to bad conditions. But under global warming, such a shift can be transformed from a disaster to a much bigger disaster. In this way, think of climate change as the steepening of the drop off alongside the road from a 2 foot ditch to a 10 foot embankment. When we drive off the road due to natural forces (some ice, for example) without global warming, we get bounced around a bit. With global warming we get to rely on our airbags to save us, but the airbag deployment will probably break both our arms and mess up our face.
Anyway, the confirmation of the role of El Nino comes from new research discussed here.
It really is true that global warming has made heat waves more common and more severe. The heat wave last month that affected the American southwest was one of these. Yet, of the 433 local broadcast events in local TV affiliates in Phoenix and Las Vegas to mention the heatwave (which was current news at the time) only one event mentioned a climate change connection, and that was to downplay it.
Similarly, governments are ignoring the connection.
These are the people who are supposed to help, or at least disseminate correct information, letting everyone down for, I assume, political reasons. Shame on them.
Irma is a new named storm in the Eastern Atlantic. See this post for details, eventually.
UPDATE (Aug 29th)
There is a system currently raining on Cabo Verde (née Cape Verde), off the West Coast of Africa, that is expected to develop. It is on the verge of becoming a tropical depression. The National Hurricane Center has estimated that there is a high probability of this stormy feature becoming a tropical storm in a couple of days or so. If it gets a name, it will be Irma, unless some other large rotating wet object takes that name first.
UPDATE (Aug 29th)
How is the Atlantic Season doing so far, in relation to most hurricane seasons?
Using data from NOAA, we can say that on average (using the 1966-2009 baseline) we reach the eighth named storm in the Atlantic (Harvey is the eighth) on September 24th. So, we’re having more named storms than average.
This year so far we’ve had 3 hurricanes. Normally one reaches that number of hurricanes on September 9th. That’s a week and a half from now, so we can declare this year a bit above average in this measure, but not spectacularly so.
So far this year we’ve had one major hurricane (Category 3 or above). There are some years with zero major hurricanes, but on average one major hurricane occurs by September 4th. So, we’re close to average now.
UPDATE (Aug 29th)
The following posts discuss various aspects of Harvey
Well, finally, something interesting happened in the Atlantic! Tropical Depression Harvey is heading for Texas and in a very short amount of time is going to whip up into a hurricane and hit the Lone Star State right on the coastline.
From the NWS HPC:
1. Harvey is likely to bring multiple hazards, including heavy
rainfall, storm surge, and possible hurricane conditions to portions
of the Texas coast beginning on Friday.
2. Heavy rainfall is likely to spread across portions of eastern
Texas, Louisiana, and the lower Mississippi Valley from Friday
through early next week and could cause life-threatening flooding.
Please refer to products from your local National Weather Service
office and the NOAA Weather Prediction Center for more information
on the flooding hazard.
3. A Storm Surge Watch is in effect from Port Mansfield to High
Island, Texas, indicating the possibility of life-threatening
inundation from rising water moving inland from the coast during the
next 48 hours. For a depiction of areas at risk, see the Storm
Surge Watch/Warning Graphic at hurricanes.gov.
4. The Potential Storm Surge Flooding Map is available on the NHC
website. This product depicts a reasonable worst-case scenario –
the amount of inundation that has a 10 percent chance of being
exceeded at each individual location. Because the Flooding Map is
based on inputs that extend out only to about 72 hours, it best
represents the flooding potential in those locations within the
We still hear the yammering that climate change has not affected storms. “They said there would be more storms. There’s no more storms,” they say.
They are wrong in so many ways. For example, the total energy observed in tropical storms around the globe is up. There have been several big huge scary storms in the tropics in recent years, some of which are unprecedented in their size, strength, rapidity of forming, when they formed, where they went, and what they messed up. Other types of storms show either likely increases or, if not clearly increased yet, still show strong likelihood of increasing in the future based on models. Models that are good.
This is from Emanuel 2005, showing his “Power Dissipation Index” over time and sea surface temperatures.
This shows the long term up and down swings in total tropical storm activity, and an overall upward trend exactly as expected with effects from global warming.
This is from “Increasing destructiveness of tropical cyclones over the past 30 years” by Kerry Emanuel, Nature 436:686-688.
Roger Pielke Jr. is one of those yammering fools (I used to try to be nice to him until he accused me of horrible things a few months back and almost none of them were true!) who will tell you this. Roger says there have been no more landfalling Atlantic Hurricanes in the US recently than ever before. But trying to figure out what is occurring on the Earth by only considering what the smallest of the hurricane basins produces, and only counting the small subset of those hurricanes that hit the US (and, by the way, ignoring some of them such as Hurricane Sandy in order to fudge the numbers) is like trying to get a handle on the frequency of major train derailments by watching the 100 mile length of track you drive along five times a year on the way up north fishing. Nobody would do that. Except Roger.
The normal number of named Atlantic storms is 12.1 of which 6.4 are hurricanes, and 2.7 major hurricanes, in a given year. The record high is 28 named storms, and the record low is 4.
There have been various predictions for how much storm activity we expect this year. The predictions that are most recent and most reliable call for 11, 12, 11-15, 14, 11-17, and 15.3 storms. So, generally, close to average plus.
The prediction I watch most closely is from PSU’s Earth System Science Center. PSU has been making very accurate predictions for a number of years. For this year, they predict 15.3 +/- 3.9 named storms this year (i.e., about 11 to 20 with the best guess being 15). Their prediction will drop a little if there is a mild El Niño this year, but that seems increasingly unlikely. Also, PSU has a second alternative model that produces a lower estimate, of around 12.4.
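As an aside, the ±3.9 on PSU’s best estimate looks like simple Poisson counting uncertainty, where the standard deviation of a count equals the square root of its mean. That this is the actual basis of their interval is my assumption, not a statement about PSU’s method, but the arithmetic matches:

```python
import math

# If named-storm counts are treated as Poisson, the standard deviation of
# a season's count is sqrt(mean). Applying that to the quoted best
# estimate of 15.3 (assumption: this is how the +/- 3.9 was derived):
best_estimate = 15.3
sd = math.sqrt(best_estimate)

print(round(sd, 1))  # → 3.9, matching the quoted uncertainty
print(round(best_estimate - sd), round(best_estimate + sd))  # → 11 19
```

The one-sigma band, roughly 11 to 19, also lines up with the quoted “about 11 to 20” range.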
So, in short, barring an El Niño, we can expect a near-average to slightly above-average year for Atlantic hurricanes. And no, that does not mean that global warming is not happening. It means that no derailments are expected along one particular section of recently maintained rail track.
Anyway, for the second year in a row, IIRC, we got cheated on our A storm. Below, I’ve put the official list of storm names for the Atlantic 2017 season (as headings, we’ll fill in info as the year progresses), but the first tropical storm to talk about today, 19 days into the season, is Bret (one ‘t’). Arlene happened last April.
Tropical storms don’t happen in the Atlantic in April. ‘Cept for Arlene. Generally, it seems like the boundaries are becoming enfuzzied. Expect more “extraseasonal” storms over the next few years, and expect, eventually, perhaps a decade from now, the National Hurricane Center crew to be asked to start watching year round, because a tropical storm that hits your fleet in April is still a tropical storm. Even if Roger says it doesn’t exist.
Bret formed near the very southern edge of the Atlantic Hurricane basin.
Bret is the earliest storm to form this far south in the Atlantic Basin. So, our first storm of the season arrived months early, and the second formed hundreds of miles farther south than normal. Roger that.
Bret will menace the northern edge of South America, then in a few days from now it will be gone. Bret is not expected to strengthen and will not be a hurricane. Nor will it hit the United States of America. Therefore, according to Roger, Bret, as novel as it is, does not exist.
The next storm, to be named Cindy, is very likely to form from a disturbance now seen in the south-central Gulf of Mexico. This is a fairly typical place to see a tropical storm or hurricane form at this time of year. Cindy will likely become a north-moving tropical storm, and will likely stay just at tropical storm strength, coming ashore somewhere between Houston, Texas and Morgan City, Louisiana. The chances of Cindy wetting down NOLA are very good, but again, this will not be a hurricane. This will happen sometime late Wednesday, most likely.
While likely-Cindy should weaken from a tropical storm to a depression at landfall, the storm will track up the Mississippi and cause lots of rain.
Michael Mann, distinguished professor of atmospheric science and director of the Earth System Science Center, Penn State, will receive the seventh annual Stephen H. Schneider Award for Outstanding Climate Science Communications from Climate One at the Commonwealth Club.
The $15,000 award is given to a natural or social scientist who has made extraordinary scientific contributions and communicated that knowledge to a broad public in a clear and compelling fashion. The award was established in honor of Stephen Henry Schneider, one of the founding fathers of climatology, who died suddenly in 2010.
The jurors for the award state that Mann exemplifies the rare ability to be both a superb scientist and powerful communicator in the mold of Schneider.
“Professor Mike Mann has been a world leader in scientific efforts to understand the natural variability of the climate system and to reconstruct global temperature variations over the past two millennia,” said Ben Santer, climate researcher, Lawrence Livermore National Laboratory. “This critically important work led to the famous ‘hockey stick’ temperature reconstruction. The hockey stick provides compelling evidence for the emergence of a human-caused warming signal from the background noise of natural fluctuations in climate.”
Mann will receive the award — presented by Climate One, a project of the Commonwealth Club of California and underwritten by Tom R. Burns, Nora Machado and Michael Haas — in December during the annual meeting of the American Geophysical Union in New Orleans.
Mann is the author of more than 200 peer-reviewed publications and has written “Dire Predictions: Understanding Climate Change” and “The Hockey Stick and the Climate Wars.” He is also co-author with Tom Toles, Washington Post editorial cartoonist, of “The Madhouse Effect.” He is co-founder of the science website RealClimate.org.
“Stephen Schneider was a role model and mentor to me, and I am truly humbled to receive the Stephen Schneider Award for Outstanding Climate Science Communications,” said Mann. “While none of us can fill the very large shoes Steve left behind, we can honor his legacy by doing our best to inform the public discourse over human-caused climate change in an objective, clear and effective manner.”
The first recipient of the Schneider Award was Richard Alley, Evan Pugh Professor of Geosciences, Penn State.