
New Research on Assessing Climate Change Impact on Extreme Weather

Three statisticians go hunting for rabbit. They see a rabbit. The first statistician fires and misses, her bullet striking the ground below the beast. The second statistician fires and misses, their bullet striking a branch above the lagomorph. The third statistician, a lazy frequentist, says, “We got it!”

OK, that joke was not 1/5th as funny as any of XKCD’s excellent jabs at the frequentist-Bayesian debate, but hopefully this will warm you up for a somewhat technical discussion on how to decide whether observations about the weather are at all explainable with reference to climate change.



We are having this discussion here and now for two reasons. One is that Hurricane Harvey was (is) a very serious weather event in Texas and Louisiana that may have been made worse by the effects of anthropogenic global warming, and there may be another really nasty hurricane coming (Irma). The other is that Michael Mann, Elisabeth Lloyd and Naomi Oreskes have just published a paper that examines so-called frequentist vs so-called Bayesian statistical approaches to the question of attributing weather observations to climate change.

Mann, Michael, Elisabeth Lloyd, and Naomi Oreskes. 2017. Assessing climate change impacts on extreme weather events: the case for an alternative (Bayesian) approach. Climatic Change 144:131–142.

First, I’ll give you the abstract of the paper then I’ll give you my version of how these approaches are different, and why I’m sure the authors are correct.

The conventional approach to detecting and attributing climate change impacts on extreme weather events is generally based on frequentist statistical inference wherein a null hypothesis of no influence is assumed, and the alternative hypothesis of an influence is accepted only when the null hypothesis can be rejected at a sufficiently high (e.g., 95% or p = 0.05) level of confidence. Using a simple conceptual model for the occurrence of extreme weather events, we show that if the objective is to minimize forecast error, an alternative approach wherein likelihoods of impact are continually updated as data become available is preferable. Using a simple proof-of-concept, we show that such an approach will, under rather general assumptions, yield more accurate forecasts. We also argue that such an approach will better serve society, in providing a more effective means to alert decision-makers to potential and unfolding harms and avoid opportunity costs. In short, a Bayesian approach is preferable, both empirically and ethically.

Frequentist statistics is what you learned in your statistics class, if you are not an actual statistician. I want to know if using Magic Plant Dust on my tomatoes produces more tomatoes. So, I divide my tomato patch in half, and put a certain amount of Magic Plant Dust on one half. I then keep records of how many tomatoes, and of what mass, the plants yield. I can calculate the number of tomatoes and the mass of the tomatoes for each plant, and use the average and variation I observe for each group to get two sets of numbers. My ‘null hypothesis’ is that adding the magic dust has no effect. Therefore, the resulting tomato yield from the treated plants should be statistically the same as from the untreated plants. I can pick any of a small number of statistical tools, all of which are doing about the same thing, to come up with a test statistic and a “p-value” that allows me to make some kind of standard statement like “the treated plants produced more tomatoes” and to claim that the result is statistically significant.
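To make that recipe concrete, here is a minimal sketch of the tomato test in Python, with entirely made-up yield numbers; the two-sample t-test from scipy plays the role of the “small number of statistical tools” mentioned above.

```python
# Minimal sketch of the frequentist tomato experiment (made-up numbers).
import numpy as np
from scipy import stats

treated = np.array([24, 31, 27, 29, 33, 26, 30, 28])    # with Magic Plant Dust
untreated = np.array([23, 27, 25, 28, 26, 24, 29, 25])  # without

# Null hypothesis: the dust has no effect, so the two group means are equal.
t_stat, p_value = stats.ttest_ind(treated, untreated)

print(f"treated mean = {treated.mean():.1f}, untreated mean = {untreated.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Standard convention: declare the difference "statistically significant"
# only if p < 0.05, i.e., only if data this extreme would be unlikely
# under the assumption that the dust does nothing.
if p_value < 0.05:
    print("Reject the null: the treated plants produced more tomatoes.")
else:
    print("Fail to reject the null: no significant difference detected.")
```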

If the difference, though, is very small, I might not get a good statistical result. So, maybe I do the same thing for ten years in a row. Then, I have repeated the experiment ten times, so my statistics will be more powerful and I can be more certain of an inference. Over time, I get sufficient sample sizes. Eventually I conclude that Magic Plant Dust might have a small effect on the plants, but not every year, maybe because other factors are more important, like how much water they get or the effects of tomato moth caterpillars.

In an alternative Bayesian universe, prior to collecting any data on plant growth, I do something very non-statistical. I read the product label. The label says, “This product contains no active ingredients. Will not affect tomato plants. This product is only for use as a party favor and has no purpose.”

Now, I have what a Bayesian statistician would call a “prior.” I have information that could be used, if I am clever, to produce a statistical model of the likely outcome of the planned experiments. In this case, the likely outcome is that there won’t be a change.

Part of the Bayesian approach is to employ a statistical technique based on Bayes Theorem to incorporate a priori assumptions or belief and new observations to reach towards a conclusion.
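For reference, the theorem itself is compact. With H standing for a hypothesis (say, “the dust works”) and D for the data from the tomato patch:

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}$$

Here P(H) is the prior (what the product label led me to believe), P(D|H) is the likelihood of seeing my tomato counts if the hypothesis were true, and P(H|D) is the posterior, the updated belief after the harvest. The notation is generic, not taken from the paper.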

In my view, the Bayesian approach is very useful in situations where we have well understood and hopefully multiple links between one or more systems and the system we are interested in. We may not know all the details that relate observed variation in one system and observed variation in another, but we know that there is a link, that it should be observable, and perhaps we know the directionality or magnitude of the effect.

The relationship between climate change and floods serves as an example. Anthropogenic climate change has resulted in warmer sea surface temperatures and warmer air. It would be very hard to make an argument from the physics of the atmosphere that this does not mean that more water vapor will be carried by the air. If there is more water vapor in the air, there is likely to be more rain. Taken as a Bayesian prior, the heating of the Earth’s surface means more of the conditions that would result in floods, even if the details of when, how much, and where are vague at this level.
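That “warmer air carries more water vapor” step is textbook physics, the Clausius-Clapeyron relation. As a rough back-of-envelope version (my illustration, not something taken from the paper):

$$\frac{1}{e_s}\frac{de_s}{dT} = \frac{L_v}{R_v T^2} \approx \frac{2.5\times 10^{6}\ \mathrm{J\,kg^{-1}}}{(461\ \mathrm{J\,kg^{-1}\,K^{-1}})\,(288\ \mathrm{K})^2} \approx 0.065\ \mathrm{K^{-1}}$$

so the saturation vapor pressure, and with it the moisture available to storms, rises by roughly 6-7% for each degree C of warming, the same figure Kevin Trenberth is quoted using later in this post.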

A less certain but increasingly appreciated effect of climate change is the way trade winds and the jet stream move around the planet. Without going into details, climate change over the last decade or two has probably made it more likely that large storm systems stall. Storms that may have moved quickly through an area are now observed to slow down. If a storm would normally drop one inch of rain on the landscape over which it passes, but now slows down while raining at the same rate, perhaps 3 inches of rain will be dropped (over a shorter distance). What would have been a good watering of all the lawns is now a localized flood.

That is also potentially a Bayesian prior. Of special importance is that these two Bayesian priors imply change in the same direction. Since in this thought experiment we are thinking about floods, we can see that these two prior assumptions together suggest that post-climate change weather would include more rain falling from the sky in specific areas.

There are other climate change related factors that suggest increased activity of storms. The atmosphere should have more energy, thus more energetic storms. In some places there should be more of the kind of wind patterns that spin up certain kinds of storms. It is possible that the relationship between the temperature of the air at different altitudes, up through the troposphere and into the lower stratosphere, has changed so that large storms are likely to get larger than they otherwise might.

There is very little about climate change that implies the reverse. Though there may be a few subsets of storm-related weather that would be reduced with global warming, most changes are expected to result in more storminess, more storms, more severe storms, or something.

So now we have the question, has climate change caused any kind of increase in storminess?

I’d like to stipulate that there was a kind of turning point in our climate around 1979, before which we had a couple of decades of storminess being at a certain level, and after which we have a potentially different level. This is also a turning point in measured surface heat. In, say, 1970 plus or minus a decade, it was possible to argue that global warming was likely, but given the observations and data at the time, it was hard to point to much change (though we now know, looking back with better data for the previous centuries, that it was actually observable). But, in 2008, plus or minus a decade, it was possible to point to widespread if anecdotal evidence of changes in storm frequency, patterns, and effects, as well as other climate change effects, not the least of which was simply heat.

I recently watched the documentary, “An Inconvenient Sequel.” This is a fairly misunderstood film. It is not really part two of Al Gore’s original “An Inconvenient Truth.” The latter was really Al Gore’s argument about climate change, essentially presented by him. “An Inconvenient Sequel” was made by independent filmmakers with no direct input by Gore with respect to contents and production, though it is mostly about him, him talking, him making his point, etc. But I digress. Here is the salient fact associated with these two movies. An Inconvenient Truth came out in May 2006, so it is based mainly on information available in 2005 and before. In it, there are examples of major climate change effects, including Katrina, but it seems like the total range of effects is more or less explicated almost completely. When An Inconvenient Sequel came out a few weeks ago, a solid 10+ years had passed, and the list of actual climate effects noted in the movie was a sampling, not anything close to a full explication, of the things that had happened over recent years. Dozens of major flooding, storming, drying, and deadly heat events had occurred, of which only a few of each were mentioned, because there was just so much stuff.

My point is that there is a reasonable hypothesis based on anecdotal observation (at least) that many aspects of weather in the current decade, or the last 20 years, or since 1979 as I prefer, are different in frequency and/or severity than before, because of climate change.

A frequentist approach does not care why I think a certain hypothesis is workable. I could say “I hypothesize that flies can spontaneously vanish with a half-life of 29 minutes” and I could say “I hypothesize that if a fly lays eggs on a strawberry there will later be an average of 112 maggots.” The same statistical tests will be usable, the same philosophy of statistics will be applied.

A Bayesian approach doesn’t technically care what I think either, but what I think a priori is actually relevant to the analysis. I might, for example, know that the average fly lays 11 percent of her body mass in one laying of eggs, and that is enough egg mass to produce about 90-130 maggots (I am totally making this up), so that observational results that are really small (like five maggots) or really large (like 1 million maggots) are very unlikely a priori, and results between 90 and 130 are a priori very likely.

So, technically, a Bayesian approach is different because it includes something that might be called common sense, but which is really an observationally derived statistical parameter that the analysis itself takes very seriously. But, philosophically, it is a little like the pitcher of beer test.

I’ve mentioned this before but I’ll refresh your memory. Consider an observation that makes total sense based on reasonable prior thinking, but where the standard frequentist approach fails to reject the null hypothesis. The claim is that there are more tornadoes from, say, 1970 to the present than there were between 1950 and 1970; the null hypothesis is that there is no difference between the two periods. This graph suggests the claim is true…

Annual number of tornadoes for the period 1916-1995; the dashed line connecting solid circles shows the raw data, the red heavy solid line is the result of smoothing. Also shown in the green light solid line is the number of tornado days (i.e., days with one or more tornadoes) per year.

… but because the techniques of observation and measuring tornado frequency have changed over time, nobody believes the graph to be good data. But, it may not be bad data. In other words, the questions about the graph do not inform us of the hypothesis, but the graph is suggestive.

So, I take a half dozen meteorologists who are over 55 years old (so they’ve seen things, done things) out for a beer. The server is about to take our order, and I interrupt. I ask all the meteorologists to answer the question … using this graph and whatever else you know, are there more tornadoes in the later time interval or not? Write your answer down on this piece of paper, I say, and don’t share your results. But, when we tally them up, if and only if you all have the same exact answer (all “yes” or all “no”) then this pitcher of beer is on me.

Those are quasi-Bayesian conditions (given that these potential beer drinkers have priors in their heads already, and that the graph is suggestive if not conclusive), but more importantly, there is free beer at stake.

They will all say “yes” and there will be free beer.

OK, back to the paper.

Following the basic contrast between frequentist and Bayesian approaches, the authors produce competing models, one based on the former, the other on the latter. “In the conventional, frequentist approach to detection and attribution, we adopt a null hypothesis of an equal probability of active and inactive years … We reject it in favor of the alternative hypothesis of a bias toward more active years … only when we are able to achieve rejection of H0 at a high… level of confidence”

In the Bayesian version, a probability distribution that assumes a positive (one directional) effect on the weather is incorporated, as noted above, using Bayes’ theorem.

Both methods eventually work to show that there is a link between climate change and effect in this modeled scenario, but the frequentist approach is very much more conservative and thus, until the process is loaded up with a lot of data, more likely to be wrong, while the Bayesian approach correctly identifies the relationship and does so more efficiently.
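A toy version of that comparison can be sketched in code. This is my illustration, not the paper’s actual model: each year is “active” with a true probability of 0.6 rather than 0.5; the frequentist side waits for a one-sided binomial test to reject p = 0.5 at the 95% level, while the Bayesian side starts from a prior tilted toward more active years (the physics) and tracks the posterior probability of a bias as the years accumulate.

```python
# Toy illustration (not the paper's code): detecting a bias toward "active"
# storm years, frequentist testing vs. Bayesian updating.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_p = 0.6                          # assumed true chance of an "active" year
years = 40
active = rng.random(years) < true_p   # simulated record, one flag per year

# Bayesian prior: physical reasoning says warming favors active years,
# so start from Beta(6, 4), which has mean 0.6, rather than a flat prior.
alpha, beta_prior = 6.0, 4.0

for yr in range(1, years + 1):
    k = int(active[:yr].sum())        # active years observed so far

    # Frequentist: one-sided binomial test of H0: p = 0.5 vs H1: p > 0.5.
    p_value = stats.binomtest(k, yr, 0.5, alternative="greater").pvalue

    # Bayesian: posterior Beta(alpha + hits, beta + misses); report P(p > 0.5).
    posterior = stats.beta(alpha + k, beta_prior + (yr - k))
    prob_bias = 1.0 - posterior.cdf(0.5)

    if yr % 10 == 0:
        print(f"year {yr:2d}: frequentist p-value = {p_value:.3f}, "
              f"Bayesian P(p > 0.5) = {prob_bias:.3f}")
```

In a typical run the posterior probability of a bias is high well before the p-value ever drops below 0.05, which is, in cartoon form, the paper’s point about forecast efficiency.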

The authors argue that the Bayesian method is more likely to accurately detect the link between cause and effect, and this is almost certainly correct.

This is what this looks like: Frank Frequency, weather commenter on CNN, says, “We can’t attribute Hurricane Harvey, or really, any hurricane, to climate change until we have much more data, and that may take 100 years, because the average number of Atlantic hurricanes to make landfall is only about two per year.”

Barbara Bayes, weather commenter on MSNBC, says, “What we know about the physics of the atmosphere tells us to expect increased rainfall, and increased energy in storms, because of global warming, so when we see a hurricane like Harvey it is really impossible to separate out this prior knowledge when we are explaining the storm’s heavy rainfall and rapid strengthening. The fact that everywhere we can measure possible climate change effects on storms, the storms seem to be acting as expected under climate change, makes this link very likely.”

I hasten to add that this paper is not about hurricanes, or severe weather per se, but rather, on what statistical philosophy is better for investigating claims linking climate change and weather. I asked the paper’s lead author, Michael Mann (author of The Madhouse Effect: How Climate Change Denial Is Threatening Our Planet, Destroying Our Politics, and Driving Us Crazy, The Hockey Stick and the Climate Wars: Dispatches from the Front Lines, and Dire Predictions, 2nd Edition: Understanding Climate Change), about Hurricane Harvey specifically. He told me, “As I’ve pointed out elsewhere, I’m not particularly fond of the standard detection & attribution approach for an event like Hurricane Harvey for a number of reasons. First of all, the question isn’t whether or not climate change made Harvey happen, but how it modified the impacts of Harvey. For one thing, climate change-related Sea Level Rise was an important factor here, increasing the storm surge by at least half a foot.” Mann recalls the approach taken by climate scientist Kevin Trenberth, who “talks about how warmer sea surface temperatures mean more moisture in the atmosphere (about 7% per degree C) and more rainfall. That’s basic physics and thermodynamics we can be quite certain of.”

The authors go a step farther, in that they argue that there is an ethical consideration at hand. In a sense, an observer or commenter can decide to become a frequentist, and even one with a penchant for very low p-values, with the purpose of writing off the effects of climate change. (They don’t say that but this is a clear implication, to me.) We see this all the time, and it is in fact a common theme in the nefarious politicization of the climate change crisis.

Or, an observer can choose to pay attention to the rather well developed priors, the science that provides several pathways linking climate change and severe weather or other effects, and then, using an appropriate statistical approach … the one you use when you know stuff … be more likely to make a reasonable and intelligent evaluation, and to get on to the business of finding out in more detail how, when, where, and how much each of these effects has taken hold or will take hold.

The authors state that one “… might therefore argue that scientists should err on the side of caution and take steps to ensure that we are not underestimating climate risk and/or underestimating the human component of observed changes. Yet, as several workers have shown …the opposite is the case in prevailing practice. Available evidence shows a tendency among climate scientists to underestimate key parameters of anthropogenic climate change, and thus, implicitly, to understate the risks related to that change”

While I was in contact with Dr. Mann, I asked him another question. His group at Penn State makes an annual prediction of the Atlantic Hurricane Season, and of the several different such annual stabs at this problem, the PSU group tends to do pretty well. So, I asked him how this season seemed to be going, which partly requires reference to the Pacific weather pattern ENSO (El Nino etc). He told me

We are ENSO neutral but have very warm conditions in the main development region of the Tropics (which is a major reason that Irma is currently intensifying so rapidly). Based on those attributes, we predicted before the start of the season (in May) that there would be between 11 and 20 storms with a best estimate of 15 named storms. We are currently near the half-way point of the Atlantic hurricane season, and with Irma have reached 9 named storms, with another potentially to form in the Gulf over the next several days. So I suspect when all is said and done, the total will be toward the upper end of our predicted range.

I should point out that Bayesian statistics are not new, just not as standard as one might expect, partly because, historically, this method has been hard to compute. So, frequentist methods have decades of a head start, and statistical methodology tends to evolve slowly.

Explaining The Recent Extreme Weather: Global Warming

The human release of greenhouse gasses has ultimately caused changes in weather patterns so that major storm systems in the Northern Hemisphere get wetter and move along more slowly, causing significant rainfall events to occur at a much higher rate than previously. This has become a nearly ongoing phenomenon, with major floods in Canada, Colorado, Texas, Western Europe, Texas again, various places in Asia, more in Europe, Texas again, and so on.

The short version of the story: The jet stream is often fairly linear, traveling around the planet at a high speed, but it can also get all wavy and those waves can become “quasi resonant” meaning that they sit in the same place for a long period of time. Also, they go slower and thus move weather patterns along more slowly. This can cause the aforementioned major rainfall events, as well as persistent droughts. And we’ve had plenty of both of those.
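The “waves tend to stop moving” part has a textbook counterpart (a simplification on my part, not the paper’s full quasi-resonance machinery). For a simple barotropic Rossby wave, the west-to-east phase speed is roughly

$$c \;=\; U \;-\; \frac{\beta}{k^{2}+l^{2}}$$

where U is the background westerly wind, β is the change of the Coriolis effect with latitude, and k and l are the wave numbers. When U weakens, as a smaller pole-to-equator temperature contrast implies, or when the waves are long enough, c approaches zero and the wave pattern, along with the weather parked under it, becomes quasi-stationary.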

I have written quite a bit about this, but especially this item (but see also this). And now we have more research confirming the findings.

The same (or overlapping) team of researchers that did this earlier work has a new paper out in PNAS. Here’s the summary material from the paper:

Significance:

Weather extremes are becoming more frequent and severe in many regions of the world. The physical mechanisms have not been fully identified yet, but there is growing evidence that there are connections to planetary wave dynamics. Our study shows that, in boreal spring-to-autumn 2012 and 2013, a majority of the weather extremes in the Northern Hemisphere midlatitudes were accompanied by highly magnified planetary waves with zonal wave numbers m = 6, 7, and 8. A substantial part of those waves was probably forced by subseasonal variability in the extratropical midtroposphere circulation via the mechanism of quasiresonant amplification (QRA). The results presented here support the overall hypothesis that QRA is an important mechanism driving many of the recent exceptional extreme weather events.

Abstract
In boreal spring-to-autumn (May-to-September) 2012 and 2013, the Northern Hemisphere (NH) has experienced a large number of severe midlatitude regional weather extremes. Here we show that a considerable part of these extremes were accompanied by highly magnified quasistationary midlatitude planetary waves with zonal wave numbers m = 6, 7, and 8. We further show that resonance conditions for these planetary waves were, in many cases, present before the onset of high-amplitude wave events, with a lead time up to 2 wk, suggesting that quasiresonant amplification (QRA) of these waves had occurred. Our results support earlier findings of an important role of the QRA mechanism in amplifying planetary waves, favoring recent NH weather extremes.

The paper is: Role of quasiresonant planetary wave dynamics in recent boreal spring-to-autumn extreme events, by Vladimir Petoukhov, Stefan Petri, Stefan Rahmstorf, Dim Coumou, Kai Kornhuber, and Hans Joachim Schellnhuber.

Global Warming: Getting worse

I recently noted that there are reasons to think that the effects of human caused climate change are coming on faster than previously expected. Since I wrote that (in late January) even more evidence has come along, so I thought it was time for an update.

First a bit of perspective. Scientists have known for a very long time that the proportion of greenhouse gasses in the Earth’s atmosphere controls (along with other factors) overall surface and upper ocean heat balance. In particular, it has been understood that the release of fossil Carbon (in coal and petroleum) as CO2 would likely warm the Earth and change climate. The basic physics to understand and predict this have been in place for much longer than the vast majority of global warming that has actually happened. Unfortunately, a number of factors have slowed down the policy response, and the acceptance of this basic science by non-scientists.

A very small factor, often cited by climate contrarians, is the consideration, mainly during the 1960s and 1970s, that the Earth goes through major climate swings including the onset of ice ages, so we have to worry about both cooling and warming. This possibility was obviated around the time it was being discussed, though people may not have fully realized it at the time, because as atmospheric CO2 concentrations increased beyond about 300ppm, from the pre-industrial average of around 250–280ppm (it is now at 400ppm), the possibility of a new Ice Age diminished to about zero. Another factor militating against urgency is the fact that the Earth’s surface temperatures have undergone a handful of “pauses” as the surface temperature has marched generally upwards. I’m not talking about the “Faux Pause” said to have happened during the last two decades, but earlier pauses, including one around the 1940s that was probably just a natural down swing that happened when there was not enough warming to swamp it. A second, shorter pause happened after the eruption of Mount Pinatubo, in 1991.

Prior to recent anthropogenic global warming, the Earth’s surface temperature squiggled up and down due to natural variability. Some of these squiggles were, at least regionally, large enough to get names, such as the “Medieval Warm Period” (properly called the “Medieval Climate Anomaly”) and the “Little Ice Age.” When the planet’s temperature started going distinctly up at the beginning of the 20th century, these natural ups and downs, some larger and some smaller, caused by a number of different factors, became superimposed on a stronger upward signal. So, when we have a “downward” swing caused by natural variation, it is manifest not so much as a true downturn in surface temperatures, but rather as less of an upward swing. Since about a year and a half ago, we have seen very steady warming, suggesting that a recent attenuation in how much temperatures go up is reversing. Most informed climate scientists expect 2015 and even 2016 to be years with many very warm months globally. So, the second factor (the first being the concern over a possible ice age) is natural variation in the Earth’s surface temperature. To reiterate, early natural swings in the surface temperature may have legitimately caused some scientists to wonder about how much greenhouse gas pollution changes things, but later natural variations have not; scientists know that this natural variation is superimposed on an impressive long term upward increase in the temperature of the Earth’s surface and the upper ocean. Which brings us to the third major factor, the one delaying non-scientists’ acceptance of the realities of global warming and contributing to dangerous policy inaction: denialism.

The recent relative attenuation of increase in surface temperatures, likely soon to be over, was not thought of by scientists as disproving climate models or suggesting a stoppage of warming. But it was claimed by those denying the science as evidence that global warming is not real and that the climate scientists have it all wrong. That is only one form of denialism, which also includes the idea that yes, warming is happening, but does not matter, or yes, it matters, but we can’t do anything about it, or yes, we could do something about it, but the Chinese will not act (there is little evidence of that by the way, they are acting) so we’re screwed anyway. Etc.

The slowdown in global warming is not real, but a decades-long slowdown in addressing global warming at the individual, corporate or business, and governmental levels is very real, and very meaningful. There is no doubt that had we started to act aggressively, say, back in the 1980s when any major hurdles for overall understanding of the reality of global warming were overcome, that we would be way ahead of where we are now in the effort to keep the Carbon in the ground by using clean energy. The precipitous drop we’ve seen in photovoltaic costs, increases in battery efficiency and drop in cost, the deployment of wind turbines, and so on, would have had a different history than they have in fact had, and almost certainly all of this would have occurred faster. Over the last 30 or 40 years we have spent considerable effort building new sources of energy, most of which have used fossil Carbon. If even half of that effort was spent on increasing efficiency and developing non fossil Carbon sources, we would not have reached an atmospheric concentration of CO2 of 400ppm in 2015. The effects of greenhouse gas pollution would be less today and we would not be heading so quickly towards certain disaster. Shame on the denialists for causing this to happen.

I should mention a fourth cause of inappropriate rejection of the science of climate change. This is actually an indirect effect of climate change itself. You all know about the Inhofe Snowball. A US Senator actually carried a snowball into the senate chamber, a snowball he said he made outside where there has been an atypical snowfall in Washington DC, and held it aloft as evidence that the scientists had it all wrong, and that global warming is a hoax. Over the last few years, we have seen a climatological pattern in the US which has kept winter snows away from the mountains of California, contributing significantly to a major drought there. The same climatological phenomenon has brought unusual winter storms to states along the Eastern Seaboard that usually get less snow (such as major snow storms in Atlanta two winters ago) and persistent unseasonal cold to the northeastern part of the US. This change in pattern is due to a shift in the behavior of the Polar jet stream, which in turn is almost certainly caused by anomalous very warm water in parts of the Pacific and the extreme amplification of anomalous warm conditions in the Arctic, relative to the rest of the planet. (The jury is still out as to the exact process, but no serious climate scientists working on this scientific problem, as far as I know, doubts it is an effect of greenhouse gas pollution). This blob of cold air resting over the seat of power of one of the more influential governments in the world fuels the absurd but apparently effective anti-science pro-fossil fuel activism among so many of our current elected officials.

Climate Sensitivity Is Not Low

The concept of “Climate Sensitivity” is embodied in two formulations that each address the same basic question: given an increase in CO2 in the atmosphere, how much will the Earth’s surface and upper ocean temperatures increase? The issue is more complex than I’ll address here, but here is the simple version. Often, “Climate sensitivity” is the amount of warming that will result from a doubling of atmospheric CO2 from pre-industrial levels. That increase in temperature would take a while to happen because of the way climate works. On a different planet, equilibrium would be reached faster or slower. Historically, the range of climate sensitivity values has run from as low as about 1.5 degrees C up to 6 degrees C.

The difficulty in estimating climate sensitivity is in the feedbacks, such as ice melt, changes in water vapor, etc. For the most part, feedbacks will increase temperature. Without feedbacks, climate sensitivity would be about 1.2 degrees C, but the feedbacks are strong, the climate system is complex, and the math is a bit higher level.
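The 1.2 degrees C baseline comes from a standard back-of-envelope calculation (mine, for illustration, not anything in the studies discussed here). The radiative forcing from doubling CO2 is about

$$\Delta F_{2\times} \approx 5.35 \ln 2 \approx 3.7\ \mathrm{W\,m^{-2}},$$

and dividing by the Planck response of roughly 3.2 W m⁻² per degree C gives

$$\Delta T \approx \frac{3.7}{3.2} \approx 1.2\ ^{\circ}\mathrm{C}.$$

The feedbacks (water vapor, ice albedo, clouds) multiply that baseline, which is how the best estimate ends up near 3 degrees C with an uncertain, fat-tailed upper end.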

As time goes by, our understanding of climate sensitivity has become more refined, and it is probably true that most climate scientists who study this would settle on 3 degrees C as the best estimate, but with a wide range around that. The uncertainty below that best estimate, however, is smaller than the uncertainty above it, and the upper end of the range probably has what is called a “fat tail.” This would mean that while 3 degrees C is the best guess, the probability of it being way higher, like 4 or 5, is perhaps one in ten. (This all depends on which model or scientist you query.) The point here is that while it might be 3, there is a non-trivial chance (one in ten is not small for an extreme event) that it would be a value that would be really bad for us.

Anyway, Dana Nuccitelli has a recent post in The Guardian that looks at climate sensitivity in relation to “The Single Study Syndrome.”

There have been a few recent studies using what’s called an “energy balance model” approach, combining simple climate models with recent observational data, concluding that climate sensitivity is on the low end of IPCC estimates. However, subsequent research has identified some potentially serious flaws in this approach.

These types of studies have nevertheless been the focus of disproportionate attention. For example, in recent testimony before the US House of Representatives Committee on Science, Space and Technology, contrarian climate scientist Judith Curry said,

Recent data and research supports the importance of natural climate variability and calls into question the conclusion that humans are the dominant cause of recent climate change: … Reduced estimates of the sensitivity of climate to carbon dioxide

Curry referenced just one paper (using the energy balance model approach) to support that argument – the very definition of single study syndrome …

…As Andrew Dessler told me,

There certainly is some evidence that climate sensitivity may be below 2°C. But if you look at all of the evidence, it’s hard to reconcile with such a low climate sensitivity. I think our best estimate is still around 3°C for doubled CO2.

So there is no new information suggesting a higher climate sensitivity, or a quicker realization of it, but there is a continuation of the consensus that the value is not low, despite efforts by so-called lukewarmers or denialists to throw cold water on this hot topic.

Important Carbon Sink May Be Limited

A study just out in Nature Geoscience suggests that one of the possible factors that may mitigate global warming, the terrestrial carbon sink, is limited in its ability to do so. The idea here is that as CO2 increases, some biological activities at the Earth’s surface increase and store some of the carbon in solid form as biomass. Essentially, the CO2 acts as plant fertilizer, and some of that Carbon is trapped in the detritus of that system, or in living tissue. This recent study suggests that this sink is smaller than previously suspected.

Terrestrial carbon storage is dependent on the availability of nitrogen for plant growth… Widespread phosphorus limitation in terrestrial ecosystems may also strongly regulate the global carbon cycle… Here we use global state-of-the-art coupled carbon–climate model projections of terrestrial net primary productivity and carbon storage from 1860–2100; estimates of annual new nutrient inputs from deposition, nitrogen fixation, and weathering; and estimates of carbon allocation and stoichiometry to evaluate how simulated CO2 fertilization effects could be constrained by nutrient availability. We find that the nutrients required for the projected increases in net primary productivity greatly exceed estimated nutrient supply rates, suggesting that projected productivity increases may be unrealistically high. … We conclude that potential effects of nutrient limitation must be considered in estimates of the terrestrial carbon sink strength through the twenty-first century.

Related, the Amazon carbon sink is also showing long term decline in its effectiveness.

Permafrost Feedback

From Andy Skuce writing at Skeptical Science:

We have good reason to be concerned about the potential for nasty climate feedbacks from thawing permafrost in the Arctic…. [Does recent] research bring good news or bad? [From recent work on this topic we may conclude that] although the permafrost feedback is unlikely to cause abrupt climate change in the near future, the feedback is going to make climate change worse over the second half of this century and beyond. The emissions quantities are still uncertain, but the central estimate would be like adding an additional country with unmitigated emissions the current size of the United States’ for at least the rest of the century. This will not cause a climate catastrophe by itself, but it will make preventing dangerous climate change that much more difficult. As if it wasn’t hard enough already.

Expect More Extreme Weather

Michael D. Lemonick at Climate Central writes:

disasters were happening long before humans started pumping heat-trapping greenhouse gases into the atmosphere, but global warming has tipped the odds in their favor. A devastating heat wave like the one that killed 35,000 people in Europe in 2003, for example, is now more than 10 times more likely than it used to be…. But that’s just a single event in a single place, which doesn’t say much about the world as a whole. A new analysis in Nature Climate Change, however, takes a much broader view. About 18 percent of heavy precipitation events worldwide and 75 percent of hot temperature extremes — defined as events that come only once in every thousand days, on average — can already be attributed to human activity, says the study. And as the world continues to warm, the frequency of those events is expected to double by 2100.

Melting Glaciers Are Melting

This topic would require an entire blog post in itself. I’ll give just an overview here. Over the last year or so, scientists have realized that more of the Antarctic glaciers are melting, and melting more than previously thought, and a few big chunks of ice have actually floated away or become less stable. There is more fresh water flowing from glacial melt into the Gulf of Alaska than previously thought. Related to this, as well as to changes in currents and increasing sea temperatures, sea level rise is spiking sharply.

The Shifting Climate

I mentioned earlier that the general upward trend of surface temperature has a certain amount of natural variation superimposed over it. Recent work strongly suggests that a multi-decade long variation, an up and down squiggle, which has been mostly in the down phase over recent years, is about to turn into an upward squiggle. This is a pretty convincing study that underscores the currently observed month by month warming, which has been going on for over a year now. It is not clear that the current acceleration in warming is the beginning of this long term change … that will be known only after a few years have gone by. But it is important to remember that nothing new has to happen, no new scientific finding has to occur, for us to understand right now that the upward march of global surface temperatures is going to be greater on average than the last decade or so has suggested. We have been warming all along, but lately much of that warming has been in the oceans. Expect surface temperatures to catch up soon.

Heat in Brazil is stressing the electrical grid

Brazil already has an iffy electrical grid, apparently, but very hot conditions are pushing it over the edge. Also, they had a small problem related to a nuclear plant (nothing nuclear, don’t worry). From Reuters:

ONS said it orchestrated 2,200 megawatts of controlled outages in eight states as the hottest day of the year in Sao Paulo, where the temperature hit 36.5 Celsius (97.7 Fahrenheit), and other southeastern cities led to surging demand from air conditioners and other power-hungry appliances.

Eletronuclear, a unit of state-run power company Eletrobras, said nuclear reactor Angra I powered down automatically at 2:49 p.m. local time (1649 GMT) due to a drop in frequency on the national grid. The company said there were no risks to workers or the environment due to the stoppage.

Climate Change = Extreme Weather = More Climate Change

The last several decades of climate change, and climate change research, have indicated and repeatedly confirmed a rather depressing reality. When something changes in the earth’s climate system, it is possible that a negative feedback will result, in which climate change is attenuated. I.e., more CO2 could cause more plant growth, the plants “eat” the CO2, so a negative feedback reduces atmospheric levels of the greenhouse gas bringing everything back to normal. Or, when something changes in the earth’s climate system, we could get a positive feedback, where change in one direction (warming) causes more change in that direction. A developing and alarming example of this would be warming in the arctic causing less summer sea ice in the arctic which warms the arctic which causes less sea ice, etc. etc., with numerous widespread and dramatic effects on climate and weather.
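A generic textbook way to write this down (not specific to any one study mentioned here): if ΔT₀ is the response with no feedbacks and f is the sum of the feedback factors, then

$$\Delta T = \frac{\Delta T_{0}}{1 - f}$$

Negative feedbacks (f < 0) shave the response down, positive feedbacks (0 < f < 1) amplify it, and the closer f creeps toward 1 the more the system runs away with itself.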

Over these decades of observation and research, we’ve discovered that negative feedbacks are rare, and when they occur, they are feeble. Yes, some plants do eat some of that extra CO2, but hardly any. This makes sense. Adding antelopes to the savanna might cause there to be more lions to some extent, but the cap on lion density is not antelopes … it is other lions, staking out territories. After the first few dozen antelopes all you get is a lot of antelopes. Biological systems tend to optimize within some range. Plants can’t really be expected to use more water, more CO2, more other nutrients, just because they are there, beyond some range that they typically use in nature.

Well, we have a new positive feedback: Weather Weirding caused by climate change causes more climate change. Here’s how it works.

First, we warm the arctic. This reduces the temperature gradient between warm tropical air and cooler temperate and arctic air. The gradient drives the atmospheric circulation patterns that include the jet streams, but with a reduced gradient, the jet streams change their behavior. When the gradient is low enough (as it is now most of the time) the polar jet stream shifts from being a more or less simple circle around the earth to a very wavy circle, and the jet stream itself moves more slowly. For reasons that have to do with the math of the atmosphere, when the waviness reaches a certain point the waves themselves tend to stop moving, or move only slowly. So air is still moving through these waves along the jet stream, but the position of the waves remains stable for days and days on end.

Where the wave dips towards the equator, a low pressure system forms in the “elbow” of the wave and sits there for days on end, causing cool conditions and a lot of precipitation. Flooding ensues. Where the wave bulges up towards the pole, a high pressure system forms in the inverted elbow of the jet stream. This brings warm air north, and that air tends to be dry (depending on where it is). This results in heat waves and drought conditions. The reality is more complex than I’ve indicated here, but you get the picture. Weather extremes of both cold and heat occur, and weather extremes of both wet and dry occur.

(For more information on these phenomena see: Why are we having such bad weather?, Linking Weather Extremes to Global Warming, and The Ice Cap is Melting and You Can Help.)

And now comes the newly identified “positive” feedback.

The research by scientists at the Max Planck Institute is published in Nature (see citation below) but summarized on a web page from that institute:

When the carbon dioxide content of the atmosphere rises, the Earth not only heats up, but extreme weather events, such as lengthy droughts, heat waves, heavy rain and violent storms, may become more frequent. Whether these extreme climate events result in the release of more CO2 from terrestrial ecosystems and thus reinforce climate change has been one of the major unanswered questions in climate research. It has now been addressed by an international team of researchers working with Markus Reichstein, Director at the Max Planck Institute for Biogeochemistry in Jena. They have discovered that terrestrial ecosystems absorb approximately 11 billion tons less carbon dioxide every year as the result of the extreme climate events than they could if the events did not occur. That is equivalent to approximately a third of global CO2 emissions per year….that the consequences of weather extremes can be far-reaching. “As extreme climate events reduce the amount of carbon that the terrestrial ecosystems absorb and the carbon dioxide in the atmosphere therefore continues to increase, more extreme weather could result,” explains Markus Reichstein. “It would be a self-reinforcing effect.”

In particular, drought (caused by long-term extremes of heat and by lack of rainfall) causes plants to absorb less CO2. Heavy precipitation increases the flow of water containing carbonate-bearing materials into bodies of water, where the CO2 out-gasses.

I would add to this the relationship between drought, fire, and dark snow in Greenland.

Climate change causes extreme weather which causes more of the same sort of climate change.

Bobby Magill also discusses this here: Can Extreme Weather Make Climate Change Worse?


Reichstein, Markus, Michael Bahn, Philippe Ciais, et al. (2013). Climate extremes and the carbon cycle. Nature. DOI: 10.1038/nature12350

Extreme Weather in the US Northeast and Climate Change

This graph shows the extremes in one-day precipitation in a given month relative to the amount of precip in that month for the Northeastern US. So, if the green bar is at 30%, that means that 30% of that month’s precip fell in one event. The way this is computed is a little complicated, because it is hard to define an “event” in time and space in relation to the time and space coordinates (as it were) we normally use. Check the source of the graph for a more detailed explanation. The point of this graph is that the opposite of what many expect is true: It isn’t the case that the snow was deeper back when you were a kid. It’s deeper now! (Check out this blog post for an explanation for why you may have misremembered your childhood.) There are a number of contributing factors to a pattern like this, with increasing extreme events, but the best way to think of this may be as an increase in the bimodality of the water cycle. Dry events are dryer (you may have noticed widespread drought) and wet events are wetter (as shown in this graph).

Northeastern US extreme precip events; more extreme rain and snow storms in more recent times.
From NOAA National Climatic Data Center.
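The basic statistic behind the graph is easy to compute once you have daily station data. Here is a rough sketch; the file name and column names are hypothetical, not NOAA’s actual format, and the published version uses a more careful definition of an “event” in time and space, as noted above.

```python
# For each month, the share of that month's precipitation that fell in its
# single wettest day. Input file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("daily_precip.csv", parse_dates=["date"])  # columns: date, precip_mm

by_month = df.groupby([df["date"].dt.year, df["date"].dt.month])["precip_mm"]
share = by_month.max() / by_month.sum()   # wettest single day / monthly total

# A value of 0.30 means 30% of that month's precipitation fell in one event.
print(share.tail())
```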