This is Part I of a two-part treatment of new research on climate change. Part II is here.
There is a new paper out, Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise, by Patrick T. Brown, Wenhong Li, Eugene C. Cordero & Steven A. Mauget. It is potentially important for two reasons. One is because of what it says about how to interpret the available data on global warming caused by human-generated greenhouse gas pollution. The other is because of the way in which the results are being interpreted, willfully or through misunderstanding, by climate science contrarians.
I will have a more detailed post on this in a few days, after I’ve gotten responses back from Patrick Brown, lead author, to a number of questions. For now I wanted to make a few preliminary remarks.
These two features … the part about how to interpret the record vs. the part about climate-contrarian commentary … are entirely unrelated. First, a few comments about the science.
Science is often about measuring, describing, and explaining variation. There are usually two main sources of variation. One is variation in a natural system. The other is variation in measurement or some other aspect that amounts to error or noise. Some of that noise is actually part of the system, but not the part you are trying to study. In climate science, the error, the noise, and the part of the variation that is not of direct relevance to understanding climate change can all be thought of as “unforced variation.” Forced variation is the change in the Earth’s temperature (say, at the surface) caused by variation in the sun’s output, the effects of greenhouse gas, the cooling effects of aerosols (dust, etc.), and so on. Unforced variation is a large part of the cause of the wiggles we see in global temperature measurements over time.
The question is, when we see an uptick or downtick, fast or slow or of any particular configuration, in the march of global surface temperature over time, what does that variation mean? Does it mean that the climate system is responding to a forcing (greenhouse gas, the sun, aerosols, etc.), or does it mean that there is random-esque noisy stuff going on?
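To make the forced-vs-unforced distinction concrete, here is a toy sketch in Python. This is not the paper's method or real data; the trend slope, the noise model (simple AR(1)-style persistence), and all parameters are invented purely for illustration. The point is that decade-scale trends in the noisy "observed" series can differ noticeably from the underlying forced trend even though the forcing never changed:

```python
import random

random.seed(42)

years = list(range(1900, 2015))
# Hypothetical forced signal: a steady warming trend (deg C, made-up slope).
forced = [0.008 * (y - 1900) for y in years]

# Unforced noise: each year's wiggle partly remembers the previous year's.
noise = [0.0]
for _ in years[1:]:
    noise.append(0.6 * noise[-1] + random.gauss(0.0, 0.08))

observed = [f + n for f, n in zip(forced, noise)]

def trend_per_decade(series, start, end):
    """Least-squares slope of series[start:end], scaled to deg C per decade."""
    xs = list(range(end - start))
    ys = series[start:end]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * 10

# The forced trend over 1970-1999 is exactly 0.08 deg C/decade by
# construction; the noisy series gives a different number.
print(trend_per_decade(forced, 70, 100))
print(trend_per_decade(observed, 70, 100))
```

With different noise draws, the "observed" decadal trend bounces above and below the forced one, which is the sense in which a slowdown or speedup in measured warming need not imply a change in the forcing.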
Climate scientists have some standard ways of handling variation, and look very closely at these wiggles in temperature curves to try to understand them. This study, by Brown et al, takes a somewhat different (but not outrageously different) look at forced and unforced variation.
Here, I will parse out the abstract of the paper for you:
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention.
Models have predicted a certain set of upward curves of global temperatures (varying across different models or model assumptions). While the actual upward trend of surface temperatures has been as predicted overall, the actual curve is never right where the models say it will be. This is expected. Over the last decade or two, the actual curve has been lower than model predictions. The curve has at present (since early 2014) been shooting upwards, and we may expect to see the actual curve move to the other side of (above) the center of the predictions. This reflects expected and manageable differences between modeled projections and reality.
For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN).
The models predict not just a central line but a range of values, an envelope. The envelope (with upper and lower bounds) is the noise around the central, meaningful projected change.
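The envelope idea can be sketched with a few lines of Python. Again, this is a toy with invented parameters, not the paper's EUN: generate many realizations of unforced noise, then at each time step take the spread across realizations as the envelope's bounds.

```python
import random

random.seed(0)

def noise_run(n_years, persistence=0.6, sigma=0.08):
    """One realization of AR(1)-style unforced noise (made-up parameters)."""
    x, out = 0.0, []
    for _ in range(n_years):
        x = persistence * x + random.gauss(0.0, sigma)
        out.append(x)
    return out

# 1000 alternative "histories" the unforced noise could have taken.
runs = [noise_run(100) for _ in range(1000)]

def envelope(runs, lo=0.025, hi=0.975):
    """At each year, the 2.5th and 97.5th percentile across all runs."""
    bounds = []
    for year_vals in zip(*runs):
        s = sorted(year_vals)
        n = len(s)
        bounds.append((s[int(lo * n)], s[int(hi * n)]))
    return bounds

eun = envelope(runs)
lower, upper = eun[50]
print(lower, upper)
```

An observed temperature that stays inside such bounds around a model's central line is consistent with that forced signal plus noise; one that drifts outside the bounds is harder to square with it. The difference in Brown et al is where the noise characteristics come from (instrumental and reconstructed records rather than the models themselves), not this basic bookkeeping.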
Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability.
This gets to the basic problem of variation. How do we measure and characterize it? Most people who do modeling that I’ve spoken to don’t think the models do a poor job of estimating the noise, and I think Brown et al do not think they do a bad job either. But Brown et al take a look at unforced variation (the EUN) in the climate system from a different angle, using actual data to compare with modeled data.
Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records.
So this new measure will produce the same sort of measurement previously used by climate scientists, but using instrumental (thermometers and satellites) measurements for recent years and proxy indicators (like corals that record temperature changes over time, etc.) for longer periods, covering the last 1,000 years.
We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario’s forced signal, but is likely inconsistent with the steepest emission scenario’s forced signal.
Brown et al found that more traditional methods do a good job of estimating the track of warming and its variation, but may underestimate the degree to which the global warming signal can wander in one direction or another (“wiggles”) when looking at temperature change over decades-long periods. Brown notes, “Our model shows these wiggles can be big enough that they could have accounted for a reasonable portion of the accelerated warming we experienced from 1975 to 2000, as well as the reduced rate in warming that occurred from 2002 to 2013.”
The global warming record has this long-term pattern: warming from the 1910s to the 1940s, a hiatus from the 1940s to the 1970s, and resumed warming from the 1970s to the 2000s. The question is, is this pattern (or other shorter-term patterns) mostly the meaningful result of forced changes in climate (from greenhouse gases and aerosols, mainly), mostly random noise, or about an even mix? Most climate scientists would probably say these longer-term changes are mainly forced with a good dose of noise, while Brown et al might say that noise can explain more than previously thought.
The main criticism I’ve heard from colleagues about Brown et al is that there is too much variation in the empirical data (especially the proxies), and that they may have increased the variation that seems to come from errors by compounding errors. I’m not sure if I agree with that or not. Still thinking about it.
So that’s the science. To me this is very interesting because, as a scientist, I’ve been especially interested in variation and how to observe and explain it. But in a sense, while potentially important, this paper is mostly a matter of dotting the i’s and crossing the t’s. Important i’s and t’s, to be sure. But from the outside looking in, from the point of view of the average person, this is not a change in how we think about global warming. It is a refinement.
But Brown et al also looked at another thing: how the data behave under different starting assumptions about future greenhouse gas emissions, called Representative Concentration Pathways (RCPs). RCPs are different projected scenarios of greenhouse gas forcings. Basically, a bunch of scientists put their heads together and came up with a set of different what-ifs, which differ from one another on the basis of how much greenhouse gas goes into the atmosphere and over what time period.
RCPs are important because the amount of greenhouse gas that is released matters, and the timing of that release matters. The presumed decrease in release, and when that happens, matters as well. It also matters that some greenhouse gas goes away, converts to some other greenhouse gas, etc. So the whole thing actually turns out to be mind-numbingly complicated and numerically difficult to manage. The different RCPs are actually big giant piles of pre-calculated numbers, based on specified assumptions, that climate modelers can download and use in their models.
Here is the bottom line with respect to RCPs. Brown et al, if correct, show that unforced variation — noise — will behave differently under different RCPs. Specifically,
We also find that recently observed GMT values, as well as trends, are near the lower bounds of the EUN for a forced signal corresponding to the RCP 8.5 emissions scenario but that observations are not inconsistent with a forced signal corresponding to the RCP 6.0 emissions scenario.
This is the part of Brown et al that will get the most criticism from climate scientists, and that is most abused by denialists. If you just look at the words, it looks like Brown et al are saying that RCP 8.5, the most extreme of the scenarios, is less likely than RCP 6.0. But they can’t say that. We don’t get to say which RCP is most likely. This, rather, is something we do. The different RCPs are scenarios of what humans (and volcanoes, I suppose) do, not how the climate responds to what we do. In short, Brown et al have noted that the interaction between the internal workings of climate change and what we see as the result (in terms of surface warming) produces a range of different patterns of the actual temperature wiggling around an ideal projected line. That is rather esoteric. It is useful. But it does not say that one or another scenario is more or less likely … because it does not address that question. Brown et al also say nothing about the validity of climate models … they did not look at that. Rather, they have provided an insight, if their work holds up, into how random wanderings of reality from projections, in both directions (cooler than expected, warmer than expected), will emerge depending on what we do with our greenhouse gases.
If you drive faster, when you skid on the ice, you will skid farther. Doesn’t change where the road is.
ADDED: As I suspected, more and more contrarian misuse of this work is happening. Even Rush Limbaugh has misquoted the research, as did the Daily Mail (though there, mainly in their headlines and bullet points … their actual “reporting” is mostly cut and paste from the press release, so that’s lazy journalism and incompetent, mean-spirited editors!). Anyway, you can read all about it here in excellent coverage by Media Matters.