This is Part I of a two-part treatment of new research on climate change. Part II is here.
There is a new paper out, Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise, by Patrick T. Brown, Wenhong Li, Eugene C. Cordero & Steven A. Mauget. It is potentially important for two reasons. One is because of what it says about how to interpret the available data on global warming caused by human generated greenhouse gas pollution. The other is because of the way in which the results are being interpreted, willfully or through misunderstanding, by climate science contrarians.
I will have a more detailed post on this in a few days, after I’ve gotten responses back from Patrick Brown, lead author, to a number of questions. For now I wanted to make a few preliminary remarks.
These two features … the part about how to interpret the record vs. the part about climate-contrarian commentary … are entirely unrelated. First, a few comments about the science.
Science is often about measuring, describing, and explaining variation. There are usually two main sources of variation. One is variation in a natural system. The other is variation in measurement or some other aspect that amounts to error or noise. Some of that noise is actually part of the system, but not the part you are trying to study. In climate science, the error, the noise, and the part of the variation that is not of direct relevance to understanding climate change can all be thought of as “unforced variation.” Forced variation is the change in the Earth’s temperature (say, at the surface) caused by variation in the sun’s output, the effects of greenhouse gas, the cooling effects of aerosols (dust, etc.) and so on. Unforced variation is a large part of the cause of the wiggles we see in global temperature measurements over time.
The question is, when we see an uptick or downtick, fast or slow, or of any particular configuration in the march of global surface temperature over time, what does that variation mean? Does it mean that the climate system is responding to a forcing (greenhouse gas, the sun, aerosols, etc.), or does it mean that there is random-esque noisy stuff going on?
Climate scientists have some standard ways of handling variation, and look very closely at these wiggles in temperature curves to try to understand them. This study, by Brown et al, takes a somewhat different (but not outrageously different) look at forced and unforced variation.
Here, I will parse out the abstract of the paper for you:
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention.
Models have predicted a certain set of upward curves of global temperatures (varying across different models or model assumptions). While the actual upward trend of surface temperatures has been as predicted overall, the actual curve is never right where the models say it will be. This is expected. Over the last decade or two, the actual curve has been lower than model predictions. The curve has at present (since early 2014) been shooting upwards, and we may expect to see the actual curve move to the other side (above) of the center of the predictions. This reflects expected and manageable differences between modeled projections and reality.
For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN).
The models predict not just a central line but a range of values, an envelope. The envelope (with upper and lower bounds) is the noise around the central, meaningful projected change.
Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability.
This gets to the basic problem of variation. How do we measure and characterize it? Most people who do modeling that I’ve spoken to don’t think the models do a poor job of estimating the noise, and I think Brown et al do not think they do a bad job either. But Brown et al take a look at unforced variation (EUN) in the climate system from a different view, by looking at actual data to compare with modeled data.
Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records.
So this new measure will produce the same sort of measurement previously used by climate scientists, but using instrumental (thermometers and satellites) measurements for recent years and proxy indicators (like corals that record temperature changes over time, etc.) for longer periods, over the last 1,000 years.
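The idea of an empirical noise envelope can be sketched in a few lines of code. The following is a toy illustration of my own, not Brown et al’s actual method (their EUN involves careful statistical treatment of instrumental and reconstructed records); the function names, the running-mean detrending, and the synthetic record are all assumptions made for the sake of the sketch:

```python
import numpy as np

def unforced_envelope(record, trend_window=30, percentile=95):
    """Crude empirical EUN: estimate the forced (smooth) component with a
    centered running mean, treat the residual 'wiggles' as unforced noise,
    and take a percentile of their magnitude as the envelope half-width."""
    kernel = np.ones(trend_window) / trend_window
    forced = np.convolve(record, kernel, mode="same")  # smooth component
    residual = record - forced                         # stand-in for unforced noise
    half_width = np.percentile(np.abs(residual), percentile)
    return forced, half_width                          # envelope = forced +/- half_width

def consistent_with_forced(observed, forced_signal, half_width):
    """Do all observed values fall inside the forced signal's envelope?"""
    return bool(np.all(np.abs(observed - forced_signal) <= half_width))

# Synthetic example: a linear warming trend plus random noise
rng = np.random.default_rng(0)
years = np.arange(1900, 2015)
record = 0.008 * (years - 1900) + 0.1 * rng.standard_normal(years.size)
forced, half = unforced_envelope(record)
```

Note that with a 95th-percentile envelope, roughly 5% of noisy points fall outside it by construction; the point of the exercise is only that a wide empirical envelope can absorb decade-scale wiggles without requiring corresponding wiggles in the forced signal.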
We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario’s forced signal, but is likely inconsistent with the steepest emission scenario’s forced signal.
Brown et al found that more traditional methods do a good job of estimating the track of warming and its variation, but may underestimate the degree to which the global warming signal can wander in one direction or another (“wiggles”) when looking at temperature change over decades-long periods. Brown notes, “Our model shows these wiggles can be big enough that they could have accounted for a reasonable portion of the accelerated warming we experienced from 1975 to 2000, as well as the reduced rate in warming that occurred from 2002 to 2013.”
Global warming has this long-term pattern, with warming from the 1910s–1940s, a hiatus from the 1940s–1970s, and resumed warming from the 1970s–2000s. The question is: is this pattern (or other, shorter-term patterns) mostly the meaningful result of forced changes in climate (from greenhouse gases and aerosols, mainly), mainly random noise, or about even? Most climate scientists would probably say these longer-term changes are mainly forced with a good dose of noise, while Brown et al might say that noise can explain more than previously thought.
The main criticism I’ve heard from colleagues about Brown et al is that there is too much variation in the empirical data (especially the proxies), and that they may have increased the variation that seems to come from errors by compounding errors. I’m not sure if I agree with that or not. Still thinking about it.
So that’s the science. To me this is very interesting because, as a scientist, I’ve been especially interested in variation and how to observe and explain it. But in a sense, while potentially important, this paper is mostly a matter of dotting the i’s and crossing the t’s. Important i’s and t’s, to be sure. But from the outside looking in, from the point of view of the average person, this is not a change in how we think about global warming. It is a refinement.
But Brown et al also looked at another thing: how the data behave under different starting assumptions about the nature of global warming, called Representative Concentration Pathways (RCPs). RCPs are different projected scenarios of greenhouse gas forcings. Basically, a bunch of scientists put their heads together and came up with a set of different what-ifs, which differ from one another on the basis of how much greenhouse gas goes into the atmosphere and over what time period.
RCPs are important because the amount of greenhouse gas that is released matters, and the timing of that release matters. As well, the presumed decrease in release, and when that happens, matters. It also matters that some greenhouse gas goes away, converts to some other greenhouse gas, etc. So the whole thing actually turns out to be mind-numbingly complicated and numerically difficult to manage. The different RCPs are actually big giant piles of pre-calculated numbers, based on specified assumptions, that climate modelers can download and use in their models.
Here is the bottom line with respect to RCPs. Brown et al, if correct, show that unforced variation — noise — will behave differently under different RCPs. Specifically,
We also find that recently observed GMT values, as well as trends, are near the lower bounds of the EUN for a forced signal corresponding to the RCP 8.5 emissions scenario but that observations are not inconsistent with a forced signal corresponding to the RCP 6.0 emissions scenario.
This is the part of Brown et al that will get the most criticism from climate scientists, and that is most abused by denialists. If you just look at the words, it looks like Brown et al are saying that RCP 8.5, the most extreme of the scenarios, is less likely than RCP 6.0. But they can’t say that. We don’t get to say which RCP is most likely. This, rather, is something we do. The different RCPs are scenarios of what humans (and volcanoes, I suppose) do, not how the climate responds to what we do. In short, Brown et al have noted that the interaction between the internal workings of climate change and what we see as the result (in terms of surface warming) results in a range of different patterns of the actual temperature wiggling around an ideal projected line. That is rather esoteric. It is useful. But it does not say that one or another scenario is likely or less likely … because it does not address that question. Brown et al also says nothing about the validity of climate models … they did not look at that. Rather, they have provided an insight, if their work holds up, on how random wanderings of reality from projections, in both directions (cooler than expected, warmer than expected) will emerge depending on what we do with our greenhouse gases.
If you drive faster, when you skid on the ice, you will skid farther. Doesn’t change where the road is.
ADDED: As I suspected, more and more contrarian misuse of this work is happening. Even Rush Limbaugh has misquoted the research, as did the Daily Mail (though there, mainly in their headlines and bullet points … their actual “reporting” is mostly cut-and-paste from the press release, so that’s lazy journalists and incompetent, mean-spirited editors!). Anyway, you can read all about it here in excellent coverage by Media Matters.
Looking forward to hearing more about this study.
I’ve made some comments on RC about this kind of issue; to my mind we need to refocus on both geographically limited and shorter-term prediction/projection. And what I see lacking is any push to improve the measurement, rather than analysis, components of the science.
There’s this silly debate going on about ‘how chaotic is the system’ when that question almost certainly can’t be answered with existing science. And more to the point, complaining that Denialists are going to take advantage of it is truly naive; of course they will– if you want to convince the public that you know what you are doing, you have to do things to which the public can relate and feel engaged about.
There are people trying to work on improving instrumentation and geographical coverage, but they get no press. The public hears about the LHC, which is pretty irrelevant in a real-world practical sense, and which deals with stuff only specialists can really understand.
But measuring changes in the AMOC or parts of the Pacific that may seriously affect the weather in the next few decades? Well, throw in a few buoys spaced way too far apart and hope for the best, and don’t even bother getting some pictures on the morning news.
NOTE to Greg: That last sentence doesn’t hold up as an analogy. It depends on the curves in the road and what the terrain is like on either side. Which is kind of what I’m saying we should be talking about. The public is probably at the point where they can really be engaged if we start acting confident about what we have to offer.
you will skid father.
*farther
I spent too much time living in Boston to fall for that one! I even hung around Havad a lot.
Zebra, this complaint of mine is nothing close to naive. It is simply not true that every one of the many papers that come out every day is denialist fodder. I watch the literature, I spot the fodder, I try to get ahead of the curve. Naive, no.
And yes, rephrasing research in terms that the public can relate to is important, also something I do every day. Think of an example of improving instrumentation or refining measurement, use the search engine here, see if I’ve covered it already, and if not, let me know; I may be interested. But don’t just throw out a shotgun critique that has no value. I’m utterly mystified by this particular bug you’ve got going.
Refer to the Law of Conservation of Consonants for the explanation of this.
Greg,
I’m not calling you personally naive, nor am I suggesting that you don’t try to make science relevant to the public. To the contrary, I wouldn’t be reading this if I thought anything like that.
That was a generalized rant directed at climate science PR in general. I want to see some pictures on the news, and I want someone complaining on TV about funding cuts that affect the ability to collect data. Finally, the weather people are acknowledging that ACC/GW exists, and even the RW propaganda machine is shifting from absolute denial to cost/benefit arguments. Science needs to keep up.
I’m suggesting something along the lines of “There’s a good chance climate change will disrupt the AMOC, which will affect weather, fishing, and sea level for New England and Europe. We need to start detailed monitoring and mapping so we can help people adapt.”
I just saw something very like that in reference to characterizing earthquakes caused by fluid injection– those people aren’t hemming and hawing with low resolution paleodata and yet another bit of subtle modeling. They say: ‘It’s happening and it’s dangerous; we need to get a better handle on it to decide what course we should follow.’
That said, I still don’t think your analogy is appropriate. If you’re not still mad at me, I would be happy to explain.
It seems to me there is a difference between
and this
that isn’t simple to overcome. People have heard concerns that the injections could cause small earthquakes, then they’ve seen those earthquakes, in a very short period of time. There is no good way to put a happy spin on an earthquake, so people are concerned.
The predictions for what will come from unchecked climate change don’t talk about the immediate future, but rather years in the future. And who wouldn’t like it a few degrees warmer near them in winter? It’s easy to plant the “there’s nothing to be concerned about here” seed in the minds of the public about climate change. That’s the biggest PR hurdle.
And, in the list of things you’d swear come from the Onion but don’t:
http://www.rawstory.com/2015/04/koch-backed-group-sending-real-scientists-to-lecture-pope-francis-about-biblical-duty-to-pollute/
#7 dean,
What I’m saying is that people are beginning to be open to the association between shorter term local effects and climate change.
It isn’t about making unfounded predictions, but it is about saying “we’re working on it” in a way that the public can relate to. The kind of paper Greg is talking about doesn’t do that; it fosters the idea that it is all abstract and unresolvable. (Which, in any useful amount of time, using the data sources we have, it is.)
How many times have you seen a report on the news about the Mars rover? And how many times have you seen a report about the system of buoys monitoring the AMOC, or the Pacific, or Arctic ice?
Which of them is more relevant to human well-being?
“It isn’t about making unfounded predictions, …”
I didn’t mean that at all, and if it sounded that way I apologize. I don’t disagree with your point, I was simply trying to say making the case for the consequences of climate change will be a far more difficult task than it is for other things. Part of that comes from falling behind the 8 ball from the start, letting the denialists spin things their way for so long.
I agree here too – the difficulty is writing about what the sensors are saying in a way that doesn’t seem too boring or technical to the majority of people.
OK, zebra, I basically agree with most of what you are saying.
The analogy is not the greatest one I’ve ever come up with, but it does fit. The difference between RCPs is magnitude. They discovered larger-scale variability when they used a larger-scale starting point.
Greg, I will look forward to the next post on this.
Dean, where you start is by explaining to people what you are trying to figure out, why it matters to them, and showing them some pictures of real stuff– placing the sensors, where they are built–, and telling a story about the political and economic issues at work.
Look at the ads from the fossil fuel industry some time. And what science is selling, unlike those, is real, not phony.
I’ll leave it at that for now.
OK zebra, I see exactly what you are up to. Folks, y’all are observing the next move, not quite out there yet, of the denialists. They have already tried everything once and failed. This is round three, I think, of an old trick. It is not designed to work, but to distract.
Greg, you lost me completely with that. I’ve been debating Denialists in various forums for well over 20 years, on the science and on mitigation.
What is the ‘trick’ here? Spending more money on data collection? Making people realize that there could be more near-term negative consequences to BAU?
The only thing I’m being critical about is the idea that we don’t need to do more measuring to create regional projections. Refining global models doesn’t tell us much that we don’t already know, and even that will obviously benefit from more data. But getting sensitivity to move a few percent one way or the other isn’t going to have any effect on public policy– Dean is right; people don’t get excited about 2°C, so why should they get excited about 2.2?
What I’m saying is what we need to do to counter what the Denialists have already revealed as the next step, which is arguing cost/benefit. What is your plan to respond to that? Teach them about kriging?
Perhaps I am interpreting your statements incorrectly.
Ya think? Maybe if you explained where you thought I was going it would help me be clearer here and elsewhere.
Maybe I’ve been doing this too long and what I think is obvious isn’t.
Zebra.
I cannot see where you are going. I see plenty of reporting to the public in the media going on, some of it silly some of it good.
#17 Harry,
I don’t see reporting about efforts to improve the resolution of our data on specific phenomena that might have serious consequences.
That means putting more and better sensors to work monitoring polar ice and temperatures– where there are big holes– , or parts of the Pacific, where there are big holes, or characterizing all the parts of the AMOC, or TOA radiation, and so on.
If you can give me some links to reports like that on the evening news or the front page of NYT, I would appreciate it. It could be pictures of researchers placing buoys and sensors in challenging circumstances– always good video, or someone pounding the table at a congressional hearing.
But not another report on how the blogosphere is spinning a couple of papers that might or might not contradict each other, because the results are equivocal. (Which is normal in science, but this particular science is critical to human well-being, so that is not acceptable.)
If you want to get across that fact, and show people that you are doing something, you have to do things they can relate to.
Do you see where I am going now?
Following up on #18:
Here’s an example.
http://www.popsci.com/drones-fly-over-melting-arctic-ice-science
Popular Science is the closest to mainstream I could find for that using google news search.
I mean, everybody loves a drone story, right? Not just people who read climate blogs, which is where I found out about this.
And the real point is, we need to do scientific work at this level of detail– that’s how we’ve always made progress figuring things out. I know there are people engaged in this, in the different areas I mentioned above, but it just gets buried by the supposed ‘controversy’. That, to some extent, is the fault of the scientific community in failing at ‘framing’, which, as Dean said, it has been bad at for a long time.
Zebra.
NOAA have set up a new network of climatic stations called the U.S. Climate Reference Network (USCRN).
Australia has modified existing weather stations to form a new climatic network called ACORN-SAT.
New research projects at Antarctica and Greenland.
New satellites to gather climate-related data such as OCO2 and DSCOVR (probably others as well).
As far as I know the ARGO float program is progressing.
The IPCC has released its 2013 report and is working on the next one.
NASA, NOAA and the US government have launched new websites that discuss and present climate change info for the layperson.
Lots of things happening.
#20 Harry,
If you don’t understand what I am talking about, after I provided an example, I don’t know how to make it clearer.
From the USCRN website:
“The vision of the USCRN program is to maintain a sustainable high-quality climate observation network that 50 years from now can with the highest degree of confidence answer the question: How has the climate of the Nation changed over the past 50 years?”
Well OK then, I guess we’re all set.
Authors’ guest commentary at RC:
http://www.realclimate.org/index.php/archives/2015/05/global-warming-and-unforced-variability-clarifications-on-recent-duke-study/
> Round three, to distract
Those misstating the study, Limbaugh and Wingnut Daily, are also worth considering because they are profitable advertising venues of the ilk discussed in Rick Perlstein’s “The Long Con.” He says: count on the advertisers to know where to find gullible, easily frightened people who are suckers for fake health and fake get-rich scams. And those are the same places where the current wave of climate disinformation is also found. That’s their meat, that’s their desired audience.
They need to fool enough of those people, soon enough to affect the upcoming Paris climate peace talks. Look at the ads in the places carrying the disinformation stories, and read Perlstein’s piece about the cold hard cynicism of those who fleece that audience for their income.
One way to turn the EUN problem around is to step back and display the noise envelope alongside the noisy data it represents.
If the slope of the ensemble is still visible, the difficult problem of deconvoluting forced and unforced variation may be moot.
Sorry for the broken link: here’s the missing image
http://vvattsupwiththat.blogspot.com/2015/03/better-than-best.html