A small “Signal-to-Noise Ratio” means that there is not enough real information (signal) relative to the background noise to make a definitive statement about something. With a sufficiently high Signal-to-Noise Ratio, it is possible to make statistically valid statements about some measure or observation. This applies to a lot of the day-to-day decisions you make in life.
Climate change denialists understand this principle, and they use it to try to fool people into thinking that “the jury is still out” on Global Warming, or that scientists are making up their data, and so on. Here, I want to explain very clearly what a Signal-to-Noise Ratio is and how it works, in a totally understandable way; what this means for understanding Global Climate Change (in particular, warming); and to point you to an excellent paper (“Separating Signal and Noise in Atmospheric Temperature Changes: The Importance of Timescale”) about to be published by Ben Santer and several other authors. Santer’s paper effectively puts an end to climate change denialists’ misuse of data, a practice that has come to be known as “cherry picking” but that I prefer to call “dishonesty.”
Think of any system in which measurements are made again and again and the measurements matter. When to mow the lawn (measurement: length of the grass); when to buy more milk (measurements: how much milk is left, how fast it gets drunk); whether the Earth’s climate is warming because of fossil fuel burning (measurements: change over time in global temperatures, amount of previously trapped CO2 added to the atmosphere over time).
In some cases the measurements are almost always accurate enough that you just take them as they come. For instance, you measure a hole in your wall and you cut a piece of wallboard that will fit in there. Your measurements will be close enough that you need not worry about the variation introduced by changes in temperature of the metal measuring tape, or shrinkage or expansion of the wallboard owing to moisture content. (Except when you invert a number or make some other huge mistake, which I’m sure rarely happens to you!) That is a system with a very high ratio of signal (real variation) to noise (irrelevant variation).
What about your body weight? Let’s say you are out of shape and eat poorly, and one day you decide to go on a diet and exercise a lot. Before you do anything, you weigh yourself; then you start your new routine, and after two days you weigh yourself again. Assume you lost three pounds. Does this mean that over six more days you will lose nine more pounds? (That would be the extrapolation from your first two days over six more days.)
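Just to make the naive extrapolation explicit, here is a toy calculation in Python, using the made-up numbers from the example:

```python
# Naive linear extrapolation from two days of dieting, using the
# made-up numbers from the example above.
days_measured = 2
pounds_lost = 3.0

rate = pounds_lost / days_measured   # 1.5 pounds per day
extra_days = 6
projected = rate * extra_days        # 9.0 pounds over six more days

print(f"Naive projection: {projected:.0f} more pounds in {extra_days} days")
```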
The answer, of course, is no, and if you’ve been through this before you’ll know why. First, you cheated. You wore heavy clothing and kept your keys in your pocket and had a big meal before the first measurement, knowing that if you take the second measurement naked, after having trimmed your eyebrows and not eaten for several hours, you’ll get some extra poundage off. Also, as you may know, when you shift from a positive energy balance (low activity levels, high caloric intake) to a negative one (dieting, exercising), the first thing that happens is you lose “water weight” (a form of energy storage called glycogen, which is relatively heavy because it is stored with a lot of water, and which is used up first when you start to “burn fat”). These two factors are the main causes contributing to your loss of weight over the first two days. They are not real fat loss, which is the loss you are trying to effect and measure over a few months of hard work and dieting. This measurement, three pounds in two days, cannot be used to estimate your future progress. Too much noise, not enough signal.
If, on the other hand, you weigh yourself every week for 12 weeks while doing the same sorts of activities and eating the same diet, and plot those data on a graph, you could predict your future body mass reasonably well with a proper analysis of the line the data produce. The signal has gone up; the noise has been swamped out.
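As a rough sketch of what that “proper analysis” might look like, here is a straight-line fit to twelve invented weekly weigh-ins (the numbers are mine, not real data):

```python
import numpy as np

# Hypothetical weekly weigh-ins (pounds) over 12 weeks of consistent
# diet and exercise; the numbers are invented for illustration.
weeks = np.arange(12)
weights = np.array([210.0, 209.1, 208.7, 207.2, 206.9, 205.5,
                    204.8, 203.6, 203.1, 202.0, 201.4, 200.3])

# Fit a straight line: the slope is the underlying rate of loss (the
# signal), and the scatter around the line is the noise.
slope, intercept = np.polyfit(weeks, weights, 1)

# Predict future body mass by extending the fitted line.
week_16 = slope * 16 + intercept
print(f"Trend: {slope:.2f} lb/week; predicted weight at week 16: {week_16:.1f} lb")
```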
Did you notice what just happened? When we went from a short time period to a longer one, we got more signal and less noise. This is what usually happens with Signal-to-Noise Ratios. When the scope of the data is expanded appropriately, the variation in the data tends to be dominated by long-term, larger-scale, overarching trends, and if that is what we are trying to measure, then we are getting more signal and less noise.
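You can watch this happen with simulated data. The sketch below assumes a made-up daily trend buried in made-up random scatter, and shows that the signal-to-noise ratio of a fitted trend grows as the measurement window lengthens:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: a slow trend (signal) buried in day-to-day noise.
# Both numbers are assumptions chosen for illustration.
trend_per_day = 0.02
noise_sd = 1.0

def trend_snr(n_days, n_trials=500):
    """How reliably a linear fit recovers the trend from n_days of data."""
    t = np.arange(n_days)
    slopes = []
    for _ in range(n_trials):
        y = trend_per_day * t + rng.normal(0.0, noise_sd, n_days)
        slopes.append(np.polyfit(t, y, 1)[0])
    # Signal-to-noise: the true trend vs. the scatter in estimated trends.
    return trend_per_day / np.std(slopes)

for n in (10, 50, 200, 1000):
    print(f"{n:5d} days: S/N of fitted trend ~ {trend_snr(n):.1f}")
```

With these assumed numbers, ten days of data give a ratio well below 1 (the fit is mostly noise), while a thousand days give a ratio far above 1.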
I quickly add that some analyses require smaller scales. If you want to measure the effects of burning off glycogen … that water-weight stuff … then there will be too much “noise” in long-term dieting data, which would mostly reflect changes other than glycogen storage and use. If I want to know whether it is true that people commit stranger crimes during a full moon, then I need to divide my data into segments of time that match full moon vs. not full moon. The way the data are sliced up determines what you can study with those data, and how well that study can turn out.
And since the relationship between releasing fossil carbon, in the form of CO2, into the atmosphere and global temperatures is a long-term phenomenon occurring over several decades, it is appropriate to measure this phenomenon over several decades.
The earth’s temperature is measured in three broadly different ways. All are “proxy indicators” (aka proxies), or indirect measures, even though only some of them are called this. The three ways are:

1) Temperature gauges. Thermometers and the like measure temperature indirectly via expansion and contraction of liquids, changes in resistivity of conducting materials, etc. There are thousands and thousands of these “thermometers” around the world, and they have been applied to measuring the temperature of the air just over the surface of the earth for a very long time. There are millions of measurements in this data set.

2) Satellite measurements. Satellites collect temperature data in different ways, and can measure the temperature of the Earth’s atmosphere at various levels, or the earth’s surface (including the ocean’s surface). The key thing to know about satellite measurement is that a very large amount of data is collected in a short period of time, so it is really good data, but the data set itself goes back only a few decades because Space Travel has only recently been developed by Earthlings.

3) Biological or chemical signals in ancient records. These are diverse but often quite accurate measures of temperature that are often expensive and difficult to obtain, but that allow measurements to be extended in time and space well beyond the other techniques. For instance, certain kinds of “plankton” do well in certain ocean surface temperatures, while others prefer different temperatures. By looking at which plankton thrive vs. do not thrive, one can estimate the sea surface temperature fairly accurately. In many places the plankton die off in large numbers seasonally, and their tiny corpses are trapped in layered sediments on the ocean floor. By sampling these layered sediments, extracting and counting the plankton, we can reconstruct sea surface temperatures going back many tens of thousands of years.
These three sources of information are independent of each other and are used to verify and calibrate each other. I am fortunate to have been able to work with these data and to supervise MA and PhD students working with them at the University of Minnesota’s Lake Research Center (LRC), where I was an advising faculty member for several years. Squiggles (climate data over long periods of time) and the study of squiggles are huge fun.
There are several different factors that determine the earth’s temperature. The first one, and the one that I’d like to dispose of right away, is horizontal synchronic variation: how temperature varies across space at any one moment. Right now as I write this it is 79 degrees Fahrenheit in Minneapolis, MN, but it is probably some frightfully cold temperature on Table Mountain near Cape Town, South Africa. It is colder there because they are having winter in the Southern Hemisphere while it is summer in the Northern Hemisphere, because Table Mountain is thousands of feet above sea level, and because it is night there while it is noon here. This sort of variation is easily dealt with by averaging out all the measurements over a large area across an entire year. Check.
A second major form of variation in the temperature of the sea surface and the atmosphere is short- or medium-term change such as ENSO (which you may know of as “El Niño and La Niña”). Heat from the sun falls unevenly on the earth, and then spreads out across both land and sea. The resulting dynamic is very complicated. Some of this dynamic turns into seasonal weather patterns such as monsoons, some of it turns into regular ocean currents, and some of it turns into a kind of oscillation which causes several years in a row in a given spot to be one way (say, cool and wet) and the next several years in a row in that spot to be a different way (say, warm and dry), with in-between years being in between. These cycles are long by seasonal standards and short by climate-change standards. The influence that ENSO has over our climate is large and comes in chunks several years long, so the temperature change (or any other weather change) over, say, five years or a decade may be more influenced by this factor than by anything else.
By now you know exactly how to fix this, right? Average out temperature readings over periods long enough to swamp out ENSO variations. Compare decade to decade rather than year to year, for instance.
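Here is a small simulation of that idea. The trend, the ENSO-like oscillation, and the weather noise are all invented numbers, chosen only to show how decade averages swamp out multi-year wiggles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual "temperatures": a steady warming trend, plus an
# ENSO-like oscillation a few years long, plus random weather noise.
years = np.arange(1950, 2010)
t = years - years[0]
trend = 0.015 * t                              # assumed long-term warming
enso = 0.3 * np.sin(2 * np.pi * t / 5.0)       # crude stand-in for ENSO
weather = rng.normal(0.0, 0.15, t.size)
annual = trend + enso + weather

# Year-to-year differences are dominated by the oscillation and noise...
print("Adjacent-year changes:", np.round(np.diff(annual)[:5], 2))

# ...but decade averages swamp them out and expose the trend.
decade_means = annual.reshape(-1, 10).mean(axis=1)
print("Decade means:", np.round(decade_means, 2))
```

The adjacent-year changes bounce up and down, while the decade means climb steadily: same data, different slicing.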
A third form of variation comes in the form of gases and aerosols (dust and yeck and stuff) belched out by major volcanoes (or even groups of smaller volcanoes). Generally speaking, a major volcanic eruption will cause cooling of the earth’s atmosphere pretty much right away, with the effects of that cooling diminishing over a few years to the point where they are no longer measurable. Again, averaging out over several years helps to eliminate the effects of volcanic activity.
And there are other factors as well.
With climate data, you must adjust the scope of the data to the question being asked. A century-long average of data from Ipswich, Massachusetts will not reveal ENSO-related climate patterns. Annual sea surface temperature data for the Pacific examined over 30 years will. Studying the effects of releasing previously fossilized carbon into the atmosphere, via the burning of fossil fuels (mainly coal and oil) since 1850, requires averaging across space (taking a “global” temperature reading) for each year over several years, and then overlooking the squiggles that show ENSO- or volcano-related variation and focusing on multi-decade trends.
And now on to the paper by Santer et al.
There are two take-home messages that go with this paper. First, the data showing that the Earth’s surface and lower atmosphere are warming are overwhelming and come from multiple independent sources. Second, “cherry-picking” of temperature data by those who for some reason can’t (or refuse to) grasp that the science on global warming is very clear creates unnecessary and, in my view, dangerous confusion in the media and among the public.
The scientific record of surface temperature changes is based on millions of temperature measurements taken with thermometers located at thousands of weather data collection stations around the world. These records clearly show pronounced warming of the land and ocean surface since 1900. The paper by Santer et al. analyzes satellite measurements of the temperature of the region of the atmosphere from the surface to roughly 5 miles up (the lower troposphere). These measurements are totally independent of the millions of thermometer measurements, but they show the same pattern for the period that can be compared. The satellite data also show statistically significant and very obvious global-scale warming of the lower troposphere. The magnitude of the lower tropospheric warming is about 0.9 degrees Fahrenheit (0.5 degrees Celsius) over the last 32 years, the period for which there is a good satellite-based temperature record.
Numerous studies have identified a human “fingerprint” in global temperature change, and they do so by looking at temperature change over several decades, as described above. In doing this, the influence of year to year or other short term “noise” is reduced, allowing the identification of a signal caused by greenhouse gases.
Recently, politicians, climate change denialists, and a very small number of scientists have argued that the small surface and tropospheric warming we have observed since 1998 is at odds with what we would expect from warming by greenhouse gases released by fossil fuel burning. In addition, there has been a claim that computer climate models cannot produce 10-year periods with little or no warming, suggesting that the models are broken.
This paper shows that such claims are wrong. Climate models can simulate 10-year periods with minimal warming. The study shows that lower-atmosphere temperature records must be at least 17 years long to reliably separate the signal from the noise. When people choose to look only at a single, noisy, 10-year period, they are cherry-picking.
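A toy version of this point: simulate a series with a steady warming trend plus slow, autocorrelated noise (all numbers invented, and this is only in the spirit of the analysis, not the paper’s actual method), and compare trends computed in 10-year versus 17-year windows:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly temperature anomalies: a steady warming trend plus
# ENSO-like "red" (autocorrelated) noise. All numbers are assumptions.
n_months = 32 * 12
trend = 0.15 / 120 * np.arange(n_months)      # 0.15 deg per decade

noise = np.empty(n_months)                    # AR(1) red-noise process
noise[0] = 0.0
for i in range(1, n_months):
    noise[i] = 0.9 * noise[i - 1] + rng.normal(0.0, 0.1)

series = trend + noise

def windowed_trends(years):
    """Fitted trend (deg per decade) in every window of the given length."""
    n = years * 12
    t = np.arange(n)
    return np.array([np.polyfit(t, series[i:i + n], 1)[0] * 120
                     for i in range(series.size - n + 1)])

for years in (10, 17):
    tr = windowed_trends(years)
    print(f"{years}-yr windows: trends span {tr.min():+.2f} to {tr.max():+.2f} "
          f"deg/decade; {(tr <= 0).mean():.0%} show no warming")
```

Even though every month of this synthetic series contains the same underlying warming, a fair fraction of the short windows show flat or negative trends; the longer windows rarely do. Pick the right short window and you can “prove” anything.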
So, what does the study show about Global Warming?
For many years, satellite-based estimates of temperature change developed by scientists at the University of Alabama at Huntsville (UAH), which show warming but are also influenced by shorter-term natural “noise,” were taken to imply that there had been no global-scale warming of the troposphere since the beginning of the satellite records in 1979. This temperature record was thus considered to be “evidence of absence” of human effects on climate. However, the UAH dataset now has a signal-to-noise ratio of almost 4: global-scale warming over the 32 years from 1979 to 2010 is now four times larger than computer model estimates of internal climate “noise” on the same timescale. Purely natural changes in climate are highly unlikely to explain the overall warming trend in the UAH data. The study also looked at the relationship between computer models and actual observations at decade-long timescales, to test the assertion (which has been proposed) that noise and signal have different relationships in the models than they do in real life. The study found no evidence of such a bias. The study examined 22 different computer models and found that on average they overestimated the size of observed temperature “noise.” Thus, the signal-to-noise ratios found in this study of the UAH satellite data are conservative, and the true ratio of CO2-based global warming (signal) to natural variation (noise) is probably even larger than 4.
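For the flavor of the calculation behind that “almost 4,” here is a sketch with stand-in numbers (the observed trend and the spread of the noise-only trends are assumptions, not the paper’s values):

```python
import numpy as np

# Sketch of the signal-to-noise calculation. The observed trend and the
# spread of "noise-only" trends below are assumed stand-ins, not the
# actual values from the UAH record or the model control runs.
rng = np.random.default_rng(1)

observed_trend = 0.14                        # assumed 32-year trend, deg C/decade

# In the paper, the noise estimate comes from model runs with no human
# forcing; here we just draw pretend 32-year trends from a distribution.
noise_trends = rng.normal(0.0, 0.035, 10_000)

snr = observed_trend / noise_trends.std()
print(f"Signal-to-noise ratio: {snr:.1f}")   # about 4 with these numbers
```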
Global warming is real. When you hear things like “the last decade was colder than the previous decade” or “during the years X to Y (where the difference is less than 15 years or so) the earth cooled,” you are probably hearing about data from only one part of the planet (a failure to average across space) or from a cherry-picked time period just after a volcano went off or some other short-term variation occurred. To put a finer point on it: when you hear that sort of thing, you are being lied to.
Oh, and by the way, when you repeat those lies, or say things like “Huh… global warming … the jury is still out…” then you look and sound, well, dumb. Because you are being dumb. Don’t be dumb. We can’t afford any more of this distraction.
Santer, B., Karl, T., Lanzante, J., Meehl, G., Stott, P., Taylor, K., Thorne, P., Wehner, M., Wentz, F., Mears, C., Doutriaux, C., Caldwell, P., Gleckler, P., Wigley, T., Solomon, S., Gillett, N., & Ivanova, D. (2011). Separating Signal and Noise in Atmospheric Temperature Changes: The Importance of Timescale. Journal of Geophysical Research. DOI: 10.1029/2011JD016263