In order to do good climate science, you have to understand and control for the sources of variation in the system. In any system that involves a metric changing over time, there are four sources of variation:
1) Measurement or observational error (goofs, inaccuracies, bad calibration). The speed was 23 feet per second, but the instrument read 22.5, or the observer wrote down 32 by accident, and so on.
2) Internal (secular, natural) variation. If A causes changes in B over time, variation in B that would have happened anyway doesn't count in understanding the A-B link.
3) Causal relationship (causal arrow directionality). You have to know which part of the system causes variation in which other part, and be able to distinguish A causing variation in B from B causing variation in A. Since both can happen at once and the strength of the causal effect can change, causal direction can itself be modeled as a source of variation.
4) Cause. If changes in A cause a certain change in B, then that change (in B) is accounted for by A, so this is a source of variation, and often the one you are looking to measure. A quick numerical sketch of how these pieces add up follows this list.
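To make the bookkeeping concrete, here is a minimal toy sketch (mine, not from any paper, with made-up numbers) that builds a simulated record as the sum of a forced trend, internal variability, and measurement error, then shows how each piece contributes to the total variance and how a naive trend fit lumps them together:

```python
# Toy decomposition of a simulated record into its sources of variation.
# All magnitudes (trend, amplitude, noise level) are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1980, 2021)

forced = 0.02 * (years - years[0])                # hypothetical forced trend: 0.02 units/yr
internal = 0.15 * np.sin(2 * np.pi * years / 11)  # hypothetical quasi-periodic internal variability
noise = rng.normal(0.0, 0.05, size=years.size)    # hypothetical measurement/observational error

observed = forced + internal + noise

for name, part in [("forced", forced), ("internal", internal), ("noise", noise)]:
    print(f"{name:8s} variance: {part.var():.4f}")
print(f"observed variance: {observed.var():.4f}")

# A naive straight-line fit to the raw record mixes all three sources together,
# which is exactly the bookkeeping problem described in the list above.
slope = np.polyfit(years, observed, 1)[0]
print(f"naive fitted trend: {slope:.4f} units/yr (true forced trend: 0.0200)")
```

The point of the toy is only that the "observed" variance is a blend, and that attributing it correctly requires knowing (or estimating) the other sources, not just fitting a line.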
In climate science this can be quite complex, and some of the most important work in the field is about determining what causes what. Accordingly, some of the most significant questions that have arisen about, for example, anthropogenic global warming have been about the true sources of variation. For instance, if increases in atmospheric CO2 lagged behind increases in average annual temperature, it would be difficult to argue that CO2 is a cause of warming. It turns out that this is not the case (the CO2 increase does NOT lag the temperature increase), though denialists have suggested that it does.
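For the lead/lag question specifically, the basic check is a lagged correlation: shift one series against the other and see which offset lines up best. Here is a hedged sketch on purely synthetic data (the "driver" and "response" series and the lag are invented for illustration, not drawn from any real CO2 or temperature record):

```python
# Lagged-correlation sketch on synthetic data: which offset best aligns two series?
import numpy as np

rng = np.random.default_rng(0)
n = 200
driver = np.cumsum(rng.normal(0, 1, n))              # stand-in "forcing-like" series
true_lag = 3                                         # the response trails the driver by 3 steps
response = np.roll(driver, true_lag) + rng.normal(0, 0.5, n)
response[:true_lag] = response[true_lag]             # pad over the wrapped-around start

def lag_correlation(x, y, lag):
    """Correlation of x with y shifted by `lag` steps (positive lag: y trails x)."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

lags = range(-10, 11)
best = max(lags, key=lambda k: lag_correlation(driver, response, k))
print(f"best-correlated lag: {best} steps (positive means the response trails the driver)")
```

On real records the analysis is far more involved (autocorrelation, timescales, multiple forcings), but the sign of the best-fitting lag is the crude version of the "which came first" argument.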
Kevin Trenberth, John Fasullo and John Abraham have just put out a paper in Remote Sensing that looks at three of these factors … the last three … and how they influence the understanding of climate modeling and measurement. They look in particular at a set of previously published papers that failed to account for one or another of the main sources of variation, and that also lost track of other features that influence the measurement of variation, such as the scale of analysis, and as a result came up with counter-intuitive results that would have been important if they were real, but since they were erroneous were, well, bad. As in bad science.
The authors also look at papers that have critiqued these inadequate studies. Trenberth et al., in the end, make a very good argument that …
… in any analysis, it is essential to perform a careful assessment of (1) uncertainty in any data set or method and (2) causal interpretations in the fields observed; while (3) accounting for the natural variability inherent in any observed record.
Like I said.
Or else you get …
… erroneous conclusions and widespread distortion of the science in the mainstream media. … Addressing [relevant] questions in even a cursory manner would [avoid] major mistakes.
These authors really don’t say anything that is not generally understood by scientists, but they do apply these criteria for quantitative research to specific recent cases and do an excellent job of summarizing and tying up this recent spate of confusion and misunderstanding perpetuated by researchers who are doing bad climate science instead of good climate science.
Trenberth, K., Fasullo, J., & Abraham, J. (2011). Issues in Establishing Climate Sensitivity in Recent Studies. Remote Sensing, 3(9), 2051-2056. DOI: 10.3390/rs3092051