January was warm, globally. A fun fact of limited importance: January’s average global temperature in the NASA GISS database has an anomaly of 75 (expressed, as is standard, in hundredths of a degree C above a baseline), which is higher than the average for any full year in that database. (Lots of months are higher than the average, but only recent ones!)

January 2015 was the second warmest January in this data set. The graph above also indicates which of the Januarys in the data base are in the top ten, and obviously, they are all recent.

So, we’ll do this 11 more times and see how the year goes. Since 2014 was the warmest year in most data sets, it is going to be hard to beat. But you never know.



That doesn’t necessarily mean it will be hard to beat.

Well, since there tends to be a lot of up and down squiggling, one can’t assume. But since it is getting ever warmer, it could happen.

Some scientists such as Guy McPherson say that we are in the beginning stages of exponential climate change. If true, it wouldn’t surprise me if 2015 beats the warmest year record.

2014 could be hard to beat due to “regression to the mean”, I expect. It will be broken sooner or later because of the global warming trend, of course. But if 2015 sets a new record again without an El Nino, I think I would start to be afraid.

Harry, I think “regression towards the mean” assumes that what is being measured does not change between measurements. Which is not true in this case since we know the earth is warming. I also think it is safe to start being afraid as of now.

Harry, the problem with regression to the mean is that it doesn’t work quite the same way when the mean is moving. Oh, I see Raucous has just said that. So what he said.

Also we have good reason to believe that the first few months of 2015 are going to be relatively warm.

The mean is nonstationary, but the trend is small enough that one year’s worth of expected change is small compared to interannual variation. IOW, the signal to noise ratio is poor. So “regression to mean” still makes back to back records unlikely.

Written on ipad with clumsy fingers, apologies for poor composition.

Yes, the variance is high enough that one would expect that.

However, regression (without qualifications) to the mean is not applicable for yet another reason: lack of independence. There are strong patterns of forcings that transcend years. One year predicts the next to a stronger degree than distributional statistics based on recent years would suggest. We were able to say that this January (and February) were going to be very warm several months ago.

What makes back to back records somewhat unlikely is that a very strong El Nino a few years back dominates the recent trend to such an extent that a record is hard to make. E.g., 2013 may have been a record year had it not been competing with the Babe Ruth of years. So we’d be looking at two years in a row, with a likely third, being a record.

On “regression to the mean”.

There is an upward trend in temps, but the natural variability between years is so high that the temp changes between years are more or less random – so I suspect regression to the mean still applies.

I didn’t consider some sort of autocorrelation effect between years, but I guess it is possible.

Oh I just saw Ned has already anticipated my response.

Yes, and I anticipated your response to yourself . These comments are not independent.

Harry:

Trouble is, 2014 was actually below the trend or “mean”: http://www.woodfortrees.org/plot/gistemp/from:1974/trend:12/plot/gistemp/from:1974/compress:12

So “regression to the mean”, should it occur, will actually mean a new record. So you are quite justified in being afraid now.

Interesting looking at the entire GISS record with the trend since 1974: http://www.woodfortrees.org/plot/gistemp/from:1974/trend:12/plot/gistemp/from/compress:12

It has been time to be afraid for some time now.

“Trouble is, 2014 was actually below the trend or ‘mean’”

That’s sensitive to the choice of start date for the trend. You picked 1974. If you started five years earlier or five years later, you’d find the opposite. So, not a very robust argument.

Over the past half-century, there have been 12 new records set. Of those, three were immediately followed by another new record, while nine were not.

Also over the past half-century, the delta from year x to year x+1 is negatively correlated with the delta from year x+1 to year x+2. So a year that’s warmer than the previous one tends to be followed by a year that’s colder, and vice versa.

But … the effect is small and the R2 value is low. The prediction for 2015 would be just 0.009C below 2014.

Basically, it’s a coin flip at this point.
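The delta-correlation observation above is easy to reproduce in miniature. The sketch below uses a made-up stand-in series (a linear trend plus white noise, not actual GISTEMP values); for a series like that, the expected correlation of consecutive year-to-year deltas is -0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1965, 2015)
# Made-up stand-in for the GISTEMP annual series: linear trend plus white noise
temps = 0.017 * (years - years[0]) + rng.normal(0.0, 0.09, len(years))

deltas = np.diff(temps)                         # change from year x to year x+1
r = np.corrcoef(deltas[:-1], deltas[1:])[0, 1]  # correlation of consecutive deltas
print(f"lag-1 correlation of year-to-year deltas: {r:.2f}")
```

With pure white noise around a trend the correlation comes out strongly negative; the much weaker effect reported for the real data reflects the extra year-to-year persistence (ENSO and the like) that white noise doesn’t have.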

“These comments are not independent.”

There’s an anomalous lack of trolling so far. Let’s hope that we continue to avoid regression to the mean(ies).

Chris, yes, that’s very true!

Ned, the upward trend over a few years is not as sensitive to the starting point if the trends are a few years long.

No. Five years earlier or later brings 2014 to right on the regression line (to within 0.01 deg C). So “regression to the mean” will mean a new record. So it is a robust argument.

You can be afraid.

Most of those 12 new records were significantly above a range of trend lines. 2014 was not.

But a loaded coin flip.

Yes, I think the key point here is that there is a persistent trend. Regression towards the trend is the operative concept, as pointed out.

This is also why arguing over the statistical significance of a “warmest year” or “warmest month” is bogus. Given 130+ years of data showing a trend, the appropriate statistic is NOT a t-test on the newest data point’s position in a distribution of a handful of recent points in the series.

Plus here we have good underlying science. We have a cause and an effect. The fact that the cause adequately predicts the effect over time confirms that the science is right. So, the value of the trend analysis is not to prove the cause-effect relationship (any more) but rather to track it and measure it more precisely.

“No. Five years earlier or later brings 2014 to right on the regression line (to within 0.01 deg C). So ‘regression to the mean’ will mean a new record. So it is a robust argument.”

Eh. If “0.01 deg C” is the threshold, then 1974 and 1984 are the *only* start years for a trend that would put 2014 below the regression line. So still not very robust.

I will revise my thinking on this a bit. Let’s try again:

If you think that the expected value for 2015 would be best described by a linear trend starting during the 30-year window from 1964 to 1993, then you would probably expect 2015 to be at least 0.01C warmer than 2014. And that is a defensible argument.

But if you think that the expected value for 2015 would be best described by the statistics of more recent years (post-1994), then you would probably expect 2015 to be at least 0.01C cooler than 2014.

I would never make the latter argument for long-term predictions or for sweeping statements about climate sensitivity, etc. Obviously you want to look at longer-term trends to understand the long-term behavior of the climate.

But I do think that for predicting what’s going to happen *next year*, the statistics of recent years are more important. For now I don’t have time to go into more detail, but maybe later.

“This is also why arguing over the statistical significance of a ‘warmest year’ or ‘warmest month’ is bogus. Given 130+ years of data showing a trend, the appropriate statistic is NOT a t-test on the newest data point’s position in a distribution of a handful of recent points in the series.”

“Plus here we have good underlying science. We have a cause and an effect. The fact that the cause adequately predicts the effect over time confirms that the science is right. So, the value of the trend analysis is not to prove the cause-effect relationship (any more) but rather to track it and measure it more precisely.”

I strongly agree with both those paragraphs.

The only way to do this is to start 3 years back and calculate every line going back maybe 30 years and then look at the distribution of predicted points weighted by sample size. That may cause even more confusion but would be fun.

I did something slightly different. I looked at all 30-year trends in GISTEMP (i.e., those ending in 1910, 1911, … 2013) and compared them to the 10-year trends. For both sets of trends I used the trend to predict the following year (i.e., 1911, 1912, … 2014) and compared the predictions to the observed values.

In general, the predictions from 30-year trends seemed to do *slightly* better than the predictions from 10-year trends.

The difference in accuracy was small, but pretty consistently in favor of the predictions from the longer time period.
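The backtest described above can be sketched as follows. The series here is a hypothetical stand-in (linear trend plus white noise), not actual GISTEMP, so the printed numbers only illustrate the mechanics, not the real result:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2015)
# Hypothetical stand-in for GISTEMP: linear trend plus white noise
temps = 0.008 * (years - years[0]) + rng.normal(0.0, 0.1, len(years))

def mean_abs_prediction_error(window):
    """Fit an OLS line to the previous `window` years, predict the next
    year, and average the absolute errors over the whole series."""
    errs = []
    for end in range(window, len(years)):
        slope, intercept = np.polyfit(years[end - window:end],
                                      temps[end - window:end], 1)
        errs.append(abs(slope * years[end] + intercept - temps[end]))
    return float(np.mean(errs))

print("30-year trend predictions, mean abs error:",
      round(mean_abs_prediction_error(30), 3))
print("10-year trend predictions, mean abs error:",
      round(mean_abs_prediction_error(10), 3))
```

On a purely linear series like this one the longer window should win, because the only thing a longer fit can do is estimate the one true slope more precisely; the interesting cases are series with real decadal structure.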

So, I’d say that I was wrong.

Ned, that is exactly along the lines of what I was thinking of. How does the difference between 30 and 10 year based predictions change over time?

Try this, also. Make a straight line regression model for the whole data set. Then do it again with a second order polynomial. For the entire data set, IIRC, the second order polynomial works a bit better. Then, you can see where the curve ends (there is a curve) for the second order polynomial. That is where I’d start my straight line regression.
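Fitting a straight line and a quadratic to the whole series and comparing residuals can be sketched like this (on a made-up accelerating series, not actual GISS data; note that for nested least-squares fits the higher-degree polynomial always has the lower residual sum of squares, so what matters is by how much):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1880, 2015)
# Made-up stand-in with a gentle acceleration, mimicking the long GISS record
temps = 4e-5 * (years - years[0]) ** 2 + rng.normal(0.0, 0.1, len(years))

rss = {}
for deg, label in [(1, "straight line"), (2, "quadratic")]:
    coeffs = np.polyfit(years, temps, deg)        # least-squares polynomial fit
    resid = temps - np.polyval(coeffs, years)     # residuals of the fit
    rss[label] = float(np.sum(resid ** 2))
    print(f"{label}: residual sum of squares = {rss[label]:.3f}")
```

Where the fitted quadratic visibly departs from the straight line is roughly the “where the curve ends” point suggested above.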

“So, I’d say that I was wrong.”

That is, I now agree that the odds are in favor of a new record in 2015. The reasons that I thought otherwise seemed valid in principle but did not survive contact with actual data.

“Try this, also.”

Um. I have some other ideas that might be more elegant. Probably not worth pursuing except out of absurdly overdeveloped curiosity. I will get back to you on that.

You’re not getting the point. It doesn’t matter if 2014 is already on the regression line. In that case “regression to the mean” simply means it stays on the regression line. So the argument that “regression to the mean” will mean a new record is robust.

Of course, given that the noise is much stronger than the expected increase in one year, the odds of a new record on a simple analysis are not much more than 50-50. I’ll just say a new record won’t be a surprise.

The noise to signal ratio probably changes depending on the signal. If it were just CO2 and some feedbacks, it would be noisy. But we also have the Tropical Pacific, and short term it is a very strong signal that can have a magnitude much larger than the noise. If you look at the upward squiggles in the graph, the largest ones are El Nino events. Among the downward squiggles are some La Nina events, but also volcanic events. That is also a strong signal.

That’s the thing. This system is a bit noisy but it is also pretty well understood. We are entering El Nino conditions, so we will have a few months above the trend-line for sure. If a big volcano went off we’d have to predict several months below the trend line. Both signals, if on the strong end, cause as much of a shift as something like one standard deviation of the noisy part of the signal.

“You’re not getting the point. It doesn’t matter if 2014 is already on the regression line.”

Chris, my comment “not very robust argument” was specifically referring to the statement “2014 was actually below the trend or ‘mean’”. And that statement really *isn’t* a very robust claim.

Whether 2014 is above, on, or below the linear trend line depends on what year you choose to start the trend. Out of approx. 130 possible start years, the one you chose (1974) just happens to be the one year that makes 2014 as far as possible below the trend line, and it’s still not very far. I’m sure that was just chance, but when your argument involves picking the most advantageous starting point out of 130 years, I think it’s fair to question its robustness. Needless to say, there are many more possible start years that would put 2014 above the trend rather than below.

But that’s because the long term time evolution of global mean temperature is nonlinear.

Now it’s true that if we want to ignore that nonlinearity and project 2015’s temperature via a linear model, then a lot of reasonable choices for a model would have 2014 very close to the trend line itself. And under those circumstances, the odds are (slightly) in favor of a new record in 2015. That’s the bigger picture here and I basically agree with it now.

But the “non-robustness” remark was in response to the idea that 2014 was not merely “on” but actually “below” the trendline. The accuracy of that statement is dependent on the start year. And, ironically, it’s also strongly affected by the existence of the 1998 El Nino. So, again, not very robust.

I wrote:

“And under those circumstances, the odds are (slightly) in favor of a new record in 2015. That’s the bigger picture here and I basically agree with it now.”

OK, I am going to contradict myself again, and flip back to saying I do *not* expect a new record in 2015.

First, looking at the real-world situation, it’s not clear that the long-awaited El Nino is going to do much. The ENSO forecast is pretty quiet right now.

So that leaves the statistics, which we’ve been obsessing over in this thread. Looking at this further, I don’t think the 30-to-40-year linear trend that Chris is referring to is in fact the best basis for predicting 2015.

Shorter-term trends actually do a better job of prediction in this case. Trends starting in the 1970s and 1980s have a standard error of about 0.09. For start dates in the 1990s this drops off, and from 2001 onward the standard error of prediction is around 0.06. In addition to having a lower standard error, the trends for these shorter time periods predict a lower 2015 value, one that could set a new record but probably won’t.

Another way of looking at this is with LOWESS models. LOWESS models with an alpha corresponding to less than 30 years incorporate some of the decadal-scale variability that we’ve seen in recent years, and predict a somewhat lower value (non-record-setting) for 2015. Only models with longer timescales predict that 2015 will exceed 2014’s value, and those models don’t really capture the observed pattern of decadal variability.
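LOWESS is, at heart, just locally weighted linear regression, so the window-length trade-off described above can be sketched directly. This is a minimal version (tricube weights, no robustness iterations) run on made-up data with a deliberate decadal wiggle:

```python
import numpy as np

def lowess_at(x0, x, y, frac):
    """Locally weighted linear fit at x0: take the nearest `frac` of the
    points, weight them with the tricube kernel, and fit a weighted line.
    A minimal LOWESS sketch, without the usual robustness iterations."""
    k = max(3, int(round(frac * len(x))))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                        # nearest k points
    w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3    # tricube weights
    A = np.column_stack([np.ones(k), x[idx]])
    W = np.diag(w)
    b0, b1 = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
    return b0 + b1 * x0

# Made-up series: steady trend plus a decadal-scale wiggle
years = np.arange(1950.0, 2015.0)
temps = 0.015 * (years - 1950.0) + 0.1 * np.sin(2 * np.pi * (years - 1950.0) / 20.0)

short = lowess_at(2014.0, years, temps, frac=15 / len(years))
long_run = lowess_at(2014.0, years, temps, frac=45 / len(years))
print(f"15-yr-window LOWESS at 2014: {short:.3f}; 45-yr window: {long_run:.3f}")
```

The short window follows the wiggle; the long window smooths across it, which is exactly the difference between the record-setting and non-record-setting predictions described above.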

So I will go back to my previous stance. For the specific purpose of predicting *next year’s temperature* I think that methods focusing on shorter time scales are more appropriate than extrapolating a linear model from the 1970s … and those methods suggest that 2015 is somewhat more likely to come in below 2014 rather than above.

Having now flip-flopped twice on this, I am now back at my original opinion. Without knowing what ENSO will do over the next 11 months, my a priori expectation is that 2015 will be close to, but slightly lower than, 2014.

Obligatory disclaimer: this is basically angels-on-pinheads analysis. We all know that over the long term global temperatures are rising due to radiative forcing from greenhouse gases. That’s not up for debate. Speculation about whether individual year X will or will not set a new record is purely for entertainment purposes.

I don’t understand how that happens. Do you include autocorrelation?

It happens because of nonlinearity in the series.

I didn’t include autocorrelation when I ran it for every year. I did a correction for AR(1) autocorrelation on a couple of sample years. For the trend starting in 1981, the StdErr increases slightly from 0.09 to 0.10. For the trend starting in 2001, the StdErr increases from 0.06 to 0.08. The error in prediction from the shorter trends is still smaller than the error from the longer trends.

One could do a more rigorous model of autocorrelation (ARMA or whatever). If we were talking about monthly data, it would obviously be important. But there’s not a ton of autocorrelation in the annual data.
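The AR(1) correction isn’t spelled out above; one common recipe is the effective-sample-size adjustment, sketched here on made-up AR(1) residuals (the rho = 0.3 and the 0.1 noise level are illustrative assumptions, not fitted values):

```python
import numpy as np

def ar1_inflated_stderr(resid):
    """Inflate a residual standard error for lag-1 autocorrelation using
    the effective-sample-size correction n_eff = n * (1 - rho) / (1 + rho)."""
    n = len(resid)
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    rho = max(rho, 0.0)             # only inflate for positive autocorrelation
    n_eff = n * (1 - rho) / (1 + rho)
    return resid.std(ddof=1) * np.sqrt(n / n_eff)

rng = np.random.default_rng(3)
# Made-up AR(1) residual series with modest persistence (rho = 0.3)
e = [rng.normal(0.0, 0.1)]
for _ in range(33):
    e.append(0.3 * e[-1] + rng.normal(0.0, 0.1))
e = np.array(e)

print(f"plain stderr:   {e.std(ddof=1):.3f}")
print(f"AR(1)-inflated: {ar1_inflated_stderr(e):.3f}")
```

For annual data the inflation is modest, consistent with the small StdErr bumps reported above; for monthly data the correction would matter much more.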

Please note that I’m not suggesting using short trends to draw conclusions about 21st century warming or climate sensitivity or whatever. I’m not one of “those” people. But for predicting this year’s temperature measurement, I think the 10-year trend is probably more reliable than the 30-year trend. And the reason for that is the existence of decadal-scale variability in the temperature series.

It’s also possible that one could get a better prediction by adjusting the time series for ENSO etc., a la Foster & Rahmstorf. You’d need to make some assumptions about what ENSO will do this year, for the “prediction” part.

I would have thought short term variation (e.g. the La Ninas in 2011 and 2012) would degrade the reliability of using 10 year trends more than using 30 year trends.

If you get under the range of the shorter term forcing, it will appear to work better for immediately upcoming years. Then if you add years the value of the trend line may degrade slightly. Eventually it should come back.

Of course one could also address El Nino and non-El Nino years separately.

@Ned : El Nino has been tricky to predict lately because among other things the whole Pacific ocean has been warmer not just warm pools of it in certain places.

The Aussie Bureau of Meteorology was saying we were in borderline El Nino conditions until recently, when they switched to neutral, but it’s been a very long time since it rained at all, let alone rained properly, here, and it’s going to be forty degrees Celsius (104 Fahrenheit) this weekend according to the forecast. When you work outside as I do, you tend to notice that.

@36. Greg Laden : La Nina years have been record breaking -for them too :

http://www.smh.com.au/environment/weather/weather-2014-australias-third-hottest-year-on-record-20150106-12ignf.html

But then I’m sure you already knew that.