Monthly Archives: August 2015

Peer Reviews Faked In Tiny Percentage of A Small Percentage of Journals, Heads Will Roll.

The academic world and its detractors are all a-tizzy about this recent news reported here:

Springer, a major science and medical publisher, recently announced the retraction of 64 articles from 10 of its journals. The articles were retracted after editors found the authors had faked the peer-review process using phony e-mail addresses.

The article goes on to say that science has been truly sullied by this event, and anti-science voices are claiming that this is the end of the peer review system, proving it is corrupt. The original Springer statement is here.

See this post at Retraction Watch and links therein for much more information on this and related matters.

Here are the papers.

Personally, I think that anyone who circumvents the process like this should be tossed out of academia. Academics toss each other out of their respective fields for much, much less, and often seem to enjoy doing it. Cheating like this (faking peer review as well as making up data) is the sort of thing that needs to end a career, and a publishing company that allows this to happen has to examine its procedures. This is probably worse than Pal Review, but Pal Review is probably more widespread and harder to detect because it does not use blatantly obvious fake people. (See this for more on Pal Review.)

But, there is an utter, anti-scientific and rather embarrassing lack of perspective and context here. Let us fix it.

Ten of Springer’s journals are sullied by this process. Is that most of Springer’s journals? All of them? Half of them? Well, Springer produces 2,900 journals, so it is less than one percent of the journals. I’m not sure how many papers Springer produces in a year, but about 1,400,000 peer reviewed papers are produced per year across all publishers. So the total number of papers known to be affected by this nefarious behavior is a percentage too small to be meaningful. More papers were eaten by dogs before publication.
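For those who like to check arithmetic, here is the whole calculation as a trivial sketch, using the approximate figures cited above (the 1,400,000 figure is the rough global total, not Springer’s output):

```python
# Back-of-the-envelope scale check for the Springer retractions,
# using the approximate figures cited above.
retracted_papers = 64
affected_journals = 10
springer_journals = 2_900
papers_per_year = 1_400_000  # rough global peer-reviewed output

print(f"Journals affected: {affected_journals / springer_journals:.2%}")   # about 0.34%
print(f"Papers affected:   {retracted_papers / papers_per_year:.5%}")      # about 0.00457%
```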

So there are two obvious facts here. One is that authors who faked their research or its qualities, including faking the peer review process, need to be run out of town on a rail, drawn and quartered, and otherwise gone medieval on. The other is that the conversation we are going to have about this on social media and elsewhere is likely to be a useless pile of steaming bull dung if we can’t start it with both feet planted firmly in reality, rather than in scary looking numbers presented, willfully (I assume, but maybe out of ignorance), free of context and scale.

Springer’s full statement, which actually indicates that the process works rather than that it does not, is here:

Retraction of articles from Springer journals

London | Heidelberg, 18 August 2015
Springer confirms that 64 articles are being retracted from 10 Springer subscription journals, after editorial checks spotted fake email addresses, and subsequent internal investigations uncovered fabricated peer review reports. After a thorough investigation we have strong reason to believe that the peer review process on these 64 articles was compromised. We reported this to the Committee on Publishing Ethics (COPE) immediately. Attempts to manipulate peer review have affected journals across a number of publishers as detailed by COPE in their December 2014 statement. Springer has made COPE aware of the findings of its own internal investigations and has followed COPE’s recommendations, as outlined in their statement, for dealing with this issue. Springer will continue to participate and do whatever we can to support COPE’s efforts in this matter.
The peer-review process is one of the cornerstones of quality, integrity and reproducibility in research, and we take our responsibilities as its guardians seriously. We are now reviewing our editorial processes across Springer to guard against this kind of manipulation of the peer review process in future.

In all of this, our primary concern is for the research community. A research paper is the result of funding investment, institutional commitment and months of work by the authors, and publishing outputs affect careers, funding applications and institutional reputations.

We have been in contact with the corresponding authors and institutions concerned, and will continue to work with them.

The Birds At Itasca and Other Matters

When I studied the Efe Pygmies of the Congo, I discovered (and yes, it was me who discovered this amazing fact everyone now knows) that the Efe organize their space along elongated linear trails. They knew all about everything along those specific trails, and their knowledge of other trails was often very limited. If an Efe person spent time living with a group associated with a trail, he* would learn about that trail as well. Most interesting is that one’s knowledge of important things, like where to find food (or danger), was based on experience, not on general principles. So an Efe off his own trail, or off another trail he knew about, was not much better than, say, me (after a couple of years gaining my own experience) at having a clue. Also interesting is that there is a relatively formal connection between historic families (you can think of these as “clans”) and regular use of specific trails or sets of trails. So an older male member of Clan X will tend to know all the trails anyone in Clan X knows.

Turns out this is true of Minnesotans as well….

Continue reading here.

Baba Brinkman Climate Change Rap: What’s Beef?

This is F@k1n’ brilliant.

Cherry-pickin’ a bit of temperature data
And tryin’ to claim that climate change is in hiatus
It’s not, the trends are still going straight up
But they ain’t tryin’ to change their minds once they’re made up

In 1992 my mama’s thesis
Was about CO2 and Svante Arrhenius
So if you try to tell me that climate change isn’t serious
You’re dissin’ my mama, yup I’m kinda cliquish

Also by Baba Brinkman:

  • The Rap Guide to Religion
  • The Rap Guide to Evolution
  • The Rap Guide to Evolution: Revised [Explicit]
  • The Rap Canterbury Tales
  • The Rap Guide to Wilderness
  • The Rap Guide to Human Nature [Explicit]
  • Dead Poets
Update on climate models and heat waves

    Climate Models Accurately Predict Warming

    Climate models employ piles of data and sophisticated computational techniques to predict what will happen in the future. Sometimes they predict what happened in the past as well. That is important to test the models (because we might know what happened in the past), or to fill in the blanks (we don’t always know exactly what happened in the past) or to understand complex climate systems better.

    If you glance at the science denier rhetoric (mainly on blogs, you won’t find much in the peer reviewed literature because it isn’t good science) you’ll see repeated claims that climate models that try to predict global warming don’t match the actual observations of global warming. Most of the time, this claim is simply wrong. Perhaps an improper measurement of warming (like temperatures up higher in the atmosphere where we actually expect cooling rather than warming) is being used, which constitutes a lie. In other cases observed warming is within the model projections, but tracks off to one side (usually the low side) of the average expectation, but remains within the margin of error. This is either a misunderstanding of how the science works, or a willful misrepresentation. (Again, a lie.) But there are actually two legitimate areas where certain climate models seem to overstate observed warming. A recent post by John Abraham at the Guardian explores these areas.

    First there is the question of where the warming is observed. We measure warming in several parts of the Earth’s surface. (See “What does ‘global warming’ mean?”) One is surface temperature at about head height, over land, via a gazillion weather stations, many of which have been in operation since the 19th century. Another is at the surface of the sea, using a combination of older measurements taken from ships and more recent satellite observations. In addition, we have measurements of the deeper ocean itself, usually averaged out over some depth such as the top 700 meters, or the top 2,000 meters. This combines older and newer ship and buoy measurements, but tends not to go back as far in time as the land and sea surface measurements.

    John Abraham has spent a lot of effort looking at ocean temperatures at depth. He and I recently published this item, and he’s done a lot of additional work on it. The total amount of heat added to the Earth’s climate system by anthropogenic warming (caused mainly by greenhouse gasses such as carbon dioxide) is divided between the ocean and the surface, with the ocean taking up much of that heat. I liken the system to a dog with a wagging tail. The ocean is the dog; the surface temperature, making up a small part of the overall system and being much more variable, is the tail.

    In “The oceans are warming faster than climate models predicted,” Abraham notes:

    We separated the world’s oceans into the Atlantic, Pacific, and Indian. All three of these oceans are warming with the Atlantic warming the most. We also calculated the ocean heating by using 40 state-of-the-art climate models. Over the period from 1970, the climate models have under-predicted the warming by 15%.

    And here you can see a number of climate models superimposed over the observed heating in the top 700 meters of the ocean (the red line):

    [Graph: climate model runs superimposed over observed heating in the top 700 meters of the ocean (red line)]

    In a more recent post, Abraham asks the question, “How have the models done at predicting the changes in air temperatures?”

    As noted, global surface temperature is estimated in part from a bunch of thermometers around the globe, but these thermometers were not placed there for this purpose. They are weather stations set up to help track the weather, not to address questions of climate change. They are not evenly distributed, and there are huge gaps in the surface coverage, most notably in the Arctic and interior Africa, both regions where recent warming has probably been greater than elsewhere. In order for these temperature data to be used, they have to be carefully employed in a computational system that helps fill in the gaps. There are other complexities beyond our scope here. Suffice it to say that when a bunch of different groups of scientists (e.g., the British meteorology office, NASA, NOAA, the Japan Meteorological Agency, and various university based groups) take on the task of estimating surface temperatures, they all do it a bit differently and thus come up with slightly different curves.
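    To make the gap-filling idea concrete, here is a toy sketch of the kind of computation involved: made-up station anomalies are averaged into latitude-longitude boxes, and the boxes are weighted by the area they cover. The real products (GISTEMP, HadCRUT, and the rest) are vastly more careful; this only illustrates the principle.

    ```python
    import numpy as np

    # Toy gridded global mean: average station anomalies into lat-lon boxes,
    # then weight each occupied box by area (proportional to cos(latitude)).
    # Empty boxes are simply left out -- those are the coverage "gaps".
    rng = np.random.default_rng(0)
    n_stations = 500
    lats = rng.uniform(-90, 90, n_stations)
    lons = rng.uniform(-180, 180, n_stations)
    anoms = rng.normal(0.8, 0.3, n_stations)  # hypothetical anomalies, deg C

    lat_edges = np.arange(-90, 91, 10)
    lon_edges = np.arange(-180, 181, 10)

    total, weight = 0.0, 0.0
    for i in range(len(lat_edges) - 1):
        for j in range(len(lon_edges) - 1):
            in_box = ((lats >= lat_edges[i]) & (lats < lat_edges[i + 1]) &
                      (lons >= lon_edges[j]) & (lons < lon_edges[j + 1]))
            if in_box.any():
                mid_lat = np.deg2rad((lat_edges[i] + lat_edges[i + 1]) / 2)
                w = np.cos(mid_lat)
                total += w * anoms[in_box].mean()
                weight += w
    print(f"Area-weighted global anomaly: {total / weight:.2f} C")
    ```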

    This is true as well with sea surface temperatures. There is more than one way to measure sea surface temperature, or more exactly, to take existing data and turn it into a useful estimate of sea surface temperatures.

    In addition, the data are cleaned up over time as mistakes are found, the basic computational approach used may be updated to make it more precise, and the overall approach to computation may be changed to make it more accurate.

    Over time, two clear patterns have emerged. First, if you take all of the different measurements of surface temperature over time spanning from the 19th century to the present, lay them all out in front of you and step back about two meters and squint slightly, you can’t see the difference among them. They all look the same, they all tell the same story. They all have a handful of notable ups and downs along the generally upward march of surface temperatures with industrial pollution. You have to look at the graphs all on the same axes, together, to see the differences, and the differences are minor. This tells us that all the different approaches to processing a largely overlapping set of data end up with the same basic result. So many smart minds working with the best available science all produce the same result. How boring. But also, how reassuring that the science is being done right.

    The second pattern emerges when we look at these graphs as they are produced over time. Various groups have said, “hey wait a minute, we’re missing this factor” or “we’re missing that factor.” Urban heat island effects may change the data! What about the Arctic! Interior Africa! Our satellites were recalibrated; what does that do? Etc.

    Over time, an honest, well informed, diligent effort by many groups to improve the measurement of the Earth’s surface temperature has resulted in a series of slightly different graphs, and in each and every case of which I’m aware, the resulting, more recent and better done graphs show more warming, and various periods of relative flatness have become steeper (going upwards).

    So, what John Abraham has done is to take some of the more recently processed, better quality data and compare it to the usual models to see how well the models have done. They did well.

    He based his discussion on a comparison of the most recent climate model simulations with actual global surface temperature measurements as numerically summarized by NASA’s Gavin Schmidt, shown here:

    [Graph: recent climate model simulations compared with measured global surface temperatures, as summarized by NASA’s Gavin Schmidt]

    John has superimposed 2015 so far (the star).

    This shows the most current computer model results and five current temperature data sets. The dark line is the average of the models, and the various colored lines are the temperature measurements.

    The dashed line is slightly above the colored markers in recent years, but the results are quite close. Furthermore, this year’s temperature to date is running hotter than 2014. To date, 2015 is almost exactly at the predicted mean value from the models. Importantly, the measured temperatures are well within the spread of the model projections.
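    The “well within the spread” claim is the kind of thing that is easy to check once you have the ensemble in hand. A minimal sketch, with invented numbers standing in for the model runs and the observation:

    ```python
    import numpy as np

    # Is the observed anomaly within the spread of the model ensemble?
    # All numbers here are invented for illustration; the real comparison
    # uses the CMIP ensemble and the five observational datasets.
    model_runs_2015 = np.random.default_rng(1).normal(0.85, 0.12, 40)
    observed_2015 = 0.82  # hypothetical year-to-date anomaly, deg C

    lo, hi = np.percentile(model_runs_2015, [2.5, 97.5])
    print(f"Model mean: {model_runs_2015.mean():.2f} C, spread: [{lo:.2f}, {hi:.2f}]")
    print("Observation within spread:", lo <= observed_2015 <= hi)
    ```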

    Too Hot

    This is the year of the heat wave. We’ve had heat waves off and on for the last few years, but it seems that there are more now than ever before. While some have tried to argue that global warming can’t really cause extreme heat (sometimes expressed as heat waves), it does.

    Climate Scientist Stefan Rahmstorf has a blog post, in German, about a current heat wave in Europe. He notes (rough translation):

    Europe is currently experiencing its second major heat wave this summer. On 5 July, according to the German Weather Service, a temperature of 40.3 °C was reached in Kitzingen, the highest ever measured in Germany. But a month later, on August 7, this century record was matched again… One might speculate that a single heat wave could be simply due to chance. If you look at the temperature data in their entirety, however, it is immediately clear that extreme heat has become more common over several decades. (Apologies for errors in translation.)

    And he shows this graph:


    Percentage of global land area where the temperatures over a month were two or three standard deviations above the average from 1950 to 1980. Two standard deviations (orange) could be described as “very hot”, three standard deviations (dark red) as “extremely hot”. Source: Coumou and Robinson 2013
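    The computation behind a graph like this is straightforward in outline: convert each land grid cell’s monthly temperature to standard deviations from its own 1950–1980 mean, then add up the area beyond 2 and 3 standard deviations. A toy sketch with synthetic values (a real version would weight cells by cos(latitude) and use actual gridded data):

    ```python
    import numpy as np

    # Fraction of land area beyond 2 and 3 sigma, in miniature.
    # Synthetic z-scores, warm-shifted; cells pretend to be equal-area.
    rng = np.random.default_rng(2)
    n_cells = 10_000
    sigma = rng.normal(0.6, 1.0, n_cells)  # hypothetical standardized anomalies
    area = np.ones(n_cells)

    very_hot = area[sigma > 2].sum() / area.sum()
    extremely_hot = area[sigma > 3].sum() / area.sum()
    print(f"Very hot (>2 sigma):      {very_hot:.1%}")
    print(f"Extremely hot (>3 sigma): {extremely_hot:.1%}")
    ```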

    Meanwhile, in Egypt, there is another heat wave. According to ABC News,

    Egypt’s state news agency says 21 more people have died due to a scorching heat wave, raising this week’s death toll to more than 60.

    The official MENA news agency said Wednesday that the latest deaths are from the previous day, mostly elderly people. It says 581 people are in hospital for heat exhaustion.

    The Mideast has been hit by a heat wave since late July. Egyptian summers are usually hot, but temperatures this week soared to 46 degrees Celsius (114 degrees Fahrenheit) in the south.

    At least 40 people had died on Sunday and Monday, including detainees and patients in a psychiatric hospital, according to officials. It wasn’t immediately clear whether Tuesday’s death toll includes a German national living in the southern city of Luxor who died from heatstroke.

    Additional items of interest:

    It is not the sun: Corrected sunspot history suggests climate change not due to natural solar trends

    It is not just the editorial page: Study Finds WSJ’s Reporting On Climate Change Also Skewed

    It is not a hiatus or pause: The alleged hiatus in global warming didn’t happen, new research shows

    It is not just the heat, it is also sea level rise: Catastrophic Sea Level Rise: More and sooner

    We are expecting a major Carbon Dioxide sink to eventually stop grabbing CO2 and, possibly, to start releasing it: Global Warming: Earth, Wind, Fire, and Ice

    Trump, The Others, and A New Test of the Hypothesis

    A couple of days ago I assimilated data from a bunch of on line polls where people could informally and unscientifically express their opinion about who won the GOP debate (the big boy debate only, with ten candidates). I suggested a series of hypotheses to isolate the idea that this sort of on line unscientific effort might reflect reality, with the idea of testing the results of those polls with upcoming formal polls.

    Now we have a couple of formal polls to test against. I took the raw percentages for the ten GOP big boy debate candidates, recalculated the percentages, and came up with the standings of those candidates in the more recent scientifically done polls. The polls are by Bloomberg and WMUR. The former is national, the latter pertains to New Hampshire, which will have a key early primary. Here is the relevant graphic:

    [Graphic: debate-reaction percentages compared with standings in the Bloomberg and WMUR polls]
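    The recalculation step is simple renormalization over the ten debate participants. A sketch with made-up raw percentages (not the actual Bloomberg or WMUR numbers):

    ```python
    # Rescale raw poll percentages over just the ten big-boy-debate
    # candidates so they sum to 100%. Raw numbers below are hypothetical.
    raw = {"Trump": 21, "Bush": 10, "Walker": 8, "Carson": 7, "Cruz": 6,
           "Rubio": 5, "Paul": 4, "Huckabee": 4, "Kasich": 3, "Christie": 2}
    total = sum(raw.values())
    rescaled = {name: 100 * pct / total for name, pct in raw.items()}
    for name, pct in sorted(rescaled.items(), key=lambda kv: -kv[1]):
        print(f"{name:9s} {pct:5.1f}%")
    ```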

    We see verification of Trump being in the lead. His performance during the debate was liked by a large majority, and he is the leader of the pack, still by a large margin, among those subsequently polled. What appears to be a drop is more a factor of the difference between asking who won the debate vs. who one would vote for.

    There is a big difference, though, in the back field. Bush and Walker were in the lower tier of the back field in people’s response to the debate, but are moving into a shared second place.

    So, two things. First, Trump is still winning, and really is winning, the GOP race. Second, unscientific online polls seem in this case meaningful. The polls initially gave uncannily similar (not random) results, and the application of a more scientific methodology verifies them.

    I quickly add this. This is not a prediction of who will win the GOP nomination, or who will win the election for President.

    Nate Silver makes some excellent points about this question in this blog post. The bottom line is that polling at this stage, or even well into the primary process, does not predict either outcome very well. But I think Silver also misses an important point. These polls are not meaningless. If you view them as having only one function, predicting primary or general election outcomes, they are useless. But they do something else.

    Polling at this stage in a presidential race is not about who is going to be President. Rather, such information is a good indicator of what people are thinking, how the politics are operating, how campaigns are doing, what issues are motivating people, and all that stuff. If you see polls early in the process this way, they are interesting. If you want to know who will be on the ballot in November (next November, not this November) or who will win, then … well, no.

    The #BlackLivesMatter Disruptive Activism

    I have a few thoughts I want to float on the recent #BLM activism that involved, as of this writing, two takeovers of public events. One takeover was at a Netroots Nation event that included Bernie Sanders, the other at a Sanders rally.

    First, I think it has to be understood that disruptive actions like this need to be carried out, and carried out more. Unless you can somehow convince me that there is a way to deal with violence in and against the African American community, widespread incarceration, habitual attacks by police on African Americans (and some others), etc. without civil disobedience, I’m going to stick with that. A disruptive action here and there will not leave much of a mark. It will be forgotten about. Sustained and well done disruptive activism is called for in the current situation. If it is only addressed to Bernie Sanders, it will fail, and if it doesn’t sustain through the entire election season, it will fail, in my opinion.

    (Having said that, at some point security changes will make stage rushes impossible, and after that, it will all be protesting outside events. That has to be evaluated for effectiveness, and a good strategy that works will have to develop. A small protest at every event will probably get ignored. A planned huge protest that does not end up being huge will backfire. A good number of very large outside protests will probably be effective.)

    I don’t think either of the events, as far as I can tell, was done as well as one might like. At Netroots Nation, the #BLM activists gained the floor, and seem to have done well making key points. But they didn’t seem to have an exit strategy. An exit strategy would have gained them even more points and avoided some of the irrelevant conversations. An act of disruptive activism is always going to produce whinging and complaining about the act, but it is also good to try to have as much of the ensuing conversation as possible be about the point of the activism itself. It should be all about black lives mattering, not about the #BLM movement’s tactics.

    In the case of the Sanders rally, it appears to me that mistakes were made by both parties. The Sanders people tried to say that the #BLM protesters could take the mic after Sanders spoke. They should have just handed the mic over. On the other hand, it was not clear that the #BLM activists were prepared, both rhetorically and technically, to actually take over the rally.

    In this sense, disruption may be a little like “awareness raising.” If either of those on its own is your goal, you won’t win. Those are only parts of a larger strategy, and both can actually have negative effects including the development of an inured public. In the case of going after an election campaign, the larger scale strategy might be to make sure that the problems we are seeing now, including racially motivated violence, mass incarceration, and the unthinkably horrible acts of an emerging police state, become part of the conversation for every campaign. Ideally a good percentage of votes will be gained or lost depending on a candidate’s, or a party’s, position on these issues.

    Some people are complaining about the specific reactions of Sanders. I want to add an element to the conversation that I’ve not seen discussed. Normally this would be the kind of thing I’d bring up at an organizing meeting because it is a nuanced issue that a lot of people probably won’t react well to. But it is part of the reality of disrupting campaign events. But first a critically important digression.

    The number one cause of death for African American males aged 15–34 is murder. Gun related deaths in the US are higher than anywhere else (not counting war zones, I assume), but for African Americans the rate is twice as high as for white Americans. If you are black and in America, your chance of being killed by some violent cause is 12 times higher than if you are black and living in some other developed country. And so on.

    How often do cops brutalize, including shooting, African Americans? We don’t know. There are a number of reasons this is hard to figure out, not the least of which being that the US government has reduced, rather than increased, the quality and quantity of data collection, mainly because the NRA does not want easy access to information about gun injuries and deaths. Also, the rate may be going up, so available numbers may not reflect the present, or important trends. We know that African Americans are significantly overrepresented in the frequency of police shootings. That could be attributed to something other than racist police brutality. Poor communities may have a disproportionate number of African Americans as well as more crime, yadda yadda yadda. The real question is how much targeting do police do of African Americans, and how much more likely are police to shoot an African American rather than a non-African American (or a minority vs. a white person)? The answer to that is that police clearly target blacks, and are more likely to kill black Americans. We just don’t know the numbers. Frankly, the numbers don’t matter to the issue of whether this is something that has to be addressed.

    This is nothing new. I first got involved in this issue when I was a teenager, and Keith Balou, 17, was shot in the back and killed by a state trooper in New York. Keith was one of several African Americans killed over the previous couple of years, and that instigated the rise of an organization called “Fight Back.” We had a huge conference in Chicago at which people related their own local stories. Obviously anti-black violence had been going on for centuries, this was just the new version of it. By the way, that was also at the time of one of the early first steps at militarizing the police. There used to be rules about how big a gun cops could carry. Keith was one of the first people, maybe the first, to be killed by a trooper using a .357 magnum, only recently issued to that particular police department.

    My point here is that black lives have always mattered, of course, and have always been at risk. I think it is fair to say that this risk level has gone up in our post 9/11 terrified society, with the rise of an increasingly militarized police state. Things are getting worse.

    So that’s the background, and that is why the #BLM movement exists, and why it is important.

    But there is one detail about disrupting political rallies that should be remembered. I’m not saying don’t do it, but this is a factor that should be taken into account.

    Several years ago I saw Jesse Jackson give a talk in Milwaukee. He was running for president. The talk was at the University of Wisconsin, Milwaukee student union. There had been rumors that someone was going to try to kill Jackson, so the security was tight. When Jesse went to shake everyone’s hand at the edge of the stage, a Secret Service agent stood next to him with a machine pistol, thinly disguised as a handbag, pointed at the crowd. He was prepared to kill anyone who pulled out a gun. No one did, by the way.

    Running for president is a bit dangerous. Theodore Roosevelt, Robert Kennedy, and George Wallace were shot while running for president. Franklin Roosevelt was attacked while president-elect. Of the 44 individuals who have been president, four (nearly one in ten) have been killed, and 16 have been seriously attacked, with, I think, ten attacked with guns or, in one case, a hand grenade. In other words, the chance of being credibly attacked with a gun or explosive, with about a 50–50 mortality rate, if you are president, comes to at least one in four, depending on how you count each attack.

    Over the last seven presidents, four or five, depending on how you count it, were credibly and dangerously attacked. Ford was shot at twice. Reagan was shot and seriously wounded. Clinton was fired upon in 1994, and George W. Bush had a grenade tossed at him in Tbilisi. There have been various attacks on Obama, but I think those were mostly just people jumping over his fence.

    What is the point of this? The point is NOT to say that we should feel sorry for presidential candidates, elected presidents, or ex presidents, at the expense of black lives mattering. This is where the nuance comes in. This is not a zero-sum game; there is too much ammunition around for that to be the case. The point of saying all this is simple. If you are going to plan a disruption campaign against candidates, you have to assume that those you are going after will be freaked out. They were already freaked out. They’ve already had the conversation about whether or not to wear a bullet proof vest. They’ve already been held in the kitchen or some waiting room while tough looking scary people check to make sure their pistols are loaded and ready, their communications systems in place. If they were paying attention, they already know about the snipers positioned on nearby buildings, and they probably walked by the ambulance positioned nearby to take them to the emergency room when the shot that changes their lives, or ends it, rings out.

    As campaigns progress, Secret Service protection is eventually handed out, or increased. It may actually be impossible, as I mentioned above, to disrupt talks and rallies by going after the stage. Alternative strategies will have to be developed.

    Meanwhile, be careful.

    Added:

    The Beginning of The End of [Donald Trump/Tea Party/Fox News] UPDATED

    Select one and only one. Or two if you like.

    — see down below for update —
    Megyn Kelly of FOX news went after Donald Trump, the apparent winner of the FOX-GOP Fauxbate. Donald Trump at first declared that he has no time to be politically correct. Later he proved that he does have time to be politically incorrect, when he seemed to imply that Kelly was out of sorts during the debate because she was having female problems.

    This led a conservative organization to dump Trump from a keynote speaker’s spot. We see a crack form in the armor as Erick Erickson, who had invited Trump to speak, disinvites him at the same time that he makes it clear that this is not because Trump was “politically incorrect.” Rather, it was because Trump failed to follow common decency. That is a crack in the armor because political correctness IS common decency.

    The right wing has created several monsters. The Tea Party, the philosophical leaders such as Rush Limbaugh, and a gaggle of candidates and elected officials who are far beyond the pale of anything acceptable in terms of civil liberties. And, among those monsters is Donald Trump, who combines the worst of the philosophical leaders with the worst of those seeking office.

    And now even some of the conservatives realize that Trump is too much. But not too too much, because one does not want to admit that being politically correct is the right thing to do. This is called tripping over one’s own dog whistle.

    What has to be remembered here, I think, is that Trump is the same as all the others, just less polished, more in your face, more direct. But somehow he strikes a nerve with those who created him.

    The real issue here is not what Trump said. He didn’t say anything worse during the debate or in the aftermath (his remarks about Megyn Kelly’s menstrual status, for example) than any of the elected officials who have chastised women who have been raped for having a problem with being raped.

    The real issue is that he offended his keepers, FOX news. He is not respecting his role. The philosophical leaders are supposed to bully and threaten and set the tone, FOX news is supposed to spread the rhetoric to the masses. The masses, the Tea Party asses, are supposed to vote for the Republicans, and the candidates are supposed to, and are paid to, maintain policies that shore up the 1%.

    By going after one of the FOX personalities, he has violated the internal order. Now, they are turning on him. What remains to be seen is how the masses, who believe they are acting independently, will respond to this. Will they fall in line and do what FOX says, dumping The Donald? Or will they see FOX’s attack on Trump as an offense, and turn on FOX?

    First Test Of Hypothesis

    An NBC poll taken right after the debate tested voter opinions of the various candidates. This is also an online poll. The poll asks about more candidates than were in the Big Boy debate, but shows very little movement for Trump (a slight increase, from 22% to 23%). A few other candidates have much larger numbers (but still single digits), which takes away from Trump’s total percentage (recalculated for just those in the second debate, Trump has 28%). The overall order of the candidates remains roughly the same, with Trump way out in front, and then two tiers. Rubio, Carson and Cruz are still in the upper tier, the other candidates in the lower tier.

    So, I’m calling it, so far: failed to disprove. The concept remains standing. Trump is the candidate who is actually winning, as indicated by both the unscientific online polls and the NBC poll.
    [Graphic: NBC online poll results following the debate]

    Unscientific polls rocket Trump to the top spot.

    Trump went into the GOP debate last night with a roughly 20% poll standing. Everyone will tell you to ignore polls early in this race; they never predict the outcome of a primary or a general election. That, however, is a non sequitur. We do not look at early polls to predict the distant future. We look at them to help understand the present, and to get a handle on what might happen over the next few weeks. The meaning of the polls shifts quite a bit before the first primaries, then the meaning of the polls has to be re-evaluated after every primary. At some point the re-evaluations start to return an end result like “Candidates A and B are in a horserace” or “Candidate A is the clear leader.” After that, you can get caught on a boat with your mistress, or you can be killed, and that can change things, but not much else does. Democrats believe in the Dark Horse, but no one has ever captured one to my knowledge. But up until that point, polls are useful, and meaningful, if done scientifically. No, the fact that they don’t predict an outcome over a year in advance is not a surprise, and it does not mean they lack interest or utility.

    But what about unscientific polls?

    Well, they are not scientific and thus not worthy. However, over the last few hours, several non-scientific polls, and in this case I mean internet polls where anybody who happens on a site can vote, have come out asking who won last night’s GOP debate.

    Can a bunch of unscientific polls that all return the same result become scientific, or at least believable? That is a hypothesis I’d like to test with the current polling. It seems to me that if informal web based polls from across a spectrum of political orientation (of the site, not the poll clickers … we don’t know who the poll clickers are) all show similar results, then they might mean something. So, here is the hypothesis. If several informal polls show a very similar result, we expect to see that result reflected in the first scientific polls that come out.

    I got poll results from the following sources (shown in order from left to right on the charts):

    Slate
    Right Scoop
    Fox 5
    Drudge
    Palm Beach Post
    News OK

    Sadly, MSNBC had a poll but it was fairly useless in the way it was conducted. Also, HT Politics had a poll with similar results to those above, but I found it after I’d made the graphs.

    Trump was a clear winner in these polls.

    [Graph: online poll results for the GOP debate, all ten candidates]

    Trump’s numbers ranged over several points, but were always higher than everyone else’s, approaching or meeting 50%. One hypothesis predicts that formal, scientific polls should have Trump as the front runner. Another hypothesis predicts that Trump’s numbers in a scientific poll should be between about 40% and 50%, give or take a few points.

    The gaggle of low numbers is difficult to even see on this graph, so I made a second graph with everybody but Trump:

    [Graph: online poll results for the GOP debate, Trump excluded]

    Here we see what looks to me like two tiers. Walker, Christie, Bush, Huckabee and Paul are all really low, while Cruz, Kasich, Carson and Rubio are all relatively high. Note how variable Cruz’s numbers are. But aside from Cruz, just as is the case with Trump, the results are fairly similar across the polls.

    One hypothesis would then be that Walker will be shown as dead last in upcoming proper post debate polls. One could produce a number of other hypotheses as well, but it could get messy. Let’s try this hypothesis. Upcoming proper post debate polls will have a rank order statistically like this:

    Trump
    Rubio
    Carson
    Kasich
    Cruz
    Paul
    Huckabee
    Bush
    Christie
    Walker

    An additional hypothesis should probably be made, that the rank order for all the non-Trump candidates will be as shown. (This avoids the problem of having such a large magnitude of difference between the first and second rank).
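    When the proper polls arrive, a rank-order hypothesis like the one above can be scored with a rank correlation. A sketch, with a placeholder ordering standing in for the future scientific poll:

    ```python
    from scipy.stats import spearmanr

    # Compare the predicted order (from the online polls) with the order
    # in a subsequent scientific poll using Spearman rank correlation.
    # The "scientific" ordering below is a placeholder, not real data.
    predicted = ["Trump", "Rubio", "Carson", "Kasich", "Cruz",
                 "Paul", "Huckabee", "Bush", "Christie", "Walker"]
    scientific = ["Trump", "Carson", "Rubio", "Cruz", "Bush",
                  "Kasich", "Paul", "Huckabee", "Christie", "Walker"]

    pred_rank = {name: i for i, name in enumerate(predicted)}
    sci_rank = [pred_rank[name] for name in scientific]
    rho, p = spearmanr(range(len(scientific)), sci_rank)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    ```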

    There is one poll that I know of that was conducted by pollsters. It is by One America News Network, a conservative news agency that bills itself as “credible” (which is funny; why would you have to say that if you were?).

    If we take this poll by itself, most of the above suggested hypotheses are smashed. Here is the result of the poll questions “who won the debate” and “who lost the debate.”

    [Graphic: One America News Network poll results for “who won the debate” and “who lost the debate”]

    This poll asked questions of “Republican poll participants.” It shows Ben Carson beating Trump, and a lot less spread between the leader and the others than the online polls indicated. Also, very few people thought Scott Walker, who was a big loser in the online polls, had lost the debate. Generally, the rank order between this poll and the online polls is different.

    Reading the reporting of this poll, it looks a lot like a shill for Ben Carson. Details of the methodology are as follows:

    Gravis Marketing, a nonpartisan research firm, conducted a random survey of 904 registered Republican voters across the U.S. Questions included in the poll were focused only on the top ten GOP candidates that participated in the 9 PM ET debate. The poll has an overall margin of error of +/- 3%. The polls were conducted on August 6, immediately following the GOP debate using interactive voice response, IVR, technology. The poll was conducted exclusively for One America News Network.

    I should add that the agency reporting the poll is owned by the company that commissioned the poll. Gravis, the pollsters, are used at Real Clear Politics. So I’m on the fence about the legitimacy of this poll and eagerly await other results.
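    One thing that does check out is the quoted margin of error: for a simple random sample of 904, the standard formula gives about ±3.3%, consistent with the quoted ±3%. (This says nothing about the poll’s other methodological choices, of course.)

    ```python
    import math

    # Sanity check on the quoted margin of error: for a simple random
    # sample, the 95% margin of error at p = 0.5 is 1.96 * sqrt(p*(1-p)/n).
    n = 904  # registered Republican voters surveyed
    moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
    print(f"95% margin of error: +/- {moe:.1%}")  # about +/- 3.3%
    ```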

    Hurricane Good News Bad News

    First the bad news. Taiwan is going to get slammed by Typhoon Soudelor over the next day (landfall at about 8:00 AM local time). Soudelor was one of the strongest typhoons earlier during its development, but weakened to a Category 1. However, very warm seas, lack of wind shear, and other factors may let Soudelor return to Category 3 or even 4 strength before making landfall. Also, it is large.

    The storm is likely to hit Taiwan in about the middle, which along the east coast is not heavily populated. But it will bring heavy rains, likely causing landslides and floods, to the mountainous middle of the island. On the other hand, the storm is moving quickly, so if it moves onto land and then moves through quickly, the total rain accumulation may be attenuated. After crossing Taiwan, the storm will hit mainland China.

    Bob Henson at Weather Underground has a summary, but it is from yesterday.

    Now the good news. NOAA has revised the estimate for the overall strength of the so far anemic Atlantic hurricane season, downgrading it a bit.

    The NOAA Climate Prediction Center’s updated 2015 Atlantic Hurricane Season Outlook calls for a 90 percent chance of a below-normal hurricane season. A below-normal season is now even more likely than predicted in May, when the likelihood of a below-normal season was 70 percent.

    This is largely due to increased vertical wind shear as a result of the strong El Niño we are experiencing. The agency is predicting between 6 and 10 named storms, with only 1–4 of them becoming hurricanes, and between zero and one becoming major hurricanes. So far the Atlantic has had three named storms, one of which managed to become a hurricane. A typical (average) season would have had about 3 or 4 named storms by now (so this seems on track to be average), but by now, on average, one of them would be likely to be a hurricane. The El Niño related factors likely to attenuate a storm season are increasing and likely to maintain or increase over coming months.

    Crowd Sourced Award Winning Wines Support Prostate Cancer Research

    A guest post by Robert Hollander, Winemaker


    2redWinery, maker of the award-winning Ziniphany© Zinfandel and #2red, is 38% toward its goal on Indiegogo, with all proceeds supporting prostate cancer research through the Robert and Susan Hollander Foundation, an IRS-approved 501(c)(3) organization. Campaign supporters, in exchange for their tax-deductible support, can secure wine from the 2015 vintage or from the award-winning wine library of 2redWinery.

    Robert Hollander, the winemaker and principal of 2redWinery, started small-volume winemaking in 2007 to indulge a long-standing passion. Passion changed to purpose in 2010 after he was diagnosed with prostate cancer at an incurable stage. Dr. Hollander, a highly regarded clinician/teacher at the Gainesville VA Medical Center, affiliated with the University of Florida, then created the Robert and Susan Hollander Foundation to fund prostate cancer research. “After that, it just made sense to fund the Foundation with my winemaking. It gave my wine a dual purpose, not just to make a great wine I was proud of, but a wine that served a special purpose,” Hollander observed. In the two years the Foundation has been operational, unrestricted grants have been provided to researchers at MD Anderson Cancer Center and the Cleveland Clinic.

    Dr. Hollander’s campaign goal is $35,000, with all proceeds above production costs supporting prostate cancer research through the Foundation. Contributions to the campaign are processed by FirstGiving and are tax deductible. Rewards for campaign supporters include wine from the upcoming 2015 vintage or wines from the award-winning library of 2redWinery. “It’s a win-win-win-wine,” according to Hollander.

    See Campaign: http://www.igg.me/at/2redwinery.com

    Robert Hollander Winemaker, 2redWinery
    President, Robert and Susan Hollander Foundation
    doctorbobster@gmail.com
    http://www.igg.me/at/2redwinery.com
    https://youtu.be/zu7Bf5RubpI
    https://www.facebook.com/doctorbobster

    The alleged hiatus in global warming didn’t happen, new research shows.

    There are two new scientific research papers looking at variation over the last century or so in global warming. One paper looks at the march of annual estimates of global surface temperature (air over the land plus sea surface, not ocean), and applies a well established statistical technique to ask the question: Was there a pause in global warming some time over the last couple of decades, as claimed by some?

    The answer is, no, there wasn’t.

    The paper is open access, is very clearly written so it speaks for itself, and is available here. One of the authors has a blog post here, in German.

    The other paper looks at the so called global warming “pause” and interrogates the available data to see if the pause is supported. It concludes that it isn’t. The paper is written up in a blog post by one of the authors, here.

    I’ll give you an overview of the findings below, but first, a word from the world of How Science Works.

    It’s the variation, stupid

    No, I’m not calling you stupid. Probably. I’m just paraphrasing Bill Clinton’s 1992 campaign to underscore the importance of variation in science. The new paper examines variation in the global surface temperature record, so this is an opportunity to make this point.

    Much of the time, science is about measuring, understanding, explaining, and predicting variation. This is a point non-scientists would do well to grasp. One of the reasons non-scientists, especially those engaged in policy making (from voter to member of Congress to regulatory agent to talking head) need to understand this better is because variation is one of the most useful tools in the denier tool kit. If your objective is to deny or obfuscate science, variation is there to help you.

    Global warming, the increase in the Earth’s surface and ocean temperatures caused by the human caused increase in greenhouse gas, is a system with plenty of variation. The sources of variation are myriad, and the result is that the measurement of air temperature, sea surface temperature, and deeper ocean temperature appears as a set of squiggly lines.

    In many systems, variation exists at more than one scale.

    So, at the centennial scale, we see global surface temperatures not varying much century by century for a thousand years; then the 20th century average is higher than the previous centuries, and the 21st century average, estimated from the 15% of its years elapsed so far, is higher still. That is the effect of industrialization, where we shift from using energy from human and animal work, together with a bit of wind and water power, to using energy stored in carbon bonds in fossil fuels. This, combined with population increase and increasing demands to support a consumer-driven, comfort-based lifestyle, has caused us to release fossil carbon into the atmosphere at an alarming rate.

    At the decadal scale, we see a few recent decades that stick up above the others, and a few that are lower than others or at least don’t go up as much as others. Over the last 100 years, the decadal average temperatures have gone up on average, but with variation. The primary explanation for this variation is twofold. First, there is an increase in the absolute amount of greenhouse gas, and in the rate at which we are adding greenhouse gasses to the atmosphere, so over time, greenhouse gasses have become the main determinant of temperature change (causing an increase). Earlier on, when greenhouse gas concentration was lower, other factors had a bigger impact. The second (and related) explanation is variation in aerosols, aka dust, in the atmosphere from various industrial processes, volcanoes, and such. Decadal or multidecadal variation over the last century has been mainly caused by aerosols, but with this effect diminishing in importance as it gives way to the increasingly important role of greenhouse gas.

    At a finer scale, of a year or a few years, we see variation caused mainly by the interaction of the surface (the air and the sea surface) and the upper ocean (this is sometimes examined for the top, say, 700 meters, other times, for the top 2000 meters, etc.) When we look at just ocean temperatures or just surface temperatures, we see a lot of squiggling up and down on an ever increasing upward trend. When we look at both together, we note that the squiggles cancel out to some extent. The ocean warmed considerably during recent years when the surface warmed more slowly. This is because heat is being exchanged back and forth between the surface and the deeper sea in a way that itself varies.

    That is the simple version. In reality things are more complex. Even though ocean and surface temperatures vary from year to year, with the major variations caused by El Nino and La Nina events in the ENSO cycle, there are longer term variations in how this exchange of heat trends. This time scale is on the order of several decades going in one direction, several decades going in the other direction. (See this post.) Then, this sort of variation may have much larger scales of change, at century or even millennial time scales, as the ocean currents that facilitate this exchange undergo major changes, which in turn alters the interaction of the surface and the sea. And, of course, both surface and ocean temperatures can affect the major ocean currents, so there is a complex causal interaction going in both directions between those two sources of variation.
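    A toy simulation makes the multi-scale point vividly: add a steady forced trend, a slow multidecadal oscillation, and ENSO-like year-to-year noise, and a short window can look flat even though the underlying trend never changed. All parameters below are invented for illustration:

    ```python
    import numpy as np

    # Variation at several scales: forced trend + multidecadal oscillation
    # (standing in for surface-ocean heat exchange) + interannual noise.
    rng = np.random.default_rng(3)
    years = np.arange(1900, 2016)
    trend = 0.008 * (years - 1900)                    # forced warming, deg C
    multidecadal = 0.08 * np.sin(2 * np.pi * (years - 1900) / 65.0)
    enso = rng.normal(0.0, 0.09, years.size)          # ENSO-like noise
    surface = trend + multidecadal + enso

    # A short window can look flat even though the forced trend is steady:
    last15 = np.polyfit(years[-15:], surface[-15:], 1)[0]
    overall = np.polyfit(years, surface, 1)[0]
    print(f"Overall: {overall * 10:.3f} C/decade; last 15 yr: {last15 * 10:.3f} C/decade")
    ```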

    This is not a digression but it is annoying

    Have you ever been annoyed by someone who makes a claim about the health benefits, or negative effects, of some kind of food or other ingestible substance? You know, one of those totally non-scientific “findings” from the usual internet sources. Here is a little trick you can do if you want to challenge such a claim.

    In order to truly evaluate a health related claim, and have that evaluation be credible, you have to be able to do one of the following things, depending on the claim. Being able to do this is not enough to validate your expertise, but it is a starting point. It is a gate-keeper thought experiment. If you can’t do this, then you can’t really make the claim you are making with any credibility.

    • Name all the parts of a cell and what they do (for many health claims, especially those that have to do with diet, energy, metabolism, etc.)

    • Name all the different components of the immune system and explain how they work in detail (for many disease or illness related claims).

    • Describe, in detail, the digestive process, i.e., the process of food sitting on a plate being ingested and eventually being used by a human body, at the molecular level (for many claims about the beneficial or negative effects of certain foods, or the benefits of various dietary supplements).

    You might be a climate scientist if …

    All that stuff I said above about variation is the very simple version of what happens in the climate with respect to global surface temperature imbalance and global warming. If you read what I wrote and the whole time were thinking things like “yeah, but, he’s totally glossing this” or “no, it isn’t that simple, what really happens is…” then you might be a climate scientist.

    If, on the other hand, this extensive tl;dr yammering on variation seemed senseless or a waste of time, or you didn’t find it interesting or don’t get the point, then you may not be prepared to evaluate claims like the one about the so-called “pause” or “hiatus” in global warming. More importantly, there is a good chance that a person making the claim that there has been such a pause is unprepared to do so, just as the person claiming that wearing a $50 fragment of a discarded circuit board around their neck will protect them from EMF cannot really make that claim, because they are a total dumb-ass when it comes to energy fields and cell biology.

    Or, the person making the claim (most common in the area of global warming) is just trying to fool somebody. They are like the person who sells the fragment of the discarded circuit board.

    Change Point

    The first paper is “Change Points of Global Temperature” by Niamh Cahill, Stefan Rahmstorf and Andrew Parnell, published in IOP Science Environmental Research Letters.

    A long series of data may demonstrate the outcome of a set of variables where all the variables act in similar ways over time, and any trend plus or minus variation will be clear. But if the variables change in their level of effect over time, it may be that parts of the long term data series need to be treated separately. This requirement has led to the development of a number of statistical techniques to break a series of data up into different segments, with each segment having a different statistical model applied to it.

    The statistical approaches to this problem initially arose in an effort to better understand variation in the process of making key electronic components in the telecommunications industry. An early method was the “Control Chart” developed by Walter A. Shewhart at Bell Labs. The method allowed engineers to isolate moments in time when a source of variation contributing to mechanical failure changed, perhaps because some new factor came into play.

    More recently, the statistical method of “Change Point Analysis” was developed to provide a better statistical framework for identifying and assessing the statistical significance of changes in sources of variation. The process involves determining whether or not a change in the sources of variation has occurred, and also, estimating if multiple change points have occurred. The process is pretty complicated, numerically, but is automated by a number of available statistical tools.
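    The core idea is easy to sketch, even if the real analyses are much more sophisticated. Here is a brute-force search for a single change point in a synthetic temperature-like series: try every candidate break, fit a line to each side, and keep the break that minimizes the total squared error. (The paper fits multiple change points and assesses significance properly; this is only the skeleton of the idea.)

    ```python
    import numpy as np

    # Brute-force single change point: a synthetic series that is flat
    # until 1970 and then warms, plus noise; the search should land near 1970.
    rng = np.random.default_rng(4)
    t = np.arange(1900, 2015)
    y = np.where(t < 1970, 0.0, 0.017 * (t - 1970)) + rng.normal(0, 0.1, t.size)

    def sse(x, z):
        # Sum of squared residuals from a straight-line fit.
        resid = z - np.polyval(np.polyfit(x, z, 1), x)
        return (resid ** 2).sum()

    best = min((sse(t[:k], y[:k]) + sse(t[k:], y[k:]), t[k])
               for k in range(10, t.size - 10))
    print(f"Best single change point: {best[1]}")
    ```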

    The new paper attempts to assess the reality of a “pause” or “hiatus” in global surface temperature increase using change point analysis, applied to four of the major, commonly used data sets reflecting surface temperature changes. In each case, the authors found three change points to be sufficient to explain the medium to long term variation in the data. Most importantly, the most recent detectable change point was in the 1970s, after which there is no detectable change in the trend of increasing global temperature.

    The results of the analysis are summarized in this graphic:


    Figure 1. Overlaid on the raw data are the mean curves predicted by the three-CP model. The grey time intervals display the total range of the 95% confidence limits for each CP. The average rates of rise per decade for the three latter periods are 0.13 ± 0.04 °C, −0.03 ± 0.04 °C and 0.17 ± 0.03 °C for HadCRUT; 0.14 ± 0.03 °C, −0.01 ± 0.04 °C and 0.15 ± 0.02 °C for NOAA; 0.15 ± 0.05 °C, −0.03 ± 0.04 °C and 0.18 ± 0.03 °C for Cowtan and Way; and 0.14 ± 0.04 °C, −0.01 ± 0.04 °C and 0.16 ± 0.02 °C for GISTEMP.

    Those who claim that there was a pause in global warming point to certain dates as the origin of that pause. The authors tested that idea by forcing the change point analysis to assume that this was correct. The alleged starting points for a global warming hiatus failed the statistical test. They are not real. The authors determined that the change point analysis “…provides strong evidence that there has been no detectable trend change in any of the global temperature records either in 1998 or 2001, or indeed any time since 1980. Note that performing the CP analysis on the global temperature records excluding the 2013 and 2014 observations does not alter this conclusion.”

    In addition, even though the alleged starting points for a global warming hiatus were found to be bogus, they were found to be more bogus in one of the four data sets, that developed by Cowtan and Way, which in turn is generally thought to be the data set that eliminates most of the biases and other problems found in this sort of information. In other words, using the best representation available of global surface temperature increase, the so called hiatus is not only statistically insignificant, it is even less significant!

    But that wasn’t enough. The authors took it even a step further.

    Finally to conclusively answer the question of whether there has been a ‘pause’ or ‘hiatus’ we need to ask: If there really was zero-trend since 1998, would the short length of the series since this time be sufficient to detect a CP? To answer this, we took the GISTEMP global record and assumed a hypothetical climate in which temperatures have zero trend since 1998. The estimated trend line value for 1998 is 0.43 °C (obtained by running the CP analysis on the original data up to and including 1998). Using this, we simulated 100 de-trended realizations for the period 1998–2014 that were centered around 0.43 °C. We augmented the GISTEMP data with each hypothetical climate realization and ran the four CP model on the augmented data sets. This allowed us to observe how often a fourth CP could be detected if the underlying trend for this period was in fact zero. Results showed that 92% of the time the four CP model converged to indicate CPs in approximately 1912, 1940, 1970 and a fourth CP after 1998. Thus, we can be confident that if a significant ‘pause’ or ‘hiatus’ in global temperature did exist, our models would have picked up the trend change with a high probability of 0.92.
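    Here is a toy version of that power check, reusing the brute-force single-change-point search sketched earlier: simulate series that warm until 1998 and then go flat plus noise, and count how often the search lands near 1998. The trend and noise levels are invented, so the detection rate is only illustrative:

    ```python
    import numpy as np

    # If warming really had stopped in 1998, how often would a change point
    # search find a break there? Simulate trend-then-flat series plus noise
    # and count detections near 1998. Parameters are invented.
    rng = np.random.default_rng(5)
    t = np.arange(1970, 2015)
    signal = np.minimum(t, 1998) * 0.017
    signal = signal - signal[0]  # rises until 1998, flat afterward

    def find_cp(x, z):
        def sse(a, b):
            r = b - np.polyval(np.polyfit(a, b, 1), a)
            return (r ** 2).sum()
        return min((sse(x[:k], z[:k]) + sse(x[k:], z[k:]), x[k])
                   for k in range(5, x.size - 5))[1]

    hits = sum(abs(find_cp(t, signal + rng.normal(0, 0.08, t.size)) - 1998) <= 3
               for _ in range(100))
    print(f"Break detected within 3 years of 1998 in {hits}/100 simulations")
    ```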

    Climate change deniers are always trying to make the graphs go down. The graphs do not cooperate.
    One is forced, sadly, to think about what deniers might say about any new climate change study. In this case, I think I know what they might say. Look again at the graph shown above. We see two periods when temperatures seem to be going down, and two periods when temperatures seem to be going up. So, half the time, they are going down and half the time they are going up, right? So, what happens if, as suggested by some climate deniers, we are due for a downward trend? Maybe there will be enough multi-decadal downward trends over the next century or so to significantly attenuate the overall trend. Hey, we might even see cooling. Right?

    Well, no. For one thing, as mentioned above, the overall pattern has been an increase in the importance of greenhouse gasses as the variable controlling surface temperatures. Whatever factors caused the flattish or downward trends many decades ago may still be in place, but they will be relatively less important from now on, even if we quickly curtail CO2 output. Also, one of those factors, aerosols, has been reduced permanently (we hope). Industrial pollution in the past released a lot of aerosols into the atmosphere; this has been reduced by changing how we burn things, so that source of surface cooling has diminished. Also, as noted above, there are multi-decadal changes in the relationship between the surface (air and sea surface) and the ocean, and at least one major study suggests that over coming decades this will shift into a new phase with more surface heating.

    I asked author Stefan Rahmstorf to comment on the possibility of a future “hiatus.” He told me that one is possible, but “I don’t expect that a grand solar minimum alone could do it (see Feulner and Rahmstorf ERL 2010). Maybe an exceptionally large volcanic eruption could do it but it would have to be far bigger than Pinatubo, which did not cause one.” He also notes that some IPCC climate models have suggested a future slowdown, and that the possibility of cooling is not zero. The key point, he notes, is “it just has not happened thus far, as the data analysis shows.”

    Author Andrew Parnell noted, “I think anybody who claims that these current data demonstrate a hiatus is mistaken.”

    Think there is a global warming hiatus? Slow down a second…

    The second paper is “Lack of evidence for a slowdown in global temperature” by Grant Foster and John Abraham. Foster and Abraham start out by noting that there is a widely held belief, even among the climate science community, “…that the warming rate of global surface temperature has exhibited a slowdown over the last decade to decade and a half.” They examine this idea “…and find no evidence to support claims of a slowdown in the trend.”

    The authors note that most of the discussion of global warming involves reference to “the relatively small thermal reservoir of the lower atmosphere” (what I refer to above as the “surface”), but since this is only a small part of the planet’s heat storage, it can be misleading. When the ocean is taken into account, we see no slowdown in warming. The paper by Foster and Abraham refers to the change point analysis paper discussed above, so I’ll skip that part. The remaining thrust of the paper is to apply some basic statistical tests to the temperature curves to see if there is a statistically valid slowdown.

    They derived residuals, using the GISS data set, for the last several decades, indicating the divergence of each year from the value expected given a steady upward trend. The result looks like this:

    [Figure: annual residuals of the GISS global temperature record from the long-term upward trend]

    They then took sets of adjoining residuals, and tested the hypothesis “This set of numbers is different from the other numbers.” If there was a statistically significant decrease, or increase, in temperature change for several years it would show up in this analysis. The statistical test of this hypothesis failed. As beautiful as a pause in global warming may seem, the idea has been killed by the ugly fact of ever increasing temperatures. To coin a phrase.
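    The flavor of that test can be sketched in a few lines of Python. Again, this is an illustration of the idea rather than Foster and Abraham’s actual code: residuals are taken from a single long-term trend, and the residuals inside a candidate “pause” window are compared to those outside it.

    ```python
    # Sketch of the residual test idea (illustration, not the paper's code).
    import numpy as np
    from scipy import stats

    def pause_window_test(years, temps, start, end):
        # Residuals of each year from the single long-term trend.
        slope, intercept, *_ = stats.linregress(years, temps)
        resid = temps - (intercept + slope * years)
        inside = resid[(years >= start) & (years <= end)]
        outside = resid[(years < start) | (years > end)]
        # Welch t-test: are the in-window residuals systematically different?
        return stats.ttest_ind(inside, outside, equal_var=False)

    # Note: trying many candidate windows demands a multiple-testing correction
    # (e.g., Bonferroni over all windows examined), as the paper emphasizes;
    # a single cherry-picked window will overstate significance.
    ```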

    Then…

    As a last attempt to find evidence of a trend in the residuals, we allowed for models in which not only the slope (the warming rate) changes, but the actual value itself. These are discontinuous trends, which really do not make sense physically … but because our goal is to investigate as many possible changes as is practical, we applied these models too. This is yet another version of change-point analysis, in which we test all practical values of the time at which the slope and value of the time series change. Hence it too must be adjusted for multiple trials.

    Again, no statistical significance. If you look at the global temperature curve, and see a pause, what you are really seeing is noise.
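    The discontinuous-trend scan they describe can also be sketched, with the caveat that this is my simplified version, not the paper’s code: at every candidate year we let both the value and the slope jump, keep the largest F statistic from the whole scan, and then judge that maximum against trend-plus-noise simulations so that the many trials cannot manufacture significance.

    ```python
    # Sketch of a discontinuous change-point scan with a multiple-trials
    # adjustment (illustration only).
    import numpy as np

    def max_f_scan(years, temps, candidates):
        n = len(years)
        X0 = np.column_stack([np.ones(n), years])   # null: one continuous trend

        def rss(X):
            beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
            r = temps - X @ beta
            return r @ r

        rss0, best = rss(X0), 0.0
        for cp in candidates:
            after = (years > cp).astype(float)
            # Alternative: a jump in level AND a new slope after the candidate year.
            X1 = np.column_stack([X0, after, after * (years - cp)])
            rss1 = rss(X1)
            f = ((rss0 - rss1) / 2) / (rss1 / (n - 4))
            best = max(best, f)
        return best

    # Calibration: run max_f_scan on many simulated pure-trend-plus-noise series
    # and use, say, the 95th percentile of those maxima as the significance
    # threshold for the value observed in the real data.
    ```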

    Foster and Abraham conclude:

    Our results show that the widespread acceptance of the idea of a recent slowdown in the increase of global average surface temperature is not supported by analytical evidence. We suggest two possible contributors to this. First, the natural curiosity of honest scientists strongly motivates them to investigate issues which appear to be meaningful even before such evidence arrives (which is a very good thing). Second, those who deny that manmade global warming is a danger have actively engaged in a public campaign to proclaim not just a slowdown in surface temperature increase, but a complete halt to global warming. Their efforts have been pervasive, so that in spite of lack of evidence to back up such claims, they have effectively sown the seeds of doubt in the public, the community of journalists, and even elected officials.

    A less sexist approach to addressing climate change

    Men and women are different, on average, in a number of ways. It all probably starts with who has the physiology to have babies and who doesn’t, and the differences spread out from there, affecting both the body and the mind. Decades of research show us that many of the body differences (but not all) are determined by developmental processes, while many of the mind differences (but maybe not all) are determined by culture; but culture still treats men and women as different, so those differences tend to be persistent and predictable, on average.

    One of the differences which seems to meld body and mind, in the West anyway, is the tendency for women to be cold while men are comfortable across a certain range of temperatures. It turns out that many decades ago a study was done that developed standards for installing and running air conditioning systems which, typically, set ambient in-room temperature levels to accommodate men. Damn the patriarchy, one more time. Since men are more comfortable on average at lower temperatures, this means a) air conditioners are set relatively low (which means high, in terms of energy use) and b) on average, women are doomed to wear sweaters or carry around blanket-like objects while at work, at the movies, at the mall, or anywhere else where sexist air conditioning is operating.

    This is important not only for the comfort of half the population, but also for climate change. A large amount (about 30%) of the CO2 emissions in the West is the result of energy use in the buildings we live and work in, and a good part of that is heating and cooling. A new study, just out in Nature Climate Change, addresses this issue. The study is by Boris Kingma and Wouter van Marken Lichtenbelt, and is called “Energy consumption in buildings and female thermal demand.” The authors point out that not only is a large amount of our energy used to heat and cool buildings, but that about 80% of the variation in that energy use is accounted for by variation in the behavior of the humans who live and work in those buildings.

    The standard for setting ambient building temperatures is set by ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers).

    ASHRAE, founded in 1894, is a global society advancing human well-being through sustainable technology for the built environment. The Society and its members focus on building systems, energy efficiency, indoor air quality, refrigeration and sustainability within the industry. Through research, standards writing, publishing and continuing education, ASHRAE shapes tomorrow’s built environment today. ASHRAE was formed as the American Society of Heating, Refrigerating and Air-Conditioning Engineers by the merger in 1959 of the American Society of Heating and Air-Conditioning Engineers (ASHAE), founded in 1894, and The American Society of Refrigerating Engineers (ASRE), founded in 1904.

    And, according to Wikipedia,

    ANSI/ASHRAE Standard 55 (Thermal Environmental Conditions for Human Occupancy) is a standard that provides minimum requirements for acceptable thermal indoor environments. It establishes the ranges of indoor environmental conditions that are acceptable to achieve thermal comfort for occupants. It was first published in 1966, and since 2004 has been updated periodically by an ASHRAE technical committee composed of industry experts. The most recent version of the standard was published in 2013.

    According to the new study, the standard overestimates the “female metabolic rate by up to 35%” which “may cause buildings to be intrinsically non-energy-efficient in providing comfort to females.” The study suggests using actual metabolic rates to set the standard, and provides information on how to approach this. “Ultimately, an accurate representation of thermal demand of all occupants leads to actual energy consumption predictions and real energy savings of buildings that are designed and operated by the buildings service community.”
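    For the curious, the comfort model behind ASHRAE Standard 55 is Fanger’s PMV (predicted mean vote) index, and the effect the study describes is easy to see with it. Below is a sketch following the reference algorithm published with ISO 7730 (which ASHRAE 55 also draws on); the metabolic rates, clothing value, and other inputs are illustrative assumptions, not numbers from the Kingma and van Marken Lichtenbelt paper.

    ```python
    # Sketch of Fanger's PMV (predicted mean vote) comfort index, following
    # the reference algorithm published with ISO 7730. PMV = 0 is "neutral".
    import math

    def pmv(ta, tr, vel, rh, met, clo):
        """ta/tr: air/radiant temp (degC); vel: air speed (m/s); rh: %; met/clo: standard units."""
        pa = rh * 10 * math.exp(16.6536 - 4030.183 / (ta + 235))  # vapour pressure, Pa
        icl = 0.155 * clo          # clothing insulation, m2.K/W
        m = met * 58.15            # metabolic rate, W/m2 (external work taken as 0)
        fcl = 1 + 1.29 * icl if icl <= 0.078 else 1.05 + 0.645 * icl
        hcf = 12.1 * math.sqrt(vel)              # forced-convection coefficient
        taa, tra = ta + 273, tr + 273
        # Iterate for the clothing surface temperature.
        tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
        p1 = icl * fcl
        p2 = p1 * 3.96
        p3 = p1 * 100
        p4 = p1 * taa
        p5 = 308.7 - 0.028 * m + p2 * (tra / 100) ** 4
        xn, xf = tcla / 100, tcla / 50
        for _ in range(150):                     # iteration cap, as in ISO 7730
            if abs(xn - xf) <= 0.00015:
                break
            xf = (xf + xn) / 2
            hcn = 2.38 * abs(100 * xf - taa) ** 0.25   # natural convection
            hc = max(hcf, hcn)
            xn = (p5 + p4 * hc - p2 * xf ** 4) / (100 + p3 * hc)
        tcl = 100 * xn - 273
        # Heat losses: skin diffusion, sweat, respiration (latent, dry), radiation, convection.
        hl1 = 3.05e-3 * (5733 - 6.99 * m - pa)
        hl2 = 0.42 * (m - 58.15) if m > 58.15 else 0.0
        hl3 = 1.7e-5 * m * (5867 - pa)
        hl4 = 0.0014 * m * (34 - ta)
        hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100) ** 4)
        hl6 = fcl * hc * (tcl - ta)
        ts = 0.303 * math.exp(-0.036 * m) + 0.028
        return ts * (m - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)

    # Scan for the neutral temperature (PMV = 0) at two metabolic rates:
    # a higher "standard" rate and a lower rate closer to the female average.
    for met in (1.4, 1.1):  # illustrative values, not the study's numbers
        t = min(range(180, 300), key=lambda d: abs(pmv(d / 10, d / 10, 0.1, 50, met, 0.7)))
        print(f"met = {met}: thermally neutral near {t / 10:.1f} degC")
    ```

    Running a sketch like this shows the point directly: the lower metabolic rate is thermally neutral at a noticeably warmer room temperature, which is exactly why a standard calibrated to the higher rate leaves many women cold and wastes cooling energy doing it.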

    So, let’s turn the air conditioners up. Meaning down. Depending on what you mean when you say that.