Falsehood: Correlation Implies/Does Not Imply Causality


As is the case with any good falsehood, one can never really be sure what the falsehood actually is. In this case, there are two falsehoods: 1) When we see a statistical correlation between two measurements or observations, we cannot assume that there is a causal link from one to the other. This is what the statement “Correlation does not imply causality,” or some similar version of that aphorism, generally means, and it is an admonishment we often hear; and 2) When we see a statistical correlation between two measurements or observations, there probably is a causal link in there somewhere, even when we hear the admonishment “Correlation does not imply causality” from someone, usually on the Internet. To put a finer point on this: What do you think people mean when they say “Correlation does not equal causality”? Or, perhaps more importantly, what do you think that statement invokes in other people’s minds?

When I hear it I usually think “Don’t be a dumbass.” I mean, really, nobody thinks that a mere statistical correlation means that two sets of observations have a definitive causal link. Almost always, a correlation is brought up because there is reason to suspect a causal link between two things, and that link is, we suspect, illustrated by the correlation.

When we hear “Correlation doesn’t imply causation,” is the person saying that two series of, say, 200 pairs of numbers that closely describe a straight line or a nice well-behaved curve on a graph are not so seemingly linked because of causation happening somewhere, and that it’s just random? Often, yes, that is what they are saying. Recently, a friend of mine mentioned a possible link between a number of physical things about herself and a described medical syndrome, and a friend of hers said “That’s correlation. Correlation doesn’t mean causation.” I thought that was an interesting example of the use of the phrase. My friend with the interesting symptoms was not comparing a series of measurements of two phenomena, but rather a series of attributes, and a mixture of quantitative and qualitative attributes at that, and how well they matched a similar list thought to be linked to a certain condition. She was diagnosing, not measuring. She was carrying out a Peircean abductive inference, not a quantitative induction. Yet the phrase came up in a rather scolding manner, from a well-meaning yet somehow paternalistic observer. And it meant, as it often does, nothing helpful.

To explore this concept further, let’s examine what we think “causality” is at a basic level. Most of the time, when we use some variant of that term, we mean that one thing is causing another thing. Gravity causes the apple to fall to the ground, rather than sideways, when it detaches from the tree (although we are saying nothing about why it came loose from the tree in the first place, so we are not giving a full causal explanation for the observation). Pacific El Nino cycles cause corresponding cycles of aridity or increased rainfall in other parts of the world. Heavy traffic causes my drive to be longer. And so on.

Sometimes, we have reason to believe that two things co-vary because of one or more external causes. Aridity in one region of the world is correlated with higher rainfall in another region of the world, and it turns out that both meteorological variations are caused by the effects of the Pacific El Nino. Quite often, especially in complex systems like those often dealt with in the social sciences, we can replicate correlations among various phenomena but have multiple ideas about what the causal structure underlying the phenomenon at hand may be. Repeated observations rule out random associations or meaninglessness in the data, but we are faced with multiple alternative models for where to put the causal arrows. In other words, we’re pretty sure there is a “causal link” somewhere, but we can’t see, or agree amongst ourselves on, what it is.
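To make the common-cause case concrete, here is a minimal simulation, entirely made up for illustration (the numbers, seed, and variable names are mine, not drawn from any real climate data). Two regional rainfall series never influence each other, yet they correlate strongly because both respond to the same El Nino-like driver:

```python
import numpy as np

rng = np.random.default_rng(42)

# A shared driver, standing in for an El Nino-like oscillation.
driver = rng.normal(size=200)

# Two regional rainfall anomalies that each respond to the driver
# (in opposite directions) plus their own local noise.
# Neither region has any effect on the other.
rain_a = -0.8 * driver + rng.normal(scale=0.5, size=200)
rain_b = 0.8 * driver + rng.normal(scale=0.5, size=200)

# The two regions are strongly (negatively) correlated anyway,
# purely because of the common cause.
print(np.corrcoef(rain_a, rain_b)[0, 1])   # roughly -0.7
```

The correlation is real and replicable, and there is causation in the system; it just doesn’t run from one region to the other.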

For instance, there is an association between hunting success (by males) in some forager groups and what might be called “mating success,” measured as married/not married, age of first marriage, married for more years vs. fewer, one vs. more than one wife, or fewer vs. more children. (There have been a number of studies using a number of variables.) I’m pretty sure that there are two distinctly different causes for this “correlation”: 1) Better hunters are preferred by some women; and 2) Men who are married and, especially, have a couple of kids, are compelled to be more successful at hunting. (The truth is that most forager men are excellent hunters; day-to-day variation in success is mostly random; therefore hunting “success” can be most reliably increased by hunting more, and possibly by simply focusing on the effort more keenly rather than screwing off.) Both causes are probably at work in most systems. The causal arrows are much more varied and fickle than the very arrows the men carry in their quivers.

This means that if you find a correlation between some measure of hunting success and some measure of mating success in a group of hunter-gatherers, the statement “correlation does not imply causation” is meaningless, though the statement “the specific model you present to explain your data is wrong in that you have causation backwards” may be correct! Or not!
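Here is a toy sketch of that ambiguity (entirely invented; it is not based on any forager data set). Two opposite causal stories, hunting skill attracting wives versus marriage compelling more hunting effort, both produce a positive correlation between hunting success and being married:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Story 1: skilled hunters are preferred as husbands.
skill = rng.normal(size=n)
hunting_1 = skill + rng.normal(scale=0.3, size=n)
married_1 = (skill + rng.normal(scale=1.0, size=n) > 0).astype(float)

# Story 2: married men with kids to feed put in more hunting effort.
married_2 = rng.binomial(1, 0.5, size=n).astype(float)
hunting_2 = 0.6 * married_2 + rng.normal(scale=0.8, size=n)

# Both stories yield a positive correlation; the correlation alone
# cannot tell you which way the arrow points.
print(np.corrcoef(hunting_1, married_1)[0, 1])
print(np.corrcoef(hunting_2, married_2)[0, 1])
```

Nothing in the correlations themselves tells you which story, or which mixture of the two, is the right one.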

Scientists (and others) often arrive at a point where they assume, pragmatically, that there is a causal link between two things even when the link can’t be explained in a coherent model. In fact, this happens quite often and is probably what directs a lot of research, as novel experiments or exploratory programs are designed to pin down such a model. When this happens, the presumption of causality has been derived from mere correlation. It has been said (go look it up in Wikipedia) that correlation does not prove causation, but it can be a hint. In practice, and logically, there is too large a gap between the statement “Correlation implies or proves causality” and “Correlation is a hint.” Correlation is as good as the data, its replicability, the relevant statistics, and yes, even p-values, if you know how to use them. If you take numerous honest stabs at a relationship between phenomena, measure things a few different ways to help rule out a bias in how that is being done, avoid doing stupid statistics (like accidentally correlating a variable to itself), replicate with the same results, don’t throw out trials unless there is a valid reason to do so, choose statistics that are sufficiently robust or at least appropriate, and get p-values that are kick-ass, then your correlations are not hints. No. Your correlations imply causation. They may not imply a simple causal effect with one thing you’ve measured causing change in the other … see the discussion above for where the causal arrows may be pointing. There may be parts of your model missing or obscured, but correlation implies causation.
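A minimal sketch of what that kind of replication looks like in code (the data here are simulated with an effect baked in, and scipy’s Pearson correlation stands in for whatever statistic would actually be appropriate in a real study):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)

# Pretend these are five independent replications of the same study,
# each measuring two phenomena that really are causally linked.
for trial in range(5):
    x = rng.normal(size=200)
    y = 0.5 * x + rng.normal(size=200)   # y really does depend on x
    r, p = pearsonr(x, y)
    print(f"trial {trial + 1}: r = {r:.2f}, p = {p:.1e}")

# Consistent, replicated correlations with tiny p-values are not a mere hint;
# they tell you causation is in the system somewhere, direction not included.
```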

So, there are several aspects to this fallacy.

“Correlation equals causation” is a misstatement because correlations between data series can emerge for reasons other than causation.

“Correlation equals causation” can be wrong because it specifies a causal structure that happens to be wrong, and more subtly but also more importantly, correlation of an “X” variable (on the horizontal axis) and a “Y” variable (on the vertical axis) usually implies, even though this is entirely arbitrary, that X causes Y (X being the independent variable, and Y being the dependent variable). Similarly, an equation “Y = mX + b” seems to be saying that X causes Y. Similarly, a statement like “when we increase altitude, temperature seems to decrease” implies that temperature varies as a function of … because of … as something caused by … altitude. But the fact that things can be ordered this way on a graph, put in this kind of equation, or described with this kind of language does not in and of itself mean that the causal arrow has been spotted and tamed.
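A small sketch of how arbitrary that ordering is (simulated data, my own example): the correlation is identical whichever variable you call X, while the two regression lines, Y on X and X on Y, are different lines, and neither choice identifies a causal arrow.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=300)
y = 0.7 * x + rng.normal(scale=0.7, size=300)

# Correlation is symmetric: it does not care which variable is "X".
print(np.corrcoef(x, y)[0, 1], np.corrcoef(y, x)[0, 1])   # identical

# But the least-squares slopes are not reciprocals of each other, so
# plotting or regressing one way rather than the other is a choice, not a finding.
slope_y_on_x = np.polyfit(x, y, 1)[0]
slope_x_on_y = np.polyfit(y, x, 1)[0]
print(slope_y_on_x, 1 / slope_x_on_y)   # different numbers
```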

“Correlation does not imply causation” is entirely wrong. If, that is, you think the word “imply” means “suggest.” Correlation does indeed “suggest” causation, though it may not suggest a particular directionality or structure of causation. So, if a person says:

“I have a correlation over here. This suggests some kinda causal thingie going on here.”

Then the response:

“No, dear, correlation does not imply causation”

is a dumb-ass thing to say. If, on the other hand, a person says:

“I’ve noticed this correlation between thing one and thing two. This strongly implies an underlying truth consisting of thing one causing thing two”

Then the response:

“OK, that’s interesting, but correlation does not mean causation”

is a worthy missive.

Finally, to the reason I wrote this post to begin with: I think there is a correlation between someone saying “correlation does not imply causation” and that person having an agenda other than spreading the word on introductory-level statistics. Sometimes it is just an effort to get the person off the topic. In the example above, about the illness, the speaker was trying to get the affected individual to not link symptoms with some awful disease as a matter of denial: hopefulness that the person didn’t really have the disease. In other cases it is more paternalistic.

But then there are those instances that are more troubling and possibly more common: denialism. We see statements like “correlation does not imply causation” when decades of data from multiple sources, analyzed a variety of different ways, consistently and repeatedly link the release of fossil carbon into the atmosphere with warming, for example. In these cases not only is the statement being used incorrectly and even nefariously, it is being used in a more bizarro-land sort of way: correlation means that THERE IS NO CAUSATION. How do you get from a strong statistical argument for something to the idea that a strong statistical argument means the opposite of what it means? By having a statement like “correlation does not imply causation” reach aphorism levels of inanity. Under such linguistic conditions, statements like “I could not possibly care less than I do about this, meaning that I care not at all” transform into statements like “I could care less,” which means the opposite, in words, but the same, in spirit. “Correlation does not bla bla bla” in the denialist context means “Statistics are wrong.” And that’s just wrong.

So, there, I said it but you may not have heard it: “Correlation does not mean causation” or some variant thereof is, sometimes, a dog whistle.



14 thoughts on “Falsehood: Correlation Implies/Does Not Imply Causality”

  1. The correlations may be causally related, but the arrow of causation may go the other way, or come from some as yet unknown thing.

    I see that lots of times in nitric oxide physiology. Lots of adverse health issues are correlated: obesity, type 2 diabetes, depression, hypertension, neurodegenerative disorders, infertility, congestive heart failure, kidney failure, liver failure, osteoporosis, sleep-disordered breathing, and so on. My hypothesis is that all of these things have a common pathway of low nitric oxide, and also that these things also lower nitric oxide levels. So once you start getting things on the low nitric oxide spectrum, you can expect to get more of them, and the ones that you have will get worse.

    Obesity doesn’t “cause” osteoporosis, but low nitric oxide can “cause” both, but it is complicated because obesity also causes increased bone loading which tends to reverse osteoporosis.

  2. How does nitric oxide level relate to asthma? It is said to be a diagnostic of it, levels above 20 or some such.

  3. High NO can be produced by iNOS, which is a sign of inflammation, which can be a characteristic of asthma.

    Exhaled NO is not a “good” marker for asthma, primarily because your nasal passages normally produce pretty high levels of NO (about 200 ppb), to facilitate matching the perfusion of blood in the lung with gas exchange. Oxyhemoglobin in the lung takes out NO, so that what you exhale is lower than the 200 ppb that your nasal passages produce.

    Bacteria on your tongue can also produce NO. Normally saliva has about 10x nitrate levels over plasma, then bacteria on the tongue reduce that nitrate to nitrite. You can get 10 mmol/L nitrite in saliva “normally”. When that saliva is swallowed, the low pH of the stomach can produce 100 ppm NO in the head space (yes, that is ppm, not ppb). Green leafy vegetables are a good source of nitrate (a few tenths of a percent); they are the main dietary source of nitrate, much larger than water. Nitrate in vegetables is probably good for you. Nitrate in water is associated with blue baby syndrome, but that is probably due to using non-sterile water to make formula, where the bacteria reduce the nitrate to nitrite using the substrates in the powdered formula. If the water was sterilized first, that likely wouldn’t happen. Also, infants don’t have as robust a methemoglobin reductase system.

    Usually NO inhibits inflammation, and high NO is a sign of low inflammation. This is complicated because inflammation generates both NO and superoxide; but they destroy each other to make peroxynitrite (in the near field), which nitrates proteins and stuff, making them more immunogenic. That happens where immune cells are “doing their thing”, which is how antibodies toward those things get amplified, and why chronic inflammation can cause autoimmune sensitization.

    Asthma may have high NO in the near field, but will be low NO in the far field (everywhere remote from the local inflammation). Treating asthma by raising the NO level is probably the way to do it.

    What causes the “activation” of immune stuff is the “respiratory burst”, which generates superoxide which destroys NO in the near field and potentiates the immune system in the near field, including mast cell degranulation.

  4. I’m trying to think of a link between 2 way ANOVA and correlation, but I can’t.

    The basic messages we try to get to introductory stat students on the first pass are

    * Correlation itself treats the variables equally, so you should not automatically think about the distinction between response and predictor
    * If you are taking the time to calculate a correlation in practice, you probably have a suspicion that there is some type of linear relationship between the two variables. If the correlation is large, spend time and effort trying to suss out what might be the source of that relationship
    * Don’t use correlation in connection with regression through the origin — it is meaningless then
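    A quick illustration of that last bullet (my own invented example, not anything from this thread): force a least-squares line through the origin and the ordinary correlation tells you essentially nothing about how well that line fits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(5, 10, 50)
    y = 20 - x + rng.normal(0, 0.5, 50)    # strong linear relationship, big intercept

    r = np.corrcoef(x, y)[0, 1]            # ordinary correlation is near -0.95

    # Least-squares slope for a line forced through the origin: b = sum(xy) / sum(x^2)
    b = np.sum(x * y) / np.sum(x ** 2)
    resid = y - b * x

    # The usual "1 - SSE/SST" bookkeeping (SST about the mean) breaks down here;
    # it can even come out negative, so r and r^2 say nothing about this fit.
    r2_about_mean = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    print(r ** 2, r2_about_mean)           # large r^2, negative "R-squared"
    ```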

    1. Dean, brilliant! I was wondering who would bring that up.

      At some level all statistics are the same as some other statistics. I.e., you can convert an R-squared value to an F statistic without redoing the stats on the same data.
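      For the record, a minimal sketch of that conversion for an ordinary least-squares regression (a standard identity; the numbers below are made up):

      ```python
      def r2_to_f(r2, n, p):
          """Overall F statistic for an OLS fit with p predictors and n
          observations, recovered from R-squared alone."""
          return (r2 / p) / ((1 - r2) / (n - p - 1))

      # e.g. R^2 = 0.40 from a simple regression (p = 1) on n = 30 points
      print(r2_to_f(0.40, n=30, p=1))   # about 18.7, on 1 and 28 degrees of freedom
      ```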

      Regression analysis and ANOVA are two faces of the same underlying model.

  5. I realize that Greg – I am a statistician. My first two papers were on robust measures of correlation and multiple determination. My point (originally intended to be tongue in cheek) was that two-way ANOVA itself doesn’t automatically relate to correlation.

    1. Not as you would apply it (I taught advanced multivariable stats to grad students at Harvard, as long as we are trading highly impressive resume items, though I don’t consider myself a statistician).

      As noted, you can equate the F statistic and R, which is generally known as the coefficient of correlation.

      But mainly, I liked the layout of the formula and the fact that the background on the image is transparent so it can be used elsewhere.

  6. It is cool.

    I don’t consider my resume highly impressive – it was meant to support my comment.

    At the other extreme: last summer an in-law told me that he no longer believed anything scientists or statisticians claimed was “supported by research” — because

    If they were really right they wouldn’t have to have any nonsense about “plus or minus” stuff: they’d give the right answer and that would be it.

    I’ve started using comments like that in class to stress the importance of grasping why variation matters.

  7. A scientist, an engineer, and a politician go hunting for rabbits… and after a bit they come across some tracks.
    Scientist: “These are rabbit tracks. We will soon get them.”
    Engineer: “Nah, idjit! Kangaroos made these.”
    Pollie: “You people are nuts. These were made by a wombat.”
    They were still arguing an hour later, till a train ran over em…..

  8. The head of the statistics department at a large university was walking across the grounds to a lunch meeting when he saw several students standing around a light pole. Seeing that they were having an animated conversation he walked closer to hear, and determined that they were trying to figure out a way to measure the height of the pole as part of a class assignment. He walked over to them and said

    “Look, the pole is bolted to this metal plate on the bottom. Get some wrenches, unbolt it, lay it down, then measure it.” He then walked off.

    The students looked at each other, laughed and one said “Isn’t that just like a statistician. He knows you want the height of something and he tells you how to figure out how long it is.”

    1. One more for you Greg:

      Why do all frequentists become Bayesians after five drinks?

      Because that’s when they start pulling conclusions from their posteriors.
