A study recently published by Irva Hertz-Picciotto and Lora Delwiche of the M.I.N.D. Institute, UC Davis, addresses the question of an apparent rise in the frequency of diagnosed autism in California.
This study is quickly becoming the focus of attention as the various factions with an interest in autism square off on assessing its validity. In the meantime, the study itself is rather modest in what it attempts and what it concludes.
Let’s have a look.
To date, three kinds of explanations have been given for the apparent rise in autism rates:
1) There is some artifact in the system such as changes in reporting or diagnosis, or changes in the definition of autism, that makes the rate of autism look like it is going up, but it is actually not going up;
2) There is a genetic change in populations, either because of local changes in the gene pool or immigration/migration causing this condition to increase in frequency;
3) There is an environmental cause of an actual increase in autism which is indicated in the increase in numbers of autism diagnoses.
The study by Hertz-Picciotto and Delwiche examines a number of different previously suggested explanations that fall into the first of these categories, and finds that these do have an impact on the apparent rise of reported autism, but, according to these authors, not enough to explain the entire phenomenon. The second hypothesis is also tested, or at least partly controlled for, by excluding immigrants (to California) from the study. While the authors are left concluding that explanations of the third category should be more closely investigated, they do not offer specific environmental explanations, and in fact conclude that “the extent to which the continued rise represents a true increase in the occurrence of autism remains unclear.”
Taken at face value, this study seems to suggest an evening out of funding for autism research, which is allegedly highly biased towards genetic studies, to include more investigation into environmental causes. However, life is not so simple, and the following three considerations come to mind:
1) Previously, public interest groups had emerged which made a link between autism and vaccination. These groups asserted that specific chemicals in vaccines caused autism in some children, and that this accounted for the rise in autism that we see. These groups were largely funded and staffed by parents with autistic children who, understandably, wanted to find some entity to blame, but not so understandably, may have chosen a scapegoat, at the cost of diminishing the chances of finding a real explanation for autism (regardless of any rise in incidence). Various scientific studies seem to have debunked the vaccine link, the most important of which showed no decrease in autism prevalence after the most often blamed chemical found in some vaccines was removed from use.
2) It is possible that anti-vaccine denialist spokespeople, i.e., those who have all along asserted that the special interest groups mentioned above were wrong, have expanded their argument to include all or most environmental causes of the disorder, at the same time that those special interest groups have entrenched in an environmental-cause camp. This makes it difficult to evaluate any research that simply tries to address the cause of the condition.
3) The research reported here was conducted by members of an institute funded by special interest groups that may have overlapped with those mentioned above. One has to consider the possibility of links between funding sources and researchers.
None of these three aspects of the problem should matter in the long run. The research reported here can to some extent be evaluated on its own terms, and attempts to replicate it or disprove its implications can and should (and likely will) be made by other research teams. But they do matter a great deal when it comes to discussing this issue. Recently, I posted a paragraph from a Scientific American article addressing this issue on this blog, without comment, and was instantly accused of being … I don’t know, some kind of anti-science denialist. Presumptions were made about what I was thinking, and not one person previously involved in this debate saw fit to be even remotely polite.
What does this mean? It means that if there are anti- or non-scientific factions working hard in a direction other than the one needed to nail down autism and related conditions, find out what causes them, and work toward cures or relief, those factions will do better than they otherwise might in their efforts. Instant add-water-and-stir vitriol is stupid and counterproductive. Go read the comments on my post and marvel at it all!
But what about the study at hand?
This is actually fairly complicated. This is an epidemiological study that looks at state databases showing autism diagnoses. The study notes and documents a remarkably high increase in autism rates, going from fewer than one per 10,000 children to close to 12 for the youngest cohort and 40 for the oldest cohort. For a broader context, other studies have shown rates of varying amounts, but close to (yet above) 10 per 10,000 for autism, and a much larger value of near 50, or in some cases much more, for the broader categories of diagnosis that include but are not limited to autism. The study considers, but argues against, the idea that the shift seen in California is a change from a narrow diagnostic range to a broader one.
The study looks at the following possible causes of the increase: Younger ages of diagnosis, migration of autism-rich populations into the state, changes in diagnostic criteria, and inclusion of milder cases into the category as time goes by.
The study concludes that while any of these could account for some of the changes, the sum of these effects is insufficient to explain the data. In particular, this study suggests that a 56% increase can be explained by inclusion of milder cases, and a 12% increase can be explained by changes in age at diagnosis.
The problem is that this analysis does not (and cannot) do what we really want, which is to measure the actual number of children meeting the same exact criteria at two or more points in time. What we are left with instead is a difficult game of measuring rates. Rates are always tricky.
Forty is 400 percent of 10, so a shift from 10 to 40 cases per 10,000 sounds like a large increase. But what if that ’10’ was off a bit, and it was actually five? What was 400 percent is now 800 percent. But what if that 10 was off in the other direction, and was actually a 15? Now we’re talking about 270%.
This is the problem with extended ratios. Small changes at one end or the other, and especially changes at the ‘short’ end of the ratio, can make large changes in the key index variable (the percent increase).
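A quick numerical sketch makes the point; the alternative baselines here are hypothetical, not figures from the study:

```python
# How sensitive "percent of baseline" is to error in a small baseline.
# The later value (40 per 10,000) is held fixed; only the baseline varies.
later = 40.0

for baseline in (5.0, 10.0, 15.0):  # hypothetical baselines per 10,000
    print(f"baseline {baseline:>4} -> later value is {later / baseline:.0%} of baseline")

# baseline  5.0 -> later value is 800% of baseline
# baseline 10.0 -> later value is 400% of baseline
# baseline 15.0 -> later value is 267% of baseline
```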
Even more dramatic would be a comparison of ‘then’ vs. ‘now’ in which we assumed that some of the effects occurred early vs. later. That could get very complicated.
By the most extreme reasoning in one direction, diagnostic and reporting factors can explain about an order of magnitude of the increase in new cases per 10,000, while the observed change is occurring at an order of magnitude higher still. By the most extreme reasoning in the other direction, the cumulative effects of diagnostic and reporting artifacts can easily encompass the increase.
There is another way to think about this. Take the data at face value. In particular as shown in this figure from the paper:
Is this a genetic change? No. Too fast. This is way sub-generational, and migration has been excluded from the analysis. Is this an environmental change? It looks like what one would expect for certain environmental changes, but if it is, then what factor in the environment is so different across a span of some 20 years? Such a thing should be obvious. This does resemble, in my view, a secular change of the kind one might expect with differences in diagnosis and reporting, because such changes take years to take hold, but can have a large influence.
In other words, other than ruling out in situ evolution, I don’t think you can tell what causes this. The fact that a big chunk of this variation is probably explained by changes in reporting and diagnostic criteria suggests that more of the same sort of effect may be sufficiently explanatory. The fact that a careful look at reporting and diagnostic effects does not readily explain the order of magnitude of the change we see here suggests that more explanation is needed.
In the absence of a correlation between these data and a list of causal effects (which could then lead to some effective hypothesis testing), it is important to keep an open mind about what causes autism. I can think of no reason that this study’s validity or lack thereof informs us in this regard. Those who wish to insist that, no matter what, there is no increase in autism rates are no less a failure at explaining autism than those who see a real increase in graphs like this one.
Meanwhile, the authors of this study and others are looking into the data further to test for environmental links. According to a press release from UC Davis:
Hertz-Picciotto and her colleagues at the M.I.N.D. Institute are currently conducting two large studies aimed at discovering the causes of autism. Hertz-Picciotto is the principal investigator on the CHARGE (Childhood Autism Risk from Genetics and the Environment) and MARBLES (Markers of Autism Risk in Babies-Learning Early Signs) studies.
CHARGE is the largest epidemiologic study of reliably confirmed cases of autism to date, and the first major investigation of environmental factors and gene-environment interactions in the disorder. MARBLES is a prospective investigation that follows women who already have had one child with autism, beginning early in or even before a subsequent pregnancy, to search for early markers that predict autism in the younger sibling.
“We’re looking at the possible effects of metals, pesticides and infectious agents on neurodevelopment,” Hertz-Picciotto said. “If we’re going to stop the rise in autism in California, we need to keep these studies going and expand them to the extent possible.”
Irva Hertz-Picciotto, Lora Delwiche (2009). The Rise in Autism and the Role of Age at Diagnosis. Epidemiology, 20(1), 84-90. DOI: 10.1097/EDE.0b013e3181902d15
0-4 years old? How the heck do you diagnose autism in a 1-year-old?
Let’s be careful with this sort of assumption, Arthur. It is often assumed that early diagnosis of behavioral traits is impossible or difficult, but in fact practitioners do have valid diagnostic tools for many behavioral traits from early on. I’m not defending this particular data set, but the assumption that it can’t be done because it seems hard is not valid.
You can make a diagnosis under two, but there’s a greater chance for error. According to one study, nearly one in five of those kids lose their diagnosis within the next 24 months.
Yes, by all means be careful, and compare 1-year-olds with 5-year-olds.
Notably, the authors did not investigate diagnostic substitution, which is what other studies have found to be a (if not the) dominant phenomenon. Compare autism vs. “PDD NOS” over time and see what comes up.
“A shift from about 10 to about 40 cases per 10,000 is a 400% increase.”
No, it isn’t – it is a 300% increase. The size of the increase is 30 cases, which is 300% of the original number.
“The number of cases increased to four times the original level” would be correct.
dean,
Technically, a shift from about 10 to about 40 cases per 10,000 is about a 300% increase.
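To make the two conventions the commenters are distinguishing concrete, here is the same arithmetic in a few lines of Python:

```python
old, new = 10, 40  # cases per 10,000, as discussed above

pct_increase = (new - old) / old * 100  # the increase relative to baseline
pct_of_old = new / old * 100            # the new value as a percent of baseline

print(f"{pct_increase:.0f}% increase")       # 300% increase
print(f"{pct_of_old:.0f}% of the original")  # 400% of the original
```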
Greg,
You might want to take a look at this blog:
http://autismnaturalvariation.blogspot.com/2006/08/no-autism-epidemic-update.html
The numbers Joseph has used are freely available from the California CDDS website. I think it’s very interesting that the autism population has changed in character so much over the time considered in the Hertz-Picciotto study. In 1992, only 27.8% of the people with autism had no mental retardation, while in 2005, 63.6% had no mental retardation. This fact alone suggests that the diagnostic criteria have very substantially broadened, something that Hertz-Picciotto suggests. The real issue is by how much the criteria have broadened. Hertz-Picciotto takes a factor of 2.2, which is supported by one study in the literature; however, another study suggests a factor of 3.6 is appropriate. The whole increase tabulated by Hertz-Picciotto is explained by using 3.6 instead of 2.2, making the argument for an environmental cause much weaker.
Dean and Pundit: I’ve adjusted the verbiage to make my point clearer. Do you have an actual point now? Would love to hear it!
Sorry Greg – it’s the pedantic teacher in me.
I do have a question – I’m not too impressed with the study – it seems to not be as strong as the claims made for it. Is there any chance somebody will point out the problems to the media at large?
Dean: High chance someone will point it out. Lower chance any such correction will be picked up and spread through the media.
Bob Somersby’s dailyhowler.com has been consistently pointing out problems with media coverage of “school testing” reports. He continually has new examples to report, because (despite his pointing them out) the same problems persist.
We, the science blogosphere, are doing that right now. The Blogging Peer Reviewed Research icon and the reference at the bottom put this on the BPRR feed, plus this is on the SB peer reviewed research feed, and also on the select feed. What the press normally does is to take the press release and use whatever it says. But then there’s us.
I don’t think the study is inherently weak. I think there is a huge gulf between what is said or implied in the press release and what the study does. Then the study does what it does (what I say at the very beginning of my discussion) and not much more. They even conclude, and I shall repeat: “the extent to which the continued rise represents a true increase in the occurrence of autism remains unclear.”
I didn’t pull that out of some orifice. I did not inordinately focus on it. They highlight that as a major part of their conclusions.
I suspect that much of the “this study sucks” commentary we are seeing in various places should be translated into “I don’t really know what a scientific study is, but gee, I think this might not be a good one” or “I have a preconceived notion about autism and what people must be thinking when they use the word ‘autism,’ and I choose to believe that the people who wrote this study are on the opposite side of some polarized playing field from the one I’m on.”
The study is limited in what it tries to do, and it is limited in what it concludes. Most individual papers are that way. It is not a weak study, it is a rather strong study in a field that never rarely makes what people think of as strong conclusions being pushed by a press package that is not too much about the study.
I’m really looking forward to the other studies they claim to be working on.
“It is not a weak study, it is a rather strong study in a field that never rarely makes what people think of as strong conclusions being pushed by a press package that is not too much about the study.”
Greg, I don’t think there may not have been one too many negatives in that sentence.
Greg,
Sorry, but when a study doesn’t interview the people who work at the place where the data they use is collected, that means they’re using bad data. As I’ve commented on in your other thread, H-P 2009 claims DDS doesn’t include PDD-NOS or Asperger’s. They do. That’s a potentially massive number of kids not accounted for. Sloppy.
When they don’t take account of a whole idea (diagnostic substitution), then they’re going to get bad results.
I suspect the commentary coming from you on this subject is along the lines of “I don’t really know why I blogged on this study as I hadn’t even read it when I did, but gee, now I have I’m going to defend myself to the hilt as I’m too proud to do otherwise.”
Just my 2p.
Kev: Those are important points but not necessarily valid critiques of this study. They look in this study at some factors, not all, and they conclude that the factors they looked at do not explain the whole effect, but then say that other factors they did not look at might. Almost all valid published peer reviewed studies are limited in this way. Few studies, if any, do everything all at once, and for good reasons.
Since I work mainly with data from either dead people or non-humans (like wild animals or plants), I rarely have the chance to interview those “where the data are collected.” And when I’ve done research in such a circumstance, it would have been an ethical problem. So I’m not relating to your point about not interviewing people.
They used census data. Should they have interviewed census collection agents?
Let’s look more closely to show how your critique is at the same time valid and invalid:
Simply put:
let’s say that X = ay + by + cy + [unknown shit]
A study is done to evaluate a and b, and this can explain 25% of the variance in X. The conclusion is “a and b seem to explain about 25%, then there’s c which we did not look at, and unknown shit, and that’s all we have so far.”
You would be claiming that this is a bad study because they did not look at c. But they don’t have to look at c.
Yes, your point may be very valid that c is important. But that does not mean that if the next study that comes along does not look at c that it is invalid. Hey, you do the study!!!!
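A toy numerical version of the same logic, with entirely made-up numbers, might look like this:

```python
# Hypothetical decomposition: a study measuring only factors a and b can
# honestly report the share of X they explain; the unmeasured factor c
# does not invalidate that result. All numbers here are made up.
a_effect = 0.15  # fraction of X explained by factor a
b_effect = 0.10  # fraction of X explained by factor b

measured = a_effect + b_effect
print(f"a and b explain {measured:.0%} of X")                # 25% of X
print(f"c plus unknown stuff: the remaining {1 - measured:.0%}")  # 75% of X
```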
I’m going to suggest you have a bias, and you are free to talk me out of this because I’m only guessing here. I suspect that you think diagnostic shifts or some other reporting factor is the explanation and this is a bad study because it a) did not conclude what you expect and/or b) did not look at the particular factor that you think is important.
If I’m right, then note also that you’ve missed an important point of the study (see the extensive, repeated, in-your-face discussion of what the conclusion of the study was, above!).
Or am I totally wrong about what you are getting at?
Greg, I think Kev’s saying the review misunderstood the methodology behind the data they were relying on, and didn’t doublecheck their assumptions with the providers of that data; that doing so would have corrected the misunderstanding; and that doing so in general is wise when relying on others’ data.
I’m not expressing my opinion of that claim, just that it’s how I read what he’s saying.
The problem with H-P 2009 (among others) is that it also fails to put itself in context with prior studies that strongly implicated diagnostic substitution as accounting for the “autism epidemic,” for example:
http://scienceblogs.com/insolence/2006/04/evidence_against_an_autism_epi.php
I’d include more, except that my comment would get held up for moderation, and you have yet to notice and approve a comment I made on the other thread with more than one link in it.
In any case, this topic is not something new. This is not something that hasn’t been studied before. That history isn’t really properly reflected in H-P. Indeed, the discussion in the introduction is entirely inadequate; it doesn’t even mention Shattuck’s study, which was a very important one. Also, H-P’s criticism of the Schechter study (Ref. 13) as using an “error-prone” method is also off-base. Worse, it rather uncritically cites not one, not two, but three studies by thimerosal/autism conspiracy mongers, specifically two studies by Mark Blaxill (Refs. 11 and 25; by the way, Mark also blogs for the crank antivaccine blog Age of Autism, in case you doubt his antivax cred) and one study by Mark and David Geier (Ref. 12; search for “Geier” on my blog, and you’ll find a panoply of their nonsense examined). In fact, Ref. 12 was published in the Journal of American Physicians and Surgeons, a crank journal that publishes lots of antivaccine articles. (Search for “JPANDS” on my blog, and you’ll see; I’d also be happy to show you the link on my blog to my analysis of that very study.) If you knew the background and the players in this game, you would have noticed these problems right away, before even looking at the methods and the data analysis. You don’t, and you didn’t; so you thought this study was a lot better than it really is.
Another thing: It’s a straw man to claim that we who are skeptical of this study deny the possibility of an environmental influence. We don’t. It is also a straw man to imply that we deny the possibility of a real increase in autism prevalence, as you seem to be doing. We do not deny that possibility, either. In this case, what we do say is that the Hertz-Picciotto study is not particularly good evidence for an environmental component to autism, and, as Steve Novella points out, David Kirby and other antivaccinationists have coopted this study as evidence that it must be the vaccines when it is nothing of the sort. Moreover, the way its investigators themselves are representing it in the press is a marked exaggeration of the conclusions that the study could actually support. Whenever I see that, red flags go up.
Finally, your bit about being “instantly accused of being … I don’t know, some kind of anti-science denialist” is more than a little overwrought. For my part, all I said was that I was afraid the study you were posting uncritically without comment (or “hawking,” as I put it) is a really crappy one, and I provided links to show you why I thought so. Perhaps I shouldn’t have used the word “hawking,” but otherwise I’d take nothing back. Your response was incredibly defensive, and you were unfortunately eager to impute a motive to me that you had no way of knowing whether I had or not. You started by saying “bite me” and then continued on to say, “it is really too bad for all those kids who should be getting vaccines that you are on their side. They deserve better.” In other words, you went straight for the insult aimed at me, when I said nothing about you personally in my comment. I had simply pointed out that you had chosen to highlight a crappy study on your blog. Grow a thicker skin, fer cryin’ out loud. Methinks Kev has you pegged. You posted without much critical analysis; you got called out for it; and now you’re painting yourself into a corner defending your initial take.
Quite frankly, Greg, you now come across as someone who has only just discovered the concept of the “autism epidemic” and remains unaware that there have actually been quite a few studies that have looked at the issue before this one, and that the emerging consensus is generally that, if there is a true increase in the rate of ASDs, it is small compared to the apparent increase due to diagnostic substitution, broadening of the diagnostic criteria, and increased surveillance; and, if there is an environmental component, it, too, is quite small compared to genetics. This study does nothing to cast significant doubt on that tentative consensus.
Yes, the authors conclude that
However, they also conclude:
Note the phrasing: “cannot fully explain” rather than “may not fully explain.” That, in turn, leads to their follow-up statement
and their apparently even stronger statements to the media.
And yet, with no confidence intervals on their 2.2, 1.56, and 1.24 odds ratios, how can we conclude that those factors cannot explain the observed 7-8 fold increase the authors report?
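For what it’s worth, multiplying the quoted point estimates shows the size of the gap at issue, under the assumption that the three factors combine multiplicatively; the 3.6 alternative is the broadening factor Joseph cited earlier in this thread:

```python
# Product of the point estimates quoted above, treating the three
# factors (criteria changes, milder cases, age at diagnosis) as
# independent and multiplicative -- itself an assumption.
criteria, milder, age = 2.2, 1.56, 1.24

print(f"combined: {criteria * milder * age:.2f}x")  # ~4.26x, short of 7-8x

# Swapping in the 3.6 broadening factor cited earlier in the thread:
print(f"with 3.6: {3.6 * milder * age:.2f}x")       # ~6.96x, near the 7-8x range
```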
It is obvious to this parent of two autistic boys…one vaccinated and one not, that ACETAMINOPHEN use along with the vaccines as well as prenatal use is what is triggering the spike in autism. Autistic kids are often found to be deficient in glutathione, an important antioxidant that helps the body to filter out toxins (including mercury in vaccines).
Acetaminophen is known to deplete glutathione, and when you give it to a baby before vaccinations, you are stripping them of glutathione when they need it the most: to filter out the contents of the vaccines. Look at when the spike started…it was after aspirin was linked to Reye’s Syndrome. In addition, the makers of Tylenol suffered a lot of bad publicity after some deranged individual tampered with their product, so Johnson and Johnson launched a huge marketing campaign touting the safety of their product to reassure consumers. Tylenol use skyrocketed and continues to this day. Not only that, but there was also a lot of publicity in the 80’s surrounding reports of very high fevers and seizures in babies receiving the DPT shot, so pediatricians started recommending acetaminophen BEFORE vaccines.
Hopefully, more Americans will wake up soon and realize that we are overmedicating our children. We may not have epidemics of measles and polio anymore, but is a nation of chronically ill children that will need lifelong medical care really any better?
Orac: “The problem with … tentative consensus”
Testosterone Spill! Aisle Five!!!!
But seriously, Orac, I’m sure your substantive comments on the paper are important and valid, and I’m very glad you’ve made them. I’m having some trouble with the SB back end right now, so I have not freed your comment, but I will shortly. I look forward to following the additional links you’ve sent, learning everything I can about autism and vaccines, and joining the discussion full on. It will be great working with you on this.
Orac, I see your earlier comment now. Ignoring for the moment your whining about what you suppose to be my irrelevant complaining about your uncalled for ranting about how and what I should or should not blog that one of us indulged in after the other one very irrelevantly ranted about how and what I should comment about after your post, thanks very much for the links.
G
Hi Greg,
one reason why some of us are very quick to point to the limitations in the paper is that we remember another paper from 1998 in which the lead author made statements to the press that could not be justified by the content of the paper. The result was the MMR-autism debacle which still lumbers on.
People like Rick Rollens and David Kirby are using this as evidence for an autism epidemic caused by environmental toxins. They could not do this on the basis of the paper alone. I doubt they have even read it. But they are merely taking up the points in the press release that was issued alongside the study. The authors are as responsible for the press release as they are for the paper.
I find this irresponsible. Comments that would never pass peer review are in the public domain and the paper is taken as proof of these comments’ pertinence.
I, for one, welcome the Orac/Greg smackdown!
Mike: It is interesting that this sort of thing seems to develop the way it does in the health biz surrounding vaccines. Remember the flu vaccine/Jerry Ford/Congress debacle?
Throw in the fact that epidemiology, which is always interesting and fun of course, is a conceptually difficult science and one that is almost never exact.
TSP: Orac and Greg would have a very boring smackdown because I doubt we disagree on much. It’s just that Orac is not well socialized.
A new study this side of the pond suggests a role for high levels of testosterone in the foetal environment. Interesting.
Where does this testosterone come from and what kind is it exactly?
Autism is a syndrome, not a disease. It has many causes, including the brain damage caused by whatever made the kid retarded (rubella, Fragile X syndrome, CMV, etc.).
In the past, I suspect a lot of these kids were called retarded or diagnosed with childhood schizophrenia…but in 2000, especially after “Rain Man,” parents would be ashamed to admit their kid with major behavior and cognitive problems and an IQ of 40 was “retarded,” hence the diagnosis of autism.
On the other hand, what percentage of the kids had signs that their retardation and autism were present at birth? How many had Fragile X syndrome? How many were “normal” and then deteriorated after a vaccine or a viral illness?
I suspect chemicals and illicit drugs cause some cases, but has anyone done environmental studies?
In other words, someone needs to talk about these nuances to the press, or else we will get more anti measles vaccine hysteria.
It’s a very weak study. It does not produce any new data of note, for one. For example, their figure for the impact of changes in criteria is simply taken from a single Finnish study whose ascertainment methods are not comparable to the way CalDDS counts autistic children.
Also note the difference between the press release of the study and the actual conclusions of the paper. Since they did not take awareness into account, a true increase “remains unclear.”