Autism, epidemiology, and the public perception of evidence

“One day it's C-sections, the next it's pollution, now so many genes. Connect the dots, causation changes like the wind”

That quote is from a brief conversation I had on Twitter recently, with someone who is sceptical of the evidence that the causes of autism are overwhelmingly genetic (as described here). For me, it sums up a huge problem in how science is reported and perceived by the general public. This problem is especially stark when it comes to reportage of epidemiology studies, which seem to attract disproportionate levels of press interest.

The problem was highlighted by reports of a recent study that claims to show a statistical link between delivery by Caesarean section and risk of autism. This study was reported in several Irish newspapers, with alarming headlines like “C-sections ‘raise autism risk’” and in the UK Daily Mail, whose headline read (confusingly): “Autism '23% more likely in babies born by C-section': Women warned not to be alarmed by findings because risk still remains small”.

The study in question was a meta-analysis – a statistical analysis of the results of many previous studies – which looked at rates of autism in children delivered by C-section versus those delivered by vaginal birth. Across 25 studies, the authors found evidence of a 23% increased risk of autism in children delivered by C-section – a finding reported by all of the newspaper articles and cited in several of the headlines.

23% increased risk!!! That sounds huge! It almost sounds like 1 in 4 kids delivered by C-section will get autism. It also sounds as if being delivered by C-section would be the cause of their autism. In fairness, that’s not what the study or the newspaper articles say – in fact, there are any number of caveats and qualifications in these reports that should militate against such conclusions being drawn. But they won’t.


They won’t because most people will only see or will only remember the headlines. It is the juxtaposition of the two terms – C-sections, autism – that will stick in people’s minds.

Most people not trained in epidemiology are not well equipped to evaluate the strength of the evidence, the size of the effect or the interpretation of causality from statistical associations. That should be the job of a science reporter, but it was not done in this case. In fact, most of the articles read like (and presumably are) merely a slightly re-hashed press release, the job of which is obviously to make the results sound as significant as possible. They include no critical commentary, no perspective or explanation and no judgment about whether the findings of this study are newsworthy to begin with.

For any study of this kind, you can ask several questions:

1. Is the evidence for an effect solid?
2. Is the effect significant (as in substantial)?
3. What does the effect mean?

And a responsible journalist (or a scientist thinking of whether or not to issue a press release) might also ask themselves:

4. Could uncritical reporting of these findings be misinterpreted and cause harm?

So, let’s have a look at the details here and see if we can answer those questions. In this study, published in the Journal of Child Psychology and Psychiatry, the authors look at the results of 25 previously published studies that investigated a possible link between C-sections and autism. These studies vary widely in size and methodology (was it an elective or emergency C-section; was it a case-control or cohort study; were siblings used as controls; were the findings adjusted for confounders such as maternal age, smoking during pregnancy, or gestational age at delivery; was autism the only outcome or were other things measured; was C-section the only risk factor or were other factors included; and so on).

The point of a meta-analysis is for the authors to devise ways to statistically correct for these different approaches and combine the data to derive an overall conclusion that is supposed to be more reliable than findings from any one study. The authors make a series of choices of which studies to include, what weight to give them and how to statistically combine them. These choices are all reported, of course, but the point is that different choices and approaches might lead to different answers.


In this case, the authors concentrate on 13 studies that adjusted for potential confounders (as much as any epidemiological study can). Each of these compares the frequency of autism in a cohort of children delivered by C-section with that in a group of children delivered vaginally. A difference in frequency is described by the odds ratio (OR) – if the rates are equal, then the OR=1. If the rate is, say, 10% higher in those delivered by C-section, then the OR=1.1. (Strictly, the OR compares odds rather than rates, but for a rare outcome like autism the two are nearly interchangeable.) The results of these studies are shown below:

[Table from the paper: odds ratio and sample size for each of the 13 adjusted studies]

One important thing jumps out – some of these studies have vastly more subjects than others. (For some reason, the numbers in the Langridge et al 2013 study are not listed in the table: it had 383,000 children in total). What should be obvious is that the studies showing the biggest odds ratios (5.60 or 3.11) are the ones with the smallest sample sizes (n = 278 or 146). The biggest studies (with n > 690,000 or >268,000!) show either negative or very small positive odds ratios (0.97 or 1.04).
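
To make the odds ratio concrete – and to show why a tiny study can throw up a huge one – here is a minimal sketch in Python. The counts are invented for illustration; they are not the actual study data:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table, with a 95% CI from the standard
    normal approximation on the log-OR scale.
      a = cases delivered by C-section      b = non-cases, C-section
      c = cases delivered vaginally         d = non-cases, vaginal
    """
    or_ = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Invented small study (n = 278): a handful of cases produces a big OR...
print(odds_ratio(8, 130, 3, 137))
# ...but the 95% CI runs from ~0.7 to ~10.8 - compatible with anything.

# Invented large study (n = 690,000): a tight estimate close to 1.
print(odds_ratio(1040, 99_000, 5900, 584_060))
# OR ~1.04, 95% CI ~(0.97, 1.11)
```

The point is that an eye-catching odds ratio from a few hundred subjects carries very little evidential weight compared with an estimate from hundreds of thousands.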

Now, why you would want to combine results from studies with samples in the hundreds with those with samples in the hundreds of thousands is beyond me, and the way the authors do it also seems odd. In order to combine them, the odds ratios of these studies are weighted by the inverse of the variance in each study. That is, in fact, the standard approach in meta-analysis, but it seems much more intuitive to weight them by sample size (or just to get rid of the ones with dinky sample sizes). When you do it that way, the overall odds ratio comes out barely over 1. (This doesn’t even take into account possible publication bias, whereby studies that found no effect were never published at all.)
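
To illustrate how much the weighting scheme can matter, here is a rough sketch comparing standard fixed-effect, inverse-variance pooling (done on the log-OR scale) with naive sample-size weighting. All of the numbers are invented for illustration – they are not the paper’s data, and the variances in particular are made up:

```python
import math

# Hypothetical studies: (odds ratio, sample size, variance of log-OR).
# All numbers are invented; in a real meta-analysis the variance comes
# from each study's 2x2 counts (1/a + 1/b + 1/c + 1/d).
studies = [
    (5.60, 278,     0.15),  # tiny study, huge OR
    (3.11, 146,     0.18),
    (1.04, 268_000, 0.02),  # huge study, OR near 1
    (0.97, 690_000, 0.03),
]

def pooled_or(studies, weight_fn):
    """Weighted mean of log-ORs, exponentiated back to an OR."""
    pairs = [(math.log(or_), weight_fn(n, var)) for or_, n, var in studies]
    total = sum(w for _, w in pairs)
    return math.exp(sum(log_or * w for log_or, w in pairs) / total)

# Standard fixed-effect meta-analysis: weight by 1 / variance of log-OR
print(pooled_or(studies, lambda n, var: 1 / var))  # ~1.22 with these inputs
# Naive alternative: weight by sample size
print(pooled_or(studies, lambda n, var: n))        # ~0.99 with these inputs
```

With these invented inputs, inverse-variance pooling lands around 1.2 while sample-size weighting lands near 1 – the point being simply that the pooled figure is a product of methodological choices, not a raw fact of nature.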

Anyway, all of that discussion is merely to draw attention to the fact that the methodological choices of the authors can influence the outcome. The headline figure of a 23% increased risk is thus not necessarily a solid finding.

But, for the sake of argument, let’s take it at face value and try to express what it actually means in ways that people can understand. The problem with odds ratios is that they represent an increase in risk relative to the baseline risk, which here is very small. So the 23% increased risk is not an increase in absolute risk, as it sounds, but a relative increase of 23% over a baseline risk of about 1% (so really an absolute increase of about 0.23 percentage points).

A clearer way to report that is to express it in natural frequencies: if the base rate of autism is ~10 children out of 1,000, you would expect ~12 with autism out of 1,000 children all delivered by C-section. Those are numbers that people can grasp intuitively (and most people would see that the supposed increase is fairly negligible – that is, there’s not much of a difference between 10/1,000 and 12/1,000). Certainly nothing newsworthy.
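
As a back-of-the-envelope check of that conversion, here is a minimal sketch, assuming a ~1% base rate and treating the odds ratio as a relative risk (a reasonable approximation for a rare outcome):

```python
base_rate = 0.01    # assumed baseline autism rate (~1%, i.e. 10 per 1,000)
reported_or = 1.23  # the pooled estimate reported by the meta-analysis

# For a rare outcome, the odds ratio approximates the relative risk,
# so the rate in the C-section group is roughly base_rate * reported_or.
baseline_per_1000 = base_rate * 1000                # 10 per 1,000
csection_per_1000 = base_rate * reported_or * 1000  # ~12.3 per 1,000

print(f"Vaginal delivery: {baseline_per_1000:.0f} per 1,000")
print(f"C-section:        {csection_per_1000:.1f} per 1,000")
print(f"Difference:       {csection_per_1000 - baseline_per_1000:.1f} per 1,000")
```

An extra ~2 children per 1,000 – detectable in a large dataset, but hardly the stuff of headlines.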

But let’s say it’s a slow news day and we have this pre-prepared press release describing these findings in front of us and space to fill. What should we say about what this statistical association means? Does such a correlation imply that one thing causes the other thing? Is it evidence that the fact of being delivered by C-section is the cause of an increased risk of autism?

I think most people can see that it does not, at least not necessarily. It is of course possible that the link is causal and direct. But it seems much more likely that the C-section is merely an indicator of obstetric complications, which are themselves a statistical risk factor for autism. (In which case, having the section is likely to reduce, not increase, the chances of harm!) Moreover, obstetric complications can arise due to an underlying condition of the fetus. In such a case, the arrow of causation would point in exactly the opposite direction to what it appears – the child having a neurodevelopmental condition would be the cause of the C-section.

So, to answer questions 1-3: the findings are not necessarily as solid as they appear; the size of the effect is nowhere near as large as the “23% increased risk” phrase suggests; and, even if the actually small effect were real, it does not imply that C-sections are a bad thing.

Now, for question 4: if this finding is reported, is it likely to be misunderstood (despite a wealth of caveats) and is that misunderstanding likely to cause harm? In this case, for an emotive and already confused issue like autism, the answer to both those questions is pretty obviously yes. It doesn’t take much imagination to see the effect on pregnant women faced with the decision of whether to undergo a C-section, possibly in difficult and stressful circumstances. It seems a very real possibility that this perceived risk could lead some women to refuse or delay a C-section, which could actually increase the rates of neurodevelopmental disorders due to obstetric complications.

More generally, the reportage of this particular study illustrates a much wider problem, which is that the media seem fascinated with epidemiology studies. One reason for this is that such studies typically require no background knowledge to understand. You don’t need to know any molecular or cell biology, any complicated genetics or neuroscience, to (mis)understand the idea that X is associated with increased risk of Y. That makes it easy for reporters to write and superficially accessible for a wide readership.

Unfortunately, this leads to two effects. The first is that people will misinterpret the findings and ascribe a high level of risk and a direct causal influence to some factor when the evidence does not support that at all. That has the potential to do real harm, as in the case of reduced vaccination, for example.

The second effect is more insidious – people get jaded by these constant reports in the media. First butter was bad for us, now it’s good for us; first fat was bad for us, now it’s sugar; and so on. The overall result of this constant barrage of rubbish findings is that the general public loses faith in science. If we apparently change our minds on a weekly basis, why should they trust anything we say? All science ends up being viewed as equivalent to epidemiology, which is really not what Thomas Kuhn called “normal science”.

Normal science involves an established framework of inter-supporting facts, which constrain and inform subsequent hypotheses and experiments, so that any new fact is based on, and consistent with, an unseen mountain of previous work. That is not the case for epidemiology: you could do a study on C-sections and autism pretty much on a whim, because the hypothesis is not constrained by a large framework of research (except previous research on precisely that issue). I don’t mean to knock epidemiology as an exploratory science, just to illustrate its well-known limitations.

In the case of autism, this leads people like our tweeter, above, to erroneously take the strength of the evidence for C-sections or pollution or genetics as equivalent (and, in this case, to dismiss all of it as just the flavour of the month). That seriously undermines efforts to communicate what is an exceptionally robust framework of evidence for genetic causation of this condition. The answer is not blowing in the wind...

Comments

  1. Great post!

    It seems, however, that no matter how many people ring the alarm bells, it will take a long time before this framework is abandoned.

    It doesn't help when you have TV doctors trying to explain health outcomes by "bad diets" and such.

  2. In general, my prescription is this:

    Research practices that need to end

    1. Almost all uncontrolled correlational studies
    2. Small and poorly powered randomized controlled trials (RCTs)
    3. Most small brain imaging studies – for much the same reason
    4. Sex research and other studies that rely purely on self-report
    5. Most any study done on Psych 101 students
    6. Most psychology experiments (e.g. "priming")
    7. Any evo psych research that generalizes to the whole species from findings in NW Euros
    8. Personality and behavior research that doesn't use peer report
    9. Small GWAS and genomic linkage studies

    10. Most of all THE PRESS NEEDS TO STOP REPORTING ON FINDINGS FROM THESE TYPES OF STUDIES!

  3. http://neurosciencenews.com/autism-cerebellar-damage-1285/

    I found this interesting, and incongruent with most of the autism research that goes on. I don't know what to make of it. This may or may not go along with it. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC25000/

