
Monday, November 24, 2014

Top-down causation and the emergence of agency

http://nelsoncosentino.deviantart.com/art/Machine-Brain-2-129467519
There is a paradox at the heart of modern neuroscience. As we succeed in explaining more and more cognitive operations in terms of patterns of electrical activity of specific neural circuits, it seems we move ever farther from bridging the gap between the physical and the mental. Indeed, each advance seems to further relegate mental activity to the status of epiphenomenon – something that emerges from the physical activity of the brain but that plays no part in controlling it. It seems difficult to reconcile the reductionist, reverse-engineering approach to brain function with the idea that we human beings have thoughts, desires, goals and beliefs that influence our actions. If actions are driven by the physical flow of ions through networks of neurons, then is there any room or even any need for psychological explanations of behaviour?

How vs Why
To me, that depends on what level of explanation is being sought. If you want to understand how an organism behaves, it is perfectly possible to describe the mechanisms by which it processes sensory inputs, infers a model of the outside world, integrates that information with its current state, weights a variety of options for actions based on past experience and predicted consequences, inhibits all but one of those options, conveys commands to the motor system and executes the action. If you fill in the details of each of those steps, that might seem to be a complete explanation of the causal mechanisms of behaviour.

If, on the other hand, you want to know why it behaves a certain way, then an explanation at the level of neural circuits (and ultimately at the level of molecules, atoms and sub-atomic particles) is missing something. It’s missing meaning and purpose. Those are not physical things but they can still have causal power in physical systems.

Why are why questions taboo?
http://scienceandreligion472.blogspot.ie/2013/08/aristotle-4-causes-and-substance-versus.html
Aristotle articulated a theory of causality, which defined four causes or types of explanation for how natural objects or systems (including living organisms) behave. The material cause deals with the physical identity of the components of a system – what it is made of. On a more abstract level, the formal cause deals with the form or organisation of those components. The efficient cause concerns the forces outside the object that induce some change. And – finally – the final cause refers to the end or intended purpose of the thing. He saw these as complementary and equally valid perspectives that can be taken to provide explanations of natural phenomena.

However, Francis Bacon, the father of the scientific method, argued that scientists should concern themselves only with material and efficient causes in nature – also known as matter and motion. Formal and final causes he consigned to Metaphysics, or what he called “magic”! Those attitudes remain prevalent among scientists today, and for good reason – that focus has ensured the phenomenal success of reductionist approaches that study matter and motion and deduce mechanism.

Scientists are trained to be suspicious of “why questions” – indeed, they are usually told explicitly that science cannot answer such questions and shouldn’t try. And for most things in nature, that is an apt admonition – really a caution against anthropomorphising, or ascribing human motives to inanimate objects like single cells or molecules, or even to organisms with less complicated nervous systems and, presumably, less sophisticated inner mental lives. Ironically, though, some people seem to think we shouldn’t even anthropomorphise humans!

Causes of behaviour can be described both at the level of mechanisms and at the level of reasons. There is no conflict between those two levels of explanation nor is one privileged over the other – both are active at the same time. Discussion of meaning does not imply some mystical or supernatural force that over-rides physical causation. It’s not that non-physical stuff pushes physical stuff around in some dualist dance. (After all, “non-physical stuff” is a contradiction in terms). It’s that the higher-order organisation of physical stuff – which has both informational content and meaning for the organism – constrains and directs how physical stuff moves, because it is directed towards a purpose.

Purpose is incorporated in artificial things by design – the washing machine that is currently annoying me behaves the way it does because it is designed to do so (though it could probably have been designed to be quieter). I could explain how it works in purely physical terms relating to the activity and interactions of all its components, but the reason it behaves that way would be missing from such a description – the components are arranged the way they are so that the machine can carry out its designed function. In living things, purpose is not designed but is cumulatively incorporated in hindsight by natural selection. The over-arching goals of survival and reproduction, and the subsidiary goals of feeding, mating, avoiding predators, nurturing young, etc., come pre-wired in the system through millions of years of evolution. 

Now, there’s a big difference between saying higher-order design principles and evolutionary imperatives constrain the arrangements of neural systems over long timeframes and claiming that top-down meaning directs the movements of molecules on a moment-to-moment basis. Most bottom-up reductionists would admit the former but challenge the latter. How can something abstract like meaning push molecules around?

Determinism, randomness and causal slack
http://ie.ign.com/articles/2013/09/12/transporter-prank-used-to-promote-star-trek-into-darkness
The whole premise of neuroscientific materialism is that all of the activities of the mind emerge from the actions and interactions of the physical components of the brain – and nothing else. If you were transported, Star Trek-style, so that all of your molecules and atoms were precisely recreated somewhere else, the resultant being would be you – it would have all the knowledge and memories, the personality traits and psychological characteristics you have. In short, precisely duplicating your brain down to the last physical detail would duplicate your mind. All those immaterial things that make your mind yours must be encoded in the physical arrangement of molecules in your brain right at this moment, as you read this.

To some (see examples below, in footnote), this implies a kind of neural determinism. The idea is that, given a certain arrangement of atoms in your brain right at this second, the laws of physics that control how such particles interact (the strong and weak nuclear forces and the gravitational and electromagnetic forces), will lead, inevitably, to a specific subsequent state of the brain. In this view, it doesn’t matter what the arrangements of atoms mean, the individual atoms will behave how they will behave regardless.

To me, this deterministic model of the brain falls at the first hurdle, for one simple reason – we know that the universe is not deterministic. If it were, then everything that happened since the Big Bang and everything that will happen in the future would have been predestined by the specific arrangements and states of all the molecules in the universe at that moment. Thankfully, the universe doesn’t work that way – there is substantial randomness at all levels, from quantum uncertainty to thermal fluctuations to emergent noise in complex systems, such as living organisms. I don’t mean just that things are so complex or chaotic that they are unpredictable in practice – that is a statement about us, not about the world. I am referring to the true randomness that demonstrably exists in the universe, which makes nature essentially non-deterministic.

Now, if you are looking for something to rescue free will from determinism, randomness by itself does not do the job – after all, random “choices” are hardly freely willed. But that randomness, that lack of determinacy, does introduce some room, some causal slack, for top-down forces to causally influence the outcome. It means that the next lower-level state of all of the components of your brain (which will entail your next action) is not completely determined merely by the individual states of all the molecular and atomic components of your brain right at this second. There is therefore room for the higher-order arrangements of the components to also have causal power, precisely because those arrangements represent things (percepts, beliefs, goals) – they have meaning that is not captured in lower-order descriptions.

Information and Meaning
In information theory, a message (a string or sequence of digits, letters, beeps, atoms, anything at all really) has a quantifiable amount of information proportional to how unlikely that particular arrangement is. So, there’s more information in knowing that a roll of a six-sided die ended up a four than in knowing that a flip of a coin ended up heads. That matters for signal transmission because it determines how compressible a message is and how efficiently it can be encoded and transmitted, especially under imperfect or noisy conditions.
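
To put rough numbers on that comparison, here is a quick sketch in Python (purely my illustration) of what information theorists call the self-information, or "surprisal", of an outcome, -log2(p), for the coin flip and the die roll:

    import math

    def surprisal_bits(p):
        # Shannon self-information of an outcome with probability p, in bits
        return -math.log2(p)

    print(surprisal_bits(1/2))   # coin comes up heads: 1.0 bit
    print(surprisal_bits(1/6))   # die comes up four: ~2.58 bits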

Interestingly, that measure is analogous to the thermodynamic property of entropy, which can be thought of as an inverse measure of how much order there is in a system: a reflection of how likely the system is to be in the state it’s in, relative to the total number of states it could have been in (the coin could only have been in two states, while the die could have been in six). In physical terms, the entropy of a gas, for example, corresponds to how many different organisations or microstates of its molecules would correspond to the same macrostate, as characterised by a specific temperature and pressure.
http://www.science4all.org/le-nguyen-hoang/entropy/

Actually, this analogy is not merely metaphorical – it is literally true that information and entropy measure the same thing. That is because information can’t just exist by itself in some ethereal sense – it has to be instantiated in the physical arrangement of some substrate. Landauer recognised that “any information that has a physical representation must somehow be embedded in the statistical mechanical degrees of freedom of a physical system”.
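
In case that equivalence sounds hand-wavy, the formal correspondence is exact up to a constant (standard textbook notation below, my addition rather than Landauer's own words): Shannon's entropy and the Gibbs entropy of statistical mechanics have the same form, differing only by Boltzmann's constant and the base of the logarithm.

    H = -\sum_i p_i \log_2 p_i  \qquad\qquad  S = -k_B \sum_i p_i \ln p_i

So when the probabilities range over the physical microstates of the substrate, S = (k_B ln 2) H; the information and the entropy really are the same quantity expressed in different units.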

However, “entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.” In fact, the information theory sense of information is not concerned at all with the semantic content of the message. For sentences in a language or for mathematical expressions, for example, information theory doesn’t care if the string is well-formed or whether it is true or not.

So, the string: “your mother was a hamster” has the same information content as its anagram “warmth or easy-to-use harm”, but only the former has semantic content – i.e., it means something. However, that meaning is not solely inherent in the string itself – it relies on the receiver’s knowledge of the language and their resultant ability to interpret what the individual words mean, what the phrase means and, further, to be aware that it is intended as an insult. The string only means something in the context of that knowledge.
http://www.quickmeme.com/meme/36eya0
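
To drive home that the measure is blind to meaning, here is a toy calculation in Python (again just an illustration): compute the entropy of the letter-frequency distribution of each string: the two anagrams come out identical, even though only one of them says anything.

    from collections import Counter
    from math import log2

    def letter_entropy(s):
        # Shannon entropy (bits per letter) of the letter-frequency distribution of s
        letters = [c for c in s.lower() if c.isalpha()]
        n = len(letters)
        return -sum((k / n) * log2(k / n) for k in Counter(letters).values())

    print(letter_entropy("your mother was a hamster"))    # ~3.0 bits per letter
    print(letter_entropy("warmth or easy-to-use harm"))   # identical: same letters, no meaning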

In the nervous system, information is physically carried in the arrangements of molecules at the cellular level and in the patterns of electrical activity of neurons. For sensory information, this pattern is imposed by physical objects or forces from the environment (e.g., photons, sound waves, odour molecules) impinging on sensory neurons and directly inducing molecular changes and neuronal activity. The resultant patterns of activity thus form a representation of something in the world and therefore carry information – order is enforced on the system, driving one particular pattern of activity from an enormous possible set of microstates. This is true not just for information about sensory stimuli but also for representations of internal states, emotions, goals, actions, etc. All of these are physically encoded in patterns of nerve cell activity.

These patterns carry information in different ways: in gradients of electrical potential in dendrites (an analog signal), in the firing of action potentials (a digital signal), in the temporal sequence of spikes from individual neurons (a temporally integrated signal), in the spatial patterns of coincident firing across an ensemble (a spatially integrated signal), and even in the trajectory of a network through state-space over some period of time (a spatiotemporally integrated signal!). The operations that carry out the spatial and temporal integration occur in the process of transmitting the information from one set of neurons to another. It is thus the higher-order patterns that encode information rather than the lower-order details of the arrangements of all the molecules in the relevant neurons at any given time-point.

But we’re not done yet. Just like that sentence about your mother (yeah, I went there), for that semantic content to mean anything to the organism it has to be interpreted, and that can only occur in the much broader context of everything the organism knows. (That’s why the French provocateur spoke in English to the stupid English knights, instead of saying “votre mère était un hamster”. Not much point insulting someone if they don’t know what it means).

The brain has two ways of representing information – one for transmission and one for storage. While information is transmitted in the flow of electrical activity in networks of neurons, as described above, it is stored at a biochemical and cellular level, through changes to the neural network, especially to the synaptic connections between neurons. Unlike a computer, the brain stores memory by changing its own hardware.

Electrical signals are transformed into chemical signals at synapses, where neurotransmitters are released by one neuron and detected by another, in turn inducing a change in the electrical activity of the receiving neuron. But synaptic transmission also induces biochemical changes, which can act as a short-term or a long-term record of activity. Those changes can alter the strength or dynamics of the synapse, so that the next time the presynaptic neuron fires an electrical signal, the output of the postsynaptic neuron will be different.

When such changes are implemented across a network of neurons, they can make some patterns of activity easier to activate (or reactivate) than others. This is thought to be the cellular basis of memory – not just of overt, conscious memories, but also the implicit, subconscious memories of all the patterns of activity that have happened in the brain. Because these patterns comprise representations of external stimuli and internal states, their history reflects the history of an organism’s experience.
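
As a cartoon of that idea (and only a cartoon; real plasticity rules are far richer), here is a minimal Hebbian-style sketch in Python, my own illustration rather than anything from the literature cited below: synapses between co-active neurons are strengthened, so a previously experienced pattern of activity produces a stronger response when even part of it recurs.

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 50
    weights = np.zeros((n_neurons, n_neurons))      # synaptic strengths, initially flat

    # a pattern of activity the network has "experienced" (1 = active, 0 = silent)
    pattern = (rng.random(n_neurons) < 0.2).astype(float)

    # Hebbian update: strengthen synapses between co-active neurons
    weights += 0.1 * np.outer(pattern, pattern)
    np.fill_diagonal(weights, 0.0)                  # no self-connections

    # present a degraded cue: half of the active neurons are silenced
    cue = pattern.copy()
    active = np.flatnonzero(cue)
    cue[active[: len(active) // 2]] = 0.0

    # the stored pattern is now easier to (re)activate than the rest of the network
    drive = weights @ cue
    print("mean drive onto stored-pattern neurons:", drive[pattern > 0].mean())
    print("mean drive onto other neurons:         ", drive[pattern == 0].mean())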

So, each of our brains has been literally physically shaped by the events that have happened to us. That arrangement of weighted synaptic connections constitutes the physical instantiation of our past experience and accumulated knowledge and provides the context in which new information is interpreted.

But I think there are still a couple of elements missing to really give significance to information. The first is salience – some things are more important for the organism to pay attention to at any given moment than others. The brain has systems that attribute salience to various stimuli, based on things like novelty, relevance to a current goal (food is more salient when you are hungry, for example), current threat sensitivity and recent experience (e.g., a loud noise is less salient if it has been preceded by several quieter ones).

The second is value – our brains assign positive or negative value to things, in a way that reflects our goals and our evolutionary imperatives. Painful things are bad; things that smell of bacteria are bad; things that taste of bitter/likely poisonous compounds are bad; social defeat is bad; missing Breaking Bad is bad. Food is good; unless you’re dieting in which case not eating is good; an opportunity to mate is (very) good; a pay raise is good; finally finishing a blogpost is good.

The value of these things is not intrinsic to them – it is a response of the organism, which reflects both evolutionary imperatives and current states and goals (i.e., purpose). This isn’t done by magic – salience and value are attributed by neuromodulatory systems that help set the responsiveness of other circuits to various types of stimuli. They effectively change the weights of synaptic connections and reconfigure neuronal networks, but they do it on the fly, like a sound engineer increasing or decreasing the volume through different channels.
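
Here is a crude sketch of that sound-engineer picture (illustrative Python with made-up numbers, not a model of any real neuromodulatory system): the wiring stays fixed, but a state-dependent gain on each input channel changes how strongly the same stimuli drive the downstream neuron.

    import numpy as np

    # fixed "wiring": synaptic weights from three input channels onto one downstream neuron
    weights = np.array([0.5, 1.0, 0.2])        # e.g. smell of food, loud noise, familiar face
    stimuli = np.array([1.0, 1.0, 1.0])        # the same inputs in both states

    def response(channel_gains):
        # output drive when a neuromodulator scales the effective gain of each channel
        return float(np.sum(channel_gains * weights * stimuli))

    sated_gains  = np.array([0.3, 1.0, 1.0])   # food channel turned down
    hungry_gains = np.array([2.0, 1.0, 1.0])   # food channel turned up

    print("response when sated: ", response(sated_gains))    # ~1.35
    print("response when hungry:", response(hungry_gains))   # ~2.2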

Top-down control and the emergence of agency
The hierarchical, multi-level structure of the brain is the essential characteristic that allows this meaning to emerge and have causal power. Information from lower-level brain areas is successively integrated by higher-level areas, which eventually propose possible actions based on the expected value of the predicted outcomes. The whole point of this design is that higher levels do not care about the minutiae at lower levels. In fact, the connections between sets of neurons are often explicitly designed to act as filters, actively excluding information outside of a specific spatial or temporal frequency. Higher-level neurons extract symbolic, higher-order information inherent in the patterned, dynamic activity of the lower level (typically integrated over space and time) in a way that does not depend on the state of every atom or the position of every molecule or even the activity of every neuron at any given moment.

There may be innumerable arrangements of all those components at the lower level that mean the same thing (that represent the same higher-order information) and that would give rise to the same response in the higher-level group of neurons. Another way to think about this is to assess causality in a counterfactual sense: instead of asking whether state A necessarily leads to state B, we can ask: if state A had been different, would state B still have arisen? If there are cases where that is true, then the full explanation of why state A leads to state B does not inhere solely in its lower-level properties. Note that this does not violate physical laws or conflict with them at all – it simply adds another level of causation that is required to explain why state A led to state B. The answer to that question lies in what state A means to the organism.
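
One way to make that point concrete is with a toy readout (a Python sketch, not a claim about any real circuit): a higher-level unit that only cares whether the pooled activity of a population crosses a threshold responds identically to an astronomical number of distinct lower-level microstates.

    import numpy as np

    rng = np.random.default_rng(1)
    n_lower = 100

    def readout(lower_state, threshold=30):
        # higher-level response depends on pooled activity, not on which neurons fired
        return int(lower_state.sum() >= threshold)

    # three different microstates with the same macrostate: 40 of 100 neurons active
    for _ in range(3):
        state = np.zeros(n_lower)
        state[rng.choice(n_lower, size=40, replace=False)] = 1.0
        print(readout(state))   # always 1, whichever 40 neurons happen to be active
    # (there are ~1.4e28 such arrangements, all equivalent at the higher level)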

To reiterate, the meaning of any pattern of neural activity is given not just by the information it carries but by the implications of that information for the organism. Those implications arise from the experiences of the individual, from the associations it has made, the contingencies it has learned from and the values it has assigned to past or predicted outcomes. This is what the brain is for – learning from past experience and abstracting the most general possible principles in order to assign value to predicted outcomes of various possible actions across the widest possible range of new situations.

https://www.flickr.com/photos/nossreh/10987353554/
This is how true agency can emerge. The organism escapes from a passive, deterministic stimulus-response mode and ceases to be an automaton. Instead, it becomes an active and autonomous entity. It chooses actions based on the meaning of the available information, for that organism, weighted by values based on its own experiences and its own goals and motives. In short, it ceases to be pushed around by every causal force, offering no resistance, and becomes a cause in its own right.

This kind of emergence doesn’t violate physical law. The system is still built of atoms and molecules and cells and circuits. And changes to those components will still affect how the system works. But that’s not all the system is. Complex, hierarchical and recursive systems that incorporate information and meaning and purpose produce astonishing and still-mysterious (but non-magical) emergent properties, like life, like consciousness, like will.

Just because it’s turtles all the way down, doesn’t mean it’s turtles all the way up.






Footnote: Here are some examples of prominent scientists and others who support the idea of a deterministic universe and who infer that free will is therefore an illusion (except Dennett and other compatibilists):

Stephen Hawking: "…the molecular basis of biology shows that biological processes are governed by the laws of physics and chemistry and therefore are as determined as the orbits of the planets. Recent experiments in neuroscience support the view that it is our physical brain, following the known laws of science, that determines our actions and not some agency that exists outside those laws…so it seems that we are no more than biological machines and that free will is just an illusion (Hawking and Mlodinow, 2010, emphasis added)." Quoted in this excellent blogpost: http://www.sociology.org/when-youre-wrong-youre-right-stephen-hawkings-implausible-defense-of-determinism/

Patrick Haggard: "As a neuroscientist, you've got to be a determinist. There are physical laws, which the electrical and chemical events in the brain obey. Under identical circumstances, you couldn't have done otherwise; there's no 'I' which can say 'I want to do otherwise'. It's richness of the action that you do make, acting smart rather than acting dumb, which is free will."
http://www.telegraph.co.uk/science/8058541/Neuroscience-free-will-and-determinism-Im-just-a-machine.html

Sam Harris: "How can we be “free” as conscious agents if everything that we consciously intend is caused by events in our brain that we do not intend and of which we are entirely unaware?" http://www.samharris.org/free-will

Jerry Coyne: "Your decisions result from molecular-based electrical impulses and chemical substances transmitted from one brain cell to another. These molecules must obey the laws of physics, so the outputs of our brain—our "choices"—are dictated by those laws." http://chronicle.com/article/Jerry-A-Coyne/131165/

Daniel Dennett: concedes that physical determinism is true but sees free will as compatible with it. This is a move whose logic I have never fully understood, and which I have never found at all convincing, yet apparently some form of compatibilism is the majority view among philosophers these days. http://plato.stanford.edu/entries/compatibilism/





Further reading:

Baumeister RF, Masicampo EJ, Vohs KD. (2011) Do conscious thoughts cause behavior? Annu Rev Psychol. 2011;62:331-61. http://www.ncbi.nlm.nih.gov/pubmed/?term=21126180

Björn Brembs (2011) Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates. Proc Biol Sci. 2011 Mar 22;278(1707):930-9. http://rspb.royalsocietypublishing.org/content/278/1707/930.full

Bob Doyle (2010) Jamesian Free Will, the Two-Stage Model of William James. William James Studies 2010, Vol. 5, pp. 1-28. williamjamesstudies.org/5.1/doyle.pdf

Buschman TJ, Miller EK.(2014) Goal-direction and top-down control. Philos Trans R Soc Lond B Biol Sci. 2014 Nov 5;369(1655). http://www.ncbi.nlm.nih.gov/pubmed/25267814

Damasio, Antonio (1994). Descartes' Error: Emotion, Reason, and the Human Brain, HarperCollins Publisher, New York.

George Ellis (2009) Top-Down Causation and the Human Brain. In Downward Causation and the Neurobiology of Free Will. Nancey Murphy, George F.R. Ellis, and Timothy O’Connor (Eds.) Springer-Verlag Berlin Heidelberg www.thedivineconspiracy.org/Z5235Y.pdf

Friston K. (2010) The free-energy principle: a unified brain theory? Nat Rev Neurosci. 2010 Feb;11(2):127-38. http://www.ncbi.nlm.nih.gov/pubmed/20068583

James Gleick (2011) The Information: A History, a Theory, a Flood http://www.amazon.com/The-Information-History-Theory-Flood/dp/1400096235

Paul Glimcher (2005) Indeterminacy in brain and behavior. Annu Rev Psychol. 2005;56:25-56. http://www.ncbi.nlm.nih.gov/pubmed/15709928

Douglas Hofstadter (1979) Gödel, Escher, Bach www.physixfan.com/wp-content/files/GEBen.pdf

Douglas Hofstadter (2007) I am a Strange Loop occupytampa.org/files/tristan/i.am.a.strange.loop.pdf

William James (1884) The Dilemma of Determinism. http://www.rci.rutgers.edu/~stich/104_Master_File/104_Readings/James/James_DILEMMA_OF_DETERMINISM.pdf

Roger Sperry (1965) Mind, brain and humanist values. In New Views of the Nature of Man. ed. J. R. Platt, University of Chicago Press, Chicago, 1965. http://www.informationphilosopher.com/solutions/scientists/sperry/Mind_Brain_and_Humanist_Values.html

Roger Sperry (1991) In defense of mentalism and emergent interaction. Journal of Mind and Behavior 12:221-245 (1991)  http://people.uncw.edu/puente/sperry/sperrypapers/80s-90s/270-1991.pdf

Monday, November 3, 2014

Autism, epidemiology, and the public perception of evidence

“One day it's C-sections, the next it's pollution, now so many genes. Connect the dots, causation changes like the wind”

That quote is from a brief conversation I had on Twitter recently, with someone who is sceptical of the evidence that the causes of autism are overwhelmingly genetic (as described here). For me, it sums up a huge problem in how science is reported and perceived by the general public. This problem is especially stark when it comes to reportage of epidemiology studies, which seem to attract disproportionate levels of press interest.

The problem was highlighted by reports of a recent study that claims to show a statistical link between delivery by Caesarean section and risk of autism. This study was reported in several Irish newspapers, with alarming headlines like “C-sections ‘raise autism risk’” and in the UK Daily Mail, whose headline read (confusingly): “Autism '23% more likely in babies born by C-section': Women warned not to be alarmed by findings because risk still remains small”.

The study in question was a meta-analysis – a statistical analysis of the results of many previous studies – which looked at rates of autism in children delivered by C-section versus those delivered by vaginal birth. Across 25 studies, the authors found evidence of a 23% increased risk of autism in children delivered by C-section – a finding reported by all of the newspaper articles and cited in several of the headlines.

23% increased risk!!! That sounds huge! It almost sounds like 1 in 4 kids delivered by C-section will get autism. It also sounds like it is the fact that they were delivered by C-section that would be the cause of them having autism. In fairness, that’s not what the study or the newspaper articles say – in fact, there are any number of caveats and qualifications in these reports that should militate against such conclusions being drawn. But they won’t.

http://nourishbaby.com.au/blogs/news/8173077-preparing-for-an-elective-c-section-birth

They won’t because most people will only see or will only remember the headlines. It is the juxtaposition of the two terms – C-sections, autism – that will stick in people’s minds.

Most people not trained in epidemiology are not well equipped to evaluate the strength of the evidence, the size of the effect or the interpretation of causality from statistical associations. That should be the job of a science reporter, but it was not done in this case. In fact, most of the articles read like (and presumably are) merely a slightly re-hashed press release, the job of which is obviously to make the results sound as significant as possible. They include no critical commentary, no perspective or explanation and no judgment about whether the findings of this study are newsworthy to begin with.

For any study of this kind, you can ask several questions: 1. Is the evidence for an effect solid? 2. Is the effect significant (as in substantial)? 3. What does the effect mean? And a responsible journalist (or scientist thinking of whether or not to issue a press release) might also ask themselves: 4. Could uncritical reporting of these findings be misinterpreted and cause harm?

So, let’s have a look at the details here and see if we can answer those questions. In this study, published in the Journal of Child Psychology and Psychiatry, the authors look at the results of 25 previously published studies that investigated a possible link between C-sections and autism. These studies vary widely in size and methodologies (was it an elective or emergency C-section, a case-control or cohort study, were siblings used as controls, were the findings adjusted for confounders such as maternal age or smoking during pregnancy or gestational age at delivery, was autism the only outcome or were other things measured, was C-section the only risk factor or were other factors included, etc., etc.).

The point of a meta-analysis is for the authors to devise ways to statistically correct for these different approaches and combine the data to derive an overall conclusion that is supposed to be more reliable than findings from any one study. The authors make a series of choices of which studies to include, what weight to give them and how to statistically combine them. These choices are all reported, of course, but the point is that different choices and approaches might lead to different answers.


In this case, the authors concentrate on 13 studies that adjusted for potential confounds (as much as any epidemiological study can). Each of these compares the frequency of autism in a cohort of children delivered by C-section with that in a group of children delivered vaginally. A difference in frequency is described by the odds ratio (OR) – if the rates are equal, then the OR=1; if the rate is, say, 10% higher in those delivered by C-section, then the OR is about 1.1 (for a rare outcome like autism, the odds ratio closely approximates the ratio of rates). The results of these studies are shown below:

[Table: odds ratios and sample sizes for the 13 adjusted studies included in the meta-analysis]

One important thing jumps out – some of these studies have vastly more subjects than others. (For some reason, the numbers in the Langridge et al 2013 study are not listed in the table: it had 383,000 children in total). What should be obvious is that the studies showing the biggest odds ratios (5.60 or 3.11) are the ones with the smallest sample sizes (n = 278 or 146). The biggest studies (with n > 690,000 or >268,000!) show either negative or very small positive odds ratios (0.97 or 1.04).

Now, why you would want to combine results from studies with samples in the hundreds with those with samples in the hundreds of thousands is beyond me, and the way the authors do it also seems odd. In order to combine them, the odds ratios of these studies are weighted by the inverse of the variance in each study. Maybe that’s a standard meta-analysis thing to do, but it seems much more intuitive to weight them by the sample size (or just get rid of the ones with dinky sample sizes). When you do it that way, the overall odds ratio comes out barely over 1. (This doesn’t even take into account possible publication bias, whereby studies that found no effect were never even published).
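
For concreteness, here is what that sample-size-weighted calculation looks like (a Python sketch using only the four odds ratios and sample sizes quoted above, standing in for the full set of 13 studies, so the number it prints is illustrative rather than a re-analysis; the standard inverse-variance approach instead weights each study by the reciprocal of the variance of its log odds ratio):

    import numpy as np

    # odds ratios and (approximate) sample sizes quoted above -- only 4 of the
    # 13 adjusted studies, so the pooled value is illustrative only
    odds_ratios  = np.array([5.60, 3.11, 1.04, 0.97])
    sample_sizes = np.array([278.0, 146.0, 268_000.0, 690_000.0])

    # pool on the log scale, weighting each study by its sample size
    pooled_log_or = np.sum(sample_sizes * np.log(odds_ratios)) / np.sum(sample_sizes)
    print(round(float(np.exp(pooled_log_or)), 2))   # ~0.99: the two huge studies dominate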

Anyway, all of that discussion is merely to draw attention to the fact that the methodological choices of the authors can influence the outcome. The headline reported 23% increase in risk is thus not necessarily a solid finding.

But, for the sake of argument, let’s take it at face value and try to express what it actually means in ways that people can understand. The problem with odds ratios is that they represent an increase relative to the baseline risk, which is very small. So, the 23% increased risk is not an increase in absolute risk, as it sounds, but a relative increase of 23% on the baseline risk, which is about 1% (so really an increase of about 0.23 percentage points).

A clearer way to report that is to express it in natural frequencies: if the base rate of autism is ~10 children out of 1,000, you would expect ~12 with autism out of 1,000 children all delivered by C-section. Those are numbers that people can grasp intuitively (and most people would see that the supposed increase is fairly negligible – that is, there’s not much of a difference between 10/1,000 and 12/1,000). Certainly nothing newsworthy.
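
The arithmetic behind those natural frequencies is worth spelling out (a trivial Python sketch; the ~1% base rate is the ballpark figure used above, and for a rare outcome the odds ratio can be read as a relative risk):

    base_rate     = 0.010     # ~10 children per 1,000
    relative_risk = 1.23      # the headline "23% increased risk"

    absolute_rate = base_rate * relative_risk
    print(round(absolute_rate * 1000, 1))                   # ~12.3 per 1,000 delivered by C-section
    print(round((absolute_rate - base_rate) * 1000, 1))     # ~2.3 extra cases per 1,000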

But let’s say it’s a slow news day and we have this pre-prepared press release describing these findings in front of us and space to fill. What should we say about what this statistical association means? Does such a correlation imply that one thing causes the other thing? Is it evidence that the fact of being delivered by C-section is the cause of an increased risk of autism?

I think most people can see that it does not, at least not necessarily. It is of course possible that the link is causal and direct. But it seems much more likely that the C-section is merely an indicator of obstetric complications, which are themselves a statistical risk factor for autism. (In which case, having the section is likely to reduce, not increase the chances of harm!). Moreover, obstetric complications can arise due to an underlying condition of the fetus. In such a case, the arrow of causation would be exactly opposite to what it appears – the child having a neurodevelopmental condition would be the cause of the C-section.

So, to answer questions 1-3: the findings are not necessarily as solid as they appear, the size of the effect is nowhere near as large as the “23% increased” risk phrase suggests, and, even if the actually small effect were real, it does not imply that C-sections are a bad thing.

Now, for question 4: if this finding is reported, is it likely to be misunderstood (despite a wealth of caveats) and is that misunderstanding likely to cause harm? In this case, for an emotive and already confused issue like autism, the answer to both those questions is pretty obviously yes. It doesn’t take much imagination to see the effect on pregnant women faced with the decision of whether to undergo a C-section, possibly in difficult and stressful circumstances. It seems a very real possibility that this perceived risk could lead some women to refuse or delay a C-section, which could actually increase the rates of neurodevelopmental disorders due to obstetric complications.

More generally, the reportage of this particular study illustrates a much wider problem, which is that the media seem fascinated with epidemiology studies. One reason for this is that such studies typically require no background knowledge to understand. You don’t need to know any molecular or cell biology, any complicated genetics or neuroscience, to (mis)understand the idea that X is associated with increased risk of Y. That makes it easy for reporters to write and superficially accessible for a wide readership.

Unfortunately, it leads to two effects: one, people will misinterpret the findings and ascribe a high level of risk and a direct causal influence to some factor when the evidence does not support that at all. That has potential to do real harm, as in the case of reduced vaccination, for example.

The second effect is more insidious – people get jaded by these constant reports in the media. First, butter was bad for us, now it’s good for us, first fat was bad for us, now it’s sugar, etc., etc. The overall result of this constant barrage of rubbish findings is that the general public loses faith in science. If we apparently change our minds on a weekly basis, why should they trust anything we say? All science ends up being viewed as equivalent to epidemiology, which is really not what Thomas Kuhn has called “normal science”.

Normal science involves an established framework of inter-supporting facts, which constrain and inform subsequent hypotheses and experiments, so that any new fact is based on and consistent with an unseen mountain of previous work. That is not the case for epidemiology – you could do a study on C-sections and autism pretty much on a whim – that hypothesis is not constrained by a large framework of research (except previous research on precisely that issue). I don’t mean to knock epidemiology as an exploratory science, just to illustrate its well-known limitations.

In the case of autism, this leads people like our tweeter, above, to erroneously take the strength of the evidence for C-sections or pollution or genetics as equivalent (and, in this case, to dismiss all of it as just the flavour of the month). That seriously undermines efforts to communicate what is an exceptionally robust framework of evidence for genetic causation of this condition. The answer is not blowing in the wind...
