Monday, November 24, 2014

Top-down causation and the emergence of agency

There is a paradox at the heart of modern neuroscience. As we succeed in explaining more and more cognitive operations in terms of patterns of electrical activity of specific neural circuits, it seems we move ever farther from bridging the gap between the physical and the mental. Indeed, each advance seems to further relegate mental activity to the status of epiphenomenon – something that emerges from the physical activity of the brain but that plays no part in controlling it. It seems difficult to reconcile the reductionist, reverse-engineering approach to brain function with the idea that we human beings have thoughts, desires, goals and beliefs that influence our actions. If actions are driven by the physical flow of ions through networks of neurons, then is there any room or even any need for psychological explanations of behaviour?

How vs Why
To me, that depends on what level of explanation is being sought. If you want to understand how an organism behaves, it is perfectly possible to describe the mechanisms by which it processes sensory inputs, infers a model of the outside world, integrates that information with its current state, weights a variety of options for actions based on past experience and predicted consequences, inhibits all but one of those options, conveys commands to the motor system and executes the action. If you fill in the details of each of those steps, that might seem to be a complete explanation of the causal mechanisms of behaviour.

If, on the other hand, you want to know why it behaves a certain way, then an explanation at the level of neural circuits (and ultimately at the level of molecules, atoms and sub-atomic particles) is missing something. It’s missing meaning and purpose. Those are not physical things but they can still have causal power in physical systems.

Why are why questions taboo?
Aristotle articulated a theory of causality, which defined four causes or types of explanation for how natural objects or systems (including living organisms) behave. The material cause deals with the physical identity of the components of a system – what it is made of. On a more abstract level, the formal cause deals with the form or organisation of those components. The efficient cause concerns the forces outside the object that induce some change. And – finally – the final cause refers to the end or intended purpose of the thing. He saw these as complementary and equally valid perspectives that can be taken to provide explanations of natural phenomena.

However, Francis Bacon, the father of the scientific method, argued that scientists should concern themselves only with material and efficient causes in nature – also known as matter and motion. Formal and final causes he consigned to Metaphysics, or what he called “magic”! Those attitudes remain prevalent among scientists today, and for good reason – that focus has ensured the phenomenal success of reductionist approaches that study matter and motion and deduce mechanism.

Scientists are trained to be suspicious of “why questions” – indeed, they are usually told explicitly that science cannot answer such questions and shouldn’t try. And for most things in nature, that is an apt admonition – really a caution against anthropomorphising: ascribing human motives to inanimate objects like single cells or molecules, or even to organisms with less complicated nervous systems and, presumably, less sophisticated inner mental lives. Ironically, though, some people seem to think we shouldn’t even anthropomorphise humans!

Causes of behaviour can be described both at the level of mechanisms and at the level of reasons. There is no conflict between those two levels of explanation nor is one privileged over the other – both are active at the same time. Discussion of meaning does not imply some mystical or supernatural force that over-rides physical causation. It’s not that non-physical stuff pushes physical stuff around in some dualist dance. (After all, “non-physical stuff” is a contradiction in terms). It’s that the higher-order organisation of physical stuff – which has both informational content and meaning for the organism – constrains and directs how physical stuff moves, because it is directed towards a purpose.

Purpose is incorporated in artificial things by design – the washing machine that is currently annoying me behaves the way it does because it is designed to do so (though it could probably have been designed to be quieter). I could explain how it works in purely physical terms relating to the activity and interactions of all its components, but the reason it behaves that way would be missing from such a description – the components are arranged the way they are so that the machine can carry out its designed function. In living things, purpose is not designed but is cumulatively incorporated in hindsight by natural selection. The over-arching goals of survival and reproduction, and the subsidiary goals of feeding, mating, avoiding predators, nurturing young, etc., come pre-wired in the system through millions of years of evolution. 

Now, there’s a big difference between saying higher-order design principles and evolutionary imperatives constrain the arrangements of neural systems over long timeframes and claiming that top-down meaning directs the movements of molecules on a moment-to-moment basis. Most bottom-up reductionists would admit the former but challenge the latter. How can something abstract like meaning push molecules around?

Determinism, randomness and causal slack
The whole premise of neuroscientific materialism is that all of the activities of the mind emerge from the actions and interactions of the physical components of the brain – and nothing else. If you were transported, Star Trek-style, so that all of your molecules and atoms were precisely recreated somewhere else, the resultant being would be you – it would have all the knowledge and memories, the personality traits and psychological characteristics you have. In short, precisely duplicating your brain down to the last physical detail would duplicate your mind. All those immaterial things that make your mind yours must be encoded in the physical arrangement of molecules in your brain right at this moment, as you read this.

To some (see examples below, in footnote), this implies a kind of neural determinism. The idea is that, given a certain arrangement of atoms in your brain right at this second, the laws of physics that control how such particles interact (the strong and weak nuclear forces and the gravitational and electromagnetic forces), will lead, inevitably, to a specific subsequent state of the brain. In this view, it doesn’t matter what the arrangements of atoms mean, the individual atoms will behave how they will behave regardless.

To me, this deterministic model of the brain falls at the first hurdle, for one simple reason – we know that the universe is not deterministic. If it were, then everything that happened since the Big Bang and everything that will happen in the future would have been predestined by the specific arrangements and states of all the molecules in the universe at that moment. Thankfully, the universe doesn’t work that way – there is substantial randomness at all levels, from quantum uncertainty to thermal fluctuations to emergent noise in complex systems, such as living organisms. I don’t mean just that things are so complex or chaotic that they are unpredictable in practice – that is a statement about us, not about the world. I am referring to the true randomness that demonstrably exists in the universe, which makes nature essentially non-deterministic.

Now, if you are looking for something to rescue free will from determinism, randomness by itself does not do the job – after all, random “choices” are hardly freely willed. But that randomness, that lack of determinacy, does introduce some room, some causal slack, for top-down forces to causally influence the outcome. It means that the next lower-level state of all of the components of your brain (which will entail your next action) is not completely determined merely by the individual states of all the molecular and atomic components of your brain right at this second. There is therefore room for the higher-order arrangements of the components to also have causal power, precisely because those arrangements represent things (percepts, beliefs, goals) – they have meaning that is not captured in lower-order descriptions.

Information and Meaning
In information theory, a message (a string or sequence of digits, letters, beeps, atoms – anything at all, really) carries a quantifiable amount of information that increases with how unlikely that particular arrangement is. So, there’s more information in knowing that a roll of a six-sided die came up four than in knowing that a flip of a coin came up heads. That matters for signal transmission because it determines how compressible a message is and how efficiently it can be encoded and transmitted, especially under imperfect or noisy conditions.

Interestingly, that measure is analogous to the thermodynamic property of entropy, which can be thought of as an inverse measure of how much order there is in a system. This reflects how likely it is to be in the state that it’s in, relative to the total number of such states that it could have been in (the coin could only have been in two states, while the die could have been in six). In physical terms, the entropy of a gas, for example, corresponds to how many different organisations or microstates of its molecules would correspond to the same macrostate, as characterised by specific temperature and pressure. 
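To make that concrete, here is a quick illustrative sketch of my own (in Python – not part of the original argument): the information carried by the coin and the die, computed with the standard Shannon formula, and the same numbers read as a count of equally likely microstates.

```python
import math

def self_information(p):
    """Shannon information (in bits) of an outcome with probability p."""
    return -math.log2(p)

# A fair coin has 2 equally likely states; a fair die has 6.
print(self_information(1 / 2))   # 1.0 bit for "heads"
print(self_information(1 / 6))   # ~2.58 bits for "it came up four"

# The entropy view of the same numbers: with W equally likely microstates,
# the information gained on learning the outcome is log2(W) bits --
# the die tells you more because its result rules out more alternatives.
for W in (2, 6):
    print(f"{W} microstates -> {math.log2(W):.2f} bits")
```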

Actually, this analogy is not merely metaphorical – it is literally true that information and entropy measure the same thing. That is because information can’t just exist by itself in some ethereal sense – it has to be instantiated in the physical arrangement of some substrate. Landauer recognised that “any information that has a physical representation must somehow be embedded in the statistical mechanical degrees of freedom of a physical system”.

However, “entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.” In fact, the information theory sense of information is not concerned at all with the semantic content of the message. For sentences in a language or for mathematical expressions, for example, information theory doesn’t care if the string is well-formed or whether it is true or not.

So, the string: “your mother was a hamster” has the same information content as its anagram “warmth or easy-to-use harm”, but only the former has semantic content – i.e., it means something. However, that meaning is not solely inherent in the string itself – it relies on the receiver’s knowledge of the language and their resultant ability to interpret what the individual words mean, what the phrase means and, further, to be aware that it is intended as an insult. The string only means something in the context of that knowledge.

In the nervous system, information is physically carried in the arrangements of molecules at the cellular level and in the patterns of electrical activity of neurons. For sensory information, this pattern is imposed by physical objects or forces from the environment (e.g., photons, sound waves, odor molecules) impinging on sensory neurons and directly inducing molecular changes and neuronal activity. The resultant patterns of activity thus form a representation of something in the world and therefore have information – order is enforced on the system, driving one particular pattern of activity from an enormous possible set of microstates. This is true not just for information about sensory stimuli but also for representations of internal states, emotions, goals, actions, etc. All of these are physically encoded in patterns of nerve cell activity.

These patterns carry information in different ways: in gradients of electrical potential in dendrites (an analog signal), in the firing of action potentials (a digital signal), in the temporal sequence of spikes from individual neurons (a temporally integrated signal), in the spatial patterns of coincident firing across an ensemble (a spatially integrated signal), and even in the trajectory of a network through state-space over some period of time (a spatiotemporally integrated signal!). The operations that carry out the spatial and temporal integration occur in the process of transmitting the information from one set of neurons to another. It is thus the higher-order patterns that encode information rather than the lower-order details of the arrangements of all the molecules in the relevant neurons at any given time-point.

But we’re not done yet. Just like that sentence about your mother (yeah, I went there), for that semantic content to mean anything to the organism it has to be interpreted, and that can only occur in the much broader context of everything the organism knows. (That’s why the French provocateur spoke in English to the stupid English knights, instead of saying “votre mère était un hamster”. Not much point insulting someone if they don’t know what it means).

The brain has two ways of representing information – one for transmission and one for storage. While information is transmitted in the flow of electrical activity in networks of neurons, as described above, it is stored at a biochemical and cellular level, through changes to the neural network, especially to the synaptic connections between neurons. Unlike a computer, the brain stores memory by changing its own hardware.

Electrical signals are transformed into chemical signals at synapses, where neurotransmitters are released by one neuron and detected by another, in turn inducing a change in the electrical activity of the receiving neuron. But synaptic transmission also induces biochemical changes, which can act as a short-term or a long-term record of activity. Those changes can alter the strength or dynamics of the synapse, so that the next time the presynaptic neuron fires an electrical signal, the output of the postsynaptic neuron will be different.

When such changes are implemented across a network of neurons, they can make some patterns of activity easier to activate (or reactivate) than others. This is thought to be the cellular basis of memory – not just of overt, conscious memories, but also the implicit, subconscious memories of all the patterns of activity that have happened in the brain. Because these patterns comprise representations of external stimuli and internal states, their history reflects the history of an organism’s experience.
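As a cartoon of how weight changes can make a stored pattern “easier to reactivate”, here is a minimal Hopfield-style sketch (my own toy illustration, not a claim about the actual circuitry): Hebbian-style strengthening of connections between co-active units stores a pattern, and the network then falls back into that pattern even from a degraded cue.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stored "memory": a pattern of activity across 50 units, coded as +/-1.
pattern = rng.choice([-1, 1], size=50)

# Hebbian-style learning: strengthen connections between co-active units.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# A degraded cue: the same pattern with 20% of the units flipped.
cue = pattern.copy()
cue[rng.choice(50, size=10, replace=False)] *= -1

# Each unit repeatedly updates from its weighted inputs; the network settles
# back into the stored pattern, which is now "easier to reactivate".
state = cue.astype(float)
for _ in range(5):
    state = np.sign(W @ state)

print("overlap with stored pattern:", np.mean(state == pattern))  # -> 1.0
```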

So, each of our brains has been literally physically shaped by the events that have happened to us. That arrangement of weighted synaptic connections constitutes the physical instantiation of our past experience and accumulated knowledge and provides the context in which new information is interpreted.

But I think there are still a couple elements missing to really give significance to information. The first is salience – some things are more important for the organism to pay attention to at any given moment than others. The brain has systems to attribute salience to various stimuli, based on things like novelty, relevance to a current goal (food is more salient when you are hungry, for example), current threat sensitivity and recent experience (e.g., a loud noise is less salient if it has been preceded by several quieter ones).

The second is value – our brains assign positive or negative value to things, in a way that reflects our goals and our evolutionary imperatives. Painful things are bad; things that smell of bacteria are bad; things that taste of bitter/likely poisonous compounds are bad; social defeat is bad; missing Breaking Bad is bad. Food is good; unless you’re dieting in which case not eating is good; an opportunity to mate is (very) good; a pay raise is good; finally finishing a blogpost is good.

The value of these things is not intrinsic to them – it is a response of the organism, which reflects both evolutionary imperatives and current states and goals (i.e., purpose). This isn’t done by magic – salience and value are attributed by neuromodulatory systems that help set the responsiveness of other circuits to various types of stimuli. They effectively change the weights of synaptic connections and reconfigure neuronal networks, but they do it on the fly, like a sound engineer increasing or decreasing the volume through different channels.
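The “sound engineer” analogy can be pictured with a toy gain-modulation sketch (illustrative only, with made-up numbers): the neuromodulatory signal doesn’t rewire the circuit or change its inputs, it rescales how strongly the circuit responds to them.

```python
import numpy as np

rng = np.random.default_rng(1)

weights = rng.normal(size=(3, 5))   # fixed synaptic weights of a small circuit
inputs = rng.normal(size=5)         # the same sensory input in both conditions

def circuit_response(gain):
    # The neuromodulator acts like a volume fader: it multiplies the effective
    # strength of the connections without changing the wiring or the input.
    return np.tanh(gain * (weights @ inputs))

print("low salience/value :", circuit_response(gain=0.2))
print("high salience/value:", circuit_response(gain=2.0))
```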

Top-down control and the emergence of agency
The hierarchical, multi-level structure of the brain is the essential characteristic that allows this meaning to emerge and have causal power. Information from lower-level brain areas is successively integrated by higher-level areas, which eventually propose possible actions based on the expected value of the predicted outcomes. The whole point of this design is that higher levels do not care about the minutiae at lower levels. In fact, the connections between sets of neurons are often explicitly designed to act as filters, actively excluding information outside of a specific spatial or temporal frequency. Higher-level neurons extract symbolic, higher-order information inherent in the patterned, dynamic activity of the lower level (typically integrated over space and time) in a way that does not depend on the state of every atom or the position of every molecule or even the activity of every neuron at any given moment.

There may be infinite arrangements of all those components at the lower level that mean the same thing (that represent the same higher-order information) and that would give rise to the same response in the higher-level group of neurons. Another way to think about this is to assess causality in a counterfactual sense: instead of asking whether state A necessarily leads to state B, we can ask: if state A had been different, would state B still have arisen? If there are cases where that is true, then the full explanation of why state A leads to state B does not inhere solely in its lower-level properties. Note that this does not violate physical laws or conflict with them at all – it simply adds another level of causation that is required to explain why state A led to state B. The answer to that question lies in what state A means to the organism.

To reiterate, the meaning of any pattern of neural activity is given not just by the information it carries but by the implications of that information for the organism. Those implications arise from the experiences of the individual, from the associations it has made, the contingencies it has learned from and the values it has assigned to past or predicted outcomes. This is what the brain is for – learning from past experience and abstracting the most general possible principles in order to assign value to predicted outcomes of various possible actions across the widest possible range of new situations.

This is how true agency can emerge. The organism escapes from a passive, deterministic stimulus-response mode and ceases to be an automaton. Instead, it becomes an active and autonomous entity. It chooses actions based on the meaning of the available information, for that organism, weighted by values based on its own experiences and its own goals and motives. In short, it ceases to be pushed around, offering no resistance to every causal force, and becomes a cause in its own right.

This kind of emergence doesn’t violate physical law. The system is still built of atoms and molecules and cells and circuits. And changes to those components will still affect how the system works. But that’s not all the system is. Complex, hierarchical and recursive systems that incorporate information and meaning and purpose produce astonishing and still-mysterious (but non-magical) emergent properties, like life, like consciousness, like will.

Just because it’s turtles all the way down, doesn’t mean it’s turtles all the way up.






Footnote: Here are some examples of prominent scientists and others who support the idea of a deterministic universe and who infer that free will is therefore an illusion (except Dennett and other compatibilists):

Stephen Hawking: "…the molecular basis of biology shows that biological processes are governed by the laws of physics and chemistry and therefore are as determined as the orbits of the planets. Recent experiments in neuroscience support the view that it is our physical brain, following the known laws of science, that determines our actions and not some agency that exists outside those laws…so it seems that we are no more than biological machines and that free will is just an illusion (Hawking and Mlodinow, 2010, emphasis added)." Quoted in this excellent blogpost: http://www.sociology.org/when-youre-wrong-youre-right-stephen-hawkings-implausible-defense-of-determinism/

Patrick Haggard: "As a neuroscientist, you've got to be a determinist. There are physical laws, which the electrical and chemical events in the brain obey. Under identical circumstances, you couldn't have done otherwise; there's no 'I' which can say 'I want to do otherwise'. It's richness of the action that you do make, acting smart rather than acting dumb, which is free will."
http://www.telegraph.co.uk/science/8058541/Neuroscience-free-will-and-determinism-Im-just-a-machine.html

Sam Harris: "How can we be “free” as conscious agents if everything that we consciously intend is caused by events in our brain that we do not intend and of which we are entirely unaware?" http://www.samharris.org/free-will

Jerry Coyne: "Your decisions result from molecular-based electrical impulses and chemical substances transmitted from one brain cell to another. These molecules must obey the laws of physics, so the outputs of our brain—our "choices"—are dictated by those laws." http://chronicle.com/article/Jerry-A-Coyne/131165/

Daniel Dennett: Who concedes physical determinism is true but sees free will as compatible with that. This is a move that I have never fully understood the logic of or found at all convincing, yet apparently some form of compatibilism is a majority view among philosophers these days. http://plato.stanford.edu/entries/compatibilism/





Further reading:

Baumeister RF, Masicampo EJ, Vohs KD. (2011) Do conscious thoughts cause behavior? Annu Rev Psychol. 2011;62:331-61. http://www.ncbi.nlm.nih.gov/pubmed/?term=21126180

Björn Brembs (2011) Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates. Proc Biol Sci. 2011 Mar 22;278(1707):930-9. http://rspb.royalsocietypublishing.org/content/278/1707/930.full

Bob Doyle (2010) Jamesian Free Will, the Two-Stage Model of William James. William James Studies 2010, Vol. 5, pp. 1-28. williamjamesstudies.org/5.1/doyle.pdf

Buschman TJ, Miller EK.(2014) Goal-direction and top-down control. Philos Trans R Soc Lond B Biol Sci. 2014 Nov 5;369(1655). http://www.ncbi.nlm.nih.gov/pubmed/25267814

Damasio, Antonio (1994). Descartes' Error: Emotion, Reason, and the Human Brain, HarperCollins Publisher, New York.

George Ellis (2009) Top-Down Causation and the Human Brain. In Downward Causation and the Neurobiology of Free Will. Nancey Murphy, George F.R. Ellis, and Timothy O’Connor (Eds.) Springer-Verlag Berlin Heidelberg www.thedivineconspiracy.org/Z5235Y.pdf

Friston K. (2010) The free-energy principle: a unified brain theory? Nat Rev Neurosci. 2010 Feb;11(2):127-38. http://www.ncbi.nlm.nih.gov/pubmed/20068583

James Gleick (2011) The Information: A History, a Theory, a Flood http://www.amazon.com/The-Information-History-Theory-Flood/dp/1400096235

Paul Glimcher: Indeterminacy in brain and behavior. Annu Rev Psychol. 2005;56:25-56. http://www.ncbi.nlm.nih.gov/pubmed/15709928

Douglas Hofstadter (1979) Gödel, Escher, Bach www.physixfan.com/wp-content/files/GEBen.pdf

Douglas Hofstadter (2007) I am a Strange Loop occupytampa.org/files/tristan/i.am.a.strange.loop.pdf

William James (1884) The Dilemma of Determinism. http://www.rci.rutgers.edu/~stich/104_Master_File/104_Readings/James/James_DILEMMA_OF_DETERMINISM.pdf

Roger Sperry (1965) Mind, brain and humanist values. In New Views of the Nature of Man. ed. J. R. Platt, University of Chicago Press, Chicago, 1965. http://www.informationphilosopher.com/solutions/scientists/sperry/Mind_Brain_and_Humanist_Values.html

Roger Sperry (1991) In defense of mentalism and emergent interaction. Journal of Mind and Behavior 12:221-245 (1991)  http://people.uncw.edu/puente/sperry/sperrypapers/80s-90s/270-1991.pdf

Monday, November 3, 2014

Autism, epidemiology, and the public perception of evidence

“One day it's C-sections, the next it's pollution, now so many genes. Connect the dots, causation changes like the wind”

That quote is from a brief conversation I had on Twitter recently, with someone who is sceptical of the evidence that the causes of autism are overwhelmingly genetic (as described here). For me, it sums up a huge problem in how science is reported and perceived by the general public. This problem is especially stark when it comes to reportage of epidemiology studies, which seem to attract disproportionate levels of press interest.

The problem was highlighted by reports of a recent study that claims to show a statistical link between delivery by Caesarean section and risk of autism. This study was reported in several Irish newspapers, with alarming headlines like “C-sections ‘raise autism risk’” and in the UK Daily Mail, whose headline read (confusingly): “Autism '23% more likely in babies born by C-section': Women warned not to be alarmed by findings because risk still remains small”.

The study in question was a meta-analysis – a statistical analysis of the results of many previous studies – which looked at rates of autism in children delivered by C-section versus those delivered by vaginal birth. Across 25 studies, the authors found evidence of a 23% increased risk of autism in children delivered by C-section – a finding reported by all of the newspaper articles and cited in several of the headlines.

23% increased risk!!! That sounds huge! It almost sounds like 1 in 4 kids delivered by C-section will get autism. It also sounds like it is the fact that they were delivered by C-section that would be the cause of them having autism. In fairness, that’s not what the study or the newspaper articles say – in fact, there are any number of caveats and qualifications in these reports that should militate against such conclusions being drawn. But they won’t.


They won’t because most people will only see or will only remember the headlines. It is the juxtaposition of the two terms – C-sections, autism – that will stick in people’s minds.

Most people not trained in epidemiology are not well equipped to evaluate the strength of the evidence, the size of the effect or the interpretation of causality from statistical associations. That should be the job of a science reporter, but it was not done in this case. In fact, most of the articles read like (and presumably are) merely a slightly re-hashed press release, the job of which is obviously to make the results sound as significant as possible. They include no critical commentary, no perspective or explanation and no judgment about whether the findings of this study are newsworthy to begin with.

For any study of this kind, you can ask several questions: 1. Is the evidence for an effect solid? 2. Is the effect significant (as in substantial)? 3. What does the effect mean? And a responsible journalist (or scientist thinking of whether or not to issue a press release) might also ask themselves: 4. Could uncritical reporting of these findings be misinterpreted and cause harm?

So, let’s have a look at the details here and see if we can answer those questions. In this study, published in the Journal of Child Psychology and Psychiatry, the authors look at the results of 25 previously published studies that investigated a possible link between C-sections and autism. These studies vary widely in size and methodologies (was it an elective or emergency C-section, a case-control or cohort study, were siblings used as controls, were the findings adjusted for confounders such as maternal age or smoking during pregnancy or gestational age at delivery, was autism the only outcome or were other things measured, was C-section the only risk factor or were other factors included, etc., etc.).

The point of a meta-analysis is for the authors to devise ways to statistically correct for these different approaches and combine the data to derive an overall conclusion that is supposed to be more reliable than findings from any one study. The authors make a series of choices of which studies to include, what weight to give them and how to statistically combine them. These choices are all reported, of course, but the point is that different choices and approaches might lead to different answers.


In this case, the authors concentrate on 13 studies that adjusted for potential confounds (as much as any epidemiological study can). Each of these compares the frequency of autism in a cohort of children delivered by C-section with that in a group of children delivered vaginally. A difference in frequency is described by the odds ratio (OR) – if the rates are equal, then the OR=1. If the rate is, say, 10% higher in those delivered by C-section, then the OR is roughly 1.1 (for a rare outcome like autism, the odds ratio closely approximates the ratio of the rates). The results of these studies are shown below:



One important thing jumps out – some of these studies have vastly more subjects than others. (For some reason, the numbers in the Langridge et al 2013 study are not listed in the table: it had 383,000 children in total). What should be obvious is that the studies showing the biggest odds ratios (5.60 or 3.11) are the ones with the smallest sample sizes (n = 278 or 146). The biggest studies (with n > 690,000 or >268,000!) show either negative or very small positive odds ratios (0.97 or 1.04).

Now, why you would want to combine results from studies with samples in the hundreds with those with samples in the hundreds of thousands is beyond me, and the way the authors do it also seems odd. In order to combine them, the odds ratios of these studies are weighted by the inverse of the variance in each study. Maybe that’s a standard meta-analysis thing to do, but it seems much more intuitive to weight them by the sample size (or just get rid of the ones with dinky sample sizes). When you do it that way, the overall odds ratio comes out barely over 1. (This doesn’t even take into account possible publication bias, where studies that found no effect were never even published).
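To show what that alternative weighting looks like, here is a rough sketch using only the four studies whose odds ratios and sample sizes are quoted above (so it is illustrative, not a re-analysis of the full set of 13 adjusted studies): pooling on the log-odds scale with weights proportional to sample size, the big studies dominate and the pooled OR lands very close to 1.

```python
import math

# Odds ratios and (approximate) sample sizes quoted above -- an illustrative
# subset only, not the full set of adjusted studies in the meta-analysis.
studies = [
    (5.60, 278),
    (3.11, 146),
    (0.97, 690_000),
    (1.04, 268_000),
]

# Pool on the log-odds scale, weighting each study by its sample size.
weighted_sum = sum(n * math.log(odds_ratio) for odds_ratio, n in studies)
total_n = sum(n for _, n in studies)
pooled_or = math.exp(weighted_sum / total_n)

print(f"sample-size-weighted pooled OR: {pooled_or:.2f}")  # ~0.99 for this subset
```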

Anyway, all of that discussion is merely to draw attention to the fact that the methodological choices of the authors can influence the outcome. The 23% increase in risk reported in the headlines is thus not necessarily a solid finding.

But, for the sake of argument, let’s take it at face value and try to express what it actually means in ways that people can understand. The problem with odds ratios is that they represent an increase in risk relative to the baseline risk, which is very small. So, the 23% increased risk is not an increase in absolute risk, as it sounds, but a 23% relative increase over the baseline risk, which is about 1% (so really an absolute increase of about 0.23 percentage points).

A clearer way to report that is to express it in natural frequencies: if the base rate of autism is ~10 children out of 1,000, you would expect ~12 with autism out of 1,000 children all delivered by C-section. Those are numbers that people can grasp intuitively (and most people would see that the supposed increase is fairly negligible – that is, there’s not much of a difference between 10/1,000 and 12/1,000). Certainly nothing newsworthy.
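The arithmetic behind those natural frequencies, as a short sketch (taking the ~1% baseline and the reported OR of 1.23 at face value):

```python
baseline_risk = 0.010    # ~10 children per 1,000
odds_ratio = 1.23        # the reported "23% increased risk"

# Convert risk to odds, apply the odds ratio, convert back to risk.
baseline_odds = baseline_risk / (1 - baseline_risk)
exposed_odds = baseline_odds * odds_ratio
exposed_risk = exposed_odds / (1 + exposed_odds)

print(f"{baseline_risk * 1000:.1f} per 1,000 at baseline")            # 10.0
print(f"{exposed_risk * 1000:.1f} per 1,000 delivered by C-section")  # ~12.3
```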

But let’s say it’s a slow news day and we have this pre-prepared press release describing these findings in front of us and space to fill. What should we say about what this statistical association means? Does such a correlation imply that one thing causes the other thing? Is it evidence that the fact of being delivered by C-section is the cause of an increased risk of autism?

I think most people can see that it does not, at least not necessarily. It is of course possible that the link is causal and direct. But it seems much more likely that the C-section is merely an indicator of obstetric complications, which are themselves a statistical risk factor for autism. (In which case, having the section is likely to reduce, not increase the chances of harm!). Moreover, obstetric complications can arise due to an underlying condition of the fetus. In such a case, the arrow of causation would be exactly opposite to what it appears – the child having a neurodevelopmental condition would be the cause of the C-section.

So, to answer questions 1-3: the findings are not necessarily as solid as they appear, the size of the effect is nowhere near as large as the “23% increased” risk phrase suggests, and, even if the actually small effect were real, it does not imply that C-sections are a bad thing.

Now, for question 4: if this finding is reported, is it likely to be misunderstood (despite a wealth of caveats) and is that misunderstanding likely to cause harm? In this case, for an emotive and already confused issue like autism, the answer to both those questions is pretty obviously yes. It doesn’t take much imagination to see the effect on pregnant women faced with the decision of whether to undergo a C-section, possibly in difficult and stressful circumstances. It seems a very real possibility that this perceived risk could lead some women to refuse or delay a C-section, which could actually increase the rates of neurodevelopmental disorders due to obstetric complications.

More generally, the reportage of this particular study illustrates a much wider problem, which is that the media seem fascinated with epidemiology studies. One reason for this is that such studies typically require no background knowledge to understand. You don’t need to know any molecular or cell biology, any complicated genetics or neuroscience, to (mis)understand the idea that X is associated with increased risk of Y. That makes it easy for reporters to write and superficially accessible for a wide readership.

Unfortunately, it leads to two effects: one, people will misinterpret the findings and ascribe a high level of risk and a direct causal influence to some factor when the evidence does not support that at all. That has potential to do real harm, as in the case of reduced vaccination, for example.

The second effect is more insidious – people get jaded by these constant reports in the media. First, butter was bad for us, now it’s good for us; first fat was bad for us, now it’s sugar, etc., etc. The overall result of this constant barrage of rubbish findings is that the general public loses faith in science. If we apparently change our minds on a weekly basis, why should they trust anything we say? All science ends up being viewed as equivalent to epidemiology, which is really not what Thomas Kuhn called “normal science”.

Normal science involves an established framework of inter-supporting facts, which constrain and inform subsequent hypotheses and experiments, so that any new fact is based on and consistent with an unseen mountain of previous work. That is not the case for epidemiology – you could do a study on C-sections and autism pretty much on a whim – that hypothesis is not constrained by a large framework of research (except previous research on precisely that issue). I don’t mean to knock epidemiology as an exploratory science, just to illustrate its well-known limitations.

In the case of autism, this leads people like our tweeter, above, to erroneously take the strength of the evidence for C-sections or pollution or genetics as equivalent (and, in this case, to dismiss all of it as just the flavour of the month). That seriously undermines efforts to communicate what is an exceptionally robust framework of evidence for genetic causation of this condition. The answer is not blowing in the wind...

Sunday, October 19, 2014

Autism: The Truth is (not) Out There

Parents of a child affected by autism naturally want to know the cause. Autism can dramatically disrupt the typical childhood pattern of cognitive, behavioural and social development. At the most severe end, the child may require care for the rest of their lives. Even at the milder end, it may make mainstream education impossible and exclude many opportunities available to typically developing children. Any parent would hope that knowing the cause could lead to better treatment and management options for their child.

Unfortunately, until very recently, it has not been possible to identify causes in individual children (with rare exceptions). Science and medicine had apparently failed to solve this mystery. (I say “had”, because, as we will see below, this is no longer true). The typical experience of children and their parents in the health system has been one of frustration, often with a long diagnostic odyssey, limited options for medical intervention and a struggle to obtain access to specialised educational services – all during a critical period in the child’s development. Given this frustration, it is understandable that a variety of alternative theories of autism causation have become popular.

Parents should beware, however – while such theories appeal to those common frustrations, they generally have no scientific support whatsoever. Many of these are not just non-scientific, but actively anti-scientific in nature. They tend to be based on anecdote, narrative and outright speculation, rather than the scientific method (objective assessment of empirical evidence). Many play to conspiracy theories, casting scientists and doctors as pawns of Big Pharma, for example, and those proposing alternative theories as brave mavericks fighting against the establishment to get The Truth out there.

Ironically, the truth is that many of the people pushing alternative theories are looking to make money off them – often by taking advantage of vulnerable parents. Not all, by any means, but very often a commercial interest is not hard to find (such as selling costly diets or supplements or even more dangerous supposed “treatments”; claims that alternative therapies like homeopathy can cure the condition; pricey seminars; or a new book to promote)*. Alternative theories for autism and the treatments that go with them are big business.

The other irony is that these theories actively ignore our growing knowledge of the real causes of autism, which are clearly mainly genetic. The Truth is known but it’s not out there. Scientists have done a poor job of communicating the extraordinary advances made in the last few years in understanding the genetic causes of autism. (Even many scientists and doctors seem unaware of these advances, in fact). This leaves a void that can be filled by theories that are highly speculative or sometimes frankly bizarre, and that are also either unsupported or flatly contradicted by available evidence.


The unusual suspects
The old psychoanalytical theory that autism is caused by “cold parenting” has long since been discredited, but still pops up every now and then (and is still quite prevalent in France and Argentina, for some reason). It is often espoused by people who also happen to offer psychological courses that purport to realign this relationship and thereby ameliorate the condition. Bizarrely, this theory has been resurrected in modern form by neuroscientist Susan Greenfield, who has suggested that autism is caused by overuse of digital technology and immersion in social media, with a concomitant withdrawal from direct human contact. The refrigerator mother has been replaced by the unfeeling screen of the iPad.

This technophobic notion is largely incoherent and has no supporting evidence. (When asked for evidence, Greenfield has described her own theory as follows: "I point to the increase in autism and I point to internet use. That is all.”). The fact that autism is typically diagnosed by two or three years of age, well before most kids have Facebook or Twitter accounts, rather fatally undermines the idea.

Another class of theories proposes that autism is caused not by an impoverished psychosocial environment, but by a toxic physical environment. There is no shortage of potential culprits: fluoride in the water, mercury in dental amalgam, vaccinations, genetically modified food, herbicides, pesticides, food allergies, microwaves, cell phone towers, traffic fumes, even toxins in everyday items like mattresses and dental floss.

(These days, many of these are given a pseudoscientific gloss by invoking the magic of “epigenetics”, a term now so corrupted as to be worse than useless).

A driving factor behind all of these theories is the fact that rates of autism diagnoses have been increasing steadily in some countries for the past couple decades. This has led some to declare “an autism epidemic”, with the obvious connotation that something in the environment must be causing it.

This premise is flawed, however, as it assumes that the rise in diagnoses mirrors a real rise in the rate of the condition. In fact, the rise in diagnosis rates can be largely explained by better recognition of the condition among doctors and broader awareness among the general public, and by diagnostic substitution, whereby children who previously would have been given a general diagnosis of mental retardation are now more commonly diagnosed with ASD. After all, prior to 1943, no one was diagnosed with autism because the term had not yet been applied to this childhood condition. The gradual rise in autism diagnoses following that period could hardly be thought of as signaling a sudden epidemic. The criteria used by psychiatrists to define the condition have changed multiple times over the years, including in the most recent version of the DSM, and each change leads to a change in the number of children who fit under this diagnosis. The label is thus artificial and changeable and its application has also varied widely over time. There is no reason to think these variations reflect changes in the rate of the condition itself.


There is, moreover, no evidence linking any of the potential environmental factors listed above to autism. In fact, in many cases, there is very strong evidence disproving any such link. (See here and here for a discussion of the absence of any link with vaccination, for example). Regrettably, however, some of these stories simply refuse to die.


Undead memes
Part of their persistence may arise from the way they are framed as anti-mainstream theories – for many adherents this inoculates them against scientific critiques or counter-evidence, due to mistrust of the scientific establishment or a lack of acceptance of the scientific method as a means of objectively discovering the truth. It is, moreover, very difficult to counter emotive personal anecdotes and highly publicised but methodologically flawed studies (some of which have later been retracted or even shown to be fraudulent), with, for example, dry statistical data showing no epidemiological link to vaccines or fluoride or dental floss or any other supposed environmental toxins.

In one sense, such arguments grant too much credibility to these theories by allowing the battle to be fought solely on their turf. It puts the onus on scientists to disprove each new theory. (This is like arguing with creationists by trying to disprove the existence of Noah’s ark, instead of simply presenting the positive evidence for evolution by natural selection). The problem with this is that negative findings are simply not very compelling, psychologically, regardless of the statistical strength of the conclusion. It’s too easy to misinterpret what is really strong evidence that something is not the case as merely the absence of evidence that it is (which would leave it an open question, requiring “more research”). This means the “is not” side is at a disadvantage in an “is too”/“is not” argument.

In my opinion, the strongest counter-argument is one that is often not mentioned in such discussions – the positive evidence for genetic causation. Instead of expending so much effort trying to prove “it’s not that”, we can simply say “it’s this – look”. There is really no explanatory void to fill. We know what causes autism, in general, and we are identifying more and more of the specific factors that cause it in individuals.


Autism is genetic
The evidence that autism is largely genetic is overwhelming – in fact, it is among the most heritable of common disorders. This has been established through family and twin studies that look at the rate of occurrence of the disorder (or statistical “risk”) in relatives of patients with autism. If one child in a family has autism, the risk to subsequent children has been estimated at ~10-20%, far higher than the 1% population average. If two children are affected, the risk to another child can be as high as 50%.

Now, you might argue that this does not prove genetic influences, as environmental factors may also be shared between family members. Twin studies have been designed for precisely that reason. Here, we compare the risk to one co-twin when the other has a diagnosis of autism, in two cases: when the twins are identical (or monozygotic, sharing 100% of their DNA) versus when they are fraternal (or dizygotic, sharing 50% of their DNA, on average). This design is so powerful because it separates genetic effects from possible environmental ones: genetic effects should make identical twins more similar than fraternal twins, while shared environmental effects should make the two kinds of twin pair equally similar.

The results are dramatic – if one of a pair of identical twins is autistic, the chance that the other one will be too is over 80%, while the rate in fraternal twins is less than 20%. (Even in cases when the co-twin does not have a diagnosis of autism, they very often have some other psychiatric diagnosis, again much more so in identical than fraternal co-twins). These results, which have been replicated many times, show that variation across the population in risk of autism is overwhelmingly due to genetic differences. Crucially, these results are not consistent with an important role for variable environmental factors in the etiology of the disorder – these should affect identical and fraternal twins equally. Similarly, full siblings of someone with autism are at ~2 times greater risk than half-siblings, again consistent with genetic but not with environmental causation.
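For readers who want to see that logic run, here is a small liability-threshold simulation of my own (illustrative parameter values, not taken from any of the twin studies). The point is the qualitative contrast: if liability is mostly genetic, pairs who share all of it (MZ-like) should be far more concordant than pairs who share half (DZ-like), whereas a shared-environment model predicts the two kinds of pair should look the same. The exact numbers from such a simple model are not meant to match the published concordance rates.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pairs = 500_000
threshold = 2.326   # top ~1% of a standard-normal liability scale is "affected"

def concordance(liability_correlation):
    """P(co-twin affected | twin affected) for pairs whose liabilities
    are correlated to the given degree."""
    r = liability_correlation
    shared = rng.normal(size=n_pairs)
    twin1 = np.sqrt(r) * shared + np.sqrt(1 - r) * rng.normal(size=n_pairs)
    twin2 = np.sqrt(r) * shared + np.sqrt(1 - r) * rng.normal(size=n_pairs)
    affected1, affected2 = twin1 > threshold, twin2 > threshold
    return (affected1 & affected2).sum() / affected1.sum()

h2 = 0.9   # assumed heritability of liability, purely for illustration
# Genetic model: MZ pairs share all genetic liability, DZ pairs half of it.
print("MZ-like (r = h2):  ", round(concordance(h2), 2))
print("DZ-like (r = h2/2):", round(concordance(h2 / 2), 2))
# Shared-environment model: both kinds of pair share liability equally,
# so it predicts similar concordance for MZ and DZ -- not what is observed.
print("shared environment:", round(concordance(0.5), 2), "(same for MZ and DZ)")
```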

These kinds of analyses answer the question: in a given population at a given time, why do some people get autism while others don’t? The answer is unequivocal – this is overwhelmingly down to genetic differences.


Finding mutations in specific genes
The fact that autism is largely a genetic disorder has been known for decades. What has not been known is the identity of the specific genes involved, with the exception of a couple examples, involving genes associated with syndromes in which autistic symptoms are common, such as Fragile X syndrome or Rett syndrome. These syndromes are caused by mutations in specific single genes and account for 3-4% of all autism cases. However, the vast majority of cases were left unexplained, and not for want of looking.

This apparent failure to find the specific genes involved has clearly led to the impression that genetics cannot explain the condition and that other factors must therefore be involved. This is not the case at all – even if we remained completely ignorant of specific causes, the fact that autism is extremely highly heritable would remain just as true. As it happens, the failure to find specific causes had a technical reason – it was simply very difficult to discover the kinds of mutations that cause the condition. This is because such mutations are individually very rare in the population and because there is not just one gene involved, or two, or even ten, but probably many hundreds.

These mutations are now detectable thanks to new technologies that allow the entire genome to be surveyed (either for changes to single letters or bases of DNA or for deletions or duplications of bits of chromosomes). Using these technologies, it has been possible to find over a hundred different genes (or regions of chromosomes) in which a mutation can lead to autism. Collectively, the known causes now account for 20-25% of cases of autism.

It is worth emphasising that point: doctors and clinical geneticists can now ascribe a specific genetic cause to perhaps a quarter of individual autism patients. This is a vast increase from even a few years ago and new risk loci are being discovered at an ever-increasing rate. There is every reason to think we are only at the beginning of these discoveries as we have really just begun to look. Again, regardless of how many cases we have explained currently, the very high heritability of autism remains a fact – the important factors in the vast majority of the remaining cases will still be genetic. Far from being a failure, modern genetics has been extraordinarily successful at uncovering specific causes of autism.


Autism can be genetic, but not inherited
One common objection to the idea that autism is a genetic condition is that so many cases of autism are sporadic – they occur in a family where no one else has autism. How could it be the case that the condition is genetic if it is apparently not inherited? This situation can arise when the condition is caused by a new mutation – a change in the DNA that occurs in the generation of sperm or egg cells (mostly sperm, as it happens). These occur all the time – this is how genetic variation enters the population. Most of the time these “de novo” mutations have no effect, but sometimes they disrupt an important gene and can result in disease. When they disrupt one of the many hundreds of genes important for brain development, they can result in autism.

It has been estimated that as many as half of all autism cases are caused by de novo mutations. By comparing the sequence of an affected child’s genome with that of their parents it is possible to tell whether a mutation was inherited or arose de novo. This is obviously important information in assessing the risk in that family to future offspring – in the case of a de novo mutation, this should not be higher than the population baseline risk. By contrast, if the mutation was inherited, then risk to subsequent children may approach 50%.

Another important finding is that the effects of such mutations are more severe in males than in females. Not all carriers of the known disease-linked mutations actually develop autism. Some develop other disorders, while some are apparently healthy and unaffected (or at least have no clinical diagnosis). This means people can be carriers of such a mutation but not have autism themselves. This is especially true for females. In cases where a pathogenic mutation in an autism patient was inherited from an unaffected parent, that parent is twice as likely to be the mother as the father. Also, the mutations observed in female patients who do have a diagnosis of autism tend to be much more severe than those observed in male patients. These data are consistent with a model where the male brain is more susceptible to the effects of autism-causing mutations than the female brain. This can explain why an apparently unaffected couple can have multiple children with autism.

These genetic findings also highlight a fundamental point: autism is not a single condition. The clinical heterogeneity has always been acknowledged (leading to the use of the term autism spectrum disorder), but it is now clear that it is also extremely heterogeneous from an etiological point of view. Autism is really an umbrella term – it refers to a set of symptoms that can arise as a consequence of probably hundreds of distinct genetic conditions.


Defining new genetic syndromes
Those distinct conditions were never obvious before, because we had no way to distinguish between people who carry mutations in different genes. But now genomic technologies can identify people with the same mutations and are allowing clinicians to define new syndromes, which may be characterised by a typical profile of symptoms. For example, mutations in a gene called CHD8 are a newly discovered, very rare cause of autism, but enough cases have now been studied to define a symptom profile, showing for example that these patients are at especially high risk of co-morbid gastrointestinal problems (found at higher rate in autism generally, but not in all cases). Knowing the cause in individuals can thus provide important information on prognosis, common co-morbidities, even responsiveness to medications.

The application of genetic testing in cases of autism should spare many children and parents the diagnostic odyssey that many currently suffer through. A definitive diagnosis can bring important benefits in terms of how families think of and deal with the condition. Indeed, support groups have arisen for many rare genomic disorders, allowing parents to compare experiences with other families with the same condition. On the other hand, as described in a recent review on this topic: “we should balance our enthusiasm for finding a genetic diagnosis with the recognition that autistic traits represent one aspect of a diverse behavioral spectrum, and work to avoid any potential stigmatization of the patient and family through identification of genetic susceptibility”.

The use of genetic information in clinical management is likely to become increasingly important in the near future as we learn more about newly discovered syndromes and their underlying biology. This kind of personalised medicine is already happening in other fields, such as oncology. One can hope that its application in psychiatry will go a long way towards transforming the experiences in the health service of autism patients and their parents and reducing the frustrations that arose when we were effectively operating in the dark.

This is a positive message of real success in science that is already changing how we think about disorders like autism and that is likely to completely transform the practice of psychiatry, especially for neurodevelopmental disorders. Scientists need to do a better job of getting that truth out there.  



*(For the record, I declare no such conflicts myself).

 Thanks to Dorothy Bishop, Svetlana Molchanova and Emily Willingham for helpful comments on this post.


Tuesday, July 22, 2014

Exciting findings in schizophrenia genetics – but what do they mean?

A paper published today represents a true landmark in psychiatric genetics. It reports results of a genome-wide association study (GWAS) of schizophrenia, involving 36,989 cases and 113,075 controls. Assembling this sample required collaboration on a massive scale, with over 300 authors involved. This huge sample gives unprecedented statistical power to detect genetic variants that predispose to disease, even if their individual effects on risk are tiny. The study reports 108 regions of the genome where genetic differences affect risk of disease. This achievement is rightly being widely celebrated and reported, but what do these results really mean?

GWAS look at sites in the genome where the particular base in the DNA sequence is variable – it might sometimes be an “A”, other times a “T”, for example. There are millions of such sites in the human genome (which comprises over 3 billion bases of sequence). Each such site represents a mutation that happened some time in the distant past, which has since been inherited and spread throughout the population, while not supplanting the previous version completely. This leaves some people with one version and some with another – these different versions are thus called “common variants”. [More correctly, since we each have two copies of each chromosome, each of us carries two copies of each variable site, so the combined genotype could be AA, AT or TT, in the example above].

The idea of a GWAS is to look across the entire genome at over a million such variants for ones at higher frequency in disease cases than in controls. That difference in frequency might be very small (say, the “A” version might be seen at a frequency of 30% in cases but 27% in controls), but with such a huge sample size, even that kind of difference can be statistically significant. In epidemiological terms, the variant that is more common in cases is termed a “risk factor” – if you have it, you are statistically more likely to be in the case group than in the control group. (Just as smoking is more common in people with lung cancer than in people without, although in that case the difference in frequency is massive).
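
To make that concrete, here is a minimal sketch (using the illustrative 30% versus 27% frequencies above and the study’s sample sizes, not any real genotype data) of why such a small frequency difference is nonetheless overwhelmingly significant at this scale, even though the corresponding effect on risk is tiny:

```python
# Sketch of the statistics behind a single GWAS hit, with made-up frequencies.
from scipy.stats import chi2_contingency

n_cases, n_controls = 36989, 113075          # sample sizes from the study
freq_cases, freq_controls = 0.30, 0.27       # illustrative allele frequencies from the text

# Each person carries two alleles at the site, so we count alleles rather than people.
table = [
    [2 * n_cases * freq_cases, 2 * n_cases * (1 - freq_cases)],              # cases: "A", "T"
    [2 * n_controls * freq_controls, 2 * n_controls * (1 - freq_controls)],  # controls: "A", "T"
]

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (freq_cases / (1 - freq_cases)) / (freq_controls / (1 - freq_controls))

print(f"chi-squared = {chi2:.0f}, p = {p:.1e}")   # a vanishingly small p-value at this sample size
print(f"odds ratio  = {odds_ratio:.2f}")          # yet only a ~1.16-fold difference in odds
```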

For any individual common variant, the increased statistical risk is tiny – most increase risk by less than 1.1-fold. But the idea is that the combined risk associated with a large number of such variants could be quite large – large enough to push people into disease. Since the variants are common, each of us will carry many of them, but some people will carry more than others. This will generate a distribution of “risk variant burden” across the population. If there are 108 sites, each in two copies, then the range of that distribution could theoretically be from 0 to 216 risk variants. The actual distribution is far narrower however, with the vast majority of the population carrying somewhere between 90 and 130 risk variants (assuming the relative frequencies of the two variants are around 50:50, on average).
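
As a quick sanity check on those numbers, here is a sketch that treats the burden as a simple binomial count – 216 independent draws, each with a 50:50 chance of inheriting the risk version (an idealisation, since real allele frequencies vary from site to site and sites are not fully independent):

```python
# Distribution of "risk variant burden" under the simplifying assumptions above.
from scipy.stats import binom

n_alleles, p_risk = 2 * 108, 0.5        # 108 sites x 2 copies, ~50:50 frequencies
burden = binom(n_alleles, p_risk)

print(f"mean burden = {burden.mean():.0f}, sd = {burden.std():.1f}")               # ~108 +/- 7.3
print(f"P(90 <= burden <= 130) = {burden.cdf(130) - burden.cdf(89):.3f}")          # ~0.99
```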

One way to conceptualise the combined effects of many variants is the “liability-threshold” model, which suggests that though there is a smooth distribution of genetic burden (or liability) across the population, only those above a certain threshold become ill (say the top 1% in the case of schizophrenia). This is known as a polygenic model of risk because it assumes the causal action of a large number of genes in any individual.
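
In numbers, and assuming a standard-normal liability and a prevalence of roughly 1% (an approximation for schizophrenia), the model looks like this:

```python
# Minimal numerical sketch of the liability-threshold model described above.
from scipy.stats import norm

prevalence = 0.01                                   # assumed population risk
threshold = norm.ppf(1 - prevalence)                # ~2.33 SD above the mean liability
mean_liability_affected = norm.pdf(threshold) / prevalence   # mean of the truncated tail, ~2.67 SD

print(f"threshold: {threshold:.2f} SD above the population mean")
print(f"affected individuals lie, on average, {mean_liability_affected:.2f} SD above the mean")
```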

An alternative model views common disorders such as schizophrenia as arising mainly due to very rare mutations of large effect, but in different genes in different individuals (and with the possibility of modifying effects of other variants in the genetic background). This scenario is known as genetic heterogeneity. Many such rare, high-risk mutations are known but the ones we currently know about collectively account for less than 10% of cases of schizophrenia, e.g., here (and 15-30% of cases of autism).

So, with that as background, let’s consider what the GWAS signals mean, individually and collectively. First, GWAS signals are a bit like #Greenfieldisms: they point to a locus and they point to an increased statistical risk of disease – that is all. This is because the common variant that is interrogated is being used as a tag of wider genetic variation at that locus (a locus is just a small region of the genome). Chromosomes tend to be inherited in large chunks without too much mixing (or recombination) between the two copies present in each parent. That means that one common variant at one position will tend to be co-inherited with other common variants nearby. The signal derived from GWAS is associated with one of those (or sometimes several), but tags a lot of additional variation.

Generally, the presumption is that one of the common variants is having a causal effect and the others are merely passengers. However, there are also lots of rare mutations that come along for the ride. These are mutations that arose much more recently and that are therefore present in far fewer individuals. Though GWAS can’t see them directly, any such mutation necessarily arises on the background of a particular set of common variants (called a haplotype). Most people with that haplotype will not carry the rare mutation, but it is possible that several such mutations in the population (if they are of large effect and thus found mainly in cases) give an aggregate signal that boosts the frequency of the common haplotype in cases, resulting in a GWAS signal (a so-called “synthetic association”). Several examples of such cases have now been reported in the literature for other conditions (e.g., 1, 2, 3, 4), though it is not clear whether synthetic associations drive any of the signals in the most recent schizophrenia study.
http://www.ncbi.nlm.nih.gov/pubmed/22269335

It is striking, however, that many of the loci implicated by GWAS signals are known to sometimes carry rare mutations that dramatically increase risk of disease. Some of the 108 loci implicated contain only one gene, but some encompass many, while others have no gene in the region or even nearby. Cases where the implicated gene is clear include TCF4, CACNA1C, CACNB2, CNTN4, NLGN4X and multiple others, where rare mutations are known to cause specific genetic syndromes. Moreover, there is substantial enrichment in the GWAS loci for genes in which rare mutations have been discovered in cases with schizophrenia, autism or intellectual disability (including CACNA1I, GRIN2A, LRP1, RIMS1 and many others).

These findings strongly reinforce the validity of the GWAS results and also suggest that many of the loci identified sometimes carry rare, high-risk mutations that should be very informative for follow-up mechanistic studies. Whether the GWAS signals themselves are driven by such rare mutations in the samples under study is an open question. (Another paper just out suggests that signals from the GRM3 locus, which encodes a metabotropic glutamate receptor, may be driven by a rare variant that increases risk of mental illness generally by about 2.7-fold). But there are also many examples of loci where both rare and common variation is known to play a role in disease risk and the GWAS signal could well be driven purely by common variants with direct functional effects.

However, such effects need not be tiny in individuals, even if their overall signal of increased risk across the population is very small. We know of many examples of common variants that strongly modify the effects of rare mutations, at the same locus or at one encoding an interacting protein. In such cases, the common variant may increase risk of expression of a disorder due to a rare mutation, but essentially have no effect in most of the population who do not carry such a rare mutation. This situation is exemplified by Hirschsprung’s disease, a condition affecting innervation of the gut. It can be caused by rare mutations in any of 18 known genes. However, such mutations do not always cause disease and the range of severity is also very wide. Common variants at several of those same risk loci have been found to be much more frequent in people with rare mutations who develop disease than in those with the same mutations who remain healthy. When averaged across the population, as in a GWAS study, such effects would yield only a tiny average increase in risk, but this may reflect a large effect in a small subset of people and no effect in the majority.
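
The arithmetic of that dilution is easy to illustrate. The numbers below are invented purely for illustration (they are not taken from the Hirschsprung’s disease literature): a common variant that multiplies risk by 2.5 among the small fraction of people carrying a rare mutation, but does nothing in everyone else, shows up as only a roughly 1.1-fold average risk factor across the population:

```python
# Illustrative (made-up) numbers: a large effect in a small subset averages out
# to a tiny effect at the population level.
rare_carrier_freq = 0.005     # assume 0.5% of people carry a rare high-risk mutation
baseline_risk = 0.01          # assume 1% risk for everyone else, common variant or not

risk_with_modifier = 0.50     # assumed risk for rare-mutation carriers WITH the common variant
risk_without_modifier = 0.20  # assumed risk for rare-mutation carriers WITHOUT it

# Average risk for people with vs without the common variant, across the whole population
avg_with = rare_carrier_freq * risk_with_modifier + (1 - rare_carrier_freq) * baseline_risk
avg_without = rare_carrier_freq * risk_without_modifier + (1 - rare_carrier_freq) * baseline_risk

print(f"relative risk within rare-mutation carriers: {risk_with_modifier / risk_without_modifier:.1f}x")
print(f"relative risk averaged over the population:  {avg_with / avg_without:.2f}x")   # ~1.14x
```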

This brings us to a larger point – what do the GWAS signals tell us collectively? More specifically, should they be taken as evidence in support of a polygenic model of disease risk, where it is the collective burden of common risk variants that causes the majority of disease cases?

One way to test that is to model the variance of the “liability” to the disease – an unmeasurable parameter, but one that is assumed to be normally distributed in the population. With that and a number of other assumptions in place, one can ask how much of the variance in this trait is accounted for by the loci identified by the GWAS. The authors state that a combined risk profile score “now explains about 7% of variation on the liability scale to schizophrenia across the samples”. That is an improvement over previous studies (the first 13 loci accounted for about 3%), but certainly not as much as might have been expected under a purely polygenic model. Of course, it could just be that only a fraction of the contributing common variants have been found and that larger studies would identify more.

However, the GWAS data are also fully consistent with a more complex model of genetic heterogeneity, in which common variants interact with rare variants to determine individual risk. Population averages of their effects remain just that – statistical measures that cannot be applied to individuals. Even combining all the common variants into a risk profile score does not yield a predictive measure of risk for individuals. (One reason is that non-additive genetic interactions, which are likely to be highly important in individuals, are averaged out in population-level signals).
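
To see how weak that prediction is, here is a back-of-the-envelope sketch under the simplest possible assumptions – a purely additive liability model, ~1% prevalence, and a score explaining 7% of liability variance (ignoring exactly the kind of non-additive interactions just mentioned):

```python
# How much individual prediction does 7% of liability variance buy?
from math import sqrt
from scipy.stats import norm

prevalence = 0.01                       # assumed population risk
var_explained = 0.07                    # variance explained by the risk profile score
threshold = norm.ppf(1 - prevalence)    # liability threshold, ~2.33 SD

def risk_given_score_percentile(pct):
    """Disease risk for someone at a given percentile of the polygenic score."""
    score = norm.ppf(pct) * sqrt(var_explained)        # their score, on the liability scale
    residual_sd = sqrt(1 - var_explained)              # everything the score does not capture
    return norm.sf((threshold - score) / residual_sd)  # chance the rest pushes them over the threshold

print(f"risk at 50th percentile of score: {risk_given_score_percentile(0.50):.1%}")  # ~0.8%
print(f"risk at 99th percentile of score: {risk_given_score_percentile(0.99):.1%}")  # ~4%
```

Under these assumptions, even someone in the top 1% of the risk score distribution has only a few percent chance of developing the disorder – elevated relative to baseline, but nowhere near a diagnosis.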

So, the current study points the finger at a large set of new genes, but does not really discriminate between models of genetic architecture. The overlap between the GWAS signals and genes known to carry rare, high-risk mutations certainly suggests that the GWAS has been successful in identifying important risk loci – a tremendous advance for which the authors should be congratulated (as well as for their willingness to collaborate on this scale). This is, however, just a first step in understanding the biology of the disease. The underlying genetic heterogeneity presents a formidable challenge but also an opportunity, as individual high-risk mutations can be followed up in functional studies to elucidate some of the mechanisms through which a change in some piece of DNA can ultimately produce the particular psychological symptoms of this often-devastating disease.
