Thursday, March 21, 2013

The genetics of emergent phenotypes
Why are some brain disorders so common? Schizophrenia, autism and epilepsy each affect about 1% of the world’s population, over their lifetimes. Why are the specific phenotypes associated with those conditions so frequent? More generally, why do particular phenotypes exist at all? What constrains or determines the types of phenotypes we observe, out of all the variations we could conceive of? Why does a system like the brain fail in particular ways when the genetic program is messed with? Here, I consider how the difference between “concrete” and “emergent” properties of the brain may provide an explanation, or at least a useful conceptual framework.

There is now compelling evidence that disorders like epilepsy, schizophrenia and autism can be caused by mutations in any of a very large number of different genes (sometimes singly, sometimes in combinations). This is fundamentally changing the way we think about these disorders. It is no longer tenable to consider them as unitary categories. Instead, it is very clear that the underlying etiology is extremely heterogeneous – possibly more so than for any other human disease.

How can this fact be explained? Why is it that mutations in so many different genes (perhaps thousands) can give rise to the specific phenotypes associated with those disorders?

The normal logic of genetic analysis entails some correspondence between the phenotypes associated with mutations in specific genes and the functions of the products encoded by those genes. This connection between mutation and phenotype is one of the main reasons why experimental genetics is so powerful. For example, if we carry out a genetic screen for mutations affecting cell death in a worm, or embryonic patterning in a fruit fly, the expectation is that the genes we discover will be directly involved in those processes. That is how the molecular processes regulating cell death and embryonic patterning were discovered.

This logic can sometimes be applied to humans too – but not always. Let’s consider two genetic conditions – microcephaly and epilepsy – both affecting the brain, but in quite distinct ways.

MRI of child with microcephaly.
Microcephaly is a rare condition characterised by a small brain. In particular, the cerebral cortex is smaller than normal, due to a defect in the generation of the normal number of neurons in this brain area. It can be inherited in a simple, Mendelian fashion, due to a mutation in any one of at least six different genes. Remarkably, the proteins encoded by these genes are all involved in some aspect of cell division of neuronal progenitors. In particular, they determine whether early divisions expand the initial pool of progenitors (in the normal situation) or prematurely generate neurons (when any of these genes is mutated). 

The genes implicated in microcephaly are thus directly involved in the process affected: the generation of neurons in the cerebral cortex. It is not too inaccurate to say that that is what these genes are “for”.

This is not the case for epilepsy. It too can be inherited due to specific mutations, but there are many, many more of them and the known genes involved have diverse functions: from controlling cell migration or specifying synaptic connectivity to encoding ion channels or metabolic enzymes. These are not genes “for” regulating the spatial and temporal dynamics of electrical activity in neuronal networks.

Put another way, the reason that we see microcephaly as a phenotype is that there are genes that control the process we are looking at – generation of neurons in the cortex. The existence of that phenotype thus reflects a property of the genetic system. In contrast, the generation of seizures does not relate in any meaningful way to the genetic system – instead, it is an emergent property of the neural system. We see that phenotype not because there are many genes directly controlling that process, but because it is a state that the brain tends to get into, in response to a wide diversity of insults. (Indeed, seizures are one of the symptoms sometimes associated with microcephaly).

I have used the term “emergent” twice now without defining it and had better do so before I get pilloried by those allergic to the word. There is good reason for a negative reaction, as the term is fraught with multiple meanings and seemingly mystical connotations.

Concepts of emergence range from the mundane (the whole is more than the sum of its parts) to the magical (where the behaviour of a system is not reducible to or predictable from the state and interactions of all its components, and where new properties emerge apparently “for free”). In fact, it is possible to allow for new principles and properties at higher levels without invoking such mystical concepts or over-riding the fundamental laws of physics.

Nature is organised hierarchically into systems at different levels. Subatomic particles are arranged into atoms, atoms into molecules, molecules into cells, cells into tissues and organs, organs into organisms, and individual organisms into collectives and societies. At each level, qualitatively novel properties arise from the collective action of the components at the level below. Emergence refers to the idea that many of these properties are highly unexpected and extremely difficult to predict (though not necessarily impossible in principle). One objection to the term is that it is therefore essentially a statement about us (about our level of understanding) and not about the system itself. I think it goes further than that, however, and does denote some principles of nature that actually exist in the world, regardless of whether we understand them or not.

While the emergent behaviour of a system is reducible to the microstates of the components at the level below and the fundamental physical laws controlling them, the emergent properties are not deducible purely from those laws. To put it another way, the microstates of a system are sufficient to explain the properties or macrostates observed at any moment but are not sufficient to answer another question – why those properties exist. Why is it that those are the properties observed in that particular system, or that tend to be observed across diverse systems? These properties arise because additional laws or principles apply at the higher level, which constrain the arrangements of the components at the lower level to some purpose.

Many of these principles of functional organisation are abstract and apply to diverse systems – principles of network organisation, cybernetics and control theory, information content, storage and processing, and many others. All of these principles constrain the architecture of a system in a way that ensures its optimality for some function.

In artificial design of complex machines, these engineering principles are incorporated to ensure that the parts are arranged so as to produce the desired functions of the system as a whole. In living organisms, it is natural selection that does this work, leading to the illusion of design (or teleonomy), apparent only in hindsight. System architectures that produce useful emergent properties at the higher level (i.e., the phenotype of the organism, which is all that selection can see) are retained and those that do not are removed. In this way, the abstract engineering principles constrain the functional organisation of the components of the system – there are only certain types of arrangements that can generate specific functions. This is top-down causation, but over a vastly different timescale from the mystical, moment-to-moment versions proposed by some emergence theorists.

Let’s move from the abstract to a more specific example and think about how these issues relate to the kinds of phenotypes we see when a system is challenged. Consider a complicated, highly specified system like a fighter jet. It has many different parts – engines, turbines, fuselage, flaps, wheels, weapons, etc. – each with multiple subcomponents and each with a specific job to do. If we were examining multiple designs for a jet, we might consider various specs for, say, the turbines. We might vary the number of blades, their size, angle, etc. These are all concrete properties of the system and there are a finite number of them.

Source: Newcastle University
Contrast that with an emergent property of the jet, something like aerodynamic stability, fuel efficiency or even something harder to define, like “performance”. These properties depend on the specs of all the individual components of the plane, but also, more importantly, on their functional organisation and the interactions between them (and the interactions of the whole system with the environment). A property like performance is not easily linked to any specific component – instead it emerges in a highly non-linear fashion from the specs of all of the components of the system and how they are combined.

If you randomly broke one component in the jet, it is thus much more likely that you would affect performance than that you would affect the turbines specifically. The bits of the turbines are not “for performance”, per se – they are for whatever job they do in the turbine. There aren’t any bits of the jet that you would say are “for performance”, in fact, but all of them can affect performance.

The kinds of functions affected by disorders like epilepsy, autism or schizophrenia are like performance. For epilepsy, it is the highest-order properties of neural systems – the temporal and spatial dynamics of electrical activity. For schizophrenia and autism, it is functions like perception, cognition, sense of self, executive planning, social cognition and orderly thought – the most sophisticated and integrative functions of the human mind. These rely on the intact functioning of neural microcircuits in many different areas and the coordinated actions of distributed brain systems. Evolution has crafted a complex and powerful machine with remarkable capabilities, but those capabilities are consequently vulnerable to attack on any of a very large number of components.

Thinking about these phenotypes in this way thus provides an explanation for why epilepsy and schizophrenia are so much more common than microcephaly. The mutational target – the number of genes in which mutations can cause a particular phenotype – is much, much bigger. (This obviates the need to invoke some kind of counter-balancing benefit of the mutations that cause these disorders to explain why they persist at a high frequency. The individual causal mutations do not persist – they are strongly selected against, but new mutations arise all the time. Under this mutation-selection balance model, the prevalence of a disorder is determined by an equilibrium between the mutational target size and the strength of selection.)
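The mutation-selection balance model can be made quantitative. As a minimal sketch of my own (the mutation rates and selection coefficients below are invented for illustration, not estimates for any real disorder): a dominant deleterious allele under this model sits at an equilibrium frequency of roughly μ/s, so expected prevalence scales with the size of the mutational target.

```python
# Illustrative sketch: under mutation-selection balance, a dominant
# deleterious allele reaches an equilibrium frequency of roughly mu / s,
# where mu is the per-gene mutation rate and s the selection coefficient.
# Summed across a mutational target of n genes, prevalence scales with n.

def equilibrium_prevalence(n_genes, mu_per_gene, s):
    """Approximate prevalence as the summed equilibrium frequencies of
    dominant-acting mutations across all genes in the target."""
    return n_genes * mu_per_gene / s

# Hypothetical parameters, for illustration only:
small_target = equilibrium_prevalence(n_genes=10, mu_per_gene=1e-6, s=0.5)
large_target = equilibrium_prevalence(n_genes=1000, mu_per_gene=1e-6, s=0.5)
print(small_target)  # rare, microcephaly-like
print(large_target)  # two orders of magnitude more common
```

The point is not the particular numbers but the scaling: with selection strength held constant, a hundred-fold bigger target gives a hundred-fold higher prevalence.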

But this perspective does not explain everything that needs explaining. These conditions do not manifest simply as a general decrease in brain “performance”. It is not just that normal brain functions are somewhat degraded. Instead, qualitatively new states or phenotypes emerge. Psychosis is probably the most striking example – psychiatrists call the hallucinations and delusions that characterise psychosis “positive symptoms”, reflecting the fact that they are a novel, additional manifestation, not just a decrease in the function of specific mental faculties (as with the negative symptoms, such as a decrease in working memory).

Why does this specific, qualitatively novel state arise as a consequence of so many distinct mutations? This is where our fighter jet runs out of steam, as a (now mixed) metaphor. The problem with that metaphor is that fighter jets are designed and built from a blueprint. Parts of the blueprint correspond to parts of the jet and their arrangement is also specified directly on the blueprint.

This is not at all the case for the anatomy of the brain. The genome is not a blueprint – there are no parts of the DNA sequence that correspond to parts of the brain. Instead, the structure of the brain emerges through epigenesis – the execution of the developmental algorithms encoded in the genome, which direct the unfolding of the organism. (The idea of epigenesis goes back to Aristotle; it contrasts with pre-formationism – the notion that the fertilised egg already contains within it a teeny-weeny person, with all its bits in place, which simply grows over the period of gestation.)

The ultimate phenotype of an organism is thus emergent in the more common sense of that word – it is something that arises over time. This emphasises the need to consider developmental trajectories when trying to understand the highly heterogeneous etiology of these disorders.

Modified from: Kitano, 2004
Complex, dynamic systems tend to gravitate towards certain stable patterns of activity and interactions in the network. Such patterns are called “basins of attraction” or “attractors”, for short. You can think about them like hollows in a flat sheet, with the current network state represented by the position of a ball rolling over this landscape. The flat bits of this landscape represent unstable, fluid states that are likely to change. The hollows represent more stable states – particular patterns of activity of the network that are easy to get into and hard to get out of. Generally speaking, the deepest such basin will represent the typical pattern of brain physiology. It takes a big push to get the ball up and out of this basin. But there are other basins – alternative stable states – and the pathophysiological state we recognise as psychosis may be one of them.
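The ball-on-a-landscape picture corresponds to a simple dynamical system. Here is a minimal sketch of my own (the potential and all parameters are invented for illustration): noisy gradient descent on a double-well landscape, where the ball settles into one of two basins.

```python
import random

# A toy sketch of basins of attraction: the double-well potential
# V(x) = (x**2 - 1)**2 has two stable states, at x = -1 and x = +1,
# separated by an unstable ridge at x = 0. A ball rolling downhill,
# jostled by a little noise, settles into one basin or the other.

def dV(x):
    """Slope of the landscape V(x) = (x**2 - 1)**2."""
    return 4 * x * (x**2 - 1)

def settle(x0, steps=5000, dt=0.01, noise=0.02, seed=None):
    """Roll a ball from x0 downhill, with random jostling at each step."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += -dV(x) * dt + noise * rng.gauss(0, 1)
    return x

# From a start near the unstable ridge, the ball ends up near one of
# the two stable states, x = -1 or x = +1:
print(settle(x0=0.05, seed=42))
```

Which basin the ball lands in from a start near the ridge depends on the noise, not just the starting point – a theme that recurs below when chance in development enters the picture.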

Such alternate states may exist as by-products of the functional organisation of the system. The system architecture will have been selected to robustly generate a particular functional outcome. However, when individual components are interfered with, new functional states may emerge – ones that are unexpected and that the system has not been selected to produce. They arise instead as an emergent property of the broken system, as a specific failure mode.

It is vital to understand not just the nature of such states, but the trajectories that dynamic systems (in this case organisms) follow to get into them. (In dynamic systems, the relations between components of the system are not fixed but change over time). If we take our flat sheet and tilt it from one end, turning it into a board with channels in it, rather than hollows, then we can represent the path of a developing organism through phenotype space, over time. 

This is Conrad Waddington’s famous “epigenetic landscape” – a powerful metaphor for understanding how dynamic systems can be channelled into specific, stable states. The shape of the landscape will be determined by an individual’s genotype – some people may have much deeper channels heading towards typical brain physiology while others may have a greater chance of heading towards particular pathophysiological states, like psychosis or epilepsy.

One reason why psychosis and epilepsy may be common states is that they can reinforce themselves, through altering the relations of components of the system. In a process known as “kindling”, seizures induce changes in neuronal networks that render them increasingly excitable and more likely to undergo further seizures. A similar dynamic process, involving homeostatic processes in dopaminergic signaling pathways, may be involved in psychosis. These homeostatic mechanisms in the developing brain can, under certain circumstances, be maladaptive, pushing the network state into a particular pathophysiological pattern, in response to diverse primary insults.
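The kindling dynamic is, at its core, a positive feedback loop, which can be caricatured in a few lines. This is a toy sketch of my own, not a biological model; all parameters are invented.

```python
import random

# Toy model of kindling: each "seizure" leaves the network more
# excitable, raising the probability of further seizures - a
# self-reinforcing loop that ratchets susceptibility upward.

def simulate_kindling(steps=1000, p_seizure=0.01, boost=0.005, seed=0):
    rng = random.Random(seed)
    p = p_seizure
    seizures = 0
    for _ in range(steps):
        if rng.random() < p:
            seizures += 1
            p = min(1.0, p + boost)  # seizure raises future susceptibility
    return seizures, p

count, final_p = simulate_kindling()
print(count, final_p)  # number of seizures, and final susceptibility
```

Because each event makes the next more likely, susceptibility tends to grow exponentially once the process gets going – the system digs its own basin deeper.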

Finally, a developmental perspective can also provide an explanation for the high levels of phenotypic variability observed with mutations conferring risk for psychiatric disorders. Such mutations can manifest in different ways, statistically increasing risk for multiple conditions. A person’s risk for developing schizophrenia is statistically much higher if they have a close relative with the condition, but their risks of developing autism or epilepsy (or bipolar disorder or depression or attention-deficit hyperactivity disorder) are all also higher. Even monozygotic (“identical”) twins are often not concordant for these clinical diagnoses. So, while genetics can lead to a much greater susceptibility to these conditions, whether a specific individual actually develops them depends also on other factors.

One of those factors, often overlooked, is intrinsic developmental variation. The development of the brain is inherently probabilistic, not deterministic (more like a recipe than a blueprint). This is evident at the level of individual cells, nerve fibres and synapses and can manifest at the macro level as variation in specific traits or symptoms in individuals with the exact same genotype.

Waddington’s landscape can also visualise this important role of chance in determining an individual’s eventual phenotypic outcome. If you roll a marble down this board multiple times, you will get multiple outcomes, essentially by chance (due to thermodynamic noise at the molecular level, affecting gene expression, protein interactions, etc.).
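That role of chance is easy to demonstrate in a trivial simulation (my own sketch, with arbitrary numbers): every run below uses identical parameters – the same “genotype” shaping the tilt of the landscape – yet intrinsic noise alone sends different runs down different channels.

```python
import random

# Toy model of developmental noise: each trajectory is a biased random
# walk. The bias stands in for the genotype's tilt on the landscape;
# the noise stands in for intrinsic molecular-level variation.

def develop(bias=0.005, noise=0.1, steps=200, rng=random):
    position = 0.0
    for _ in range(steps):
        position += bias + noise * rng.gauss(0, 1)
    return "typical" if position > 0 else "atypical"

rng = random.Random(0)
outcomes = [develop(rng=rng) for _ in range(1000)]
# A minority of runs end up "atypical", despite identical parameters:
print(outcomes.count("atypical"))
```

Same genotype, a distribution of outcomes: exactly the pattern seen in monozygotic twins who are discordant for these diagnoses.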

For a concrete property such as brain size, the amount of noise affecting the phenotype will be low, as a small number of components and processes are involved. The correspondence between genotype and phenotype will therefore be quite linear for concrete properties. In contrast, emergent properties that depend on large numbers of components will be more subject to noise and the relationship between genotype and phenotype will be far less linear. This explains why mutations causing psychiatric disorders show lower penetrance and higher variability in phenotypic expression – this is the predicted pattern for emergent properties.
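A quick numerical sketch (my own, with arbitrary numbers) illustrates the contrast: give every component the same small amount of noise, and a trait that depends linearly on a few components varies far less than one that depends nonlinearly on many.

```python
import random
import statistics

rng = random.Random(0)

def concrete_trait(n=3, noise=0.01):
    """A 'concrete' property: the average of a few noisy components."""
    return sum(1 + noise * rng.gauss(0, 1) for _ in range(n)) / n

def emergent_trait(n=100, noise=0.01):
    """An 'emergent' property: a nonlinear (multiplicative) function of
    many noisy components, amplifying the same per-component noise."""
    value = 1.0
    for _ in range(n):
        value *= 1 + noise * rng.gauss(0, 1)
    return value

def cv(values):
    """Coefficient of variation: spread relative to the mean."""
    return statistics.stdev(values) / statistics.mean(values)

print(cv([concrete_trait() for _ in range(500)]))  # small
print(cv([emergent_trait() for _ in range(500)]))  # much larger
```

The per-component noise is identical in both cases; only the number of components and the nonlinearity of their combination differ, and that alone produces the wider, less predictable spread.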

To sum up, thinking about these kinds of disorders as affecting emergent properties can explain why they are common, why the genes responsible are so diverse, why their products are only distally and indirectly related to the processes affected by the clinical symptoms and why the phenotypic outcomes are inherently variable.

Wednesday, March 13, 2013

Genes, brains and human nature: the joys and challenges of writing about science for non-scientists.

This post first appeared as part of a SpotOn NYC special event on Communication and the Brain (March 2013).

When I started the Wiring the Brain blog a few years ago, it was with the intention of writing mainly for students, scientists and clinicians in the fields of genetics and neuroscience. Many of the posts deal with advances in our scientific understanding of the causes of neurodevelopmental and psychiatric disorders. Perhaps because of that, or just due to general interest in the broader themes, the blog has also become widely read by non-specialists.

This presents some exciting opportunities to write in a different way and to convey the excitement of the field of neurogenetics to the general public, but also raises some particular challenges. The biggest difference I have found is that the assumption of a shared, global perspective is not always valid. When writing for scientific colleagues there is an implicit expectation of a common starting point – not just a background of specific knowledge about a subject, but a foundation of wider shared beliefs that do not need to be articulated explicitly.

In the context of the themes of the Wiring the Brain blog, these include: that the diversity of life arose through evolution by natural selection; that human genes and human brains are not that different from animal genes and animal brains; that human minds emerge from the activity of human brains and nothing else; that variation in genes can affect behaviour; that studying the components of a system is a good way to make progress in understanding the whole; and most fundamentally, that the scientific method is the best way we have of finding things out and not just one of many “ways of knowing”.

Being challenged on some of those positions has been an eye-opener. It makes you look for the evidence that supports them. For evolution, that is pretty much all the observations ever made in the field of biology. In that circumstance, the job becomes marshalling that evidence to convince someone who may not have heard it all laid out before.

On another topic, this prompted one of the few blogposts I have written that veered into philosophical territory, entitled “On discovering you’re an android”. This presented the overwhelming evidence for neuroscientific materialism, the position that the mind is what the brain does, with no need to invoke any immaterial or supernatural stuff. This theory is both counter-intuitive and highly discomfiting. After all, it doesn’t feel like you are an android (though one made out of meat). One’s “self” feels pretty real and stable and the idea that it emerges from and relies on the continued activity of the neurons in your brain can leave one feeling existentially precarious. It is human nature to recoil at this idea, but that’s what all the science says and science wins.

Or does it? Not everyone would accept that assertion. Many argue that science is just another belief system with no special claim to validity. Part of the point, for me, of writing the blog is to illustrate how science works, how we accumulate evidence, how current paradigms can be challenged, modified or even overturned by new data. That is, in fact, the polar opposite of a belief system. Also, it works!

A related claim, and a common enough reaction to writings on the subject of human nature is that scientists like myself are just reducing human existence to mindless biochemistry, even down to physics. This charge of “Reductionism!”, which comes from psychologists as much as from members of the general public, misses an important distinction between methodological and theoretical reductionism. Yes, geneticists approach a problem by looking for components of a system that can vary in a way that affects the performance of the system. That is an experimental approach that has proven hugely powerful, allowing one to identify important parts of a system and ultimately analyse how they function together to mediate that system’s functions. If the system is a human being, then that necessarily entails understanding it in the context of its relations to other human beings.

Changes in single genes can have large effects on behavioural traits in humans (for example, see here on genetic influences on impulsivity). This does not mean that the behaviour in question is mediated by a single gene. Nor does it mean that human behaviour is determined by genes – it simply says that variation in that component of the system can contribute to variation in patterns of behaviour over time. But no matter how precisely that sentence is worded, it is important to realise, as a writer, that it can still be misconstrued by people who are “reading between the lines”, or extrapolated to infer a much broader claim consistent with a reader’s preconceived notions of what scientists think.

That view may have been informed by the shorthand that many scientists and journalists use about “genes for this” and “genes for that”, which does indeed sound very deterministic and reductionist. The absurd hype in many press releases, driven by pressures for the next grant, adds greatly to this problem. (There’s no shortage of that kind of thing in coverage of neuroscience either, now affectionately known as “neurobollocks”). This kind of wording is sloppy, sensationalist and deeply wrong at a conceptual level. Still, “Scientists discover one of many factors that contributes to people’s behavioural tendencies, which express themselves over time in the context of each individual’s life experiences” does not make a good headline.

It is no wonder, then, that readers often conclude that a larger claim is being made – exposure to relentless hype in science coverage fully justifies that expectation. I occasionally get comments starting, “So, what you’re really saying is…”, which continue to say something I really wasn’t saying. Anticipating and pre-empting these kinds of over-extrapolations can be an important part of this kind of writing.

Another challenge, especially in writing about the causes of clinical disorders such as autism or schizophrenia, is that these issues are necessarily fraught for people suffering from these conditions or with children who are affected. Many have strongly held views about the causes of their or their child’s particular condition, sometimes unfortunately founded on misinformation. The scientific hoax linking autism with vaccines has been incredibly hard to dislodge from the public’s consciousness. It is almost impossible to combat moving personal anecdotes with dry statistical data showing no association. The former are much more psychologically available – we are cognitively wired to learn from specific instances of apparent correlations and very poorly adapted for statistical thinking. The apparent “autism epidemic” reinforces the notion of some environmental causes, though there is clear evidence that this reflects only an increase in awareness and diagnosis, not an increase in the true underlying rate of the condition. Showing how such data can be evaluated, scientifically, can go some way to equipping people with the tools to distinguish solid claims from one-off observations and correlation from causation.

My experience of writing for scientists and non-scientists alike has been very enjoyable and stimulating and I have learned a lot from it. I think it has made me a better teacher and a more thoughtful researcher. I have been struck in particular by both the tremendous interest in science among the general public and by how poorly it is served by traditional media. Blogging provides an exciting opportunity for scientists to help fill that void directly.

Photo Credit: Oliver Burston, Wellcome Images - Position of the brain inside the head
Digital artwork/Computer graphic 2004 Collection: Wellcome Images
Copyrighted work available under Creative Commons by-nc-nd 2.0 UK: England & Wales.