Tuesday, December 18, 2012

Genetic entropy and the human intellect (or why we're not getting dumber)

This is a modified version of a letter published in Trends in Genetics: Kevin J. Mitchell (2012). Genetic entropy and the human intellect. Trends in Genetics, 14th December 2012.

Two articles by Gerald Crabtree float the notion that we, as a species, are gradually declining in average intellect, due to the accumulation of mutations that deleteriously affect brain development or function [1, 2]. The observations that prompted this view seem to be: (i) intellectual disability can be caused by mutations in any one of a very large number of genes; and (ii) de novo mutations arise at a low but steady rate in every new egg or sperm. He further proposes that (iii) genes involved in brain development or function are especially vulnerable to the effects of such mutations. Considered in isolation, these could reasonably lead to the conclusion that mutations reducing intelligence must be constantly accumulating in the human gene pool. Thankfully, these factors do not act in isolation.

If we, as a species, were simply constantly accumulating new mutations, then one would predict the gradual degradation of every aspect of fitness over time, not just intelligence. Indeed, life could simply not be sustained over evolutionary time in the face of such genetic entropy. Fortunately – for the species, though not for all individual members – natural selection is an attentive minder.

Analyses of whole-genome sequences from large numbers of individuals demonstrate an “excess” of rare or very rare mutations [3, 4]. That is, mutations that might otherwise be expected to be at higher frequency are actually observed only at low frequency. The strong inference is that selection is acting, extremely efficiently, on many mutations in the population to keep them at a very low frequency.
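This signature of efficient selection follows from classical mutation-selection balance, which can be sketched in a few lines (a toy calculation with illustrative numbers of my own, not figures from these studies):

```python
# Mutation-selection balance: a toy calculation (illustrative numbers,
# not estimates from the studies cited). For a dominant deleterious
# mutation arising at rate mu per gamete, with fitness cost s in
# carriers, the equilibrium frequency is roughly mu / s -- new copies
# are removed by selection about as fast as mutation creates them.

def equilibrium_frequency(mu, s):
    """Approximate equilibrium frequency of a dominant deleterious allele."""
    return mu / s

# The stronger the selection, the rarer the allele stays:
strong = equilibrium_frequency(mu=1e-5, s=0.5)   # severe fitness cost
weak = equilibrium_frequency(mu=1e-5, s=0.01)    # mild fitness cost
# The severely deleterious allele is held 50x rarer at equilibrium,
# which is exactly the "excess" of rare variants the sequencing shows.
```

The point of the sketch is simply that a steady input of new mutations does not imply accumulation: frequency stabilises where selection balances mutation.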

One of the key misconceptions in the Crabtree articles is that mutations happen to “us”, as a species. His back-of-the-envelope calculations lead him to the following conclusions: “Every 20-50 generations we should sustain a mutation in one copy of one of our many ID [intellectual deficiency] genes. In the past 3000 years then (~120 generations), each of us should have accumulated at the very least 2.5-6 mutations in ID genes”.
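For concreteness, the quoted back-of-the-envelope figures can be reproduced in a couple of lines (using Crabtree's own assumed rates, as quoted above, not endorsing them):

```python
# Reproducing Crabtree's arithmetic as quoted: one mutation in an ID
# gene per 20-50 generations, and ~120 generations in 3000 years
# (assuming ~25 years per generation).

generations = 3000 / 25      # -> 120 generations
low = generations / 50       # one mutation per 50 generations
high = generations / 20      # one mutation per 20 generations
print(f"{low:.1f}-{high:.1f} mutations")  # prints "2.4-6.0 mutations"
```

The arithmetic itself is fine; as the next paragraphs argue, the fallacy lies in treating the species, rather than individuals, as the thing that "accumulates" these mutations.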

The loose phrasing of these sentences reveals a fundamental underlying fallacy. “We” have not sustained mutations in “our” ID genes, and “each of us” has not accumulated anything over the past 3000 years, having only existed for a fraction of that. Mutations arise in individuals, not populations. Nor does it matter that there are many thousands of genes involved in the developmental systems that generate a well-functioning human brain – selection can very effectively act, in individuals, on new mutations that impair these systems.

Mutations causing intellectual disability dramatically impair fitness, explaining why so many cases are caused by de novo mutations – the effects are often too severe for them to be inherited [5]. Furthermore, this selective pressure extends into the normal range of intelligence, as described by Deary: “One standard deviation advantage in intelligence was associated with 24% lower risk of death over a follow-up range of 17 to 69 years… The range of causes of death with which intelligence is significantly associated… include deaths from cardiovascular disease, suicide, homicide, and accidents” [6].

A more recent study of over 1 million Swedish men found that lower IQ in early adulthood was also associated with increased risk of unintentional injury. “After adjusting for confounding variables, lower IQ scores were associated with an elevated risk of any unintentional injury (hazard ratio per standard deviation decrease in IQ: 1.15), and of cause-specific injuries other than drowning (poisoning (1.53), fire (1.36), road traffic accidents (1.25), medical complications (1.20), and falling (1.17)). These gradients were stepwise across the full IQ range”. None of that sounds good.
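Because these are hazard ratios per standard deviation, they compound multiplicatively across the IQ range under the usual proportional-hazards interpretation. A quick sketch (my own arithmetic on the quoted ratios, assuming that multiplicative reading holds):

```python
# Hazard ratios quoted per SD decrease in IQ compound multiplicatively
# under a proportional-hazards model, so the gradient across the full
# range is steeper than the single-SD figures might suggest.

def hazard_ratio(per_sd_hr, sds_below_mean):
    """Relative hazard for someone n SDs below the mean, versus the mean."""
    return per_sd_hr ** sds_below_mean

print(round(hazard_ratio(1.15, 2), 2))  # any unintentional injury, 2 SD below: 1.32
print(round(hazard_ratio(1.53, 2), 2))  # poisoning, 2 SD below: 2.34
```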

Crabtree suggests, however, that brains and intelligence are special cases when it comes to the effects of genetic variation and natural selection. First, he argues that ID genes are members of a chain, where every link is fragile, rather than of a robust network. This view is mistaken, however, as it ignores all the genes in which mutations do not cause ID – this is the robust network in which ID genes are embedded. He also cites several studies reporting high rates of retrotransposition (jumping around of mobile DNA elements derived from retroviruses) and aneuploidy (change in chromosome number) in neurons in human brains. He argues that these processes of somatic mutation would make brain cells especially susceptible to loss of heterozygosity, as an inherited mutation in one copy of a gene might be followed by loss of the remaining functional copy in many cells in the brain.

If these reports are accurate, such processes would indeed exacerbate the phenotypic consequences of germline mutations, but this would only make them even more visible to selection. However, it seems unlikely, a priori, that these mechanisms play an important role. First, it would seem bizarre that evolution would go to such lengths to craft a finely honed human genome over millions of years only to let all hell break loose in what Woody Allen calls his second favourite organ. One would predict, in fact, that if such mechanisms really prevailed we would all be riddled with brain cancer. In addition, if these processes had a large effect on intelligence in individuals, this would dramatically reduce the heritability of the trait (which estimates the contribution of only inherited genetic variants to variance in the trait). The fact that the heritability of IQ is extremely high (estimated between ~0.7 and 0.8) suggests these are not important mechanisms [6]. This view is directly reinforced by a recent study by Chris Walsh and colleagues, who, using the more direct method of sequencing the entire genomes of hundreds of individual human neurons, found vanishingly low rates of retrotransposition and aneuploidy [7].

Crabtree additionally suggests that modern societies shelter humans from the full scrutiny of natural selection, permitting dullards to thrive and deleterious mutations to accumulate. He speculates that high intelligence would have been more important in hunter-gatherer societies than in more modern societies that arose with high-density living. No evidence is offered for this idea, which contradicts models suggesting just the opposite – that the complexities of social interactions in human societies were actually a main driver of increasing intelligence [8]. Indeed, over the past millennium at least, there is evidence of a strong association between economic success (itself correlated with intelligence) and number of surviving children, suggesting selection on intelligence at least up until very recent times. In contrast, number of offspring in contemporary hunter-gatherer societies has been linked more to aggression and physical prowess [9].

Arguments that reduced intelligence does not impair fitness in modern societies can thus be directly refuted. But there is another way to think about this association, which considers intelligence from a very different angle [10]. Rather than a dedicated cognitive faculty affected by variation in genes specifically “for intelligence”, or, conversely, degraded by mutations in genes “for intellectual disability”, intelligence may actually be a non-specific indicator of general fitness. In this scenario, the general load of deleterious mutations in an individual cumulatively impairs phenotypic robustness or developmental stability – the ability of the genome to direct a robust program of development, including development of the brain. Reduced developmental stability will affect multiple physiological parameters, intelligence being just one of them [11].

This model is supported by observed correlations between intelligence and measures of developmental stability, such as minor physical anomalies and fluctuating asymmetry (a more robust developmental program generating a more symmetric organism) [12]. Intelligence is also correlated with diverse physical and mental health outcomes, from cardiovascular to psychiatric disease [6]. Under this model, intelligence gets a free ride. It is maintained not by selection on the trait itself, but on the coat-tails of selection against mutational load generally.

Whether causally or as a correlated indicator, intelligence is thus strongly associated with evolutionary fitness, even in current societies. The threat posed by new mutations to the intellect of the species is therefore kept in check by the constant vigilance of selection. Thus, despite ready counter-examples from nightly newscasts, there is no scientific reason to think we humans are on an inevitable genetic trajectory towards idiocy.


1 Crabtree, G.R. (2012) Our fragile intellect. Part II. Trends Genet
2 Crabtree, G.R. (2012) Our fragile intellect. Part I. Trends Genet
3 Tennessen, J.A., et al. (2012) Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science 337, 64-69
4 Abecasis, G.R., et al. (2012) An integrated map of genetic variation from 1,092 human genomes. Nature 491, 56-65
5 Ku, C.S., et al. (2012) A new paradigm emerges from the study of de novo mutations in the context of neurodevelopmental disease. Mol Psychiatry
6 Deary, I.J. (2012) Intelligence. Annu Rev Psychol 63, 453-482
7 Evrony, G.D., et al. (2012) Single-neuron sequencing analysis of L1 retrotransposition and somatic mutation in the human brain. Cell 151, 483-496
8 Pinker, S. (2010) The cognitive niche: coevolution of intelligence, sociality, and language. Proc Natl Acad Sci U S A 107 Suppl 2, 8993-8999
11 Yeo, R.A., et al. (2007) Developmental instability and individual variation in brain development: implications for the origin of neurodevelopmental disorders. Current Directions in Psychological Science 16, 245-249
12 Banks, G.C., et al. (2010) Smarter people are (a bit) more symmetrical: A meta-analysis of the relationship between intelligence and fluctuating asymmetry. Intelligence 38, 393-40

Wednesday, December 12, 2012

Do you see what I see?

An enduring question in philosophy and neuroscience is whether any individual’s subjective perceptual experiences are the same as those of other people. Do you experience a particular shade of red the same way I do? We can both point to something in the outside world and agree that it’s red, based on our both having learned that things causing that perceptual experience are called “red”. But whether the internal subjective experience of that percept is really the same is almost impossible to tell.

There are some exceptions, of course, where there are clear differences between people’s perceptions. Colour blindness is the most obvious, where individuals clearly do not experience visual stimuli in the same way as non-colourblind people. This can be contrasted with the experience of people who are tetrachromatic – who can distinguish between a greater number of colours, due to expression of a fourth opsin gene variant. Conditions like face blindness and dyslexia may involve difficulties in higher-order processing of specific categories of visual inputs. And synaesthesia provides a striking example of a difference in subjective perceptual experience, where certain stimuli (such as sounds, musical notes, graphemes, odours or many other inducers) are accompanied by an extra visual percept (or a percept or association in some other modality).

But what about more general experience? Do people without such striking conditions exhibit stable individual differences in how they see things? Geraint Rees and colleagues have done some fascinating work to show that they do, and have linked such differences in subjective experience to differences in the size of the visual cortex.

They have used a number of visual illusions as tools to quantify individuals’ subjective experience. One of these is the well-known Ebbinghaus illusion, where a circle seems to differ in size when surrounded by either bigger or smaller circles. (Even though you know it’s the same size, it’s almost impossible to see it that way.) Rees and colleagues assayed how susceptible people were to this illusion by asking them to match the perceived size of the internal circle with one of a set of circles that really did vary in size.

They then used functional neuroimaging to map the spatial extent of the primary visual cortex (area V1) in these people. This is the first region of the cerebral cortex that receives visual information. This information is conveyed by direct connections from the dorsal lateral geniculate nucleus (dLGN), the visual part of the thalamus, which itself receives inputs from the retina. The important thing about the projections of nerve fibres from the retina to the dLGN and from there to area V1, is that they form an orderly map. Neurons that are next to each other in the retina project to target neurons that are next to each other in the dLGN and so on, up to V1. Since the visual world is itself mapped by the lens across the two-dimensional surface of the retina, this means that it is also mapped across the surface of V1.

Rees and colleagues took advantage of this feature to map the extent of V1 using functional magnetic resonance imaging (fMRI). By moving a stimulus across the visual field one gets an orderly response of neurons from different parts of V1, until a point is reached at which the responses reverse – this is the start of the second visual area, V2.

Remarkably, the strength of the visual illusion (and of another one called the Ponzo illusion) experienced by individuals correlated strongly (and negatively) with the size of their V1. That is, individuals with a smaller V1 experienced the illusion more strongly – they were the least accurate in judging the true size of the inner circle. Put another way, their perception of the inner circle was more affected by the nearby outer circles. This suggests a possible explanation for this effect.

Neurons in V1 receive inputs from the dLGN but also engage in lateral interactions with nearby V1 neurons. These integrate responses from neighbouring visual fields and help sharpen response to areas of higher contrast, such as edges of objects. If the visual world is projected across a physically smaller sheet of neurons, then the responses of neurons in one part may be more affected by neighbouring neurons responding to nearby visual stimuli (the outer circles in this example). Conversely, a larger V1 could mean that each neuron integrates across a smaller visual field, increasing visual resolution generally and reducing responsiveness to the Ebbinghaus illusion.
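The geometry of this argument can be made concrete with a toy magnification calculation (my own illustration; the numbers, and the assumption of a fixed lateral reach on the cortical sheet, are not measurements):

```python
# Toy model of the idea above: if lateral interactions span a roughly
# fixed distance on the cortical sheet, the same reach covers a larger
# slice of the visual field in a smaller V1, so neighbouring stimuli
# (the outer circles of the Ebbinghaus figure) intrude more.
# All numbers are illustrative assumptions.

def lateral_reach_degrees(field_deg, v1_mm, reach_mm=2.0):
    """Visual angle covered by a fixed lateral reach on the cortical map."""
    mm_per_degree = v1_mm / field_deg   # crude linear magnification factor
    return reach_mm / mm_per_degree

# The same 20-degree patch of visual field, mapped onto a smaller
# versus a larger V1:
small_v1 = lateral_reach_degrees(field_deg=20, v1_mm=40)  # 1.0 degree of intrusion
large_v1 = lateral_reach_degrees(field_deg=20, v1_mm=80)  # 0.5 degrees of intrusion
```

On this caricature, halving the size of V1 doubles the visual angle over which lateral interactions integrate, which is the direction of the correlation Rees and colleagues observed.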

That makes a pretty neat explanation of that effect (probably too neat and simple, but a good working hypothesis), but leads us on to another question. How do differences in the size of V1 come about? How is the spatial extent of the primary visual cortex determined? There is considerable variation in this parameter across individuals, as assessed by neuroimaging or by post mortem cytoarchitecture (as in the diagram, showing the extent of V1, labelled as 17, and of V2, labelled as 18, in eight individuals). Is the extent of V1 genetically determined or more dependent on experience?

The heritability of the extent of V1 itself has not been directly studied, to my knowledge, but the surface area of the occipital lobe (encompassing V1 and other visual areas) is moderately heritable (h2 between 0.31 and 0.64 in one study). This is supported by a more recent twin study, which used genetic correlations to parcellate the brain and also found that the surface area of the entire occipital lobe is heritable, largely independently of other regions of the brain. (Another reason to expect the extent of V1 to be heritable is that it is very highly correlated (r = 0.63) with the peak gamma frequency of visual evoked potentials as measured by EEG or MEG. This electrophysiological parameter is a stable trait and has itself been shown to be very highly heritable (h2 = 0.91!).) Size and shape of cortical areas also vary between inbred mouse strains, demonstrating strong genetic effects on these parameters.

Interestingly, cortical thickness and cortical surface area are independently heritable. This is consistent with the radial unit hypothesis of cortical development, which suggests that the surface area will depend on the number of columns produced while the thickness will depend on the number of cells per column. These parameters are likely affected by variation in distinct cellular processes.

What could these processes be? What kinds of genes might affect the surface area of V1? One class could be involved in early patterning of the cortical sheet. A kind of competition between molecular gradients from the front and back of the embryonic brain determines the relative extent of anterior versus more posterior cortical areas. Mutations affecting these genes in mice can lead to sometimes dramatic increases or decreases in the extent of V1 (and other areas). A negative correlation that has been observed between size of V1 and size of prefrontal cortex in humans might be consistent with such an antagonistic model of cortical patterning. This mechanism establishes the basic layout of the cortical areas, but is only the first step.

The full emergence of cortical areas depends on their being innervated by axons from the thalamus. For example, axons from the dLGN release an unidentified factor that affects cell division in V1, driving the expansion of this area. The size of the dLGN is thus ultimately correlated with that of V1. In addition, the maturation of V1, including the emergence of patterns of gene expression, the local cellular organisation and even the connectivity with other cortical areas all depend on it being appropriately innervated. Variation in genes controlling this innervation could thus indirectly affect V1 size.

Axons from the dLGN are specifically guided to V1 by molecular cues, though the identity of these cues remains largely mysterious. For example, my own lab has shown – in studies of a line of mice with a mutation in an axon guidance factor, Semaphorin-6A – that even if dLGN axons are initially misrouted to the amygdala, they eventually find their way specifically to V1 and are even able to evict interloping axons from somatosensory thalamus, which had invaded this vacant territory. Not all the misrouted axons make it to V1, however, and many that do not eventually die. The end result is that the dLGN is smaller than normal and V1 is also smaller. I am not suggesting that this specific scenario contributes to variation in V1 size in humans but it illustrates the general point that the number of dLGN axons reaching V1 is another factor that will affect its ultimate size.

Whatever the mechanisms, the studies by Rees and colleagues clearly demonstrate considerable variation in subjective visual experience across the population and provide a plausible explanation for this in a heritable variation in brain structure. So, the short answer to the question in the title is most probably “No”. (And the long answer is already way too long, so I’ll stop!)

Friday, October 12, 2012

It’s not the crime, it’s the cover-up: reactivity in the developing brain and the emergence of schizophrenia

In thinking about the causes of schizophrenia, a central question keeps coming up: why does the brain end up in that particular state? Despite a high degree of variability in presentation and difficulties in defining it precisely, there is a recognisable syndrome that we call schizophrenia. This has a number of characteristic attributes, most striking of which are psychotic symptoms such as hallucinations, delusions and disorganised thoughts. These are truly, deeply strange phenomena that require an explanation: why do brain systems fail in that particular way? More to the point, why does that particular brain state emerge in so many people from so many different initial causes?

Because though we don’t know all the causes of this disorder, we know for sure that there are a lot of them. On the genetic front, a large number of distinct, rare mutations in different genes (or regions of the genome) are associated with a high risk for schizophrenia. Genome-wide association studies have implicated additional loci that may modify risk weakly. There are also many environmental risk factors identified through epidemiological studies, such as maternal infection, cannabis use, migration, urban living, winter birth, obstetric complications and others, each of which modestly increases risk, statistically-speaking. Currently, we do not know how all these risk factors interact and there is ongoing debate about their relative importance. What we can say with certainty, however, is that the causes of this disorder are extremely heterogeneous.

The other thing that there is general agreement on is that schizophrenia is a neurodevelopmental disorder. Even though the overt symptoms of psychosis or cognitive decline do not emerge until adolescence or even later, the evidence is compelling that in most cases the initial insults probably occurred decades earlier during fetal or postnatal development. So, the question becomes: why do defects in neural development of so many different types all lead to this same, strange condition? Why is psychosis such an easy phenotype to get to?

This is where Richard Nixon comes in.

The Watergate scandal inspired the saying “it’s not the crime, it’s the cover-up that gets you”*. In regard to schizophrenia, the idea is that mutations in various genes (or environmental insults) may disrupt neural development in diverse ways, affecting different cellular processes and causing different primary phenotypes. The reason that all these different insults can lead to the same outcome lies in the way the developing brain reacts (or over-reacts) to them.

This idea is appealing because it is very parsimonious – it provides a common pathway to the end-state we recognise as schizophrenia, even from extremely diverse starting points. It can also explain the high incidence of the condition: mutations in many, many different genes can cause the disorder because it is an emergent property of the developing brain and not related directly to the genes’ primary functions. It also has a lot of experimental support, especially for the example of psychosis, where many investigations have highlighted a central role for the dopamine system.

Dopamine attracted attention because both typical and atypical antipsychotics target the dopamine D2 receptor and their efficacy is related to their affinity for this receptor. Conversely, amphetamine, which increases dopamine levels, can induce psychotic symptoms. More direct evidence of changes in the dopamine system comes from imaging studies of schizophrenia patients, which have found alterations in dopamine synthesis and release and in baseline occupancy of D2 receptors in the striatum. These findings are complicated and not uncontroversial, but the general convergence of different lines of evidence strongly supports the model that a disturbance in dopaminergic signaling is causing psychosis (or at least contributing to it).

This led many people to look for variation in genes encoding components of the dopaminergic system in patients with schizophrenia, without success. No variation in these genes was found to be associated with risk of the disease. This does not undermine the association of dopamine with the state of psychosis – it simply suggests that the primary changes are in other systems and that the alterations to the dopamine system are secondary reactions.

Animal models have shown how such changes can come about. Many different animal models have been generated to try and model aspects of schizophrenia. Some expose animals to known environmental risk factors, others use pharmacological or surgical procedures and, more recently, a growing number recapitulate high-risk mutations in mice.

With a couple of exceptions, none of these manipulations targets the dopamine system directly. Nevertheless, in many of these models, an altered state of dopamine signaling emerges. This can be observed in a suite of behaviours (such as hyperlocomotion and altered pre-pulse inhibition), which are characteristic of animals in which the dopamine system is hyper-responsive. Many of these animal models show reversal of these phenotypes with antipsychotics and heightened sensitivity to drugs like amphetamine. Alterations in dopamine release, in levels of dopamine receptors or other parameters have also been directly observed in some models.

So, the evidence from animal models converges with that from humans – changes to the dopaminergic system can emerge as secondary consequences of a range of different primary insults. One model has been particularly informative as to how these changes can come about. If young postnatal rats are given a lesion to a particular brain region, the ventral hippocampus, then they will later – only after rat “adolescence” – develop symptoms that parallel psychosis in humans, as described above. These symptoms can be reversed by antipsychotics and are correlated with changes in dopaminergic signaling, but these arise in very different regions of the brain to the one with the lesion.

The circuitry driving these changes has been worked out in great detail by Anthony Grace, Patricio O’Donnell and their respective colleagues. The lesion to the ventral hippocampus renders the connected region, the ventral subiculum, hyperactive. This region, in turn, projects to part of the midbrain, where most dopaminergic neurons live; its overdrive leads to excessive dopamine neuron excitability, which alters the drive to the target areas of these neurons in the striatum and cortex. Crucially, the development of these target areas is itself changed as they mature under a regime of altered activity.

The initial connectivity of the brain is specified by a molecular program of axon guidance and synaptic connectivity cues, but this generates only a rough map. This scaffold is extensively modified by activity. The developing brain is not silent – it is extremely active, electrically. It has its own rhythms and modes of activity, quite different from adults, and these intrinsically generated patterns of electrical firing are essential for driving the wiring of neuronal circuits. Any defect in initial wiring that results in altered architecture of local circuits is likely to also alter patterns of activity, which will propagate through developing circuits to connected areas, inducing a cascade of knock-on effects.

In some cases, neurons may attempt to compensate for altered levels of activity, but these attempts actually exacerbate the situation, compounding the initial defect. This can happen in particular where the initial molecular defect impairs not just the patterns of activity but the systems that monitor and interpret that activity. Neurons may be getting excessive activity but “think” they are getting less, causing what would normally be homeostatic responses to instead amplify the initial difference.
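This amplifying loop can be caricatured in a few lines of simulation (an entirely illustrative sketch: a single gain-controlled unit whose activity sensor under-reads by 30%; none of the numbers are biological measurements):

```python
# A homeostatic rule driven by a faulty sensor. The unit adjusts its
# gain to keep *sensed* activity at a set point, but the sensor reads
# only 70% of true activity -- so the "corrective" rule pushes true
# activity above the set point instead of back towards it.
# All numbers are illustrative.

set_point = 1.0
sensor_scale = 0.7   # impaired sensor under-reports activity
drive = 1.0          # synaptic input
gain = 1.0
eta = 0.1            # slow homeostatic adjustment rate

for _ in range(500):
    activity = gain * drive
    sensed = sensor_scale * activity
    gain += eta * (set_point - sensed)   # standard homeostatic update

# The rule converges to sensed == set_point, so true activity settles
# near set_point / sensor_scale ~= 1.43 -- an overshoot created by the
# very mechanism that was meant to correct it.
print(round(gain * drive, 2))  # prints 1.43
```

With an intact sensor (sensor_scale = 1.0), the same rule would restore activity to the set point exactly; the pathology here is in the monitoring, not the rule.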

So, changes in dopaminergic signaling in cortex and striatum – the signatures of psychosis – can emerge as very downstream effects of diverse initial disturbances in brain development. This is by no means the whole explanation of the state we call schizophrenia or even of the symptoms of psychosis, but it certainly seems to contribute to it. There are many other changes that also emerge over development in both human patients and animal models, notably including a suite of biochemical changes in particular inhibitory interneurons in the cortex. Again, the pathways through which these changes come about are beginning to be worked out.

In this model, the phenotype of schizophrenia is an emergent property of the developing brain – it is not linked in any direct way to the primary etiological factors. In the plot of a classic farce, a minor misunderstanding leads to catastrophe when the hapless protagonist overcompensates for it, getting himself into deeper and deeper trouble (think Basil Fawlty). Things get progressively out of hand and hilarity ensues. When the equivalent happens in brain development, insanity ensues.

*With thanks to David Dobbs for the meme, which came up in a really interesting conversation on how genotypes relate to psychological phenotypes (very indirectly).

Friday, September 7, 2012

The grand schema things

Even a quick glance at the adjacent picture should bring to mind for most people not only the name of this famous person, but a whole host of associated information – what he does (he’s an actor), perhaps some movies he’s been in, who he’s married to, maybe even who he’s no longer married to. Most of this information (depending on one’s level of familiarity with the particulars of the gentleman in question) will have sprung to mind automatically and effortlessly – indeed, it would be very difficult to actively stop it springing to mind, once the person is recognised. (Try thinking of an elephant without thinking of what colour it is).

For some people, however, it is anything but effortless – it is impossible. Readers with prosopagnosia, for example, may still be waiting to find out who the hell I’ve been writing about (it’s Brad Pitt). This condition, also known as face blindness, impairs the ability to recognise people by their faces – the visual stimulus of the face is not linked in the normal way to the rest of the associated information. Similarly, people with colour agnosia may be perfectly able to think of elephants without thinking of the colour grey – in fact, they might be unable to bring the appropriate colour to mind even if asked to. In this case, colour is not connected into the wider concepts of types of objects.

These networks of associations are called “schemas” – the mental representations of the various attributes of a person, object or concept. The interesting thing about them is that they link very different types of information (from the different senses, for example) – information that is processed in very different parts of the brain. For a schema to emerge that includes all the relevant information, all the relevant regions of the brain have to be talking to each other. One hypothesis to explain conditions like prosopagnosia or colour agnosia is that the face or colour regions of the brain are not wired into the wider network normally.

Schemas are built up through experience – we learn that bananas are yellow and curved and about so big and a bit mushy and smell and taste like banana. Just thinking of a banana will bring many of those attributes to mind (though, interestingly, usually not the smell or taste, even though smelling or tasting one can activate the schema). Similarly, we learn that the letter “A” looks like that, or like this: “a” or this: “a” or this: “a” and makes a sound like “ay” or “ah” or the way New Zealanders pronounce it, which cannot be written down.

This kind of learning is believed to involve strengthening connections between ensembles of neurons that represent the various attributes of an object (or type of object). If these various attributes reliably occur together, then the connections between these representations are strengthened. So much so, that activating the representation of one of the attributes of an object (like the shape of the letter A) is usually enough to cross-activate the representations of its other attributes (such as its canonical sounds).
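The "fire together, wire together" logic described above can be sketched minimally (toy numbers, with a single scalar weight standing in for connections between whole neuronal ensembles):

```python
# Minimal Hebbian sketch: two units represent two attributes of an
# object (say, the shape and the sound of a letter). Co-activation
# strengthens the connection between them, until activating one
# representation alone cross-activates the other.
# All numbers are illustrative.

threshold = 0.5   # activation needed to evoke the associated attribute
weight = 0.0      # initial connection: no association yet
eta = 0.1         # learning rate

# Repeated paired experience: shape and sound occur together.
for _ in range(10):
    shape, sound = 1.0, 1.0
    weight += eta * shape * sound   # Hebbian update: co-activity strengthens

# Now present the shape alone:
shape = 1.0
sound_drive = weight * shape
print(sound_drive > threshold)  # prints True: the sound is cross-activated
```

An agnosia, on this caricature, corresponds to the weight never being able to grow in the first place: if the wiring between the two regions is absent or abnormal, paired experience has nothing to strengthen.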

Schemas are thus the neural substrates of knowledge – the statistical regularities and contingencies of our experience wired into our neural networks. The conditions referred to as “agnosias” – literally the lack of knowledge of something – can generally be thought of as a failure to link all the attributes of an object into a schema. These include the conditions mentioned above, but also other types of object agnosia (which are quite diverse and sometimes bizarrely specific), as well as things like congenital amusia (also known as tone or tune deafness) and dyslexia and dyscalculia.

Agnosias can be acquired – usually caused by injury to specific parts of the brain. But there is a growing appreciation that they can also be congenital and, in some cases, very clearly inherited in what seems like a Mendelian manner. These conditions are also not as rare as one might expect – face blindness may affect 1-2% of the population, for example. Familial inheritance of this condition, as well as of congenital amusia and colour agnosia, has been documented, and the high heritability of dyslexia and dyscalculia is well known.

Specific mutations may thus cause an inability to link faces to other information about people, or to link the sounds and shapes of letters, or to link colour to the rest of our knowledge about objects. This may sound like a paradox – after all, I have just been saying that the development of these schemas depends on experience. That is true, but the ability to link different attributes together depends on the wiring of the brain, which can be affected by genetic mutations. If the different regions of the brain are not connected normally then the opportunity for experience to link contingent stimuli will not arise.

There is evidence of physical differences in connectivity in the brains of people with some of these conditions, particularly between the regions processing the various stimuli. These conditions may be characterised not only by a disconnection between the various elements of a network – which represent the various attributes of an object – but also by a further disconnection between these networks and frontal areas that mediate conscious awareness.

There is good evidence in prosopagnosia, for example, that faces of familiar people are actually “recognised” by visual brain areas (which show a different response than to strange faces), even though the person is unaware of this recognition. Similar results have been reported for congenital amusia, where discordant notes are detected by the brain, but not reported to the mind of the person.

If these disorders can be thought of as “disconnection syndromes”, the condition of synaesthesia may reflect just the opposite – hyperconnectivity between brain areas. This condition is usually thought of as a cross-sensory phenomenon – a triggering of visual percepts by sounds, for example, or the experience of tactile shapes triggered by tastes. In many cases, however, the triggers and the induced experiences are not sensory, but cognitive – the automatic association, conceptually, of some extra attribute into the schema of an object. This extra attribute may or may not be actively experienced as a percept.

For example, a common experience in synaesthesia is having a coloured alphabet, where different letters “have” different colours. These associations are highly idiosyncratic but also very stable and very definite – the colour of a particular letter is as much an integral part of its schema for that person as the shapes and sounds associated with it. Similarly, the shape of a taste or the taste of a word, the smell of a musical note or the personality of a number – these are all extra attributes integrally tied to the larger concepts of the inducing stimuli.  

Like many agnosias, synaesthesia runs very strongly in families, and most forms show an apparently simple mode of inheritance. It is quite common for different members of a family to have different types of synaesthetic experiences, however. This suggests that a general predisposition to the condition can be inherited, but that the particular form that emerges depends on additional factors, possibly including chance variation in brain development as well as experience.

One hypothesis to explain synaesthesia is that genetic differences affect the wiring of networks of brain areas, resulting in this case in the inclusion of extra areas into networks to which they do not normally belong. An initial difference in wiring may then alter the subjective experience of the person as they are learning to recognise and categorise various types of stimuli (such as letters, numbers, musical notes, flavours, etc.). If, for example, the “colour area(s)” of the brain are reliably co-activated when letters are being learned, then the experienced colour will be automatically incorporated into the schema of each letter.

We do not yet know the identity of the affected genes in most of these conditions (with a couple of exceptions for dyslexia), but it is likely they will be discovered in the near future. It is important to note that these will not be genes “for reading” or “for face processing” or “for not thinking letters have colours”. Their normal functions may be far removed from the effects seen when they are mutated. For example, one gene known to result in a condition characterised by dyslexia encodes a protein required for normal cell migration. Altered neuronal migration leads to groups of ectopic neurons located in the white matter of the brain – these are thought to impair communication along the nerve fibres, resulting in disconnection of areas required for linking the visual shapes of graphemes with their associated phonemes (that’s the working hypothesis at least). This is a gene for neuronal migration, not for reading.

But the identification of genes involved in these conditions will tell us a lot about an area of developmental neurobiology we still know very little about – how different areas of the cerebral cortex become, on the one hand, specialised to process specific types of information and, on the other, integrated into larger networks that allow different aspects to be associated through experience. This relies on both the initial wiring of cortical networks, driven by a genetic program, and their subsequent refinement as we learn that elephants are grey, bananas are mushy and Brad Pitt is one lucky son-of-a-bitch.


Tuesday, August 21, 2012

Why have genetic linkage studies of schizophrenia failed?

“If there really were rare, highly penetrant mutations that cause schizophrenia, linkage would have found them”. This argument is often trotted out in discussions of the genetic architecture of schizophrenia, which centre on the question of whether it is caused by rare, single mutations in most cases or whether it is due to unfortunate combinations of thousands of common variants segregating in the population. (Those are the two extreme starting positions).

It is true that many genetic linkage studies have been performed to look for mutations that are segregating with schizophrenia across multiple affected members in families. It is also true that these have been unsuccessful in identifying specific genes, but what does this tell us? Does it really rule out or even argue against the idea that most cases are caused by a single, rare mutation? (In the sense that, if the person did not have that mutation, they would not be expected to have the disorder).

This depends very much on the details of how these studies were carried out, their underlying assumptions, their specific findings and the real genetic architecture of the disorder. The idea of genetic linkage studies is that if you have a disease segregating in a particular family, you can use neutral genetic markers across the genome to look at the inheritance of different segments of chromosomes through the pedigree and track which ones co-segregate with the disease. For example, maybe all the affected children inherited a particular segment of chromosome 7 from mom, which can be tracked back to her dad and which is also carried by two of her brothers, who are affected, but not her sister, who is unaffected.

The problem is this: for each transmission from parent to child, 50% of the parent’s DNA is passed on (one copy of each chromosome, which is a shuffled version of the parent’s two copies of that chromosome – usually one segment from grandma, one from granddad, though sometimes there is a little more shuffling). If we only have one such transmission to look at, then we can only narrow down the region carrying a presumptive mutation to 50% of the genome – not much help really. In order for linkage studies to have power, you need to get data from many such transmissions and you therefore need big pedigrees – really huge pedigrees, actually, with information across multiple generations and preferably extending to lots of second or third degree relatives.
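The arithmetic behind this can be made concrete with the standard LOD-score calculation (a rough sketch – real analyses model recombination and marker informativeness, so the real requirement is even larger):

```python
import math

# Each fully informative, non-recombinant meiosis doubles the odds in favour
# of linkage, contributing log10(2) ~ 0.301 to the LOD score.
# The conventional threshold for significant linkage is LOD >= 3 (1000:1 odds).
lod_per_meiosis = math.log10(2)
meioses_needed = math.ceil(3 / lod_per_meiosis)
print(meioses_needed)  # 10 informative meioses, at an absolute minimum
```

Ten perfectly informative transmissions is a best-case floor; with incomplete penetrance and uninformative markers, a well-powered study needs far more – hence the hunt for really huge pedigrees.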

Where linkage studies of other diseases have been successful, those are the kinds of pedigrees that have been analysed. But they are not easy to find. That is especially true for schizophrenia, for a simple and tragic reason – this is a devastating disorder that strikes at an early age and causes very substantial impairment. It is associated with much higher mortality and drastically reduced fecundity (about a third the number of offspring), on average. The result is that people with the disorder tend to have fewer children and, if a mutation causing it is segregating in a pedigree, one would expect the pedigree to be smaller overall.

So, finding really large pedigrees where schizophrenia is clearly segregating across multiple generations has not been easy – in fact, there are very few reported that would be large enough by themselves to allow a highly powered linkage study.  (Here are some exceptions: Lindholm et al., 2001; Teltsh et al., 2008; Myles-Worsley et al. 2011)

Another absolute requirement for linkage studies to work is that you are looking at the right phenotype – you must be certain of the affected status of each member of the pedigree. The analyses can tolerate misassignment of a few people, and can incorporate models of incomplete penetrance – where not all carriers of the mutation necessarily develop the disease. But if too many individuals are misassigned, the noise outweighs the signal. This is a particular problem for neuropsychiatric disorders, which we are now realising have highly overlapping genetic etiology. This is seen at the epidemiological level, in terms of shared risk across clinical categories, but also in the effects of particular, identified mutations, none of which is specific to a single disorder. All the known mutations predispose to disease across diagnostic boundaries, manifesting in some people as schizophrenia, in others as bipolar disorder, autism, epilepsy, intellectual disability or other conditions.

Thus, schizophrenia does not typically “breed true” – there are few very large pedigrees where schizophrenia appears across multiple individuals in the absence of other neuropsychiatric conditions in the family. Such mixed-diagnosis pedigrees were typically excluded from linkage studies on the assumption that schizophrenia represents a natural kind at a genetic level. In fact, they might have been the most useful (and still could be), if what is tracked is neuropsychiatric disease more broadly.

Given the scarcity of large pedigrees where schizophrenia was clearly segregating across multiple generations, researchers tried another approach, which is to find many smaller pedigrees and analyse them together. If schizophrenia is caused by the same mutation in different families across a population, this method should find it. That assumption holds for some simple Mendelian diseases where there is only or predominantly one genetic locus involved – such as Huntington’s disease or cystic fibrosis. But you can see what would happen to your study if it does not hold – if the disorder can in fact be caused by mutations in any of a large number of different genes – any real signals from specific families would be diluted by noise from all the other families.
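The dilution effect is easy to quantify with toy numbers (all assumed, purely for illustration):

```python
# Toy illustration: if causal mutations can occur at any of n_loci equally
# likely loci, only ~1/n_loci of the pooled families are expected to be
# linked at any given locus - the rest contribute only noise there.
n_families = 200
for n_loci in (1, 10, 100):
    expected_linked = n_families / n_loci
    print(f"{n_loci:>3} loci -> ~{expected_linked:.0f} of {n_families} "
          f"families linked at any one locus")
```

With a single locus (the cystic fibrosis situation) all 200 families reinforce the same signal; with a hundred loci, any true signal from the two or so linked families is swamped by the other 198.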

Many such studies have been published, some combining very large numbers of families (in the hundreds). These have failed to localise any clearly consistent linkage regions, never mind specific genes, that harbour schizophrenia-causing mutations. This leads to one (and only one) very firm conclusion: schizophrenia is not like cystic fibrosis or Huntington’s disease – it is not caused by mutations at a single genetic locus or, indeed, at a small number of loci.

Nothing else can be concluded from these negative results. 

In particular, they do not argue against the possibility that schizophrenia is indeed caused by specific, single mutations in each individual or each family where it is segregating, if such mutations can occur at any of a large number of loci; i.e., if the disorder is characterised by very high genetic heterogeneity. This is not an outlandish model – one only has to look at conditions like intellectual disability, epilepsy, congenital deafness or various kinds of inherited blindness for examples of conditions that can be caused by mutations in dozens or even hundreds of different genes.

As it happens, the schizophrenia linkage studies have not necessarily been completely negative – many have found some positive linkage peaks, pointing to particular regions of the genome. These studies have not had the power to refine these signals down to a specific mutation, however, and most specific findings have not been replicated across other studies. It is therefore hard to tell whether the statistical signals represent true or false positives in each study. But this lack of replication is exactly what one would expect under a model of extreme heterogeneity.

So, we can lay that argument to rest – the absence of evidence from linkage studies is not the evidence of anything – it does not, at least, bear on the current debate.

It should be stressed, however, that the failure of linkage also does not provide positive support for the model of extreme genetic heterogeneity – it is simply consistent with it. There are additional lines of evidence that argue against the most extreme version of the multiple rare variants model – the one that says each case is caused by a single mutation. I and others have argued that that is the best theoretical starting point and that we should complicate the model as necessary – but not more than necessary – to accommodate empirical findings (as opposed to jumping immediately to a massively polygenic model of inheritance, which has some very shaky underlying assumptions).

Such empirical findings include the incomplete penetrance of schizophrenia-associated mutations (which manifest as schizophrenia in only a percentage of carriers) and the range of additional phenotypes that they can cause. These findings suggest a prominent role for genetic modifiers – additional genetic variants in the background of each individual that modify the phenotypic expression of the primary mutation. This is to be expected – in fact, it is observed for even the most classically “Mendelian” disorders. In some cases, it may be impossible to even identify one mutation as “primary” – perhaps two or three mutations are required to really cause the disorder. Alternatively, some families, especially those with a very high incidence of mental illness, may have more than one causal mutation segregating, possibly coming from both parental lines (complicating linkage studies in just those families that look most promising).

One hope for finding causal mutations is the current technical ease and cost-effectiveness of sequencing the entire genome (or the part coding for proteins – the exome) of large numbers of individuals. The first whole-exome-sequencing study of schizophrenia has recently been published, with results that seem disappointing at first glance. The authors sequenced the exomes of 166 people with schizophrenia, identifying around a couple hundred very rare, protein-changing mutations in each person. This is normal – each of us typically carries that kind of burden of rare, possibly deleterious mutations. Finding which ones might be causing disease is the tricky bit – the hope is that multiple hits in the same gene(s) might emerge across the affected people. (This has been the case recently for similar studies of autism, with larger sample sizes and looking specifically for de novo, rather than inherited, mutations). No clear hits emerged from this study and follow-up of specific candidate mutations in a much larger sample did not provide strong support for any of them. (It should be stressed, this was a test for very particular mutations, not for the possible effects of any mutations in a given gene).

Again, we should be cautious about over-extrapolating from these negative data. The justified conclusion is that there are no moderately rare mutations segregating in the population that account for an appreciable fraction of cases. These findings do not rule out (or even speak to) the possibility that the disease is caused by very rare mutations, specific instances of which would not be replicated in a wider population sample.

Much larger sequencing studies will be required to resolve this question. If, like intellectual disability, there are hundreds of genetic loci where mutations can result in schizophrenia, then samples of thousands of individuals will be required to find enough multiple hits to get good statistical evidence for any specific gene (allowing for heterogeneity of mutations at each locus). Such studies will emerge over the next couple of years and we will then be in a position to see how much more complicated our model needs to be. If even these larger studies fail to collar specific culprits, then we will have to figure out ways to resolve more complex genetic interactions that can explain the heritability of the disorder. For now, there are no grounds to reject the working model of extreme genetic heterogeneity with a primary, causal mutation in most cases.
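The sample-size point can be illustrated with birthday-problem arithmetic (all numbers assumed; note that a single gene shared by two patients is, by itself, nowhere near statistical proof, given the ~200 rare variants each person carries):

```python
# Toy model: each patient carries one causal mutation in one of n_genes
# equally likely genes. What is the chance a cohort contains at least one
# gene hit in two or more patients?
def p_recurrent_hit(n_patients, n_genes):
    p_all_distinct = 1.0
    for i in range(n_patients):
        p_all_distinct *= 1 - i / n_genes
    return 1 - p_all_distinct

# With ~500 candidate loci (assumed), how does recurrence grow with cohort size?
for n in (10, 50, 166):
    print(n, "patients:", round(p_recurrent_hit(n, 500), 2))
```

Even under this generous model, a cohort of 166 only starts to produce pairwise overlaps; distinguishing true recurrence from the background of benign rare variants takes thousands of patients, which is why the larger studies are needed.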

Tuesday, August 14, 2012

Are human brains especially fragile?

As many as a quarter of people will experience mental illness at some point in their life (over a third in any given year with more expansive definitions). At least 5% of the population suffer from lifelong brain-based disorders, including intellectual disability, autism spectrum disorders, schizophrenia, epilepsy and many others. Many of these conditions dramatically increase mortality rates and reduce fecundity (number of offspring), impacting heavily on evolutionary fitness.

Faced with these numbers, we have to ask the question: are human brains especially fragile? Are we different from other species in this regard? Is the brain different from other organs? As all of the disorders listed above are highly heritable, these questions can be framed from a genetic perspective: is there something about the genetic program of human brain development that makes it especially sensitive to the effects of mutations?     

I have written lately about the robustness of neural development – how, despite being buffeted by environmental insults, intrinsic noise on a molecular level and a load of genetic mutations that we all carry, most of the time the outcome is within a species-typical range. In particular, the molecular programs of neural development are inherently robust to the challenge of mutations with very small effect on protein levels or function, even the cumulative effects of many such mutations (at least, that is what I have argued). The flipside is that development could be vulnerable to mutations in a set of specific genes (especially ones encoding proteins with many molecular interactions) when the mutations have a large effect at the protein level.

That fits generally with what we know about robustness in complex systems, especially so-called small-world networks, which are resistant to error of most components but vulnerable to attack on highly interconnected nodes. But here’s the rub: the number of such genes seems too high. Geneticists are finding that mutations in any of a very large number of genes (many hundreds) could underlie conditions such as autism and intellectual disability. In addition, many of these mutations arise de novo in the germline of unaffected parents (who are not carriers of the responsible mutation) and thus have dominant effects – mutation of just one copy of a gene or chromosomal locus is sufficient to cause a disorder. This is not what is predicted, necessarily, from consideration of robustness of complex systems and the way it evolves. Why are so many different genes sensitive to dosage (the number of copies of the gene) in neural development?

The first thing to assess is whether this situation is actually unusual. It seems like it is, but maybe heart development or eye development is just as sensitive. (Certainly there are lots of genetic conditions affecting these systems too, though I don’t know of any studies comparing the genetic architecture of defects across systems). Those are organs where defects are often picked up, but you could imagine subtle defects in other systems going unnoticed. Maybe we just have a huge ascertainment bias in picking up mutations that affect the nervous system. After all, we are a highly social species, finely adapted to analyse each other’s behaviour. I might not know if your liver is functioning completely normally but might readily detect quite subtle differences in your sociability or threat sensitivity or reality testing or any other of the myriad cognitive faculties affecting human behaviour.

As for whether this situation is unique to humans, it is very hard to tell. Having analysed dozens of lines of mutant mice for nervous system defects, I can tell you it is not that easy to detect subtle differences in anatomy or function. Many mutants that one might expect to show a strong effect (based on the biochemical function and expression pattern of the encoded protein) seem annoyingly normal. However, in many cases, more subtle probing of nervous system function with sophisticated behavioural tasks or electrophysiological measures does reveal a phenotypic difference, so perhaps we are simply not well attuned to the niceties of rodent behaviour.

That kind of ascertainment bias seems to me like it could be an important part of the explanation for this puzzle – it’s not that human brains are more sensitive, it’s that we are better at detecting subtle differences in outcome for human brains than for other systems or other animals. That’s just an intuition, however.

So, just to follow a line of thought, let’s assume it is true that human brain development and/or function is actually more sensitive to mutations (including loss of just one copy of any of a large number of genes) than development or function of other systems. How could such a situation arise?

Well, most obviously, it could simply be that more genes are involved in building a brain than in building a heart or a liver. This is certainly true. At least 85% of all genes are expressed in the brain, far higher than any other organ, and many are expressed during embryonic and fetal brain development in highly dynamic and complex ways not captured by some genomic technologies (such as microarrays). So, maybe there are just more bits to break. The counter-argument is that natural systems with more components tend to be more robust to loss of any one component as more redundancy and flexibility gets built into the system. Robustness may come for free with increasing complexity.

To really understand this, we have to approach it from an evolutionary angle, though – how did this system evolve? Wouldn’t there have been selective pressure against this kind of genetic vulnerability? Well, possibly, though robustness may more typically evolve due to pressure to buffer environmental variation and/or intrinsic noise in the system – robustness to mutations may be a beneficial side-effect as opposed to the thing that was directly selected for. (After all, natural selection lacks foresight – it can only act on the current generation, with no knowledge of the future usefulness of new variations).    

Still, imagine a mutation arises that increases vulnerability to subsequent mutations. Given a high enough rate of such new mutations, the original mutation may well eventually be selected against; i.e., it would not rise to a high frequency in the population. Unless, that is, it conferred a benefit that was greater than the burden of increased vulnerability. That may in fact be exactly what happened. Perhaps the mutations (likely many) that gave rise to our larger and more complex brains gave such an immediate and powerful evolutionary advantage that positive selection rapidly fixed them in the population, potential vulnerability be damned.

This would be like upgrading your electronic devices to a new operating system, even though you know there are bugs and will be occasional crashes – it’s usually so much more powerful that it’s worth it. The selective pressures of the cognitive niche, which early humans started to carve out for themselves, may have pushed ever harder for increasing brain complexity, despite the consequences. 

Increased size and complexity may also be intimately tied to another feature of human brain development – early birth and prolonged maturation. Birth at a stage when the brain is more immature than in other species was probably necessitated by the growth of the brain and the size limits of the birth canal. One effect of this is that the human brain is more exposed to experience during periods of early plasticity, providing the opportunity to refine neural circuitry through learning. Indeed, human brain maturation continues for decades, with the evolutionarily newest additions, such as prefrontal cortex, maturing latest.

This brings obvious advantages, especially providing greater opportunities for an amplifying interplay between genetic and cultural evolution. But it has a downside, in that the brain is vulnerable during these periods to insults, such as extreme neglect, stress or abuse. Perhaps selection for early birth and prolonged maturation also made the human brain more sensitive to the effects of genetic mutations, some of which may only become apparent as maturation proceeds.    

For now, it is hard to tell whether human brains are really especially fragile or whether we are just very good at detecting subtle differences. If they are fragile, one can certainly imagine this as a tolerated cost of the vastly increased complexity and prolonged development of the human brain.