Tuesday, December 18, 2012

Genetic entropy and the human intellect (or why we're not getting dumber)

This is a modified version of a letter published in Trends in Genetics: Kevin J. Mitchell (2012). Genetic entropy and the human intellect. Trends in Genetics, 14th December 2012.

Two articles by Gerald Crabtree float the notion that we, as a species, are gradually declining in average intellect, due to the accumulation of mutations that deleteriously affect brain development or function [1, 2]. The observations that prompted this view seem to be: (i) intellectual disability can be caused by mutations in any one of a very large number of genes, and (ii) de novo mutations arise at a low but steady rate in every new egg or sperm. He further proposes (iii) that genes involved in brain development or function are especially vulnerable to the effects of such mutations. Considered in isolation, these could reasonably lead to the conclusion that mutations reducing intelligence must be constantly accumulating in the human gene pool. Thankfully, these factors do not act in isolation.

If we, as a species, were simply constantly accumulating new mutations, then one would predict the gradual degradation of every aspect of fitness over time, not just intelligence. Indeed, life could simply not be sustained over evolutionary time in the face of such genetic entropy. Fortunately – for the species, though not for all individual members – natural selection is an attentive minder.

Analyses of whole-genome sequences from large numbers of individuals demonstrate an “excess” of rare or very rare mutations [3, 4]. That is, mutations that might otherwise be expected to be at higher frequency are actually observed only at low frequency. The strong inference is that selection is acting, extremely efficiently, on many mutations in the population to keep them at a very low frequency.
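The logic here is the classic one of mutation-selection balance, and it can be made concrete with a toy deterministic model (the parameter values below are illustrative assumptions, not estimates from these studies):

```python
# Toy deterministic model of mutation-selection balance. Recurrent mutation
# (rate u) pushes a deleterious allele into the population; selection (fitness
# cost s in carriers) pushes it back out. The allele settles near u/s and
# stays rare indefinitely. Parameter values are illustrative, not empirical.
u, s = 1e-4, 0.1   # per-generation mutation rate; fitness cost in carriers
q = 0.0            # frequency of the deleterious allele
for _ in range(2000):               # generations
    q = q * (1 - s) / (1 - q * s)   # selection removes carriers
    q = q + (1 - q) * u             # fresh mutations arise each generation
print(f"equilibrium frequency ~{q:.4f} (expected u/s = {u/s:.4f})")
```

However long the loop runs, the allele frequency never climbs above roughly u/s: "accumulation" simply stops at the balance point.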

One of the key misconceptions in the Crabtree articles is that mutations happen to “us”, as a species. His back-of-the-envelope calculations lead him to the following conclusions: “Every 20-50 generations we should sustain a mutation in one copy of one of our many ID [intellectual deficiency] genes. In the past 3000 years then (~120 generations), each of us should have accumulated at the very least 2.5-6 mutations in ID genes”.
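The arithmetic itself is easy to reproduce; a minimal sketch, using his own assumptions (one hit in some ID gene every 20-50 generations, and ~25 years per generation):

```python
# Reproducing Crabtree's back-of-envelope arithmetic. The assumptions are his:
# one new mutation in some ID gene every 20-50 generations per lineage, and
# ~120 generations in the past 3000 years (~25 years per generation).
generations = 3000 / 25          # ~120 generations
low = generations / 50           # one hit per 50 generations
high = generations / 20          # one hit per 20 generations
print(f"~{low:.1f} to {high:.1f} mutations per lineage")  # ~2.4 to 6.0
```

Note the hidden premise: every such mutation is assumed to persist in the lineage indefinitely, with selection playing no role at all.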

The loose phrasing of these sentences reveals a fundamental underlying fallacy. “We” have not sustained mutations in “our” ID genes, and “each of us” has not accumulated anything over the past 3000 years, having only existed for a fraction of that. Mutations arise in individuals, not populations. Nor does it matter that there are many thousands of genes involved in the developmental systems that generate a well-functioning human brain – selection can very effectively act, in individuals, on new mutations that impair these systems.

Mutations causing intellectual disability dramatically impair fitness, explaining why so many cases are caused by de novo mutations – the effects are often too severe for them to be inherited [5]. Furthermore, this selective pressure extends into the normal range of intelligence, as described by Deary: “One standard deviation advantage in intelligence was associated with 24% lower risk of death over a follow-up range of 17 to 69 years… The range of causes of death with which intelligence is significantly associated… include deaths from cardiovascular disease, suicide, homicide, and accidents” [6].

A more recent study of over 1 million Swedish men found that lower IQ in early adulthood was also associated with increased risk of unintentional injury: “After adjusting for confounding variables, lower IQ scores were associated with an elevated risk of any unintentional injury (hazard ratio per standard deviation decrease in IQ: 1.15), and of cause-specific injuries other than drowning (poisoning (1.53), fire (1.36), road traffic accidents (1.25), medical complications (1.20), and falling (1.17)). These gradients were stepwise across the full IQ range”. None of that sounds good.
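Because these hazard ratios are quoted per standard deviation, they compound across the IQ distribution; a rough illustration (the ~4 SD span between the tails of the range is my assumption, not a figure from the study):

```python
# Per-SD hazard ratios compound multiplicatively across the IQ distribution.
# Illustrative arithmetic only, using the quoted per-SD figures and an
# assumed ~4 SD spread between the tails of the range.
per_sd = {"any injury": 1.15, "poisoning": 1.53, "fire": 1.36}
span_sd = 4
for cause, hr in per_sd.items():
    print(f"{cause}: {hr}**{span_sd} ~ {hr ** span_sd:.2f}x across the range")
```

Even the mildest of these gradients implies a substantially elevated risk at the low end of the range relative to the high end.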

Crabtree suggests, however, that brains and intelligence are special cases when it comes to the effects of genetic variation and natural selection. First, he argues that ID genes are members of a chain, where every link is fragile, rather than of a robust network. This view is mistaken, however, as it ignores all the genes in which mutations do not cause ID – this is the robust network in which ID genes are embedded. He also cites several studies reporting high rates of retrotransposition (jumping around of mobile DNA elements derived from retroviruses) and aneuploidy (change in chromosome number) in neurons in human brains. He argues that these processes of somatic mutation would make brain cells especially susceptible to loss of heterozygosity, as an inherited mutation in one copy of a gene might be followed by loss of the remaining functional copy in many cells in the brain.

If these reports are accurate, such processes would indeed exacerbate the phenotypic consequences of germline mutations, but this would only make them even more visible to selection. However, it seems unlikely, a priori, that these mechanisms play an important role. First, it would seem bizarre that evolution would go to such lengths to craft a finely honed human genome over millions of years only to let all hell break loose in what Woody Allen calls his second favourite organ. One would predict, in fact, that if such mechanisms really prevailed we would all be riddled with brain cancer. In addition, if these processes had a large effect on intelligence in individuals, this would dramatically reduce the heritability of the trait (which estimates the contribution of only inherited genetic variants to variance in the trait). The fact that the heritability of IQ is extremely high (estimated between ~0.7 and 0.8) suggests these are not important mechanisms [6]. This view is directly reinforced by a recent study by Chris Walsh and colleagues, who, using the more direct method of sequencing the entire genomes of hundreds of individual human neurons, found vanishingly low rates of retrotransposition and aneuploidy [7].
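The heritability argument can be made explicit with some toy variance numbers (the values are illustrative assumptions; only the direction of the effect matters):

```python
# Heritability estimates the share of phenotypic variance attributable to
# inherited variants: h2 = V_G / (V_G + V_E + V_somatic). The variance values
# below are illustrative assumptions, chosen to show the direction of the effect.
V_G = 0.75   # inherited genetic variance (consistent with h2 ~ 0.75)
V_E = 0.25   # environmental and measurement variance
for V_somatic in (0.0, 0.5, 1.0):  # hypothetical variance from somatic mutation
    h2 = V_G / (V_G + V_E + V_somatic)
    print(f"V_somatic = {V_somatic:.1f} -> h2 = {h2:.2f}")
```

If somatic retrotransposition or aneuploidy added substantial non-inherited variance, measured heritability would be pushed well below the observed ~0.7-0.8.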

Crabtree additionally suggests that modern societies shelter humans from the full scrutiny of natural selection, permitting dullards to thrive and deleterious mutations to accumulate. He speculates that high intelligence would have been more important in hunter-gatherer societies than in more modern societies that arose with high-density living. No evidence is offered for this idea, which contradicts models suggesting just the opposite – that the complexities of social interactions in human societies were actually a main driver of increasing intelligence [8]. Indeed, over the past millennium at least, there is evidence of a strong association between economic success (itself correlated with intelligence) and number of surviving children, suggesting selection on intelligence at least up until very recent times. In contrast, number of offspring in contemporary hunter-gatherer societies has been linked more to aggression and physical prowess [9].

Arguments that reduced intelligence does not impair fitness in modern societies can thus be directly refuted. But there is another way to think about this association, which considers intelligence from a very different angle [10]. Rather than a dedicated cognitive faculty affected by variation in genes specifically “for intelligence”, or, conversely, degraded by mutations in genes “for intellectual disability”, intelligence may actually be a non-specific indicator of general fitness. In this scenario, the general load of deleterious mutations in an individual cumulatively impairs phenotypic robustness or developmental stability – the ability of the genome to direct a robust program of development, including development of the brain. Reduced developmental stability will affect multiple physiological parameters, intelligence being just one of them [11].

This model is supported by observed correlations between intelligence and measures of developmental stability, such as minor physical anomalies and fluctuating asymmetry (a more robust developmental program generating a more symmetric organism) [12]. Intelligence is also correlated with diverse physical and mental health outcomes, from cardiovascular to psychiatric disease [6]. Under this model, intelligence gets a free ride. It is maintained not by selection on the trait itself, but on the coat-tails of selection against mutational load generally.

Whether causally or as a correlated indicator, intelligence is thus strongly associated with evolutionary fitness, even in current societies. The threat posed by new mutations to the intellect of the species is therefore kept in check by the constant vigilance of selection. Thus, despite ready counter-examples from nightly newscasts, there is no scientific reason to think we humans are on an inevitable genetic trajectory towards idiocy.


1 Crabtree, G.R. (2012) Our fragile intellect. Part II. Trends Genet
2 Crabtree, G.R. (2012) Our fragile intellect. Part I. Trends Genet
3 Tennessen, J.A., et al. (2012) Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science 337, 64-69
4 Abecasis, G.R., et al. (2012) An integrated map of genetic variation from 1,092 human genomes. Nature 491, 56-65
5 Ku, C.S., et al. (2012) A new paradigm emerges from the study of de novo mutations in the context of neurodevelopmental disease. Mol Psychiatry
6 Deary, I.J. (2012) Intelligence. Annu Rev Psychol 63, 453-482
7 Evrony, G.D., et al. (2012) Single-neuron sequencing analysis of L1 retrotransposition and somatic mutation in the human brain. Cell 151, 483-496
8 Pinker, S. (2010) The cognitive niche: coevolution of intelligence, sociality, and language. Proc Natl Acad Sci U S A 107 Suppl 2, 8993-8999
11 Yeo, R.A., et al. (2007) Developmental instability and individual variation in brain development: implications for the origin of neurodevelopmental disorders. Current Directions in Psychological Science 16, 245-249
12 Banks, G.C., et al. (2010) Smarter people are (a bit) more symmetrical: A meta-analysis of the relationship between intelligence and fluctuating asymmetry. Intelligence 38, 393-401

Wednesday, December 12, 2012

Do you see what I see?

An enduring question in philosophy and neuroscience is whether any individual’s subjective perceptual experiences are the same as those of other people. Do you experience a particular shade of red the same way I do? We can both point to something in the outside world and agree that it’s red, based on our both having learned that things causing that perceptual experience are called “red”. But whether the internal subjective experience of that percept is really the same is almost impossible to tell.

There are some exceptions, of course, where there are clear differences between people’s perceptions. Colour blindness is the most obvious, where individuals clearly do not experience visual stimuli in the same way as non-colourblind people. This can be contrasted with the experience of people who are tetrachromatic – who can distinguish between a greater number of colours, due to expression of a fourth opsin gene variant. Conditions like face blindness and dyslexia may involve difficulties in higher-order processing of specific categories of visual inputs. And synaesthesia provides a striking example of a difference in subjective perceptual experience, where certain stimuli (such as sounds, musical notes, graphemes, odours or many other inducers) are accompanied by an extra visual percept (or a percept or association in some other modality).

But what about more general experience? Do people without such striking conditions exhibit stable individual differences in how they see things? Geraint Rees and colleagues have done some fascinating work showing that they do, and have linked such differences in subjective experience to differences in the size of the visual cortex.

They have used a number of visual illusions as tools to quantify individuals’ subjective experience. One of these is the well-known Ebbinghaus illusion, where a circle seems to differ in size when surrounded by either bigger or smaller circles. (Even though you know it’s the same size, it’s almost impossible to see it that way.) Rees and colleagues assayed how susceptible people were to this illusion by asking them to match the perceived size of the internal circle with one of a set of circles that really did vary in size.

They then used functional neuroimaging to map the spatial extent of the primary visual cortex (area V1) in these people. This is the first region of the cerebral cortex that receives visual information. This information is conveyed by direct connections from the dorsal lateral geniculate nucleus (dLGN), the visual part of the thalamus, which itself receives inputs from the retina. The important thing about the projections of nerve fibres from the retina to the dLGN and from there to area V1, is that they form an orderly map. Neurons that are next to each other in the retina project to target neurons that are next to each other in the dLGN and so on, up to V1. Since the visual world is itself mapped by the lens across the two-dimensional surface of the retina, this means that it is also mapped across the surface of V1.

Rees and colleagues took advantage of this feature to map the extent of V1 using functional magnetic resonance imaging (fMRI). By moving a stimulus across the visual field one gets an orderly response of neurons from different parts of V1, until a point is reached at which the responses reverse – this is the start of the second visual area, V2.

Remarkably, the strength of the visual illusion (and of another one called the Ponzo illusion) experienced by individuals correlated strongly (and negatively) with the size of their V1. That is, individuals with a smaller V1 experienced the illusion more strongly – they were the least accurate in judging the true size of the inner circle. Put another way, their perception of the inner circle was more affected by the nearby outer circles. This suggests a possible explanation for this effect.

Neurons in V1 receive inputs from the dLGN but also engage in lateral interactions with nearby V1 neurons. These integrate responses from neighbouring visual fields and help sharpen response to areas of higher contrast, such as edges of objects. If the visual world is projected across a physically smaller sheet of neurons, then the responses of neurons in one part may be more affected by neighbouring neurons responding to nearby visual stimuli (the outer circles in this example). Conversely, a larger V1 could mean that each neuron integrates across a smaller visual field, increasing visual resolution generally and reducing responsiveness to the Ebbinghaus illusion.
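That scaling argument can be sketched with toy numbers (the field size, lateral reach and V1 widths are all illustrative assumptions, not measured values):

```python
# Toy version of the lateral-interaction account: lateral connections span a
# roughly fixed cortical distance, so mapping the same visual field onto a
# smaller V1 means each neuron's neighbours cover more visual angle.
# All numbers below are illustrative assumptions.
visual_field_deg = 20.0            # extent of visual field mapped onto V1
lateral_reach_mm = 2.0             # assumed fixed anatomical reach of lateral connections
for v1_width_mm in (50.0, 80.0):   # smaller vs larger V1
    deg_per_mm = visual_field_deg / v1_width_mm
    reach_deg = lateral_reach_mm * deg_per_mm
    print(f"V1 width {v1_width_mm:.0f} mm -> lateral reach ~{reach_deg:.2f} deg")
```

On these assumptions, the smaller V1 integrates over roughly 60% more visual angle per neuron – the directional effect needed to explain the stronger illusion.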

That makes a pretty neat explanation of that effect (probably too neat and simple, but a good working hypothesis), but leads us on to another question. How do differences in the size of V1 come about? What factors determine the spatial extent of the primary visual cortex? There is considerable variation in this parameter across individuals, as assessed by neuroimaging or by post mortem cytoarchitecture (as in the diagram, showing the extent of V1, labelled as 17, and of V2, labelled as 18, in eight individuals). Is the extent of V1 genetically determined or more dependent on experience?

The heritability of the extent of V1 itself has not been directly studied, to my knowledge, but the surface area of the occipital lobe (encompassing V1 and other visual areas) is moderately heritable (h2 between 0.31 and 0.64 in one study). This is supported by a more recent twin study, which used genetic correlations to parcellate the brain and also found that the surface area of the entire occipital lobe is heritable, largely independently of other regions of the brain. (Another reason to expect the extent of V1 to be heritable is that it is very highly correlated (r = 0.63) with the peak gamma frequency of visual evoked potentials as measured by EEG or MEG. This electrophysiological parameter is a stable trait and has itself been shown to be extremely highly heritable (h2 = 0.91!).) Size and shape of cortical areas also vary between inbred mouse strains, demonstrating strong genetic effects on these parameters.

Interestingly, cortical thickness and cortical surface area are independently heritable. This is consistent with the radial unit hypothesis of cortical development, which suggests that the surface area will depend on the number of columns produced while the thickness will depend on the number of cells per column. These parameters are likely affected by variation in distinct cellular processes.

What could these processes be? What kinds of genes might affect the surface area of V1? One class could be involved in early patterning of the cortical sheet. A kind of competition between molecular gradients from the front and back of the embryonic brain determines the relative extent of anterior versus more posterior cortical areas. Mutations affecting these genes in mice can lead to sometimes dramatic increases or decreases in the extent of V1 (and other areas). A negative correlation that has been observed between size of V1 and size of prefrontal cortex in humans might be consistent with such an antagonistic model of cortical patterning. This mechanism establishes the basic layout of the cortical areas, but is only the first step.

The full emergence of cortical areas depends on their being innervated by axons from the thalamus. For example, axons from the dLGN release an unidentified factor that affects cell division in V1, driving the expansion of this area. The size of the dLGN is thus ultimately correlated with that of V1. In addition, the maturation of V1, including the emergence of patterns of gene expression, the local cellular organisation and even the connectivity with other cortical areas all depend on it being appropriately innervated. Variation in genes controlling this innervation could thus indirectly affect V1 size.

Axons from the dLGN are specifically guided to V1 by molecular cues, though the identity of these cues remains largely mysterious. For example, my own lab has shown – in studies of a line of mice with a mutation in an axon guidance factor, Semaphorin-6A – that even if dLGN axons are initially misrouted to the amygdala, they eventually find their way specifically to V1 and are even able to evict interloping axons from somatosensory thalamus, which had invaded this vacant territory. Not all the misrouted axons make it to V1, however, and many that do not eventually die. The end result is that the dLGN is smaller than normal and V1 is also smaller. I am not suggesting that this specific scenario contributes to variation in V1 size in humans but it illustrates the general point that the number of dLGN axons reaching V1 is another factor that will affect its ultimate size.

Whatever the mechanisms, the studies by Rees and colleagues clearly demonstrate considerable variation in subjective visual experience across the population and provide a plausible explanation for this in a heritable variation in brain structure. So, the short answer to the question in the title is most probably “No”. (And the long answer is already way too long, so I’ll stop!)