
Tuesday, December 21, 2010

Self-organising principles in the nervous system

The circuitry of the brain is too complex to be completely specified by genetic information – at least not down to the level of each connection. There are tens of billions of neurons in your brain (roughly 86 billion), each making on the order of 1,000 connections to other cells. There are simply not enough genes in the genome to specify all of these connections.
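A rough back-of-the-envelope calculation makes the point concrete (all figures are approximate round numbers, used only for illustration):

```python
import math

# rough figures: ~86 billion neurons, ~1,000 connections each,
# ~3 billion base pairs at 2 bits per base
neurons = 86e9
connections = neurons * 1e3
genome_bits = 3e9 * 2

# naming one specific target neuron takes ~log2(neurons) bits, so an
# explicit wiring list would need far more information than the genome holds
bits_per_connection = math.log2(neurons)
shortfall = (connections * bits_per_connection) / genome_bits
```

Even on these generous assumptions, an explicit connection list would exceed the genome's total information capacity by a factor of several hundred thousand – the genome can only encode wiring rules, not individual connections.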

What the genetic program can achieve is a good initial wiring diagram of projections between neurons in different brain areas (or between different layers or cell types). This circuitry is then refined and elaborated at the cellular level by processes of activity-dependent development, under the principle that “cells that fire together, wire together”. The circuitry of the brain is thus a self-organising system, which assembles under the influence of local interactions, mediated first by molecular interactions and then by patterns of electrical activity.

A new study highlights an important additional factor that allows global patterns of nerve projections, or “neural maps”, to emerge from these local interactions. Neural maps are systematic representations of sensory information across the surface of the brain. A study of the structures of visual maps across a range of quite distantly related species reveals a universal pattern and argues strongly that it cannot be explained by either genetic or environmental instructions but instead arises due to self-organising principles. Remarkably, mathematical descriptions of these principles fit the observed structures extremely well and reveal that one important structural parameter is constant across all species and equal to the mathematical constant π.

Obtaining such a robust mathematical result in any biological system is a rare event and reinforces the view that it reflects a fundamental principle of self-organising systems. To understand the significance of this result, we need to examine the organisation of the visual system in more detail. Starting in the retina, the visual system is built up in a hierarchical series of relays. At each level, the system is wired to combine and compare inputs from neighbouring cells in the preceding level. In this way, more and more complex and global patterns of visual objects can be extracted (starting with dots, then lines, then parts of shapes, simple geometrical shapes and eventually complex objects).



Photons of light are initially detected by photoreceptors in the retina. Each single photoreceptor at any given moment registers light coming into the retina from a particular point of visual space. These cells relay information through a series of layers to the retinal ganglion cells, which are the output cells of the retina. Importantly, each ganglion cell integrates information from multiple, neighbouring photoreceptors. These connections can be either excitatory or inhibitory. A single ganglion cell is usually most strongly activated when a central photoreceptor is active but its neighbours are not. This means that ganglion cells are particularly sensitive to areas of visual space with high contrast – where there is an edge of an object, for example. (If the light across the visual field is uniform then the ganglion cells are less active).
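The centre-surround arithmetic can be sketched as a difference-of-Gaussians receptive field (a standard textbook model; the kernel sizes here are invented for illustration, not measured values):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    g = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))
    return g / g.sum()

# centre-surround receptive field: narrow excitatory centre minus
# broad inhibitory surround (difference of Gaussians)
size = 21
rf = gaussian_kernel(size, 1.5) - gaussian_kernel(size, 4.0)

# two stimuli: uniform illumination, and a light/dark edge through the centre
uniform = np.ones((size, size))
edge = np.ones((size, size))
edge[:, : size // 2] = 0.0

# a ganglion cell's response modelled as the dot product of stimulus and field
resp_uniform = float(np.sum(rf * uniform))  # ~0: excitation cancels inhibition
resp_edge = float(np.sum(rf * edge))        # > 0: the edge breaks the balance
```

With both Gaussians normalised, uniform light excites the centre and surround equally and the response cancels to zero, while an edge leaves the balance broken and the cell responds – exactly the contrast sensitivity described above.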

Retinal ganglion cells project in turn to the visual thalamus, which relays this information to the primary visual cortex (area V1). Cells in V1 integrate information from multiple retinal ganglion cells, extracting more high-level features of the visual information. In particular, many cells in V1 respond best to short lines – you can imagine how such a response can be achieved by integrating inputs from neighbouring retinal ganglion cells, each responding to high contrast in a central domain (a line in visual space would then maximally excite these cells, compared to a solid block for example).

Depending on the layout of the ganglion cells whose inputs are integrated, each cell in V1 will be most sensitive to lines of a particular orientation (vertical, horizontal, diagonal). This sensitivity can be directly observed by using electrodes to record the responses of cells in V1 when an animal is shown various visual stimuli. The ground-breaking work of Hubel and Wiesel first revealed the remarkable preferences of individual cells for lines of different orientation. It also revealed another important principle, which is that the organisation of these cells with respect to each other is highly structured.
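The Hubel–Wiesel idea of building orientation selectivity from aligned centre-surround inputs can be sketched directly: sum the rectified responses of a few difference-of-Gaussians subunits whose centres lie on a line (all sizes and positions here are invented for illustration):

```python
import numpy as np

def gaussian(size, sigma, cx, cy):
    X, Y = np.meshgrid(np.arange(size), np.arange(size))
    g = np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

size = 33
# three centre-surround subunits whose centres are aligned vertically,
# mimicking a V1 "simple cell" built from ganglion-cell-like inputs
subunits = [gaussian(size, 1.5, 16, cy) - gaussian(size, 4.0, 16, cy)
            for cy in (8, 16, 24)]

# two bar stimuli of the same size, differing only in orientation
vertical = np.zeros((size, size))
vertical[:, 15:18] = 1.0
horizontal = np.zeros((size, size))
horizontal[15:18, :] = 1.0

# the model cell sums the rectified responses of its subunits
resp_v = sum(max(float(np.sum(rf * vertical)), 0.0) for rf in subunits)
resp_h = sum(max(float(np.sum(rf * horizontal)), 0.0) for rf in subunits)
```

A vertical bar drives all three subunits at once, while a horizontal bar hits only the middle one (and falls on the inhibitory surrounds of the others), so the summed response is several times larger for the preferred orientation.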

This structure is apparent at two levels: first, cells with similar orientation selectivity form small clusters, called columns (because the selectivity actually extends in a column across the six layers of the cortex). Second, clusters are laid out across the surface of V1 in a non-random pattern characterised by a “pinwheel” structure, where the direction of orientation selectivity varies smoothly across neighbouring columns, which are arranged in a spiral fashion around the pinwheel centre. (The diagram represents the layout of columns with different orientation selectivities, denoted by the colour code).



Not all species show these properties. Cells in the visual cortex of rodents, for example, are selective for particular stimulus orientations but they are not clustered – individual cells are effectively scattered across V1. But wherever clustering is observed, the pinwheel organisation is also observed. This is true across multiple species in which it must have evolved independently. This result is not trivial – there are many other ways these maps could theoretically be structured (stripes, lattices, etc.). So why do they emerge in this particular pattern?

To investigate this, Matthias Kaschube, Fred Wolf and colleagues analysed the orientation maps in three distantly related species: ferrets, tree shrews and galagos. Tree shrews, despite their name, are not rodents but a sister group of primates. Ferrets are on the carnivore branch and galagos, also known as bush-babies, are primates. Importantly, these three species have quite different habits and ecological habitats, arguing against any commonalities in environmental experience as driving similarities in the organisation of visual maps.

All three species show orientation columns and all show the pinwheel organisation. However, the sizes of individual columns vary considerably across these species and even across individuals within each species. To determine whether there was really any universality in the organisation of these maps, the authors painstakingly measured a range of parameters across many individuals. These parameters include the average column size, the average distance between columns of the same orientation preference and the density of pinwheel centres. They found that the pinwheel density, in relation to the other parameters, was constant across all species.

Not fairly constant or kind of constant – really constant (or as close as one could ever expect in a biological system). And not only was it constant in the sense that it was consistent – the value was equal to a mathematical constant: π (pi, the ratio of a circle’s circumference to its diameter). This had been predicted from mathematical models of the underlying processes, which I wish I understood better. Even though they are all Greek to me, the fact that the value is not just some arbitrary number indicates that it reflects a fundamental mathematical constraint on the self-organisation of this system.
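The flavour of the mathematics can be sketched numerically. For the simplest baseline – an isotropic random orientation map with a single typical wavelength Λ, modelled as a complex Gaussian random field (a standard null model in this literature, not the authors' full dynamical model) – the pinwheel density per Λ² also comes out at π:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256                           # grid size in pixels
k0 = 2 * np.pi / 16               # typical wavenumber; column spacing = 16 px

# complex Gaussian random field with a narrow annular power spectrum
k = np.fft.fftfreq(n) * 2 * np.pi
KX, KY = np.meshgrid(k, k)
K = np.hypot(KX, KY)
amplitude = np.exp(-((K - k0) ** 2) / (2 * (0.1 * k0) ** 2))
noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
z = np.fft.ifft2(amplitude * np.fft.fft2(noise))

# orientation preference at each point is arg(z)/2; pinwheels are the zeros
# of z, found where the phase winds by ±2π around a pixel plaquette
phase = np.angle(z)

def wrap(a):                      # wrap phase differences into (-pi, pi]
    return np.angle(np.exp(1j * a))

winding = (wrap(phase[1:, :-1] - phase[:-1, :-1]) +
           wrap(phase[1:, 1:] - phase[1:, :-1]) +
           wrap(phase[:-1, 1:] - phase[1:, 1:]) +
           wrap(phase[:-1, :-1] - phase[:-1, 1:]))
pinwheels = int(np.sum(np.abs(winding) > np.pi))

wavelength = 2 * np.pi / k0       # column spacing in pixels
density = pinwheels / n ** 2 * wavelength ** 2   # pinwheels per wavelength²
```

Running this gives a density close to π, matching the analytical result that random Gaussian maps have π pinwheels per Λ². The striking finding of the paper is that real maps, which are far from random, converge on this same value.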

The authors show that this constraint is most likely imposed by the pattern of long-range connections, which link columns of similar orientation selectivity. These horizontal connections, which are formed in an activity-dependent manner, impose a more global structure on the layout of columns and constrain the possible organisation of the map as a whole.

The results of this study argue strongly that neither genetic nor environmental instruction is sufficient to generate the observed pattern. Instead, given a set of initial conditions and biochemical algorithms instructing changes in connectivity based on local interactions, global patterns will emerge based on very general mathematical principles of self-organising systems.

Kaschube M, Schnabel M, Löwel S, Coppola DM, White LE, & Wolf F (2010). Universality in the evolution of orientation columns in the visual cortex. Science (New York, N.Y.), 330 (6007), 1113-6 PMID: 21051599

Monday, November 29, 2010

New insights into Rett syndrome


A pair of papers from the lab of Fred Gage has provided new insights into the molecular and cellular processes affected in Rett syndrome. This syndrome is associated with arrested development and autistic features. It affects mainly girls, who typically show normal development until around age two, followed by a sudden and dramatic deterioration of function, regression of language skills and the emergence of autistic symptoms. It is caused mainly by mutations in the gene encoding MeCP2, which resides on the X chromosome. Complete removal of the function of this gene is effectively lethal, explaining why Rett syndrome is not observed in boys – males who inherit that mutation are not viable. Females, who have a back-up copy of the X chromosome, survive but subsequently show the symptoms of the disease.

The function of the MeCP2 protein seems very far removed from the kinds of symptoms observed when it is deleted. The job of MeCP2 is to bind to DNA that carries a specific chemical tag – a methyl group – which marks DNA for repression. When MeCP2 binds, it recruits a host of other proteins which shut down that section of DNA and prevent any genes within it from being expressed. How a defect in a process that is so fundamental could result in such specific symptoms has been a mystery.

A major barrier in understanding these processes has been the inability to assay the effects of the mutation in this gene in neurons of people who carry it. After all, unlike some other cell types, one cannot simply extract neurons from patients. (They tend to be using them.) New stem cell technologies developed over the last few years offer a way around this problem. It is possible to extract fibroblasts from patients with a simple skin biopsy. By transfecting these cells with genes that are normally expressed in embryonic stem cells it is possible to “de-differentiate” them – to turn them back into stem cells. (The difference between a skin cell and a stem cell lies in the genes that are being expressed – transfecting the cells with the master regulatory genes that determine embryonic stem cell identity forces the expression profile back to that state.) These “induced pluripotent stem cells” (iPS cells) can then be encouraged to differentiate into any of the cell-types of the body, including neurons. In this way, a virtual biopsy of a patient’s neurons can be obtained.

Gage and colleagues did exactly that, generating neurons in a dish from patients with Rett syndrome. I make that technique sound simple, but of course it isn’t, and these experiments represent a technical tour de force. They were then able to characterise various parameters of these neurons to assay more directly the molecular and cellular effects of MeCP2 mutation. These experiments revealed a not unexpected defect in the formation of synapses between neurons carrying Rett mutations. Neurons from Rett mutation-carriers developed normally and showed normal electrophysiological properties but made fewer synapses with each other and showed a concomitant decrease in network activity. I say not unexpected because it had previously been shown that mouse neurons carrying a MeCP2 mutation show similar effects. This fits with highly convergent findings from autism genetics showing that many other implicated genes function in synapse formation.

What is important about the iPS cells, compared to the information that can be learned from studying mouse cells with MeCP2 knocked out, is that they give a picture of the effects, first, of the specific mutation in this gene in each patient, and second, of the genetic background of each patient, which may modify the effects of the MeCP2 mutation. This gives a far more direct view of the specific effects of each patient’s complete genotype on the development and function of their neurons.

While defects in synapse formation suggest a fundamental role for MeCP2 in neural development – which might imply an irreversible defect – several lines of evidence suggest that the requirement for MeCP2 function is ongoing, in processes of activity-dependent wiring, where neurons within networks strengthen connections based on their patterns of activity. This fits with the apparently normal early development of girls with Rett syndrome prior to age two, and also with evidence from mouse models that restoring MeCP2 function in adults can largely reverse the symptoms. These discoveries therefore hold out the promise that intervention in Rett syndrome patients, even in older children, may be effective.

Gage and colleagues tested a couple of potential therapies on the neuronal networks derived from Rett syndrome patients and were able to show some degree of rescue of the defects. One of these, the protein insulin-like growth factor-1 (IGF-1), was previously shown to be effective in partially rescuing the defects in MeCP2 mutant mice, most likely by stimulating greater synapse production and compensating for the loss of MeCP2 activity. Clinical trials are now planned to test the efficacy of this approach in patients. Having the cells derived from patients should also greatly facilitate screening for new drugs that can correct the neuronal network defects.

Another paper from the same group, also analysing these cells, revealed a far less expected effect – one that suggests (far more speculatively) the possible involvement of a totally different pathogenic mechanism. One of the functions of the system that methylates DNA is to defend the genome against invaders. Our genome is riddled with parasitic elements – pieces of DNA that can replicate themselves and “jump” around the genome. Fully 45% of our “human” genome is made up of these so-called transposable elements. Most of the copies of these elements are inactive but a subset can generate new copies that will integrate at random into the genome. What has this got to do with Rett syndrome?

Well, MeCP2 is apparently one of the proteins whose job it is to shut down these transposable elements. Gage and colleagues showed that one particular class of these elements, called L1 elements, was far more active in cells derived from Rett syndrome patients. The L1 elements expressed higher levels of the proteins they encode and they generated additional copies of themselves, which were scattered around the genome. Interestingly, this effect seems to be restricted to neurons, presumably because the function of MeCP2 is especially required in that cell-type.

Though highly speculative, this raises the idea that high rates of somatic mutation (somatic meaning it happens in the body, not in the germline and thus will not be inherited), caused by L1 elements jumping around and landing in the middle of genes, may contribute to the severity and also the variability of the phenotype caused by MeCP2 mutations. The alternative is that the L1 transposition has no pathogenic effect but is simply a consequence of the Rett syndrome mutations. Future experiments will be required to tell which of these possibilities is correct.



Marchetto MC, Carromeu C, Acab A, Yu D, Yeo GW, Mu Y, Chen G, Gage FH, & Muotri AR (2010). A model for neural development and treatment of Rett syndrome using human induced pluripotent stem cells. Cell, 143 (4), 527-39 PMID: 21074045

Muotri AR, Marchetto MC, Coufal NG, Oefner R, Yeo G, Nakashima K, & Gage FH (2010). L1 retrotransposition in neurons is modulated by MeCP2. Nature, 468 (7322), 443-6 PMID: 21085180

Monday, November 22, 2010

A synaesthetic mouse?

An amazing study just published in Cell starts out with fruit flies insensitive to pain and ends up with what looks very like a synaesthetic mouse. Penninger and colleagues were interested in the mechanisms of pain sensation and have been using the fruit fly as a model to investigate the underlying biological processes. Like any good geneticist faced with profound ignorance of how a process works, they began by screening for mutant flies that are insensitive to pain. Making use of a very powerful genetic resource developed in Vienna (a bank of fly lines expressing RNA interference constructs for every gene in the genome) they screened through all these genes to see which ones were required in neurons for flies to respond to pain. (In particular, pain caused by excessive heat).

Why should anyone care how a fly feels pain? Well, like practically everything else you can think of, the basic physiology and molecular biology of pain sensation is very highly conserved from flies to mammals. It starts with specialized proteins called TRP channels, which are ion channels that span the cell membrane and allow ions to pass across it in response to various stimuli. Some of these TRP channels respond specifically to painful stimuli, some even more specifically to painful heat, and these molecules are highly conserved. The hope was that by screening for other genes they would identify additional conserved elements of the pathway.

This was exactly what they found. Among hundreds of new mutants that were insensitive to pain, they focused in this report on one, a gene called straightjacket. This gene codes for a protein called alpha2delta3, or CACNA2D3, which is a member of a conserved family of proteins that make up part of a calcium channel. These proteins are involved in modulating neurotransmission and also in some aspects of development, including the formation of synapses. Interestingly, mutations in other members of this gene family are associated with bipolar disorder, schizophrenia, Timothy syndrome (the symptoms of which include autism), epilepsy and migraine.

This particular gene is conserved in mammals and the authors show that mutation of the gene in mice also leads to insensitivity to pain induced by heat, but not to painful mechanical stimuli – a remarkably specific functional conservation. In addition, they show suggestive evidence that variants in the gene in humans are also associated with a higher pain tolerance. These latter data will have to be replicated but tantalizingly suggest that variation in this gene in humans may contribute to differences in pain sensitivity.

Mutation of this gene seems to cause pain insensitivity not by blocking pain responses in the sensory neurons or by blocking transmission of this signal to the brain, but by blocking transmission from the first relay station of the brain, the thalamus, to the cortex, where it must pass to be consciously perceived. The authors showed that the sensory neurons still respond to painful stimuli and that a spinal pain reflex was intact. They also used functional magnetic resonance imaging in mice to show that the thalamus was active as normal in response to painful stimuli. However, a network of areas in the cortex (the “pain matrix”) was completely unresponsive. Somehow, deletion of CACNA2D3 alters connectivity within the thalamus or from thalamus to cortex in a way that precludes transmission of the signal to the pain matrix areas.

This is where the story really gets interesting. While they did not observe responses of the pain matrix areas in response to painful stimuli, they did observe something very unexpected – responses of the visual and auditory areas of the cortex! What’s more, they observed similar responses to tactile stimuli administered to the whiskers. Whatever is going on clearly affects more than just the pain circuitry.



The authors suggest that this kind of sensory cross-activation may represent a model for synaesthesia, which is characterised by very similar effects. While this condition is highly familial, no genes have yet been isolated for it. Could CACNA2D3 be a viable candidate? It certainly seems possible, though one point suggests that whatever is happening, while similar to developmental synaesthesia, may be somewhat distinct.

Synaesthesia usually involves an extra percept in response to some stimulus, without any decrement in the response to the stimulus itself. So, people who see colours when they hear music hear the music normally – the colour is just part of that experience. This is rather different from a situation where one sense is deficient and is taken over by another. That situation can arise due to injury, for example, and can even be surgically induced in animal models (used to study brain plasticity). One recent report (see below) described a patient with a lesion in the somatosensory nucleus of the thalamus. This region was subsequently invaded by fibres carrying auditory information so that the patient was able to feel sounds. (The auditory fibres were activated by sound, which cross-activated the somatosensory nucleus, which communicated this activity to the somatosensory cortex, where it was perceived as a touch on the surface of the body.)

Could such an effect explain what was happening in these mice? Perhaps for the pain circuits, though one would typically expect that they would be invaded by other senses, rather than the other way around. But for the tactile stimuli, the message was apparently still getting through to the somatosensory cortex – it was just also activating visual and auditory areas. That starts to look like a pretty good model for synaesthesia. Whether it really is would most convincingly be demonstrated by finding a mutation in this gene in someone with synaesthesia. A good place to start might be testing the human carriers of the variants in this gene that affected pain sensitivity for any signs of synaesthesia.

Even if it does not correspond exactly to what we call developmental synaesthesia, one can predict that something pretty strange would result from mutation of this gene in humans. Given that every base of the genome is probably mutant in someone on the planet it seems certain that such mutations will eventually crop up.

It is not yet clear what cellular mechanism can explain the cross-activation observed in the mutant mice. One can imagine any number of scenarios, including structural rewiring between thalamic nuclei (which are specialized to transmit different types of sensory information) or from thalamus to cortex. Alternatively, changes in neurotransmission might explain the effects, for example by damping down cross-inhibitory processes that normally sharpen responses to one sense at a time. One way to dissociate these would be to see whether blocking the function of the protein just in adults is sufficient to induce the effect or if it has to be blocked during development. This might be achieved using drugs – a close relative of CACNA2D3 is blocked by gabapentin, a drug used in humans as an antiepileptic and also to block neuropathic pain (like that which can arise due to shingles, for example). Whether this or a similar drug could affect the α2δ3 subunit is not, I think, known, but no doubt someone is now looking for a drug that can.


Neely GG, Hess A, Costigan M, Keene AC, Goulas S, Langeslag M, Griffin RS, Belfer I, Dai F, Smith SB, Diatchenko L, Gupta V, Xia CP, Amann S, Kreitz S, Heindl-Erdmann C, Wolz S, Ly CV, Arora S, Sarangi R, Dan D, Novatchkova M, Rosenzweig M, Gibson DG, Truong D, Schramek D, Zoranovic T, Cronin SJ, Angjeli B, Brune K, Dietzl G, Maixner W, Meixner A, Thomas W, Pospisilik JA, Alenius M, Kress M, Subramaniam S, Garrity PA, Bellen HJ, Woolf CJ, & Penninger JM (2010). A Genome-wide Drosophila Screen for Heat Nociception Identifies α2δ3 as an Evolutionarily Conserved Pain Gene. Cell, 143 (4), 628-38 PMID: 21074052

Beauchamp MS, & Ro T (2008). Neural substrates of sound-touch synesthesia after a thalamic lesion. The Journal of neuroscience : the official journal of the Society for Neuroscience, 28 (50), 13696-702 PMID: 19074042

Wednesday, November 3, 2010

Announcing the Wiring the Brain conference 2011

I am pleased to announce the Wiring the Brain conference, which will be held on the 12th–15th of April 2011, in Ireland. This is an international scientific conference which aims to explore how the brain is wired and what happens when that wiring is faulty.

It will bring together world-leaders in developmental neurobiology, psychiatric genetics, molecular and cellular neuroscience, systems and computational neuroscience, cognitive science and psychology. A major goal is to break down traditional boundaries between these disciplines to enable links to be made between differing levels of observation and explanation.

We will explore, for example, how mutations in genes controlling the formation of synaptic connections between neurons can alter local circuitry, changing the interactions between brain regions, thus altering the functions of large-scale neuronal networks, leading to specific cognitive dysfunction, which may ultimately manifest as the symptoms of schizophrenia or autism. Though the subjects dealt with will be much broader than that, this example illustrates the kind of explanatory framework we hope to develop, level by level, from molecules to mind.

A list of confirmed speakers is provided below. We are excited to have an outstanding programme of leading researchers across many different fields. The full programme is available at http://www.wiringthebrain.com. Registration and abstract submission are now open. You can follow updates on the meeting and pre-meeting discussion topics on the Wiring the Brain Facebook group.

The conference is being held in association with Neuroscience Ireland and with BioMed Central and we are delighted to have them both involved. We have also received generous support from Science Foundation Ireland and from other sponsors (listed on the conference website).

The venue is the beautiful Ritz-Carlton hotel in Powerscourt, Co. Wicklow, a convenient drive from Dublin airport and one of the most scenic areas of the country.

We hope to see some of you there!


The Organising Committee

Kevin Mitchell, Trinity College Dublin

Aiden Corvin, Trinity College Dublin

Isabella Graef, Stanford University

Edward Hubbard, Vanderbilt University

Franck Polleux, The Scripps Research Institute



Keynote lectures

Gyorgy Buzsaki, Rutgers University
- brain oscillations and cognitive functions

Carla Shatz, Stanford University
- activity-dependent mechanisms of neural development

Chris Walsh, Harvard Medical School
- genetics of cortical development and cortical malformations


Plenary speakers

Rosa Cossart, INSERM U901, Université de la Méditerranée, Marseilles
- neuronal network development and function

Ricardo Dolmetsch, Stanford University
- neuronal signaling pathways; molecular mechanisms in autism

Dan Geschwind, University of California, Los Angeles
- genetics and pathogenic mechanisms of autism; brain systems biology

Michael Gill, Trinity College Dublin
- genetics and pathogenic mechanisms of psychiatric disorders

Anirvan Ghosh, University of California, San Diego
- molecular mechanisms of neuronal connectivity

Melissa Hines, University of Cambridge
- sexual differentiation of the nervous system

Josh Huang, Cold Spring Harbor Laboratories
- molecular mechanisms of synaptogenesis

Heidi Johansen-Berg, University of Oxford
- diffusion-weighted tractography in the human brain

Mark Johnson, Birkbeck College, University of London
- cognitive development, neuroconstructivism

Maria Karayiorgou, Columbia University
- genetics and pathogenic mechanisms of schizophrenia

Isabelle Mansuy, University of Zurich
- epigenetic mechanisms of synaptic plasticity and dysfunction

Andreas Meyer-Lindenberg, University of Heidelberg
- functional and structural neuroimaging in psychiatric disorders

Bita Moghaddam, University of Pittsburgh
- network development and mechanisms of psychiatric dysfunction

Tomas Paus, University of Nottingham
- maturation of cortical connectivity in adolescence

Linda Richards, Queensland Brain Institute
- axon guidance, cortical connectivity

Akira Sawa, Johns Hopkins University
- molecular and cellular functions of psychiatric risk genes

Bradley Schlaggar, Washington University, St. Louis
- functional connectivity networks

Klaas Stephan, University of Zurich
- computational modeling of brain connectivity

Pierre Vanderhaeghen, University of Brussels
- molecular mechanisms of cortical development

Sunday, October 24, 2010

Searching for a needle in a needle-stack


Whole-genome sequencing is a game-changer for human genetics. It is now possible to deduce every base of an individual’s genome (all 6 billion of them – two copies of 3 billion each) for a couple of thousand euros, and dropping. (Yes, euros). Even Ozzy Osbourne just got his genome sequenced! For researchers searching for the causes of genetic disease (or resistance to vast quantities of drugs and alcohol), this means they no longer have to infer where a mutation is by tracking a sampling of “markers” spaced across the genome – they can directly see all of the genetic information.

The problem is, they directly see all of the genetic information. If each of us carries thousands of mutations – changes that are very rare or may even have never been seen before in any other person – then telling which one of those changes is actually causing the condition is a tough task. Researchers in psychiatric genetics are currently grappling with how to handle this glut of information.

The problem is particularly acute in this field, where there is a (very slowly) growing realisation that many so-called common disorders – such as schizophrenia and autism – are really umbrella terms for collections of very rare disorders. Each of these conditions can be caused by mutations in single genes. The reason they are so common is that so many genes are required to wire the brain properly – mutations in any of probably hundreds of genes can lead to the kinds of neurodevelopmental defects that ultimately result in psychopathology. (At least, that is the working hypothesis – see review below for a discussion of the evidence supporting it.)

Very large studies are now underway to sequence the genomes of thousands of people with schizophrenia, autism or other psychiatric disorders, along with “control” individuals from different populations. The hope is that by comparing the spectrum of mutations in patients with those in controls, it will be possible to deduce which mutations are pathogenic. The most obvious ones will be those which recur in multiple individuals with a psychiatric disorder, are not present in the control population and are predicted to affect the biochemical function of the encoded protein. Those parameters can be used to prioritise candidate mutations for further study.

So far, however, it has been far more difficult to generate the type of statistical evidence that psychiatric geneticists have been used to from genome-wide association or linkage studies. One major problem is that, while it is true that mental illness can be caused by single mutations, it is also true that the situation is likely more complicated than that in many cases. Most such mutations that have been identified to date are only partially “penetrant” – that means that not all of the people who carry the mutation have the disorder in question. A related phenomenon is “variable expressivity” – the phenotypes that a given mutation produces vary widely across mutation-carriers. This makes it crucially important for genetic studies to very carefully define the phenotype being mapped – in many cases a particular clinical diagnosis will not be the best phenotype to choose.

One reason for such variable phenotypes due to a mutation in any single gene is that its effects may be modified by other mutations that each person carries. That situation is not unique to psychiatric disease – it’s actually true of all so-called Mendelian disorders. Even in classical examples like cystic fibrosis, which is caused by mutations in a single gene, the effects of such mutations are quite variable and are strongly affected by genetic background.

But it does pose a major problem – if you find a mutation in two or three people with disease and one person without disease, how can you assign a p-value to the likelihood of that mutation being causative? And how do you distinguish mutations in that gene from those that happen to occur in all the other genes in the genome? Hopefully, this problem will partly solve itself as larger samples of patients and control individuals are sequenced. A move back to family-based studies will also be hugely helpful as it will provide evidence based on which mutations segregate with illness (or, even better, with some more fundamental neurobiological “endophenotype”).
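To get a feel for why recurrence alone is such weak evidence, here is a rough back-of-the-envelope sketch. Every parameter below is an invented illustration, not a figure from any actual study: a cohort size, an assumed per-gene rate of rare damaging variants, and the rough count of protein-coding genes.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_patients = 1000   # assumed cohort size
p_hit = 1e-4        # assumed chance a person carries a rare damaging variant in a given gene
n_genes = 20000     # roughly the number of human protein-coding genes

# Probability that one particular gene is hit in two or more patients purely
# by chance, and the expected number of genes genome-wide showing such
# "recurrence" even if none of them has anything to do with the disease.
p_recurrent = binom_tail(n_patients, p_hit, 2)
expected_by_chance = n_genes * p_recurrent
```

With these made-up numbers, the chance for any single gene is small (about 0.5%), but multiplied across 20,000 genes you would expect dozens of genes to be hit twice by chance alone – which is why finding a mutation in two or three patients cannot, by itself, yield a convincing p-value.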

However, we will still likely be left with a situation where the statistical evidence we can get from considering the spectrum of mutations in single genes will run into mathematical limits. At some point it will be necessary to look for other types of evidence from outside the system. One type of evidence will come from analysing the biochemical pathways of the implicated genes – it is already becoming apparent that many such genes encode proteins that interact with each other (see review below for examples).

For example, mutations in the gene Contactin-associated protein 2 (CNTNAP2) have been found in patients with autism, schizophrenia, epilepsy, Tourette’s syndrome, ADHD and other disorders. The evidence for this gene by itself is extremely strong. Recently, mutations in genes encoding the related proteins CNTNAP4 and CNTNAP5 have also been found in patients with epilepsy and autism, respectively. By itself, the evidence for each of these genes is not at all convincing – in fact it is not even possible to generate a p-value for how likely it is that they are causative. But taken together, the findings of mutations in each of these genes greatly strengthen the implication of the pathway in general. Findings of mutations in the genes encoding the interacting proteins Contactin-3, -4 and -5 similarly add to the weight of evidence.

These proteins are all involved in forming synaptic contacts between neurons, as are the products of many other genes identified in patients, further implicating defects in this process as one route to mental illness.

The effects of mutations in particular genes can also be investigated in genetically modified mice. If a mutation in Gene A causes neurodevelopmental defects and physiological or behavioural phenotypes that are similar to those seen in mice with mutations in a gene known to cause psychiatric illness, then that is strong evidence that Gene A may be the culprit in individuals carrying a mutation that disrupts it.

The next few years will be tremendously exciting as the data from sequencing projects become available. To fully interpret these it will be necessary to look beyond statistical measures from the human data themselves and include evidence of biological plausibility, converging biochemical pathways and neurobiological phenotypes in both humans and animal models.


Mitchell KJ (2010). The genetics of neurodevelopmental disease. Current opinion in neurobiology PMID: 20832285

Monday, October 18, 2010

Colour my world


Colour does not exist. Not out in the world at any rate. All that exists in the world is a smooth continuum of light of different wavelengths. Colour is a construction of our brains. A lot is known about how the brain does this, beginning with complicated circuits in the retina itself. Thanks to a new paper from Greg Field and colleagues we now have an even more detailed picture of how retinal circuits are wired to enable light to be categorized into different colours. This study illustrates a dramatic and fundamental principle of brain wiring – namely that cells that fire together, wire together.

Colour discrimination begins with the absorption of light of different wavelengths. This is accomplished by photopigment proteins, called opsins, which are expressed in cone photoreceptor cells in the retina. Humans have three opsin genes, which encode proteins that preferentially absorb light of different wavelengths: short (S, in what we perceive as the blue part of the spectrum), medium (M, green) and long (L, red). Each cone expresses only one of these opsin genes and is thus particularly sensitive to light of the corresponding wavelength. However, by itself the response of a single cone cell cannot be used to determine the colour (wavelength) of incoming light. The reason is that each cone is responsive to both the wavelength and the intensity of the light – so an M-cone would respond equally to a dim green light or a strong red light.
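The wavelength–intensity confound can be made concrete with a toy model. The sensitivity values below are invented for illustration; they are not real cone absorption spectra.

```python
# Toy single-cone model: the response depends on both wavelength (via the
# cone's sensitivity) and intensity, so the two are confounded.
M_CONE_SENSITIVITY = {"green": 0.9, "red": 0.3}  # illustrative values only

def m_cone_response(colour, intensity):
    return intensity * M_CONE_SENSITIVITY[colour]

dim_green = m_cone_response("green", 1.0)
bright_red = m_cone_response("red", 3.0)
# The two responses are essentially identical: from its own output alone,
# the cone cannot tell which of the two lights it saw.
```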

Colour information only arises by comparing the responses of multiple cone cells. This is accomplished in two distinct channels – one which compares the inputs of L and M cones (the red-green channel) and one which compares the inputs of S cones to the combined inputs of L and M cones (the blue-yellow channel). The latter of these is the original, evolutionarily older system, dating back at least 500 million years. It is found in most mammals, in which there are only two opsin genes – an S opsin and one whose absorbance is midway between L and M.

The L/M system evolved much more recently, due to a gene duplication that occurred in the lineage of Old World primates, probably around 40 million years ago. The duplication of the primordial L/M opsin gene allowed the two resultant genes to diverge from each other in sequence, generating proteins with different absorption spectra, which could then be compared. Something similar can actually be achieved even in species with only one copy of the L/M gene. This gene is on the X chromosome, so females will carry two copies of it. Due to the random inactivation of one X chromosome in each cell in females, each cone will express only one of the two copies of this opsin gene. If the two copies differ from each other, encoding proteins with alterations in the amino acid sequence that affect their light absorbance, then what will arise is a set of L cones and a set of M cones.

All of this raises an important question – how are the inputs to these different cone cells compared? If the cells which express L and M opsins are essentially the same, with the sole difference being that they express different opsin genes, then how is the wiring in the retina set up so that their inputs are distinguished, allowing their subsequent comparison? Cells in the retina are arranged in a series of layers. Cone cells connect, through bipolar and other cells, to retinal ganglion cells, which in turn convey visual information to the brain. Retinal ganglion cells integrate inputs from multiple cones, but in a very specialized way – some cones connect through ON bipolar cells (which are activated by light) and others through OFF bipolar cells (which are inhibited by light). Typically, one cone in the centre of an array of cells is connected to an ON bipolar cell, while surrounding cones connect to the same retinal ganglion cell target via OFF bipolar cells. The result is that the light signal hitting an array of cones is integrated – if the central cone is an L cell and the surrounding cones are M cells then the retinal ganglion cell will be most strongly activated by red light.
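A minimal sketch of that centre–surround arrangement, with invented activation values and a simple excitation-minus-inhibition rule (not the retina's actual biophysics):

```python
def ganglion_response(centre, surround):
    """ON-centre / OFF-surround integration: the centre cone excites the
    ganglion cell, the surround cones (via OFF bipolars) inhibit it."""
    return centre - sum(surround) / len(surround)

# An L-centre, M-surround cell. Under red light the L cone is strongly
# driven and the M cones weakly; under green light the reverse holds.
red_light = ganglion_response(centre=1.0, surround=[0.2, 0.2, 0.2, 0.2])
green_light = ganglion_response(centre=0.2, surround=[1.0, 1.0, 1.0, 1.0])
# red_light is positive (cell fires), green_light is negative (suppressed).
```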

This has been known for quite a long time now. What has not been clear is how this system gets wired up during development. S, M and L cones are distributed randomly across the retina. S cones, which are the least frequent, are molecularly distinct from L/M cones in many ways and connect to a dedicated set of S channel bipolar and retinal ganglion cells. The development of the wiring that carries out the comparison between S and L/M cones is thus molecularly specified. This cannot be the case for the comparison between L and M cones, which differ only in the opsin gene they express.

The new study by Field and colleagues worked out in breathtaking detail the circuitry of the retina at a cellular level. Their results reveal the beauty and elegance of this circuitry but also resolve an important question relating to how L and M cone cells are wired. Each retinal ganglion cell in the centre of the retina receives ON inputs from a single cone and OFF inputs from the surrounding cones. In the periphery, however, the ON “centre” is composed of up to twelve cones. For the ganglion cell to discriminate colours there must be a bias in how many L or M cone cells wire up to it through the ON and OFF channels.

Their results reveal exactly such a bias and further show that it cannot be explained simply by random clumping of L or M cones in the photoreceptor array. What this indicates is that there is some additional mechanism whereby inputs from just one type of cone are strengthened in each of the ON and OFF channels. In effect, the L and M cones are competing for inputs in each channel, presumably through so-called “Hebbian mechanisms”, whereby inputs to a cell are strengthened if they fire at the same time and asynchronous inputs are actively weakened. Despite there being no molecular differences between these cone cells, the brain is thus primed to wire them into distinct channels based on their patterns of activity.
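To see how activity alone can sort molecularly identical cells into distinct channels, here is a toy simulation of that Hebbian competition. Every number is an invented illustration, not data: a model ganglion cell receives four “L” and four “M” cone inputs, cones of the same type fire together, synchronous inputs are strengthened and asynchronous ones weakened, and a small initial bias gets amplified until one cone type captures the cell.

```python
def clamp(w):
    return max(0.0, min(1.0, w))

def train(w_L, w_M, lr=0.05, rounds=50):
    for _ in range(rounds):
        for active in ("L", "M"):               # L-group and M-group take turns firing
            x_L, x_M = (1.0, 0.0) if active == "L" else (0.0, 1.0)
            y = 4 * w_L * x_L + 4 * w_M * x_M   # cell's response to 4 cones of each type
            # "Fire together, wire together": inputs synchronous with the cell's
            # firing are strengthened, asynchronous inputs are weakened.
            w_L = clamp(w_L + lr * y * (1.0 if x_L else -1.0))
            w_M = clamp(w_M + lr * y * (1.0 if x_M else -1.0))
    return w_L, w_M

w_L, w_M = train(0.6, 0.4)  # slight initial bias towards L cones
# The bias is amplified until the cell is driven almost purely by L inputs.
```

Run with the bias reversed (`train(0.4, 0.6)`) and the M inputs win instead – the mechanism does not care which type it is, only which group of correlated inputs starts out slightly stronger.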

A remarkable experiment performed a few years ago dramatically illustrates this principle. Mice are naturally dichromatic – they only have two opsin genes (S and L/M). Researchers in Jeremy Nathans’s group replaced one copy of the L/M gene with a version of the human L gene. This meant that female mice could be generated which carried one mouse opsin (L/M) and one human version (L). Cone cells could express one or the other of these genes. The result was astonishing – in visual tests, these mice could clearly distinguish between light of wavelengths which they were previously unable to discriminate. (They could now tell red from green). Despite normally having only two channels, their nervous system was clearly primed to perform this comparison.

Amazingly, this may extend to humans as well. The opsin genes in humans can also be polymorphic – each one comes in several different versions. Females who carry one version of, say, the L gene on one X chromosome, and another on the other X chromosome, can effectively have four different channels of absorption: S, M, L and L’. If the retina is primed to compare inputs based on their patterns of activity then one would predict that such females would be tetrachromatic – they should be able to distinguish between more colours than trichromatic individuals (just as trichromats can distinguish more colours than dichromats – people with a mutation in one of the L or M opsin genes, who are red-green colourblind).

This increased ability to discriminate colours is, apparently, indeed present in about 50% of females and can be revealed by a very simple test. Consider the picture of the colour spectrum shown below. If you print it out and mark with a pencil everywhere there seems to be a clear border between two distinct colours, then what you will find is that most trichromats mark out about 7 colour domains, while tetrachromats mark out between 9 and 10 (and dichromats about 5).




So, where a man may just see “green”, a woman may see chartreuse or olive. Realising that people literally see things differently (and not just colours) could avoid needless argument. (That said, the woman is clearly more right, and it is usually best to concede graciously).


Field GD, Gauthier JL, Sher A, Greschner M, Machado TA, Jepson LH, Shlens J, Gunning DE, Mathieson K, Dabrowski W, Paninski L, Litke AM, & Chichilnisky EJ (2010). Functional connectivity in the retina at the resolution of photoreceptors. Nature, 467 (7316), 673-7 PMID: 20930838

Jacobs, G., Williams, G., Cahill, H., & Nathans, J. (2007). Emergence of Novel Color Vision in Mice Engineered to Express a Human Cone Photopigment Science, 315 (5819), 1723-1725 DOI: 10.1126/science.1138838

Jameson KA, Highnote SM, & Wasserman LM (2001). Richer color experience in observers with multiple photopigment opsin genes. Psychonomic bulletin & review, 8 (2), 244-61 PMID: 11495112

Monday, October 4, 2010

Mice with fully functioning human brains

I wouldn’t usually discuss politics in a blog like this, but a recent story caught my eye, as it provides an example of the depressing and sometimes bizarre level of scientific illiteracy among elected officials or some people who hope to be elected. The example is from the United States, which is an easy target in this regard, but we have had a similar episode in Ireland recently so I don’t think we (or indeed any other non-Americans) can feel particularly smug about it. This one is especially funny, though.

Christine O’Donnell has recently won the Republican nomination in Delaware for the upcoming election to the Senate. I just love her – for comic entertainment this woman is very good value. She makes Sarah Palin look like the most reasonable, well-informed, level-headed person around. Among many clangers that she has dropped in the past, the one that really got my attention was the following assertion, made during a debate on stem cells on The O’Reilly Factor show on Fox News a few years ago:

"American scientific companies are cross-breeding humans and animals and coming up with mice with fully functioning human brains. So they're already into this experiment."

That’s right, cross-breeding humans and animals. I’m not sure how she imagines that to have taken place and would rather not know. And yes, she did say: mice with fully functioning human brains. Now, the average mouse weighs around 20 grams. The average human brain (clearly there are exceptions) weighs around 1.4 kilograms. I’m not sure Ms. O’Donnell really thought that through, even from a purely mechanical standpoint. However, she apparently had the opportunity to qualify or alter her assertion but did not, so one can assume she meant something like what she actually said.

(She also thinks evolution is a myth, because if we evolved from monkeys, then how come the monkeys are not still evolving into humans? That some people buy that kind of “argument” exemplifies the poor grasp that many people have of geological time. And of the fact that we did not evolve from monkeys – monkeys and humans evolved from a common ancestor. It reminds me of an even funnier comment I read from another creationist: if we evolved from monkeys, then how come we don’t speak monkey? There’s just no answer to that.)

What the imaginative Ms. O’Donnell may have been trying to refer to was a story that got some press coverage at the time of scientists who had transplanted a small number of human cells into a mouse brain to see if they would migrate and integrate normally. Apparently, about 100 such cells survived, in a brain that contains over 20 million cells. So, transplantation, not cross-breeding, and not fully functioning human brains, but to be fair to her she did, in an incredibly inept and confused manner, raise an interesting issue.

That is the question of whether it is ever a good idea (or morally or ethically right) to create an organism whose cells derive from two different species – a so-called chimera (named after the mythically mixed-up creature). This is especially touchy when some of the cells are of human origin. Why, you might legitimately ask, would anyone want to do such a thing?

Well, there are lots of reasons, none of which involves playing God just for fun, or actually wanting to create a hybrid organism. Most of the studies that have carried out such experiments are designed to test the potential of stem cells for regeneration of damaged parts of the brain. Stem cells can be obtained from many different sources, including early human embryos, umbilical cord blood and bone marrow. New technologies now allow fully differentiated adult cells from various tissues to be retro-differentiated into stem cells (so-called induced pluripotent stem cells). All of these cell types hold great promise for regenerative medicine, especially ones that are of the same genotype as the prospective patient.

But how to test them? Just injecting them into patients’ brains doesn’t seem like the best approach, though it has actually been done in some cases of seriously ill patients in the late stages of Parkinson’s and Huntington’s disease. Initial results seemed to suggest some clinical improvement, but larger, more carefully controlled trials have been largely disappointing. These studies involved injection of primary human fetal cells into the brains of adult patients and were not particularly sophisticated in terms of how the cells were treated prior to injection.

With better-defined populations of stem cells it is now possible, for example, to differentiate them into particular types of neurons (or their direct progenitors) prior to transplantation. To determine the efficacy of such treatments, animal models of these disorders have been used. Human cells will integrate fairly happily into a rodent or even a chick brain. (No chick jokes, please). The brain is immune-privileged, and grafts of foreign cells are generally well tolerated by the host. Using this approach it is possible to determine how transplanted cells survive, migrate and integrate into the brain (under the assumption that such processes would be much the same in a human brain). More importantly, it is possible to determine whether transplantation of such cells results in any improvement in the animal’s condition.

Such studies are generating promising results in models of stroke, spinal cord injury and neurodegenerative diseases such as Alzheimer’s, Huntington’s and Parkinson’s diseases (see a few recent examples below). It is still early days, however, and a lot more pre-clinical research like this will have to be carried out to characterise how these cells behave after transplantation before they will be approved for clinical use. So, nothing sinister, no witchcraft (sorry, Christine, I know you like that), no hybrid mouse-humans scuttling into the dark corner of the lab when the lights are turned on. Just scientists and clinicians trying hard to find cures for serious diseases. Nothing sensationalist at all really. Sorry.



Snyder BR, Chiu AM, Prockop DJ, & Chan AW (2010). Human multipotent stromal cells (MSCs) increase neurogenesis and decrease atrophy of the striatum in a transgenic mouse model for Huntington's disease. PloS one, 5 (2) PMID: 20179764

Salazar DL, Uchida N, Hamers FP, Cummings BJ, & Anderson AJ (2010). Human neural stem cells differentiate and promote locomotor recovery in an early chronic spinal cord injury NOD-scid mouse model. PloS one, 5 (8) PMID: 20806064

Lee HJ, Lim IJ, Lee MC, & Kim SU (2010). Human neural stem cells genetically modified to overexpress brain-derived neurotrophic factor promote functional recovery and neuroprotection in a mouse stroke model. Journal of neuroscience research PMID: 20818776

Friday, September 17, 2010

Ancient origins of the cerebral cortex

Just how special is the human brain? Compared to other mammals, the thing that stands out most is the size of the cerebral cortex – the thick sheet of cells on the outside of the brain, which is so expanded in humans that it has to be folded in on itself in order to fit inside the skull. The cortex is the seat of higher brain functions, the bit of the brain we see with, hear with, think with. In particular, one of its main functions is association – bringing sensory information together with information on internal states and motivation to enable flexible and context-dependent decisions to be taken, rather than simple reflexive actions in response to isolated stimuli. While undoubtedly vastly more developed in humans, a new study suggests the cerebral cortex may have much more ancient origins than previously suspected.

All mammals have a cortex and it generally increases in size over evolution. Mice and rats have a smooth cortex, while that of cats is somewhat expanded and folded. Monkeys and apes show progressively bigger cortices as they get evolutionarily closer to humans. Dolphins and elephants also show highly expanded and folded cortices, so we are not the only species to arrive at this arrangement.

Expansion is coupled with an increase in the complexity of the cortex, as defined by the number of distinct cortical areas. This is mostly due to the emergence of additional association areas, where information from different inputs is integrated, and, in humans, an increase in distinct areas in the frontal and prefrontal cortex – the seat of the most sophisticated executive functions, including decision-making and long-term planning.

But when in evolution did the cortex actually evolve? Does it have some ancient precursor or did it arise as a new invention at some point? There has been considerable debate for decades over whether birds and reptiles have a counterpart of the cortex. They do have some regions that occupy the dorsal parts of the brain and perform somewhat similar functions, but their organization is so different from that of the cortex in mammals (which is arranged into discrete layers, while these regions in birds and reptiles are arranged into clusters of cells) that it has been difficult to establish their relationship.

Whether particular brain structures in different species are related to each other (i.e., whether they diverged from a single structure in a common ancestor) is often a subject of debate and controversy. It can be difficult if not impossible to determine based solely on location, anatomical organization or functional similarity. This is because it is relatively easy for these parameters to change over the course of evolution – they can be affected by changes to one or two genes, which means there is plenty of variation in these phenotypes within the population – the raw material for evolution by natural selection.

If the final phenotypic end-point of any particular region is quite variable, the opposite is true of the genetic pathways that specify the identity of the region at earlier stages of development. These involve master regulatory genes, whose expression is turned on or off in various parts of the embryo in response to earlier pathways that specify positional information (head from tail, back from belly, etc.). So, there are genes that differentiate nervous system tissue from the rest of the embryo, that differentiate forebrain, midbrain and hindbrain and that differentiate later subdivisions, including the cerebral cortex.

These genes act as “transcription factors”, controlling the expression of sets of proteins which define the mature characteristics of any particular region. While it is relatively easy for one of these effector proteins to change over evolution – affecting some specific characteristic of the region – it is much harder for the master regulatory genes to change. This is because they do not work alone – each area is defined by the expression of a combination of such genes, which are often turned on or off in a specific sequence. These genes interact in a complicated network of feedforward and feedback loops to orchestrate this sequence. The networks in which they operate are so interlocked and involved in so many different parts of the embryo that mutations to any one gene tend to have very drastic consequences and will be rapidly selected against. These early regulatory systems are thus far less variable and tend to be highly conserved across evolution. So much so, in fact, that in many cases the function of one of these genes in one species can be carried out perfectly well by a copy of the gene from even a very distantly related species.

It is thus possible to tell whether a brain region in one species is homologous to one in another species (which may look quite different in its mature characteristics) by looking at how those regions were specified. If they derive from regions of the embryo which are specified by the same sets of regulatory genes then one can infer they both evolved from the same region in a common ancestor, no matter how different they may look now.

Similar patterns of gene expression argue that the cortex of mammals and the “pallium” of birds and reptiles are indeed related to each other, but a new study from the lab of Detlev Arendt goes much further back in evolutionary time. They compared the pattern and sequence of genes involved in specifying the cerebral cortex in mammals and the mushroom bodies, a sensory-associative brain centre in a much simpler organism, an annelid worm, called Platynereis. While it had previously been suspected that there might be a relationship between these structures (particularly between the cortex and mushroom bodies in insects) it had been impossible to determine definitively due to technical difficulties in examining the expression patterns of more than one gene at a time. The researchers in Arendt’s lab solved this problem by developing a new image-registration technique so that many different gene expression patterns could be mapped onto a common template and compared.

They found that the same set of genes is expressed in these regions, in the same temporal sequence, under the influence of the same patterning mechanisms (those that specify where different structures develop in the embryo). Even further, very similar profiles of gene expression were observed in specific types of neurons in the mushroom bodies and in the cerebral cortex. This similarity extends to the mushroom bodies in the brain of the fruitfly Drosophila, which are well known to be involved in sensory-associative integration, as well as learning and memory.
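The logic of that comparison can be sketched in a few lines. The gene names and fingerprints below are invented placeholders, not the study’s actual data, and the overlap score is a generic similarity measure, not the registration method the authors used – the point is only that regions specified by overlapping sets of regulatory genes score as related, however different they look in their mature form.

```python
def overlap_score(profile_a, profile_b):
    """Jaccard similarity between two sets of expressed regulatory genes."""
    a, b = set(profile_a), set(profile_b)
    return len(a & b) / len(a | b)

# Hypothetical regulatory-gene fingerprints for three brain regions:
cortex = {"geneA", "geneB", "geneC", "geneD"}
mushroom_body = {"geneA", "geneB", "geneC", "geneE"}  # shares most of its fingerprint
optic_lobe = {"geneF", "geneG"}                       # shares none of it

# High overlap suggests descent from the same region in a common ancestor.
related = overlap_score(cortex, mushroom_body)    # 0.6
unrelated = overlap_score(cortex, optic_lobe)     # 0.0
```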

The conclusion from all these data is that the common ancestor of annelids, insects and vertebrates (the common ancestor of protostomes and deuterostomes, for those keeping score) already possessed some brain structure, specified by this defined set of genes, which was involved in integrating sensory information and performing associative functions. The morphology and connectivity of this structure has diverged significantly in each lineage since then, but the underlying similarities in function remain.

The extension of this principle of conservation in the genetic mechanisms specifying various organs or cell-types – which has been observed in eyes, limbs, hearts, and practically everywhere else one looks – to the part of the brain that most defines our humanity reinforces the notion espoused by Darwin, that “the difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind."


Tomer R, Denes AS, Tessmar-Raible K, & Arendt D (2010). Profiling by image registration reveals common origin of annelid mushroom bodies and vertebrate pallium. Cell, 142 (5), 800-9 PMID: 20813265

Wednesday, September 8, 2010

Wild-type humans

Wild-type is the term geneticists use to refer to non-mutants. It literally means organisms that are the same, genetically, as those in the wild, compared to ones that have been grown under coddled conditions in the lab for generations, going soft in the absence of natural selection, or that are specifically mutant at some gene or other. There are no wild-type humans.

Well, maybe there are a few, somewhere, but even they are not really non-mutants. We all carry millions of mutations in our genome – positions where the sequence in our genome differs from the typical sequence. Where everyone else has a “T”, you might have an “A”, for example. Most of these mutations have no consequence – they are simply neutral variation in DNA that has no discernible function. It turns out that most of the genome is not made of genes – the bits of DNA that code for proteins actually comprise only about 2-3% of the total sequence. Mutations that change the code for proteins are by far the most likely to cause disease or to result in an obvious phenotypic difference.

New DNA sequencing technologies have revealed how many mutations of that type each of us carries, on average. Lots: around 10,000 mutations that change the amino acid code of a protein. Those can be broken down based on frequency in the population. Some mutations are seen in many individuals in the population – this implies that they occurred long ago in some individual and have subsequently spread in the descendants of that individual. The inference is that such a mutation does not have a deleterious effect as it would have been selected against if it did. About 90% of protein-changing mutations fall into this common, ancient class. In fact, in many such cases it can be difficult to say which allele (which version of the sequence at a specific position) is “wild-type”.

Some of these common mutations are actually adaptive and may be much more common in some populations than others. These include mutations that affect skin colour, for example, reflecting adaptation to either high sunlight (requiring protective melanin) or lower sunlight (requiring less melanin to allow vitamin D production), as well as variants affecting diet, such as lactose tolerance, adaptation to environmental conditions, such as high altitude, or resistance to specific pathogens or parasites. So, what is wild-type in one population may be mutant in another.

The remaining 10% of mutations are either very rare or “private”, having only ever been observed in one individual. When searching for mutations responsible for genetic diseases, these are the variants that researchers go after. Of course, not all of these will have phenotypic effects. Many changes to the code of amino acids in a protein can be tolerated without compromising function. It is possible to estimate how many rare mutations each of us carries that are likely to affect protein function – this is between 100 and 200, quite a small number, really. As well as mutations that change one DNA base to another, these also include a different class – mutations which result in the deletion or duplication of a whole chunk of a chromosome (copy number variants).

This got me to idly musing about what would happen if you took someone’s DNA sequence and “corrected” all those mutations to the wild-type version. What would the result be? Those 200 or so rare mutations may generally be tolerated (they are clearly not lethal at least) but could still result in suboptimal performance of any number of biochemical, cellular or physiological processes in each one of us. They may also contribute to differences in morphology by subtly affecting processes of growth and development. As these mutations tend to reduce the function of the encoded protein, presumably it should be “better” to have the wild-type version. (For good measure, let’s imagine we can “correct” all the mutations predicted to affect protein function, even if they are slightly more common – say up to 5-10% frequency in the population, but not so common that we can’t say what the wild-type version is).


This was the premise of the excellent movie GATTACA. Apparently the book that inspired it was also good, but I haven’t read it because it didn’t have Uma Thurman in it. The movie did, Uma being somebody’s vision of what a wild-type human female would look like (and who would argue?). Her male counterpart, Jude Law, reinforces the impression that they would be, most importantly, ridiculously good-looking. Poor Ethan Hawke was cast as the guy born by traditional procreative methods, mutations and all.

Beauty is only skin deep, of course, and what really interests me is what would their brains look like? It takes a lot of genes to assemble a human brain and all of us carry mutations in many of those genes. Those differences affect how our brains are wired and influence many aspects of our personality, perception, cognition and behaviour (as pretty much all the posts on this blog describe). What would the brain of someone with each of those deleterious mutations corrected be like? Would they be a genius? Especially empathetic? A naturally coordinated athlete? Would they be left or right-handed? What would their personality be like? Is there a wild-type level of extroversion or neuroticism or open-mindedness?


For some of those traits the optimal level may be different from the maximal level. For brain size, for example, which is correlated with intelligence, there is a trade-off: first, in being able to make it out of the birth canal, and second, in metabolic demand – big brains use a lot of energy. And for many personality traits it is difficult to define a single optimal point along the spectrum – there are many different strategies that may succeed better in different contexts. Being fearless and aggressive may attract the ladies, but could also get you killed young. So, our wild-type humans may have perfect vision and perfect teeth, but it’s much harder to define a perfect personality.

Another consideration is that natural selection has only ever acted on individuals with a genetic burden of mutations – we may thus in some way be adapted to that situation. Some mutations that decrease the function of one protein may be beneficial in the context of another mutation in a different protein. Perhaps putting all the perfect proteins together in one person would not actually generate an optimal system.

In the movie, the generation of these “genetically perfect” beings was accomplished by gradually selecting out all such mutations by screening embryos created by in vitro fertilization. The fatal flaw in this idea is that it considers the spectrum of mutations as static in the population, suggesting that once all the bad ones are weeded out, that will be that. This ignores the fact that the rate of new mutations is actually quite high. Each of us carries about 70 new mutations that are not inherited from our parents. Most of these arise during generation of sperm. The reason that mutations in sperm are more common than in eggs is that women are born with all their eggs already generated. The cells that generate sperm, in contrast, are constantly dividing throughout life. Each division increases the chance of incorporating an error. That is the reason why the rate of dominant Mendelian diseases – which are those caused by single mutations and which include many cases of common diseases such as schizophrenia and autism – increases with paternal age.
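The arithmetic behind that paternal age effect can be sketched in a toy model. All parameter values below are illustrative assumptions of my own, not measured rates: the point is simply that the expected de novo mutation count grows with the number of germline cell divisions, which keep accumulating with paternal age, while the maternal contribution stays roughly fixed.

```python
# Toy model: expected de novo mutations in offspring as a function of
# paternal age. All numbers here are illustrative assumptions chosen to
# give totals in the right ballpark (~70 per child), not measured values.

DIVISIONS_AT_PUBERTY = 30      # assumed germline divisions before sperm production begins
DIVISIONS_PER_YEAR = 23        # assumed ongoing spermatogonial divisions per year
MUTATIONS_PER_DIVISION = 0.04  # assumed mutations incorporated per division
MATERNAL_CONTRIBUTION = 15     # assumed fixed maternal share (eggs do not keep dividing)

def expected_de_novo(paternal_age):
    """Expected de novo mutations: a fixed maternal share plus a paternal
    share that grows linearly with the number of germline divisions."""
    divisions = DIVISIONS_AT_PUBERTY + DIVISIONS_PER_YEAR * max(0, paternal_age - 15)
    return MATERNAL_CONTRIBUTION + MUTATIONS_PER_DIVISION * divisions

for age in (20, 30, 40, 50):
    print(age, round(expected_de_novo(age), 1))
```

Under these made-up parameters the paternal share roughly doubles between a 20-year-old and a 40-year-old father, which is the qualitative pattern that matters for the argument: weeding mutations out of one generation does nothing to stop new ones arising in the next.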

Of course, all of the discussion above is based on the premise that genetic effects on physical and psychological traits are predominant. This extreme form of genetic determinism was also espoused in GATTACA, to the point of predicting the cause and date of a person’s death! In reality, genetic factors have a large influence on many of these traits but by no means an exclusive one – intrinsic developmental variation, environmental effects and experience will all also contribute to varying extents. On the other hand, introducing mutations tends not only to change a phenotype but to increase the variance in the phenotype – as the system becomes more compromised, its output becomes more variable.

It would be interesting to ask, therefore, exactly how much variation in these traits would be left across our wild-type humans.



Ng, S., Turner, E., Robertson, P., Flygare, S., Bigham, A., Lee, C., Shaffer, T., Wong, M., Bhattacharjee, A., Eichler, E., Bamshad, M., Nickerson, D., & Shendure, J. (2009). Targeted capture and massively parallel sequencing of 12 human exomes Nature, 461 (7261), 272-276 DOI: 10.1038/nature08250

Roach, J., Glusman, G., Smit, A., Huff, C., Hubley, R., Shannon, P., Rowen, L., Pant, K., Goodman, N., Bamshad, M., Shendure, J., Drmanac, R., Jorde, L., Hood, L., & Galas, D. (2010). Analysis of Genetic Inheritance in a Family Quartet by Whole-Genome Sequencing Science, 328 (5978), 636-639 DOI: 10.1126/science.1186802

Friday, August 27, 2010

Coloured hearing in Williams syndrome


The idea that our genes can affect many of the traits that define us as individuals, including our personality, intelligence, talents and interests is one that some people find hard to accept. That this is the case is very clearly and dramatically demonstrated, however, by a number of genetic conditions, which have characteristic profiles of psychological traits. Genetic effects include influences on perception, sometimes quite profound, and other times remarkably selective. A recent study suggests that differences in perception in two conditions, synaesthesia and Williams syndrome, may share some unexpected similarities.

Williams syndrome is a genomic disorder caused by deletion of a specific segment of chromosome 7. Due to the presence of a number of repeated sequences, this region is prone to errors during replication that can result in deletion of the intervening stretch of the chromosome, which contains approximately 28 genes. The disorder is characterised by typical facial morphology, heart defects and a remarkably consistent profile of cognitive and personality traits. These include mild intellectual disability, with relative strength in language and extreme deficits in visuospatial abilities (including being able to perceive the relationships of objects in 3D space and to construct and mentally manipulate 3D representations). Williams patients are also highly social – often to the point of being over-friendly – empathetic and very talkative. This behaviour may belie a high level of anxiety, however.

One of the most remarkable features of Williams syndrome is the strong attraction of patients for music. Many show a strong interest in music from an early age and greater emotional responses to music. They are also more likely to play a musical instrument, some using music to reduce anxiety. A recent study from Elisabeth Dykens and colleagues adds a new twist to this story. They found in a neuroimaging experiment that in addition to activating the auditory cortex, music also stimulates visual activity and perceptions in Williams patients. In fact, this is not specific to music – non-musical sounds had the same or even stronger effects.

This is very reminiscent of what happens in a form of synaesthesia, called “coloured hearing”. In this condition, which is also heritable, sounds, sometimes specifically music or words, sometimes general sounds, are accompanied by a visual percept. These percepts are typically fairly simple – patches of colour, for example – and can be experienced out in the world or “in the mind’s eye”. (They are alternatively sometimes felt more as an “association” with a visual property, so that the sound of a trombone might be blue, while a piano might be green). Importantly, these visual percepts are both idiosyncratic and extremely consistent – middle C may evoke the image of a small purple cloud, a dog’s bark may set off yellow starbursts, etc.

Neuroimaging experiments in synaesthesia have also found activation of visual areas in response to sounds. Various models have been proposed to account for this, which I have discussed previously. They all involve cross-activation from auditory circuits to those that represent visual information. This may be mediated by extra physical connections in the brains of synaesthetes, presumably due to genetic effects on how the brain is wired during development. Alternatively, the wires could be there in everyone but just working differently in synaesthetes. It has so far been very difficult to distinguish between these possibilities.

The situation in Williams syndrome may be much more amenable to investigation. Unlike synaesthesia, we know what the genetic cause is in Williams syndrome. We know which genes are deleted and researchers are beginning to dissect which ones are associated with which symptoms. Some of these genes are known to function in nerve growth and guidance. It has also been demonstrated very clearly using diffusion tensor imaging that there are large differences in various circuits in the brains of Williams patients, including the presence of additional fibre bundles to or from the intraparietal sulcus, a region involved in visuospatial construction. It will be very interesting to determine whether similar extra connections can be detected between auditory and visual areas.

It is important to recognise, however, some crucial differences between the auditory-visual effects observed in Williams syndrome and in synaesthesia. The visual percepts reported in Williams syndrome are far more complex than those reported in synaesthesia. The former reportedly involve objects and scenes, more like a dreamscape than a simple blob of colour. They also lack the consistency which is one of the defining characteristics of synaesthesia. There may thus be more than one way to end up with coloured hearing.

Whatever the cause in these conditions, they both highlight the fact that genetic differences can have profound effects on how people perceive the world.

Thornton-Wells, T., Cannistraci, C., Anderson, A., Kim, C., Eapen, M., Gore, J., Blake, R., & Dykens, E. (2010). Auditory Attraction: Activation of Visual Cortex by Music and Sound in Williams Syndrome American Journal on Intellectual and Developmental Disabilities, 115 (2) DOI: 10.1352/1944-7588-115.172

Marenco, S., Siuta, M., Kippenhan, J., Grodofsky, S., Chang, W., Kohn, P., Mervis, C., Morris, C., Weinberger, D., Meyer-Lindenberg, A., Pierpaoli, C., & Berman, K. (2007). Genetic contributions to white matter architecture revealed by diffusion tensor imaging in Williams syndrome Proceedings of the National Academy of Sciences, 104 (38), 15117-15122 DOI: 10.1073/pnas.0704311104

Friday, August 20, 2010

When to blame your parents, and for what


Studies linking some aspect of parental behaviour with some trait in their offspring are depressingly common in the sociological literature. Though these studies typically only report a correlation between parental behaviour and whatever the trait is in the offspring, the implication, and often the explicit conclusion, is that one causes the other. These kinds of stories get huge play in the popular press (and in the blogosphere), where the conclusion of a causative relationship is rarely challenged. For example, the finding that children who grow up with more books in the house are more successful academically is taken as evidence that simply having books around makes kids smarter.

This kind of thinking illustrates a common and fundamental flaw in interpreting sociological or epidemiological findings – correlation does not imply causation. Red hair and freckles are highly correlated but one does not cause the other. Both are caused by something else (a mutation in a gene controlling pigmentation). It seems a simple enough distinction but it is astonishing how pervasive this mistake is, even among academics supposedly trained in statistical methodology.

In the case of books, the conclusion that having them around is the causative factor in academic success is simply not warranted by the findings. The data from this kind of study design do not pertain to that question. The books could simply be an indicator of the real cause (like freckles). It seems quite possible that the underlying link is between the IQ of the parents (or some other cognitive trait predicting both academic success and bookishness – curiosity, open-mindedness, interest in more abstract topics) and that of their children. (It is well established that such traits are quite heritable).

I am not claiming that that actually is the explanation – just that it is a highly plausible one that must be considered. In fact, the study design does not permit this conclusion to be drawn either, and that illustrates one of the major problems in dissecting the possible effects of nature and nurture. It is hugely difficult to separate confounding genetic effects on behaviour of both parents and offspring from the effects of the behaviours themselves. Adoption studies – especially of identical twins reared apart – do provide one way to dissociate genetic effects from those of the family environment. These have consistently found large effects of shared genes and very little effect of family environment on a wide range of behavioural traits.

A far trickier task is to dissociate the effects of parental behaviour prior to birth on the future behaviour of their offspring – adoption studies obviously cannot accomplish that. However, researchers in Cardiff, led by Anita Thapar, have come up with a clever and powerful new study design which does the trick. They have made use of the growing frequency of in vitro fertilisation to examine the effects of smoking during pregnancy. It is well known that smoking during pregnancy is associated with low birth weight and a number of other health issues. It is also associated with higher rates of antisocial behaviour in the offspring. Do these correlations really reflect the effects of smoking itself or could smoking be an indicator of a distinct underlying cause?

The IVF study design, which looked at records of 779 children, allowed these factors to be dissociated by splitting the mothers into two groups: those who were biologically related to their offspring and those who had used donor eggs and thus were unrelated to their offspring. In each group, the researchers then looked for a correlation between maternal smoking during pregnancy and two outcomes in the offspring: birth weight and a measure of antisocial behaviour. The findings were remarkably clear – smoking was associated with lower birth weight regardless of genetic relatedness. This effect is congruent with results from experimental animal studies on the effects of nicotine, cigarette smoke or carbon monoxide on birth weight and there are a variety of biological mechanisms postulated to explain the effect. So, all the evidence is consistent with this being a genuine effect of prenatal smoking per se.

But a very different picture was observed with respect to antisocial behaviour. High rates of antisocial behaviour were observed only in the offspring of mothers who smoked during pregnancy and were biologically related to them. So, prenatal smoking itself does not seem to influence antisocial behaviour – it is more likely an indicator of some underlying genetic effect on the behaviour of both the mother and the offspring. (See here for more on this).
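The logic of this design can be illustrated with a toy simulation (all parameter values and variable names are my own hypothetical choices, not the study’s data). Each mother gets a genetic liability that raises both her chance of smoking and, if she is biologically related to her child, the child’s antisocial score; smoking causally lowers birth weight but has no causal effect on behaviour. The smoking–behaviour correlation should then appear only in the related group, while the smoking–weight correlation appears in both.

```python
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(related, n=5000):
    """Return (corr(smoking, birth weight), corr(smoking, antisocial score))
    for mothers who are or are not biologically related to their offspring."""
    smoke, weight, antisocial = [], [], []
    for _ in range(n):
        g_mother = random.gauss(0, 1)                  # maternal genetic liability
        # higher liability -> higher probability of smoking during pregnancy
        s = 1 if random.random() < 1 / (1 + math.exp(-(g_mother - 1))) else 0
        # offspring liability: inherited if related, independent if donor egg
        g_child = (0.5 * g_mother + random.gauss(0, 0.75)) if related \
                  else random.gauss(0, 1)
        weight.append(-1.0 * s + random.gauss(0, 1))    # smoking causally lowers birth weight
        antisocial.append(g_child + random.gauss(0, 1)) # behaviour tracks the child's own genes
        smoke.append(s)
    return pearson(smoke, weight), pearson(smoke, antisocial)

for related in (True, False):
    w, a = simulate(related)
    print("related" if related else "donor egg", round(w, 2), round(a, 2))
```

Run with these assumptions, the smoking–birth-weight correlation comes out clearly negative in both groups, while the smoking–antisocial correlation is positive only in the genetically related group – exactly the dissociation the real study reported.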

So, smoking while pregnant is bad, mkay, for lots of reasons, but it will not make your child antisocial. And I would never argue against having books around, but articles proclaiming “Want smart kids? Here’s what to do” are uncritically promulgating an unfounded conclusion (also known as “talking shite”).


Evans, M., Kelley, J., Sikora, J., & Treiman, D. (2010). Family scholarly culture and educational success: Books and schooling in 27 nations Research in Social Stratification and Mobility, 28 (2), 171-197 DOI: 10.1016/j.rssm.2010.01.002

Rice, F., Harold, G., Boivin, J., Hay, D., van den Bree, M., & Thapar, A. (2009). Disentangling prenatal and inherited influences in humans with an experimental design Proceedings of the National Academy of Sciences, 106 (7), 2464-2467 DOI: 10.1073/pnas.0808798106