Robustness and fragility in neural development
So many things can go wrong in the development of the human brain that it is amazing it ever goes right. The fact that it usually does – that
the majority of people do not suffer from a neurodevelopmental disorder – is
due to the property engineers call robustness. This property has important implications for understanding
the genetic architecture of neurodevelopmental disorders – what kinds of
insults will the system be able to tolerate, and what kinds will it be vulnerable to?
The development of the brain involves many
thousands of different gene products acting in hundreds of distinct molecular
and cellular processes, all tightly coordinated in space and time – from
patterning and proliferation to cell migration, axon guidance, synapse
formation and many others. Large
numbers of proteins are involved in the biochemical pathways and networks
underlying each cell biological process.
Each of these systems has evolved not just to do a particular job, but
to do it robustly – to make sure this process happens even in the face of
diverse challenges.
Robustness is an emergent and highly adaptive property of complex systems that can be selected for in response to
particular pressures. These include extrinsic factors – variability in temperature, supply of nutrients, and so on – but also intrinsic factors. A major source of intrinsic variation is noise in gene expression – random fluctuations in the levels of all proteins in all
cells. These fluctuations arise due
to the probabilistic nature of gene transcription – whether a messenger RNA is
actively being made from a gene at any particular moment. The system must be able to deal with these fluctuations, and it can be argued that this noise actually acts as a buffer: a system that only worked within a narrow operating range for each component would be very vulnerable to the failure of any single part.
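To make this concrete, here is a minimal simulation sketch (in Python, using the standard "telegraph" model of transcription, with entirely invented rate constants): the gene flips randomly between active and inactive states, so mRNA counts swing widely even though nothing about the underlying system has changed.

import random

# Telegraph model: the gene flips between ON and OFF; mRNA is transcribed
# only while ON and decays at a constant rate. All rates are invented.
k_on, k_off = 0.1, 0.1    # gene activation / inactivation rates (assumed)
k_tx, k_deg = 10.0, 1.0   # transcription and mRNA decay rates (assumed)

def gillespie(t_end=500.0):
    t, gene_on, mrna, counts = 0.0, False, 0, []
    while t < t_end:
        rates = [k_off if gene_on else k_on,   # flip the gene state
                 k_tx if gene_on else 0.0,     # produce one mRNA
                 k_deg * mrna]                 # degrade one mRNA
        total = sum(rates)
        t += random.expovariate(total)         # waiting time to next event
        r = random.uniform(0.0, total)
        if r < rates[0]:
            gene_on = not gene_on
        elif r < rates[0] + rates[1]:
            mrna += 1
        else:
            mrna -= 1
        counts.append(mrna)
    return counts

counts = gillespie()
print(f"mean mRNA ~ {sum(counts) / len(counts):.1f}, "
      f"range {min(counts)}-{max(counts)}")   # large swings, constant rates

Any cell biological process built on components whose levels bounce around like this cannot depend on those levels being exact.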
Natural selection will therefore favour
system architectures that are more robust to environmental and intrinsic variation. In the process, such systems also
indirectly become robust to the other major source of variation –
mutations.
Many individual components can be deleted
entirely with no discernible effect on the system (which is why looking
exhaustively for a phenotype in mouse mutants can be so frustrating – many gene
knockouts are irritatingly normal).
You could say that if the knockout of a gene does not affect a particular process, then the gene product is not actually involved in that process, but that is not always the case. One can often show that a protein is involved biochemically
and even that the system is sensitive to changes in the level of that protein –
increased expression can often cause a phenotype even when loss-of-function
manipulations do not.
Direct evidence for robustness of
neurodevelopmental systems comes from examples of genetic background effects on
phenotypes caused by specific mutations.
While many components of the system can be deleted without effect,
others do cause a clear phenotype when mutated. However, such phenotypes are often modified by the genetic
background. This is commonly seen
in mouse experiments, for example, where the effect of a mutation may vary
widely when it is crossed into various inbred strains. The implication is that there are some
genetic differences between strains that by themselves have no effect on the
phenotype, but that are clearly involved in the system or process, as they
strongly modify the effect of another mutation.
How is this relevant to understanding
so-called complex disorders? There
are two schools of thought on the genetic architecture of these
conditions. One considers the
symptoms of, say, autism or schizophrenia or epilepsy as the consequence of
mutation in any one of a very large number of distinct genes. This is the scenario for intellectual
disability, for example, and also for many other conditions like inherited
blindness or deafness. There are
hundreds of distinct mutations that can result in these symptoms. The mutations in these cases are almost
always ones that have a dramatic effect on the level or function of the encoded
protein.
The other model is that complex disorders
arise, in many cases, due to the combined effects of a very large number of
common polymorphisms – these are bases in the genome where the sequence is
variable in the population (e.g., there might be an “A” in some people but a
“G” in others). The human genome
contains millions of such sites and many consider the specific combination of variants that each person inherits at these sites to be the most important
determinant of their phenotype. (I
disagree, especially when it comes to disease). The idea for disorders such as schizophrenia is that at many
of these sites (perhaps thousands of them), one of the variants may predispose
slightly to the illness. Each one
has an almost negligible effect alone, but if you are unlucky enough to inherit
a lot of them, then the system might be pushed over the level of burden that it
can tolerate, into a pathogenic state.
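As a toy illustration of this liability-threshold idea (all allele frequencies and effect sizes below are invented for the purpose), the following sketch sums thousands of tiny per-variant effects across a simulated population and labels the top 1% of the burden distribution as affected. No single variant matters; only the aggregate load does.

import random

N_VARIANTS, N_PEOPLE = 2000, 10000
# Invented allele frequencies and (tiny, mostly near-zero) effect sizes.
freqs = [random.uniform(0.05, 0.5) for _ in range(N_VARIANTS)]
effects = [random.gauss(0.0, 0.01) for _ in range(N_VARIANTS)]

def liability():
    # Each person carries two alleles per variant; sum the tiny effects
    # of whichever risk alleles they happen to inherit.
    return sum(e * ((random.random() < f) + (random.random() < f))
               for f, e in zip(freqs, effects))

scores = sorted(liability() for _ in range(N_PEOPLE))
threshold = scores[int(0.99 * N_PEOPLE)]   # call the top 1% "affected"
print(f"liability threshold ~ {threshold:.2f}; largest single-variant "
      f"effect = {max(abs(e) for e in effects):.3f}")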
These are the two most extreme positions –
there are also many models that incorporate effects of both rare mutations and
common polymorphisms. Models
incorporating common variants as modifiers of the effects of rare mutations
make a lot of biological sense.
What I want to consider here is the model that the disease is caused in
some individuals purely by the combined effects of hundreds or thousands of
common variants (without what I call a “proper mutation”).
Ironically, robustness has been invoked by
both proponents and opponents of this idea. I have argued that neurodevelopmental systems should be
robust to the combined effects of many variants that have only very tiny
effects on protein expression or function (which is the case for most common
variants). This is precisely
because the system has evolved to buffer fluctuations in many components all
the time. In addition to being an
intrinsic, passive property of the architecture of developmental networks,
robustness is also actively promoted through homeostatic feedback loops, which
can maintain optimal performance in the face of variation by regulating the
levels of other components to compensate.
The effects of such variants should therefore NOT be cumulative – they should
be absorbed by the system. (In
fact, you could argue that a certain level of noise in the system is a “design
feature” because it enables this buffering).
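A minimal sketch of how such a feedback loop absorbs perturbations; this is a hypothetical two-component toy model, not any specific biological circuit. Even when one component is knocked down two- or four-fold, the output settles back near its set point.

def simulate(a_level, steps=200, setpoint=1.0, gain=0.5):
    # Output depends on components A and B; a feedback loop adjusts B
    # whenever the output drifts from the set point. All values invented.
    b = 1.0
    output = a_level * b
    for _ in range(steps):
        output = a_level * b
        b += gain * (setpoint - output)   # negative feedback on B's level
        b = max(b, 0.0)
    return output

for a in (1.0, 0.5, 0.25):                # knock A down 2-fold, then 4-fold
    print(f"A = {a:.2f} -> output settles at {simulate(a):.3f}")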
Others have argued precisely the opposite –
that robustness permits cryptic genetic variation to accumulate in
populations. Cryptic genetic variation has no effect in the context in which it arises (allowing it to
escape selection) but, in another context – say in a different environment, or
a different genetic background – can have a large effect. This is exactly what robustness allows
to happen – indeed, the fact that cryptic genetic variation exists provides
some of the best evidence we have that these systems are robust, as it shows
directly that mutations in some components are tolerated in most contexts. But is there any evidence that such
cryptic variation comprises hundreds or thousands of common variants?
To be fair, proving that this is the case would
be very difficult. You could argue
from animal breeding experiments that the continuing response to selection of
many traits means that there must be a vast pool of genetic variation that can
affect them, which can be cumulatively enriched by selective breeding, almost
ad infinitum. However, new
mutations are known to make at least some contribution to this continued
response to selection. In
addition, in most cases where the genetics of such continuously distributed
traits have been unpicked (by identifying the specific factors contributing to
strain differences, for example), they come down to perhaps tens of loci showing
very strong and complex epistatic interactions (1, 2, 3). Thus, the fact that variation in a trait is multigenic does not mean it is affected by mutations of small individual effect – an effectively continuous distribution can emerge from very complex epistatic interactions between a fairly small number of mutations, each with a surprisingly large effect in isolation.
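To see how few loci this takes, here is a toy calculation (with invented effect sizes) in which just six interacting loci generate a dense, effectively continuous spread of trait values across their 64 possible genotypes.

import itertools, random

N_LOCI = 6
# Invented effect sizes: strong main effects plus strong pairwise epistasis.
main = [random.gauss(0.0, 1.0) for _ in range(N_LOCI)]
epistasis = {(i, j): random.gauss(0.0, 1.5)
             for i, j in itertools.combinations(range(N_LOCI), 2)}

def phenotype(genotype):   # genotype: tuple of 0/1 alleles, one per locus
    value = sum(m * g for m, g in zip(main, genotype))
    value += sum(w * genotype[i] * genotype[j]
                 for (i, j), w in epistasis.items())
    return value

values = sorted(phenotype(g) for g in itertools.product((0, 1), repeat=N_LOCI))
print(f"{len(values)} genotypes spanning {values[0]:.1f} to {values[-1]:.1f}")
# Just 64 genotypes already fill out a dense, near-continuous range.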
(I would be keen to hear of any examples
showing real polygenicity on the level of hundreds or thousands of
variants).
In the case of genetic modifiers of
specific mutations – say, where a mutation causes a very different phenotype in
different mouse strains – most of the effects that have been identified have
been mapped to one or a small number of mutations which have no effect by
themselves, but which strongly modify the phenotype caused by another
mutation.
These and other findings suggest that (i)
cryptic genetic variation relevant to disease is certainly likely to exist and
to have important effects on phenotype, but that (ii) such genetic background
effects can most likely be ascribed to one, several, or perhaps tens of
mutations, as opposed to hundreds or thousands of common polymorphisms.
This is already too long, but it raises the question:
if neurodevelopmental systems are so robust, then why do we ever get
neurodevelopmental disease? The
paradox of systems that are generally robust is that they may be quite
vulnerable to large variation in a specific subset of components. Why specific types of genes are in this
set, while others can be completely deleted without effect, is the big
question. More on that in a
subsequent post…
Awesome post.
In the Selfish Gene, Dawkins talks about "survival of the fittest" being a special case of "survival of the stable". Same idea I guess.
In general I think most of our theories of developmental disorders are too simple and we need to take some lessons from complexity theory. You talk about feedback loops but another critically important concept I think is redundancy, meaning that any one component of the system can fail and it doesn't have much effect on the organism. Redundancy sounds inefficient but it may actually be necessary for a complex system to achieve stability.
It follows then that neurodevelopmental disorders arise when there are multiple hits to the same system. Dorothy Bishop said something along these lines a few years ago but I don't think she framed it in quite these terms.
http://www.tandfonline.com/doi/abs/10.1080/17470210500489372
Thanks Jon. I think in the case of neurodevelopmental systems that distributed robustness will play a more major role than direct redundancy of specific components (where two or more components do the exact same job - though there are plenty of examples of that too, where it takes a double mutant to get a phenotype).
What I didn't have space to develop here is the flip-side of robustness, which is fragility. If the system is so robust, then
why does it sometimes go wrong? In fact, we know of lots of examples of single mutations that cause neurodevelopmental disease - however, I think those are the ones we know about precisely because their effects are most penetrant. It may well be that more cases are caused by two or more mutations. Even for the highly penetrant ones - say for Fragile X syndrome or velo-cardio-facial syndrome - the specific phenotype that emerges is quite variable and very likely affected by genetic background.
But here's a question: why is the system so sensitive to mutation of some genes (even to dosage in many cases) while completely insensitive to even complete removal of other ones? Are there some particular characteristics that typify each set?