What are the Laws of Biology?
The reductionist perspective on biology is that it all boils down to physics eventually. That anything that is happening in a living organism can be fully accounted for by an explanation at the level of matter in motion – atoms and molecules moving, exerting forces on each other, bumping into each other, exchanging energy with each other. And, from one vantage point, that is absolutely true – there’s no magic in there, no mystical vital essence – it’s clearly all physical stuff controlled by physical laws.
But that perspective does not provide a sufficient explanation of life. While living things obey the laws of physics, one cannot deduce either their existence or their behaviour from those laws alone. There are some other factors at work – higher-order principles of design and architecture of complex systems, especially ones that are either designed or evolved to produce purposeful behaviour. Living systems are for something – ultimately, they are for replicating themselves, but they have lots of subsystems for the various functions required to achieve that goal. (If “for something” sounds too anthropomorphic or teleological, we can at least say that they “do something”).
Much of biology is concerned with working out the details of all those subsystems, but we rarely discuss the more abstract principles by which they operate. We live down in the details and we drag students down there with us. We may hope that general principles will emerge from these studies, and to a certain extent they do. But it feels like we are often groping for the bigger picture – always trying to build it up from the components and processes we happen to have identified in some specific area, rather than approaching it in any principled fashion or basing it on any more general foundation.
So, what are these principles? Do they even exist? Can we say anything general about how life works? Is there any theoretical framework to guide the interpretation of all these details?
Well, of course, the very bedrock of biology is the theory of evolution by natural selection. That is essentially a simple algorithm: take a population of individuals, select the fittest (by whatever criteria are relevant), allow them to breed, add more variation in the process, and repeat. And repeat. And repeat. The important thing about this process is that it builds functionality from randomness by incorporating a ratchet-like mechanism. Every generation keeps the good (random) stuff from the last one and builds on it. In this way, evolution progressively incorporates design into living things – not through a conscious, forward-looking process, but retrospectively, by keeping the designs that work (for whatever the organism needs to do to survive and reproduce) and then allowing a search for further improvements.
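That algorithm is simple enough to sketch in a few lines of Python. Everything specific here is an invented toy – a bit-string genome, fitness as the count of 1-bits, truncation selection of the fittest half – but the loop itself (select, breed with variation, keep what works, repeat) is the process described above, with elitism playing the role of the ratchet.

```python
import random

random.seed(1)

GENOME_LEN, POP_SIZE, MUT_RATE = 32, 50, 0.02

def fitness(genome):
    # Toy criterion: the count of 1-bits stands in for
    # "whatever criteria are relevant".
    return sum(genome)

def evolve(generations=100):
    # Start from pure randomness.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    start_best = max(fitness(g) for g in pop)
    for _ in range(generations):
        # Select the fittest half of the population...
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]
        # ...let them breed, adding variation through random mutation...
        children = [[bit ^ (random.random() < MUT_RATE)
                     for bit in random.choice(parents)]
                    for _ in range(POP_SIZE - len(parents))]
        # ...and keep the parents too: the ratchet that preserves
        # whatever already works.
        pop = parents + children
    return start_best, max(fitness(g) for g in pop)

start_best, final_best = evolve()
```

Because the best individuals are never discarded, fitness can only ratchet upwards; the mutated children supply the ongoing search for further improvements.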
But that search space is not infinite – or at least, only a very small subset of the possible search space is actually explored. I often use a quote from the computer scientist Gerald Weinberg: “Things are the way they are because they got that way”. It nicely captures the idea that evolution is a series of frozen accidents and that understanding the way living systems are put together requires an evolutionary perspective. That’s true, but it misses a crucial point: sometimes things are the way they are because that’s the only way that works.
Natural selection can explain how complex and purposeful systems evolve but by itself it doesn’t explain why they are the way they are, and not some other way. That comes down to engineering. If you want a system to do X, there is usually a limited set of ways in which that can be achieved. These often involve quite abstract principles that can be implemented in all kinds of different systems, biological or designed.
Systems biology is the study of those kinds of principles in living organisms – the analysis of circuits and networks of genes, proteins, or cells, from an engineering design perspective. This approach shifts the focus from the flux of energy and matter to emphasise instead the flow of information and the computations that are performed on it, which enable a given circuit or network to perform its function.
In any complex network with large numbers of components there is an effectively infinite number of ways in which those components could interact. This is obviously true at the global level, but even when talking about just two or three components at a time, there are many possible permutations for how they can affect each other. For a network of three transcription factors, for example, A could activate B, but repress C; or A could activate B and together they could repress C; C could be repressed by either A OR B or only when both A AND B are present; C could feed back to inactivate A, etc., etc. You can see there is a huge number of possible arrangements.
The core finding of systems biology is that only a very small subset of possible network motifs is actually used and that these motifs recur in all kinds of different systems, from transcriptional to biochemical to neural networks. This is because only those arrangements of interactions effectively perform some useful operation, which underlies some necessary function at a cellular or organismal level. There are different arrangements for input summation, input comparison, integration over time, high-pass or low-pass filtering, negative auto-regulation, coincidence detection, periodic oscillation, bistability, rapid onset response, rapid offset response, turning a graded signal into a sharp pulse or boundary, and so on, and so on.
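One of the motifs in that list, negative auto-regulation (a gene product repressing its own production), illustrates why a particular arrangement gets reused: it speeds up the rise to steady state compared with simple constitutive expression tuned to the same output level. A crude Euler integration shows the effect; all parameter values here are invented for illustration, not taken from any real circuit.

```python
def time_to_90pct(production, alpha=1.0, dt=0.001, t_max=5.0):
    """Euler-integrate dx/dt = production(x) - alpha*x and return the
    time at which x first reaches 90% of its final value."""
    x, trace = 0.0, []
    for step in range(int(t_max / dt)):
        x += (production(x) - alpha * x) * dt
        trace.append(((step + 1) * dt, x))
    x_final = trace[-1][1]
    return next(t for t, x in trace if x >= 0.9 * x_final)

# Simple regulation: constant production, steady state beta/alpha = 1.
t_simple = time_to_90pct(lambda x: 1.0)

# Negative auto-regulation: a strong promoter (beta = 11) repressed by
# its own product (K = 0.1), tuned so the steady state is also 1.
t_nar = time_to_90pct(lambda x: 11.0 * 0.1 / (0.1 + x))
```

The auto-regulated circuit starts with a much higher production rate and then throttles itself as it approaches the target level, so it reaches the same steady state in a fraction of the time – a design choice engineers would recognise as feedback control.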
These are all familiar concepts and designs in engineering and computing, with well-known properties. In living organisms there is one other general property that the designs must satisfy: robustness. They have to work with noisy components, at a scale that’s highly susceptible to thermal noise and environmental perturbations. Of the subset of designs that perform some operation, only a much smaller subset will do it robustly enough to be useful in a living organism. That is, they can still perform their particular functions in the face of noisy or fluctuating inputs or variation in the number of components constituting the elements of the network itself.
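To make “robustly” concrete, the same kind of toy calculation (same invented parameters as above) can compare how a simply regulated gene and a negatively auto-regulated gene respond when the production rate fluctuates two-fold – a stand-in for the noisy bursts of expression real cells experience.

```python
def steady_state(production, alpha=1.0, dt=0.001, t_max=20.0):
    """Euler-integrate dx/dt = production(x) - alpha*x until it settles."""
    x = 0.0
    for _ in range(int(t_max / dt)):
        x += (production(x) - alpha * x) * dt
    return x

def fold_change(production_weak, production_strong):
    # How much does the output shift when production doubles?
    return steady_state(production_strong) / steady_state(production_weak)

# Simple regulation: doubling the production rate doubles the output.
simple = fold_change(lambda x: 1.0, lambda x: 2.0)

# Negative auto-regulation (production = beta*K/(K+x), K = 0.1):
# doubling beta is largely absorbed by the feedback.
nar = fold_change(lambda x: 11.0 * 0.1 / (0.1 + x),
                  lambda x: 22.0 * 0.1 / (0.1 + x))
```

In this sketch the open-loop circuit passes the two-fold fluctuation straight through to its output, while the feedback circuit compresses it substantially – the kind of buffering that makes a motif usable in a noisy cellular environment.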
These robust network motifs are the computational primitives from which more sophisticated systems can be assembled. When combined in specific ways they produce powerful systems for integrating information, accumulating evidence and making decisions – for example, to adopt one cell fate over another, to switch on a metabolic pathway, to infer the existence of an object from sensory inputs, to take some action in a certain situation.
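One such decision-making combination can be caricatured as an accumulator: integrate noisy evidence over time and commit only when a confidence bound is crossed. This is the drift-diffusion scheme often used to model perceptual decisions; the numbers below are arbitrary, chosen only to make the behaviour visible.

```python
import random

random.seed(0)

def decide(drift, threshold=10.0, noise=1.0):
    """Accumulate noisy evidence samples until a decision bound
    is crossed. Positive drift favours option A; the noise term
    models unreliable inputs."""
    total, steps = 0.0, 0
    while abs(total) < threshold:
        total += drift + random.gauss(0.0, noise)
        steps += 1
    return ("A" if total > 0 else "B"), steps

choice, steps = decide(drift=0.5)
```

Any single evidence sample is dominated by noise, yet integration over time makes the eventual choice highly reliable – turning unreliable components into a robust decision, exactly the trick the motifs above must perform.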
A conceptual framework for systems biology
Understanding how such systems operate can be greatly advanced by incorporating principles from a wide range of fields, including control theory or cybernetics, information theory, computation theory, thermodynamics, decision theory, game theory, network theory, and many others. Though each of these is its own area, with its own scholarly traditions, they can be combined into a broader schema. Writing in the 1960s, Ludwig von Bertalanffy – an embryologist and philosopher – recognised the conceptual and explanatory power of an overarching systems perspective, which he called simply General System Theory.
Even this broad framework has limitations, however, as does the modern field of Systems Biology. The focus on circuit designs that mediate various types of information processing and computation is certainly an apt way of approaching living systems, but it remains, perhaps, too static, linear, and unidirectional.
To fully understand how living organisms function, we need to go a little further, beyond a purely mechanistic computational perspective. Because living organisms are essentially goal-oriented, they are more than passive stimulus-response machines that transform inputs into outputs. They are proactive agents that actively maintain internal models of themselves and of the world and that accommodate to incoming information by updating those models and altering their internal states in order to achieve their short- and long-term goals.
This means information is interpreted in the context of the state of the entire cell, or organism, which includes a record or memory of past states, as well as past decisions and outcomes. It is not just a message that is propagated through the system – it means something to the receiver and that meaning inheres not just in the message itself, but in the history and state of the receiver (whether that is a protein, a cell, an ensemble of cells, or the whole organism). The system is thus continuously in flux, with information flowing “down” as well as “up”, through a constantly interacting hierarchy of networks and sub-networks.
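The idea that the meaning of a message depends on the receiver’s internal state has a simple quantitative analogue in Bayesian updating. This is a toy illustration with made-up probabilities, not a claim about any specific cell: the same incoming signal shifts the beliefs of two receivers by different amounts because they start from different internal models.

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule: how much an incoming signal shifts belief
    in a hypothesis, given the receiver's prior state."""
    evidence = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
    return prior * p_obs_given_h / evidence

# The same message (a signal detected, with P(obs|H) = 0.8 and
# P(obs|not-H) = 0.2) "means" different things to receivers whose
# internal states differ:
naive = posterior(0.5, 0.8, 0.2)   # uncommitted receiver
primed = posterior(0.9, 0.8, 0.2)  # receiver whose history favours H
```

The message itself is identical in both cases; the resulting internal state is not – the interpretation inheres partly in the receiver.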
A mature science of biology should thus be predicated on a philosophy more rooted in process than in fixed entities and states. These processes of flux can be treated mathematically in complexity theory, especially dynamical systems theory, and the study of self-organising systems and emergence.
In addition, the field of semiotics (the study of signs and symbols) provides a principled approach to consider meaning. It emerged from linguistics, but the principles can be applied just as well to any system where information is passed from one element to another, and where the state and history of the receiver influence its interpretation of the message.
In hierarchical systems, this perspective yields an important insight – because messages are passed between levels, and because this passing involves spatial or temporal filtering, many details are lost along the way. Those details are, in fact, inconsequential: multiple physical states at a lower level can mean the same thing to a higher level, or when integrated over a longer timeframe, even though their information content is formally different. This means that the low-level laws of physics, while not violated in any way, are not sufficient in themselves to explain the behaviour of living organisms that process information in this way. It requires thinking of causation in a more extended fashion, both spatially and temporally, not solely based on the instantaneous positions and momenta of all the elementary particles of a system.
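That many-to-one mapping can be sketched trivially (all numbers here are hypothetical): a level that only “sees” whether mean receptor occupancy crosses a threshold cannot distinguish physically different microstates that carry the same meaning.

```python
def high_level_reading(occupancies, threshold=0.5):
    """Coarse-grain a microstate (the exact per-receptor occupancies)
    into the single bit the next level up actually responds to."""
    return sum(occupancies) / len(occupancies) > threshold

# Two physically distinct low-level states...
state_a = [0.9, 0.7, 0.8, 0.6]
state_b = [0.6, 0.95, 0.7, 0.75]

# ...that mean exactly the same thing one level up.
same_meaning = high_level_reading(state_a) == high_level_reading(state_b)
```

Everything that distinguishes the two microstates is filtered out in transit; only the coarse-grained bit is causally relevant to what happens next at the higher level.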
A new pedagogical approach in Biology
In the basic undergraduate Biology textbook I have in my office there are no chapters or sections describing the kinds of principles discussed above. There is no mention of them at all, in fact. The words “system”, “network”, “computation” and “information” do not even appear in the index. The same is true for the textbooks on my shelf on Biochemistry, Molecular and Cell Biology, Developmental Biology, Genetics, and even Neuroscience.
Each of these books is filled with detail about how particular subsystems work and each of them is almost completely lacking in any underpinning conceptual theory. Most of what we do in biology and much of what we teach is describing what’s happening – not what a system is doing. We’re always trying to figure out how some particular system works, without knowing anything about how systems work in general. Biology as a whole, along with its sub-disciplines, is simply not taught from that perspective.
This may be because it necessarily involves mathematics and principles from physics, computing, and engineering, and many biologists are not very comfortable with those fields – some are acutely math-phobic. (I’m embarrassed to say my own mathematical skills have atrophied through decades of neglect). Mainly for that reason, the areas of science that do deal with these abstract principles – like Systems Biology or Computational Neuroscience, or more generally relevant fields like Cybernetics or Complexity Theory – are, ironically, seen as arcane specialties rather than as a general conceptual foundation for Biology.
As we have learned more and more details in more and more areas, we seem to have actually moved further and further away from any kind of unifying framework. If anything, discussion of these kinds of issues was more lively and probably more influential in the early and mid-1900s when scientists were not so inundated by details. It was certainly easier at that time to be a true polymath and to bring to bear on biological questions principles discovered first in physics, computing, economics, or other areas.
But now we’re drowning in data. We need to educate a new breed of biologists who are equipped to deal with it. I don’t mean just technically proficient in moving it around and feeding it into black box machine-learning algorithms in the hope of detecting some statistical patterns in it. And I don’t necessarily mean expert in all the complicated mathematics underlying all the areas mentioned above. I do mean equipped at least with the right conceptual and philosophical framework to really understand how living systems work.
How to get to that point is the challenge, but one I think we should be thinking about.
Science and the Modern World. Alfred North Whitehead, 1925.
What is Life? Erwin Schrödinger, 1943.
Cybernetics. Or Control and Communication in the Animal and the Machine. Norbert Wiener, 1948.
The Strategy of the Genes. Conrad Waddington, 1957.
The Computer and the Brain. John von Neumann, 1958.
General System Theory. Foundation, Theory, Applications. Ludwig von Bertalanffy, 1969.
Gödel, Escher, Bach. An Eternal Golden Braid. Douglas Hofstadter, 1979.
The Extended Phenotype. Richard Dawkins, 1982.
Order out of Chaos. Man’s New Dialogue with Nature. Ilya Prigogine and Isabelle Stengers, 1984.
Endless Forms Most Beautiful. Sean Carroll, 2005.
Robustness and Evolvability in Living Systems. Andreas Wagner, 2007.
Complexity: A Guided Tour. Melanie Mitchell, 2009.
The Information. A History, A Theory, A Flood. James Gleick, 2011.
Cells to Civilisations. The Principles of Change that Shape Life. Enrico Coen, 2015.