Getting to the bottom of reductionism – is it all just physics in the end?

There was some interesting recent discussion on Twitter regarding claims made in a new book by physicist Sabine Hossenfelder, in which she at least seems to assert that everything that happens in the universe is reducible to, and deducible from, the low-level laws of physics. Strikingly, she presents this view as an irrefutable scientific fact, rather than an arguable philosophical position. It’s worth digging into these ideas to probe the notion that the behavior of all complicated things, including living organisms, just comes down to physics in the end.

  

Patrick Baud quoted several passages from the book, “Existential Physics”, that argue that reductionist theories are the only game in town. With the important caveat that I have not read the book in full, and granting that some additional nuance is probably added elsewhere, it is worth quoting these passages in full to try and get the gist of these arguments. I’ve interspersed a few brief comments between the quoted sections: 

  

“Countless experiments have confirmed for millennia that things are made of smaller things, and if you know what the small things do, you can tell what the large things do. There’s not a single known exception to this rule. There is not even a consistent theory for such an exception.”

 

This can be taken in several ways. If it just means that knowing, at any given moment, what all the bits of a system are doing tells you what the complete system is doing, then it's trivial. (It is in fact a statement about us, and what we know, not really about the system, per se.) If, however, it's implying that all the causes of the behavior of a given system originate in the laws governing the smallest elements (i.e., bottom-up), then it's a much bolder claim, one which she seems to be making here:

 

“A lot of people seem to think it is merely a philosophical stance that the behavior of composite object (for example, you) is determined by the behavior of its constituents – that is, subatomic particles. They call it reductionism or materialism or, sometimes, physicalism, as if giving it a name that ends in -ism will somehow make it disappear. But reductionism – according to which the behavior of an object can be deduced from (“reduced to” as the philosophers would say) the properties, behavior, and interactions of the object’s constituents – is not a philosophy. It’s one of the best established facts about nature.”

 

First, it’s very much a philosophical position, and a highly debatable one, as I hope what follows will illustrate. Second, reductionism should not be equated with materialism or physicalism – there are non-reductive forms of both those stances. They basically just say: “no magic!” They don’t say anything about the nature of causation in physical systems and certainly don’t rule out whole-part or top-down causation. Third, “reduced to” and “deduced from” are equated here, but in reality are not the same thing. Reducing something to the behaviour of its parts is an explanation. Deducing a behaviour from those parts is a prediction, which is often much harder, and which, in a sense, includes explaining how it came to be, not just its behavior at a given moment.

 

And finally, the phrase “and interactions of the object’s constituents” is doing a lot of heavy lifting here – it’s what governs those interactions that is at issue. You can obviously say that a system behaves the way it does, at any given moment, because its particles are arranged in a certain organisation (and the laws of physics then entail its behavior). But you can just as well say that the particles are arranged that way (i.e., they got arranged that way) because they create a system that behaves that way. This is answering a diachronic “how come?” question, rather than a synchronic “how?” question. These perspectives are complementary, not conflicting.

 

Hossenfelder continues:

 

“We certainly know of many things that we cannot currently predict, for our mathematical skills and computational tools are limited. The average human brain, for example, contains about 1000 trillion trillion atoms. Even with today’s most powerful supercomputers, no one can calculate just how all these atoms interact to create conscious thought. But we also have no reason to think it is not possible. For all we currently know if we had a big enough computer, nothing will prevent us from simulating a brain item by item.”

 

There’s a revealing assumption here – that conscious thought is “created by the interaction of atoms”. Is that the right way to think about it? I mean, atoms are certainly involved – any kind of physical structure with dynamics which support consciousness must be composed of atoms. But is that the right level to look for what makes those structures or those dynamics special? If you start with that position, then you’re making a circular argument.

 

“In contrast, assuming that composite systems – brains, society, the universe as a whole – display any kind of behavior that is not derived from the behavior of their constituents is unnecessary. No evidence calls for it. It is as unnecessary as the hypothesis of God. Not wrong, but ascientific.”

 

Well, this is throwing down the gauntlet! It casts any kind of holistic, non-reductive thinking as mystical – on the level of religious superstition. This will be news to all the chemists, biologists, psychologists, sociologists and practitioners of all the other “special sciences”. Physics rules, and that’s that. All the action is really at the bottom.

 

It’s not, however, entirely clear what “derived from” means here. Again, if it just means that a full description of the behavior of the particles of a system entails a full description of the behavior of the whole system, well, no one’s going to argue with that. If you know what all the atoms in my car are doing, you know what the car is doing. If, however, it is meant to imply that such a description provides an explanation of why the system is the way it is, how it came to be, or why it behaves that way, well then I, for one, certainly will argue. Your detailed picture of all those atoms of my car (and me, as the driver) won’t tell you that I’m driving to pick up milk.

 

Just to make clear that Hossenfelder is not an outlier in these positions, here’s a more explicit statement from Ethan Siegel:


“…the fundamental laws that govern the smallest constituents of matter and energy, when applied to the Universe over long enough cosmic timescales, can explain everything that will ever emerge. This means that the formation of literally everything in our Universe, from atomic nuclei to atoms to simple molecules to complex molecules to life to intelligence to consciousness and beyond, can all be understood as something that emerges directly from the fundamental laws underpinning reality, with no additional laws, forces, or interactions required.”

 

That really makes it quite explicit. Not only can the laws of physics help us understand why a system behaves the way it does, they can explain “the formation” of such systems – i.e., how they came to be the way they are. That may be true for things like atoms and planets and galaxies, but we’ll see below that it falls short for more complex entities, including living organisms.

 

Siegel goes on to say that: “The alternative proposition is emergence, which states that qualitatively novel properties are found in more complex systems that can never, even in principle, be derived or computed from fundamental laws, principles, and entities.”

 

This is a common contrast to draw – reductionism versus emergence. But emergence is a famously slippery and contentious concept, so much so that Siegel (like Hossenfelder) thinks any such explanations contravene not just reductionism but physicalism and amount to appeals to the supernatural or divine. Again like Hossenfelder, he takes them to be anti-scientific. (For a counter, see this discussion on emergence).

 

A more apt contrast to draw is between reductionism and holism. Regrettably, “holism” seems to carry some mystical connotations for some scientists as well, possibly due to its centrality in some Eastern religious philosophies and its cachet among New Age woo-merchants. But, in scientific terms, this is really a contrast between completely bottom-up causation and the (very much non-supernatural) idea that the organisation of the whole may entail constraints that collectively govern the behavior of the constituents. Much more on that below.

 

Physicist Sean Carroll has similarly argued that the forces and equations of the Standard Model (or Core Theory) of quantum physics are “causally comprehensive” (at least within the ranges of normal experience – i.e., not near a black hole or the speed of light).  He grants – in what he calls in his book The Big Picture “poetic naturalism” – that it’s convenient to talk about things at other levels, but the implication is that this is at best a useful fiction.

 

Anyway, I was going to respond on Twitter with some thoughts, but they got so long I decided to list them here as a blog post instead (but still in tweetstorm/bullet point format). Some (perhaps many) of these points are arguable, but I think they make a good case for the importance of higher-order causation in understanding many aspects of the universe, including our own existence.

 

 

Let’s consider some reductionist claims:

 

Claim 1: If we know the detailed microstates of all the particles in a system at a given moment, then we know all the information about the macrostate of the whole system. This is trivial and obvious.

 

[Though Philip Ball notes that even this is not universal! It’s not true of entangled states, in which information about the whole state is not reducible to information about its component particles.]
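
To make that caveat concrete, here’s a minimal numerical sketch (my own, using numpy – not from Ball or from the book) of an entangled pair of qubits. The state of the whole pair is known exactly, yet each qubit considered on its own is maximally mixed – the information that defines the whole simply isn’t there in the parts:

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>)/sqrt(2), as a 4-component state vector
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus)       # density matrix of the whole pair (a pure state)

# Trace out qubit B to get the state of qubit A on its own
rho = rho.reshape(2, 2, 2, 2)            # indices: (A, B, A', B')
rho_A = np.trace(rho, axis1=1, axis2=3)

print(rho_A)
# [[0.5 0. ]
#  [0.  0.5]]
# Qubit A by itself is maximally mixed: it carries no trace of the perfect
# correlations that define the state of the whole pair.
```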

 

Claim 2: If we know the detailed microstates of all the particles in a system at a given moment, AND we can submit them to the equations of the Standard Model, then we can predict what the next state will be.

 

This is a MUCH stronger claim, and it appears to be false, given the fundamental indeterminacy at quantum levels.

 

We can only – in principle, not just in practice – make a statistical or probabilistic prediction of the possible subsequent states of the system. We can predict these probabilities very, very accurately, but in practice, so far, only for systems of very limited scale and complexity.

 

But we can’t predict the actual outcome of any given “run” or observation or measurement – only what the distribution of many such measurements would be if it were possible to make them and remake them from the identical starting position.
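
A toy illustration of this point (my own sketch, with an arbitrary preparation angle): quantum theory pins down the probabilities of the outcomes exactly, but the result of any single run is irreducibly random.

```python
import numpy as np

rng = np.random.default_rng()
theta = np.pi / 5                        # an arbitrary preparation: cos(θ)|0> + sin(θ)|1>
p0 = np.cos(theta) ** 2                  # Born-rule probability of measuring "0"

single_run = rng.random() < p0           # one measurement: unpredictable in principle
many_runs = rng.random(100_000) < p0     # the same preparation, repeated many times

print("predicted P(0):", round(p0, 4))
print("one run gives:", int(single_run))             # could be 0 or 1 - no way to know in advance
print("observed frequency of 0:", many_runs.mean())  # ≈ the predicted probability
```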

 

So, the claim that conditions at any given moment, plus the “laws of physics”, fully determine the state of a system at the next moment (and arbitrarily far into the future) appears to be false.

 

Claim 3: It could still be claimed that for any given system, all the important causal interactions happen at the lowest levels, even if some are random. It’s all physical forces playing out between particles or quantum fields.

 

Note that such a view has no historicity – it doesn’t matter how a system arrived at a certain state; all that matters is what that state is.

 

Systems like this have no memory – they reflect a certain arbitrary path that has been followed but they don’t accumulate any complexity. They can’t do anything and they’re not for anything.

 

Assumption: Note how the reductionist position already assumes that “systems” exist – collections of particles that have some integrity as an entity and autonomy from the environment, often with some internal structure.

 

But how would such systems even come into existence in a causally reductionist universe, without any historicity? The equations of the Standard Model, by themselves, can’t explain this tendency for such systems to form.

 

However, once there is some randomness, which creates a possibility space, statistical principles will come into play across collections of particles, over time. The laws of thermodynamics will favour some arrangements over others.

 

Some arrangements will be more stable than others – some will dissipate more free energy than others (or maximise the rate of entropy production in the universe as a whole, even while local entropy decreases). Things will tend to get organised.

 

Now a simple mathematical principle will apply: more stable arrangements will persist longer. Note that what matters for stability is the global patterns of matter and forces and the patterns of flux – i.e., the dynamics of the macrostates, not the particulars of the microstates.
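
Here’s a bare-bones simulation (my own toy model, not from the book) of that principle: give a large collection of “arrangements” different decay probabilities per time step and just let time pass. Persistence alone does the sorting – no foresight, no extra forces.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
decay_rate = rng.uniform(0.01, 0.5, size=n)    # each arrangement has its own (in)stability

alive = np.ones(n, dtype=bool)
for _ in range(50):                            # 50 time steps
    alive &= rng.random(n) > decay_rate        # unstable arrangements drop out

print("mean decay rate at the start:", decay_rate.mean().round(3))
print("mean decay rate of survivors:", decay_rate[alive].mean().round(3))
# The survivors are strongly biased towards the most stable arrangements -
# a purely statistical effect of differential persistence over time.
```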

 

Claim 4: A reductionist might counter that any macrostate must supervene on some particular microstate. So it all comes down to the details at any given moment!

 

In fact, this relationship only goes one way – a given microstate must correspond to a given macrostate. But, for the purposes of determining thermodynamic stability, a given macrostate may be realised by multiple possible microstates.

 

That is, the universe itself does coarse-graining – it’s not just something we do for convenience.
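
A quick way to see the multiple realisability at work (a simple counting exercise, nothing more): take ten two-state particles and define the macrostate as just “how many are in state 1”. Most macrostates are realised by a great many distinct microstates, which is exactly what the coarse-graining throws away.

```python
from itertools import product
from collections import Counter

microstates = list(product([0, 1], repeat=10))        # all 1,024 microstates
macro_counts = Counter(sum(m) for m in microstates)   # macrostate = number of 1s

for macro, count in sorted(macro_counts.items()):
    print(f"macrostate {macro:2d} is realised by {count:3d} microstates")
# e.g. macrostate 5 is realised by 252 different microstates: the same
# macro-level state can ride on many distinct micro-level configurations.
```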

 

If you want to claim this is all also “just physics”, well, fine, but it’s not just bottom-up causation derived from quantum theory. And we’ll see below how it leads to complex chemistry and ultimately to biology.  

 

Claim 5: Another (very strong!) reductionist claim is that once we know the laws governing the behavior of subatomic particles, all other theories or principles can be derived from them. This position admits that higher-order principles exist but claims they are not fundamental.

 

This appears to not be the case. For example, Hamilton’s principle of least action or Jaynes’ principle of maximum entropy or the Free Energy Principle cannot be derived from the Standard Model. (At least as far as I know!)
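
As a concrete (and hedged) example of what such a principle looks like in practice, here is Jaynes’ classic dice problem, solved numerically with scipy: find the probabilities for a six-sided die that maximise entropy, given only that the mean roll is 4.5. The principle is stated at the level of whole distributions, not particles, and nothing in it refers to any micro-level force law.

```python
import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1)
    return np.sum(p * np.log(p))                # minimising this maximises entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},    # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p @ faces - 4.5},  # mean roll is 4.5
]

result = minimize(neg_entropy, x0=np.ones(6) / 6,
                  bounds=[(0, 1)] * 6, constraints=constraints)
print(np.round(result.x, 3))
# The maximum-entropy answer is an exponential (Gibbs-like) distribution over
# the faces, weighted towards 6 - derived from a global variational principle.
```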

 

There are all kinds of other systems principles and dynamics – familiar to engineers, economists, computer scientists, biochemists, evolutionary biologists – that do not derive from the laws of physics. They just hold.

 

The algorithm of natural selection is a pertinent example – iterations of mutation and selection will allow change to accumulate, complexity to increase, and functionality or adaptedness to emerge. This is independent of the physical medium (and, indeed, the algorithm is applied in all kinds of areas).
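
To show how little the algorithm cares about its physical medium, here is a bare-bones sketch of it acting on strings of bits (the “fitness” function is an arbitrary target match – the point is the mutation-plus-selection ratchet, not the biology):

```python
import random

random.seed(1)
TARGET = [1] * 32                                   # an arbitrary "adapted" state
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

for generation in range(100):
    # selection: keep the better half of the population
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # mutation: offspring are slightly altered copies of the survivors
    offspring = [[1 - g if random.random() < 0.02 else g for g in parent]
                 for parent in survivors]
    population = survivors + offspring

print("best fitness after 100 generations:", max(fitness(g) for g in population), "/ 32")
# Change accumulates generation by generation; nothing in the loop refers to
# atoms, forces, or any other detail of the underlying physical substrate.
```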

 

Principles like these apply on the global scale – to the organisation of systems. And they appear to be every bit as fundamental as the equations of quantum field theory.

 

Crucially, they do allow for historicity – in fact, they make it inevitable. Whatever system is favoured at any given moment becomes the initial conditions for the next moment. Because of fundamental randomness, this generates an exploratory and potentially “progressive” dynamic.

 

If a system happens into one specific state at time t, this may mean it can now reach a new, even more favoured state at time t+1. (Stuart Kauffman’s “adjacent possible”.)

 

This dynamic can lead to the amplification of very small fluctuations. Indeed, such random fluctuations are necessary to break symmetry (e.g., during cosmic inflation) and allow inhomogeneities to emerge (like the formation of galaxies, stars, and planets).

 

The universe will thus become structured. Higher-order entities will arise, with a tendency to persist through time. (This is tautological but not trivial).

 

This opens the door to higher-order causation. But what does this mean? If we think of whole-part causation, this normally means the whole comprises some organisation of the parts, which collectively constrain each other.

 

Claim 6: In a sense, this just means there’s a solution to the global problem of all the force vectors – an energy minimum or at least transiently stable organisation of the system. A reductionist could claim this will just emerge bottom-up.

 

And of course this is true – *given the organisation at time t*. But in a universe with a real possibility space, some organisations will have been more likely to exist at time t.

 

So the probability of any given state at time t+1 depends on the prior probabilities of all the possible states that could have existed at time t. The path is historical. 

 

Given that the universe is expanding, the space of possible states is expanding too – faster than the physical stuff can equilibrate. That is, the maximum possible entropy of the universe is increasing all the time, but the actual entropy lags behind.

 

This means that the absolute amount of information (i.e., structure or local order) in the universe can increase, even while the absolute entropy is also increasing, because the possible entropy is increasing even faster!

 

Now we get to another crucial factor, hinted at above – feedback or selection, through time. Not instantaneous whole-part relations, which suggest a kind of circular causation, but diachronic relations that extend through time – a spiral, not a circle.

 

Any given configuration that is stable enough to persist for some time generates a new adjacent possibility space, which may allow the system to reach an even more stable state, and on and on. Now we can see the kind of dynamic that leads to the emergence of increasingly complex dissipative systems.

 

Here, the structure of the whole system (“inherited” from time t) creates the initial boundary conditions that shape the possibility space at time t+1 – a whole-part causation that is not logically circular, because it is diachronic.

 

The organisation of the system imposes constraints on its components. These constraints are every bit as causal as the physical forces at play – that is, they contribute to governing the way the system will evolve from moment to moment.

 

Forces are thus not the only causes. Indeed, you won’t have any physical forces without some structure, some inhomogeneities or gradients. As Keith Farnsworth argues, using Aristotle’s terms, there are no efficient causes without formal causes.

 

In fact, higher-order constraints can become even more causally important than the lower-level details, due to the coarse-graining and multiple realisability at play.

 

This kind of complexification can lead to the emergence of life. Living systems exist far from equilibrium and perform work to stay that way.

 

They are therefore under selective pressure to find solutions that afford the greatest dynamic stability. (Very different from the inert stability evident in a crystal).

 

Particular patterns of dynamic relations (e.g., in networks of chemical reactions) can, through all kinds of feedback loops in the system, remain stable under a range of conditions.

 

But they tend to be precarious, especially in an environment that may itself be quite dynamic. A good way to persist is to “save the settings” that govern the kinetics of all those reactions in a chemically inert substrate that is itself held apart from all that activity.

 

Then if the system is perturbed, it can re-equilibrate by reference to that stable informational resource. That is the primary function that DNA performs.
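
A very schematic sketch of this idea (mine, with made-up numbers): the “settings” – here just two rate constants – are stored separately from the dynamics they govern, so however the state is perturbed, the system relaxes back to the condition those stored settings specify.

```python
GENOME = {"k_make": 2.0, "k_decay": 0.5}     # hypothetical stored "settings"

def step(x, dt=0.01):
    # simple production/decay kinetics: dx/dt = k_make - k_decay * x
    return x + dt * (GENOME["k_make"] - GENOME["k_decay"] * x)

x = 0.5                                      # start far from the encoded steady state
for _ in range(2000):                        # let the dynamics run
    x = step(x)

print(round(x, 3))                           # ~4.0 = k_make / k_decay: the system
                                             # re-equilibrates by reference to its stored
                                             # parameters, whatever the perturbation
```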

 

The benefit of course is that the DNA can be replicated, the cell can divide, and the structure can be recapitulated in the daughter cells, again by reference to the genetic informational resource, thus allowing reproduction.

 

DNA can thus act as a store of information and a substrate for natural selection. If a mutation arises that alters the system dynamics and makes it more likely to persist in whatever environments it encounters, then that mutation will be selected for. And negative selection will conversely act against mutations that impair persistence.

 

These two kinds of natural selection will act at the population level with a ratchet-like mechanism, meaning change can accumulate along various possible directions. Progress can be saved at each step along the way.

 

Causation in these systems is now inherently informational and historical – they are the way they are because of the history of their ancestors’ interactions with the environments they encountered.

 

And the way they are imposes constraints on the components. They are now aligned towards a purpose – persistence. This kind of causation is absent from a reductionist’s worldview. 

 

The effects of natural selection can lead to a new kind of structure: a compartmentalised hierarchy, where different elements of the system are acting over different time-scales.

 

Some of these can provide top-down constraints to simultaneously manage short-term and long-term contingencies in an optimal manner.

 

This is yet another kind of causation, distinct from whole-part. It is actually part-part causation, but it relies on a nested, hierarchical structure, where information flows bottom-up and top-down and side-to-side, with each part trying to satisfy its own constraints, based on the context supplied by the rest of the system.

 

Crucially, those configurations can encode control policies – what to do in the case of some internal or environmental conditions. Optimal policies will have been selected for based on prior experience.
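
As a toy illustration of what “encoding a control policy” means (a crude, purely hypothetical chemotaxis-like rule, not any real organism’s): the action taken depends on the relation between states across time – on context and history – rather than on any single low-level state in isolation.

```python
def policy(nutrient_now, nutrient_before):
    # a stored rule: respond to whether conditions are improving or worsening
    if nutrient_now > nutrient_before:
        return "keep swimming"   # things are getting better: persist in current behaviour
    return "tumble"              # things are getting worse: change direction at random

print(policy(0.8, 0.5))   # -> keep swimming
print(policy(0.3, 0.5))   # -> tumble
```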

 

This is doing things for reasons. There are no reasons in a reductive world, where all the causation inheres at the lowest levels. But where exploratory dynamics lead to the emergence of true complexity, in the form of life, reasons become perfectly real causes of what happens.

 

Note that none of this requires “new physics” or in any way contravenes the Standard Model.

 

The laws describing the low-level forces remain the same. They are just subject to additional constraints in the form of initial and boundary conditions – no magic required!

 

But the system now crucially has historicity packed into its configuration – it means there is a why as well as a how to the way it evolves.

 

Ultimately, if we want to really understand things in the universe above the scale of atoms, we need to take seriously the evidence that such entities and systems behave according to higher-order principles – the particles are just the stuff they’re made of.

 

The accusation that these higher-order principles are somehow unscientific or mystical or appeal to supernatural forces is thus unwarranted. Indeed, to claim that the successes of particle physics in its own arena mean that every phenomenon at every scale and degree of complexity will also ultimately be explainable by these low-level laws is such an extrapolation beyond empirical evidence that it is the position that starts to look like an article of faith. :-)

 
