Escaping Flatland - when determinism falls, it takes reductionism with it
For the reductionist, reality is flat. It may seem to comprise things in some kind of hierarchy of levels – atoms, molecules, cells, organs, organisms, populations, societies, economies, nations, worlds – but actually everything that happens at all those levels really derives from the interactions at the bottom. If you could calculate the outcome of all the low-level interactions in any system, you could predict its behaviour perfectly and there would be nothing left to explain. It’s turtles all the way down.
Reductionism is related to determinism,
though not in a straightforward way. There are different types of determinism, which
are intertwined with reductionism to varying degrees.
The reductive version of determinism claims
that everything derives from the lowest level AND those interactions are
completely deterministic, with no randomness. Some things may seem random to us, but that is only a statement about our ignorance, not about the events themselves. The randomness in this scenario is epistemological (relating to our knowledge, or lack of it), not ontological (a real feature of the world, one we can observe but that does not depend on us for its existence).
That’s the clockwork universe – the one
where Laplace’s Demon (an omniscient being) could unerringly predict the future
of the entire universe from a fully detailed snapshot of the state of all the
particles in it at any given instant. It’s pretty boring, that kind of
universe.
There is also an ostensibly non-reductive flavour
of determinism, which simply affirms that every event has some antecedent physical
cause(s). Nothing “just happens”. Under this view, however, causes don’t have
to be located solely in the interactions of all the particles or limited to the
actions of basic physical forces (which also act at the macroscopic scale,
determining the orbits of the planets, for example). The causality, or some of
it at least, could inhere in the organisation of a system and the constraints
that it places on the interactions of its constituents. In this kind of scheme,
there is room for a why as well as a how.
That’s the argument, at least, though it’s
a little incoherent, in my view. If there is no real randomness in the system
(or in the universe as a whole), then I don’t see how you can escape from pure
reductionism. In a deterministic system, whatever its current organisation (or “initial conditions” at time t), you solve Newton’s equations or the Schrödinger equation or compute the wave function or whatever physicists do (which is in fact what the system is doing), and that gives the next state of the
system. There’s no why involved. It doesn’t matter what any of the states mean
or why they are that way – in fact, there can never be a why because the
functionality of the system’s behaviour can never have any influence on
anything. I would go even further and say you can never get a system that does things under strict determinism.
(Things would happen in it or to it or near it, but you wouldn’t identify the
system itself as the cause of any of those things).
But what if determinism is false? Let’s see
what happens to reductionism when you introduce some randomness, some
indeterminacy in the system. Of course, this is exactly what quantum theory
does, at least under one interpretation, though there is deep disagreement
among physicists as to whether the randomness observed at quantum levels is
epistemological or ontological. But let’s say it’s the latter – that randomness
really exists in the universe – that some things, at very small scales at
least, do “just happen”.
What effect does that have on things at big
scales – the scales of rocks and cats and babies, and other things we care
about? Some people argue – rather casually, in my view – that randomness at
quantum levels will not have any effect at the level of classical physics,
because all that noise will be somehow absorbed or averaged out in the system
and will not percolate up to higher levels. On this view, the behaviour of the system at classical levels can still be treated as deterministic.
Is that true? Do quantum effects stay there
at the quantum level? I can think of lots of instances where they wouldn’t –
like Schrödinger’s famous cat, for example, whose fate was to be determined by
the random decay of a radioactive atom. And I would guess that the randomness
of quantum phenomena has some important implications for real-world quantum
computing.
But just speaking philosophically, what’s strange
about this assertion – as I said, often thrown out very casually – is that it betrays
the very idea of reductionism. It relies on the idea that reality is not in
fact flat. It claims explicitly that things can be happening at the lowest
level of the system that do not percolate up to higher levels. Heresy! How can
a good reductionist believe that? Does reductionism only apply at classical
levels? Does it stop being true at some scale? Why?
If the properties at the classical level
derive from interactions at the quantum level (and we know they do because they
can be derived from quantum theory), then why would the subset of such
interactions that happen to have arisen randomly not also manifest at higher
levels? How would the system know which ones were random and which were
determined?
I know that’s a silly way to put it, but it
highlights something crucial – the idea that something important is happening at the level of the system. You might say that the reason those quantum
fluctuations don’t manifest at the level of the whole system is that they average out. They are random, after all, and if there are many of them and they
are independent, then their collective effects should cancel each other out.
But that relies on a very non-reductionist mechanism: coarse-graining.
For that averaging out to happen, the low-level details of every particle in a system cannot be all-important – what matters is the average of all their states. That describes an
inherently statistical mechanism. It
is, of course, the basis of the laws of thermodynamics and explains the statistical nature of macroscopic properties, like temperature. But its use here
implies something deeper. It’s not just a convenient mechanism that we can use
– it implies that that’s what the system
is doing, from one level to the next.
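To make that concrete, here is a minimal sketch of coarse-graining – my own toy illustration in Python, with invented numbers, an assumed "temperature" function and no real physical units – showing how a macroscopic property like temperature is just a statistical summary of the microstate, so that many different microstates correspond to essentially the same macrostate:

```python
import random

def temperature(velocities):
    """A toy 'macrostate': the mean squared velocity of the particles.
    (Constants and units are ignored - this is purely illustrative.)"""
    return sum(v * v for v in velocities) / len(velocities)

random.seed(42)

# Two different microstates: each is a full list of particle velocities,
# and they differ in every single low-level detail.
microstate_a = [random.gauss(0, 1) for _ in range(100_000)]
microstate_b = [random.gauss(0, 1) for _ in range(100_000)]

# But the coarse-grained macrostate is almost exactly the same, because
# the many independent fluctuations average out.
print(temperature(microstate_a))  # ~1.0
print(temperature(microstate_b))  # ~1.0
```

The only point of the sketch is that the mapping from microstates to macrostates is many-to-one – which is exactly what coarse-graining means.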
Once you admit that, you’ve left Flatland.
You’re allowing, first, that levels of reality exist. And second, that what
happens at one level is only a coarse-grained, statistical reflection of what
is happening at the level below.
In my view, that’s almost but not quite right.
Any hierarchical system is averaging, or integrating over, or in some way
coarse-graining the low-level details, but not just the random ones (how could it single them out?) – it’s coarse-graining ALL of them. And not just from the lowest,
quantum level, to the next one up. This happens at EVERY level. It may be
turtles all the way down, but it’s not turtles all the way up.
The macroscopic state as a whole does
depend on some particular microstate, of course, but there may be a set of such
microstates that corresponds to the same macrostate. And a different set of
microstates that corresponds to a different macrostate. If the evolution of the
system depends on those coarse-grained macrostates (rather than on the precise
details at the lower level), then this raises something truly interesting – the
idea that information can have causal
power in a hierarchical system, and, more generally, in the universe.
The low-level details alone are not
sufficient to predict the next state of the system. Because of random events,
many next states are possible. What determines the next state (in the types of
complex, hierarchical systems we’re interested in) is what macrostate the
particular microstate corresponds to. The system does not just evolve from its
current state by solving classical or quantum equations over all its
constituent particles. It evolves based on whether the current arrangement of
those particles corresponds to macrostate A or macrostate B.
Some criteria embodied in the structure of
the system itself drive a different response to these two macrostates. Simple
versions could involve a threshold effect, such as a thermostat triggering a
heater if the temperature drops below its set point, or a neuron firing an
action potential if the voltage across its membrane is high enough. That kind
of control is inherently informational.
In philosophical terms, it relies on counterfactuals being ontologically real – that is, the current state can only carry causally effective information if it really could have been different.
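A toy version of that kind of threshold-based, informational control might look like the following – my own sketch, with an arbitrary set point and made-up readings, not a model of any real thermostat or neuron. The response depends only on which macrostate the microstate corresponds to, and it is informative precisely because the macrostate could have been different:

```python
SET_POINT = 20.0  # an arbitrary, assumed set point

def macrostate(readings):
    """Coarse-grain the microstate (many local readings) into one number."""
    return sum(readings) / len(readings)

def thermostat(readings):
    """The response depends only on which side of the set point the
    macrostate falls - not on the fine-grained details of the readings."""
    return "heater ON" if macrostate(readings) < SET_POINT else "heater OFF"

# Two quite different microstates that correspond to the same macrostate
# produce the same response...
print(thermostat([18.0, 21.5, 19.0, 19.5]))  # mean 19.5 -> heater ON
print(thermostat([19.4, 19.6, 19.5, 19.5]))  # mean 19.5 -> heater ON

# ...and a counterfactual microstate, corresponding to the other macrostate,
# produces a different one - which is what makes the state informative.
print(thermostat([20.4, 20.6, 20.5, 20.5]))  # mean 20.5 -> heater OFF
```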
That little bit of indeterminacy is thus key
– otherwise it doesn’t matter what the microstate corresponds to, because the system is simply going to follow a deterministic trajectory. But I’ve just been talking
about those random events being coarse-grained, along with all the non-random
events, so how could they lead to different macrostates? The answer is they’re
averaged but not always averaged out.
Sometimes those random events will make a crucial difference, especially for a
system poised at the boundary between two macrostates. In fact, such a scenario amplifies small random fluctuations, by causing a qualitative change in macrostate.
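Here is a hedged sketch of that amplification – again my own toy model, not drawn from any of the papers below, with an arbitrary feedback term, noise scale and step count. The system starts exactly on the boundary between two macrostates, and a vanishingly small random fluctuation is amplified into a qualitatively different outcome:

```python
import random

def step(x, noise_scale=1e-6):
    """One update of a toy bistable system: the feedback term pushes x
    towards +1 or -1, plus a tiny random fluctuation."""
    return x + 0.1 * (x - x**3) + random.gauss(0, noise_scale)

def run(seed, x0=0.0, n_steps=2000):
    """Evolve the system from the boundary between the two macrostates."""
    random.seed(seed)
    x = x0
    for _ in range(n_steps):
        x = step(x)
    return x

# Away from the boundary, noise of this size is utterly negligible; poised
# on the boundary (x = 0), it decides which macrostate the system ends in.
for seed in range(6):
    print(round(run(seed), 2))  # each run settles near +1.0 or -1.0;
                                # which one depends only on the noise
```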
In complex, dynamical systems that are far
from equilibrium, some small differences due to random fluctuations may thus indeed
percolate up to the macroscopic level, creating multiple trajectories along
which the system could evolve. This
brings into existence something necessary (but not by itself sufficient) for things
like agency and free will: possibilities.
What this means is that causation does not reside solely in the lowest levels and the basic laws of physics, nor is it
completely instantaneous. The system will not evolve along a single
pre-determined line, nor will its evolution simply follow a random path in a
tree of possibilities. Instead, in some types of systems – like living
organisms – how the system evolves will depend on what those various
macrostates mean. What do they
correspond to or reflect in the environment, what consequences are they linked
to in terms of action, what feedback does the organism get on the outcomes of
those actions, and how does that feedback alter the configuration of the system
to set criteria for processing that information in the future?
By building up, not out, creating a
functional hierarchy of levels within their own structure, and incorporating
meaning in feedback loops that extend through action and consequence into the
environment and over time, evolution has created organisms that use the wiggle
room provided by stochasticity to exert “top-down” causal power to do things
for reasons. The organism itself can choose
among those branching possibilities. (More on that here and much more to come).
To come back to where we started, while
they are often presented as independent, I argue that if strict
determinism falls, it takes reductionism down with it. Turns out a little bit
of randomness is the key to escaping Flatland.
Further reading:
Mitchell KJ. Does Neuroscience Leave Room for Free Will? Trends Neurosci. 2018;41(9):573-576. doi:10.1016/j.tins.2018.05.008
Walker SI. Top-down causation and the rise of information in the emergence of life. Information. 2014;5(3):424-439. doi:10.3390/info5030424
Krakauer D, Bertschinger N, Olbrich E, Flack JC, Ay N. The information theory of individuality. Theory Biosci. 2020;139(2):209-223. doi:10.1007/s12064-020-00313-7
Noble R, Noble D. Harnessing stochasticity: How do organisms make choices? Chaos. 2018;28(10):106309. doi:10.1063/1.5039668