Does quantum indeterminism defeat reductionism? (Response to Coel Hellier)

 KM: I’m grateful to Coel Hellier (@colhellier) for writing a blogpost in response to one I wrote arguing that if determinism falls, it takes reductionism with it. Rather than expect people to bounce back and forth between these posts, I have pasted Coel’s entire blog and intercalated my responses, in bold, below. For clarity, I have color-coded his excerpts from my original blog in blue:

After writing a piece on the role of metaphysics in science, which was a reply to neuroscientist Kevin Mitchell, he pointed me to several of his articles including one on reductionism and determinism. I found this interesting since I hadn’t really thought about the interplay of the two concepts. Mitchell argues that if the world is intrinsically indeterministic (which I think it is), then that defeats reductionism. We likely agree on much of the science, and how the world is, but nevertheless I largely disagree with his article.

 

Let’s start by clarifying the concepts. Reductionism asserts that, if we knew everything about the low-level status of a system (that is, everything about the component atoms and molecules and their locations), then we would have enough information to — in principle — completely reproduce the system, such that a reproduction would exhibit the same high-level behaviour as the original system. Thus, suppose we had a Star-Trek-style transporter device that knew only about (but everything about) low-level atoms and molecules and their positions. We could use it to duplicate a leopard, and the duplicated leopard would manifest the same high-level behaviour (“stalking an antelope”) as the original, even though the transporter device knows nothing about high-level concepts such as “stalking” or “antelope”.

 

KM: I would describe this position simply as physicalism. It just states that if you made an exact physical duplicate of a living being, you would regenerate not just the low-level positions of all the atoms and molecules, but the high-level organization and properties as well. Of course you would – the high-level properties inhere in that organization. I don’t suppose anyone would dispute that, but there’s nothing reductionist about this assertion, as CH kind of concedes below:

 

As an aside, philosophers might label the concept I’ve just defined as “supervenience”, and might regard “reductionism” as a stronger thesis about translations between the high-level concepts such as “stalking” and the language of physics at the atomic level. But that type of reductionism generally doesn’t work, whereas reductionism as I’ve just defined it does seem to be how the world is, and much of science proceeds by assuming that it holds. While this version of reductionism does not imply that explanations at different levels can be translated into each other, it does imply that explanations at different levels need to be mutually consistent, and ensuring that is one of the most powerful tools of science.

 

KM: This stronger philosophical version of reductionism is indeed my target – the idea that the observed high-level properties and behaviors of complex systems (indeed even the existence of those systems in the first place) can in fact be reduced to and fully explained by the playing out of all the low-level interactions between the atoms and molecules. I agree with CH that this version of reductionism “doesn’t work”! But this is not some kind of straw man, as implied. It is a view that is espoused by many, especially in discussions on things like free will, and it is the version of reductionism that I argue is tightly intertwined with determinism.  

 

Our second concept, determinism, then asserts that if we knew the entire and exact low-level description of a system at time t  then we could — in principle — compute the exact state of the system at time t + 1. I don’t think the world is fully deterministic. I think that quantum mechanics tells us that there is indeterminism at the microscopic level. Thus, while we can compute, from the prior state, the probability of an atom decaying in a given time interval, we cannot (even in principle) compute the actual time of the decay. Some leading physicists disagree, and advocate for interpretations in which quantum mechanics is deterministic, so the issue is still an open question, but I suggest that indeterminism is the current majority opinion among physicists and I’ll assume it here.
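
A minimal illustrative sketch of that asymmetry, in Python (the numbers and names here are chosen purely for illustration): the probability that an atom decays within a given interval is computable from its half-life, but the actual decay time can only be sampled at random, never calculated in advance.

    import math
    import random

    t_half = 5730.0                     # half-life in years (carbon-14, for illustration)
    lam = math.log(2) / t_half          # decay constant

    def p_decay_within(dt):
        # Computable in principle and in practice: probability of decay within dt years.
        return 1.0 - math.exp(-lam * dt)

    def sample_decay_time():
        # Not predictable, only sampleable: one possible decay time (exponentially distributed).
        return random.expovariate(lam)

    print(p_decay_within(1000.0))                           # same answer on every run
    print([round(sample_decay_time()) for _ in range(3)])   # different on every run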

 

KM: Some kind of indeterminism does indeed seem to be a majority view among physicists – I recorded an 80% agreement in a decent-sized but entirely unscientific Twitter poll. (Interestingly, the 80:20 split was the same among physicists and non-physicists who responded). However, there is much less agreement on where this indeterminacy comes from, how it should be interpreted, what it tells us about the nature of reality, and what its effects might be at the classical level.

 

This raises the question of whether indeterminism at the microscopic level propagates to indeterminism at the macroscopic level of the behaviour of leopards. The answer is likely, yes, to some extent. A thought experiment of coupling a microscopic trigger to a macroscopic device (such as the decay of an atom triggering a device that kills Schrodinger’s cat) shows that this is in-principle possible. On the other hand, using thermodynamics to compute the behaviour of steam engines (and totally ignoring quantum indeterminism) works just fine, because in such scenarios one is averaging over an Avogadro’s number of particles and, given that Avogadro’s number is very large, that averages over all the quantum indeterminacy.
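
A small illustrative Python sketch of why that averaging works (toy numbers, chosen only to show the trend): the relative size of fluctuations in an average of N independent random contributions falls roughly as 1/sqrt(N), so at anything approaching Avogadro-scale N the underlying randomness is invisible at the macroscopic level.

    import random

    def relative_fluctuation(n, trials=100):
        # Spread of the average of n independent random contributions, relative to its mean.
        means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]
        mu = sum(means) / trials
        sd = (sum((m - mu) ** 2 for m in means) / trials) ** 0.5
        return sd / mu

    for n in (10, 1_000, 100_000):
        print(n, relative_fluctuation(n))   # shrinks roughly as 1 / sqrt(n)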

 

What about leopards? The leopard’s behaviour is of course the product of the state of its brain, acting on sensory information. Likely, quantum indeterminism is playing little or no role in the minute-by-minute responses of the leopard. That’s because, in order for the leopard to have evolved, its behaviour, its “leopardness”, must have been sufficiently under the control of genes, and genes influence brain structures on the developmental timescale of years. On the other hand, leopards are all individuals. While variation in leopard brains derives partially from differences in that individual’s genes, Kevin Mitchell tells us in his book Innate that development is a process involving much chance variation. Thus quantum indeterminacy at a biochemical level might be propagating into differences in how a mammal brain develops, and thence into the behaviour of individual leopards.

 

That’s all by way of introduction. So far I’ve just defined and expounded on the concepts “reductionism” and “determinism” (but it’s well worth doing that since discussion on these topics is bedeviled by people interpreting words differently). So let’s proceed to why I disagree with Mitchell’s account. 

 

He writes:

For the reductionist, reality is flat. It may seem to comprise things in some kind of hierarchy of levels – atoms, molecules, cells, organs, organisms, populations, societies, economies, nations, worlds – but actually everything that happens at all those levels really derives from the interactions at the bottom. If you could calculate the outcome of all the low-level interactions in any system, you could predict its behaviour perfectly and there would be nothing left to explain. 

 

There is never only one explanation of anything. We can always give multiple different explanations of a phenomenon — certainly for anything at the macroscopic level — and lots of different explanations can be true at the same time, so long as they are all mutually consistent. Thus one explanation of a leopard’s stalking behaviour will be in terms of the firings of neurons and electrical signals sent to muscles. An equally true explanation would be that the leopard is hungry.

 

Reductionism does indeed say that you could (in principle) reproduce the behaviour from a molecular-level calculation, and that would be one explanation. But there would also be other equally true explanations. Nothing in reductionism says that the other explanations don’t exist or are invalid or unimportant. We look for explanations because they are useful in that they enable us to understand a system, and as a practical matter the explanation that the leopard is hungry could well be the most useful. The molecular-level explanation of “stalking” is actually pretty useless, first because it can’t be done in practice, and second because it would be so voluminous and unwieldy that no-one could assimilate or understand it.

 

KM: So, this is what I call the ecumenical version of reductionism, which comes in weaker or stronger forms. CH states a weak form above – where he says that explanations at various levels are equally valid. The physicist Sean Carroll espouses a similar view, but states it in a subtly stronger (and kind of patronising) way – he describes explanations at higher levels as useful ways of talking about complicated systems – in effect, as convenient fictions. But he seems to insist that the real explanation is at the lowest level and that a description at that level would be the most comprehensive and would necessarily entail and explain all the higher-level organization and apparent causality. (CH makes a somewhat similar argument below).

 

As a comparison, chess-playing AI bots are now vastly better than the best humans and can make moves that grandmasters struggle to understand. But no amount of listing of low-level computer code would “explain” why sacrificing a rook for a pawn was strategically sound — even given that, you’d still have all the explanation and understanding left to achieve. 

 

So reductionism does not do away with high-level analysis. But — crucially — it does insist that high-level explanations need to be consistent with and compatible with explanations at one level lower, and that is why the concept is central to science.

 

Mitchell continues:

In a deterministic system, whatever its current organisation (or “initial conditions” at time t) you solve Newton’s equations or the Schrodinger equation or compute the wave function or whatever physicists do (which is in fact what the system is doing) and that gives the next state of the system. There’s no why involved. It doesn’t matter what any of the states mean or why they are that way – in fact, there can never be a why because the functionality of the system’s behaviour can never have any influence on anything.

 

I don’t see why that follows. Again, understanding, and explanations and “why?” questions can apply just as much to a fully reductionist and deterministic system. Let’s suppose that our chess-playing AI bot is fully reductionist and deterministic. Indeed they generally are, since we build computers and other devices sufficiently macroscopically that they average over quantum indeterminacy. That’s because determinism helps the purpose: we want the machine to make moves based on an evaluation of the position and the rules of chess, not to make random moves based on quantum dice throwing.

 

KM: There is a strong position in physics and philosophy (espoused forcefully by Bertrand Russell, for example) that argues that causes in fact do not exist. If the universe is really deterministic, then there is no room for and no need for causes – understanding and explanations and “why” questions absolutely would NOT apply. The universe would simply evolve based on the initial conditions and the fundamental forces determining the movements and interactions of all the particles. Determinism thus implies reductionism. Something can only be considered a cause of something else if its being different would have made a difference to whether that something else occurred. If there is no way that anything could actually be different from how it is because the universe is evolving deterministically from the dawn of time to the end of time (or indeed because it all just exists in a block universe without a direction of time), then the concept of causation simply does not apply. It relies on counterfactual possibilities being ontologically real. (And so does the idea that information can have causal power in a system).

 

But, in reply to “why did the (deterministic) machine sacrifice a rook for a pawn” we can still answer “in order to clear space to enable the queen to invade”. Yes, you can also give other explanations, in terms of low-level machine code and a long string of 011001100 computer bits, if you really want to, but nothing has invalidated the high-level answer. The high-level analysis, the why? question, and the explanation in terms of clearing space for the queen, all still make entire sense.

 

KM: Now, here, as with many compatibilist arguments or thought experiments, I think we need to back waaaaaay up. The high-level explanation only applies in this thought experiment because the computer was programmed by a human being to do things for a reason, and is playing a game developed by humans that has goals and functionalities of components embedded in it. There’s only an answer to the “why?” question because it was programmed in that way. If you are going to posit determinism and try and deduce what follows from it, then you don’t get to just assume the existence of games and computers and computer programmers and go from there. You have to first explain how, in a deterministic universe, you would ever get games and computers and computer programmers. 

 

I would go even further and say you can never get a system that does things under strict determinism. (Things would happen in it or to it or near it, but you wouldn’t identify the system itself as the cause of any of those things).

 

Mitchell’s thesis is that you only have “causes” or an entity “doing” something if there is indeterminism involved. I don’t see why that makes any difference. Suppose we built our chess-playing machine to be sensitive to quantum indeterminacy, so that there was added randomness in its moves. The answer to “why did it sacrifice a rook for a pawn?” could then be “because of a chance quantum fluctuation”. Which would be a good answer, but Mitchell is suggesting that only un-caused causes actually qualify as “causes”. I don’t see why this is so. The deterministic AI bot is still the “cause” of the move it computes, even if it itself is entirely the product of prior causation, and back along a deterministic chain. As with explanations, there is generally more than one “cause”. 

 

Nothing about either determinism or reductionism has invalidated the statements that the chess-playing device “chose” (computed) a move, causing that move to be played, and that the reason for sacrificing the rook was to create space for the queen. All of this holds in a deterministic world. 

 

KM: Again, I question the validity of the premises of this thought experiment. You have to explain how reasons would ever come to be (how creatures that could have reasons would ever evolve) in a deterministic universe or one where the only kind of causation at play is at the lowest levels of physical forces (which as we’ve seen does not really fit the concept of causation at all). Nothing would be for anything – you would have no purpose, no function, no value, no goals, no meaning.

 

Mitchell pushes further the argument that indeterminism negates reductionism:

For that averaging out to happen [so that indeterminism is averaged over] it means that the low-level details of every particle in a system are not all-important – what is important is the average of all their states. That describes an inherently statistical mechanism. It is, of course, the basis of the laws of thermodynamics and explains the statistical basis of macroscopic properties, like temperature. But its use here implies something deeper. It’s not just a convenient mechanism that we can use – it implies that that’s what the system is doing, from one level to the next. Once you admit that, you’ve left Flatland. You’re allowing, first, that levels of reality exist.

 

I agree entirely, though I don’t see that as a refutation of reductionism. At least, it doesn’t refute forms of reductionism that anyone holds or defends. Reductionism is a thesis about how levels of reality mesh together, not an assertion that all science, all explanations, should be about the lowest levels of description, and only about the lowest levels. 

 

KM: As I said above, the version of reductionism I am thinking of (which despite CH’s assertion, many people do indeed assert and seem to hold) is precisely the one that believes ultimately “that all science, all explanations, should be about the lowest levels of description, and only about the lowest levels”. More precisely and more fairly, there are many people who assert the idea that in principle all the important business is happening at the lowest levels, while allowing that in practice it is not possible to fully describe complicated things at that level and so it is much more convenient to work with measurable high-level coarse-grained parameters. 

 

I said above that determinism implies reductionism (no high-level causes, no causes at all, in fact, just the wave function inexorably evolving). But I also claimed the converse, that indeterminacy negates reductionism. This doesn’t obviously follow by necessity, so let me explain the context of that claim. What I was trying to point out in the original post was an inconsistency in the logic of people who allow that quantum indeterminacy exists but claim that it would not percolate up to affect classical levels. At the same time, they maintain a reductionist stance towards explaining the behavior of complex systems and argue for determinism at the classical level. If you admit that coarse-graining happens, from one level to the next, and that not all of the details at the lowest level matter and that many of those details have no effect on the system, then you have just rejected reductionism. You’re not just thinking of the system as a system for convenience, you’re saying that its organization is a key causal factor in its evolution, because this will determine which details matter and which don’t. This is perfectly reasonable and correct, in my view, but it’s no longer reductionist, at least not in a fully orthodox sense. If that can happen from the quantum level to the next one up, then why couldn’t it happen at every transition between levels where some coarse-graining occurs? That would, in my view, be precisely what defines levels and grants them some ontological validity.

 

So, there’s an “eating your cake, and having it too” quality to that move – it basically invokes macroscopic causation to reject an important role for quantum indeterminacy, while otherwise claiming that all the causal work in complex systems occurs at the lowest (classical) level and rejecting the idea that the organization of the system embodies crucial causal relationships and criteria.

 

Indeterminism does mean that we could not fully compute the exact future high-level state of a system from the prior, low-level state. But then, under indeterminism, we also could not always predict the exact future high-level state from the prior high-level state. So, “reductionism” would not be breaking down: it would still be the case that a low-level explanation has to mesh fully and consistently with a high-level explanation. If indeterminacy were causing the high-level behaviour to diverge, it would have to feature in both the low-level and high-level explanations.

 

Mitchell then makes a stronger claim:

The macroscopic state as a whole does depend on some particular microstate, of course, but there may be a set of such microstates that corresponds to the same macrostate. And a different set of microstates that corresponds to a different macrostate. If the evolution of the system depends on those coarse-grained macrostates (rather than on the precise details at the lower level), then this raises something truly interesting – the idea that information can have causal power in a hierarchical system …

 

But there cannot be a difference in the macrostate without a difference in the microstate. Thus there cannot be indeterminism that depends on the macrostate but not on the microstate. At least, we have no evidence that that form of indeterminism actually exists. If it did, that would indeed defeat reductionism and would be a radical change to how we think the world works.

 

It would be a form of indeterminism under which, if we knew everything about the microstate (but not the macrostate), then we would have less ability to predict the state at time t + 1 than if we knew the macrostate (but not the microstate). But how could that be? How could we not know the macrostate? The idea that we could know the exact microstate at time t but not be able to compute (even in principle) the macrostate at the same time t (so before any non-deterministic events could have happened) would indeed defeat reductionism, but is surely a radical departure from how we think the world works, and is not supported by any evidence.

 

But Mitchell does indeed suggest this:

The low level details alone are not sufficient to predict the next state of the system. Because of random events, many next states are possible. What determines the next state (in the types of complex, hierarchical systems we’re interested in) is what macrostate the particular microstate corresponds to. The system does not just evolve from its current state by solving classical or quantum equations over all its constituent particles. It evolves based on whether the current arrangement of those particles corresponds to macrostate A or macrostate B. 

 

But this seems to conflate two ideas:

1) In-principle computing/reproducing the state at time t + 1 from the state at time t (determinism).

2) In-principle computing/reproducing the macrostate at time t from the microstate at time t (reductionism).

 

Mitchell’s suggestion is that we cannot compute: {microstate at time t} → {macrostate at time t + 1}, but can compute: {macrostate at time t} → {macrostate at time t + 1}. (The latter follows from: “What determines the next state … is [the] macrostate …”.)

And that can (surely?) only be the case if one cannot compute: {microstate at time t} → {macrostate at time t}, and if we are denying that then we’re denying reductionism as an input to the argument, not as a consequence of indeterminism.

 

KM: Okay, so this is the crux of the argument and I really welcome the questioning, which will hopefully help me clarify what I mean. It is not that knowing any particular microstate at time t would not also tell you the macrostate at that time – it certainly would. It’s more a question of understanding the full picture of causality in the system. The reason that any given microstate will tend to lead to some subsequent microstate is that the macrostate it entails means something.

 

The existence of indeterminacy makes it possible – not inevitable, by any means, but possible – that systems could evolve (which we call living organisms) that do have goals, purpose, value, meaning, and function – that do things for reasons. Those parameters are encoded at the level of macrostates. Of course, at any given moment, they are realized in some particular microstate, but the low-level details may be incidental. Indeed, many aspects of the microstate of any given neuron or brain region are lost or actively filtered out in the coarse-graining that happens through synaptic transmission and population-level neural dynamics. It’s meaning that drives the mechanism. 
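
To make the coarse-graining point concrete, here is a deliberately simple toy sketch in Python, offered purely as an illustration of the idea rather than as a model of any real system: many different microstates map onto the same macrostate, and the system’s next “move” is conditioned on that macrostate (plus a little low-level chance), with the remaining microstate detail filtered out.

    import random

    def macrostate(microstate):
        # Coarse-graining: only whether a majority of units are 'on' matters.
        return "A" if sum(microstate) > len(microstate) / 2 else "B"

    def next_move(microstate):
        # The dynamics consult the macrostate, not the particular microstate realizing it.
        if random.random() < 0.1:           # low-level indeterminacy
            return "pause"
        return "approach" if macrostate(microstate) == "A" else "withdraw"

    m1 = [1, 1, 1, 0, 0]                    # two different microstates ...
    m2 = [0, 1, 1, 1, 0]
    print(macrostate(m1), macrostate(m2))   # ... same macrostate "A"
    print(next_move(m1), next_move(m2))     # same behaviour (up to the chance "pause")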

 

Mitchell draws the conclusion:

In complex, dynamical systems that are far from equilibrium, some small differences due to random fluctuations may thus indeed percolate up to the macroscopic level, creating multiple trajectories along which the system could evolve. […]

I agree, but consider that to be a consequence of indeterminism, not a rejection of reductionism.

This brings into existence something necessary (but not by itself sufficient) for things like agency and free will: possibilities.

As someone who takes a compatibilist account of “agency” and “free will” I am likely to disagree with attempts to rescue “stronger” versions of those concepts. But that is perhaps a topic for a later post.

 

KM: To be clear, the arguments laid out above only make the case that indeterminacy is necessary for agency to exist (because it creates real possibilities for agents to choose between, and because it opens the door to macroscopic causation). How organisms have evolved to take advantage of that causal slack – to become causes in their own right – is a much longer story, and the subject of my next book ;-) With thanks again to Coel for the discussion.


For more on that argument, see: Mitchell KJ. Does Neuroscience Leave Room for Free Will? Trends Neurosci. 2018 Sep;41(9):573-576. doi: 10.1016/j.tins.2018.05.008.
