Undetermined - a response to Robert Sapolsky. Part 3 - Where do intentions come from?

In his book Determined, Robert Sapolsky argues that our intentions arise in a completely deterministic fashion from the combined effects of all the prior causes that have acted on us, right up to the moment of action. He contends (i) that our intentions determine what we do, and (ii) that we have no control over their formation – they just appear when we are confronted with each successive situation we encounter. Referring to a classic turn-back-the-clock kind of thought experiment, he says:

 

But no matter how fervent, even desperate, you are, you can’t successfully wish to have wished for a different intent. And you can’t meta your way out—you can’t successfully wish for the tools (say, more self-discipline) that will make you better at successfully wishing what you wish for. None of us can. (page 46, original emphasis)

 

Here, Sapolsky seems to be arguing for psychological determinism. Your behavior at any moment is fully determined by the sets of reasons that you bring to the situation. So, echoing Arthur Schopenhauer, you can (indeed you must!) “do what you want”, but you can’t “want what you want”.

 

In a chapter entitled “Where Does Intent Come From?”, Sapolsky catalogues all the prior causes flowing seamlessly into each other:

 

So on and so on. Each moment flowing from all that came before. And whether it’s the smell of a room, what happened to you when you were a fetus, or what was up with your ancestors in the year 1500, all are things that you couldn’t control. A seamless stream of influences that, as said at the beginning, precludes being able to shoehorn in this thing called free will that is supposedly in the brain but not of it. (page 80)

 

Note first, as discussed in Part 2 of this series, that the studies supposedly demonstrating these influences are not, in my view, reliable (at all) and give a misleading impression of the strength of these effects. And note also the dualist framing, pointed out in Part 1 of this series, requiring free will to be “in the brain but not of it”. He continues in a similar vein:

 

In order to prove there’s free will, you have to show that some behavior just happened out of thin air in the sense of considering all these biological precursors. (page 83)

 

For Sapolsky, you are thus simply the product of all the passive influences that have moulded you to be the person you are. The way you are currently constituted then fully determines your actions in every possible scenario you might encounter, without you, as a whole self, really being involved in that process – specifically, none of what happens is up to you. You are, in essence, pushed around by your own reasons, none of which you had any hand in choosing.

 

As we will see, this kind of psychological determinism reduces to the claim that all these prior influences have collectively caused your physical brain to be configured in the way that it is, which then physically necessitates subsequent states, resulting in this conclusion: 

 

It is why it is anything but an absurdly high bar or straw man to say that free will can exist only if neurons’ actions are completely uninfluenced by all the uncontrollable factors that came before. It’s the only requirement there can be, because all that came before, with its varying flavors of uncontrollable luck, is what came to constitute you. This is how you became you. (page 84)

 

Sapolsky takes issue with compatibilist arguments claiming that, even if you have no choice or control over what you do right now – because of the way your brain is configured, embodying all kinds of policies and heuristics that will inform your current intentions – you can nevertheless be held responsible for having those policies and heuristics. Here, I agree with him – it seems incoherent for compatibilists to admit that at any given moment we have no real choice (because determinism holds), but then claim that in the past we could have chosen which policies to adopt, and that we can now be held responsible for those choices.

 

However, Sapolsky’s own view – diametrically opposed to the compatibilist position – is entirely circular. The logic is that, because you never had control, you can never have control:

 

all we are is the history of our biology, over which we had no control, and of its interaction with environments, over which we also had no control, creating who we are in the moment. (page 85)

 

He argues that because choice doesn’t exist in any instant, you – as a person, a self – could have had no control over your own dispositions. The argument thus rests on the very thing it is trying to prove.

 

If, instead, choice does exist in any moment, and you do have causal power over your own behavior, including the adoption of policies and heuristics that will inform future actions, then this view of you being an automaton passively shaped by forces outside your control is undermined. In its place we get a picture – and a naturalised scientific framework – of organisms steering their own course through the world, as best they can, deciding what to do in any moment, based on what they have learned from past experiences, in the context of their ongoing plans and commitments and agendas. In fact, it is precisely this combination of historicity and future-directedness that constitutes being a self, with continuity through time.

 

 

Where do our intentions come from?

 

Our intentions do not, in fact, just follow automatically from how we are currently configured and then control what we do, like commands to be executed by a computer. They are the outcome of our decision-making processes. That’s the point of decision-making – to figure out what we should do. Once we figure that out, it becomes our intention. This could be an immediate motor action, like moving our hand to switch on a light. Or it could be a longer-term project, like intending to finish university – a goal we adopt, which then provides context for our future and ongoing activities, which in turn provide context for our specific actions.

 

We don’t just go around reacting to stimuli in isolated instants. We manage our behavior proactively through time. We pursue agendas, we adopt policies and heuristics, we make plans and commitments, and engage in projects with long-term goals. Choosing an action is really choosing an objective – a desired future state of the world (including the state of our selves). We make the future what we want it to be – that is the point of action. But not just the immediate next time-point – we work to make things in the far future how we’d like them to be. This longer-term view of the management of behavior through time gives a better perspective on where our goals and intentions come from.

 

Achieving our goals requires sustained action – we usually have to carry out some series of activities over some period of time. If I have the goal of eating, I may have to spend time cooking a meal. For this activity to achieve my goal, I have to keep doing it. So choosing the goal to cook dinner, based on the motivation of being hungry, constrains and informs what I will intend to do over the course of whatever period of time it takes to do that task. I may intend to chop up an onion, and sauté some chicken, and make a lovely little soy and ginger sauce, and so on. Those intentions thus come from me having adopted this goal and from me thinking about the steps I need to take to achieve it.

 

Similarly, if I decided to play a round of golf, I would have, in the process, decided to intend to put the little ball in the hole. That intention came from me. When we decide to do something, a whole bunch of intentions have to come along in order to achieve that. So it’s not the case that while we can do what we want, we can’t want what we want. Our reasons don’t just reflect the pre-configuration of our brains in isolated instants. We come to those reasons by reasoning.

 

Those processes of decision-making must take into account our current state, the state of the world, and the states we would ideally like both our selves and the world to be in, and then (i) suggest and (ii) evaluate options for action, to find the one most likely to bring about those desired states. Now, you might argue that we don’t have control over the middle bit – the states we would ideally like our selves to be in. And, at a general and low level, that’s true – we’re wired to want to be well fed and watered, and warm and safe and loved and even entertained and fulfilled, and so on. Most fundamentally, we’re wired to want to survive.

 

But those kinds of basic evolutionary drives are not sufficient to determine our behavior. They are too general and context-independent. The same is true for the broad tunings of decision-making parameters that are reflected in what we call personality traits. We can’t rely on these general drives and tunings to tell us the best thing to do in any given situation. We have to be able to assess the particular situation we’re in and set more specific, context-appropriate goals, and select actions to achieve them. This entails cognition.

 

 

Why we need cognition

 

Cognition, roughly speaking, means using information to solve problems – usually to manage behavior across changeable environments. In some cases, a lot of the work of cognition is pre-done by evolution. For example, the biochemical configuration of a bacterium may effectively embody adaptive control policies, such as the tendency to move in certain directions, based on information acquired by receptor proteins about various substances out in the world. These kinds of systems can be reasonably complex – able to integrate multiple signals at once and mount a response that depends on many other contextual factors, including the recent history of the individual bacterium itself.

 

Even simple behaviors like chemotaxis thus entail holistic and integrative system-level control, aimed at enacting the optimal response. But what is the optimal response? In the bacterium, there is a manageable number of variables at play. Even if the context-dependent interactions are complicated, they’re not unworkably complicated. Evolution can pre-code a system of contingencies and context-dependent relations that are sufficient to deal with most scenarios that the organism will encounter, based on the limited range of scenarios its ancestors encountered.
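
To make the idea of an evolutionarily pre-coded control policy concrete, here is a minimal toy sketch in Python – my own illustration, not a real biochemical model and not anything from Sapolsky’s book. The weights, the adaptation rule, and the tumble probabilities are all invented for the sake of the example; the point is just that a fixed rule can integrate several signals, plus a simple memory of recent history, and still produce context-dependent behavior.

```python
import random

def chemotaxis_step(attractant, repellent, memory, adaptation_rate=0.1):
    """Decide whether to 'run' or 'tumble' from current signals and an adapting
    memory of recent signal levels (a crude analogue of receptor adaptation)."""
    net_signal = attractant - repellent                 # integrate opposing inputs
    gradient = net_signal - memory                      # compare with recent history
    memory += adaptation_rate * (net_signal - memory)   # slowly adapt the baseline
    p_tumble = 0.1 if gradient > 0 else 0.5             # keep running while things improve
    action = "tumble" if random.random() < p_tumble else "run"
    return action, memory

memory = 0.0
for attractant, repellent in [(0.2, 0.1), (0.5, 0.1), (0.4, 0.3)]:
    action, memory = chemotaxis_step(attractant, repellent, memory)
    print(action, round(memory, 2))
```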

 

That strategy has limits, however. It certainly won’t do for us. Humanity’s superpower – our whole evolutionary schtick – is cognitive flexibility. We don’t have hard-coded responses to every situation – we couldn’t have. There isn’t enough information in our genomes or our brains to pre-code defined responses to every kind of situation we may encounter. There are too many variables at play, too many relations to consider, too many goals to manage.

 

The solution is to decouple inputs from immediate action and instead gather information about what is out in the world and submit it to a central system, where it can all be considered, in the light of our stored knowledge about the world, and our current and ongoing goals, in order to try to decide on the best course of action to globally optimise over a host of variables all at once. In other words, we need to think.

 

In some cases, there may be one clear optimal course of action. But in many cases, there may be several, or none. The organism has to take in all kinds of sensory data, do its best to infer the causes of those data (i.e., infer what is out in the world), and link that to its imperfect and incomplete knowledge, in order to update its model of the world, with varying degrees of certainty attached to different elements. It then has to predict the outcomes and utility of possible actions – while the world and other agents are changing too – all while trying to satisfy multiple goals at once. That just is the organism deciding what to do, in real time.
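
As a rough illustration of the kind of loop being described – and only an illustration; the states, goals, and numbers below are invented placeholders, not a claim about how brains actually compute – here is a minimal sketch of updating an uncertain belief from new evidence and then scoring candidate actions against several weighted goals at once.

```python
# Degrees of certainty about the world (the organism's model), to be updated by evidence.
beliefs = {"it_is_raining": 0.3}

def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """Crude Bayesian update of one belief from one new piece of sensory evidence."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# New evidence: dark clouds, which are more likely if it is in fact raining.
beliefs["it_is_raining"] = update_belief(beliefs["it_is_raining"], 0.8, 0.2)

# Multiple goals, weighted by current priority.
goals = {"stay_dry": 0.7, "save_time": 0.3}

# Predicted value of each candidate action for each goal, given current beliefs.
actions = {
    "take_umbrella": {"stay_dry": 1.0, "save_time": 0.6},
    "leave_umbrella": {"stay_dry": 1.0 - beliefs["it_is_raining"], "save_time": 1.0},
}

def expected_utility(outcomes):
    return sum(weight * outcomes[goal] for goal, weight in goals.items())

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, {a: round(expected_utility(o), 2) for a, o in actions.items()})
```

Even in this toy version, the “decision” is the outcome of a process that weighs beliefs and goals together, rather than a response read directly off a stimulus.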

 

There seem to be two ways to take Sapolsky’s counter-argument, in relation to these processes of decision-making. Either, as put by my student Henry Potter: (i) “all of the system’s prior influences have hammered into it a neural and biochemical configuration that essentially embodies a look-up table for which action to perform under different circumstances. There is therefore no decision-making process going on. In effect, the decision is already made, it just needs to be ‘realised’ or ‘enacted’.” Or: (ii) processes of decision-making do have to happen in each new scenario, but they always have only one possible outcome – that is, they proceed algorithmically and deterministically.
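
The contrast between these two readings can be made concrete with another toy sketch – again my own, with invented scenario names and an invented scoring rule. Interpretation (i) corresponds to a look-up table that would need an entry for every possible scenario in advance; interpretation (ii) corresponds to a procedure that runs afresh on whatever scenario actually arrives.

```python
# Interpretation (i): a pre-coded look-up table, which would need an entry
# for every scenario the organism could ever encounter.
lookup_table = {
    "hungry_and_food_nearby": "eat",
    "tired_and_somewhere_safe": "sleep",
    # ...an entry for every possible scenario, which cannot be pre-stated
}

# Interpretation (ii): a decision process that runs afresh on each new scenario.
# The question is whether such a process can only ever have one possible outcome.
def decide(scenario_features):
    scores = {
        "eat": scenario_features.get("hunger", 0.0),
        "sleep": scenario_features.get("fatigue", 0.0),
    }
    return max(scores, key=scores.get)

novel_scenario = {"hunger": 0.4, "fatigue": 0.9}   # a combination not in the table
print(lookup_table.get("hungry_and_tired_in_a_novel_place", "no entry"))
print(decide(novel_scenario))
```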

 

The first interpretation seems impossible and I presume is not actually what Sapolsky has in mind. There is no way to pre-state all the scenarios an organism might find itself in and all the appropriate actions. But the second interpretation is no better – or, at least, it’s wildly speculative.

 

 

Resting psychological determinism on neural and physical determinism

 

Sapolsky seems to be arguing, without evidence, that all of those activities of reasoning and deciding are algorithmically deterministic – that there is only ever one possible outcome. This is basically a behaviorist and mechanistic position. It sees our brains as just one big (though very complicated) stimulus-response machine. Fundamentally, it denies that cognition or mental causation is real – that us thinking about what to do could have any causal influence in this physical system. It says the “contents” of our mental states don’t really matter – it is only the vehicles of those states that have causal efficacy in the system.

 

It takes these cognitive operations as really no more than the firings of certain neural circuits, which were in fact inevitable, at every moment in this sequence of events. This rests on the idea that the current configuration of our nervous system (which is the product of all those prior causes), plus the current incoming stimuli, deterministically result in a single next state of our nervous system, entailing some specific action. There is no choice. In every situation, all these antecedent causes will necessitate just one subsequent outcome.

 

There are two ways in which these processes could be algorithmically deterministic. The most obvious one is if the substrates they are instantiated in are physically deterministic. Then the whole argument just reduces to physical determinism, and the psychological framing of reasons and intents becomes an epiphenomenon. This is indeed the argument that many free-will skeptics make.

 

However, as we will see in Part 4, Sapolsky himself accepts that this isn’t the case. He discusses the evidence that there is both fundamental indeterminacy in physical systems and pervasive noisiness in neural systems. So we shouldn’t expect any neurally instantiated process to be fully deterministic in its physical details.

 

The determinist’s move to escape from this challenge is, ironically, a determinedly anti-reductive one. It relies on the idea that all that noisiness of neural components and their physical micro-constituents is coarse-grained over – filtered or averaged out over the large numbers of components of the system. Then you could get an emergently deterministic outcome at the macro-level, with the noise simply evaporating. This would of course depend on the structure of the system – on the way that it is organised.
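
A minimal numerical sketch – my own illustration, with arbitrary numbers – shows the statistical intuition behind this move: if a macro-level variable is an average over many noisy micro-components, the noise shrinks as the number of components grows, and the macro-level outcome becomes nearly, though never exactly, deterministic. Whether a real nervous system is organised so as to average its noise away in this fashion is, of course, exactly what is at issue.

```python
import random

def macro_state(n_components, signal=1.0, noise_sd=1.0):
    """Average of n noisy micro-components, each carrying the same underlying signal."""
    return sum(signal + random.gauss(0, noise_sd) for _ in range(n_components)) / n_components

# The spread across repeated runs shrinks as the number of components grows.
for n in (10, 1_000, 100_000):
    samples = [macro_state(n) for _ in range(5)]
    print(n, [round(s, 3) for s in samples])
```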

 

This move is doubly ironic, because, as I will explore in Part 4, it is precisely these factors of indeterminacy and emergence and organisation that enable organisms themselves to come to be in charge of what happens.

 





Part 1: A tale of two neuroscientists

 

Part 2: Assessing the scientific evidence

 
