Where do morals come from?
Review of “Braintrust: What Neuroscience Tells Us about Morality”, by Patricia S. Churchland
The question of “where morals come from” has exercised philosophers, theologians and many others for millennia. It has lately, like many other questions previously addressed only through armchair rumination, become addressable empirically, through the combined approaches of modern neuroscience, genetics, psychology, anthropology and many other disciplines. From these approaches a naturalistic framework is emerging to explain the biological origins of moral behaviour. From this perspective, morality is neither objective nor transcendent – it is the pragmatic and culture-dependent expression of a set of neural systems that have evolved to allow our navigation of complex human social systems.
“Braintrust”, by Patricia S. Churchland, surveys the findings from a range of disciplines to illustrate this framework. The main thesis of the book is grounded in the approach of evolutionary psychology but goes far beyond the just-so stories of which that field is often accused, by offering not just a plausible biological mechanism to explain the foundations of moral behaviour, but one with strong empirical support.
The thrust of her thesis is as follows:
Moral behaviour arose in humans as an extension of the biological systems involved in recognition and care of mates and offspring. These systems are evolutionarily ancient, encoded in our genome and hard-wired into our brains. In humans, the circuits and processes that encode the urge to care for close relatives can be co-opted and extended to induce an urge to care for others in an extended social group. These systems are coupled with the ability of humans to predict future consequences of our actions and make choices to maximise not just short-term but also long-term gain. Moral decision-making is thus informed by the biology of social attachments but is governed by the principles of decision-making more generally. These entail looking not so much for the right choice as for the optimal one, based on satisfying a wide range of relevant constraints and assigning different priorities to them.
This does not imply that morals are innate. It implies that the capacity for moral reasoning and the predisposition to moral behaviour are innate. Just as language has to be learned, so do the codes of moral behaviour, and, also like language, moral codes are culture-specific, but constrained by some general underlying principles. We may, as a species, come pre-wired with certain biological imperatives and systems for incorporating them into decisions in social situations, but we are also pre-wired to learn and incorporate the particular contingencies that pertain to each of us in our individual environments, including social and cultural norms.
This framework raises an important question, however – if morals are not objective or transcendent, then why does it feel like they are? This is, after all, the basis for all this debate – we seem to implicitly feel things as being right or wrong, rather than just intellectually being aware that they conform to or violate social norms. The answer is that the systems of moral reasoning and conscience tap into, or more accurately emerge from, ancient neural systems grounded in emotion, in particular in attaching emotional value or valence to different stimuli, including the imagined consequences of possible actions.
This is, in a way, the same as asking why pain feels bad. Couldn’t it work simply by alerting the brain that something harmful is happening to the body, which should therefore be avoided? A rational person could then take an action to avoid the painful stimulus or situation. Well, first, that does not sound like a very robust system – what if the person ignored that information? It would be far more adaptive to encourage or enforce the avoidance of the painful stimulus by encoding it as a strong urge, forcing immediate and automatic attention to a stimulus that should not be ignored and that should be given high priority when considering the next action. Even better would be to use the emotional response to also tag the memory of that situation as something that should be avoided in the future. Natural selection would favour genetic variants that increased this type of response and would select against those that decoupled painful stimuli from the emotional valence we normally associate with them (they feel bad!).
In any case, this question is approached from the wrong end, as if humans were designed out of thin air and the system could ever have been purely rational. We evolved from other animals without reason (or with varying degrees of problem-solving faculties). For these animals to survive, neural systems are adapted to encode urges and beliefs in such a way as to optimally control behaviour. Attaching varying levels of emotional valence to different types of stimuli offers a means to prioritise certain factors in making complex decisions (i.e., those factors most likely to affect the survival of the organism or the dissemination of its genes).
For humans, these important factors include our current and future place in the social network and the success of our social group. In the circumstances under which modern humans evolved, and still to a large extent today, our very survival and certainly our prosperity depend crucially on how we interact and on the social structures that have evolved from these interactions. We can’t rely on tooth and claw for survival – we rely on each other. Thus, the reason moral choices are tagged with strong emotional valence is that they evolved from systems designed for optimal control of behaviour. Or, despite this being a somewhat circular argument, they feel right or wrong because it is adaptive to have them feel right or wrong.
Churchland fleshes out this framework with a detailed look at the biological systems involved in social attachments, decision-making, executive control, mind-reading (discerning the beliefs and intentions of others), empathy, trust and other faculties. There are certain notable omissions here: the rich literature on psychopaths, who may be thought of as innately deficient in moral reasoning, receives surprisingly little attention, especially given the high heritability of this trait. As an illustration that the faculty of moral reasoning relies on in-built brain circuitry, this would seem to merit more discussion. The chapter on Genes, Brains and Behavior rightly emphasises the complexity of the genetic networks involved in establishing brain systems, especially those responsible for such a high-level faculty as moral reasoning. The conclusion that this system cannot be perturbed by single mutations is erroneous, however. Asking what it takes, genetically speaking, to build the system is a different question from asking what it takes to break it. Some consideration of how moral reasoning emerges over time in children would also have been interesting.
Nevertheless, the book does an excellent job of synthesising diverse findings into a readily understandable and thoroughly convincing naturalistic framework under which moral behaviour can be approached from an empirical standpoint. While the details of many of these areas remain sketchy, and our ignorance still vastly outweighs our knowledge, the overall framework seems quite robust. Indeed, it articulates what is likely a fairly standard view among neuroscientists who work in or who have considered the evidence from this field. However, one can presume that jobbing neuroscientists are not the main intended audience and that, outside the field, neither the details of this work nor its broad conclusions are widely known or held.
The idea that right and wrong – or good and evil – exist in some abstract sense, independent of the humans who somehow come to perceive them, is a powerful and stubborn illusion. Indeed, for many inclined to spiritual or religious beliefs, it is one area where science has not until recently encroached on theological ground. While the Creator has been made redundant by the evidence for evolution by natural selection and the immaterial soul similarly superfluous by the evidence that human consciousness emerges from the activity of the physical brain, morality has remained apparently impervious to the scientific approach. Churchland focuses her last chapter on the idea that morals are absolute and delivered by Divinity, demonstrating first the contradictions in such an idea and then, with the evidence for a biological basis of morality provided in the rest of the book, arguing convincingly that there is no need of that hypothesis.
Comments

Good article! Main point: the idea of good and evil is ultimately a 'powerful and stubborn illusion'. But it is impossible to live as though heinous acts such as 'torturing babies' are merely illusory, pragmatic or socially constructed. A crucial element of morality is its 'oughtness', which is not so much explained as denied.
Greg Koukl put it nicely: "Evolution may be an explanation for the existence of conduct we choose to call moral, but it gives no explanation why I should obey any moral rules in the future. If one countered that we have a moral obligation to evolve, then the game would be up, because if we have moral obligations prior to evolution, then evolution itself can't be their source."
More on Monkey Morality: http://tinyurl.com/cbsqmf
I think what needs explanation is why moral choices have that "oughtness" about them that many people would say is innate. Why do they feel that way? The argument I put forward above is that it is adaptive to have them feel that way as an unconscious motivation to behave in an evolutionarily adaptive way.
Hitler believed that the Jews were an inferior race and less evolved than Germans. To breed with them would taint the German blood line. Hitler sought to eradicate the Jews by slaughtering them by the thousands. His thinking was based on evolutionary thinking of helping the more fit to survive. According to his own relative morals, he was right. But was Hitler right? Your article fails to answer more concrete issues because it beats around the bush and confuses recognition of morals and pragmatic following of morals versus an actual "oughtness" about morals. With what you have written, what basis or standard do you have to assert that Hitler was in fact wrong? Or are you going to assert that he was subjectively right according to his culture and laws?
I don't see how you get that from what I wrote. Part of the point was that we don't all individually get to choose our "relative morals", as you say. They are an evolved set of predispositions that generally favour social cooperation, fairness, reciprocity, harm avoidance, etc. Just because they don't exist in the abstract as "right and wrong" does not mean they don't exist at all and certainly does not mean those terms can't be applied to human behaviours.
Morality is relative to the individual or group. It mainly has to do with subjective experience. Other factors involve what we are and the process of how we came to be.
"While the Creator has been made redundant by the evidence for evolution by natural selection"

...umm, you're wrong. Your assertion assumes that evolution attempts to answer the origin of life. It does not. The question of creation and of a creator is neither asked nor answered by natural selection. Evolution answers the question "why is there diversity in biological life?" It simply does not attempt to answer the question of what the origin of life is (indeed it does not have an answer to it). You have turned evolution into a god...that is not the intent of the theory/science of evolution.
"While the Creator has been made redundant by the evidence for evolution by natural selection and the immaterial soul similarly superfluous by the evidence that human consciousness emerges from the activity of the physical brain"

Completely false. Please don't spread lies.
In Western society we have two primary competing claims for the origin and basis of Morality: naturalist evolution and scriptural theism. …Each individual must weigh for herself which alternative holds the most merit.
On the one hand, naturalism holds that in a world where survival is contingent on both competition and social cooperation, there is bound to be a conflict between self-serving impulses (evil, from a societal standpoint) and group-serving impulses (good, from a societal standpoint).
On the other hand, Christian theism holds that an omniscient god creates a perfect human couple (knowing they will be tempted to sin by a talking serpent), then wipes out nearly the whole of the human race in the time of Noah (knowing in advance they would all turn evil), then, with his foreknowledge, ultimately consigns the majority of the human race to an endless torment in hell (while asking us to turn the other cheek against our own enemies), requiring the murder of his own son to redeem the minority of humanity that recognizes and accepts this Grand Plan.
—Kenneth W. Daniels, former evangelical Christian missionary in his book, “Why I Believed”