8

A New View of Human Nature


 
 

The theory of constructed emotion is not just a modern explanation of how emotions are made. It’s also an ambassador for a radically different view of what it means to be a human being. This view is consistent with the latest research in neuroscience. It also gives you more control over your feelings and behavior than the classical view does, and it has deep implications for how to live your life. You are not a reactive animal, wired to respond to events in the world. When it comes to your experiences and perceptions, you are much more in the driver’s seat than you might think. You predict, construct, and act. You are an architect of your experience.

The classical view of emotion carries its own compelling view of human nature. It’s been around for thousands of years and is still embedded in law, medicine, and other critical elements of society. The two views have in fact been at war with each other throughout recorded history. In previous battles, the classical view of human nature has consistently come out on top, for reasons we’ll see. But now, as we’re in the midst of a revolution of mind and brain, modern neuroscience has given us the tools to settle the conflict, and based on overwhelming evidence, the classical view has lost.

In this chapter, I lay out the distinctive new view of human nature represented by the theory of constructed emotion and compare it to the traditional ideas espoused by the classical view. I also introduce you to a shadowy culprit that has kept the classical view so prominent for so long, entrenched in science and culture, despite a steady stream of contrary evidence.

 

Most of us think of the outside world as physically separate from ourselves. Events happen “out there” in the world, and you react to them “in here” in your brain.

In the theory of constructed emotion, however, the dividing line between brain and world is permeable, perhaps nonexistent. Your brain’s core systems combine in various ways to construct your perceptions, memories, thoughts, feelings, and other mental states. You experienced this with the blobby bee picture, when you saw shapes that didn’t physically exist, demonstrating that your brain models your world through simulation. Your brain issues a storm of predictions, simulates their consequences as if they were present, and checks and corrects those predictions against actual sensory input. Along the way, your interoceptive predictions produce your feelings of affect, influence every action that you perform, and determine which parts of the world you care about in the moment (your affective niche). Without interoception, you wouldn’t notice or care about your physical surroundings or anything else, and you’d be unlikely to survive for long. Interoception enables your brain to construct the environment in which you live.
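If you like to think in code, here is a toy sketch of that predict-and-correct loop. It is purely illustrative, not a model from neuroscience or from this book: the function name, the numbers, and the single learning_rate parameter are all invented for this example.

```python
# A toy version of the loop described above: hold a prediction,
# compare it to incoming sensory input, and correct the prediction
# by a fraction of the prediction error.

def predictive_loop(sensory_inputs, learning_rate=0.5):
    prediction = 0.0  # the brain's initial best guess
    for actual in sensory_inputs:
        error = actual - prediction  # prediction error: input minus guess
        print(f"predicted {prediction:5.2f}, saw {actual:5.2f}, error {error:+5.2f}")
        prediction += learning_rate * error  # correction step
    return prediction

predictive_loop([2.0, 2.0, 2.1, 1.9, 2.0])
```

Run it and the guesses home in on the signal; starve it of input and the last prediction simply persists, a crude analog of simulation running ahead of the senses.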

At the same time that your brain is modeling your world, the outside world helps to wire your brain. When you’re an infant, awash in sensory input, the outside world seeds your earliest concepts, as your brain hardwires itself to the realities of the physical world around you. That’s how babies’ brains become wired to recognize human faces. As your brain develops and you begin learning words, your brain hardwires itself to the social world, and you begin creating purely mental concepts like “Things That Can Protect You from Stinging Insects” and “Sadness.” These concepts from your culture appear to be in the outside world, but they are constructions of your conceptual system.

In this view, culture is not some gauzy, amorphous vapor that surrounds you. It helped to wire your brain, and you behave in certain ways that wire the brains of the next generation. For example, if a culture dictates that people with certain skin colors are less worthwhile, this social reality has a physical effect on the group: they have lower salaries and their children have poorer nutrition and living conditions. These factors change the structure of their children’s brains for the worse, making school harder and increasing the odds that the children will earn lower salaries in the future.1

Your constructions aren’t arbitrary—your brain (and the mind it creates) must keep in touch with the bits of reality that count in order to keep your body alive and healthy. Construction cannot make a solid wall unsolid (unless you have mutant superpowers), but you can redraw countries, redefine marriage, and decide who’s worthwhile and who isn’t. Your genes gave you a brain that can wire itself to its physical and social environment, and other members of your culture construct that environment with you. It takes more than one brain to create a mind.

The theory of constructed emotion also leads to a whole new way of thinking about personal responsibility. Suppose you’re angry with your boss and lash out impulsively, slamming your fist on his desk and calling him an idiot. Where the classical view might attribute some blame to a hypothetical anger circuit, partially absolving you of responsibility, construction extends the notion of responsibility beyond the moment of harm. Your brain is predictive, not reactive. Its core systems are constantly trying to guess what’s coming next so you can survive. Therefore your actions, and the predictions that launched those actions, are shaped by all your past experiences (as concepts) that led up to that moment. You slam that desk because your brain predicted an instance of anger, using your concept of “Anger,” and your past experience (whether direct, or from movies or books, etc.) includes an action of slamming the desk in a similar situation.

Your control network, you may recall, constantly shapes the course of your predictions and prediction error to help select among multiple actions, whether you experience yourself as in control or not. This network can only work with the concepts that you’ve got. So the question of responsibility becomes, Are you responsible for your concepts? Not all of them, certainly. When you’re a baby, you can’t choose the concepts that other people put into your head. But as an adult, you absolutely do have choices about what you expose yourself to and therefore what you learn, which creates the concepts that ultimately drive your actions, whether they feel willful or not. So “responsibility” means making deliberate choices to change your concepts.2

As a real-world example, pick any extended conflict in the world: Israelis versus Palestinians, Hutus versus Tutsis, Bosnians versus Serbs, Sunni versus Shia. Going out on a limb here, I’d like to suggest that no living member of these groups is at fault for the anger that they feel toward each other, since the conflicts in question began many generations ago. But each individual today does bear some responsibility for continuing the conflict, because it’s possible for each person to change their concepts and therefore their behavior. No particular conflict is predetermined by evolution. Conflicts persist due to social circumstances that wire the brains of the individuals who participate. Someone must take responsibility to change these circumstances and concepts. Who’s going to do it, if not the people themselves?

On this point, a scientific study provides some preliminary hope. Researchers trained a group of Israelis to think about various negative events, such as Palestinians’ launching rockets and the kidnapping of an Israeli soldier, and recategorize them as less negative. The trainees were not only less angry afterward but also showed greater support for policies leading to more peaceful and conciliatory resolutions, such as providing aid to Palestinians, as well as less support for aggressive tactics toward Palestinians living in the Gaza Strip. Around the time of the recent Palestinian bid for membership in the United Nations, this training in recategorization led people to support giving up security control over neighborhoods in East Jerusalem in exchange for full peace, and to show less support for restrictive policies like prohibiting Palestinians from using the Israeli medical system. These latter changes persisted for five months after training.3

If you grow up in a society full of anger or hate, you can’t be blamed for having the associated concepts, but as an adult, you can choose to educate yourself and learn additional concepts. It’s certainly not an easy task, but it is doable. This is another basis for my frequent claim, “You are an architect of your experience.” You are indeed partly responsible for your actions, even so-called emotional reactions that you experience as out of your control. It is your responsibility to learn concepts that, through prediction, steer you away from harmful actions. You also bear some responsibility for others, because your actions shape other people’s concepts and behaviors, creating the environment that turns genes on and off to wire their brains, including the brains of the next generation. Social reality implies that we are all partly responsible for one another’s behavior, not in a fluffy, let’s-all-blame-society sort of way, but a very real brain-wiring way.

When I was a therapist, I worked with college-aged women who, as little girls, had suffered abuse at the hands of parents. I used to help my clients understand that they had been victimized twice: once in the moment and again because they had been left with emotional suffering that only they could resolve. Due to their trauma, their brains continued to model a hostile world, even after they had escaped to a better one. It was not their fault that their brains were wired for a specific, toxic environment. But each woman was the only one who could transform her conceptual system to make things better. That’s the form of responsibility that I mean. Sometimes, responsibility means that you’re the only one who can change things.

And now, we come to the question of human origin. We are accustomed to thinking about ourselves as the final destination of a long evolutionary journey. The theory of constructed emotion takes a more balanced perspective. Natural selection did not aim itself toward us. We are just another species with particular adaptations that help pass our genes to the next generation. Other animals have evolved plenty of powers that we don’t have, like leaping great distances and scaling walls, which is why we’re so fascinated by superheroes like Spider-Man. Humans are clearly the most talented at building rockets that reach other planets, and inventing and enforcing laws that exist in our minds and dictate how we treat each other. Something in the human brain gives us our unique abilities, but that “something” needn’t be separate, dedicated brain circuitry for rocketry and law enforcement—or, for that matter, emotions—passed down from our non-human ancestors.

One of your most notable adaptations is that you needn’t carry all the genetic material to create all the wiring in your brain. That would be tremendously expensive, biologically speaking. Instead, you have genes that let your brain develop in the context of the other brains around you, through culture. Just as an individual brain takes advantage of redundancy, compressing information into similarities and differences, multiple brains take advantage of one another’s redundancies (that we’re in the same culture and learned the same concepts) and wire each other. In effect, evolution improves its efficiency via human culture, and we pass culture to our offspring by wiring their brains.

The human brain, from the macro level to the micro level, is organized for variation and degeneracy. In its interacting networks, clusters of neurons are partly independent and share a lot of information efficiently. This arrangement allows ever-changing populations of neurons to form and dissolve in milliseconds, so that single neurons participate in different constructions in different situations, modeling a variable and only partly predictable world. Neural fingerprints have no place in such a dynamic environment. It would be highly inefficient for all of humanity to have one inherited set of mental modules, given that we live in such diverse geographic and social environments around the world. The human brain evolved to create different kinds of human minds, adapted to different environments. We don’t need one universal brain creating one universal mind to claim that we are all one species.4

On the whole, the theory of constructed emotion is a biologically informed, psychological explanation of who you are as a human being. It takes into account both evolution and culture. You are born with some brain wiring as determined by your genes, but the environment can turn some genes on and off, allowing your brain to wire itself to your experiences. Your brain is shaped by the realities of the world that you find yourself in, including the social world made by agreement among people. Your mind is a grand collaboration that you have no awareness of. Through construction, you perceive the world not in any objectively accurate sense but through the lens of your own needs, goals, and prior experience (as you did with the blobby bee). And you are not the pinnacle of evolution, just a very interesting sort of animal with some unique abilities.

The theory of constructed emotion provides a very different outlook on human nature than the classical view does. Classical ideas about our evolutionary origins, our personal responsibility, and our relationship with the outside world have been dominant in Western culture for thousands of years. To understand this older view of human nature and why it’s been so entrenched for so long, it’s convenient to begin—as so many scientific stories do—with Charles Darwin.

In 1872, Darwin published The Expression of the Emotions in Man and Animals, where he wrote that emotions were passed down to us, unchanging through the ages, from an early animal ancestor. Emotions in modern humans are therefore caused by ancient parts of our nervous system, according to Darwin, and each emotion has a specific, consistent fingerprint.5

To borrow a term from philosophy, Darwin was saying that each emotion has an essence. If instances of sadness occur with a pout and a slowed heart rate, then a fingerprint of “pout and slowed heart rate” may be the essence of sadness. Alternatively, the essence might be an underlying cause that makes all the instances of sadness the emotion they are, such as a set of neurons. (I’ll use the word “essence” to refer to both possibilities.)6

The belief in essences is called essentialism. It presupposes that certain categories—sadness and fear, dogs and cats, African and European Americans, men and women, good and evil—each have a true reality or nature. Within each category, the members are thought to share a deep, underlying property (an essence) that causes them to be similar, even if they have some superficial differences. There are many varieties of dog with differences in size, shape, color, gait, temperament, and so on, but these differences are considered superficial with regard to some essence that all dogs share. A dog is never a cat.

Likewise, all varieties of the classical view consider emotions like sadness and fear to have distinct essences. The neuroscientist Jaak Panksepp, for example, writes that an emotion’s essence is a circuit in the subcortical regions of your brain. The evolutionary psychologist Steven Pinker writes that emotions are like mental organs, analogous to body organs for specialized functions, and that an emotion’s essence is a set of genes. The evolutionary psychologist Leda Cosmides and the psychologist Paul Ekman assume that each emotion has an innate, unobservable essence, which they refer to as a metaphorical “program.” Ekman’s version of the classical view, called basic emotion theory, assumes that essences for happiness, sadness, fear, surprise, anger, and disgust are triggered automatically by objects and events in the world. Another version, called classical appraisal theory, inserts an additional step in between you and the world, saying that your brain first judges (“appraises”) the situation and decides whether to trigger an emotion. All versions of the classical view agree that each emotion category has a distinct fingerprint; they just disagree on the nature of the essences.7

Essentialism is the culprit that has made the classical view supremely difficult to set aside. It encourages people to believe that their senses reveal objective boundaries in nature. Happiness and sadness look and feel different, the argument goes, so they must have different essences in the brain. People are almost always unaware that they essentialize; they fail to see their own hands in motion as they carve dividing lines in the natural world.

Darwin’s belief in emotion essences, as revealed in Expression, helped to launch the modern classical view of emotion to prominence. That same belief also made Darwin unwittingly look like a hypocrite. It is no small task to criticize—let alone contradict—the ideas of one of the greatest scientists in history. But let’s have a go, shall we?

Darwin’s most famous book, On the Origin of Species, triggered a paradigm shift that transformed biology into a modern science. His greatest scientific achievement, so nicely summed up by the evolutionary biologist Ernst Mayr, was freeing biology from “the paralyzing grip of essentialism.” Regarding emotion, however, Darwin made an inexplicable about-face thirteen years later by writing Expression, a book riddled with essentialism. In doing so, he abandoned his remarkable innovations and returned to essentialism’s paralyzing grip, at least where emotions are concerned.8

You see, before Darwin’s theory from Origin became popular in the nineteenth century, essentialism ruled the animal kingdom. Each species was assumed to have an ideal form, created by God, with defining properties (essences) that distinguished it from all other species (each with its own essence). Deviations from the ideal were said to be due to error or accident. Think of this as the “dog show” version of biology. A dog show, in case you’ve never seen one, is a contest to identify the “best” dog in a field of competitors. The dogs do not directly compete with one another but are compared by judges to a hypothetical ideal dog to see who’s the closest. When rating Golden Retrievers, for example, the judges compare each competitor to the ideal image of a Golden Retriever. Is the dog the right height? Are its limbs symmetrical? Is the muzzle straight, blending smoothly with the skull? Is the coat a rich, dense, lustrous gold? Any differences from the ideal dog are regarded as error, and the dog with the smallest amount of error wins. In the same manner, influential thinkers of the early nineteenth century saw the world of living creatures as one big dog show. If you looked at a Golden Retriever and observed that its stride was longer than average, then its stride was too long compared to the ideal, or even wrong.9

Then along came Darwin, who argued that variations within a species, such as length of stride, are not errors. Instead, variations are expected and are meaningfully related to the species’ environment. Any population of Golden Retrievers has a variety of stride lengths, some of which provide a functional advantage for running, climbing, or hunting. The individuals with strides that best fit their environment will live longer and produce more offspring. This is Darwin’s theory of evolution from Origin in action, known as natural selection and sometimes called “survival of the fittest.” To Darwin, each species was a conceptual category—a population of unique individuals who vary from one another, with no essence at their core. The ideal dog doesn’t exist: it’s a statistical summary of many diverse dogs. No features are necessary, sufficient, or even typical of every individual in the population. This observation, known as population thinking, is central to Darwin’s theory of evolution.10
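Population thinking can be made concrete with a few lines of code. The sketch below is hypothetical, with invented stride lengths; it only shows that a statistical summary can exist on paper while no individual in the population matches it.

```python
# The "ideal" stride is just the mean of many varying, real strides.
import statistics

stride_lengths = [61.2, 58.7, 64.1, 59.9, 62.5, 60.3, 63.0]  # cm, invented

ideal = statistics.mean(stride_lengths)
print(f"'ideal' stride (mean): {ideal:.2f} cm")
print(f"any dog with exactly that stride? {ideal in stride_lengths}")
```

The mean is about 61.39 centimeters, and no dog in the list has it: the "ideal dog" is a summary of the population, not a member of it, and the deviations are not errors.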

Population thinking is based on variation, whereas essentialism is based on sameness. The two ideas are fundamentally incompatible. Origin is therefore a profoundly anti-essentialist book. So it is baffling that where emotion is concerned, Darwin reversed his greatest achievement by writing Expression.11

It is equally baffling, not to mention ironic, that the classical view of emotion is based on the very essentialism that Darwin is famous for vanquishing in biology. The classical view explicitly labels itself as “evolutionary” and assumes that emotions and their expressions are products of natural selection, yet natural selection is completely absent from Darwin’s thinking on emotion. Any essentialist view that wraps itself in the cloak of Darwin is demonstrating a profound misunderstanding of Darwin’s central ideas about evolution.

The compelling power of essentialism led Darwin to some beautifully ridiculous ideas about emotion. “Even insects,” he wrote in Expression, “express anger, terror, jealousy, and love” when they rub their body parts together to make sounds. Think about that the next time you’re chasing a fly in your kitchen. Darwin also wrote that emotional imbalance could cause frizzy hair.12

Essentialism is not only powerful but also infectious. Darwin’s perplexing belief in unvarying emotion essences lived on after his death and distorted the legacy of other famous scientists. In the process, the classical view of emotion gained momentum. The most important example is that of William James, considered by many to be the father of American psychology. James might not be the household name that Darwin is, but he was, quite simply, an intellectual giant. His 1,200-page tome Principles of Psychology contains many of Western psychology’s most important ideas and remains, after more than a century, the foundation of the field. His name graces the highest honor that can be bestowed on a scientist by the Association for Psychological Science, the William James Fellow Award, and Harvard’s psychology building is named William James Hall.

James is widely cited for saying that each type of emotion—happiness, fear, and so on—has a distinct fingerprint in the body. This essentialist idea is a key tenet of the classical view, and generations of James-influenced researchers have searched for those fingerprints in heartbeats, respiration, blood pressure, and other bodily markers (and have written some bestselling books on emotion). James’s statement has a catch, however: he never said it. The widely believed claim that he did comes from a hundred-year-old misinterpretation of his words through the lens of essentialism.

James actually wrote that each instance of emotion, not each category of emotion, comes from a unique bodily state. This is a wildly different statement. It means you can tremble in fear, jump in fear, freeze in fear, scream in fear, gasp in fear, hide in fear, attack in fear, and even laugh in the face of fear. Each occurrence of fear is associated with a different set of internal changes and sensations. The classical misinterpretation of James represents a 180-degree inversion of his meaning, as if he were claiming the existence of emotion essences, when ironically he was arguing against them. In James’s words, “ ‘Fear’ of getting wet is not the same fear as fear of a bear.”13

How did this widespread misunderstanding of James arise? I discovered that one of James’s contemporaries sowed the confusion, a philosopher named John Dewey. He came up with his own theory of emotion by grafting Darwin’s essentialist views from Expression onto James’s anti-essentialist ideas, even though they are fundamentally incompatible. The result was a Frankenstein’s monster of a theory that inverted James’s meaning by assigning an essence to each emotion category. For the finishing touch, Dewey named his concoction after James, calling it “the James-Lange theory of emotion.”* Today, Dewey’s role in this jumble is forgotten, and countless publications attribute his theory to James. A prominent example is the writings of neurologist Antonio Damasio, author of Descartes’ Error and other popular books on emotion. To Damasio, an emotion’s unique physical fingerprint, which he calls a somatic marker, is a source of information used by the brain to make good decisions. These markers are like little bits of wisdom. Emotional experience, according to Damasio, occurs when somatic markers are transformed into conscious feelings. Damasio’s hypothesis is actually a child of the James-Lange merger, not of James’s actual views on emotion.14

Dewey’s misinterpretation of James is one of the great mistakes in modern psychology, forged by essentialism in the name of Darwin. It is ironic, not to mention absurdly tragic, that Darwin’s name is invoked to lend authority to essentialist scientific views, given that his greatest scientific achievement was to vanquish essentialism in biology.

So why is essentialism so powerful that it can twist the words of great scientists and misdirect the path of scientific discovery?

The simplest reason is that essentialism is intuitive. We experience our own emotions as automatic reactions, so it’s easy to believe that they spring forth from ancient, dedicated parts of the brain. We also see emotions in blinks, furrowed brows, and other muscle twitches, and we hear emotions in the pitch and lilt of voices, without any sense of effort or agency. Therefore, it’s also easy to believe that we’ve been engineered by nature to recognize emotional displays and programmed to act on them. That’s a dubious conclusion, however. Millions of people around the world can instantly, effortlessly recognize Kermit the Frog, but that doesn’t mean the human brain is wired for Muppet recognition. Essentialism promises simple, single-cause explanations that reflect common sense, when in fact we live in a complex world.

Essentialism is also remarkably difficult to disprove. Since an essence can be an unobservable property, people are free to believe in essences even when they cannot be found. It’s easy to come up with reasons why an experiment did not detect an essence: “we haven’t looked everywhere yet,” or “it’s inside this complicated biological structure we can’t see into yet,” or “our tools today aren’t sufficiently powerful to find the essence, but one day they will be.” These hopeful thoughts are heartfelt but logically impossible to prove false. Essentialism inoculates itself against counterevidence. It also changes the way science is practiced. If scientists believe in a world of essences that are waiting to be discovered, then they devote themselves to finding those essences, a potentially endless quest.15

Essentialism also appears to be an inherent part of our psychological makeup. Humans create categories by inventing purely mental similarities, as you learned in chapter 5, and we name those categories with words. That’s why a word like “pet” or “sadness” applies to a multitude of diverse instances. Words are an incredible achievement, but they are also a Faustian bargain for the human brain. On one hand, a word like “sadness,” when applied to a collection of varied perceptions, invites you to search for (or invent) some underlying sameness that transcends their noticeable differences. That is, the word “sadness” guides you to create an emotion concept, which is a good thing. But the word also invites you to believe in a reason for that sameness: some deep, unobservable, or even unknowable quality that is responsible for their equivalence, giving them their true identity. That is, words invite you to believe in an essence, and that process is conceivably the psychological origin of essentialism. William James made a similar observation over a century ago when he wrote, “Whenever we have made a word . . . to denote a certain group of phenomena, we are prone to suppose a substantive entity existing beyond the phenomena, of which the word shall be the name.” The very words that help us to learn concepts can also trick us into believing that their categories reflect firm boundaries in nature.16

Research with children illustrates how the human brain constructs a belief in essences. A scientist shows a child a red cylinder, calling it a nonsense name like “blicket,” and demonstrates that it has a special function of lighting up a machine. Next, the child is shown two more objects, a blue square that the scientist also calls a “blicket,” and a second red cylinder that is not called a “blicket.” The child will expect only the blue square to light up the machine, despite its visual differences from the original red “blicket.” Children infer that each “blicket” contains an unseen causal force that lights the machine. This phenomenon, which scientists call induction, is an extremely efficient way for the brain to extend concepts by ignoring variation. However, induction also encourages essentialism. As a child, when you saw a friend slumped on the ground, crying at the loss of a toy, and were told that the kid felt sad, your brain inferred that there was an unseen causal force inside the child causing the feeling of sadness, the slumped body posture, and the crying. You extended your belief in this essence to other instances of children who were pouting, throwing tantrums, gritting their teeth, and engaging in other behaviors, because adults labeled them for you as sad. Emotion words reinforce the fiction that the equivalences we create are objectively real in the world, waiting to be discovered.17

Essentialism may also be a natural consequence of how your brain is wired. The same circuitry that allows you to form concepts and predict with them also makes essentializing easy. Your cortex learns concepts by separating similarities from differences, as you saw in chapter 6. It integrates information across vision, hearing, interoception, and the other sensory domains, compressing them into efficient summaries. Each summary is like a little imaginary essence, invented by your brain to represent that a bunch of instances from your past are similar.18
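As a rough analogy for this kind of compression, consider reducing several varied instances to a single summary. The feature names and numbers below are invented; the point is only that the resulting prototype is a statistical construction, not a thing found inside any one instance.

```python
# Compress varied instances (as feature vectors) into one summary.
instances = [
    {"heart_rate": -5, "posture": -2, "tears": 1},  # one varied instance
    {"heart_rate": 3, "posture": -3, "tears": 0},   # another, quite different
    {"heart_rate": -1, "posture": -1, "tears": 2},
]

prototype = {
    feature: sum(inst[feature] for inst in instances) / len(instances)
    for feature in instances[0]
}
print(prototype)  # an efficient summary: a little invented "essence"
```

The summary is efficient precisely because it discards the differences, which is also what makes it so easy to mistake for an essence.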

So, essentialism is intuitive, logically impossible to disprove, part of our psychological and neural makeup, and a self-perpetuating scourge in science. It is also the basis for the classical view’s most fundamental idea, that emotions have universal fingerprints. No wonder the classical view has such stamina—it’s powered by a virtually unkillable belief.

When you embed essentialism in a theory of emotion, you get something more than just a doctrine on how and why you have feelings. You get—yes—a compelling story of what it means to be a human being. A classical theory of human nature.

The classical story begins with your evolutionary origins. You are said to be an animal at the core. You allegedly inherited various mental essences from your non-human ancestors, including emotion essences buried deep within your subcortex. To quote Darwin, “Man, with all his noble qualities . . . with his god-like intellect . . . with all these exalted powers . . . still bears in his bodily frame the indelible stamp of his lowly origin.” Nevertheless, the classical view considers you special because your animalistic essences come gift-wrapped in rational thought. A uniquely human essence of reason supposedly lets you regulate your emotions by rational means, placing you at the pinnacle of the animal kingdom.19

The classical view of human nature also speaks to personal responsibility. It says that your behavior is governed by internal forces beyond your control: you are buffeted by the world and respond emotionally on impulse, like an erupting volcano or a boiling pot. According to this view, sometimes your emotion essences and cognitive essences vie for control of your behavior, and other times the two sets of essences work together to make you wise. Either way, if you’re at the mercy of strong emotions that can hijack you, the argument goes, then you might be less culpable for your actions. This assumption now sits at the foundation of Western legal systems, where so-called crimes of passion are given special treatment. Additionally, if you are completely devoid of emotion, then you are seen as more capable of inhuman acts. A serial killer who feels no remorse, some believe, is somehow less human than a murderer who deeply regrets his actions. If this is the case, then morality would be rooted in your ability to feel certain emotions.

The classical view also draws hard boundaries between you and the outside world. As you look around, you see objects like trees, rocks, houses, snakes, and other people. These objects exist outside your anatomical body. In this view, falling trees make a sound whether you’re present or not. Your emotions, thoughts, and perceptions, on the other hand, are said to exist inside your anatomical body, each with its own essence. So, by implication, your mind would be completely inside you and the world completely outside you.20

In a sense, the classical view wrenched human nature away from religion and placed it into the hands of evolution. You are no longer an immortal soul but a collection of specialized, distinct, inner forces. You come into the world preformed, not in God’s image but by your genes. You perceive the world accurately, not because God designed you this way but because the survival of your genes to the next generation depends on it. And your mind is a battleground, not of good and evil, righteousness and sin, but of rationality and emotionality, cortex over subcortex, inner versus outer forces, the thoughts in your brain versus the emotions in your body. You, with your animal brain wrapped in rational cortex, are distinct from other animals in nature, not because you have a soul but because you are the pinnacle of evolution, endowed with insight and reason.

Darwin embodied this essentialist view of human nature. Even though he vanquished essentialism from our understanding of the natural world, when it came to humans’ place in that world, essentialism got the better of him. Expression covered all three parts of the classical view of human nature: that animals and humans share universal essences of emotion, that emotions seek expression in the face and body outside of our control, and that they are triggered by the outside world.

In the years that followed, however, Darwin’s own essentialism came back to bite him in the behind. As Darwin’s intellectual descendants adopted his views, shaping the classical view, they ironically misinterpreted (or twisted?) his own words to conform more fully to essentialism.

Darwin indeed stated in Expression that humans display universal facial expressions that evolved from a common ancestor:

 

With mankind some expressions, such as the bristling of the hair under the influence of extreme terror, or the uncovering of the teeth under that of furious rage, can hardly be understood, except on the belief that man once existed in a much lower and animal-like condition. The community of certain expressions in distinct though allied species, as in the movements of the same facial muscles during laughter by man and by various monkeys, is rendered somewhat more intelligible, if we believe in their descent from a common progenitor.21

 

At first glance, you might think Darwin is saying that facial expressions are a useful and functional product of evolution, and, in fact, the classical view was founded on this idea. However, Darwin actually said the opposite. He wrote that smiles, frowns, eye-widening, and other expressions were useless, vestigial movements—products of evolution that no longer serve a function, like the human tailbone and appendix and the wings of the ostrich. He made this statement over a dozen times in Expression. Emotional expressions were primarily a compelling example for his broader arguments about evolution. If these expressions are useless in humans but shared with other animals, according to Darwin, they must exist because they were functional in a long-gone, common ancestor. Vestigial expressions would provide strong evidence that humans were animals, justifying his earlier views about natural selection from On the Origin of Species in 1859, which he then applied to human evolution in his next book, The Descent of Man, and Selection in Relation to Sex, in 1871.22

If Darwin didn’t claim that emotional expressions evolved to serve a survival function, then why do so many scientists fervently believe that he claimed this? I discovered the answer in the manuscripts of an early-twentieth-century American psychologist, Floyd Allport, who wrote extensively on Darwin’s ideas. In 1924, Allport made a sweeping inference from Darwin’s writing that significantly changed the original meaning. Allport wrote that expressions begin as vestigial in newborns but quickly assume function: “Instead of the biologically useful reaction being present in the ancestor and the expressive vestige in the descendant, we regard both these functions as present in the descendant, the former serving as a basis from which the latter develops.”23

Allport’s modification acquired a certain authenticity and validity, despite being inaccurate, because it supported the classical view of human nature. It was eagerly adopted by like-minded scientists who could now claim to be the heirs of the unassailable Charles Darwin. In reality, they are merely the heirs of Darwin-hacking Floyd Allport.

As you can see, Darwin’s name sometimes functions like a magical cloak that wards off the evil spirits of scientific criticism. It allowed Floyd Allport and John Dewey to transmute the words of William James and Darwin himself into their diametric opposites and shore up the classical view of emotion. The cloak is protective, for if you disagree with a Darwinian idea, you must be denying evolution. (Heck, you’re probably a closet creationist.)

Darwin’s magical cloak also helped to propagate the mistaken idea that the brain evolved as a bunch of blobs with distinct, dedicated functions. This key belief of the classical view led many scientists down the fruitless path of searching for emotion blobs in the brain. The path was paved by a Darwin-swaddled physician from the mid-nineteenth century, Paul Broca, who claimed to have discovered the brain blob for human language. He observed that patients with damage to a region of the left frontal lobe were rendered unable to speak fluently, a condition called nonfluent or expressive aphasia. When a person with Broca’s aphasia tries to say something meaningful, the words come out jumbled: “Thursday, er, er, er, no, er, Friday . . . Bar-ba-ra . . . wife . . . and, oh, car . . . drive . . . purnpike [sic] . . . you know . . . rest and . . . TV.” Broca inferred that he’d found the essence of language in the brain, much like classical view scientists point to amygdala lesions as proof of fear circuitry. The region has been known as Broca’s area ever since.24

The thing is, Broca had scant evidence for his claims, and other scientists had plenty of evidence that he was wrong. They pointed out, for example, that other patients with nonfluent aphasia had a perfectly healthy Broca’s area. But Broca’s idea prevailed anyway because it was protected by Darwin’s magical cloak reinforced by a healthy dose of essentialism. Thanks to Broca, scientists now had an evolutionary story for the origin of language—that it’s located in “rational” cortex—countering the prevailing belief that language was given by God. Today’s textbooks in psychology and neurology still hold up Broca’s area as the clearest example of localized brain function, even as neuroscience has shown that the region is neither necessary nor sufficient for language.* Broca’s area is actually a failure to localize a psychological function to a brain blob. Nevertheless, history was rewritten in Broca’s favor, lending strength to essentialist views of the mind.25

Broca and his Darwinian cloak went on to reinforce the classical fiction that emotion and reason evolved as layers in the brain, which you encountered in chapter 4 as the “triune brain.” Broca was inspired by Darwin’s claims in The Descent of Man that the human mind, like the human body, was sculpted by evolution. Darwin wrote that “animals are excited by the same emotions as ourselves,” surmising that human brains, like the rest of the human body, reflect our “lowly origin.” So Broca and other neurologists and physiologists launched a grand search for animalistic emotion circuits—our inner beast. They focused on what they believed to be ancient parts of the brain, whose circuits were allegedly regulated by the more evolutionarily advanced cortex.26

Broca localized the “inner beast” in what he believed to be an ancient “lobe” deep within the human brain. He named it le grand lobe limbique, or “the limbic lobe.” Broca did not brand his supposed lobe as the seat of emotion (actually, he thought it housed the sense of smell and other primitive survival circuitry), but he did treat limbic tissue as a single, unified entity, laying the first stone on a path toward essentializing it as the home of emotion. Over the next century, Broca’s limbic lobe morphed into a unified “limbic system” for emotion, guided by other believers in the classical view. This so-called system was said to be evolutionarily old; to be virtually unchanged from its origin in non-human mammals; and to control the heart, lungs, and other internal organs of the body. It allegedly lay between ancient “reptilian” circuits in the brainstem for hunger, thirst, and so on, and the newer, uniquely human layers of cortex that regulate mankind’s animalistic emotions. This illusory hierarchy embodied Darwin’s ideas about human evolution—base appetites having evolved first, followed by wild emotional passions, with rationality as our crowning glory.27

Scientists inspired by the classical view have claimed to localize many different emotions to limbic brain regions, such as the amygdala, that are (allegedly) under the control of the cortex and cognition. Modern neuroscience, however, has shown that the so-called limbic system is a fiction, and experts in brain evolution no longer take it seriously, let alone consider it a system. Accordingly, it’s not the home of emotion in the brain, which is unsurprising because no single brain area is dedicated to emotion. The word “limbic” still has meaning (when referring to brain anatomy), but the limbic system concept was just another example of applying an essentialist, Darwin-flavored ideology to the structure of the human body and brain.28

Long before Broca fashioned his first brain blob, the classical and construction views of human nature were at war. In Ancient Greece, Plato divided the human mind into three types of essences: rational thoughts, passions (which today we would call emotions), and appetites like hunger and sex drive. Rational thought was in charge, controlling the passions and appetites, an arrangement that Plato described as a charioteer wrangling two winged horses. A hundred years earlier, however, his countryman Heraclitus (chapter 2) was arguing that the human mind constructs perception in the moment, like constructing a river from countless drops of water. In Ancient Eastern philosophy, traditional Buddhism enumerated more than fifty discrete mental essences, called dharmas, some of which bear a striking resemblance to the so-called basic emotions of the classical view. Centuries later, a radical revision of Buddhism recast the dharmas as human constructions dependent on concepts.29

From those initial skirmishes, the war has continued throughout recorded history. The eleventh-century scientist Ibn al-Haytham, who made seminal contributions to developing the scientific method, held the constructionist view that we perceive the world through judgment and inference. Medieval Christian theologians were essentialists, associating different cavities in the brain with distinct essences of memory, imagination, and intelligence. Philosophers in the seventeenth century, such as René Descartes and Baruch Spinoza, believed in emotion essences and catalogued them, while eighteenth-century philosophers like David Hume and Immanuel Kant argued more for construction and perception-based explanations for human experience. The neuroanatomist Franz Joseph Gall in the nineteenth century founded phrenology, perhaps the ultimate essentialist view of the brain, to detect and measure mental essences as bumps on the skull (!!). Shortly thereafter, William James and Wilhelm Wundt espoused constructionist theories of the mind; as James wrote, “A science of the relations of mind and brain must show how the elementary ingredients of the former correspond to the elementary functions of the latter.” James and Darwin were also casualties within this war over human nature, as their views of emotion were, shall we say, “adjusted,” and the spoils went to scientists such as Broca who claimed a victory for evolution . . . or at least an essentialist sort of evolution.30

Plato’s essences of the mind are still around today, though their names have changed (and we’ve dispensed with the horses). Nowadays we call them perception, emotion, and cognition. Freud called them the id, the ego, and the superego. The psychologist and Nobel laureate Daniel Kahneman metaphorically calls them System 1 and System 2. (Kahneman is very careful to say it’s a metaphor, but many people seem to be ignoring him and essentializing Systems 1 and 2 as blobs in the brain.) The “triune brain” names them the reptilian brain, the limbic system, and the neocortex. Most recently, the neuroscientist Joshua Greene has used the intuitive analogy of a camera, which can operate quickly and effortlessly using its automatic settings, or more flexibly and deliberately in manual mode.31

On the other side of the fence, construction views of the mind are plentiful today. Psychologist and bestselling author Daniel L. Schacter has a construction theory of memory. And you can easily find construction theories for perception, the self, concept development, brain development (neuroconstruction), and of course the theory of constructed emotion.32

The battles today are all the more intense because it’s easy for each side to view the other in caricature. The classical view often dismisses construction as saying everything is relative, as if the mind were merely a blank slate and biology can be disregarded. Construction blasts the classical view for ignoring the powerful effects of culture and justifying the status quo. In caricature, the classical view says “nature” and construction says “nurture,” and the result has been a wrestling match between straw men.

Modern neuroscience, however, has burned down both caricatures. We are not blank slates, and our children are not “Silly Putty” to be shaped this way and that, but neither is biology destiny. When we peer into the workings of a functioning brain, we don’t see mental modules. We see core systems that interact continuously in complex ways to produce many sorts of minds, depending on culture. The human brain is itself a cultural artifact because it is wired by experience. We have genes that are turned on and off by the environment, and other genes that regulate how sensitive to the environment we are. I’m not the first person to make these points. But I am perhaps the first one to point out how brain evolution, brain development, and its resulting anatomy point in a clear direction for the science of emotion and our view of human nature.33

Ironically, the millennia-long war over human nature has itself been tainted by essentialism. Both sides have assumed that a single, superior force must be shaping the brain and designing the mind. In the classical view, this force has been nature, God, and then evolution. In construction, it has been the environment and then culture. But neither biology nor culture is responsible alone. Others have made this point before me, but it’s time to take it seriously. We don’t know every detail about how the mind and brain work, but we know enough to say definitively that neither biological determinism nor cultural determinism is correct. The boundary of the skin is artificial and porous. As Steven Pinker so nicely writes, “It is now simply misguided to ask whether humans are flexible or programmed, whether behavior is universal or varies across cultures, whether acts are learned or innate.” The devil is in the details, and the details give us the theory of constructed emotion.34

Now that the final nails are being driven into the classical view’s coffin in this era of neuroscience, I would like to believe that this time, we’ll actually push aside essentialism and begin to understand the mind and brain without ideology. That’s a nice thought, but history is against it. The last time that construction had the upper hand, it lost the battle anyway and its practitioners vanished into obscurity. To paraphrase a favorite sci-fi TV show, Battlestar Galactica, “All this has happened before and could happen again.” And since the last occurrence, the cost to society has been billions of dollars, countless person-hours of wasted effort, and real lives lost.

My cautionary tale begins in the early twentieth century, when scientists inspired by Darwin and the mutant James-Lange theory were searching in vain for the essences of anger, sadness, fear, and so on. Their repeated failures eventually led them to a creative solution. If we cannot measure emotions in the body and brain, they said, we’ll measure only what happens before and after: the events that bring on an emotion and the physical reactions that result. Never mind what’s happening inside that skull thing in the middle. Thus began the most notorious historical period in psychology, called behaviorism. Emotions were redefined as mere behaviors for survival: fighting, fleeing, feeding, and mating, collectively known as the “four F’s.” To a behaviorist, “happiness” equaled smiling, “sadness” was crying, and “fear” was the act of freezing in place. And so, the nagging problem of finding the fingerprints of emotional feelings was, with the flick of a pen, defined out of existence.35

Psychologists often recount stories of behaviorism in the same chilling tones as a ghost story around a campfire. It declared that thoughts, feelings, and the rest of the mind were unimportant to behavior or might not even exist. During these “dark ages” of emotion research, which lasted for several decades, nothing worthwhile was (supposedly) discovered about human emotion. Ultimately, most scientists rejected behaviorism because it ignores a basic fact: that each of us has a mind, and in every waking moment of life, we have thoughts and feelings and perceptions. These experiences, and their relation to behavior, must be explained in scientific terms. Psychology emerged from the darkness in the 1960s, according to the official history, as a cognitive revolution reinstated the mind as a topic of scientific inquiry, likening emotion essences to modules or organs in a mind that was thought to function like a computer. With this transformation, the final pieces of the modern classical view fell into place, and the two main flavors of the classical view—basic emotion theory and classical appraisal theories—were officially anointed.36

That’s what the history books say . . . but history books are written by the victors. The official history of emotion research, from Darwin to James to behaviorism to salvation, is a byproduct of the classical view. In reality, the alleged dark ages included an outpouring of research demonstrating that emotion essences don’t exist. Yes, the same kind of counterevidence that we saw in chapter 1 was discovered seventy years earlier . . . and then forgotten. As a result, massive amounts of time and money are being wasted today in a redundant search for fingerprints of emotion.

I discovered this quite by chance in 2006 while cleaning my office, when I stumbled across a couple of old papers from the 1930s, an era when emotion research was allegedly dead. These papers did not embrace behaviorism. They said that emotions do not have biological essences. Following a trail of references, I discovered a treasure trove of over a hundred publications, written across a span of fifty years, that most of my scientific colleagues had never heard of. The writers were nascent constructionists, though they did not use that term. They were running experiments to find physical fingerprints for distinct emotions, failing to do so, concluding that the classical view was unjustified, and speculating about constructionist ideas. I call this band of scientists the Lost Chorus because their work, published in prestigious journals, has been largely overlooked, ignored, or misunderstood since the supposed dark ages ended.37

Why did the Lost Chorus flourish for half a century and then vanish? My best guess is that these scientists did not offer a fully formed, alternative theory of emotion to compete with the compelling classical view. They presented solid counterevidence to be sure, but criticism alone was not enough to remain relevant. As philosopher Thomas Kuhn wrote about the structure of scientific revolutions: “To reject one paradigm without simultaneously substituting another is to reject science itself.” So when the classical view reasserted itself in the 1960s, half a century of anti-essentialist research was swept into history’s dustbin. And we are all the poorer for it, considering how much time and money are being wasted today in pursuit of illusory emotion essences. At press time, Microsoft is analyzing facial photographs in an attempt to recognize emotion. Apple has recently purchased Emotient, a startup company using artificial intelligence techniques in an effort to detect emotion in facial expressions. Companies are programming Google Glass ostensibly to detect emotion in facial expressions in an effort to help autistic children. Politicians in Spain and Mexico are engaging in so-called neuropolitics to discern voter preferences from their facial expressions. Some of the most pressing questions about emotion remain unanswered, and other important ones remain obscured, because many businesses and scientists continue practicing essentialism while the rest of us are figuring out how emotions are made.38

It’s hard to give up the classical view when it represents deeply held beliefs about what it means to be human. Nevertheless, the facts remain that no one has found even a single reliable, broadly replicable, objectively measurable essence of emotion. When mountains of contrary data don’t force people to give up their ideas, then they are no longer following the scientific method. They are following an ideology. And as an ideology, the classical view has wasted billions of research dollars and misdirected the course of scientific inquiry for over a hundred years. If people had followed evidence instead of ideology seventy years ago, when the Lost Chorus pretty solidly did away with emotion essences, who knows where we’d be today regarding treatments for mental illness or best practices for rearing our children.39

Every scientific journey is a story. Sometimes it’s a story of gradual discovery: “Once upon a time, people didn’t know very much, but we learned more and more over the years, and today we know lots of stuff.” Other times, it’s a tale of radical change: “Everyone used to believe something that seemed correct, but boy were we wrong! Now the fascinating truth is here.”

Our journey is more of a story within a story. The inner story is how emotions are made, wrapped in an outer story of what it means to be human. “For two thousand years, people believed something about emotions, despite abundant counterevidence all around us. The human brain, you see, is wired to mistake its perceptions for reality. Today, powerful tools have yielded a more evidence-based explanation that’s almost impossible to ignore . . . yet some people still manage.”

The good news is that we’re in a golden age of mind and brain research. Many scientists are now on a path forged by the data, rather than ideology, to understand emotion and ourselves. This new, data-driven understanding leads to innovative ideas about how to live a fulfilling and healthful life. If your brain operates by prediction and construction and rewires itself through experience, then it’s no overstatement to say that if you change your current experiences today, you can change who you become tomorrow. The next few chapters delve into these implications in the areas of emotional intelligence, health, law, and our relationships with other animals.40