Kohlberg, the Harvard psychologist, believed that
moral reasoning depended upon general cognitive maturity—another way of
saying that these things take time. If indeed decisions have strong
emotional roots, as we will explore, I would also argue that moral
reasoning depends upon emotional maturity. Though Kohlberg has his
critics, his ideas remain influential, as do those of his intellectual
mentor, Jean Piaget. The ideas of both men have been applied in schools,
juvenile detention facilities, even prisons. Kohlberg outlined a
progressive process for moral development:
1. Avoiding punishment. Moral reasoning starts out at a fairly primitive
level, focused mostly on avoiding punishment. Kohlberg calls this stage
pre-conventional moral reasoning.
2. Considering consequences. As a child’s mind
develops, she begins to consider the social consequences of her
behaviors and starts to modify them accordingly. Kohlberg terms this
conventional moral reasoning.
3. Acting on principle. Eventually, the child
begins to base her behavioral choices on well-thought-out, objective
moral principles, not just on avoidance of punishment or peer
acceptance. Kohlberg calls this coveted stage post-conventional moral
reasoning. One could argue that the goal of any parent is to land here.
Kids don’t necessarily arrive at this third stage
all by themselves. Along with time and experience, it can take a wise
parent to get a child to consistently behave in a manner congruent with
his or her inborn moral grammar. Part of the reason it’s tough is that
once children observe bad behavior, they have learned it. Even if the bad behavior is punished, it remains easily accessible in the child’s brain. Psychologist Albert Bandura was able to show this, with help from a clown.
Lessons from Bobo the clown
In the 1960s, Bandura showed preschoolers a film
involving a Bobo doll, one of those inflatable plastic clowns weighted
on the bottom. In the film, an adult named Susan kicks and punches the
doll, then repeatedly clobbers it with a hammer—buckets o’ violence.
After the film, the preschoolers are taken into another room filled with
toys, including (surprise) a Bobo doll and a toy hammer. What do the
children do? It depends.
If they saw a version of the film where Susan was
praised for her violent actions, they hit the doll with great frequency.
If they saw a version where Susan got punished, they hit Bobo with less
frequency. But if Bandura then strides into the room and says, “I will
give you a reward if you can repeat what you saw Susan do,” the children
will pick up a hammer and start swinging at Bobo. Whether they saw the
violence as rewarded or punished, they learned the behavior.
Bandura calls this “observational learning.” He
was able to show that kids (and adults) learn a lot by observing the
behaviors of others. It can be positive, too: a Mexican soap opera whose
characters celebrated books, then asked viewers to sign up for reading
classes, increased literacy rates across the country.
Bandura’s finding is an extraordinary weapon of mass instruction.
Observational learning plays a powerful role in
moral development. It is one of many skills hired in the brain’s ethical
construction project. Let’s take a peek inside.
Would you kill one to save five?
Imagine what you’d do in these two hypothetical situations:
1. You are the driver of a trolley car whose brakes have failed, and you
are now hurtling down the tracks at a breakneck, uncontrolled speed.
The trolley comes to a fork in the track, and you are suddenly
confronted with a no-win situation. If you don’t do anything, the train
will default to the left fork, killing five construction workers
repairing that side of the track. If you steer the train to the right,
you will kill only one. Which should you choose?
2. You are standing on an overpass, a trolley
track beneath you. As the trolley approaches, you see it is careening
out of control. This time there is no fateful fork in the track, just the
same five poor construction workers about to get killed. But there is a
solution. A large man is standing directly in front of you, and if you
push him off the overpass, he will fall in front of the trolley and his
body will stop it in its tracks. Though he will be killed, the other
five men will be saved. What do you do?
Each case presents the same ratio, five deaths to
one. The vast majority of people find the first scenario easy to answer.
The needs of the many outweigh the needs of the few. They’d steer the
trolley to the right. But the second situation involves a different
moral choice: deciding whether to murder someone. The vast majority of
people choose not to murder the man.
But not if they are brain-damaged. There’s a
region above the eyes and behind the forehead called the ventromedial
prefrontal cortex. If this area of the brain suffers damage, moral
judgment is affected. For these people, the fact of murder is not
particularly relevant to their choice. Convinced that the needs of the
many still outweigh the needs of the one, they push the large man over
the bridge—saving five people and killing one.
What does this mean? If morality is an innate part
of our brain’s neural circuitry, then damage to those areas should
change our ability to make moral decisions. Some researchers think these
results show just that. Others don’t think trolley experiments
demonstrate anything at all: no one, they argue, can relate hypothetical
decisions to real-life, in-the-moment experiences.
Any way out of this controversy? There might be, though it involves the
ideas of philosophers who have been dead for more than 200 years.
Emotion vs. reason
Philosophy titans such as David Hume thought that
base passions powered moral decisions. The brilliant Immanuel Kant
argued that dispassionate reason was—or at least ought to be—the driving force behind moral decision-making. Modern neuroscience is betting that Hume was right.
Some researchers believe that we have two sets of
moral reasoning circuits and that moral decisions (and conflicts) arise
because the two systems so frequently get into arguments. The first
system—the Kant circuits of our brain—is responsible for making rational
moral choices; it decides that saving five lives makes more sense than
saving one. The second system is more personal, even emotive,
working like a loyal opposition to the Kant circuitry. These neurons
let you visualize the large man plunging to his death, imagine how the
poor fellow and his family would feel, realize his horrible death would
be your responsibility. This Hume-like response causes most people’s brains
to pause, then issue a veto order over the choice to push. The brain’s
ventromedial prefrontal cortex is involved in mediating this
philosophical struggle. When it is damaged, Hume takes a hike.
If you lose emotions, you lose decision-making
What does this mean for parents wanting to raise a
moral child? Emotions are the
foundation of a child’s happiness. It appears that they are also the
basis of moral decision-making. A bombshell of a finding comes from a
man named Elliot, under the observant eyes of neuroscientist Antonio
Damasio.
Elliot had been a role model for his community: capable manager in a big business, terrific husband, church elder, family man.
Everything changed, however, the day he had brain surgery to remove a
tumor near his frontal lobe. He came out of his surgery with his
intelligence and perceptual skills intact. But he gained three unusual
traits.
First, he was incapable of making up his mind.
Elliot ruminated over the tiniest minutiae of life. Decisions that for
us take only seconds took him hours. He couldn’t decide what TV station
to watch, what color of pen to pick up, what to wear, where to go in the
morning. He analyzed everything endlessly. Like a man hovering over a
buffet table but incapable of putting anything on his plate, his life
became one long equivocation. Not surprisingly, Elliot’s world fell
apart. He lost his job and eventually his marriage. He started new
businesses and watched them all fail. The IRS investigated him. He
eventually went bankrupt and moved back in with his parents.
Damasio started working with Elliot in 1982. As he
put Elliot through the full gamut of behavioral tests, he soon noticed
the second unusual trait: Elliot could not feel anything emotionally.
In fact, he seemed to have no emotions at all. You could show him a
gory picture, an erotic picture, a baby. No measurable response from his
heart or his brain. He flat-lined. It was as if Damasio had hooked up
his fancy physiological electronics to a mannequin.
This led Damasio to the third trait. Elliot had trouble making moral judgments.
He couldn’t have cared less that his indecisive behaviors led to a
divorce or to bankruptcy or to any loss of social standing. Abstract
tests showed that he knew right from wrong, yet he behaved and felt as
if he didn’t. He could even remember that he used to experience such
feelings, but they were now lost in a distant moral fog. As scholar
Patrick Grim has observed, what Elliot did was clearly untethered from what Elliot knew.
This is an incredible finding. Because Elliot
could no longer integrate emotional responses into his practical
judgments, he completely lost his ability to make up his mind. His
entire decision-making machinery collapsed, including his moral
judgment.
Other studies confirm that
a loss of emotion equals a loss of decision-making. We now know that
children who have suffered damage to the ventromedial and polar
prefrontal cortices before their second birthday have symptoms quite
similar to Elliot’s.