Book Summary: The Righteous Mind by Jonathan Haidt

In The Righteous Mind: Why Good People Are Divided by Politics and Religion, Jonathan Haidt argues that there’s more to morality than harm and fairness. We actually have at least 6 different moral foundations, which is why we’re so divided.

Buy The Righteous Mind at: Amazon (affiliate link)

See also my Criticisms of “The Righteous Mind”.

Key Takeaways from The Righteous Mind

  • We have at least 6 moral foundations, like how our tongues have multiple taste receptors. Those foundations are:
    • Care/Harm
    • Fairness/Cheating
    • Loyalty/Betrayal
    • Authority/Subversion
    • Sanctity/Degradation
    • Liberty/Oppression
  • Morality varies:
    • People from WEIRD (Western, Educated, Industrialised, Rich and Democratic) societies tend to place a lot of weight on the harm principle—the idea that actions are only morally wrong if they cause harm to others.
    • But people in other societies will condemn acts as being “wrong” even if they don’t harm anyone.
  • The different moral foundations explain why we’re divided politically:
    • US Democrats generally care about 2 or 3 of these (Care, Fairness and Liberty to some degree).
    • Socially conservative Republicans value all 6.
    • Libertarians and classically liberal Republicans are more like Democrats, but place more weight on Liberty and less on Care.
  • Intuition is stronger than reasoning when it comes to morality:
    • Haidt uses the metaphor of an Elephant and its Rider. The Elephant (intuition) is bigger, and governs most of our behaviour.
    • The Rider (reasoning) is like a press secretary. It doesn’t usually make our decisions; it just makes up ad hoc explanations for them.
    • When our intuition tells us something is wrong, but we can’t find a good reason why, we experience moral dumbfounding.
  • How to change people’s minds:
    • Because the Elephant is in charge, it’s hard to change people’s minds through reasoning alone, especially if debates are hostile.
    • People are more likely to change their minds when surrounded by “friendly Elephants”.
  • Morality binds and blinds:
    • We are 90% chimp (selfish), 10% bee (groupish). We have evolved to be mostly selfish, but occasionally things can trigger our “hive switch” and make us transcend our sense of self to become part of a bigger whole.
    • Our groupishness has helped us to cooperate in large groups and made humans incredibly successful as a species.
    • Religion can only be understood in terms of how it serves groups. Religions make groups more cohesive and help solve free rider problems.
    • We are divided by religion and politics because our minds were designed for groupish righteousness. Our moral matrices often blind us to the idea that there could be more than one form of moral truth.

Detailed Summary of The Righteous Mind

The 6 moral foundations

Haidt developed Moral Foundations Theory, which compares our sense of morality to a tongue with (at least) 6 taste receptors:

  1. Care/Harm
  2. Fairness/Cheating
  3. Loyalty/Betrayal
  4. Authority/Subversion
  5. Sanctity/Degradation
  6. Liberty/Oppression

Each of these is a moral foundation that people care about in different ways and to different extents.

Haidt suggests our moral taste receptors are “innate”. This doesn’t mean they’re universal or unchangeable, but simply “organised in advance of experience”. That is, we’re born with first drafts, but experience can revise them.

1. Care/Harm

We likely evolved the Care/Harm foundation because natural selection favoured those who felt sensitive to signs of their children’s suffering or distress.

So the original trigger for this foundation was your own children’s suffering. But culture has now expanded it so that it can be triggered by other suffering (e.g. children in another country or animals), even though there is no evolutionary advantage to caring about this other suffering.

The negative side of this foundation is Harm. People from WEIRD societies (that is, Western, Educated, Industrialised, Rich and Democratic) tend to give the most weight to the harm principle—the idea that whether an action is immoral depends on whether it causes harm to others. In surveys involving short, morally questionable vignettes, WEIRD people are more likely to overrule their feelings and say that an action that bothered them was nonetheless morally permissible. WEIRD philosophers have also relied heavily on the harm principle in developing their moral theories.

However, Haidt argues there’s much more to morality than harm and fairness. He began work on Moral Foundations Theory by writing short vignettes about people doing offensive things, but crafted so as to ensure that no one was harmed or even offended. Examples include:

  • Having sex with a dead chicken and then eating it;
  • Cooking and eating part of a fresh human cadaver; and
  • Siblings having consensual and protected sex.

Most people consider these acts to be morally wrong, even though no one is harmed.

2. Fairness/Cheating

The Fairness/Cheating moral foundation evolved because it allows us to cooperate with non-kin in ways that result in a win-win (e.g. trade). Such cooperation is pretty rare in non-human species. Our desire to punish cheating turns out to be one of the keys to cooperation in groups—we will punish wrongdoing even at a personal cost. This creates positive externalities for the group and promotes cooperation.

People on both the left and the right care about “fairness”, but in different ways:

  • On the right, fairness means proportionality — e.g. the Tea Party believes it’s unfair to transfer money from those who contribute a lot to those who contribute little.
  • On the left, fairness generally implies equality — e.g. the Occupy Wall Street movement.

However, Haidt and his colleagues noticed that concerns about “equality” came more from a dislike of oppression and a concern for victims, rather than a desire for reciprocity. So they developed the sixth foundation (Liberty/Oppression), meaning Fairness/Cheating is now mostly about proportionality, not equality.

The original triggers for the Fairness foundation were acts that directly affected us. If someone cheated us, we’d feel anger; if we cheated another person, we’d feel guilt. Today, this foundation has expanded to support more generalised concerns with people cheating or not contributing their “fair share”.

Everyone cares about Fairness (in the “proportionality” sense), but people on the right care about it more. They’re more likely to agree with slogans such as “Do the crime, do the time”. Left-wingers, on the other hand, are often uncomfortable with the negative/punishment side of reciprocity.

3. Loyalty/Betrayal

The Loyalty/Betrayal foundation originally helped us maintain cohesive coalitions to fend off attacks from other groups.

There’s a difference between how Loyalty manifests for males and females. Males tend to be more “tribal”. For example, boys spontaneously organise themselves for team competitions far more often than girls do. Men also become more cooperative when a task is framed as an intergroup competition, but women do not. For women, Loyalty tends to show up more in two-person relationships.

The original trigger for the Loyalty foundation is anything that tells you who is a team player vs who is a traitor. The trigger has now expanded to domains like sports—people can enjoy working together in intergroup competition to pursue harmless trophies.

4. Authority/Subversion

The Authority/Subversion foundation helps us negotiate status hierarchies and keeps groups stable. Across different species, low-ranking individuals make displays to appear submissive and non-threatening so that dominant, high-ranking individuals won’t beat them up.

Authority is not the same as power. Individuals with authority in a group also take on responsibility for maintaining order and justice. This is true even of chimpanzees, whose dominance hierarchies are otherwise mostly about raw power.

The original triggers for the Authority/Subversion foundation are indications of higher versus lower rank. If someone acts in a way that subverts the existing hierarchy, we feel it instantly.

Current triggers have expanded to include acts that uphold or subvert traditional values or institutions. These may include acts of obedience/disobedience, respect/disrespect, or submission/rebellion with regard to authorities we consider legitimate.

5. Sanctity/Degradation

This foundation is likely rooted in disgust, which evolved to help us avoid poisonous foods and infectious diseases. Philosopher Leon Kass has argued that disgust may give us a valuable warning that we are going too far, even when we can’t justify those feelings by pointing to victims. [Similar to Chesterton’s Fence.]

The original triggers for this foundation included smells and sights that tended to indicate the presence of dangerous pathogens (e.g. human corpses, excrement, people with visible sores).

Its triggers today, however, have greatly expanded and vary significantly across cultures. A common trigger today is out-group members, such as immigrants—some evidence shows that people feel more welcoming towards immigrants when disease risks are lower.

6. Liberty/Oppression

When Haidt and his colleagues first developed Moral Foundations Theory, it had only 5 foundations. Haidt notes that this 6th foundation, Liberty/Oppression, is provisional in that it hasn’t been tested as rigorously as the original 5.

The Liberty/Oppression foundation likely evolved because it helped us live in small groups with individuals who would, if given the chance, dominate, bully, and constrain others. Once we developed the ability to communicate with each other and to make weapons like spears, this created reverse-dominance hierarchies: non-dominant members of a group could easily gang up to take down or restrain a bullying alpha male whose behaviour threatened or annoyed the group. This led to a fragile state of egalitarianism, where the most successful individuals were those who could maintain good reputations and gain others’ trust and support. So our egalitarianism seems to be rooted more in a dislike of domination than in a love of equality.

The original triggers for this foundation are attempts to dominate and control, which cause us to respond with righteous anger or reactance. There’s a tension between the Liberty foundation and the Authority foundation—we recognise some attempts to control as legitimate, but we’re sensitive to when control crosses the line into tyranny.

Current triggers for this foundation vary between the left and the right. Everyone hates oppression, but people differ in whom they consider the ‘oppressor’ or the ‘oppressed’. People on the left are more concerned about vulnerable groups, and are likely to see capitalism and the accumulation of wealth as oppressive. Conservatives and libertarians are more likely to view liberty as encompassing ‘the right to be left alone’, and so tend to see the government and international organisations (e.g. the UN or EU) as oppressors.

Morality varies

Morality is a cultural construction, influenced by accidents of environment and history. But it’s not so flexible that anything goes, or that all societies or cuisines are equally good. Cuisines still have to please tongues equipped with the same five taste receptors.

Morality in WEIRD societies differs from other societies

As noted above, people from WEIRD societies tend to weight the harm principle heavily. WEIRD philosophers since Immanuel Kant and John Stuart Mill have mostly generated moral systems that are individualistic, rule-based, and universalist.

But WEIRD societies are the minority. Most societies (both today and historically) are sociocentric ones that prioritise the needs of groups and institutions over the individual’s. In these societies, morality is broader than harm and fairness.

Morality within societies differs according to social class

Even within societies, people’s sense of morality differs. When Haidt and his colleagues tested their morality vignettes in different cities, they found social class had a larger effect than the actual city or country. The reactions of well-educated people in Brazil and the US were more similar than reactions between people from different social classes in the same city (see Haidt, Koller and Dias, 1993).

Morality differs according to political ideology

In the US, Haidt and his colleagues’ research found that:

  • Democrats rely on 2 or 3 moral foundations: Care, Fairness and, to some extent, Liberty.
  • Socially conservative Republicans tend to use all 6 foundations. They are much more likely to believe actions can be morally wrong even if no one is harmed.
  • Libertarian or classical liberal Republicans are similar to Democrats, but with a much greater emphasis on the Liberty foundation and much less emphasis on Care (even less than social conservatives).

Haidt and his colleagues have done similar surveys multiple times since, including outside the US and via the website YourMorals.org. The same basic pattern holds up consistently, with differences more pronounced at the extreme ends of the political spectrum. People who consider themselves very liberal value Loyalty, Authority and Sanctity the least, while very conservative people value those foundations the most.

Turning to the three foundations which show the biggest and most consistent partisan differences:

  • Loyalty. In the 1960s, Democrats moved towards universalism and away from nationalism. The left generally celebrate diversity, support immigration, and refer to themselves as ‘citizens of the world’. Haidt argues that the left’s strong reliance on the Care foundation has made them “hostile” to US foreign policy.
  • Authority. The political left often defines itself in part by opposition to hierarchy, inequality and power. As such, it’s much easier for the political right to build on this foundation.
  • Sanctity. Social conservatives, particularly religious ones, are more likely to talk about things like “the sanctity of life” and “the sanctity of marriage”, while the left usually dismiss things like chastity as outdated and sexist. However, parts of the left still rely on this foundation—e.g. people who exalt “natural” things and try to cleanse their bodies of “toxins”, and in some parts of the environmental movement.

Haidt argues that these 3 foundations explain why rural and working-class Americans often vote Republican even when their economic self-interest would be to vote for Democrats (who favour greater redistribution)—the Republican party is better at appealing to their moral interests.

Republicans since Nixon have had a near-monopoly on appeals to loyalty (particularly patriotism and military virtues) and authority (including respect for parents, teachers, elders, and the police, as well as for traditions). And after they embraced Christian conservatives during Ronald Reagan’s 1980 campaign and became the party of “family values,” Republicans inherited a powerful network of Christian ideas about sanctity and sexuality.
—Jonathan Haidt in The Righteous Mind

Example: Conservatives seem to understand liberals better than liberals understand conservatives

One of Haidt’s studies tested how well liberals and conservatives understood each other. The researchers split over 2,000 Americans into 3 groups and asked them to fill out the Moral Foundations Questionnaire:

  • One group filled it out normally, as themselves;
  • Another group filled it out for how they thought a “typical liberal” would respond;
  • The last group filled it out for how they thought a “typical conservative” would respond.

The results consistently showed that moderates and conservatives were the most accurate in their predictions, whether they were pretending to be liberals or conservatives. Liberals, especially those who described themselves as “very liberal”, were the least accurate. They particularly struggled with the Care and Fairness questions when pretending to be conservatives.

Intuition is stronger than reasoning

Moral psychology used to be divided into two camps—nature vs nurture. Nativists believed that moral knowledge came naturally to us, while empiricists believed that children are blank slates at birth and develop morality as they grow up.

A third option is rationalism, which dominated moral psychology when Haidt entered the field in 1987. Rationalists believe we form our moral beliefs through reasoning, and that reasoning is the most reliable way to obtain moral knowledge.

But reasoning is not how most people actually form their moral beliefs. Instead, people have gut feelings about what is “wrong” and come up with reasons afterwards to support those gut feelings. Haidt points to several key findings that support this:

  • Snap judgments correspond with reasoned judgments. The snap judgments people make in a fraction of a second (before they’ve had time to think) correspond strongly with the judgments they end up with after they’ve been given time to think and come up with reasons.
  • Moral dumbfounding exists. When people are unable to come up with reasons to support their moral judgments, they’ll cast about trying to invent victims, even though Haidt had carefully crafted his vignettes to remove all conceivable harm. For example, when people were asked if it was wrong for a family to eat their pet dog after it had been run over, many argued that the family itself would be harmed because they might get sick from eating dog meat. People rarely changed their minds even after the researchers explained why their reason wasn’t relevant.
  • Morality is not affected by cognitive load. If morality were rational, you’d expect moral judgments to falter under a heavy cognitive load. But studies show that cognitive load doesn’t affect people’s ability to make moral judgments.

The Elephant and the Rider

Our minds are divided, like a rider on an elephant:

  • The ‘Rider’ is our conscious reasoning—the stream of words and images of which we are fully aware. It is not an automatic process. It can sometimes feel like work, and can falter under cognitive load.
  • The ‘Elephant’ is the other 99 percent of mental processes—emotion, intuition etc. These processes occur automatically and unconsciously, and govern most of our behaviour.

Example: Consensual cannibalism

In 2001, Armin Meiwes posted an ad online saying he was: “Looking for a well-built 21-to-30-year-old to be slaughtered and then consumed.” A guy named Bernd Brandes responded.

One evening, Brandes and Meiwes met up. Brandes took some sleeping pills and alcohol before Meiwes cut off Brandes’s penis and cooked it with some wine and garlic. Brandes took a bite of it and then went off to the bathtub to bleed to death. Meiwes stored Brandes’s flesh in his freezer and ate it gradually over the coming months. The two had made a video beforehand to prove that Brandes fully consented to everything.

This example is likely to leave you morally dumbfounded. Would you be willing to live in the house where this happened? Or would the ‘irrational’ feelings of stain, pollution and disgust hold you back?

Meiwes and Brandes caused no harm to anyone in a direct, material or utilitarian way. But Haidt argues they violated some bedrock moral principles, such as the idea that human life is valuable and that the human body is more than just a slab of meat.

Reasoning is designed to persuade others, not to seek truth

Not only is the Elephant bigger and stronger than the Rider, it’s also older. Humans only evolved the capacity for language and reasoning in the last million years.

When reasoning evolved, it didn’t take over most of our decision-making. Instead, it provides post-hoc justifications to help us pursue socially strategic goals, such as protecting our reputation or convincing others to support us in a dispute. If I think you did something wrong and want the community to punish you for it, I have to use reasoning to get the community on my side. I can’t just say I didn’t like what you were doing.

Philip Tetlock’s research has found that when people know in advance they’ll have to explain their decisions to another person, they think more systematically. They’re less likely to jump to premature conclusions and more likely to revise their beliefs in response to evidence.

So the Rider is like a press secretary for the Elephant. Press secretaries don’t have power to make policy. They’re told what the policy is, and their job is to find evidence and arguments that will justify the policy to the public.

Our moral thinking is much more like a politician searching for votes than a scientist searching for truth.
— Jonathan Haidt in The Righteous Mind

We also use reasoning to persuade ourselves

We use reasoning to explain how we think we reached a judgment, even if it’s not how we actually reached it.

Lab experiments show that most people cheat if you give them the ability to do so combined with plausible deniability. And most cheaters leave as convinced of their own virtue as they were at the start.

We’re all skilled in motivated reasoning—using reasoning to reach the conclusions we want to reach. People who are experts at moral reasoning do not seem to behave any more morally than those who are not. In fact, they may be worse, because they may be better at post-hoc justification. This may sound depressing, but it makes sense if you understand that reasoning did not evolve to help us find truth.

How to change people’s minds

Though the Elephant is far stronger than the Rider, it’s not an absolute dictator. Most of us have experienced times when we questioned and revised our first intuitive judgment.

It just doesn’t happen the way most of us think.

You can’t change people’s minds by refuting their arguments through reasoning alone:

[A]s reasoning is not the source, whence either disputant derives his tenets; it is in vain to expect, that any logic, which speaks not to the affections, will ever engage him to embrace sounder principles
— David Hume as quoted by Jonathan Haidt in The Righteous Mind

This is why moral and political arguments are often so frustrating.

Instead, the two main ways in which we change our minds on moral issues are reasoned persuasion and social persuasion. Both of these involve interacting with other people because we are terrible at seeking evidence that challenges our own beliefs, but other people happily do this for us. Both of these also require friendly Elephants—because when there is affection or a desire to please another person, we look harder for the truth in their arguments.

Reasoned persuasion

Reasoned persuasion occurs when people we like give us good arguments. That is, when we listen to Riders of friendly Elephants.

When discussions are hostile, people are unlikely to change. Even if our logic is impeccable, we won’t change any minds if our opponents are in ‘combat mode’—we’ll only impress our friends and allies.

Example: Domestic dispute

On one occasion, Haidt’s wife criticised him for leaving dirty dishes out. By the third word (“Can you not…”), before she’d even set out the substance of her complaint, Haidt had already decided he disagreed with her because he didn’t like being criticised. As soon as he heard the complaint in full, his inner press secretary started searching for an excuse to justify his actions.

He found one. And he lied so convincingly he even fooled himself (at least until he reflected on it).

So if you want to persuade someone on a moral or political matter, you should try to convey respect, warmth, and an openness to the other side’s perspective. And if you do truly see it the other person’s way, you might find your own mind opening in response. [This is similar to ideas discussed in EconTalk – How Minds Change with David McRaney and Difficult Conversations by Douglas Stone, Bruce Patton and Sheila Heen.]

Social persuasion

Social persuasion can occur when the people we like hold different views from us, even if they don’t give us any arguments. The mere presence of friendly Elephants can cause us to revise our own judgments.

Example: Living in India

In 1993, Haidt spent several months in Orissa, India to study the ethic of divinity. The moral views he held at the outset clashed with those of the people who hosted him. Haidt came from an individualist society that valued equality and personal autonomy, while the people hosting him cared more about honouring elders and gods and meeting the social expectations of a role.

After just a few weeks, Haidt experienced cognitive dissonance: he liked his hosts and they were kind to him, yet many of their practices clashed with his own moral views.

Soon, he found himself adopting their perspective. Though he had previously read about the ethics of community and divinity, for the first time he began to feel them. He developed an intuitive sense that left = dirty and right = clean, and that certain books should be treated with reverence rather than left on the floor. Even funeral services and burial rites began to make emotional sense.

If you can have at least one friendly interaction with a member of the “other” group, you’ll find it far easier to listen to what they’re saying, and maybe even see a controversial issue in a new light.
— Jonathan Haidt in The Righteous Mind

Changing our minds through reasoning alone

People can sometimes reason their way to a moral conclusion that conflicts with their initial intuition, but Haidt thinks this is rare; he knows of only one experimental study demonstrating it.

Example: Changing a moral judgment through reasoning

One study presented participants with a hypothetical scenario involving protected sex between consenting adult siblings. Researchers then varied two factors:

  • deliberation time (some participants could respond immediately; others had to wait 2 minutes); and
  • argument strength (the weak argument was that if the siblings make love, there is more love in the world; the strong argument was that our aversion to incest is caused by an ancient evolutionary adaptation for avoiding birth defects, but that doesn’t apply if the siblings use effective contraception).

Neither deliberation time nor argument strength by itself changed people’s minds. But many participants did change their mind when given a strong argument and forced to reflect on it.

[I come away from this far more bullish on our ability to change our minds through reasoning than Haidt is. A mere 2 minutes of deliberation was enough to make many people overcome their in-built aversion to incest? I would’ve expected moral judgments to change much more gradually—over months, if not years. Sure, Haidt says this is the only experimental study that shows this result, but perhaps that’s because few studies allow enough time for reflection.]

Morality binds and blinds

The “righteous” in the title of The Righteous Mind is meant to convey the idea that we are not just innately moral, but also moralistic, critical, and judgmental. Haidt argues this is a feature, not a bug, because it enabled humans to form large cooperative groups and even nations without relying on kinship bonds. Hence, “morality binds”.

But our righteous minds also make it hard to consider that there might be other forms of moral truth, or other frameworks for judging people or running a society. Hence, morality blinds.

Humans are 90% chimp and 10% bee

Natural selection can work at multiple levels simultaneously:

  • Individual selection. We compete with other individuals, and natural selection favours selfishness at that level. So our minds have various mechanisms that make us adept at promoting our own interests in competition with our peers.
  • Group selection. We form groups that compete with other groups, and natural selection favours cohesive groups whose individuals are team players. So our minds also have various mechanisms that make us adept at promoting our groups’ interests.

Haidt thinks most of human nature was shaped by natural selection operating at the level of the individual, but we have a few group-related adaptations, too. So we are about 90% chimp (selfish) and 10% bee (groupish).

Haidt acknowledges that group selection is a controversial idea. Some experts believe humans haven’t undergone any biological change in the last 40,000 to 50,000 years, and that all changes since then have been cultural only. Haidt disagrees. While he can’t say for certain that human nature was shaped by group selection, he thinks it would go a long way towards explaining our dual nature: we are selfish and groupish at the same time. [Haidt goes into a lot of detail defending group selection in the book. It seems aimed more at his academic peers than at a general audience, so I’ve omitted it here.]

Some things can trigger our “hive switch”

While we live most of our lives in the ordinary world, we achieve our greatest joy in those brief moments when we can transcend our sense of self to become part of a bigger whole.

This transcendence happens temporarily, under special conditions. It’s almost like there’s a switch in our heads that triggers our hivishness when conditions are just right. You might have experienced this ‘switch flipping’ if you’ve previously taken hallucinogens, attended a rave, or participated in a political rally. Many describe these ‘switch flipping’ experiences in spiritual or religious terms.

Example: Psilocybin study

In a controlled experiment, Walter Pahnke brought 20 divinity students together in a chapel and gave half of them psilocybin and the other half a placebo. (This is known as the Marsh Chapel Experiment.)

The psilocybin group experienced a greater sense of unity, a deeply positive mood, a sense of sacredness, and difficulty describing what had happened. While things returned to normal within a few hours, the positive changes in attitude and behaviour persisted for longer.

Interestingly, 25 years later, another researcher called Rick Doblin managed to track down 19 of the 20 students. He found that all the subjects given psilocybin still considered the experience to have made a uniquely valuable contribution to their spiritual lives. None of the control group reported the same.

[Before you go off to try psilocybin yourself, it’s worth noting that several subjects experienced acute anxiety during their experience. One even had to be restrained and given antipsychotic medication—he had fled the chapel convinced he’d been chosen to announce the Messiah’s return.]

Haidt suggests the ‘hive switch’ is a group-related adaptation that can only be explained by group selection. Triggering this ‘hive switch’ is likely to involve things like:

  • Synchrony. Moving together in time, such as marching in an army or dancing collectively, can bind a group together and make people forget themselves. Some Japanese corporations begin their days with synchronous company-wide exercises. Groups may similarly prepare for battle (in war and sports) with team chants and rituals.
  • Intergroup competition or external threat. We’re used to rallying around leaders when our group is competing or under threat. Research shows that strangers spontaneously organise themselves into leaders and followers when natural disasters strike.
  • Being similar. If you want to increase a group’s hivishness, downplay any differences and focus on celebrating the group’s similarities. While it would be nice if we loved everyone equally, evolutionarily, our love is rather parochial.

Our groupishness has made us incredibly successful as a species

A group that can suppress its individuals’ selfishness and function cohesively has advantages when competing with other groups. For example, insects that form colonies work very well together, and are very successful (by natural selection standards) as a result—colonial insects represent just 2 percent of all insect species, but make up the majority, by weight, of all insects on Earth.

Humans are also incredibly successful by natural selection standards. Together with our domesticated animals (e.g. cows, pigs, dogs), we account for around 98% of the world’s mammals by weight.

While hivishness can be exploited by fascist dictators, that doesn’t mean we should shun or fear the hive switch. The normal function of the hive switch is to bond groups of people together into communities of trust, cooperation, and even love. Haidt admits it’s dangerous to scale up a single hive to the size of a nation. But hives at lower levels (dozens or hundreds of people at most) can make us more happy and satisfied. In fact, Haidt argues that a nation full of people satisfied by these lower-level hives is less vulnerable to takeover by demagogues offering people meaning.

Religion can only be understood in terms of how it serves groups

Many critics of religion, including Sam Harris, Richard Dawkins and Daniel Dennett, misunderstand religion because they focus on individuals and their supernatural beliefs. They therefore see religion as a costly, wasteful institution that impairs rational thinking and causes a lot of harm.

Haidt, on the other hand, sees religion as a social fact. While supernatural agents do play a central role, you can only understand religion if you look at how it serves groups and creates community.

[T]rying to understand the persistence and passion of religion by studying beliefs about God is like trying to understand the persistence and passion of college football by studying the movements of the ball.
— Jonathan Haidt in The Righteous Mind

A lot of evidence shows that religions make groups more cohesive and help solve free rider problems. In particular, religions that create more cohesive groups tend to feature omniscient gods who punish cheating (especially through collective punishment).

More cohesive groups in turn did better in inter-group competition. Some scholars believe this only caused cultural group selection—i.e. religions that improved cohesion spread faster, but not necessarily by killing off the losers. Haidt, however, thinks genetic evolution is likely at play too—he believes our minds co-evolved with religion over tens or even hundreds of thousands of years, so it is not easy for people to abandon religion.

Morality blinds

We are divided by politics and religion not because some people are good and others are evil. We are divided because our minds were designed for groupish righteousness.

As noted above, morality varies across cultures. Groups will always have some degree of moralistic strife because our moral matrices often blind us to the idea that there could be more than one form of moral truth. We think the other side is blind to science, reason and common sense, but in fact everyone goes blind when they talk about their sacred objects.

Instead of socialising only with like-minded individuals, Haidt recommends trying to understand other groups by “following the sacredness”. As a first step, think about the 6 moral foundations and try to figure out which one is carrying the weight in any given controversy.

Other Interesting Points

  • Young children judge right and wrong by very superficial features, such as whether a person was punished for an action.
  • Rawls (1971) assumed that most people would care more about raising the worst-off than about raising the average if they had to design a society from behind a “veil of ignorance”. A 1987 study found this assumption to be false.
  • Reptiles generally don’t hang around to provide protection after their babies are born. They just lay eggs and leave. Mammals, on the other hand, suckle their young. This raises the cost of motherhood, so mammals make fewer bets and invest a lot more in each one.
  • One study showed people photos of the winners and runners-up in US Senate and House of Representatives elections for 0.1 seconds and asked them to judge their competence. The candidates people thought looked more competent won around two-thirds of the time. Snap judgments of attractiveness and general likability were not as good predictors.
  • People made harsher judgements of controversial issues (e.g. cousin marriage) when they had to breathe in smelly air.

My Review of The Righteous Mind

The Righteous Mind is a thought-provoking book. As I wrote this summary, I found myself mulling over many of the points Haidt raised on walks, in the shower, and before falling asleep. The German cannibal example particularly resonated with me—I wouldn’t want to live in the house where that occurred, even though I can’t come up with great reasons why.

Haidt is a pretty persuasive writer. He uses metaphors effectively (moral ‘tastebuds’, the Elephant and the Rider, chimp and bee) and, on a first read, many of his arguments felt compelling. I think this is because he introduces so much information to build up his arguments—he often explains how he reached his ideas (e.g. the historical debates around an issue, or the work he and his colleagues did to refine their theory). It’s easy to get overwhelmed by all this information and just go along with his arguments. But much of that background seems to be window-dressing. When you strip his arguments down to the core, they don’t look as strong.

I don’t mean to be too harsh on Haidt—he’s clearly an expert in moral psychology and knows what he’s talking about in that domain. However, The Righteous Mind is a prime example of when we should defer to specialists and when we shouldn’t. David Epstein has said we should defer to experts for facts, but not for opinions. Haidt’s expertise is in moral psychology (how people actually operate). But he also addresses questions of moral philosophy (what we should do about it), where there are no “right” answers. While I’m happy to accept Haidt’s facts for the most part, I personally disagreed with many of the conclusions he drew from those facts. I explain why in my post Criticisms of “The Righteous Mind”.

Let me know what you think of my summary of The Righteous Mind in the comments below!

Buy The Righteous Mind at: Amazon <– This is an affiliate link, which means I may earn a small commission if you make a purchase. Thanks for supporting the site! 🙂

4 thoughts on “Book Summary: The Righteous Mind by Jonathan Haidt”

  1. I think the Elephant and the Rider analogy most strongly resonates with me. I think trying to adopt a totally rationalist attitude towards morality leads to unhappiness – as mentioned, we are not truth-seeking creatures by design. Perhaps counter-intuitively, then, thinking less about one’s moral framework and going with the flow of intuition makes one happier. And if that’s the case, what’s the right balance? Where should rationalism fit in? What’s the downside of relying 100% on intuition here?

    1. I actually think Haidt is too dismissive of the rationalists’ approach – I’m working on a post that explains that.

      One downside of relying 100% on intuition is that it can lead us to make moral judgments that negatively affect others. It wasn’t so long ago that many people intuitively felt disgust at homosexuality or other races’ customs (and some still do), which led to policies that caused great harm to some groups. Another example is donating to charity – an intuitive approach suggests you should just donate to whatever makes you feel good, which may be a charity that’s really good at marketing but not necessarily doing good. Whereas if you took a rational approach like GiveWell’s, your donation may be able to help more people.

      Relying on intuition is also likely to prioritise short-term benefits over long-term ones. For example, most people intuitively want to punish those who have committed heinous crimes, but locking people up is very expensive and can therefore be counterproductive in the long run. A more rational approach would be to look at the best way to prevent crimes in the future, which may find that investing in rehabilitation or prevention is more effective.

      That said, I don’t think 100% rationality is the answer, either, and I think Haidt makes a valid point about the dangers of motivated reasoning. So I agree with you that there has to be a balance. The “right” balance I think is for each individual to decide, but I generally think striving for greater rationality is worthwhile.

      1. I suppose that’s what Haidt is getting at with his analogy. Rationalism can steer the direction, but intuition is the driving force. Like with your homosexuality example. Rationalism should be used as a steering mechanism such that people get an intuitive disdain for homophobic behavior. Although I do agree on your point about punishing criminals – but on this, a distinction should be made between morality for an individual and for a system.

        1. Yeah, I really liked Haidt’s analogy about “friendly elephants”. I definitely agree with his point that we’re more likely to change when we hear an opposing argument from someone we already like, because then we’re more likely to look for the truth in what they’re saying. I just felt he was too dismissive of rationality overall – though that may have come through stronger in the book than in my summary.
