Will MacAskill discusses longtermism and his book What We Owe the Future with Russ Roberts in this EconTalk podcast.
I have previously summarised Will’s 80k Hours interview with Rob Wiblin, which covers similar issues. The audiences for these two podcasts are very different: most 80k Hours listeners will be somewhat familiar with effective altruism and longtermism, whereas most EconTalk listeners will not be (as its name suggests, EconTalk mainly discusses economics, though it branches out to other areas).
If you’re new to the ideas of longtermism, you’ll likely find this EconTalk podcast more accessible. It’s also considerably shorter, at “merely” 1 hr 16 minutes, compared to the nearly 3-hour discussion on 80k Hours. The estimated reading time for this summary (excluding “My Thoughts”) is just under 14 minutes.
- Key Takeaways
- Detailed Summary
- What is longtermism
- Future people will probably be richer than us
- We've been doing well without longtermism anyway
- Centralisation vs decentralisation/diversity
- Future people are not (completely) disenfranchised
- Why should we care about future generations at all?
- How easily could humanity bounce back?
- Will we have the same moral values?
- How do we know that our moral convictions are correct?
- Are all human lives net positive, or are some net negative?
- Having children can improve the future
- My Thoughts
Key Takeaways
- Longtermism is the idea that we should seek to improve the long-term future.
- By “long-term”, Will is talking very long term. Not just the children, grandchildren or even great-grandchildren of people alive today, but generations that may live hundreds of thousands or even millions of years from now.
- It’s impossible to give a reason for why we should be moral that isn’t circular. It’s like an epistemic black hole. At some point, you just have to accept some reasons.
- The same circularity applies if we ask why we should be selfish. Why do we care about our own happiness?
- God just adds an additional layer to the questioning, but doesn’t rescue us from the circularity.
- There’s no guarantee that the future will turn out well if we don’t focus on it:
- Will estimates a 20% chance that future people will be worse off than us. And if they are worse off, they could be much worse off (enslaved, etc.), not just marginally worse off.
- We shouldn’t take our current situation for granted. The chances of the US and USSR having a full-blown nuclear war were not zero; nor was it completely implausible that Nazi fascist values might have taken over.
- Moral progress does not seem to be pre-destined, because (unlike technological progress) there aren’t robust incentives to make moral progress across a wide variety of situations. People who contributed to the abolition of slavery stood up for what they thought was right at great cost to themselves.
- We want a balance between allowing diversity and experimentation, and mitigating the worst downsides of centralisation.
- Centralisation can solve coordination failures like climate change. But it also comes with risks, such as the risk that we fail to make moral progress.
- A diversity of moral views allows the best ideas to win out on their merits, which allows us to make moral progress.
- We shouldn’t just rely on the fact that most people don’t kill themselves to infer that their lives are, on average, positive:
- There are evolutionary reasons why we may be biased against ending our own lives.
- A lot of cultures have very strong prohibitions against ending your own life. Without such prohibitions, many more people might have killed themselves, particularly in the past when life was very bad.
- Having a child is not necessarily morally bad, and may even be good:
- You can offset any carbon emissions caused by that child.
- The child will benefit society as well as cause harm. Overall, Will thinks the benefits probably outweigh the harms.
- The child himself or herself may benefit from being born.
Detailed Summary
The 80k Hours podcast also raises many issues discussed here. In most cases I have repeated them in this summary because Will’s responses here are different. One exception is the objection that helping people today already helps people in the future – I think the 80k Hours podcast deals with that more fully (see “Longtermist causes probably aren’t the best ways of doing good in the short term”).
What is longtermism
The definition quoted from Will’s book, What We Owe the Future, is that longtermism is the idea that positively influencing the long-term future is a key moral priority of our time.
Will breaks this down into three components:
- future people matter, morally;
- there could be a lot of future people; and
- we could really make a difference to their lives.
Future people will probably be richer than us
The base level of wellbeing and intellectual knowledge will grow over time, such that future beings are likely to be materially much, much better off than us. So why should we sacrifice anything for them?
Will has two responses:
- First, we can’t be certain that future people will be better off than us. Will expects there’s an 80% chance that people in the year 2300 will be better off than us (perhaps by a lot), but that also implies a 20% chance they’ll be worse off. This could be because of nuclear war, worst-case pandemics, engineered bioweapons, or simply stagnation and decline.
- Secondly, this argument only applies to what economists call “marginal harms”. If we make ourselves worse off today to make future people slightly better off, then, yeah, that may not be worth doing. But if we could reduce the risk of catastrophes that would make future people much worse off – where they might be enslaved, or not exist in the first place – that’s a different question.
We’ve been doing well without longtermism anyway
Even without people focusing on longtermism, we’ve had the Industrial Revolution, rising lifespans, and dramatically increasing standards of living. Why then should we focus on longtermism?
Will’s response is basically that that’s hindsight bias [My term – but what he describes is very much that idea, also known as “resulting”.]. Humanity’s track record is not very good; we have been doing surprisingly well. Most people in history had miserable lives and were in forced labour, the world was extremely patriarchal, and there was enormous suffering and ill health. So we shouldn’t take our current good situation for granted.
Two examples of things that could have turned out much worse were:
- Nuclear weapons. Will estimates around a 1 in 3 probability that the United States and USSR would have had an all-out nuclear exchange. If there had been a nuclear war, we wouldn’t still be saying “Well, we’re doing pretty well”.
- Nazism. While Will thinks the Nazis were very unlikely to win WWII, it’s not crazy to imagine that the Nazis’ fascist values could have ended up taking over. Longtermism is about reducing as much as possible the risks of things like all-out nuclear war and totalitarian regimes.
Centralisation vs decentralisation/diversity
[The 80k Hours podcast discusses this too (see “What if our moral views today are wrong?“). Will gives a different answer here about decentralisation so I’ve included it here.]
What Will wants is a society with a great diversity of moral views and a culture that allows people to debate and test those views. We can then learn over time which moral view is right, with the best ideas winning out on their merits rather than through, say, conquest. There’s a theme in the book about centralisation and decentralisation:
- Greater centralisation can help solve some major risks caused by coordination failures – e.g. carbon emissions, risk of nuclear war, AI safety.
- But greater centralisation comes with risks too – in particular, the risk that you don’t make any moral progress. In a worst case scenario, a dictatorial world government could lock in their ideology forever.
When we’re thinking about the kinds of institutions we want, we want a balance between allowing diversity and experiments with the best ideas winning out on one hand, and mitigating the worst risks from centralisation on the other.
Examples:
- Covid-19 Pandemic. Will points out in the book that the global response to Covid-19 was actually rather uniform. As a result, we lost an opportunity to learn from the pandemic.
- United States. With all its different states, the US had the potential to be a laboratory of democracy, with lots of experimentation. But instead, there’s been a push towards consolidation and centralisation. There are evolutionary reasons for that, because the US is in competition with other countries and it’ll be more powerful united. While Will does not make any comment on whether this is good or bad overall, he does note that this general trend could increase the risk that we fail to make moral progress over the long-term.
Future people are not (completely) disenfranchised
Future people are somebody’s children, grandchildren, great-grandchildren, etc. so we already care about them to some degree. But:
- This caring drops off very quickly. The vast majority of people in the future will come after the great-great-grandchildren of people alive today. If humanity lasted as long as the typical mammalian species, we’d have about 700,000 years left, and it’s possible we could last much longer.
- Future generations are still disenfranchised, because any action we take for them comes only through the votes and views of people alive today. An analogy can be drawn with animals: people care about their pets, and pets get treated quite well. But there are about 80 billion more animals that are kept in horrific conditions and treated terribly.
So while there is some concern for future people, it’s not nearly as much as there should be.
Why should we care about future generations at all?
Russ asks why we should even care about future people at all, especially if we don’t have kids. If someone doesn’t feel bad about hurting future people, what can you say to convince them to care? (Russ clarifies that he himself finds this indifference “repugnant”.)
This is one of the deepest questions in philosophy: why should we be moral at all? Ultimately, there’s no non-circular answer to it. Why should I care about future people? Because you should care about other people.
However, the same circularity applies to the question: why should I care about myself? Because it will make you happy. Why should I care about being happy? At some point you just have to accept a reason as good enough, because there’s no further reason available. God doesn’t save us from this circularity either – he just adds an additional level of explanation. Why should I help other people? Because God wants you to. Why should I care about what God wants me to do? Because you’ll go to hell. And why should I care about my own suffering? The regress continues.
(On a slightly different note, Will thinks of religion as a technological or social innovation. Because people believed that God was always watching them, that reduced the prevalence of free-riding and cheating. He does worry that, as religion declines, we might get free-riding coming back.)
This applies not just to moral beliefs. Assume someone says 2 + 2 = 4 and you ask why you should believe that. They might then explain that:
- 1 + 1 = 2;
- 2 + 2 = 1 + 1 + 1 + 1; and
- 1 + 1 + 1 + 1 = 4.
You might still say you don’t believe that. At some point if someone is giving you genuine reasons but you’re not accepting them as reasons, there’s nothing more they can do. It’s an epistemic black hole.
How easily could humanity bounce back?
Humanity is remarkably resilient. If a catastrophe killed off 99% of the world’s population, Will thinks humanity would be able to recover for three reasons:
- First, smaller-scale catastrophes like the Black Death and the atomic bombings of Japan show that people have remarkable levels of resilience.
- Secondly, a lot of knowledge would be preserved. Tens of thousands of libraries are in dry locations that wouldn’t be threatened by a nuclear war. There’s also the existing evidence of tools we’ve made, which makes it easier for things to be invented again in the future. [This idea of blueprint copying was explained in Guns, Germs and Steel.]
- Lastly, there don’t seem to be any particular resource bottlenecks that would prevent humanity from returning to its present state.
However, that recovery could take a very long time:
- The smaller population means the Smithian gains from the division of labour won’t be as large. If you put just 100 people on an island, they will be very poor, even if the island has abundant natural resources and the people are incredibly smart and talented.
- Many people would need to focus all their efforts on the bare necessities of life. In 1900, 40% of Americans were farmers. So if only 1% of the world’s population (about 80 million people) remained, and productivity returned to roughly 1900 levels, that would suggest some 32 million people dedicated to farming (see the sketch below). Specialisation and trade allow nearly 8 billion people to do many things they couldn’t do if the population were smaller.
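A quick back-of-the-envelope version of that arithmetic. The 8 billion world population is my round figure; the 1% survival rate and the 40% farming share are the numbers quoted above, and the sketch assumes a post-collapse society would need roughly the 1900 US farming share of its labour:

```python
# Rough sketch of the farming arithmetic above. Assumes a post-collapse
# society needs roughly the 1900 US share of its population (40%) farming.
world_population = 8_000_000_000   # rough current world population
survival_rate = 0.01               # 1% survive the catastrophe
farming_share = 0.40               # share of Americans who farmed in 1900

survivors = world_population * survival_rate
farmers_needed = survivors * farming_share

print(f"Survivors:      {survivors:>13,.0f}")       # 80,000,000
print(f"Farmers needed: {farmers_needed:>13,.0f}")  # 32,000,000
```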
Will we have the same moral values?
If civilisation bounced back, it could have considerably worse values. We may then lock in those worse values indefinitely. The risk of losing our current egalitarian, liberal, and democratic values could possibly be the most important risk of civilisational collapse in the longer term.
Technological innovation seems largely pre-destined, because there are intense incentives to innovate across many different potential cultures and moral views. However, Will doesn’t think the same is true of moral views, which could go either way. For example, some people have argued that economics would have ended slavery eventually. Will doesn’t think that’s necessarily true. He thinks that moral agitators like Benjamin Lay played a significant role:
- Lay was a Quaker and one of the earliest people to call for the abolition of slavery. He did so at enormous cost to himself. For example, he would heckle slave owners giving sermons in church, which would get him kicked out. He would then lie face-down in the mud outside, so that people had to walk around him as they left. He also lived in a cave and boycotted all slave-produced goods.
- Although his direct causal influence is unclear, he certainly influenced other people like John Woolman and Anthony Benezet who were themselves very influential.
- Quaker thought became part of the Enlightenment thinking that convinced the British to end slavery.
- The British Empire then pressured the other colonial powers to end slavery as well. In 1700, most people in the world were in some form of forced labour. Today, only about 0.5% of the world’s population is in any kind of forced labour, and it’s illegal in every country. That’s a remarkable change in just 300 years.
- Before learning about this example, Will would have thought that the end of slavery was inevitable too, but he no longer thinks that’s the case.
How do we know that our moral convictions are correct?
Hitler also thought deeply about morality and acted on the strength of his moral convictions. How can we be sure we won’t end up like Hitler rather than like Benjamin Lay? [The 80k Hours podcast also discusses this, but Russ challenges Will more here.]
Will agrees it’s complicated:
- We want a diversity of moral views, and some moral views that would have been considered repugnant in the past (e.g. abolishing slavery, giving women equal rights) are now commonly accepted.
- We should set up society in a way where people cannot use conquest and violence to achieve their moral views. Instead, they should have to use reason and empathy. The difficulty then is drawing the line between rational persuasion and brainwashing – after all, Hitler was a powerful and convincing orator. How can we ensure that society gets the former but not the latter? It is an enormous challenge. We want people to have strong opinions, weakly held, so that they’re open to changing their view in the face of rational argument.
- We can improve on where we currently are. For example, politicians – the most influential people in society – can get away with lying or paltering without suffering many adverse consequences. At the very least, we should be able to move in a direction where the powers of argument and reason are stronger than the powers of non-rational persuasion. [I’ve made a similar point about how we should have institutions and cultures that discourage antisocial power-seeking behaviours.]
Russ clarifies he’s not suggesting we should have no principles. We should have very strongly-held principles that we come to thoughtfully. He refers to an essay by William Deresiewicz, which argues that a leader should study carefully how the world works. Others then follow that person because they’re authentic and have understood something profound about the world.
Are all human lives net positive, or are some net negative?
Russ thinks it’s clear that most human lives are net positive, since it’s not that hard to end your life. This is the economist’s theory of revealed preference. He’s surprised that Will even bothered to try to find evidence of whether human lives are net positive or negative. [The 80k Hours podcast discusses some of this evidence.]
Will responds that it depends on your theory of wellbeing:
- The preference-satisfaction view, which is what economists normally adopt, assumes that your wellbeing improves when you get what you want. (Philosophers normally add caveats, so that it’s getting what you would want for yourself when well-informed and in a cool, considered frame of mind.)
- Will’s preferred view is that wellbeing is a matter of conscious experiences. A positive conscious experience involves things like happiness, meaningful moments, and avoiding negative experiences like suffering and depression. [This is interesting. I’ve never heard of this before, or even of the idea that there are different theories of wellbeing.]
Under this alternative conscious-experiences view, people’s preferences correlate with what’s best for them, but not perfectly. We may sometimes have very biased preferences. And, from an evolutionary standpoint, we likely have a bias against dying. (Will gives some weight to this argument, but “not enormous weight”.)
Another argument is that in most cultures, there are very strong prohibitions against ending your life. A reason may be that people’s lives in the past were very bad, such that suicide would have been much more common if not for those prohibitions.
Having children can improve the future
An argument against having children that has been gaining traction is their climate impact. While it’s true that having a child will create more carbon emissions than not having one, Will argues:
- You can nullify that harm by offsetting.
- The child will benefit society as well as cause harm.
- If the child has a sufficiently good life, that’s a benefit for them, too. Will, for example, is very happy that he was born.
The first two arguments are explained further below.
Will’s not saying everyone should have as many kids as possible or that the government should intervene. He’s just saying having children is not necessarily morally bad, and may even be good.
Offsetting
The cost of raising a child in the US is about $15,000 per year. If you increase that to $16,000 by donating an extra $1,000 a year to very effective climate charities, you can offset that child’s carbon impact a thousand times over. (A rough sketch of this arithmetic follows.)
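As a very rough illustration: the podcast doesn’t quote a cost per tonne of CO2 averted, so the figures below are my own illustrative assumptions (US per-capita emissions of about 16 tonnes a year, and $0.10 per tonne averted, towards the optimistic end of estimates for effective climate charities):

```python
# Illustrative sketch only -- the cost per tonne and the emissions figure
# are assumptions for illustration, not numbers from the podcast or book.
donation_per_year = 1_000     # extra dollars donated annually
emissions_per_year = 16.0     # assumed tonnes of CO2 per person per year (rough US figure)
cost_per_tonne = 0.10         # assumed dollars per tonne of CO2 averted

tonnes_averted = donation_per_year / cost_per_tonne    # 10,000 tonnes
offset_ratio = tonnes_averted / emissions_per_year

print(f"Offset ratio: {offset_ratio:,.0f}x")  # ~625x under these assumptions
```

Under these assumptions the donation offsets the child’s annual emissions roughly 600 times over; Will’s “a thousand times over” implies an even lower cost per tonne.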
The child will benefit society as well as cause harm
The child may benefit society as well as cause harms, particularly if you bring them up well. They can build infrastructure, pay taxes, innovate, and be moral change makers. You shouldn’t look at just the “negative” side of the ledger.
Will thinks that, overall, the positives to society of having an additional person outweigh the negatives. One reason for this: if you compare how good the world was when 50 billion people had ever existed with how good it was when 110 billion people had ever existed, the latter seems a lot better. The world today has higher material living standards and we have made moral progress, and Will thinks that’s in significant part because we’ve had so many people who have made net positive contributions to society.
My Thoughts
This interview was a great contrast to the 80k Hours episode. From previous episodes, I knew that Russ is generally sceptical of effective altruism and utilitarianism. But he’s a thoughtful and respectful host, so it was interesting to hear him challenge Will more than Rob did. In particular, I really enjoyed the discussion about why we should be moral at all, and liked Will’s point that the same circularity applies if we ask why we should be selfish.
While I agree with Will that having children may not be morally bad, his reasoning was unconvincing. Deciding whether to have a child has enormous personal consequences but relatively minor moral consequences in expected value terms. (A possible exception is if you know the child will have some congenital disease, but let’s put that aside here.) In such a case, I don’t think we should bother even considering the moral consequences in making that decision. Our capacities for altruism are finite – we all prioritise our own wellbeing over others’ to varying degrees. Given that, I think we should encourage people to spend that limited capacity in effective ways, where one can do a significant amount of good at low personal cost. When a decision involves a marginal amount of good at enormous personal cost, we should just let people be selfish and save their altruistic capacity for better uses.
That said, I was also dubious of Will’s reasons for why having a child might be marginally good. The fact that the world now is much better than when only 50 billion people had ever existed is pretty weak evidence for thinking that adding people to the world would be better for society:
- First, it seems to be a case of hindsight bias. If, as Will suggested earlier in the podcast, the US and USSR had had an all-out nuclear war, would he still make the same argument?
- Secondly, it’s arguable that benefiting “society” is too narrow a focus if “society” is limited to human beings. In the 80k Hours discussion of whether our track record to date was good or bad, Will pointed out that our track record was unclear because, among other things, it depends on which beings we take into account. Here, I think he focuses just on human beings. That seems understandable given the EconTalk audience, but it’s worth pointing out for completeness.
- Lastly, while the long-term trend shows a correlation between the number of people and better living standards, correlation does not prove causation. It’s plausible that the causality runs in the other direction. I am not confident about this point and may develop it further in a subsequent post once I have thought more about it. I appreciate that this exchange came at the end of the podcast, so Will may not have wanted to introduce complex new ideas at that stage.
Finally, I was disappointed to hear Will talk about how you can offset carbon emissions by donating to highly effective non-profits, without naming any of them. In Doing Good Better (summary here), he pointed to Cool Earth. Since then, others on the EA Forum have suggested that Cool Earth was overrated by the Effective Altruism community. It would have been good to hear Will name an organisation, so that (1) we know whether he still stands by that Cool Earth recommendation; and (2) listeners of the podcast can find out how to offset emissions effectively.