The Long-Term Future
Introduction
The number of people alive today pales in comparison to the number who could exist in the future. It may therefore be extremely important to ensure that human civilization flourishes far into the future, enjoying fulfilling lives free of suffering.
There are a number of ways we might work to ensure a positive future for humanity. We could work to better understand and prevent extinction risks - catastrophic events that have the potential to destroy all life on this planet.fn-1 We may want to focus on the broader category of existential risks - events that could dramatically and irreversibly curtail humanity’s potential.fn-2 Or we might focus on increasing the chance that the lives of our descendants are positive in other ways: for example, improving democracy or the ability of institutions to make good decisions.
Attempts to shape the long-term future seem highly neglected relative to the problems we face today. There are fewer incentives to address longer-term problems, and they can also be harder for us to take seriously.
It is, of course, hard to be certain about the impact of our actions on the very long-term future. However, it does seem that there are things we can do - and given the vast scale we are talking about, these actions could therefore have an enormous impact in expectation.
This profile sets out why you might want to focus your altruistic efforts on the long-term future - and why you might not. You may be particularly inclined to focus on this if you think we face serious existential threats in the next century, and if you’re comfortable accepting a reasonable amount of uncertainty about the impact you are having, especially in the short term.
The case for the long-term future as a target of altruism
The case for focusing on the long-term future can be summarised as follows:
- The long-term future has enormous potential for good or evil: our descendants could live for billions or trillions of years, and have very high-quality lives;
- It seems likely there are things we can do today that will affect the long-term future in non-negligible ways;
- Possible ways of shaping the long-term future are currently highly neglected by individuals and society;
- Given points 1 to 3 above, actions aimed at shaping the long-term future seem to have extremely high expected value, higher than any actions aiming for more near-term benefits.
Below we discuss each part of this argument in more detail.
The long-term future has enormous potential
Civilisation could continue for a billion years, until the Earth becomes uninhabitable.fn-3 It’s hard to say how likely this is, but it certainly seems plausible - and putting less than, say, a 1% chance on this possibility seems overconfident.fn-4 You may disagree that 1% is a reasonable lower bound here, but changing the figure by an order of magnitude or two would still leave the expected scale of the future enormous. And even if civilisation only survives for another million years, that still amounts to another ~50,000 generations of people, i.e. trillions of future lives.fn-5
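As a quick sanity check, here is the arithmetic behind that figure - a minimal sketch using the assumptions stated in footnote 5 (one generation every 20 years, ~75 million people per generation); the variable names are ours, purely for illustration:

```python
# Rough arithmetic behind "~50,000 generations, i.e. trillions of future lives".
# Assumptions (from footnote 5): one generation every 20 years,
# ~75 million people per generation.
years_remaining = 1_000_000        # the "only a million more years" scenario
years_per_generation = 20
people_per_generation = 75_000_000

generations = years_remaining // years_per_generation   # 50,000 generations
future_lives = generations * people_per_generation      # ~3.75 trillion lives

print(f"{generations:,} generations, ~{future_lives:,} future lives")
```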
If our descendants survive for long enough, then they are likely to advance in ways we cannot currently imagine - even someone living a few hundred years ago could not possibly have imagined the technological advances we’ve made today. They might even develop technology enabling them to reach and colonise planets outside our solar system, and survive well beyond a billion years.fn-6
Let’s say that if we survive until the end of the Earth’s lifespan, there is a 1% chance of space colonisation. This would make the overall probability of survival beyond Earth 1 in 10,000 (a 1% chance of surviving to a billion years, multiplied by a 1% chance of surviving further given that). This sounds incredibly low, but suppose that space colonisation could allow our descendants to survive up to 100 trillion years.fn-7 This suggests we could have up to 1/10,000 x 100 trillion years = 10 billion expected years of civilisation ahead of us.
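To make that arithmetic explicit, here is a minimal sketch of the calculation; the two 1% probabilities and the 100 trillion year horizon are the illustrative assumptions from the text above, not estimates we defend:

```python
# Expected years of future civilisation under the text's illustrative assumptions.
p_survive_to_billion = 0.01        # surviving until the Earth becomes uninhabitable
p_colonise_given_survival = 0.01   # colonising space, given that we survive that long
max_duration_years = 100e12        # ~100 trillion years (footnote 7)

p_beyond_earth = p_survive_to_billion * p_colonise_given_survival  # 1 in 10,000
expected_years = p_beyond_earth * max_duration_years               # 10 billion years

print(f"P(survival beyond Earth) = {p_beyond_earth:g}")
print(f"Expected years of civilisation: {expected_years:,.0f}")
```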
If we expect life in the future to be, on average, about as good as the present, then this would make the whole of the future about 100 million times more important than everything that has happened in the last 100 years (10 billion expected years, divided by 100). In fact, it seems like there could be more people in the future with better lives than those living today: economic, social, and technological progress could enable us to cure diseases, lift people out of poverty, and better solve other problems. It also seems possible that people in the future will be more altruistic than people alive todayfn-8 - which also makes it more likely that they will be motivated to create a happy and valuable world.
However, it’s precisely because of this enormous potential that it’s so important to ensure that things go as well as possible. The loss of potential would be enormous if we end up on a negative trajectory. It could result in a great deal of suffering or the end of life.fn-9 And just as the potential to solve many of the world’s problems is growing, threats seem to be growing too. In particular, advanced technologies and increasing interconnectedness pose great risks.fn-10
There are things we can do today that could affect the long-term future
There are a number of things we could work on today that seem likely to influence the long-term future:
- Reducing extinction risks: We could reduce the risk of catastrophic climate change by putting in place laws and regulations to cut carbon emissions. We could reduce the risks from new technologies by investing in research to ensure their safety. Alternatively, we could work to improve global cooperation so that we are better able to deal with unforeseen risks that might arise.
- Changing the values of a civilisation: Values tend to be stable in societies,fn-11 so attempts to shift values, whilst difficult, could have long-lasting effects. Some forms of value change, like increasing altruism, seem robustly good, and may be a way of realizing the very best possible futures. However, spreading poorly considered values could be harmful.
- Reducing suffering risks: Historically, technological advances have enabled great welfare improvements (e.g. through modern agriculture and medicine), but also some of the greatest sources of present-day suffering (e.g. factory farming). To prevent the worst risks from new technologies, we could improve global cooperation and work on specific problems like preventing worst-case outcomes from artificial intelligence.
- “Speeding up” development: Boosting technological innovation or scientific progress could have a lasting “speed up” effect on the entire future, making all future benefits happen slightly earlier than they otherwise would have. Curing a disease just a few years earlier could save millions of lives, for example. (That said, it’s not clear whether speeding up development is good or bad for existential risk - developing new technologies faster might help us to mitigate certain threats, but pose new risks of their own.)
- Ripple effects of our ordinary actions: Improvements in health not only benefit individuals directly but allow them to be more economically successful, meaning that society and other individuals have to invest less in supporting them. In aggregate, this could easily have substantial knock-on effects on the productivity of society, which could affect the future.
- Other ways we might create positive trajectory changes: These include improving education, science, and political systems.
Paul Christiano also points out that even if there are no opportunities today to shape the long-term future with any degree of certainty, such opportunities may well exist in the future. Investing in our own current capacity could have an indirect but large impact by improving our ability to take such opportunities when they do arise. Similarly, we can do research today to learn more about how we might be able to impact the long-term future.
The long-term future is neglected, especially relative to its importance
Attempts to shape the long-term future are neglected by individuals, organisations and governments.
One reason is that there is little incentive to focus on far-off, uncertain issues compared to more certain, immediate ones. As 80,000 Hours put it, “Future generations matter, but they can’t vote, they can’t buy things, they can’t stand up for their interests.”
Problems faced by future generations are also more uncertain and more abstract, making it harder for us to care about them. There is a well-established phenomenon called temporal discounting, which means that we tend to give less weight to outcomes that are far in the future. This may explain our tendency to neglect long-term risks and problems. For example, it may be a large part of why we seem to have such difficulty tackling climate change.
Generally, there are diminishing returns to additional work in an area. This means that the neglectedness of the long-term future makes it more likely to be high impact.
Efforts to shape the long-term future could be extremely high in expected value
Even if the chance of our actions influencing the long-term trajectory of humanity is relatively low, there are extremely large potential benefits, which mean that these actions could still have a very high expected value. For example, decreasing the probability of human extinction by just one in a million could result in an additional 1,000 to 10,000 expected years of civilisation (using earlier assumptions).fn-12
Compare this to actions we could take to improve the lives of people alive today, without looking at longer-run effects. A dramatic victory such as curing the most common and deadly diseases, or ending all war, might only make the current time period (~100 years) about twice as good as otherwise.fn-13 Though this seems like an enormous success, given the calculations above, decreasing the probability of human extinction by one in a million would be 10 to 100 times better in expectation.
We might want to adjust this naive estimate downwards slightly, however, given uncertainty about some of the assumptions that go into it - we could be wrong about the probability of humanity surviving far into the future, or about the value of the future (if we think that future flourishing might have diminishing value, for example). However, even if we think these estimates should be adjusted downwards substantially, we might very conservatively imagine that reducing the likelihood of existential risk by one in a million only equates to 100 expected years of civilisation. This still suggests that the value of working to reduce existential risk is comparable to the value of the biggest victories we could imagine in the current time period - and so well worth taking seriously.
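To illustrate the scale involved, here is a sketch of how a one-in-a-million improvement translates into expected years, using the same illustrative two-filter model as above (the details mirror the calculation in footnote 12):

```python
# Expected years gained from a one-in-a-million improvement in survival odds,
# under the same illustrative two-filter model used earlier.
max_duration_years = 100e12    # 100 trillion years
p_filter_1 = 0.01              # surviving to the end of the Earth's lifespan
p_filter_2 = 0.01              # colonising space, given survival

baseline = p_filter_1 * p_filter_2 * max_duration_years

# Improve the odds at one filter by one part in a million.
improved = (p_filter_1 * 1.000001) * p_filter_2 * max_duration_years

print(f"Expected years gained: {improved - baseline:,.0f}")  # ~10,000 years
```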
Some concerns about prioritising the long-term future
This is a counterintuitive way of doing good
Some people might be sceptical about this cause because focusing on the long-term future seems counterintuitive. Altruism normally means helping those around us - not trying to ensure that our descendants survive for billions of years. If the long-term future is so important, how come almost no-one is thinking or talking about it?
Firstly, just because something goes against common sense doesn’t mean that it is mistaken. The moral intuitions of society have been wrong in the past - we now think of the way people used to treat slaves, women, and gay people as morally abhorrent, for example. There’s also evidence that our moral intuitions are subject to a whole host of biases, such as being insensitive to differences when numbers are large - intuitively, helping 10,000 people feels similarly good to helping 1 million people, even though they are vastly different in scale.fn-14
Secondly, it’s not clear that prioritising the long-term future actually is that counter-intuitive, properly understood. The long-term perspective suggests that we should prioritise things like investing in technology and economic growth, scientific research, strengthening institutions and improving democracy. These seem like pretty plausible responses to the question, “How can we best solve the world’s biggest problems?”. In fact, to some they may seem more intuitive than focusing on more proximate benefits like giving cash to poor farmers in Kenya or distributing malaria nets.fn-15
Shouldn’t we prioritise already existing people?
Another objection to this cause might be that future, unborn people don’t matter morally - at least not in the same way that people currently alive do.
It is easier to empathise with people who are already alive, and this might mean that we feel more motivated to help them. We might feel that it’s easier to help current people, because we can learn what they need. However, none of this implies that future people actually matter any less, morally.
When pressed, it seems like most people do care about the wellbeing of future generations. 80,000 Hours gives the example of a simple choice between (a) preventing one person from suffering next year, and (b) preventing 100 people from suffering (the same amount) 100 years from now. Most people choose the second option, which suggests that they do place real value on future generations.fn-16
Is there anything we can do?
A final objection is that we can’t influence the long-term future with any certainty, and so it’s useless to even try. Our efforts would be better focused on problems where we can see progress more clearly.
In the last section we discussed reasons to think that we probably can influence the long-term future. It’s worth reiterating that we don’t need to be certain of the impact of our actions: efforts to shape the future could affect billions of lives, so even a 1% chance of success would seem worth taking a ‘bet’ on.
Of course, we do still need to show that there is a decent probability of success, or else this reasoning could be used to justify any long-shot idea. The point here is just that we don’t need to be highly confident.
Reasons you might not choose to prioritise the long-term future
You might question how much moral weight we should give to “future people”
We generally feel an intuitive obligation to treat future, not-yet-existent people in roughly the same way as existent people. The argument laid out above assumes that we assign the same “moral weight” to future people as those living today. However, some philosophers have questioned whether we should give the same degree of moral consideration to future people. On “person-affecting views”, an action is only good or bad if it is good or bad for someone - and so the value of an action depends only on how it affects people who are either already alive, or who will come into existence regardless of our actions. This suggests that we have stronger reason to improve the lives of people living today than to help future generations. These views also suggest that human extinction, while bad for the people who die, causes no longer-term harms: there is no harm in people failing to come into existence.
A related issue is the non-identity problem, arising from the fact that sometimes future people may owe their very existence to choices made today. For example, which policies a government chooses to enact will affect which people have certain jobs, affecting which people meet and marry, and therefore causing different people to be born in the future. Those very policies might also affect how good the lives of future people are - if the government chooses to prioritise policies that increase short-term economic productivity over mitigating climate change in the longer-term, say, this could have a negative impact on future generations. But if the very policies that appear to have made future people’s lives “worse off” also ensured that those exact people were born at all, can those people really be said to have been harmed by those policies? If not, then this may be reason to prioritise the welfare of people who already exist, or whose existence does not depend on our actions.
However, the non-identity problem might also be taken as a reason to reject person-affecting views. The implication - that choosing policies that will make future generations' lives worse off does not harm those future people - seems highly counterintuitive. We could instead adopt impersonal principles for evaluating the moral value of actions.fn-17 This means that we would judge actions not based on how they affect specific people, but based on how good their outcomes are from the perspective of the world as a whole.fn-18
You might think that we can’t affect the long-term future
You might think that there is nothing we can do that has a reliable impact on the long-term future.fn-19
For example, you may agree that our actions have indirect effects, but deny that we can tell what those effects will be in advance. It is much easier to look back and identify past actions which led to substantial knock-on benefits than it is to predict what actions will have those effects in future. Or you may think that we cannot make meaningful predictions about existential risks. Or you may think that there is little we can do about existential risks given our current knowledge, institutions, and technology.
Though it is very difficult to be confident here, there are opportunities which seem likely to have some impact. The fact that we are uncertain means that it may also be worth investing in our ability to identify more valuable opportunities to improve the future.
You might disagree with “expected value” reasoning, in principle or in practice
The final step in the argument laid out above relies on the notion of the expected value of an action. The expected value of an action combines (a) the value of each possible outcome with (b) the probability of each outcome occurring. This allows us to see why it can sometimes be better to take an action that has a smaller chance of success, but a greater reward if successful.
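For instance, here is a toy comparison, with purely illustrative numbers of our own, showing how a low-probability, high-payoff action can be better in expectation than a certain one:

```python
# Toy expected-value comparison; all numbers are purely illustrative.
def expected_value(outcomes):
    """Sum of probability * value over a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

option_a = [(1.0, 50_000)]               # certainly help 50,000 people
option_b = [(0.1, 1_000_000), (0.9, 0)]  # 10% chance of helping 1,000,000 people

print(expected_value(option_a))  # 50,000 people helped in expectation
print(expected_value(option_b))  # 100,000 people helped in expectation
```

On expected value grounds, the riskier option B is twice as good, even though it will most likely achieve nothing.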
It’s worth separating out two subtly different ways that you might disagree with the use of expected values in this argument.
1. Objecting to the use of expected values to make decisions in principle
Some people challenge the use of expected value theory altogether. Expected value theory can run into problems, especially if we allow the possibility of infinite amounts of value (since expected value theory says that any non-zero probability of creating infinite value should dominate our decision-making).
The main alternative to expected value theory is pure risk aversion.fn-20 This view states that we should sometimes take lower expected value options if they offer extra certainty.fn-21 Risk aversion is a matter of degree: it doesn’t specify exactly what tradeoff we should make between certainty and potential impact. Introducing any amount of risk aversion weakens the case for the long-term future.
Should we be risk averse when we evaluate altruistic actions? Risk aversion seems to be irrational in a fairly fundamental sense - risk-averse agents can end up with inconsistent preferences that leave them open to exploitation,fn-22 though note that Buchak does argue that risk aversion can be rational.fn-23
2. Objecting to the way that expected values are estimated in practice
You might agree that expected values are how we should make decisions under uncertainty in principle, but disagree with the way that these expected values are calculated in practice.
There are always difficulties in calculating expected values, but these difficulties grow when we lack historical data or studies on the intervention in question. It therefore seems reasonable to be sceptical of any attempt to argue from the expected value of an intervention in this area.
Often when we’re reasoning, it makes sense to account for base rates - in this case, the average effectiveness of a cause area. Suppose you think that this average is much lower than our estimate of the effectiveness of work to improve the future. Since our estimate is quite speculative, there is a reasonable chance that we have made a mistake, and so our all-things-considered judgement about the effectiveness of work on the long-term future should be lower than our initial calculation suggests.fn-24
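One simple way to picture this adjustment is a standard normal-normal Bayesian update, shrinking a noisy estimate towards the base rate; this is our own sketch with made-up numbers, not a calculation from the original argument:

```python
# Toy Bayesian update: shrink a speculative cost-effectiveness estimate
# towards the base rate for cause areas. All numbers are made up.
prior_mean, prior_sd = 1.0, 0.5       # base rate: typical cause effectiveness
estimate, estimate_sd = 100.0, 300.0  # our speculative estimate, very uncertain

# Normal-normal update: a precision-weighted average of prior and estimate.
prior_precision = 1 / prior_sd**2
estimate_precision = 1 / estimate_sd**2
posterior_mean = (
    prior_precision * prior_mean + estimate_precision * estimate
) / (prior_precision + estimate_precision)

print(f"All-things-considered effectiveness: {posterior_mean:.2f}")
# Far below 100: the noisier the initial estimate, the more it shrinks
# towards the base rate.
```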
We think that the case we’ve made for prioritising the long-term future is based on wider considerations than simple expected value estimates (e.g. we’ve argued that it makes intuitive sense, and discussed a number of different heuristics which point in this direction). However, you might certainly put less weight on any specific attempts to shape the long-term future if you are sceptical of our ability to estimate expected value in this area.
You might think that solving current problems is the best way to influence the long-term future
Finally, you might agree with the case laid out above, but believe that the best way to improve the future is by solving the biggest problems that exist today. This isn’t, strictly speaking, an objection to the general idea that the long-term future of humanity should be a priority. It’s just a view about the best way to improve the future. As we’ve discussed, there’s reason to think that solving problems today will have hard-to-anticipate long-term effects. For instance, reducing poverty will improve health and wellbeing. But it may also have significant long-term effects, in that more people will be able to contribute productively to society, to innovation, to the economy, etc.
You might also choose to focus on current problems if you think that these problems are unlikely to be solved by general progress. One worry is that if we leave current problems unsolved, we might get locked into bad structures. For instance, it might be important to solve inequality, in case the problem gets harder to solve later.fn-25
You might therefore choose to focus on solving more immediate, concrete problems in the world even if you think that the long-term is ultimately what’s most important, if you believe (most of) the following:
- There is no immediate risk of extinction, or there is nothing much we (or you personally) can do about it;
- Solving the biggest problems we currently face as a society is likely to have large, long-term benefits, and there’s nothing else we can do which would have larger long-term benefits;
- These problems may not get solved indirectly if we focus on becoming more powerful and knowledgeable as a species, and may even worsen.
Summary
- The long-term future has enormous potential: our descendants could live for billions or trillions of years, and their lives could be very good.
- It seems likely that there are things we can do today that will affect the long-term future in non-negligible ways.
- Attempts to shape the long-term future are currently highly neglected by individuals and society.
- Given the above, actions aimed at shaping the long-term future seem to have extremely high expected value, higher than any actions aiming for more near-term benefits.
- This argument relies on the assumption that we should value future people in roughly the same way we value people currently alive - an assumption which is challenged by person-affecting views of ethics.
- It also relies heavily on “expected value” calculations, which might be questioned either in principle or practice - you might question them if you think we should be risk averse about value, for example, or if you think it’s likely that a mistake may have gone into the expected value estimate in this case.
- It’s also possible that solving current problems is the best way to affect the far future - if you think there’s little or no risk of human extinction, and progress on more immediate problems is likely to have large long-term benefits.
- E.g. catastrophic climate change, nuclear war, or threats from advanced technologies.↩
- Failing to reach technological maturity is also classed as threatening the future of humanity, even though it may not sound like a particularly awful scenario, because of the huge loss of potential - “a technologically mature civilization could (presumably) engage in large-scale space colonization... be able to modify and enhance human biology... could construct extremely powerful computational hardware and use it to create whole-brain emulations and entirely artificial types of sentient, superintelligent minds” (Bostrom, 2012) - meaning the permanent destruction of this potential could constitute an enormous loss.↩
- “It is not absurd to consider the possibility that civilization continues for a billion years, until the Earth becomes uninhabitable” - Nick Beckstead, “On the Overwhelming Importance of Shaping the Far Future”↩
- Note that we don’t necessarily care about just the species Homo Sapiens - when we talk about our “descendants”, we mean any valuable successors we might have, and include in this non-human animals that seem to warrant moral concern.↩
- Assuming one generation every 20 years, and ~75 million people per generation.↩
- Nick Beckstead, reviewing expert opinion on the topic, concludes that, “most informed people thinking about these issues believe that space colonization will eventually be possible” https://www.fhi.ox.ac.uk/will-we-eventually-be-able-to-colonize-other-stars-notes-from-a-preliminary-review/↩
- “Stars will continue shining for about 10^14 more years” (Adams, 2008)↩
- Though this is a claim we will not defend in detail here, it certainly seems like our “circle of compassion” has expanded over time, and Steven Pinker presents a compelling case that violence is decreasing.↩
- It might seem that the risk of extinction is low. However, Nick Bostrom writes that “estimates of 10-20% total existential risk in this century are fairly typical among those who have examined the issue, though inevitably such estimates rely heavily on subjective judgement.”↩
- The Open Philanthropy Project suggests that “as the world becomes more interconnected, the magnitude and implications of the worst-case scenarios may be rising.”↩
- This paper from Australia, for example, suggests that values there were fairly stable and held in consensus.↩
- To arrive at this figure, we make the following calculation. Maximum possible duration of civilisation: M = 100,000,000,000,000 years. Default chance of reaching this maximum (according to our guess of 1% for each of 2 filters): C = 0.01 x 0.01 = 0.0001. Default expected time remaining for civilisation: M x C = 10,000,000,000 years. If we increase the odds of survival at one of the filters by one in a million, we multiply one of the inputs to C by 1.000001, giving C = 0.01 x 0.01 x 1.000001 = 0.0001000001 and a new expected time remaining of M x C = 10,000,010,000 years. This is 10,000 years greater than our default expected value - so even if we’ve overestimated by up to 10x, by our rough guess we’d gain between 1,000 and 10,000 years. It’s also worth noting, however, that reducing the probability of extinction by one in a million may be harder than it sounds, given that there is not just one single threat that needs to be tackled, but many different scenarios that could seriously threaten human extinction.↩
- “A dramatic victory around the world might make this period go twice as well as it otherwise would, say” - Nick Beckstead, “On the Overwhelming Importance of Shaping the Far Future”↩
- Nick Beckstead has an extensive discussion of these reasons in chapter 2 of his thesis, “On the Overwhelming Importance of Shaping the Far Future” (available on his website here: http://www.nickbeckstead.com/research).↩
- Ben Todd makes this point in an article, entitled “Why long-run focused effective altruism is more common sense”. He acknowledges that some very specific long-term future-oriented beliefs, like the idea that we should prioritise reducing existential risks from artificial intelligence, may be counter-intuitive. But simply focusing on the long-term perspective when doing good is commonsensical.↩
- Some people might care more about preventing future suffering than creating many happy beings. Even from this perspective, it is important to improve the quality of the future.↩
- This is roughly the perspective taken by philosopher Derek Parfit, who says that it’s so obviously good for us to help future generations that this itself gives us a definitive reason to reject person-affecting principles.↩
- Another possibility, which Parfit discusses, is to adopt wider person-affecting principles: these say that while most harms / benefits are comparative (i.e. to say we harmed a person is to say they would have been better off had we acted differently), not all are. In particular, we can say that someone was benefitted by being brought into existence if their life is overall positive. On such views, one state of the world is worse than another, even if different people exist in the two worlds (and therefore neither state is strictly speaking worse for anyone), if the lives of the people in World A are less good for those people than the lives of people in World B are for them.↩
- Note that this is subtly different from saying that our actions today don’t affect anything in the future (which would seem hard to argue for). Instead, it is the claim that we cannot reliably or usefully estimate the impact of our actions on the future.↩
- Though note that you need an extreme form of risk aversion to avoid the problems with infinite value mentioned above.↩
- Note that by pure risk aversion we mean that people are risk averse over the unit of value. This is different from monetary risk aversion, which arises from the fact that there are generally diminishing marginal returns to the utility of income.↩
- De Finetti, B. (1964). “Foresight: Its Logical Laws, Its Subjective Sources”; Ramsey, F. P. (1926). “Truth and Probability”, in The Foundations of Mathematics and Other Logical Essays, pp. 156–198.↩
- Buchak, L. (2009). Risk Aversion and Rationality.↩
- This is called Bayesian updating. We should be similarly sceptical of cost-effectiveness estimates in other areas, like global health, but this will be less important for such areas because the evidence is more robust.↩
- Another risk would be humans becoming more and more powerful without a similar rate of increase in empathy towards less powerful species - this could result in our descendants causing more suffering for other animals even if only accidentally. We probably cause more animal suffering today than we did a few hundred years ago, for example, not because we care less about animals (we probably care more), but because it is much easier for us to farm animals en masse in conditions with little regard for their welfare.↩