The number of people alive today pales in comparison to the number who could exist in the future. It may therefore be extremely important to ensure that human civilization flourishes far into the future, enjoying fulfilling lives free of suffering.
There are a number of ways we might work to ensure a positive future for humanity. We could work to better understand and prevent extinction risks - catastrophic events that have the potential to destroy all life on this planet.fn-1 We may want to focus on the broader category of existential risks - events that could dramatically and irreversibly curtail humanity’s potential.fn-2 Or we might focus on increasing the chance that the lives of our descendants are positive in other ways: for example, improving democracy or the ability of institutions to make good decisions.
Attempts to shape the long-term future seem highly neglected relative to the problems we face today. There are fewer incentives to address longer-term problems, and they can also be harder for us to take seriously.
It is, of course, hard to be certain about the impact of our actions on the very long-term future. However, there do seem to be things we can do - and given the vast scales involved, these actions could have an enormous impact in expectation.
This profile sets out why you might want to focus your altruistic efforts on the long-term future - and why you might not. You may be particularly inclined to focus on this if you think we face serious existential threats in the next century, and if you’re comfortable accepting a reasonable amount of uncertainty about the impact you are having, especially in the short term.
The case for focusing on the long-term future can be summarised as follows:

1. The long-term future could be vast: far more people could live in the future than are alive today.
2. There are things we can do today that could influence how well that future goes.
3. Attempts to shape the long-term future are highly neglected.
4. Work in this area therefore has a very high expected value.
Below we discuss each part of this argument in more detail.
Civilisation could continue for a billion years, until the Earth becomes uninhabitable.fn-3 It’s hard to say how likely this is, but it certainly seems plausible - and putting less than, say, a 1% chance on this possibility seems overconfident.fn-4 You may disagree that 1% is a reasonable lower bound here, but changing the figure by an order of magnitude or two would still leave a vast amount of expected future ahead of us. And even if civilisation only survives for another million years, that still amounts to another ~50,000 generations of people, i.e. trillions of future lives.fn-5
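To spell out the arithmetic behind these figures (assuming roughly 20 years per generation and, very roughly, today’s population of ~8 billion per generation - both simplifying assumptions):

$$\frac{10^{6} \text{ years}}{20 \text{ years per generation}} = 50{,}000 \text{ generations}$$

$$50{,}000 \text{ generations} \times 8 \times 10^{9} \text{ lives} \approx 4 \times 10^{14} \text{ lives, i.e. hundreds of trillions}$$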
If our descendants survive for long enough, then they are likely to advance in ways we cannot currently imagine - even someone living a few hundred years ago could not possibly have imagined the technological advances we’ve made today. They might even develop technology enabling them to reach and colonise planets outside our solar system, and survive well beyond a billion years.fn-6
Let’s say that if we survive until the end of the Earth’s lifespan, there is a 1% chance of space colonisation. This would make the overall probability of survival beyond Earth 1 in 10,000 (a 1% chance of surviving to a billion years, multiplied by a 1% chance of surviving further given that). This sounds incredibly low, but suppose that space colonisation could allow our descendants to survive for up to 100 trillion years.fn-7 This suggests we could have up to 1/10,000 x 100 trillion years = 10 billion expected years of civilisation ahead of us.
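Written out in full, using the guesses above:

$$P(\text{survive beyond Earth}) = 0.01 \times 0.01 = 10^{-4}$$

$$\text{Expected years of civilisation} = 10^{-4} \times 10^{14} \text{ years} = 10^{10} \text{ years (10 billion)}$$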
If we expect life in the future to be, on average, about as good as the present, then this would make the whole of the future about 100 million times more important than everything that has happened in the last 100 years. In fact, people in the future could be both more numerous and better off than those living today: economic, social, and technological progress could enable us to cure diseases, lift people out of poverty, and better solve other problems. It also seems possible that people in the future will be more altruistic than people alive todayfn-8 - which also makes it more likely that they will be motivated to create a happy and valuable world.
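The ‘100 million times’ figure is simply the ratio of the two time spans, assuming years in the future are on average as valuable as years today:

$$\frac{10^{10} \text{ expected future years}}{100 \text{ years}} = 10^{8}$$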
However, it’s precisely because of this enormous potential that it’s so important to ensure that things go as well as possible. The loss of potential would be enormous if we end up on a negative trajectory. It could result in a great deal of suffering or the end of life.fn-9 And just as the potential to solve many of the world’s problems is growing, threats seem to be growing too. In particular, advanced technologies and increasing interconnectedness pose great risks.fn-10
There are a number of things we could work on today that seem likely to influence the long-term future, including:

- Understanding and reducing existential risks, such as those posed by advanced technologies.
- Improving democracy and the ability of institutions to make good decisions.
Paul Christiano also points out that even if opportunities to shape the long-term future with any degree of certainty do not exist today, they may well exist in the future. Investing in our own current capacity could have an indirect but large impact by improving our ability to take such opportunities when they do arise. Similarly, we can do research today to learn more about how we might be able to impact the long-term future.
Attempts to shape the long-term future are neglected by individuals, organisations and governments.
One reason is that there is little incentive to focus on far-off, uncertain issues compared to more certain, immediate ones. As 80,000 Hours put it, “Future generations matter, but they can’t vote, they can’t buy things, they can’t stand up for their interests.”
Problems faced by future generations are also more uncertain and more abstract, making it harder for us to care about them. There is a well-established phenomenon called temporal discounting: we tend to give less weight to outcomes that are far in the future. This may explain our tendency to neglect long-term risks and problems - for example, it may be a large part of why we have such difficulty tackling climate change.
Work in an area generally has diminishing returns: the first people to work on a problem tend to take the best opportunities. The neglectedness of the long-term future therefore makes it more likely that high-impact opportunities remain untaken.
Even if the chance of our actions influencing the long-term trajectory of humanity is relatively low, there are extremely large potential benefits, which mean that these actions could still have a very high expected value. For example, decreasing the probability of human extinction by just one in a million could result in an additional 1,000 to 10,000 expected years of civilisation (using earlier assumptions).fn-12
Compare this to actions we could take to improve the lives of people alive today, without looking at longer-run effects. A dramatic victory such as curing the most common and deadly diseases, or ending all war, might only make the current time period (~100 years) about twice as good as otherwise.fn-13 Though this seems like an enormous success, given the calculations above, decreasing the probability of human extinction would be 10 or 100 times better in expectation.
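To make this comparison explicit: doubling the value of the current ~100-year period is worth roughly 100 years of civilisation-equivalent value, so

$$\frac{1{,}000 \text{ to } 10{,}000 \text{ expected years}}{\sim 100 \text{ years}} \approx 10 \text{ to } 100 \text{ times better in expectation}$$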
We might want to adjust this naive estimate downwards, however, given uncertainty about some of the assumptions that go into it - we could be wrong about the probability of humanity surviving far into the future, or about the value of the future (if we think that future flourishing might have diminishing value, for example). But even after substantial downward adjustment - suppose, very conservatively, that reducing existential risk by one in a million equates to only 100 expected years of civilisation - the value of this work remains comparable to the biggest victories we could imagine in the current time period, and so seems well worth taking seriously.
Some people might be sceptical about this cause because focusing on the long-term future seems counterintuitive. Altruism normally means helping those around us - not trying to ensure that our descendants survive for billions of years. If the long-term future is so important, how come almost no-one is thinking or talking about it?
Firstly, just because something goes against common sense doesn’t mean that it is mistaken. Society’s moral intuitions have been wrong in the past - we now regard the ways people used to treat slaves, women, and gay people as morally abhorrent, for example. There’s also evidence that our moral intuitions are subject to a whole host of biases, such as scope insensitivity - being insensitive to differences when numbers are large. Intuitively, helping 10,000 people feels about as good as helping 1 million people, even though the two differ vastly in scale.fn-14
Secondly, it’s not clear that prioritising the long-term future actually is that counter-intuitive, properly understood. The long-term perspective suggests that we should prioritise things like investing in technology and economic growth, scientific research, strengthening institutions and improving democracy. These seem like pretty plausible responses to the question, “How can we best solve the world’s biggest problems?”. In fact, to some they may seem more intuitive than focusing on more proximate benefits like giving cash to poor farmers in Kenya or distributing malaria nets.fn-15
Another objection to this cause might be that future, unborn people don’t matter morally - at least not in the same way that people currently alive do.
It is easier to empathise with people who are already alive, and this might mean that we feel more motivated to help them. We might feel that it’s easier to help current people, because we can learn what they need. However, none of this implies that future people actually matter any less, morally.
When pressed, it seems that most people do care about the wellbeing of future generations. 80,000 Hours gives the example of a simple choice between (a) preventing one person from suffering next year, and (b) preventing 100 people from suffering (the same amount) 100 years from now. Most people choose the second option, which suggests that they do value future generations.fn-16
A final objection is that we can’t influence the long-term future with any certainty, and so it’s useless to even try. Our efforts would be better focused on problems where we can see progress more clearly.
In the last section we discussed reasons to think that we probably can influence the long-term future. It’s worth reiterating that we don’t need to be certain of the impact of our actions: efforts to shape the future could affect billions of lives, so even a 1% chance of success would seem worth taking a ‘bet’ on.
Of course, we do still need to show that there is a decent probability of success - otherwise this reasoning could justify any long-shot idea. The point here is just that we don’t need to be highly confident.
The argument laid out above assigns the same “moral weight” to future people as to those living today. However, some philosophers have questioned whether future, not-yet-existent people deserve the same degree of moral consideration. On “person-affecting views”, an action is only good or bad if it is good or bad for someone - so the value of an action depends only on how it affects people who are either already alive, or who will come into existence regardless of our actions. This suggests that we have stronger reason to improve the lives of people living today than to help future generations. These views also imply that human extinction, while bad for the people who die, causes no longer-term harm: there is no harm in people failing to come into existence.
A related issue is the non-identity problem, arising from the fact that sometimes future people may owe their very existence to choices made today. For example, which policies a government chooses to enact will affect which people have certain jobs, affecting which people meet and marry, and therefore causing different people to be born in the future. Those very policies might also affect how good the lives of future people are - if the government chooses to prioritise policies that increase short-term economic productivity over mitigating climate change in the longer-term, say, this could have a negative impact on future generations. But if the very policies that appear to have made future people’s lives “worse off” also ensured that those exact people were born at all, can those people really be said to have been harmed by those policies? If not, then this may be reason to prioritise the welfare of people who already exist, or whose existence does not depend on our actions.
However, the non-identity problem might also be taken as a reason to reject person-affecting views. The implication - that choosing policies that will make future generations’ lives worse off does not harm those future people - seems highly counterintuitive. We could instead adopt impersonal principles for evaluating the moral value of actions.fn-17 On this approach, we judge actions not by how they affect specific people, but by how good their outcomes are from the perspective of the world as a whole.fn-18
You might think that there is nothing we can do that has a reliable impact on the long-term future.fn-19
For example, you may agree that our actions have indirect effects, but deny that we can tell what those effects will be in advance. It is much easier to look back and identify past actions which led to substantial knock-on benefits than it is to predict what actions will have those effects in future. Or you may think that we cannot make meaningful predictions about existential risks. Or you may think that there is little we can do about existential risks given our current knowledge, institutions, and technology.
Though it is very difficult to be confident here, there are opportunities which seem likely to have some impact. The fact that we are uncertain means that it may also be worth investing in our ability to identify more valuable opportunities to improve the future.
The final step in the argument laid out above relies on the notion of the expected value of an action. The expected value of an action combines (a) the value of each possible outcome with (b) the probability of each outcome occurring. This allows us to see why it can sometimes be better to take an action that has a smaller chance of success, but a greater reward if successful.
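Formally, if an action has possible outcomes indexed by $i$, each with probability $p_i$ and value $v_i$, its expected value is:

$$\mathbb{E}[V] = \sum_i p_i \, v_i$$

For example, a 10% chance of helping 1,000 people has an expected value of 100 people helped - higher than helping 50 people with certainty, even though the first action will probably achieve nothing.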
It’s worth separating out two subtly different ways that you might disagree with the use of expected values in this argument.
1. Objecting to the use of expected values to make decisions in principle
Some people reject the use of expected value theory altogether. Expected value theory runs into problems especially if we allow the possibility of infinite amounts of value, since it implies that any non-zero probability of creating infinite value should dominate our decision-making.
The main alternative to expected value theory is pure risk aversion.fn-20 This view states that we should sometimes take lower expected value options if they offer extra certainty.fn-21 Risk aversion is a matter of degree: it doesn’t specify exactly what tradeoff we should make between certainty and potential impact. Introducing any amount of risk aversion weakens the case for the long-term future.
Should we be risk averse when we evaluate altruistic actions? Risk aversion seems to be irrational in a fairly fundamental sense: risk-averse agents can end up with inconsistent preferences that leave them open to exploitationfn-22 (though note that Buchak argues that risk aversion can be rational).fn-23
2. Objecting to the way that expected values are estimated in practice
You might agree that expected values are how we should make decisions under uncertainty in principle, but disagree with the way that these expected values are calculated in practice.
There are always difficulties in calculating expected values, but these difficulties are amplified when we lack historical data or studies on an intervention. It therefore seems reasonable to be sceptical of any attempt to argue from the expected value of an intervention in this space.
Often when we’re reasoning, it makes sense to account for base rates - in this case, the average effectiveness of a cause area. Suppose you think the average cause area is far less effective than our estimate of work to improve the future suggests. Since our estimate is quite speculative, it is quite possible that we have made a mistake somewhere - and so our all-things-considered judgement about the effectiveness of work on the long-term future should be lower than our initial calculation suggests.fn-24
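One common way to formalise this adjustment is to treat the average cause area as a Bayesian prior and shrink a speculative estimate towards it. Here is a minimal sketch under that assumption - the function and all the numbers are purely illustrative, not estimates from this article:

```python
# A minimal sketch of the base-rate adjustment described above, using a
# simple normal-normal Bayesian update. All numbers are illustrative
# assumptions, not estimates from this article.

def posterior_mean(prior_mean, prior_sd, estimate, estimate_sd):
    """Precision-weighted average of a prior (the base rate) and a noisy estimate."""
    prior_precision = 1 / prior_sd ** 2
    estimate_precision = 1 / estimate_sd ** 2
    return (prior_mean * prior_precision + estimate * estimate_precision) / (
        prior_precision + estimate_precision
    )

# Prior: the effectiveness of a typical cause area (arbitrary units).
# Estimate: a speculative calculation for long-term future work, with
# very large uncertainty because it rests on many guesses.
adjusted = posterior_mean(prior_mean=1.0, prior_sd=2.0, estimate=100.0, estimate_sd=100.0)
print(round(adjusted, 2))  # ~1.04: a very noisy estimate barely moves the prior
```

The design point is just that the noisier the estimate, the less it should move our all-things-considered judgement away from the base rate.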
We think that the case we’ve made for prioritising the long-term future rests on wider considerations than simple expected value estimates (e.g. we’ve argued that it makes intuitive sense, and discussed a number of different heuristics which point in this direction). However, you might certainly put less weight on any specific attempt to shape the long-term future if you are sceptical of our ability to estimate expected value in this area.
Finally, you might agree with the case laid out above, but believe that the best way to improve the future is by solving the biggest problems that exist today. This isn’t, strictly speaking, an objection to the general idea that the long-term future of humanity should be a priority. It’s just a view about the best way to improve the future. As we’ve discussed, there’s reason to think that solving problems today will have hard-to-anticipate long-term effects. For instance, reducing poverty will improve health and wellbeing. But it may also have significant long-term effects, in that more people will be able to contribute productively to society, to innovation, to the economy, etc.
You might also choose to focus on current problems if you think that these problems are unlikely to be solved by general progress. One worry is that if we leave current problems unsolved, we might get locked into bad structures. For instance, it might be important to solve inequality, in case the problem gets harder to solve later.fn-25
You might therefore choose to focus on solving more immediate, concrete problems in the world even if you think that the long-term is ultimately what’s most important, if you believe (most of) the following:

- Solving today’s problems has significant positive knock-on effects for the future.
- Current problems are unlikely to be solved by general progress, and leaving them unsolved risks locking in bad structures.
- More targeted attempts to shape the long-term future are too uncertain, given our current knowledge, institutions, and technology.
The rough calculation behind the ‘1,000 to 10,000 expected years’ figure mentioned earlier runs as follows:

Maximum possible duration of civilisation: M = 100,000,000,000,000 years (100 trillion)

Default chance of reaching this maximum, given our guess of 1% for each of the two filters: C = 0.01 x 0.01 = 0.0001

Default expected time remaining for civilisation: M x C = 10,000,000,000 years (10 billion)

If we increase the odds of survival at one of the filters by one in a million, we multiply one of the inputs to C by 1.000001. Our new value of C is 0.01 x 0.01 x 1.000001 = 0.0001000001, so the new expected time remaining for civilisation is M x C = 10,000,010,000 years.
This is 10,000 years greater than our default expected value. So even if we have overestimated by up to 10x, this rough calculation suggests we would gain between 1,000 and 10,000 expected years of civilisation.
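For readers who want to check these numbers, here is a minimal sketch of the same calculation in Python (the 1% filter probabilities and the 100-trillion-year horizon are the illustrative guesses used above):

```python
# Reproduces the rough extinction-risk calculation above. The two 1%
# "filter" probabilities and the 100-trillion-year maximum are the
# article's illustrative guesses, not established figures.

MAX_DURATION_YEARS = 100e12  # M: maximum possible duration of civilisation
P_FILTER_1 = 0.01            # chance of surviving until Earth becomes uninhabitable
P_FILTER_2 = 0.01            # chance of space colonisation, given survival

default_years = MAX_DURATION_YEARS * P_FILTER_1 * P_FILTER_2

# Improve the odds of survival at one filter by one in a million.
improved_years = MAX_DURATION_YEARS * (P_FILTER_1 * 1.000001) * P_FILTER_2

print(f"Default:  {default_years:,.0f} years")                   # ~10,000,000,000
print(f"Improved: {improved_years:,.0f} years")                  # ~10,000,010,000
print(f"Gain:     {improved_years - default_years:,.0f} years")  # ~10,000
```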
It’s also worth noting here, however, that reducing the probability of extinction by one in a million may be harder than it sounds - there is not just one single threat to tackle, but many different scenarios that could lead to human extinction.