Toby Ord is working on a book about existential risks for a general audience. This fireside chat with Will MacAskill, from EA Global 2018: London, illuminates much of Toby’s recent thinking. Topics include: What are the odds of an existential catastrophe this century? Which risks do we have the most reason to worry about? And why should we consider printing out Wikipedia?
Below is a transcript of Toby's fireside chat, which we have lightly edited for clarity. You can discuss this talk on the EA Forum.
Will: Toby, you're working on a book at the moment. Just to start off, tell us about that.
Toby: I've been working on a book for a couple of years now. This one is on existential risk, and I think that big books like this are often a little bit like an iceberg - certainly Doing Good Better was - where there's a huge amount of work that goes on before you even decide to write the book, coming up with ideas and distilling them.
I'm trying to write really the definitive book on existential risk. I think the best book so far, if you're looking for something before my book comes out, is John Leslie's The End of the World. That's from 1996. That book actually inspired Nick Bostrom, to some degree, to get into this.
I thought about writing an academic book. Certainly a lot of the ideas that are going to be included are cutting edge ideas that haven't really been talked about anywhere before. But I ultimately thought that it was better to write something at the really serious end of general non-fiction, to try to reach a wider audience. That's been an interesting aspect of writing it.
Will: And how do you define an existential risk? What counts as an existential risk?
Toby: Yeah. This is actually something where even within effective altruism, people often make a mistake, because the name "existential risk," which Nick Bostrom coined, is designed to be evocative of extinction. But the purpose of the idea, really, is that there's the risk of human extinction, but there's also a whole lot of other risks which are very similar in how we have to treat them. They all involve a certain common methodology for dealing with them, in that they're risks that are so serious that we can't afford to have even one of them happen. We can't learn from trial and error, so we have to have a proactive approach.
The way that I currently think about it is that existential risks are risks that threaten the destruction of humanity's long-term potential. Extinction would obviously destroy all of our potential over the long term, as would a permanent unrecoverable collapse of civilization, if we were reduced to a pre-agricultural state again or something like that, and as would various other things that are neither extinction nor collapse. There could be some form of permanent totalitarianism. If the Nazis had succeeded in a thousand-year Reich, and then maybe it went on for a million years, we might still say that that was an utter, perhaps irrevocable, disaster.
I'm not sure that at the time it would have been possible for the Nazis to achieve that outcome with existing technology, but as we get more advanced surveillance technology and genetic engineering and other things, it might be possible to have lasting terrible political states. So existential risk includes both extinction and these other related areas.
Will: In terms of what your aims are with the book, what's the change you're trying to effect?
Toby: One key aim is to introduce the idea of existential risk to a wider audience. I think that this is actually one of the most important ideas of our time. It really deserves a proper airing, trying to really get all of the framing right. And then also, as I said, to introduce a whole lot of new cutting-edge ideas: new concepts, the mathematics of existential risk and other related ideas, and lots of the best science, all put into one place. There's that aspect as well, so it's definitely a book for everyone on existential risk. I've learned a lot while writing it, actually.
But also, when it comes to effective altruism, I think that often we have some misconceptions around existential risk, and we also have some bad framings of it. It's often framed as if it's this really counterintuitive idea. There are different ways of doing this. A classic one involves saying "There could be 10 to the power of 53 people who live in the future, so even if there's only a very small chance..." and going from there, which makes it seem unnecessarily nerdy, where you've kind of got to be a math person to really get any pull from that argument. And even if you are a mathsy person, it feels a little bit like a trick of some sort, like some convincing argument that one equals two or something, where you can't quite see what the problem is, but you're not compelled by it.
Actually, though, I think that there's room for a really broad group of people to get behind the idea of existential risk. There's no reason that my parents or grandparents couldn't be deeply worried about the permanent destruction of humanity's long-term potential. These things are really bad, and I actually think that it's not a counterintuitive idea at all. In fact, ultimately I think that the roots of worrying about existential risk came from the risk of nuclear war in the 20th century.
My parents were out on marches against nuclear weapons. At the time, the biggest protest in US history was 2 million people in Central Park protesting nuclear weapons. It was a huge thing. It was actually the biggest thing at that time, in terms of civic engagement. And so when people can see that there's a real and present threat that could threaten the whole future, they really get behind it. That's also one of the aspects of climate change: people perceive it as a threat to continued human existence, among other things, and that's one of the things that motivates them.
So I think that you can have a much more intuitive framing of this. The future is so much longer than the present, so some of the best ways we could help really could be by helping this long-term future, if there are ways of affecting that whole time period.
Will: Looking to the next century, let's say, where do you see the main existential risks being? What are all the ones that we are facing, and which are the ones we should be most concerned about?
Toby: I think that there is some existential risk remaining from nuclear war and from climate change. I think that both of those are current anthropogenic existential risks. The nuclear war risk is via nuclear winter, where the soot from burning cities would rise up into the upper atmosphere, above the cloud level, so that it can't get rained down, and then would block sunlight for about eight years or so. The risk there isn't that it gets really dark and you can't see or something like that, and it's not that it's so cold that we can't survive, it's that there are more frosts, and that the temperatures are depressed by quite a lot, such that the growing season for crops is only a couple of months. And there's not enough time for the wheat to germinate and so forth, and so there'll be widespread famine. That's the threat there.
And then there's climate change. Climate change is a warming; nuclear winter is also a change in the climate, but a cooling. I think that the amount of warming that could happen from climate change is really underappreciated. The tail risk, the chance that the warming is a lot worse than we expect, is really big. Even if you set aside the serious risks of runaway climate change, of big feedbacks from the methane clathrates or the permafrost, even if you set all of those things aside, the scientific estimate is that doubling the CO2 in the atmosphere produces about three degrees of warming.
But if you look at the fine print, they say it's actually from 1.5 degrees to 4.5 degrees. That's a huge range. There's a factor of three between those estimates, and that's just a 66% confidence interval. They actually think there's a one in six chance it's more than 4.5 degrees. So I think there's a very serious chance that if it doubled, it's more than 4.5 degrees, but also there's uncertainty about how many doublings will happen. It could easily be the case that humanity doubles the CO2 levels twice, in which case, if we also got unlucky on the sensitivity, there could be nine degrees of warming.
And so when you hear these things about how many degrees of warming they're talking about, they're often talking about the median of an estimate. If they're saying we want to keep it below two degrees, what they mean is that they want to keep the median below two degrees, such that there's still a serious chance that it's much higher than that. If you look into all of that, there could be very serious warming, much more serious than you get in a lot of scientific reports. But if you read the fine print in the analyses, this is in there. And so I think there's a lack of really looking into that, so I'm actually a lot more worried about it than I was before I started looking into this.
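To make the doubling arithmetic concrete, here is a minimal illustrative sketch (an editorial addition, not from the talk; the function and the choice of numbers are assumptions for illustration) of how warming scales with the number of CO2 doublings under the standard logarithmic approximation, using the sensitivity range cited above:

```python
import math

def warming(co2_ratio, sensitivity_per_doubling):
    """Equilibrium warming (deg C) for a given CO2 ratio, assuming warming
    scales with the number of doublings (a standard approximation)."""
    doublings = math.log2(co2_ratio)
    return sensitivity_per_doubling * doublings

# Sensitivity per doubling: the roughly-66%-confidence range mentioned above.
for doublings in (1, 2):
    for sensitivity in (1.5, 3.0, 4.5):
        ratio = 2 ** doublings
        print(f"{doublings} doubling(s), sensitivity {sensitivity}: "
              f"{warming(ratio, sensitivity):.1f} C of warming")

# Two doublings with an unlucky sensitivity of 4.5 C per doubling gives the
# roughly nine degrees of warming described above.
```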
By the same token, though, it's difficult for it to be an existential risk. Even if there were 10 degrees of warming, something beyond what you're reading about in the newspapers, it would be extremely bad, just to clarify. But I've been thinking about all these things in terms of whether they could be existential risks, rather than whether they could lead to terrible situations, which could then lead to other bad outcomes. But one thing is that in both cases, both nuclear winter and climate change, coastal areas are a lot less affected. There's obviously flooding when it comes to climate change, but a country like New Zealand, which is mostly coastal, would be mostly spared the effects of either of these types of calamities. Civilization, as far as I can tell, should continue in New Zealand roughly as it does today, but perhaps without low-priced chips coming in from China.
Will: I really think we should buy some land in New Zealand.
Toby: Like as a hedge?
Will: I'm completely serious about this idea.
Toby: I mean, we definitely should not screw up with climate change. It's a really serious problem. It's just that the question I'm looking at is: is it an existential risk? Ultimately, it's probably better thought of as a change in the usable areas on the earth. They currently don't include Antarctica. They don't include various parts of Siberia and some parts of Canada, which are covered in permafrost. Effectively, with extreme climate change, the usable parts of the earth would move a bit, and they would also shrink a lot. It would be a catastrophe, but I don't see why that would be the end.
Will: Between climate change and nuclear winter, do you think climate change is too neglected by EA?
Toby: Yeah, actually, I think it probably is, although you don't see many people in EA looking at either of those. I think they're actually very reasonable things to look at. In both cases, it's unclear why they would be the end of humanity, and people in nuclear winter research generally do not say that it would be. They say it would be catastrophic, and maybe 90% of people could die, but they don't say that it would kill everyone. I think in both cases, they're such large changes to the earth's environment, huge unprecedented changes, that you can't rule out that something that we haven't yet modeled happens.
I mean, we didn't even know about nuclear winter until more than 30 years after the use of nuclear weapons. There was a whole period of time when new effects could have happened, and we would have been completely ignorant of them if we had launched a war during that period. So there could be other things like that. And in both cases, that's where I think most of the danger of existential risk lies, just that it's such a large perturbation of the earth's system that one wouldn't be shocked if it turned out to be an existential catastrophe. So there are those ones, but I think the things that are of greatest risk are things that are forthcoming.
Will: So, tell us about the risks from unprecedented technology.
Toby: Yeah. The two areas that I'm most worried about in particular are biotechnology and artificial intelligence. When it comes to biotech, there's a lot to be worried about. If you look at some of the greatest disasters in human history, in terms of the proportion of the population who died in them, great plagues and pandemics are in this category. The Black Death killed between a quarter and 60% of people in Europe, and it was somewhere between 5 and 15% of the entire world's population. And there are a couple of other cases that are perhaps at a similar level, such as the spread of Afro-Eurasian germs into the Americas when Columbus went across and they exchanged germs. And also, say, the 1918 flu killed about 4% of the people in the world.
So we've had some cases that were big, really big. Could they be so big that everyone dies? I don't think so, at least from natural causes. But maybe. It wouldn't be silly to be worried about that, but it's not my main area of concern. I'm more concerned with biotechnological advances that we've had. We've had radical breakthroughs recently. It's only recently that we've discovered even that there are bacteria and viruses, that we've worked out about DNA, and that we've worked out how to take parts of DNA from one organism and put them into another. How to synthesize entire viruses just based on their DNA code. Things like this. And these radical advances in technology have let us do some very scary things.
And there's also been this extreme spread of the technology. It's often called democratization, but since the technology could be used for harm, it's also a form of proliferation, and so I'm worried about that. It's happening very quickly. You probably all remember when the human genome project was first announced. That cost billions of dollars, and now a complete human genome can be sequenced for $1,000. It's kind of a routine part of PhD work, that you get a genome sequenced.
These things have come so quickly. Then there are other things like CRISPR and gene drives, really radical technologies: CRISPR for putting arbitrary genetic code from one animal into another, and gene drives for releasing a change into the wild and having it proliferate. In both cases, less than two years passed between their invention by the cutting-edge labs in the world, the very smartest scientists, Nobel Prize-worthy stuff, and their replication by undergraduates in science competitions. Just two years. So if you think about that, the pool of people who could have bad motives and who have access to the ability to do these things is increasing massively, from a select group where you might think there are only five people in the world who could do it, who have the skills, who have the money, and who have the time, through to something much faster where the pool of people is in the millions. There's just much more chance you get someone with bad motivation.
And there's also states with bioweapons programs. We often think that we're protected by things like the Bioweapons Convention, the BWC. That is the main protection, but there are states who violate it. We know, for example, that Russia has been violating it for a long time. They had massive programs with more than 10,000 scientists working on versions of smallpox, and they had an outbreak when they did a smallpox weapons test, which has been confirmed, and they also killed a whole lot of people with anthrax accidentally when they forgot to replace a filter on their lab and blew a whole lot of anthrax spores out over the city that the lab was based in.
There are really bad examples of biosafety there, and also the scary thing is that people are actually working on these things. The US believes that there are about six countries in violation of this treaty. Some countries, like Israel, haven't even signed up to it. And the convention itself has the budget of a typical McDonald's, and it has four employees. So that's the thing that stands between us and misuse of these technologies, and I really think that that is grossly inadequate.
Will: The Bioweapons Convention has four people working in it?
Toby: Yeah. It had three. I had to change it in my book, because a new person got employed.
Will: How does that compare to other sorts of conventions?
Toby: I don't know. It's a good question. So those are the types of reasons that I'm really worried about developments in bio.
Will: Yeah. And what would you say to the response that it's just very hard for a virus to kill literally everybody, because they have this huge bunker system in Switzerland, nuclear submarines have six-month tours, and so on? Obviously, this is an unimaginable tragedy for civilization, but still there would be enough people alive that over some period of time, populations would increase again.
Toby: Yeah. I mean, you could add to that uncontacted tribes and also researchers in Antarctica as other hard-to-reach populations. I think it's really good that we've diversified somewhat like that. I think that it would be really hard, and so I think that even if there is a catastrophe, it's likely to not be an existential disaster.
But there are reasons for some actors to try to push something to be extremely dangerous. For example, as I said, the Soviets, and then the Russians after the collapse of the Soviet Union, were working on weaponizing smallpox and weaponizing Ebola. It was crazy stuff, and tens of thousands of people were working on it. And they were involved in a mutually assured destruction nuclear weapons system with a dead hand policy, where even if their command centers were destroyed, retaliation with all of their weapons would still be forced. There was this logic of mutually assured destruction and deterrence, where they needed to have ways of plausibly inflicting extreme amounts of harm in order to try to deter the US. So they were already involved in that type of logic, and so it would have made some sense for them to do terrible things with bioweapons too, assuming the underlying logic makes any sense at all. So I think that there could be realistic attempts to make extremely dangerous bioweapons.
I should also say that I think this is an area that's under-invested in, in EA. I would say that the existential risk from bio is maybe about half that of AI, or a quarter, something like that; a factor of two or four in how big the risk is. But if you recall, in effective altruism we're not interested in working on the problem that has the biggest size; we're interested in what marginal impact you'll have. And it's entirely possible that someone would be more than a couple of times better at working on trying to avoid bio problems than they would be at trying to avoid AI problems.
And also, the community among EAs who are working on biosecurity is much smaller as well, so one would expect there to be good opportunities there. But work on bio-risk does require quite a different skillset, because in bio, a lot of the risk is misuse risk, either by lone individuals, small groups, or nation states. It's much more of a traditional security-type area, where working in biosecurity might involve talking a lot with national security programs and so forth. It's not the kind of area where one wants free and open discussion of all the different possibilities. And one also doesn't want to just say, "Hey, let's have this open research forum where we're just on the internet throwing out ideas, like, 'How would you kill every last person? Oh, I know! What about this?'" We don't actually want that kind of discussion about it, which puts it in a bit of a different zone.
But I think that for people who are actually able to not talk about things that they find interesting and fascinating and important, which a lot of us have trouble with, and who also perhaps already have a bio background, it could be a very useful area.
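As a toy illustration of the marginal-impact point above (an editorial sketch, not from the talk: the heuristic formula, the personal-fit multiplier, and the field sizes are all made-up assumptions; only the rough 2-4x difference in risk size comes from the conversation), a smaller risk can still be the better place for a particular person to work:

```python
# Heuristic (an assumption, not a formula from the talk): expected marginal
# impact is roughly proportional to
#   (size of the risk) x (personal fit) / (number of people already working on it).

causes = {
    #       risk size      personal fit   people already working on it
    "AI":  dict(risk=1.0,  fit=1.0,       workers=300),
    "bio": dict(risk=0.25, fit=3.0,       workers=100),
}

for name, c in causes.items():
    impact = c["risk"] * c["fit"] / c["workers"]
    print(f"{name}: relative marginal impact ~ {impact:.4f}")

# With these illustrative numbers, bio comes out ahead (0.0075 vs 0.0033),
# even though the risk itself is assumed to be only a quarter as large.
```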
Will: Okay. And so even though EAs are taking these risks more seriously than maybe most people, you think we're still neglecting bio relative to the EA portfolio?
Toby: I think so. And then AI, I think, is probably the biggest risk.
Will: Okay, so tell us a little bit about that.
Toby: Yeah. You may have heard more than you ever wanted to about AI risk. But basically, my thinking about this is that the reason that humanity is in control of its destiny, and the reason that we have such a large long-term potential, is because we are the species that's in control. For example, gorillas are not in control of their destiny. Whether they flourish or not, and I hope that they will, depends upon human choices. We're not in such a position relative to any other species, and that's because of our intellectual abilities, both what we think of as intelligence, like problem-solving, and also our ability to communicate and cooperate.
But these intellectual abilities have given us the position where we have the majority of the power on the planet, and where we have the control of our destiny. If we create artificial intelligence, generally intelligent systems, and we make them smarter than humans and also just generally capable, with initiative and motivation and agency, then by default, we should expect that they would be in control of our future, not us, unless we made good efforts to stop that. But the relevant professional community, who are trying to work out how to stop it, how to guarantee that such systems obey commands or that they're motivated to help humans in the first place, think it's really hard, and they have higher estimates of the risk from AI than anyone else.
There's disagreement about the level of risk, but some of the most prominent AI researchers, including ones who are attempting to build such generally intelligent systems, are very scared about it. They aren't the whole AI community, but they are a significant part of it. There are a couple of other AI experts who say that worrying about existential risk is a really fringe position in AI, but they're actually either just lying or they're incompetently ignorant, because they should notice that Stuart Russell and Demis Hassabis are very prominently on the record saying this is a really big issue.
So I think that should give us a whole lot of reason to expect that creating a successor species could well be the last thing we do. And maybe we'd create something that is even more important than us, and it would be a great future to create a successor. It would be effectively our children, or our "mind children," maybe. But we don't have a very good idea how to do that. We have even less of an idea about how to create artificial intelligence systems that themselves have moral status and have feelings and emotions, and strive to achieve greater perfections than us and so on. More likely it would be for some more trivial ultimate purpose. Those are the kinds of reasons why I'm worried.
Will: Yeah, you hinted at this briefly, but over the next hundred years, let's say, what overall chance would you assign to some existential risk event, and then how does that break down between these different risks you've suggested?
Toby: Yeah. I would say something like a one in six chance that we don't make it through this century. I think that there was something like a one in a hundred chance that we didn't make it through the 20th century. Overall, we've seen this dramatic trend towards humanity having more and more power, often increasing at exponential rates, depending on how you measure it. But there hasn't been a similar increase in human wisdom, and so our power has been outstripping our wisdom. The 20th century is the first one where we really had the potential to destroy ourselves. I don't see any particular reason why we wouldn't expect the 21st century to have our power outbalance our wisdom even more, and indeed that seems to be the case. We also know of particular technologies through which this could happen.
And then the 22nd century, I think would be even more dangerous. I don't really see a natural end to this until we discover almost all the technologies that can be built or something, or we go extinct, or we get our act together and decide that we've had enough of that and we're going to make sure that we never suffer any of these catastrophes. I think that that's what we should be attempting to do. If we had a business-as-usual century, I don't know what I'd put the risk at for this century. A lot higher than one in six. My one in six is because I think that there's a good chance, particularly later in the century, that we get our act together. If I knew we wouldn't get our act together, it'd be more like one in two, or one in three.
Will: Okay, cool. Okay. So if we just, no one really cared, no one was really taking action, it would be more like 50/50?
Toby: Yeah, if it was pretty much like it is at the moment, with us just running forward, then yeah. I'm not sure. I haven't really tried to estimate that, but it would be something, maybe a third or a half.
Will: Okay. And then within that one in six, how does that break down between these different risks?
Toby: Yeah. Again, these numbers are all very rough, I should clarify to everyone, but I think it's useful to try to give quantitative estimates when you're giving rough numbers, because if you just say, "I think it's tiny," and the other person says, "No, I think it's really important," you may actually both think it's the same number, like 1% or something like that. I think that I would say AI risk is something like 10%, and bio is something like 5%.
Will: And then the others are less than a percent?
Toby: Yeah, that's right. I think that climate change and... I mean, climate change wouldn't kill us this century if it kills us, anyway. And nuclear war, definitely less than a percent. And probably the remainder would be more in the unknown risks category. Maybe I should actually have even more of the percentage in that unknown category.
Will: Let's talk a little bit about that. How seriously do you take unknown existential risks? I guess they are known unknowns, because we know there are some.
Toby: Yeah.
Will: How seriously do you take them, and then what do you think we should do, if anything, to guard against them?
Toby: Yeah, it's a good question. I think we should take them quite seriously. If we think backwards, and think what risks we would have known about in the past, we had very little idea. Only two people had any idea about nuclear bombs in, let's say, 1935 or something like that, a few years before work on designing the bomb first started. It would have been unknown technology for almost everyone. And if you go back five more years, then it was unknown to everyone. With AI and, actually, man-made pandemics, there were a few people who were talking about these things very early on, but only a couple of people, and it might have been hard to distinguish them from the noise.
But I think ultimately, we should expect that there are unknown risks, and there are things that we can do about them. One of those things is to work on stopping war, and in particular avoiding great power war, as opposed to avoiding every particular war. Some potential wars have no real chance of causing existential catastrophe, but things like World War II or the Cold War were cases that plausibly could have.
I think the way to think about this is not that war itself, or great power war, is an existential risk, but rather that it's something else, which I call an existential risk factor. I take inspiration in this from the Global Burden of Disease, which looks at different diseases and shows how much mortality and morbidity, say, heart disease causes in the world, and adds up a number of disability-adjusted life years for that. They do that for all the different diseases, and then they also want to ask questions like how much ill health does smoking cause, or alcohol? You can think of the particular diseases as pillars, but then there's this question of cross-cutting things, where something like smoking increases heart disease and also lung cancer and various other things, so it contributes a bit to a whole lot of different outcomes. And they ask the question: if you took smoking from its current level down to zero, how much ill health would go away? They call that the burden of the risk factor, and you can do that with a whole lot of things. Not many people think about this, though, within existential risk. I think our community tends to fixate on particular risks a bit too much, and they think if someone's really interested in existential risk, that's good. They'll say, "Oh, you work on asteroid prediction and deflection? That's really cool." That person is part of the ingroup, or the team, or something.
And if they hear that someone else works on global peace and cooperation, then they'll think, "Oh, I guess that might be good in some way." But actually, ask yourself how much existential risk there is this century conditional on knowing there was going to be no great power war. How much would it go down from, say, my current estimate of about 17%? I don't know. Maybe down to 10% or something like that; it could halve. It could actually have a very big effect on the amount of risk.
And if you think about, say, World War II, that was a big great power war, and nuclear weapons were invented during that war, because of the war. And then we also started to massively escalate and invent new types of nuclear weapons, thermonuclear weapons, because of the Cold War. So war has a history of really provoking existential risk, and I think that this really connects with the risks that we don't yet know about, because one way to try to avoid those risks is to try to avoid war, since war has a tendency to make us delve into dark corners of technology space.
So I think that's a really useful idea that people should think about. The risk of being wiped out by asteroids is on the order of one in a million per century; I think it's actually probably lower. Whereas, as I just said, taking great power war down to zero, instead of taking asteroid risk down to zero, is probably worth multiple percentage points of existential risk, which is way more. It's thousands of times bigger. And while a certain kind of nebulous peace-type work might have a lot of people working on it, and so might not be that neglected, trying to avoid great power wars in particular, thinking about the US and China and Russia and maybe the EU, and trying to avoid any of these poles coming into war with each other, is actually quite a lot more neglected. So I think that there would be really good opportunities to try to help with these future risks that way. And that's not the only one of these existential risk factors. You could think of a whole lot of things like this.
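Here is a minimal sketch (an editorial addition, not from the talk) of the "existential risk factor" idea by analogy with attributable burden in the Global Burden of Disease: the contribution of a factor is the difference between total risk as it stands and total risk with that factor taken to zero. The figures are the rough ones given above, treated purely as illustrative inputs.

```python
# Existential risk factor, by analogy with GBD attributable burden:
#   burden(factor) = risk(factor at current level) - risk(factor removed).
# Numbers are the rough figures from the conversation, used illustratively.

total_risk_this_century = 0.17        # ~1 in 6
risk_if_no_great_power_war = 0.10     # rough guess conditional on no great power war
asteroid_risk = 1e-6                  # ~1 in a million per century

great_power_war_factor = total_risk_this_century - risk_if_no_great_power_war
print(f"Great power war as a risk factor: ~{great_power_war_factor:.2%}")
print(f"Asteroid risk eliminated entirely: ~{asteroid_risk:.6%}")
print(f"Ratio: roughly {great_power_war_factor / asteroid_risk:,.0f}x")
# i.e. many thousands of times bigger, which is the comparison made above.
```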
Will: Do you have any views on how likely a great power war is over the next century then?
Toby: I would not have a better estimate of that than anyone else in the audience.
Will: Reducing great power war is one way of reducing unknown risks. Another way might be things like refuges, or greater detection measures, or backing up knowledge in certain ways - stuff like David Denkenberger's work with ALLFED. What's your view on these sorts of activities, which are about ensuring that small populations of people, after a global catastrophe that falls short of extinction, are able to flourish again rather than just dwindle?
Toby: It sounds good. Definitely, the sign is positive. How good it is compared to other kinds of direct work one could do on existential risk, I'm not sure. I tend to think that, at least assuming we've got a breathable atmosphere and so on, it's probably not that hard to come back from the collapse of civilization. I've been looking a lot, in writing this book, at the really long-term history of humanity and civilization. And one thing that I was surprised to learn is that the agricultural revolution - this ability to move from a hunter-gatherer, forager-type life into something that could enable civilization, cities, writing, and so forth - happened about five times in different parts of the world.
So sometimes people, I think mistakenly, refer to Mesopotamia as the cradle of civilization. That's a very Western approach. Actually, there are many cradles, and there were civilizations that started in North America, South America, New Guinea, China, and Africa. So actually, I think every continent except for Australia and Europe. And ultimately, these civilizations kind of have merged together into some kind of global amalgam at the moment. And they all happened at a very similar time, like within a couple of thousand years of each other.
Basically, as soon as the most recent ice age ended and the rivers started flowing and so on, civilizations developed around those very rivers. So it does seem to me to be something that is not just a complete fluke. I think that there's a good chance that things would bounce back, but work to try to help with that could still be worthwhile, particularly doing the very first bits of work. As an example, printing out copies of Wikipedia, putting them in some kind of dried-out, airtight containers, and scattering them in places around the world is probably the kind of cheap thing that an individual could fund, and that a group of five people could actually just do. We're still at the point where there are a whole lot of just-in-case things like that you could do.
Will: I wonder how big Wikipedia is when you print it all out?
Toby: Yeah, it could be pretty big.
Will: You'd probably want to edit it somehow.
Toby: You might.
Will: Justin Bieber and stuff.
Toby: Yeah, don't do the Pokemon section.
Will: What are the non-consequentialist arguments for caring about existential risk reduction? Something that's distinctive about your book is you're trying to unite various moral foundations.
Toby: Yeah, great. That's something that's very close to my heart. And this is part of the idea that I think that there's a really common sense explanation as to why we should care about these things. It's not salient to many people that there are these risks, and that's a major reason that they don't take them seriously, rather than because they've thought seriously about it, and they've decided that they don't care whether everything that they've ever tried to create and stand for in civilization and culture is all destroyed. I don't think that many people explicitly think that.
But my main approach, the guiding light for me, is really thinking about the opportunity cost: thinking about everything that we could achieve, and this great and glorious future that is open to us. And actually, the last chapter of my book really explores that and looks at the epic durations that we might be able to survive for, and the types of things that we might be able to achieve over these cosmological time scales. That's one aspect, duration, and I find it quite inspiring. And then also the scale of civilization could go beyond the Earth and into the stars. I think there's quite a lot that would be very good there.
But also the quality of life could be improved a lot. People could live longer and healthier lives in various obvious ways, but also, if you think about your peak experiences, the moments that really shine through, the very best moments of your life, they're so much better, I think, than typical experiences. Even within human biology, we are capable of having experiences that are much more than twice as good as the typical ones. Maybe we could get much of our life up to that level. So I think there's a lot of room for improvement in quality as well.
These ideas about the future really are the main guide to me, but there are also these other foundations, which I think also point to similar things. One of them is a deontological one, where Edmund Burke, one of the founders of political conservatism, had this idea of the partnership of the generations. What he was talking about there was that we've had ultimately a hundred billion people who've lived before us, and they've built this world for us. And each generation has made improvements, innovations of various forms, technological and institutional, and they've handed down this world to their children. It's through that that we have achieved greatness. Otherwise, we know what it would be like. It would be very much like it was on the savanna in South Africa for the first generations, because it's not like we would have somehow been able to create iPhones from scratch or something like that.
Basically, if you look around, pretty much every single thing you can see, other, I guess, than the people in this room, was built up out of thousands of generations of people working together, passing down all of their achievements to their children. And it has to be. That's the only way you can have civilization at all. And so, is our generation going to be the one that breaks this chain, that drops the baton and destroys everything that all of these others have built? It's an interesting kind of backwards-looking idea there, of debts that we owe and a kind of relationship we're in. One of the reasons that so much was passed down to us was an expectation of the continuation of this. That's, to me, another quite moving way of thinking about this, which doesn't appeal to thoughts about the opportunity cost that would be lost in the future.
And another one that I think is quite interesting is a virtue approach. When people talk about virtue ethics, they're often thinking about character traits which are particularly admirable or valuable within individuals. I've been increasingly thinking, while writing this book, about this at a civilizational level. Think of humanity as a group agent, the kind of collective things that we do, in the same way as we might think of, say, the United Kingdom as a collective agent and talk about what the UK wants when it comes to Brexit or some question like that. If we think about humanity that way, then I think we're incredibly imprudent. We take risks which would be insane if an individual were taking them: relative to the lifespan of humanity, it's equivalent to us taking risks with our whole future life just to make the next five seconds a lot better. With no real thought about this at all, no explicit questioning of it or even calculating it out or anything, we're just blithely taking these risks. I think that we're very impatient and imprudent. I think that we could do with a lot more wisdom, and I think that you can come at it from this perspective as well. When you look at humanity's current situation, it does not look like how a wise entity would be making decisions about its future. It looks incredibly juvenile and immature, like it needs to grow up. And so I think that's another kind of moral foundation that one could come to these same conclusions through.
Will: What are your views on timelines for the development of advanced AI? How has that changed over the course of writing the book, if at all, as well?
Toby: Yeah. I guess my feelings on timelines have changed over the last five or 10 years. Ultimately, the deep learning revolution has gone very quickly, and in terms of the remaining things that need to happen before you get artificial general intelligence, there really are not that many left. Progress seems very quick, and there don't seem to be any fundamental reasons why the current wave of technology couldn't take us all the way through to the end.
Now, it may not. I hope it doesn't, actually. I think that would just be a bit too fast, and we'd have a lot of trouble handling it. But I can't rule out it happening in, say, 10 years or even less. It seems unlikely. My best guess for a kind of median estimate, with as much chance of it happening before that date as after it, would be something like 20 years from now. But also, if it took more than 100 years, I wouldn't be that surprised. I allocate, say, a 10% chance or more to it taking longer than that. But I do think that there's a pretty good chance that it happens within, say, 10 to 20 years from now. Maybe there's like a 30, 40% chance it happens in that interval.
That is quite worrying, because this is a case where I can't rely on the idea that humanity will get its act together. I think ultimately the case with existential risk is fairly clear and compelling: this is something that is worth a significant amount of our attention and is one of the most important priorities for humanity. But we might not be able to make that case over short time periods, so it does worry me quite a bit.
Another aspect here, which gets a bit confusing, and is sometimes confused within effective altruism, is to try to think about the timelines that you think are the most plausible, so you can imagine a probability distribution over different years for when it would arrive. But then there's also the aspect that your work would have more impact if it happened sooner, and I think this is a real thing. If AI is developed in 50 years' time, then the ideas we have now about what it's going to look like are more likely to be wrong, and trying to do work now based on these current ideas will be more shortsighted about what's actually going to help with the problem. And also, there'll be many more people who've come to work on the problem by that point, so it'll be much less neglected by the time it actually happens, whereas if it happens sooner, it'll be much more neglected. Your marginal impact on the problem is bigger if it happens sooner.
You could start with your overall distribution of when it's going to happen, and then modify that into a kind of impact-adjusted distribution of when it's going to happen. That's ultimately the kind of thing that would be most relevant when you're thinking about where to work. Effectively, this is perhaps just an unnecessarily fancy way of saying that one wants to hedge against it coming early, even if you thought that was less likely. But then you also don't want to get yourself all confused and come to think it is coming early, because you somehow messed up this rather complex process of thinking about your leverage changing over time as well as the probability changing over time. I think people often do get confused. They decide they're going to focus on it coming early, and then they forget that they were focusing on it because of leverage considerations, not probability considerations.
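Here is a minimal sketch (an editorial addition, not from the talk) of the transformation described above: start with a probability distribution over arrival times, weight it by how much leverage a unit of work has if AI arrives at that time, and renormalise to get an impact-adjusted distribution. The probabilities are chosen to be loosely consistent with the rough numbers above; the leverage function and time windows are purely illustrative assumptions.

```python
# Impact-adjusted timeline: weight each arrival-time probability by the
# leverage your work would have if AI arrived then, and renormalise.

timeline_prob = {          # assumed P(AGI arrives in this window)
    "0-10 years": 0.15,
    "10-20 years": 0.35,
    "20-50 years": 0.30,
    "50-100 years": 0.10,
    "100+ years": 0.10,
}
leverage = {               # assumed relative impact of a unit of work done now,
    "0-10 years": 4.0,     # higher for earlier arrival (less crowded field,
    "10-20 years": 3.0,    # ideas less likely to be outdated)
    "20-50 years": 1.5,
    "50-100 years": 0.7,
    "100+ years": 0.3,
}

weights = {k: p * leverage[k] for k, p in timeline_prob.items()}
total = sum(weights.values())
impact_adjusted = {k: w / total for k, w in weights.items()}

for window in timeline_prob:
    print(f"{window:>12}: p={timeline_prob[window]:.2f} -> "
          f"impact-adjusted {impact_adjusted[window]:.2f}")
# The adjusted distribution shifts weight toward earlier arrival: the "hedge
# against it coming early" point, without changing the underlying beliefs.
```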
Will: In response to the hedging, what would you say to the idea that, well, in very long timelines, we can have unusual influence? So supposing it's coming in 100 years' time, I'm like, "Wow, I have this 100 years to kind of grow. Perhaps I can invest my money, build hopefully exponentially growing movements like effective altruism and so on." And this kind of patience, this ability to think on such a long time horizon, that's itself a kind of unusual superpower or way of getting leverage.
Toby: That is a great question. I've thought about that a lot, and I've got a short piece on this online: "The Timing of Labour Aimed at Reducing Existential Risk". What I was thinking about was this question: suppose you're going to do a year of work. Is it more important that the year of work happens now, or that it happens closer to the crunch time, when the risks are imminent? And you could apply this to other things as well as existential risk. Ultimately, I think that there are some interesting reasons that push in both directions, as you've suggested.
The big one that pushes towards later work, such that you'd rather have the year of work be done in the immediate vicinity of the difficult time period, is something I call nearsightedness. We just don't know what the shape of the threats is. As an example, it could be that now we think AI is bigger than bio, but then it turns out within five or 10 years' time that there've been some radical breakthroughs in bio, and we come to think bio is the biggest threat. And then we think, "Oh, I'd rather have been able to switch my labor into bio."
So that's an aspect where it's better to be doing it later in time, other things being equal. But then there are also quite a few reasons why it's good to do things earlier in time, and these include growth, as you were suggesting. Money in a bank or in investments could grow, such that you do the work now, you invest the money, the money gets much bigger, and then you pay for much more work later. Obviously, there's growth in terms of people and ideas too: you do some work growing a movement, and then you have thousands or millions of people trying to help later, instead of just a few. Growing an academic field works like that as well. A lot of things do.
And then there are also other related ideas, like steering. If you're going to do some work on steering the direction of how we deal with one of these issues, you want to do that steering work earlier, not later. It's like the idea of diverting a river: you want to do that closer to the source. So there are various of these things that push in different directions, and they help you to work out the value of the different things you were thinking of doing. I like to think of this as a portfolio, in the same way as we think of an EA portfolio of what we're all doing with our lives. It's not the case that each one of us has to mirror the overall portfolio of important problems in the world, but what we should do together is contribute as best we can to humanity's portfolio of work on these different issues.
Similarly, you could think of a portfolio over time, of all the different bits of work and which ones are best done at which times. Right now, it's better to be thinking deeply about some of these questions, trying to do some steering, trying to do some growth. Direct work is often more useful to do later, although there are some exceptions. For example, it could be that with AI safety, you actually need to do some direct work just to prove that there's a "there" there. And I think that's effectively what direct work on AI safety is doing at the moment: the main benefit of it is actually that it helps with the growth of the field.
So anyway, there are a few different aspects to that question, but I think that our portfolio should involve both of these things. I think there's also a pretty reasonable chance that AI comes late, or that the risks come late and so on, such that the best thing to be doing is growing the interest in these areas. In some ways, my book is a bet on that: saying it would be really useful if this idea had a really robust and good presentation, and trying to do that and present it in the right way, so that it has the potential to really take off and be something that people all over the world take seriously.
Obviously, that's in some tension with the possibility AI could come in five years, or some other risk, bio risk, could happen really soon. Or nuclear war or something like that. But I think ultimately, our portfolio should go both places.
Will: Terrific. Well, we've got time for one last short question. First question that we got. Will there be an audiobook?
Toby: Yes.
Will: Will you narrate it?
Toby: Maybe.