In this fireside chat from EA Global 2018: San Francisco, Rob Wiblin asks Nick Beckstead questions about the long-term future, epistemic humility, and which careers effective altruists may not be exploring enough. A transcript of their chat is below, which we have lightly edited for readability.
Rob: How many people heard the podcast interview I did with Beckstead a couple months ago? I'm going to try now to not repeat that, and to push on from some of the topics that we raised there.
Nick: Cool.
Rob: Since 2012, there's been a distinct movement towards focusing on existential risk and long-term future causes. But it's not as if there are many new arguments that we weren't aware of in 2012. We've just become more confident, or we're more willing to make the contrarian bet that thinking about the very long-term future is the most impactful thing to do.
I'm curious to know whether you think we should have made that switch earlier. Whether we were too averse to doing something that was weird, unconventional, and that other people in general society might not have respected or understood.
Nick: Yeah, that's a good question. I think some things have changed that have made focusing on the long-term future more attractive. But I mostly think we should've gone in more boldly on those issues sooner, and that we had a sufficient case to do so.
I think the main thing that has changed, with AI in particular, is that the sense of progress in the field has taken off. It's become a bit more tangible for that reason. But mostly, I do agree that we could've gone in earlier. And I regret that. If we had spent money to grow work in that field sooner, I think it would have been better spent than what the Effective Altruist community is likely to spend its last dollars on. I wish the field was larger today. Yeah, so, I think that was a mistake.
I guess if you try to ask: "Why did we make that mistake?" I think you're probably pointing in roughly the right direction. It's uncomfortable to do something super weird. I don't know, though, I guess different people would have different answers.
I think if I ask myself why I didn't come into this more boldly earlier - I include myself in the group of people who could have - I think there was a sense of: "Wow, that's pretty crazy. Am I ready for that? What are people going to think?"
Rob: People came up with all these stories for why working on reducing poverty or helping animals in the standard ways we'd already been doing was also the best way to help the long-term future, which struck me as very suspicious arguments at the time. Because they seemed like rationalizations for just not changing anything based on what could be a really crucial consideration.
Do you think that there's anything that we might be doing like that today? Do you think that they were actually rationalizations? Or were they good considerations that we had to work through?
Nick: I mean, I think it's true that for a lot of the ways we could do good now, including addressing global poverty, you could make a case that they have a positive effect on the long-term future. If I was going to guess the sign, that's the sign I would guess.
I don't think that donating to the usual suspects, AMF and such, would be my leading guess for how to change the long-term character of the future in the best way possible. So, I don't know, I don't want to psychologize other people's reasons for coming up with that view.
Could we be making other mistakes like that today? I don't see one. I think there are things you could say maybe we're doing that with. You could make a case like, "Oh, are we doing that with infinite ethics or something?" There's not much of a debate about that.
Rob: Seems like it could be another crucial consideration that could really switch things around.
Nick: It could be. I mean, according to me, I would say if I was off doing the thing that was best for infinite ethics, it would look a lot like the stuff I'm doing for global catastrophic risks.
Rob: Isn't that really suspicious? Why is that?
Nick: I don't find it that suspicious. I think that for a lot of the work on AI and bio-security preparations, when I run the numbers, the expected cost per life saved is right up there with the best ways of just helping humans in general. AI and bio-security are big deal things in the world - they're not big deals only if you have an Astronomical Waste worldview.
So I think they weather a storm from "help the most humans" to "do the best thing for astronomical waste". It wouldn't be that weird if they were also the best thing for "infinite good outcomes". I think it's less tight there. If I were going to explain the argument a little bit more, I would say, "Well, if there's a way of achieving infinite good, then either it's a scalable process where you want more of it, or it's a process where you just want to get it once."
If it's a scalable process where you want more of it, then it's going to be some physical process that could be discoverable by a completed science. Then the same astronomical waste argument applies to that, because that's where all the possible resources are: in the distant future. And we'll probably unlock the relevant physical process if we nail that problem.
And conversely, if it's a thing where you just get it once, I don't have any good candidates available right now for how we're going to achieve infinite value through some esoteric means. My bet would be on a more developed civilization with a completed science finding it. So I want to make sure we get there.
Rob: We could at least tell people about it a bit more.
Nick: We could.
Rob: If it seemed that decisive, yeah.
Nick: We could.
Rob: We could have some infinite ethics promotion advocacy group. That'd be pushing the boundaries, I think.
Nick: Yeah.
Rob: Maybe bad for the rest of the movement.
Rob: Over that same period, there's also been a movement among mainstream AI researchers to take alignment issues seriously. But they seem to have been fairly slow to get on board. And even now, it seems like we're making decent progress on concrete technical questions, but it's not as if mainstream AI capabilities researchers are stampeding to join this effort.

Why do you think that is? Are there things that we could have done differently? Or could do differently now?
Nick: Yeah. Why is the AI community not doing all the things it could do to take technical safety seriously as a problem? I think that's a difficult question. I think partly it's a matter of timelines and what people believe - how far away they believe powerful AI systems are.
I think that's a big chunk of it. I also think a big chunk of it is that some of the work involves motivations that are kind of unusual, and that are hard to think about through the lens you would have as a machine learning researcher.
I think, for example, Paul Christiano does a lot of really excellent work on AI alignment as a problem. He blogs about it. A lot of his blog posts are not really shaped like machine learning papers. I think fields tend to get organized around successful methodologies, people who have employed those methodologies successfully and sets of problems that are interesting.
I think that machine learning is a field that likes empirical results and explaining them. What you did with some code and what the empirical results were. Some issues within AI safety and alignment fit well in that frame, and people do them. Which is great.
But I also think there are a lot of things that, say, Paul writes about, that don't fit that. I think a lot of the things that you would have to consider to be properly motivated to work on the problem might have a little bit of the character of "not shaped like a machine learning paper". I think that makes it hard to move things forward. I think that's a hard problem to think about.
Rob: Yeah. Do you think fields are making a big mistake if they're so resistant to ideas about their field that come from outside? Or is that defensible, because it helps them avoid nonsense that would otherwise creep in?
Nick: Partly. I might reframe "resistant to ideas that come from the outside." I think fields are organized around a topic, a set of problems, and some methodologies. New methodologies, I think, tend to get accepted when people come in and use a new methodology to do something really impressive by the standards of the field, or solve some recalcitrant, interesting problem.
I'm basically expressing a Kuhnian view about the philosophy of science. I think there's a lot to that, and it keeps out pseudoscience. I think it helps keep things rigorous, and there's a valuable function to be played there. But because some of the approaches to thinking about the motivations behind AI safety as a technical problem aren't so much shaped like a machine learning paper, I think it does set off people's pseudoscience alarm bells. And I think that makes it harder to move forward in the way that I wish things would.
Rob: Yeah. Seems like this was an issue with concerns about nanotechnology as well. Maybe in the 80s and 90s.
Nick: Yes.
Rob: I guess maybe less of an issue with concerns about synthetic biology, it seems like. Maybe there's concerns that come from within the field as well.
Nick: Yeah, I mean, I think the case of nanotechnology is a harder one. I feel less sure what to think about it overall. But insofar as I have looked into it - like Eric Drexler's work on nanosystems - I haven't seen anything that I would call a decisive refutation. And I've seen things that were offered as decisive refutations.
So I place pretty reasonable odds on something like that being possible in principle. I feel uncertain about what to think of the field's view of it, but I do place substantial probability on it being a case of interesting work that was on the turf of another discipline, didn't conform to the reigning methodologies, and successfully reasoned its way to a novel conclusion - but not a conclusion that was of interest to the field in the right way, or provable using what they would normally consider the way to do science, so it never became socialized and accepted.
I do think it's an interesting test for thinking about the sociology of a field, and how fields work with their intellectual norms.
It's a little weird to talk about whether they're being irrational in this respect, because in some sense, I think these are spontaneous orders that arise. They're not designed by one person. They're kind of hard to change, and fashion plays a big role in fields. I think that's clear to anybody who's been a grad student.
Rob: Do you think if we're interested in changing views within fields in the future, that we should try to pair launching our views with some kind of technical accomplishment? Some engineering result that they might respect?
Nick: Yeah. It's an interesting question. I think it's a hard question ultimately. If you took the strict Kuhnian view and you said, "Okay, well how do you change the paradigm of the field?" You have to come in with some revolutionary accomplishment.
Rob: It'd be a long delay, I expect.
Nick: That's a tall order.
Rob: Yeah.
Nick: I think you can do things that are interesting by the standards of the field and also are interesting for your own unusual motivation. And that is also a route to legitimizing a topic.
Rob: Which career options do you think are very promising but EAs aren't pursuing enough?
Nick: I wouldn't say this is underrated by the EA community right now, but it is striking to me that we still don't have that many people from our community who are, say, working on technical AI safety at OpenAI or DeepMind. We don't have that many people from our community who are working as policy advisers at those organizations.
If you're trying to be prepared for a world in which transformative AGI might be developed over the next 15 or 20 years, then that's an unfortunate oversight. I think those roles are hard to get, so it's understandable. But I think we should be making a strong effort for that.
I think roles as research engineers working on AI safety would also be pretty valuable. I think those might not require going and getting a PhD in machine learning. I think that's something that a really good software engineer could think about retooling for and could successfully end up doing. And could have a really big impact.
Rob: You were telling me about someone who recently managed to get into OpenAI, right? As a software developer? Or they were a software developer and then within a couple of months of training, they managed to get a job there as an engineer?
Nick: Right, yeah.
Rob: Yeah.
Nick: So I think there's more room for that kind of thing.
Jason Matheny's career is very striking to me. This is somebody who's been a part of the Effective Altruism community, who's now the director of IARPA, and who has had, I think, some pretty interesting impacts: supporting the Good Judgment Project and the forecasting competition in the intelligence community, which I think has developed some really useful techniques that could be more widely applied in society.
I saw the other day, something in the news talking about a set of questions that is now asked whenever new projects are funded out of IARPA. They include, "What will the effects be if we're working on this and another country acquires this technology? How long will it take? And what will the effects be? And can we prepare defensive countermeasures?" It was a new set of questions that was asked of everything that they're funding, because that was one of his ideas and it was a great idea.
I'm struck by the fact that we don't have more people looking for opportunities like that in the US government. I think we could. I would love it if more people in our community tried to get roles as program managers at IARPA or DARPA and tried to fund some awesome and relevant science along the way, and also develop their networks and be in a place where they could play an important role, if some of the transformative technologies that we're interested in start getting closer to having a big impact on the world.
Rob: What's the deal with Matheny? He was an EA in the early 2000s, long before we came up with the name. Then he set off on this path and has been enormously successful.
Is it just a selection effect that the people who get involved in a new set of ideas very early on tend to be extremely driven? Or has he just gotten very lucky, maybe? Or are we lucky to have someone so talented?
Nick: Yeah. I'm not sure.
Rob: Maybe it's easier than we think.
Nick: I'm not sure what the answer to that is. A story that I'm just making up right now would be: well, there are a lot of things that I think are better explained now. If you were figuring everything out back then and you managed to get to a lot of the crucial considerations on your own, that's a stronger filter.
I could imagine other social considerations and stuff like that. Ultimately I think the interesting thing is, hey, maybe we could try some more of this strategy.
Rob: Yeah. Makes sense.
Nick: I mentioned Tetlock's Good Judgment Project just a moment ago. I'm also struck by the fact that we don't have more people going and becoming PhD students with Tetlock.
I think learning that methodology and deploying it somewhere else, maybe in the government, maybe at an org like Open Philanthropy or something like that, seems like a promising route for somebody to take.
Rob: Do you worry that improving decision-making stuff is just too indirect?
Nick: I mean, you could make that case. But I think you can apply it relatively directly. Suppose you had a view like, "Well, look, what I really care about is AI and bio-security preparedness." You could just take the methodology and apply it to AI and bio-security relevant forecasting. I think that would be pretty valuable for the specific cause in question.
Rob: Yeah. It's really annoying no one's done that. I just want to get, like, forecasting numbers. If anyone's listening and wants to: do that.
So all those paths are pretty competitive, and the EA community is sometimes accused of being elitist.
Nick: Yeah.
Rob: Do you think we are elitist? If so, is it a mistake or not?
Nick: I mean, what is elitism? Maybe we'll need to define our terms first.
Rob: I guess that would be partly my question. Why would someone make that critique? I think, if you see a bunch of people talking about charity and they're talking about these kinds of "out there" ideas, very intellectual... I think we used to be a little bit unnecessarily combative about other approaches to doing good.
Nick: I think that might be part of why somebody might look at what we're doing and say it's elitist. Also, I think it's partly self-reinforcing. So you have a community that has grown up to a significant degree in Oxford and San Francisco and Berkeley.
So there's a bit of a founder effect, and we tend to cater to the people who pay attention to us. So you end up with a lot of people who have technical backgrounds, went to really good schools, and were thinking about the problems that we can best solve.
I think when I look at all the problems that we're most interested in, the paths to impact often do look research driven, egghead-y, technocratic. So I think we shouldn't change our cause selection or views about what is a good solution in response to what is ultimately a social criticism about being elitist.
But I do think... do we have something to learn about how we're talking about what we're doing and taking an attitude of humility towards that? I think maybe.
Rob: Yeah. 'Cause one way that we could end up focusing on these exclusive positions mistakenly would be if our question is: how can one person have the biggest impact? So we look for very difficult roles where that person's incredibly influential. Then we look at causes where one person can have a lot of impact.
But then as a result, we can't really achieve very much scale, 'cause there's only a small fraction of people who are ever qualified to take those roles. If instead we'd asked: what roles, as a whole, given the number of people who are gonna end up taking them, would have the largest social impact?
Then you might say, "Well, here's a position where there are like a million different roles" - something with huge room for more talent, basically.
Nick: Yeah.
Rob: And although each person is less useful, as a whole they'll have a larger impact.
Nick: Yeah. I think it's a legit question. I think your answer to that question might depend on your cause selection, to some degree. This interview, and my personal priorities, are focused more heavily on global catastrophic risks. I think it's harder to think of something that takes a much larger number of people as an input and helps a lot with the future of artificial intelligence.
I think less about some of these other areas. I think it would be in some ways a more interesting question that somebody else might be able to answer better. Like, if we were doing something at huge scale with global poverty.
If you had a Teach for America of global poverty or something, meaning a thing that takes on huge numbers of college graduates and places them in a role where they're doing something effective about the problem, has huge room for more talent. That's an interesting idea. I haven't really thought it through. Maybe there's something there.
Similarly, could you do something like that in animal welfare? I don't know the answer and I would sort of defer to somebody who's more involved in that space. But for AI and bio-security, I think that smaller-community, technocratic, more artisanal kind of job spec is where it seems to make the most sense to be looking.
Rob: Yeah. One reason that you don't want tons of people in those fields, and that it's quite artisanal, is that there's a pretty high risk of somewhat amateurish people causing harm.
Nick: I think that's very true.
Rob: Yeah. Do you think that we're sufficiently worried about people causing harm in their career? I mean, at 80,000 Hours, should we perhaps be advising more or fewer people to go into those fields?
Nick: I don't know. 80,000 Hours seems like roughly in the right place on that to me. I think there's some subtlety to projects we take on in the Effective Altruism community. There's a way you can evaluate your project that makes it perhaps a little too easy to make it seem like you're doing well.
Which would be like, "Well, this is our staff. This is our budget. This is the number of people we helped." And say, "Well, if we value our time at X and blah, blah, blah, blah, then the return on investment is good."
I think many things do take up a slot in the sense that, "Well, there's probably only gonna be one EA community organizer in this city." Or maybe EA community organizing group in the city. Or maybe there's only gonna be one EA career adviser group. Something like that.
We should be mindful of who else might've been in that slot, and what their impact would be. That dynamic can make it easier than it would seem to have a negative impact while doing something that seems pretty good.
Rob: Because you can displace something that would've been better, without ever perceiving it.
Nick: Yeah.
Rob: Yeah. Interesting. I'm curious to know what you think about the debate that was happening earlier this year around epistemic humility.
You wrote this quite popular post on LessWrong around five years ago, where you staked out a pretty strong view on epistemic humility in favor of taking the outside view.
Nick: Right.
Rob: And not taking your own personal intuitions that seriously.
Nick: Yeah.
Rob: It seems that there's been some controversy about that. People pushing back, saying that it doesn't generalize very well. What do you reckon?
Nick: I mean, the motivation for that view is like... in philosophy they have conciliatory views about peer disagreement, where if there's a disagreement, in some sense your view should be an impartial combination of the views of the other people who have thought about the question seriously, maybe with some weighting based on how likely you are to be right versus them in disagreements of this kind.
That post was kind of about the macro consequences: if you took that seriously, what would it imply? Another framing that motivates it would be, "Well, we could think of all of us as thermometers of truth, in some sense." You throw questions at us and we get a belief about them. We don't totally know how good our thermometers are. I have one and you have one and everybody else has one. We're like, "What should we do when these thermometers give us different answers?"
The view I defended was kind of like, "Yeah, your view should be some impartial combination of the outputs - what you think the other thermometers would've said about this question - where you're weighting by factors that you would think would indicate reliability." I think I basically still hold that.
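To make the "thermometers of truth" picture concrete, here is a minimal illustrative sketch (not from the conversation) of one simple way to form a reliability-weighted combination of several people's probability estimates. The function name, example numbers, and weights are all hypothetical, and a plain weighted average is just one possible aggregation rule, not necessarily the one Beckstead has in mind.

```python
# Illustrative sketch only: a toy model of the "thermometers of truth" picture.
# Combine several people's probability estimates, weighting each by how
# reliable you judge that person to be. All numbers below are hypothetical.

def combine_estimates(estimates, weights):
    """Weighted average of probability estimates.

    estimates: probabilities in [0, 1], one per "thermometer"
    weights:   non-negative reliability weights, same length as estimates
    """
    total = sum(weights)
    if total == 0:
        raise ValueError("At least one weight must be positive.")
    return sum(e * w for e, w in zip(estimates, weights)) / total


# Hypothetical example: my estimate plus two peers', with the peers judged
# somewhat more reliable on questions of this kind.
estimates = [0.9, 0.4, 0.5]
weights = [1.0, 2.0, 2.0]
print(combine_estimates(estimates, weights))  # 0.54
```

Averaging log-odds rather than raw probabilities is another common aggregation choice and can behave quite differently for extreme estimates.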
But there was a secondary recommendation in that post that was like, "Why don't you test how well you're doing, by explaining your view to other people and seeing how much of it they buy?" If you're only getting like some weird niche of people to agree with you, then maybe you should think twice about whether you're on the right track.
In some sense, I think that's still a useful activity. But I think these cases where you can only convince some weird niche of people that you're on the right track, but you're still right, are more common than I used to think. I think some of the remarks I made earlier about the structure of intellectual fields are an input to my views about how that can happen.
I guess I'm walking back from treating that test as very decisive about how likely it is that you're on the right track. I don't know. Eliezer Yudkowsky wrote a nice book about this, and there's a part of it that I really agree with, which is that you might not be able to know - it might be very hard to tell who's being irrational.

Somebody else might have some arguments that you can't really follow, and they might in fact be right, even if you meet up and talk about whether that's the case and, for all the world, it seems like one side is being more reasonable.
So I don't think I have a lot of super practical advice, but I do feel like it's possible to go too far in that direction. Another thought is that it may also be important for personal development not to be overly modest about developing your own opinions about how things work.
I think if you don't develop your own opinions about how things work and see them tested and refined and figure out how much to trust yourself, then it becomes hard to do things that are important and innovative and be one of the first ones to arrive at an important insight.
I said something about that in that post, but I think if I was writing it again today, I would emphasize it more.