Following a rigorous selection process, we are pleased to present the following five-minute EAGxVirtual 2020 lightning talks:
We've lightly edited the talks for clarity. You can also watch them on YouTube and discuss them on the EA Forum.
Moderator: Hi, everyone. Welcome to EAGxVirtual 2020 lightning talks. We're about to watch 12 videos submitted by attendees just like you, on topics ranging from biosecurity, to cultivated meat, to donation pledges, to imposter syndrome. Some of our speakers represent EA organizations. Others are individuals presenting their own thoughts and experiences, or research that they've done in their spare time.
Their work demonstrates the depth, scope, thoughtfulness, and commitment to action that the EA community brings to the challenge of doing good better. We're proud to share these talks with you today.
Hi. My name is Cecilia Tilli, and I am the Secretary General of the Foundation to Prevent Antibiotic Resistance. I will speak briefly about antimicrobial resistance and how this could be a cause that joins global health and development with longtermism and biosecurity.
First, some quick background information on antibiotic resistance: This is what happens when we use antibiotics against bacterial infections, and the bacteria evolve so that the treatment becomes ineffective. Today, this is estimated to cause about 700,000 deaths annually, and it's expected to cause about 10 million deaths annually by 2050.
Let’s take a look at the drivers of antibiotic resistance. The problem is that some infections in humans are resistant to antibiotics. Perhaps the most well-known driver of this is the antibiotic treatment of viral infections. This is considered a misuse of antibiotics because antibiotics do not work against viruses. So you get all of the side effects, but none of the benefits of the treatment. Also, it drives antibiotic resistance. This happens because prescribers are not following guidelines. It can also happen for many other reasons — for example, because of a lack of diagnostics, because antibiotics are sold without prescriptions or at patients’ [request], or because of other drivers linked to poverty.
A different driver is therapeutic, justified antibiotic treatment. Correct treatment also drives resistance, [mainly due to] both the lack of alternatives and the amount of infections. The drivers for infections, in turn, include factors such as urbanization, travel, hand hygiene, and weak immune systems. They also include factors such as suboptimal policies and research allocation and a lack of health education —which again, can be linked to poverty.
Then there is a group of drivers related to the agricultural use of antibiotics, and another related to the transmission of resistant infections in healthcare settings.
If we map all of the drivers, it looks like this:
This is obviously a very complex image. If we want to prevent antibiotic resistance in an effective way, we need to analyze the situation and [strategically target the most influential] drivers.
In the past 10 or 20 years, interest in, and awareness of, antibiotic resistance have grown, and many more resources are now allocated to it. But my impression is that quite a lot of these resources are still not allocated in the best way. For example, resources are spent on information campaigns that lack a solid evidence base. Also, a lot of interventions are done without properly measuring the effects and following up, so that we know if [potential solutions are useful and worth expanding] to other places.
I think this is an opportunity for EAs. We have the potential to improve the allocation of resources already dedicated to this area. I also think that this area interests EAs because it’s a down-to-earth way of introducing longtermism in a global health setting. Antimicrobial resistance is something that people who work in global health are familiar with, and it has clear consequences today that are especially evident in poor countries. Also, it’s a cause that links to [questions around] the healthcare we can expect to have in 100 or 200 years. In this sense, it's an interesting field that [doesn’t present global health and longtermism as opposing causes]. Instead, they go together.
I also think there is a clear overlap with biosecurity and biorisk work, both in terms of policy interventions and preventing the spread of infectious disease. This could be a promising field for career entry, because there are positions at many levels in research, government, and NGOs. I think the experience and network you can build in antimicrobial resistance are valuable for many different paths in research, policy, global health, and biorisk.
Thank you.
Hi, I'm Noga. I'm a master's student in systems biology in Israel and I interned at the Johns Hopkins Center for Health Security two years ago. In this talk, I'll explain which paths can lead to an effective career in biosecurity.
There are three main disciplines that come together in biosecurity:
However, my sense is that the most-needed people right now are policymakers with a PhD in biology. Let me explain why.
Pandemic preparedness and prevention currently sits within epidemiology, and epidemiologists already think somewhat like effective altruists. So entering this area as an effective altruist won't really give you a competitive advantage.
However, people who completed a recent PhD and have expertise in emerging biotechnologies have [a perspective] that is more rare in the field — and that is especially useful for issues that EA focuses on, like engineered pandemics.
Policymaking is generally a superior path, because technical biosecurity work is almost always funded by governments or other nonprofits; as a result, the field isn't constrained by technical skill so much as by funding decisions. Some EAs have asked me in the past about becoming academics in the field. You have social influence, that is for sure. But academics are funded by grantmakers — so overall, I think it's advisable to become a grantmaker, or part of a body that advises grantmakers, if you're entering the field.
The Center for Health Security essentially is [the latter, a body that advises grantmakers]. When I was there, people told me many times that in order to become a valued expert in the field, you need to have a terminal degree: an MD, a JD, or a PhD. If you're not sure whether you can commit to such a long path, the bright side is that internships are a common starting point. However, biology professors are often wary of people with interests in biosecurity, because they see them, sometimes correctly, as anti-science and anti-technology. So be careful about how you present yourself and your experiences if you're planning to pursue a PhD after an internship.
If you're not committed to a biology PhD but do want to enter the field, another common path is to start a fellowship and then get a low-level role in the same [organization]. You can advance there while concurrently getting any additional degrees at the institution where you work.
In terms of the job itself, most jobs are in the US. Having American citizenship really helps in that case, because many jobs also require security clearance. Also, making connections is really important — not only because the field is small, but also because it’s full of collaborations for carrying out productive projects.
The list of positions on the 80,000 Hours job board is pretty representative. In addition, there are some projects in the EA community that individuals [conduct], sometimes remotely. So, that's another option.
If you want to get involved, get in touch with me or Greg Lewis, who's kind of the community’s gatekeeper.
Finally, I'd like to say that although many EAs are starting to direct their trajectory into this field, there's still a lot of work to be done and expertise to be had, especially during and after this current pandemic.
Hello. Thank you for joining my talk. My name is Jennifer, and I'm the Research and Projects Strategist at Fish Welfare Initiative.
Fifty years ago, the first major campaigns in the animal movement were launched. We started with the most relatable animals, such as primates and rabbits being experimented on. Later, we graduated to farmed animals, initially focusing on cows and pigs.
Now our movement's primary focus is chickens, largely because of the immense number of chickens farmed. This movement is about the expansion of our moral circle, and we're on the verge of the next frontier: fish.
Since no pitch for a good cause would be complete without it, I will quickly run through the [EA case] for fish welfare:
The scale of the issue: Roughly 111 billion fish are alive on farms at any given point. That's more than twice the number of chickens and over 100 times the number of pigs.
The degree of suffering: There's general scientific consensus that fish can feel pain. Yet conditions on fish farms are often alarming. Fish are crowded into monotonous tanks with dubious water quality, suffer from diseases and parasites, and often have no opportunity to express their natural behavior. They're generally treated like a commodity without any moral worth.
We took this picture in Vietnam. It shows pangasius catfish being transported to a slaughterhouse. Fish are stored like this in small buckets, fully conscious, for 10 to 20 minutes. And that's to say nothing of the trillions of wild fish killed annually.
Despite these immense numbers, fish farming is still a largely neglected [problem]. As we speak, only two EA-aligned organizations focus primarily on fish: the Aquatic Life Institute and our organization, Fish Welfare Initiative.
The issue’s tractability: We hope our work can demonstrate the tractability of fish welfare work. We’ve already seen some preliminary successes from other organizations. For example, the RSPCA and Open Cages successfully pushed for a ban on live fish sales in Eastern European countries.
Our work happens in stages:
For the next few minutes, I want to share some of our findings with you.
Fish species and farming systems are extremely diverse. Consider all of the species and farming systems the animal movement has worked with already — from pigs in gestation crates, to chickens in battery cages. Fish are not just one more species. They represent about 370 different species currently farmed by humans, and each of these has very specific welfare needs.
Farming systems are just as diverse. On the left side of the slide, you see the intensive agriculture system; every parameter is controlled and monitored. On the right side, you see [the system for] fish, which are simply left in a pond and given little attention.
Expertise [isn’t mandatory] in this field. Fish Welfare Initiative was founded last year, and our co-founders didn't have prior expertise in the field. Rather, like many of you, they were generalists with a strong drive to do good. So you don't need expertise to get off the ground.
But it sure does help. We learned this lesson when we hired a PhD as our fish welfare specialist. Our research process is dramatically better now.
Even “perfect,” fully developed plans always have unpredictable elements. This is another crucial lesson that you can surely relate to. For example, COVID-19 threw our plans overboard. The farm visits we were doing are now virtually impossible. During this forced break, we've pivoted to working more on building the fish movement and influencing institutions such as seafood certification schemes. We hope that these activities have a similar level of impact as our initial plans.
So how can you help? We would love to hear from you about any high-leverage opportunities, such as public consultation periods for certification schemes. Also, please circulate our job ads and consider volunteering or being an intern. We'd love to help anyone by connecting them with our network of people who work on fish all over the world.
Thank you for listening and for being part of a movement to expand our moral circle. Enjoy the rest of the conference.
Hello, I'm Sid. I'm one of the co-founders of Generation Pledge. Generation Pledge exists because we have a problem. Actually, we have myriad problems — and they're compounded by the fact that we don't have enough resources to solve them.
Even if we were only to achieve the Sustainable Development Goals set by the United Nations, we would still have an estimated funding gap of $2.5 trillion annually, and that doesn't even include some really crucial cause areas like catastrophic risk prevention or animal welfare.
Fortunately, though, there is an opportunity [to address this shortfall].
There are over 275,000 “ultra high net wealth” individuals (people with more than $30 million in assets). Collectively, this group owns over $33 trillion.
But unfortunately, this money isn't being used to create the transformations that we'd like to see.
In 2018, out of this pool of money, only $153 billion was donated. That's less than 0.5%, and it's being deployed in ways that we wouldn't consider the most effective.
Most people say that they are donating to “education,” which often [refers to] donors’ alma maters. Other [funds from this pool] are going toward what donors categorize as “healthcare,” by which they mean first-world healthcare causes or specific research that isn't as impactful, global, or longtermist as it could be.
This simply isn’t good enough, because this is such a tremendous opportunity that we're not capitalizing on. This is what we decided to try to shift.
My co-founder and I have unique access to this world because we ourselves are inheritors. Our parents have a lot of wealth that they either generated or inherited themselves. As inheritors, we had the opportunity to work at the intersection between effective impact and ultra high net wealth families. While there, we noticed that there wasn't yet an organization bringing inheritors together to talk about how to use our unique [situation] and the assets that we're set to inherit for effective impact — and to create an educational process for doing that.
That's why we decided to create Generation Pledge. In many ways, it is Founders Pledge for inheritors, in that we make two commitments:
We're just getting started. We had the idea two years ago and have been working tirelessly since. Right now, in June 2020, we're launching our very first cohort of pledgers.
Around 40 people have already committed to the pledge, and based on publicly available numbers, we estimate that this represents around $750 million in pledge dollars, spanning 11 countries. We're just getting started and we're very excited about the possibilities with Generation Pledge.
We could use help [as we continue to grow]. There are three things that you could help us with:
Thank you for being here at EA Global, and thank you for watching this.
Let's say you believe that the well-being and flourishing of people who exist in the future matters just as much as it does for people who are alive today. Let's also say you're lucky enough to have $1 billion burning a hole in your pocket. What are you going to do with it?
Maybe you try to reduce the chance of existential catastrophe — something that wipes humanity out or permanently destroys our potential. But is it possible that existential risk (or x-risk) is low enough already — or so hard to reduce — that you should instead find some other way to improve the world? And even if you do focus on x-risks, which ones [should you choose] — nuclear war, biorisk, AI risk? Which particular interventions are best? Remember, you only have $1 billion. You can't do it all.
These sorts of prioritization questions are core to EA, and are just as relevant if you happen to not find yourself with $1 billion to burn — for example, if you're deciding on a career path, or you want to donate a few thousand dollars per year.
There are a lot of factors that go into answering these questions, including personal fit and moral beliefs. But one pivotal set of factors is how likely various existential catastrophes are and how much various actions could change those likelihoods.
Unfortunately, those are really, really hard things to estimate. In fact, it's even hard to know how much you should update your beliefs based on the estimates experts have come up with. Here are some reasons why:
Given that litany of challenges, what should we do?
One option would be to give up on having beliefs about which risk is more likely than others, which interventions are likely to reduce risk by more, et cetera. This is a really, really terrible option. Those likelihoods clearly make a difference regarding which risks and interventions we should focus on, and this option makes it impossible to prioritize.
A second option would be to entirely ignore experts' beliefs and be guided only by our own beliefs. This is a merely terrible option, rather than a really, really terrible one. At least we’re heading in the right direction. One problem with this option is that ignoring experts doesn't tend to be a great move in general. Another problem is that most of the challenges with x-risk estimates noted above also apply to one's own beliefs.
A third option would be just to avoid explicit quantitative estimates — and instead form more qualitative beliefs and pay attention to qualitative statements like “There is only a remote possibility of extinction due to nuclear war this century” or “X-risk from AI is more likely than x-risk from gamma ray bursts.” This option isn't terrible, but it's still not the best we can do. One reason is that these qualitative judgments about x-risk still suffer from many of the challenges noted above.
Another reason is that using qualitative beliefs or statements can lead to major issues when trying to make decisions, or when two people are trying to work out precisely what each means and whether they agree or not. For example, precisely how low a chance is a “remote possibility”? Is it a one-in-a-billion chance we can safely ignore or a one-in-100 chance that might still warrant a lot of attention, given the stakes? And how much higher is x-risk from AI than x-risk from gamma ray bursts — two times higher, 10 times higher, a million times higher?
So what do I actually suggest we do? What mystical, wonderful fourth option have I saved until now? Well, EAGxVirtual attendee, I'm very glad you asked. The starting point is for people to come up with specific quantities for x-risk estimates, and then for other people to pay attention to these estimates and update their beliefs based on them. That's roughly what many EAs have been doing so far, which is a good thing. For example, in 2008, some researchers provided estimates of the chance of various human extinction events by 2100, such as from nuclear war or natural pandemics. These estimates are referenced often in EA.
But what if a particular set of estimates isn't very reliable, or doesn't reflect the more typical or recent views of experts? After all, this was just an informal survey of around 15 participants from 12 years ago, and it faces many of the challenges noted above. And what if, as a community, we're anchoring too strongly on these particular estimates, rather than being influenced by a broader set of views or forming our own independent views? And what if we wanted to know about things that weren't estimated there, such as the chance of permanent dystopia rather than extinction?
To address these concerns, I've built a database of every estimate I could find of x-risk or similar things, and people can add estimates that they've found or made themselves. I'd really encourage you to check out the database; go to bit.ly/x-risk to do so. I'd also encourage you to help build up this resource by adding any relevant estimates you come across.
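To give a sense of how such a collection of estimates could be used — this is just an illustrative sketch, and the record fields and numbers below are hypothetical rather than entries from the actual database — aggregating many estimates per risk lets you look at the spread of views rather than anchoring on a single number:

```python
# Hypothetical sketch of aggregating a database of x-risk estimates like the
# one described above. All fields and numbers are illustrative placeholders.
from collections import defaultdict
from statistics import median

estimates = [
    {"risk": "nuclear war",         "source": "Survey A (2008)", "p_by_2100": 0.010},
    {"risk": "nuclear war",         "source": "Researcher B",    "p_by_2100": 0.005},
    {"risk": "engineered pandemic", "source": "Survey A (2008)", "p_by_2100": 0.020},
    {"risk": "engineered pandemic", "source": "Researcher C",    "p_by_2100": 0.030},
]

# Group estimates by risk so we can summarize the distribution of views.
by_risk = defaultdict(list)
for e in estimates:
    by_risk[e["risk"]].append(e["p_by_2100"])

for risk, ps in by_risk.items():
    print(f"{risk}: n={len(ps)}, min={min(ps)}, median={median(ps)}, max={max(ps)}")
```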
My hope is that this collaboratively built database will allow us to form a much more complete picture of the distributions of views on these matters — both the points of consensus that do exist among experts and the vast uncertainties and disagreements that remain. I hope that this, in turn, improves the high-stakes prioritization decisions we're continually forced to make as people trying to do good better in a world of many problems and limited resources. Thank you.
Hi. I'm Jessica, and I currently run Yale Effective Altruism. This summer, I'm co-organizing a virtual fellowship with organizers from McGill and Northeastern. In this talk, I want to make the case for more cross-university mentorship and collaboration.
For those who are unfamiliar with EA fellowships, they’ve [been available] at a few universities for several years, and are also offered in some city groups now. There are both introductory and more in-depth fellowships, but mine is an introductory one, so I'll focus on that. The goal for introductory fellowships is to ramp up EA knowledge for a cohort of people.
Many schools have tailored their fellowships differently, but in ours, we select 15 people who are new to EA from a pool of applicants. Then, fellows meet for around nine weeks and have discussions each week covering different cause areas, methods of thinking, and careers. There are also workshops and mentoring opportunities. Basically, these fellowships aim to provide a really high-fidelity model of what EA is and onboard new members.
Usually, these fellowships are geographically bound. However, since we are virtual now, some schools have been moving theirs online and opening them up to other schools. So I’d like to give a huge shout-out to Stanford and Oxford for doing this first. I think it’s awesome, because it gives students who are in places without fellowships the opportunity to participate. Also, it adds a lot of diversity to the cohort, which makes for a much more enriching experience.
Catherine Low from the Centre for Effective Altruism gave me the idea for the one I'm currently running. It's the Yale Effective Altruism Fellowship moved to a virtual setting, but co-organized by people from other schools with new university groups. These organizers would really like to start fellowships at their respective schools, but they either felt that their group wasn't large enough yet to organize one, or they just didn't know enough about organizing a fellowship. I think this is super-common. Thanks to different schools (and I’ll give a huge shout-out to Harvard), there are a lot of public resources on how to run fellowships. However, it can still be intimidating, especially if you're the only organizer, or one of a few organizers, in a group at a school that doesn’t have EA fellowships.
In working with these organizers, I had two main goals:
I’ve had help in running this fellowship program. Coleen, Anna, and Thomas, who are my co-organizers, have been a huge help in advertising, interviewing, preparing, and running discussions. It seems like a huge win-win. In terms of advertising, 79 people applied. That's really awesome.
Additionally, the overhead needed to expand to different groups is not terribly high, especially if you already have public resources to help run your fellowship or project. The most complicated things were copying my resources outside of my Google Suite, doing some edits in advertising, and making a new Mailchimp account. Other than that, things were the same fellowship procedures I would be doing anyway. So it's low overhead to add other group organizers, which is great.
This form of collaboration and mentorship is not limited to fellowships or university groups. This method could be expanded to various group projects. More established EA groups have a wealth of resources to share with new groups, and thanks to initiatives like the EA Hub, these are pretty well-documented. However, I still think that the best way to learn how to run something is to actually run it yourself — and that's a lot easier when you're doing it with someone who has done it before.
Overall, cross-group coordination in this [context]:
We're still at the start of our fellowship program, but so far it's been going well. I have high hopes. Thank you.
This talk is about the real-life experience of running media campaigns to try to prevent COVID-19 deaths in Sub-Saharan Africa, and about answering the question: Will it actually do any good?
Let me first introduce the organization that I run. It’s called Development Media International. We’re about as data-driven as you get for a media organization. Everybody knows that radio and television reach a lot of people, but do they actually change behaviors?
Well, there was no proof — not in the entire advertising world or the entire epidemiological world — because either no one had done the randomized controlled trials, or they hadn't worked. So we did these trials, and they did work. They showed a 56% increase in malaria treatment, a 9.7% reduction in child deaths, and a cost per life saved in the $200-$800 range. Also, we ran a second trial showing a 20% increase in contraceptive use.
Everything was going really well. We had the evidence, and then we were able to scale it. By January 2020, we were campaigning in nine countries in Sub-Saharan Africa.
Then COVID hit, and we were suddenly plunged into emergency mode. We were all working from home. Some of us actually caught the disease. But we knew that this was a disease which was heavily dependent on people's behaviors. We knew that in order to run campaigns you had to have good relationships with governments and with the media, and we had those in nine countries. We also knew that the last thing to go down, even in Armageddon, would be the radio stations.
At the same time, we were being told by Imperial College London that unless something was done, 2.5 million people were going to die in Africa. So at an emotional level, it felt like something we just had to do. So we went for it. We started producing radio and television spots before we had any money. We paid for it out of reserves. We got on air very quickly in six of our nine countries. Then the money did come in from GiveWell, from the Skoll Foundation, from the Mulago Foundation, from The Life You Can Save, and from some generous private individuals. We received about $1.3 million. We need more, but that amount has carried us through until now.
It has been a strange experience. Usually we deal with health promotion departments and ministries of health. Suddenly we were dealing with the prime minister's office, and with very firm directives being issued to us. We're used to dealing with malaria campaigns where there's 40 years of scientific evidence on what works and what doesn't. Here we're in completely uncharted territory — where no one, not even the World Health Organization (WHO) or the Africa Centres for Disease Control and Prevention know some of the answers. It is a completely different way of working.
Will it work? This is the big question. It all depends on what happens in Africa, and here the epidemiologists’ [estimates] are all over the place. On one side, you have Imperial College London saying that there will be 2.5 million deaths by the end of the year; a more recent study by the London School of Hygiene and Tropical Medicine has figures even in excess of that. On the other side, you have WHO Africa, which thinks [the death toll] will be around 150,000. These are big differences. So we modeled our own cost-effectiveness using the Imperial College figures.
Assuming we could increase social distancing by about 10%, we calculated that we'd save about 40,000 lives in the nine countries [where we operate], and that would work out to about $50 per life saved. Those are incredible numbers. We've never seen those before, but they reflect the vast numbers of lives at risk and the fact that the disease and the campaigns really do affect everyone. When using the smaller WHO numbers, our cost-effectiveness would be about a twentieth of that, so around $1,000 per life saved, but that still seems like a pretty good value in my book.
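As a rough sketch of the arithmetic behind those figures — note that the implied campaign budget below is backed out from the stated numbers (40,000 lives at roughly $50 per life), not a figure given in the talk:

```python
# Minimal sketch of the cost-effectiveness arithmetic described above.
# The budget is an assumption inferred from the talk's figures, not a stated number.
budget = 40_000 * 50                    # ~ $2 million implied campaign spend

# Imperial College scenario: a ~10% increase in social distancing
# averts roughly 40,000 deaths across the nine countries.
lives_saved_imperial = 40_000
print(budget / lives_saved_imperial)    # ~ $50 per life saved

# WHO Africa scenario: roughly a twentieth of the projected deaths,
# so (proportionally) roughly a twentieth of the lives saved.
lives_saved_who = lives_saved_imperial / 20
print(budget / lives_saved_who)         # ~ $1,000 per life saved
```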
However, at the time of [this conference], the pandemic really hasn't taken off at all [in Africa]. There have been 53 deaths in Burkina Faso, but four in Malawi, two in Mozambique, and none at all in Uganda. We don't know why. It could be because the median age is so young (it's around 9.7 years across Sub-Saharan Africa). It could be because older people live in more rural areas and the disease hasn't found them yet. It could be that the pandemic has been delayed because the continent took very stringent measures far earlier in the epidemic than their counterparts in Europe managed to do. But we really don't know. So we face the unusual prospect that if this carries on, we'll save no lives at all.
That would be a bizarre result for an organization that prides itself almost entirely on its impact. But of course, we'd infinitely prefer that outcome. We can then concentrate on the secondary effects of the pandemic. It's important to remember that more people died of measles and malaria during the Ebola epidemic than died of Ebola itself.
We shall see. These are interesting times.
Hello everyone. I am Juan García, research associate at the Alliance to Feed the Earth in Disasters, also known as ALLFED. I'm going to tell you about a project on microbial protein as food during global food catastrophes.
Current food security research concentrates on addressing factors such as population increase, resource scarcity, resource depletion, and climate change. While these topics are important, there's one that is severely neglected: the occurrence of strong food shocks. What I mean by this is that human civilization's food system is unprepared for catastrophes that would reduce food production by 10% or more.
Some estimates predict that there is around an 80% chance of a food shock that reduces global food production by about 10%, and up to a 10% chance of total food production loss — both within this century. The most extreme food shocks that could potentially affect humanity in the near future are [related to] global catastrophes. They entail almost complete food production loss for humanity by making conventional agriculture unfeasible [across the globe] for many years. This could be a product of a super-volcanic eruption, an asteroid or comet impact, or a large-scale nuclear war. These are categorized as “global catastrophic risks,” or GCRs for short. In this situation, the loss of food production would most likely kill far more people than the catastrophic event itself.
Instead of giving up in the face of this fact, we at ALLFED study potential food solutions that could help in such events. We call these solutions “alternative foods.” They have been proposed as a more cost-effective solution than increasing food stockpiles, given the astronomical cost of storing enough food to feed humanity through a 5-to-10-year nuclear winter.
The alternative food solutions we study have to fulfill one or both of two conditions:
The latest alternative food my colleagues and I have studied is microbial protein from hydrogen, scientifically known as “single-cell protein,” or SCP for short. This is a very protein-rich product with an excellent amino acid profile, similar to or better than that of meat, obtained from hydrogen-oxidizing bacteria grown in bioreactors. These bacteria take in hydrogen and carbon dioxide, among other things, to produce a high-quality food, which can be made in the complete absence of sunlight simply by using electricity, biomass, or fuels. This SCP could potentially be used as an ingredient in foods such as bread, pasta, plant-based meat, and dairy, and act as a protein supplement similar to whey protein shakes.
We started with the production process, focusing on identifying potential bottlenecks to the large-scale deployment of this technology during a catastrophe. For this, we reviewed two different production options:
(Gasification refers to converting feedstock into a gas by heating it to very high temperatures without oxygen, so that pyrolysis occurs.)
When we estimated the cost of building the factories and the energy consumption required to feed everyone on Earth, we found this technology to be slower to ramp up than other potential food solutions, such as seaweed and cellulosic sugar. But the excellent nutritional content of single-cell protein makes it a very interesting option for fulfilling the protein requirements of humanity during a catastrophe.
Specifically, we estimated that redirecting the construction budget of the chemical industry and related sectors to production of single-cell protein could fulfill between 3% and 10% of humanity's protein needs in the first year after a catastrophic event, and much more in subsequent years. We also found that the industry-standard production method, based on water electrolysis, would be severely limited by the availability of noble metals, such as platinum, and by its high electricity use, which makes the gasification option more promising for use during a catastrophe.
Experience [we’ve gained] from the COVID-19 pandemic has made us change some of our assumptions, such as expecting a faster response to a crisis or more funding for interventions. If you're interested in this topic, I suggest staying in touch with us, because we are in the process of releasing a scientific paper to publish our findings.
If you would like to know more about alternative foods and disaster preparedness, go to allfed.info for a lot of resources. We are always looking for volunteers in many different disciplines. Thanks for listening.
Hi, my name is Jack Lewars. I'm the Executive Director at One for the World, a movement of people trying to change charitable giving to end extreme poverty. We ask people to give 1% of their income to GiveWell's recommended charities for life, and then we make it really easy for them to choose their charities, set up a donation online, and give us permission to collect that money each month and move it to the charities that they have chosen. Right now, we're operating in the US, Canada, and Australia, and we're about to add the UK and New Zealand. However, if you don't live in one of those countries and you'd like to get involved, we are interested in other markets. So do please get in touch and we'd love to have a conversation with you.
The main way that we sign people up to our pledge is through groups of volunteers that we call chapters. They usually exist in universities or in workplaces, and we give them a suite of resources, as well as training and mentoring, to do two things. First, we educate people about effective giving. Then, if people have bought into the idea that data-driven reasoning is a great way to think about giving and believe that they can have a huge impact with a tiny donation to very cost-effective charities, we ask our chapters to pitch the pledge and send people to the online portal to sign up.
You might be thinking that this sounds quite similar to Giving What We Can. What's the difference? Many of our staff are Giving What We Can members, and it's amazing that people would give 10% of their income to effective charities. However, asking people to donate 10% is quite a high bar to entry. We think that there are thousands of people out there who would like to be involved in effective giving and could give an amount that would make a big difference — but who we think will be put off if we go to them and ask them to donate 10% straightaway.
We also think that if we get people involved at an affordable entry point, particularly when they're still quite young, they'll have a long lifetime value. There's every chance that we can educate them to give more over time and provide opportunities to get more involved in the movement. But we are keen not to put them off in our first encounter, and we think that a 10% pledge might do that with people who could [otherwise] be valuable in the long term.
So really, we're trying to be the mass entry point into this movement with a very affordable price point that lets anybody contribute. We’re confident that as people see the amount of impact that they're having over time, they will get more involved.
A second big difference is that we have developed a piece of donation software with a technology partner. We use it to process donations. This has two big advantages:
So how can you get involved? We are desperate to start more chapters. The amount of money that One for the World moves is basically a function of how many chapters we have. A chapter, on average, raises about $15,000 in annual pledges each year, and we want to be moving millions of dollars, so we need hundreds of chapters. If you are a student or a young professional and you would like to set up a chapter with a group of fellow volunteers, persuade people that effective giving is a great thing to be part of, and give them the opportunity to be part of a pledge, we would love to hear from you.
What do you get in return? We give you training and mentoring. We give you a suite of resources that will help you persuade people using the messages that we've found to be most effective. And we give you a dedicated chapter manager who will mentor you throughout the year and help you to be as effective as possible.
Why is this a good thing for you? It's an amazing opportunity to learn new skills and to demonstrate those skills and behaviors in your career. Whether you're a student or a young professional, employers will definitely be impressed later in life if you can say, “I raised $100,000 in annual pledges in a year through my volunteering with this charity.” Many of our chapter leads have gone on to do great things using the exact skills and behaviors that they developed at One for the World.
It's also an excellent chance to have a demonstrable impact, because you can see every day how much you're raising. That is deeply personally satisfying. So if this is of interest to you, we would love to hear from you. And if you're interested in taking the pledge, that is a great first step and can be done on our website. Thanks very much.
Hi, I'm Frankie. I work at the Forethought Foundation as William MacAskill's assistant. Before that I worked at the Future of Humanity Institute as Nick Bostrom's assistant, and during university, I was co-president of the Yale Effective Altruism student group.
Today I'm going to talk about imposter syndrome and how it affects members of the EA community. Imposter syndrome is a psychological pattern in which a person doubts their accomplishments and has a persistent internalized fear of being exposed as a fraud. It's the experience of believing you're not as competent as others perceive you to be. I've known a lot of people in EA who struggle with this. These are some of the most intelligent and competent people I know. It seems particularly prevalent among women and ethnic minorities. So if we care about diversity in EA, that’s even more reason for us to work on this.
Everyone's experience of imposter syndrome is different, but I'll illustrate some of the ways it can express itself, with real examples from my friends and colleagues and my personal experience.
If you experience imposter syndrome, you might dismiss yourself as underqualified for roles when choosing which jobs to apply for. When I was a senior in college, I had a lot of anxiety around applying to jobs. I made a spreadsheet, as many EAs do. I browsed the 80,000 Hours job board and added around 20 roles that I thought I might plausibly be a good fit for. Then I watched as each deadline approached and I rationalized, one by one, all of the reasons why I wouldn't be a good fit for that role: I didn't have enough experience, I didn't have those particular skills, the role was too competitive and I'd just waste people’s time by increasing the pool of applicants. In the end, I only got up the courage to apply to one job which I felt was a particularly good match with my qualifications and experience.
One friend, who has done research for several core EA organizations, told me she has dropped out of late-stage job applications for reasons related to [feeling like an] imposter. This is somebody who has worked with prominent EA researchers.
If you have imposter syndrome, you might also imagine that others know more about a topic than you, or that you don't know enough to talk or write about a topic in public. I know some people who are relative experts in their fields, but don't feel like they're qualified to post on the EA Forum. Even Will MacAskill, one of the founders of the EA community, said that after his first real post on the EA Forum, he had an anxiety dream every evening imagining people talking and saying, “We’ve lost all respect for you after you wrote that post.”
If you experience imposter syndrome, you might discount positive evidence about your skills. One friend of mine [described to me] how when someone praises a particular piece of work or a trait she has, or if she does well on a test, she'll find ways to discount it. For example, she’ll think, “They're my friend, they have to say that”, or “Well, that was an easy test.” But when she gets negative feedback, she gives that evidence outsized weight.
There are many other ways that imposter syndrome might express itself. I've heard from friends who have experienced feeling like they must do things on their own and never ask for help. Others have described having an outsized fear of making mistakes, trying new things, or being found out — and overworking to keep up the façade. All of this can affect our mental health and lead to burnout.
Of course, [on top of] the personal toll this can take, it reduces our capacity for impact as a movement. So here are some suggestions for those of you watching this talk:
Thanks for listening, and enjoy the rest of the conference.
Hi, everyone. Today, I'm going to talk about the pitfalls of Bayesian reasoning.
Bayesian reasoning is a way of updating our beliefs in response to new evidence. For example, suppose we wanted to know whether a coin is biased or not. We'd probably start with what is called a “prior belief.” This is the belief that we have before obtaining evidence about the question that we're considering.
In this case, we'd probably start by believing that the coin lands on heads 50% of the time. Then we would collect new evidence by tossing the coin several times and updating our prior belief with each toss. [Ultimately], we’d form a new belief — a “posterior” — which incorporates the new evidence.
This describes, in a simple example, the idea of Bayesian reasoning, which is applicable in many such scenarios. But I would argue it’s most applicable when we have well-defined probabilities in a well-characterized situation — [in other words] if we know how to use the relevant formulae in performing the calculations to tell us how to update our beliefs.
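To make the coin example concrete, here is a minimal sketch of one standard way to perform this kind of update (a conjugate Beta-Binomial model); the prior parameters and toss counts are illustrative assumptions, not figures from the talk:

```python
# Minimal sketch of the coin example: a Beta prior over the coin's heads
# probability, updated with observed tosses. All numbers are illustrative.
alpha, beta = 10, 10          # prior roughly centered on 50% heads

heads, tails = 14, 6          # hypothetical evidence from 20 tosses

# Conjugate update: the posterior is Beta(alpha + heads, beta + tails).
alpha_post, beta_post = alpha + heads, beta + tails
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"Posterior mean P(heads) = {posterior_mean:.2f}")   # ~ 0.60
```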
However, Bayesian reasoning is also widely used, especially in the EA community, to address very difficult and abstract questions. My argument today is that Bayesian reasoning applied in these contexts suffers from a number of pitfalls that are often overlooked:
The first pitfall [I’ll cover involves] problems with priors. When people talk about their prior belief [regarding] this or that question, it's often not very clear what their belief is prior to — that is, what evidence has, and has not, been incorporated into that belief. This can lead to evidence being double-counted. If, when forming our prior, we've implicitly incorporated beliefs about how the world is (which we didn't realize we had), and then we condition on the same evidence again later, we’re basically double-counting it. Also, I think that this can lead to conversations becoming distracted by a detailed analysis of which prior is best to use, rather than actually focusing on an object-level discussion of the issue in question.
As an example, look at the comments on the EA Forum article “Are We Living in the Most Influential Time in History?” by Will MacAskill and Toby Ord. The comments go back and forth about the best prior to use; I think [the commenters] mention the word “prior” about 50 times. They spend almost no time talking about the actual evidence for the thesis in question. I think it’s a bit strange that we spend so much time focusing on which belief we should have if we didn't have any evidence, rather than focusing on the evidence that we do have — and how to incorporate that into our decision-making.
A second pitfall is the difficulty in interpreting and analyzing Bayesian estimates. It's common in EA discourse to present estimates of a posterior probability for a specific question. As an example of this, consider Toby Ord's estimates of different existential risks in his recent book The Precipice. The purpose of such estimates is generally to represent one's overall degree of belief given all of the evidence for a particular question.
The problem with such estimates is that they can't [undergo] any objective analysis or examination, because they’re produced by an entirely subjective, introspective thought process. We don't understand this process at all, so I think we should be quite skeptical about the meaningfulness and utility of such estimates — especially for difficult and unstructured questions. These estimates don't have any clear explanation or details that we can [analyze]. I don't think they have much value in informing future work or helping future researchers to make progress.
A third pitfall is the false sense of precision that Bayesian estimates can sometimes give rise to. When we place numbers on our own certainty, it provides a sense of structure and precision that may not be warranted. In particular, when we consider very unlikely events, we generally are not very good at distinguishing between small probabilities, such as a one-in-1,000 chance and a one-in-10,000 chance. However, when we place numbers of such magnitudes on probabilities, we may unnecessarily or inappropriately anchor [our own or others’ beliefs] to those probabilities. This [promotes] a greater sense of certainty than we really have a right to, given our actual lack of knowledge about the situation.
Given these pitfalls, I suggest that for many questions and controversies facing EAs, we avoid this kind of Bayesian reasoning — or at least augment it with other forms of reasoning. In particular, the one I want to advocate for is what I call “explicit model building.” This is not anything new. It's basically the idea that we specify the assumptions we hold about how a system works, or about what's relevant to a given question, and then use those assumptions in mathematical, graphical, or computer models (or whatever else) to show how they lead to particular predictions or estimates.
This is a widely used approach for everything from macroeconomic models to climate models. The drawback is that it doesn't produce the all-things-considered estimates that Bayesian reasoning does. But in my view, that is perhaps too much to ask for in many cases. Often, it may be better to avoid problems with priors and uninterpretable results by instead explicitly modeling our knowledge of the system.
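As a toy illustration of what “explicit model building” can look like in practice — every number and assumption below is a made-up placeholder, not an estimate from the talk — the point is that the assumptions are named, inspectable, and can be challenged one by one, unlike a bare posterior probability:

```python
# Illustrative sketch of explicit model building: state assumptions as named
# parameters, combine them in a transparent model, and report the implied estimate.
# All numbers are hypothetical placeholders.

p_event_per_year = 0.001            # Assumption 1: annual chance a triggering event occurs
p_catastrophe_given_event = 0.05    # Assumption 2: chance such an event becomes catastrophic
years = 100                         # Assumption 3: time horizon

# Model: independent annual trials (itself an explicit, inspectable assumption).
p_no_catastrophe = (1 - p_event_per_year * p_catastrophe_given_event) ** years
p_catastrophe_this_century = 1 - p_no_catastrophe
print(f"Implied risk over {years} years: {p_catastrophe_this_century:.3%}")
```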
China is currently home to approximately 20% of the world’s population. It’s also home to approximately 7% of the world’s water and 7% of the world’s land. If those statistics don’t seem to add up when you want to feed such a large population, neither does this one: China consumes 28% of the world’s meat, and that’s just the current figure. That number is very much set to grow as China’s middle class expands and [seeks] a diet richer in animal protein — specifically, meat.
There are clear environmental issues associated with that, but I think from an EA perspective, there are also other issues we should consider — most pertinently, animal welfare and the billions of animals whose lives and quality of life hang in the balance.
When we think of Chinese innovation, we often think of Huawei or mobile technologies. But Chinese innovation also extends to industrial animal farming. An example is so-called “pig hotels,” which are multistory buildings where pigs are born and spend their entire lives before slaughter, never setting hoof on land. This is clearly incredibly cruel. In the middle of last year, China's pig population was 440 million. And while [factory farming can create issues for humans] like African swine fever, it's clear that there are also billions of animals whose lives we need to consider when we think of meat consumption in China.
When we think of innovation, we can also frame it in a positive way. In my own research, when I was studying at Peking University and completing my master's, I looked at Chinese attitudes toward cultured meat. I think cultured meat is a viable alternative considering the growing demand for meat in China, which will be very hard to supply. There are a few reasons why I think it's a great time for China to adopt and develop this kind of technology.
China's currently in a transition period. It's moving away from a system in which smallholder farms and wet markets supply most of the nation's meat, and toward the cold chain. That shift has only been accelerated by the recent COVID pandemic. When something happens in China — in any industry, or more broadly in relation to development — if the government is supportive of it, things move a lot faster than they do elsewhere. So in terms of regulation or an incumbent meat lobby, there are lower hurdles, especially if the government is supportive.
What other benefits are there [to developing cultivated meat] outside of animal welfare? I think there are also clearly public health benefits to avoiding industrial animal farming. A recent study done in China found that if there was a conversion to cultured meat, it would only require 1.1% of the land currently used for meat production, and it would reduce greenhouse gases by 85%.
From my perspective, the most important thing to consider in this equation is the consumer. We might have the most advanced technology to save the world and, indeed, deal with many domestic environmental, animal welfare, and public health issues in China. But if people will not buy cultured meat, then it may not succeed.
My own research first involved doing interviews with 18 stakeholders in the current meat industry, and in alternative protein and food technology, and understanding Chinese cultural and historical attitudes toward meat and vegetarianism. Then, I did a 1,000-person survey that was funded by the Cellular Agriculture Society. That survey sought to [identify] the best way to present cultured meat to a Chinese consumer. I based it on previous work done by Chris Bryant and Courtney Dillard on framing, and also on issues like how to tailor a product to make it attractive to a Chinese consumer.
A few takeaways:
I think, for these reasons, cultured meat in China is something that EAs should definitely support, especially given how tractable and neglected the problem is, as well as the impact involved.
But I also think it's important to avoid food neocolonialism and the types of debates that have happened in relation to other food technologies like GMOs [genetically modified organisms]. In some parts of the world, GMOs are very much seen as a Western technology imposed on countries whose people's lives are given less importance, and where new technology can be developed without considering the impact it might have on public health. It’s also important, when it comes to general discussion around things like greenhouse gases, for there to be a debate about whether lower- and middle-income countries are able to develop in the same manner that we have in the West.
We must recognize that cultured meat can be beneficial globally — and indeed, should be adopted globally. I would encourage any EAs who are interested in this issue to get involved in the space.