August 03, 2017
Hey!
Are you going to Effective Altruism Global in San Francisco this month? If so, some of us from the team will see you there!
Those of us missing out can find some diversion in this early thread on EA Onion headlines, as well as in some very real articles the satirical magazine has published on topics close to EA.
Whether your August is hot or cold, we hope it's a great one!
The team
Have you ever wondered whether it’s acceptable to take a harmful job if it allows you to indirectly help others? Well, so has 80,000 Hours, and now they’ve released in-depth research and analysis on this topic!
A common question is how EAs should approach climate change. Stijn Bruers addresses this in a recent blog post, laying out his own strategy for offsetting personal carbon emissions using EA principles.
Julia Galef identifies categories of inefficiency in the social value market.
Peter Hurford and Marcus Davis research how long it takes to develop a new vaccine.
Ben West presents an argument for why the future is probably positive.
Daniel Dewey from the Open Philanthropy Project details his current thoughts on the Machine Intelligence Research Institute’s Highly Reliable Agent Design agenda.
Check out the 80-minute debate between Will MacAskill and Giles Fraser about whether effective altruism is the right way to lead an ethical life. It covers a lot of interesting ground, including why you should not give to disaster relief, how the best charities do a hundred times more good than typical ones, and why you should stop eating chicken and eggs first if you want to reduce animal suffering through your diet.
80,000 Hours
80,000 Hours continued their podcast with an interview with Dr Amodei about OpenAI and how to become an AI researcher. Apart from the analysis of the ethics of harmful jobs, new publications include a career review of Machine Learning PhDs and an investigation of the economics literature on which professions have the biggest positive spillovers to society.
Animal Charity Evaluators
ACE explained the process and rationale behind their cost-effectiveness estimates and made updates to their charity evaluation process. They have also updated the list of projects funded by the Animal Advocacy Research Fund.
Centre for Effective Altruism
This month, CEA has been focusing on the EA Global San Francisco conference happening this weekend. EA Global SF is one of the big community events of the year and is exploring how we can do good together as the community grows. They have also been reviewing over 700 EA Grants applications. They expect to announce the grants they are making to promising EA projects in the next couple of months.
Centre for the Future of Intelligence
CFI held their first conference. Keynote speakers included the Rt Hon Matt Hancock MP (UK Digital Minister), Professor Stuart Russell (Berkeley), Baroness Onora O'Neill (Cambridge), Dr Claire Craig (Royal Society), and Professor Francesca Rossi (University of Padova, Italy). The entire event was livestreamed.
Centre for the Study of Existential Risk
The “Black Sky” Infrastructure and Societal Resilience workshop report is online. Gorm Shackelford joined CSER; he is using text and data mining to create a database of information on catastrophic risks and interventions. CSER hosted the Decision Theory & the Future of Artificial Intelligence workshop, with talks on “The Tragedy of the Uncommons: On the Psychology, Politics and Policy of Existential Risk” and “Resource Allocation in Health Emergencies.”
Future of Humanity Institute
FHI’s AI safety team published a paper on avoiding catastrophic events through human intervention, Nick Bostrom gave an interview to PBS explaining superintelligence concerns, and FHI is hiring for two new research roles in macrostrategy.
Foundational Research Institute
In a new paper, researcher Lukas Gloor introduces tranquilism as a theory of what makes experiences valuable or disvaluable. Tranquilism is an "absence of desire" theory holding that all states of contentment (such as those found in meditation, flow states, or other forms of tranquility) are just as valuable as states of intense pleasure.
Future of Life Institute
FLI interviewed Anthony Aguirre and Andrew Critch about the art of predicting the future, what constitutes a good prediction, and how we can better predict the advancement of artificial intelligence.
GiveWell
GiveWell published a blog post summarizing its progress in 2017. GiveWell also described why it is considering fistula management charities as potential future top charities and its plans to partner with IDinsight, an international NGO, as part of this work.
Open Philanthropy Project
The Open Philanthropy Project announced several grants last month, including $2,400,000 over four years to the Montreal Institute for Learning Algorithms to support research to improve the positive long-term impact of artificial intelligence on society and $500,000 to The Greenfield Project to push for federal reforms to improve farm animal welfare.
Raising for Effective Giving
This year’s poker world championship, the World Series of Poker, has been highly successful for REG: REG supporter Benjamin Pollak finished third in the Main Event, winning $3.5 million, and pledged to donate $105,000 to effective charities; several other players are making major contributions as well. REG also launched a new website.
If you have a few moments, help test two cool Slack plugins (join the Rethink Charity team here!).
Also, don’t forget that applications for EA Global in London are open!
Make sure to check out the FLI list of postings and 80,000 Hours' job board!
To keep on top of EA jobs, feel free to visit these Facebook and LinkedIn groups as well.
Go forth and do the most good! Let us know how you liked this edition and how we can improve further.
If you’re interested in past editions of this newsletter, here is the full archive.
The Effective Altruism Newsletter is a joint project between the Centre for Effective Altruism and .impact.
This is an archived version of the EA Newsletter sent to 45,566 subscribers on August 3, 2017.