Player FM - Internet Radio Done Right
47,068 subscribers
Checked 2d ago
Added eight years ago
Content provided by The 80,000 Hours Podcast and The 80,000 Hours team. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The 80,000 Hours Podcast and The 80,000 Hours team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.
80,000 Hours Podcast
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.
290 episodes
All episodes


15 expert takes on infosec in the age of AI (2:35:54)
"There’s almost no story of the future going well that doesn’t have a part that’s like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of information security: 'You’re training a powerful AI system; you should make it hard for someone to steal' has popped out to me as a thing that just keeps coming up in these stories, keeps being present. It’s hard to tell a story where it’s not a factor. It’s easy to tell a story where it is a factor." — Holden Karnofsky What happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI might actually make computer security better rather than worse? With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from an episode that hasn’t yet been released with Tom Davidson, where he explains how we should be more worried about “secret loyalties” in AI agents. You’ll hear: Holden Karnofsky on why every good future relies on strong infosec, and how hard it’s been to hire security experts (from episode #158 ) Tantum Collins on why infosec might be the rare issue everyone agrees on ( episode #166 ) Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security ( episode #197 ) Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs ( episode #195 ) Kevin Esvelt on what cryptographers can teach biosecurity experts ( episode #164 ) Lennart Heim on on Rob’s computer security nightmares ( episode #155 ) Zvi Mowshowitz on the insane lack of security mindset at some AI companies ( episode #184 ) Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity ( episode #132 ) Bruce Schneier on whether AI could eliminate software bugs for good, and why it’s bad to hook everything up to the internet ( episode #64 ) Nita Farahany on the dystopian risks of hacked neurotech ( episode #174 ) Vitalik Buterin on how cybersecurity is the key to defence-dominant futures ( episode #194 ) Nathan Labenz on how even internal teams at AI companies may not know what they’re building ( episode #176 ) Allan Dafoe on backdooring your own AI to prevent theft ( episode #212 ) Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!) Carl Shulman on the challenge of trusting foreign AI models ( episode #191, part 2 ) Plus lots of concrete advice on how to get into this field and find your fit Check out the full transcript on the 80,000 Hours website . 
Chapters: Cold open (00:00:00) Rob's intro (00:00:49) Holden Karnofsky on why infosec could be the issue on which the future of humanity pivots (00:03:21) Tantum Collins on why infosec is a rare AI issue that unifies everyone (00:12:39) Nick Joseph on whether the current state of information security makes it impossible to responsibly train AGI (00:16:23) Nova DasSarma on the best available defences against well-funded adversaries (00:22:10) Sella Nevo on why AI model weights are so valuable to steal (00:28:56) Kevin Esvelt on what cryptographers can teach biosecurity experts (00:32:24) Lennart Heim on the possibility of an autonomously replicating AI computer worm (00:34:56) Zvi Mowshowitz on the absurd lack of security mindset at some AI companies (00:48:22) Sella Nevo on the weaknesses of air-gapped networks and the risks of USB devices (00:49:54) Bruce Schneier on why it’s bad to hook everything up to the internet (00:55:54) Nita Farahany on the possibility of hacking neural implants (01:04:47) Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (01:10:48) Nova DasSarma on exciting progress in information security (01:19:28) Nathan Labenz on how even internal teams at AI companies may not know what they’re building (01:30:47) Allan Dafoe on backdooring your own AI to prevent someone else from stealing it (01:33:51) Tom Davidson on how dangerous “secret loyalties” in AI models could get (01:35:57) Carl Shulman on whether we should be worried about backdoors as governments adopt AI technology (01:52:45) Nova DasSarma on politically motivated cyberattacks (02:03:44) Bruce Schneier on the day-to-day benefits of improved security and recognising that there’s never zero risk (02:07:27) Holden Karnofsky on why it’s so hard to hire security people despite the massive need (02:13:59) Nova DasSarma on practical steps to getting into this field (02:16:37) Bruce Schneier on finding your personal fit in a range of security careers (02:24:42) Rob's outro (02:34:46) Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Content editing: Katy Moore and Milo McGuire Transcriptions and web: Katy Moore…


#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared (3:57:36)
The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years. That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “ Preparing for the intelligence explosion .” Not in the distant future, but probably in three to seven years. Links to learn more, highlights, video, and full transcript. The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years. Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he’s never heard of, while simultaneously grappling with learning that he descended from monkeys and his god doesn’t exist. What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed. In this conversation with host Rob Wiblin, recorded on February 7, 2025, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss: Why leading AI safety researchers now think there’s dramatically less time before AI is transformative than they’d previously thought The three different types of intelligence explosions that occur in order Will’s list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity Ways AI could radically improve human coordination and decision making Why we should aim for truly flourishing futures, not just avoiding extinction Chapters: Cold open (00:00:00) Who’s Will MacAskill? (00:00:46) Why Will now just works on AGI (00:01:02) Will was wrong(ish) on AI timelines and hinge of history (00:04:10) A century of history crammed into a decade (00:09:00) Science goes super fast; our institutions don't keep up (00:15:42) Is it good or bad for intellectual progress to 10x? (00:21:03) An intelligence explosion is not just plausible but likely (00:22:54) Intellectual advances outside technology are similarly important (00:28:57) Counterarguments to intelligence explosion (00:31:31) The three types of intelligence explosion (software, technological, industrial) (00:37:29) The industrial intelligence explosion is the most certain and enduring (00:40:23) Is a 100x or 1,000x speedup more likely than 10x? 
(00:51:51) The grand superintelligence challenges (00:55:37) Grand challenge #1: Many new destructive technologies (00:59:17) Grand challenge #2: Seizure of power by a small group (01:06:45) Is global lock-in really plausible? (01:08:37) Grand challenge #3: Space governance (01:18:53) Is space truly defence-dominant? (01:28:43) Grand challenge #4: Morally integrating with digital beings (01:32:20) Will we ever know if digital minds are happy? (01:41:01) “My worry isn't that we won't know; it's that we won't care” (01:46:31) Can we get AGI to solve all these issues as early as possible? (01:49:40) Politicians have to learn to use AI advisors (02:02:03) Ensuring AI makes us smarter decision-makers (02:06:10) How listeners can speed up AI epistemic tools (02:09:38) AI could become great at forecasting (02:13:09) How not to lock in a bad future (02:14:37) AI takeover might happen anyway — should we rush to load in our values? (02:25:29) ML researchers are feverishly working to destroy their own power (02:34:37) We should aim for more than mere survival (02:37:54) By default the future is rubbish (02:49:04) No easy utopia (02:56:55) What levers matter most to utopia (03:06:32) Bottom lines from the modelling (03:20:09) People distrust utopianism; should they distrust this? (03:24:09) What conditions make eventual eutopia likely? (03:28:49) The new Forethought Centre for AI Strategy (03:37:21) How does Will resist hopelessness? (03:50:13) Video editing: Simon Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Camera operator: Jeremy Chevillotte Transcriptions and web: Katy Moore…


Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui) (36:50)
When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment. As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case could be heard before it’s likely to go ahead. (See Rob’s brief summary of developments in the case.) And if Musk’s donations to OpenAI are enough to give him the right to bring a case, Rogers sounded very sympathetic to his objections to the OpenAI foundation selling the company, benefiting the founders who forswore “any intent to use OpenAI as a vehicle to enrich themselves.” But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action. And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place. This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that. And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour. Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable. This episode was originally recorded on March 6, 2025. Chapters: Intro (00:00:11) More juicy OpenAI news (00:00:46) The court order (00:02:11) Elon has two hurdles to jump (00:05:17) The judge's sympathy (00:08:00) OpenAI's defence (00:11:45) Alternative plans for OpenAI (00:13:41) Should the foundation give up control? (00:16:38) Alternative plaintiffs to Musk (00:21:13) The 'special interest party' option (00:25:32) How might this play out in the fall? (00:27:52) The nonprofit board is in a bit of a bind (00:29:20) Is it in the public interest to race? (00:32:23) Could the board be personally negligent? (00:34:06) Video editing: Simon Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Transcriptions: Katy Moore…


#139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value (3:41:31)
A casino offers you a game. A coin will be tossed repeatedly until it comes up heads. If heads comes up on the first flip you win $2. If it first comes up on the second flip you win $4; on the third, $8; on the fourth, $16; and so on. How much should you be willing to pay to play?

The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for '0.5 * $2 = $1' in expected earnings. A 25% chance of winning $4, for '0.25 * $4 = $1' in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity (a short worked example of this calculation appears just after this episode’s show notes). And that’s despite the fact that you know with certainty you can only ever win a finite amount!

Today’s guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”

Rebroadcast: this episode was originally released in October 2022. Links to learn more, highlights, and full transcript.

The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.

We might reject the setup as a hypothetical that could never exist in the real world, and therefore of mere intellectual curiosity. But Alan doesn’t find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.

These issues regularly show up in 80,000 Hours’ efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good.

Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a 0.00000009% chance? Expected value says this final offer is better than the others — 1,000 times better, in fact.

Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong.

In this conversation, originally released in October 2022, Alan and Rob explore these issues and many others:
Simple rules of thumb for having philosophical insights
A key flaw that hid in Pascal's wager from the very beginning
Whether we have to simply ignore infinities because they mess everything up
What fundamentally is 'probability'?
Some of the many reasons 'frequentism' doesn't work as an account of probability
Why the standard account of counterfactuals in philosophy is deeply flawed
And why counterfactuals present a fatal problem for one sort of consequentialism

Chapters: Cold open (00:00:00) Rob's intro (00:01:05) The interview begins (00:05:28) Philosophical methodology (00:06:35) Theories of probability (00:40:58) Everyday Bayesianism (00:49:42) Frequentism (01:08:37) Ranges of probabilities (01:20:05) Implications for how to live (01:25:05) Expected value (01:30:39) The St. Petersburg paradox (01:35:21) Pascal’s wager (01:53:25) Using expected value in everyday life (02:07:34) Counterfactuals (02:20:19) Most counterfactuals are false (02:56:06) Relevance to objective consequentialism (03:13:28) Alan’s best conference story (03:37:18) Rob's outro (03:40:22) Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore…
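To make the expected value arithmetic in the show notes above concrete, here is a minimal Python sketch (illustrative only, not from the episode; the function name and the flip caps are arbitrary choices). Each possible flip n contributes 0.5^n * $2^n = $1 to the expected payout, so allowing more flips just keeps adding $1s and the uncapped sum diverges:

# Illustrative sketch (not from the episode): partial expected value of the St. Petersburg game.
def st_petersburg_expected_value(max_flips: int) -> float:
    """Expected winnings if the game were hypothetically capped at max_flips coin flips."""
    # Flip n pays $2**n and is reached with probability 0.5**n, so each term equals exactly $1.
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

for cap in (1, 2, 10, 100):
    print(f"Expected value capped at {cap} flip(s): ${st_petersburg_expected_value(cap):,.0f}")

# Prints:
# Expected value capped at 1 flip(s): $1
# Expected value capped at 2 flip(s): $2
# Expected value capped at 10 flip(s): $10
# Expected value capped at 100 flip(s): $100
# Each extra allowed flip adds another $1, which is why the full (uncapped) sum is infinite.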


#143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons (2:40:52)
America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official 'OPLANs' (military operation plans), the US is committed to 'dominating' in a nuclear war with Russia. How would they do that? "That is redacted." Rebroadcast: this episode was originally released in December 2022. Links to learn more, highlights, and full transcript. We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint. As Jeffrey tells it, 'mutually assured destruction' was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn't it still the de facto reality? Yes and no. Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US' plan to prevail in a nuclear war and conclude that "it's freaking madness." They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won't use the weapons. But Jeffrey thinks that's a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It's what the generals and admirals have all prepared for. What matters is the 'not calm moment': the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There's only minutes to decide. Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn't want to take because of how information and options were processed and presented to them. In the heat of the moment, it's natural to reach for the plan you've prepared — however mad it might sound. In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining: Why inter-service rivalry is one of the biggest constraints on US nuclear policy Two times the US sabotaged nuclear nonproliferation among great powers How his field uses jargon to exclude outsiders How the US could prevent the revival of mass nuclear testing by the great powers Why nuclear deterrence relies on the possibility that something might go wrong Whether 'salami tactics' render nuclear weapons ineffective The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow *them* to have the most missiles The problems that arise when you won't talk to people you think are evil Why missile defences are politically popular despite being strategically foolish How open source intelligence can prevent arms races And much more. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…


#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway (2:44:07)
Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant. Links to learn more, highlights, video, and full transcript. This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up. Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway. But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by who, and what they’re used for first. As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild. As of mid-2024 they didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities is good, but imperfect. If we don’t find the right way to ‘elicit’ an ability we can miss that it’s there. Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered. That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary. But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible. Host Rob and Allan also cover: The most exciting beneficial applications of AI Whether and how we can influence the development of technology What DeepMind is doing to evaluate and mitigate risks from frontier AI systems Why cooperative AI may be as important as aligned AI The role of democratic input in AI governance What kinds of experts are most needed in AI safety and governance And much more Chapters: Cold open (00:00:00) Who's Allan Dafoe? (00:00:48) Allan's role at DeepMind (00:01:27) Why join DeepMind over everyone else? (00:04:27) Do humans control technological change? 
(00:09:17) Arguments for technological determinism (00:20:24) The synthesis of agency with tech determinism (00:26:29) Competition took away Japan's choice (00:37:13) Can speeding up one tech redirect history? (00:42:09) Structural pushback against alignment efforts (00:47:55) Do AIs need to be 'cooperatively skilled'? (00:52:25) How AI could boost cooperation between people and states (01:01:59) The super-cooperative AGI hypothesis and backdoor risks (01:06:58) Aren’t today’s models already very cooperative? (01:13:22) How would we make AIs cooperative anyway? (01:16:22) Ways making AI more cooperative could backfire (01:22:24) AGI is an essential idea we should define well (01:30:16) It matters what AGI learns first vs last (01:41:01) How Google tests for dangerous capabilities (01:45:39) Evals 'in the wild' (01:57:46) What to do given no single approach works that well (02:01:44) We don't, but could, forecast AI capabilities (02:05:34) DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25) How 'structural risks' can force everyone into a worse world (02:15:01) Is AI being built democratically? Should it? (02:19:35) How much do AI companies really want external regulation? (02:24:34) Social science can contribute a lot here (02:33:21) How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55) Video editing: Simon Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Camera operator: Jeremy Chevillotte Transcriptions: Katy Moore…


Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui) (57:29)
On Monday Musk made the OpenAI nonprofit foundation an offer they want to refuse, but might have trouble doing so: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with its current charitable mission. For a normal company takeover bid, this would already be spicy. But OpenAI’s unique structure — a nonprofit foundation controlling a for-profit corporation — turns the gambit into an audacious attack on the plan OpenAI announced in December to free itself from nonprofit oversight. As today’s guest Rose Chan Loui — founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits — explains, OpenAI’s nonprofit board now faces a challenging choice. Links to learn more, highlights, video, and full transcript. The nonprofit has a legal duty to pursue its charitable mission of ensuring that AI benefits all of humanity to the best of its ability. And if Musk’s bid would better accomplish that mission than the for-profit’s proposal — that the nonprofit give up control of the company and change its charitable purpose to the vague and barely related “pursue charitable initiatives in sectors such as health care, education, and science” — then it’s not clear the California or Delaware Attorneys General will, or should, approve the deal. OpenAI CEO Sam Altman quickly tweeted “no thank you” — but that was probably a legal slipup, as he’s not meant to be involved in such a decision, which has to be made by the nonprofit board ‘at arm’s length’ from the for-profit company Sam himself runs. The board could raise any number of objections: maybe Musk doesn’t have the money, or the purchase would be blocked on antitrust grounds, seeing as Musk owns another AI company (xAI), or Musk might insist on incompetent board appointments that would interfere with the nonprofit foundation pursuing any goal. But as Rose and Rob lay out, it’s not clear any of those things is actually true. In this emergency podcast recorded soon after Elon’s offer, Rose and Rob also cover: Why OpenAI wants to change its charitable purpose and whether that’s legally permissible On what basis the attorneys general will decide OpenAI’s fate The challenges in valuing the nonprofit’s “priceless” position of control Whether Musk’s offer will force OpenAI to up their own bid, and whether they could raise the money If other tech giants might now jump in with competing offers How politics could influence the attorneys general reviewing the deal What Rose thinks should actually happen to protect the public interest Chapters: Cold open (00:00:00) Elon throws a $97.4b bomb (00:01:18) What was craziest in OpenAI’s plan to break free of the nonprofit (00:02:24) Can OpenAI suddenly change its charitable purpose like that? (00:05:19) Diving into Elon’s big announcement (00:15:16) Ways OpenAI could try to reject the offer (00:27:21) Sam Altman slips up (00:35:26) Will this actually stop things? (00:38:03) Why does OpenAI even want to change its charitable mission? (00:42:46) Most likely outcomes and what Rose thinks should happen (00:51:17) Video editing: Simon Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Transcriptions: Katy Moore…


AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out (3:12:24)
Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023. Check out the full transcript on the 80,000 Hours website. You can decide whether the views we expressed then (and those from our guests) have held up over these last two busy years.

You’ll hear:
Ajeya Cotra on overrated AGI worries
Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
Ian Morris on why the future must be radically different from the present
Nick Joseph on whether his company’s internal safety policies are enough
Richard Ngo on what everyone gets wrong about how ML models work
Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn’t
Carl Shulman on why you’ll prefer robot nannies over human ones
Zvi Mowshowitz on why he’s against working at AI companies except in some safety roles
Hugo Mercier on why even superhuman AGI won’t be that persuasive
Rob Long on the case for and against digital sentience
Anil Seth on why he thinks consciousness is probably biological
Lewis Bollard on whether AI advances will help or hurt nonhuman animals
Rohin Shah on whether humanity’s work ends at the point it creates AGI
And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters: Cold open (00:00:00) Rob's intro (00:00:58) Rob & Luisa: Bowerbirds compiling the AI story (00:03:28) Ajeya Cotra on the misalignment stories she doesn’t buy (00:09:16) Rob & Luisa: Agentic AI and designing machine people (00:24:06) Holden Karnofsky on the dangers of even aligned AI, and how we probably won’t all die from misaligned AI (00:39:20) Ian Morris on why we won’t end up living like The Jetsons (00:47:03) Rob & Luisa: It’s not hard for nonexperts to understand we’re playing with fire here (00:52:21) Nick Joseph on whether AI companies’ internal safety policies will be enough (00:55:43) Richard Ngo on the most important misconception in how ML models work (01:03:10) Rob & Luisa: Issues Rob is less worried about now (01:07:22) Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08) Michael Webb on why he’s sceptical about explosive economic growth (01:20:50) Carl Shulman on why people will prefer robot nannies over humans (01:28:25) Rob & Luisa: Should we expect AI-related job loss? (01:36:19) Zvi Mowshowitz on why he thinks it’s a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06) Holden Karnofsky on the power that comes from just making models bigger (01:45:21) Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49) Hugo Mercier on how AI won’t cause misinformation pandemonium (01:58:29) Rob & Luisa: How hard will it actually be to create intelligence?
(02:09:08) Robert Long on whether digital sentience is possible (02:15:09) Anil Seth on why he believes in the biological basis of consciousness (02:27:21) Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52) Rob & Luisa: The most interesting new argument Rob’s heard this year (02:50:37) Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35) Rob's outro (03:11:02) Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Transcriptions and additional content editing: Katy Moore…


#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions (3:10:21)
If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they're at risk of being seriously overrated and applied where they don't belong. Rebroadcast: this episode was originally released in March 2022. Links to learn more, highlights, and full transcript. Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish. First, what do people mean by 'sustainability'? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running. Given that someone needs to keep paying, Karen tells us that in practice, 'sustainability' is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya only spends $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries. 'Participatory' also sounds nice, and inasmuch as it means leaders are accountable to the people they're trying to help, it probably is. But Karen tells us that in the field, ‘participatory’ usually means that recipients are expected to be involved in planning and delivering services themselves. While that might be suitable in some situations, it's hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing. Finally, making a programme 'holistic' could be smart, but as Karen lays out, it also has some major downsides. For one, it means you're doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it's hard to tell whether you're making progress, or really put your mind to focusing on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful 'holistic school health' programme that, if continued, was going to cost 3.5 times the entire school's budget. 
In this in-depth conversation, originally released in March 2022, Karen Levy and host Rob Wiblin chat about the above, as well as: Why it pays to figure out how you'll interpret the results of an experiment ahead of time The trouble with misaligned incentives within the development industry Projects that don't deliver value for money and should be scaled down How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren Logistical challenges in reaching huge numbers of people with essential services Lessons from Karen's many-decades career And much more Chapters: Cold open (00:00:00) Rob's intro (00:01:33) The interview begins (00:02:21) Funding for effective altruist–mentality development projects (00:04:59) Pre-policy plans (00:08:36) ‘Sustainability’, and other myths in typical international development practice (00:21:37) ‘Participatoriness’ (00:36:20) ‘Holistic approaches’ (00:40:20) How the development industry sees evidence-based development (00:51:31) Initiatives in Africa that should be significantly curtailed (00:56:30) Misaligned incentives within the development industry (01:05:46) Deworming: the early days (01:21:09) The problem of deworming (01:34:27) Deworm the World (01:45:43) Where the majority of the work was happening (01:55:38) Logistical issues (02:20:41) The importance of a theory of change (02:31:46) Ways that things have changed since 2006 (02:36:07) Academic work vs policy work (02:38:33) Fit for Purpose (02:43:40) Living in Kenya (03:00:32) Underrated life advice (03:05:29) Rob’s outro (03:09:18) Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore…


If digital minds could suffer, how would we ever know? (Article) (1:14:30)
“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.

Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don’t think it’s very likely that large language models like LaMDA are sentient — that is, we don’t think they can have good or bad experiences — in a significant way. But we think you can’t dismiss the issue of the moral status of digital minds, regardless of your beliefs about the question. There are major errors we could make in at least two directions:
We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.
It’s possible the AI systems we will create can’t or won’t have moral status. Then it could be a huge mistake to worry about the welfare of digital minds, and doing so might contribute to an AI-related catastrophe.

And we’re currently unprepared to face this challenge. We don’t have good methods for assessing the moral status of AI systems. We don’t know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don’t know if efforts to control AI may lead to extreme suffering.

We believe this is a pressing world problem. It’s hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we’ll be better able to navigate these potentially massive issues if and when they arise.

This article narration by the author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem. You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.

Chapters: Introduction (00:00:00) Understanding the moral status of digital minds (00:00:58) Summary (00:03:31) Our overall view (00:04:22) Why might understanding the moral status of digital minds be an especially pressing problem?
(00:05:59) Clearing up common misconceptions (00:12:16) Creating digital minds could go very badly - or very well (00:14:13) Dangers for digital minds (00:14:41) Dangers for humans (00:16:13) Other dangers (00:17:42) Things could also go well (00:18:32) We don't know how to assess the moral status of AI systems (00:19:49) There are many possible characteristics that give rise to moral status: Consciousness, sentience, agency, and personhood (00:21:39) Many plausible theories of consciousness could include digital minds (00:24:16) The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55) We can't rely on what AI systems tell us about themselves: Behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00) The scale of this issue might be enormous (00:36:08) Work on this problem is neglected but seems tractable: Impact-guided research, technical approaches, and policy approaches (00:43:35) Summing up so far (00:52:22) Arguments against the moral status of digital minds as a pressing problem (00:53:25) Two key cruxes (00:53:31) Maybe this problem is intractable (00:54:16) Maybe this issue will be solved by default (00:58:19) Isn't risk from AI more important than the risks to AIs? (01:00:45) Maybe current AI progress will stall (01:02:36) Isn't this just too crazy? (01:03:54) What can you do to help? (01:05:10) Important considerations if you work on this problem (01:13:00)…


#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems (2:41:11)
If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free. This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops. Today’s guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic with the security team. One of her jobs is to stop hackers exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia. Rebroadcast: this episode was originally released in June 2022. Links to learn more, highlights, and full transcript. As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge. The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once , and thereby have a transformative effect on society. If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately. If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off. As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point. If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world. We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough. 
Chapters: Cold open (00:00:00) Rob's intro (00:00:52) The interview begins (00:02:44) Why computer security matters for AI safety (00:07:39) State of the art in information security (00:17:21) The hack of Nvidia (00:26:50) The most secure systems that exist (00:36:27) Formal verification (00:48:03) How organisations can protect against hacks (00:54:18) Is ML making security better or worse? (00:58:11) Motivated 14-year-old hackers (01:01:08) Disincentivising actors from attacking in the first place (01:05:48) Hofvarpnir Studios (01:12:40) Capabilities vs safety (01:19:47) Interesting design choices with big ML models (01:28:44) Nova’s work and how she got into it (01:45:21) Anthropic and career advice (02:05:52) $600M Ethereum hack (02:18:37) Personal computer security advice (02:23:06) LastPass (02:31:04) Stuxnet (02:38:07) Rob's outro (02:40:18) Producer: Keiran Harris Audio mastering: Ben Cordell and Beppe Rådvik Transcriptions: Katy Moore…


#138 Classic episode – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter (2:25:43)
What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more. The question is a classic that makes for great dorm-room philosophy discussion. But it’s hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we’re looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective. Today’s guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself. Rebroadcast: this episode was originally released in September 2022. Links to learn more, highlights, and full transcript. That idea , in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations. Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they’re valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering. As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves — a position known as ‘philosophical hedonism’ — has been one of the most enduringly popular ideas in ethics. And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things? Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason the famous philosopher of mind Thomas Nagel called The Feeling of Value “a radical and important philosophical contribution.” So what convinces Sharon that philosophical hedonism deserves another go? In today’s interview with host Rob Wiblin, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes these counterarguments are misguided. A philosophical hedonist shouldn’t get in an experience machine, nor override an individual’s autonomy, except in situations so different from the classic thought experiments that it no longer seems strange they would do so. Chapters: Cold open (00:00:00) Rob’s intro (00:00:41) The interview begins (00:04:27) Metaethics (00:05:58) Anti-realism (00:12:21) Sharon's theory of moral realism (00:17:59) The history of hedonism (00:24:53) Intrinsic value vs instrumental value (00:30:31) Egoistic hedonism (00:38:12) Single axis of value (00:44:01) Key objections to Sharon’s brand of hedonism (00:58:00) The experience machine (01:07:50) Robot spouses (01:24:11) Most common misunderstanding of Sharon’s view (01:28:52) How might a hedonist actually live (01:39:28) The organ transplant case (01:55:16) Counterintuitive implications of hedonistic utilitarianism (02:05:22) How could we discover moral facts? (02:19:47) Rob’s outro (02:24:44) Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore…
#134 Classic episode – Ian Morris on what big-picture history teaches us 3:40:53
Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs. Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women. Why such big systematic changes — and why these changes specifically? That's the question bestselling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve . Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years. Rebroadcast: this episode was originally released in July 2022. Links to learn more, highlights, and full transcript. There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the 'right' way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer? In Foragers, Farmers and Fossil Fuels Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels. On this theory, it's technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength. There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another. Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career. In this classic episode, we discuss all of Ian's major books. Chapters: Rob's intro (00:00:53) The interview begins (00:02:30) Geography is Destiny (00:03:38) Why the West Rules—For Now (00:12:04) War! What is it Good For? (00:28:19) Expectations for the future (00:40:22) Foragers, Farmers, and Fossil Fuels (00:53:53) Historical methodology (01:03:14) Falsifiable alternative theories (01:15:59) Archaeology (01:22:56) Energy extraction technology as a key driver of human values (01:37:43) Allowing people to debate about values (02:00:16) Can productive wars still occur? (02:13:28) Where is history contingent and where isn’t it? 
(02:30:23) How Ian thinks about the future (03:13:33) Macrohistory myths (03:29:51) Ian’s favourite archaeology memory (03:33:19) The most unfair criticism Ian’s ever received (03:35:17) Rob's outro (03:39:55) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
#140 Classic episode – Bear Braumoeller on the case that war isn’t in decline 2:48:03
Is war in long-term decline? Steven Pinker's The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out. But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe. Today's guest, professor in political science Bear Braumoeller, is one of the scholars who believes we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age . Rebroadcast: this episode was originally released in November 2022. Links to learn more, highlights, and full transcript. The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours. If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we're as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st. Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster. He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead , he chops up and inspects this data dozens of different ways, to test if there are any shifts over time which seem larger than what could be explained by chance variation alone. In a nutshell, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, "only the dead have seen the end of war." In today's conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as: Why haven't modern ideas about the immorality of violence led to the decline of war, when it's such a natural thing to expect? What would Bear's critics say in response to all this? What do the optimists get right? How does one do proper statistical tests for events that are clumped together, like war deaths? Why are deaths in war so concentrated in a handful of the most extreme events? Did the ideas of the Enlightenment promote nonviolence, on balance? Were early states more or less violent than groups of hunter-gatherers? If Bear is right, what can be done? How did the 'Concert of Europe' or 'Bismarckian system' maintain peace in the 19th century? Which wars are remarkable but largely unknown? 
Chapters: Cold open (00:00:00) Rob's intro (00:01:01) The interview begins (00:05:37) Only the Dead (00:08:33) The Enlightenment (00:18:50) Democratic peace theory (00:28:26) Is religion a key driver of war? (00:31:32) International orders (00:35:14) The Concert of Europe (00:44:21) The Bismarckian system (00:55:49) The current international order (01:00:22) The Better Angels of Our Nature (01:19:36) War datasets (01:34:09) Seeing patterns in data where none exist (01:47:38) Change-point analysis (01:51:39) Rates of violent death throughout history (01:56:39) War initiation (02:05:02) Escalation (02:20:03) Getting massively different results from the same data (02:30:45) How worried we should be (02:36:13) Most likely ways Only the Dead is wrong (02:38:31) Astonishing smaller wars (02:42:45) Rob’s outro (02:47:13) Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore…
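To make the idea of "shifts larger than chance variation" concrete, here is a minimal illustrative sketch — not Bear's actual analysis, and the yearly figures below are invented placeholders rather than Correlates of War data. It uses a simple permutation test: shuffle which years count as "before" and "after" a candidate break point, and see how often random relabelling produces a gap as large as the one observed.

```python
# Minimal, illustrative permutation test for a shift in yearly battle-death rates.
# All numbers are invented placeholders; this is not Braumoeller's method or the
# Correlates of War data — it only illustrates "is this bigger than chance variation?"
import random

deaths_per_100k = {  # hypothetical yearly battle-death rates
    1900: 8.0, 1910: 3.5, 1916: 150.0, 1943: 300.0, 1950: 12.0,
    1970: 6.0, 1985: 2.0, 1995: 4.0, 2005: 1.5, 2015: 2.5,
}

def mean(xs):
    return sum(xs) / len(xs)

def observed_gap(data, break_year):
    """Mean rate before the break year minus mean rate from the break year on."""
    before = [v for y, v in data.items() if y < break_year]
    after = [v for y, v in data.items() if y >= break_year]
    return mean(before) - mean(after)

def permutation_p_value(data, break_year, n_shuffles=10_000, seed=0):
    """Share of random before/after relabellings whose gap is at least as large
    as the observed one — a rough 'could this be chance?' check."""
    rng = random.Random(seed)
    values = list(data.values())
    n_before = sum(1 for y in data if y < break_year)
    observed = observed_gap(data, break_year)
    hits = 0
    for _ in range(n_shuffles):
        rng.shuffle(values)
        if mean(values[:n_before]) - mean(values[n_before:]) >= observed:
            hits += 1
    return hits / n_shuffles

print(permutation_p_value(deaths_per_100k, break_year=1945))
```

Because deaths are so concentrated in a handful of extreme years, a test like this can easily fail to rule out "no change" even when the raw averages look wildly different — one flavour of the statistical subtlety the interview digs into.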
2024 Highlightapalooza! (The best of The 80,000 Hours Podcast this year) 2:50:02
"A shameless recycling of existing content to drive additional audience engagement on the cheap… or the single best, most valuable, and most insight-dense episode we put out in the entire year, depending on how you want to look at it." — Rob Wiblin It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including: How to use the microphone on someone’s mobile phone to figure out what password they’re typing into their laptop Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done Why evolutionary psychology doesn’t support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others How superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it’s mostly a disagreement about timing Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today How much of the gender pay gap is due to direct pay discrimination vs other factors How cleaner wrasse fish blow the mirror test out of the water Why effective altruism may be too big a tent to work well How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with …as well as 27 other top observations and arguments from the past year of the show . Check out the full transcript and episode links on the 80,000 Hours website. Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours . So if you’re struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there. It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown. Enjoy, and look forward to speaking with you in 2025! 
Chapters: Rob's intro (00:00:00) Randy Nesse on the origins of morality and the problem of simplistic selfish-gene thinking (00:02:11) Hugo Mercier on the evolutionary argument against humans being gullible (00:07:17) Meghan Barrett on the likelihood of insect sentience (00:11:26) Sébastien Moro on the mirror test triumph of cleaner wrasses (00:14:47) Sella Nevo on side-channel attacks (00:19:32) Zvi Mowshowitz on AI sleeper agents (00:22:59) Zach Weinersmith on why space settlement (probably) won't make us rich (00:29:11) Rachel Glennerster on pull mechanisms to incentivise repurposing of generic drugs (00:35:23) Emily Oster on the impact of kids on women's careers (00:40:29) Carl Shulman on robot nannies (00:45:19) Nathan Labenz on kids and artificial friends (00:50:12) Nathan Calvin on why it's not too early for AI policies (00:54:13) Rose Chan Loui on how control of OpenAI is independently incredibly valuable and requires compensation (00:58:08) Nick Joseph on why he’s a big fan of the responsible scaling policy approach (01:03:11) Sihao Huang on how the US and UK might coordinate with China (01:06:09) Nathan Labenz on better transparency about predicted capabilities (01:10:18) Ezra Karger on what explains forecasters’ disagreements about AI risks (01:15:22) Carl Shulman on why he doesn't support enforced pauses on AI research (01:18:58) Matt Clancy on the omnipresent frictions that might prevent explosive economic growth (01:25:24) Vitalik Buterin on defensive acceleration (01:29:43) Annie Jacobsen on the war games that suggest escalation is inevitable (01:34:59) Nate Silver on whether effective altruism is too big to succeed (01:38:42) Kevin Esvelt on why killing every screwworm would be the best thing humanity ever did (01:42:27) Lewis Bollard on how factory farming is philosophically indefensible (01:46:28) Bob Fischer on how to think about moral weights if you're not a hedonist (01:49:27) Elizabeth Cox on the empirical evidence of the impact of storytelling (01:57:43) Anil Seth on how our brain interprets reality (02:01:03) Eric Schwitzgebel on whether consciousness can be nested (02:04:53) Jonathan Birch on our overconfidence around disorders of consciousness (02:10:23) Peter Godfrey-Smith on uploads of ourselves (02:14:34) Laura Deming on surprising things that make mice live longer (02:21:17) Venki Ramakrishnan on freezing cells, organs, and bodies (02:24:46) Ken Goldberg on why low fault tolerance makes some skills extra hard to automate in robots (02:29:12) Sarah Eustis-Guthrie on the ups and downs of founding an organisation (02:34:04) Dean Spears on the cost effectiveness of kangaroo mother care (02:38:26) Cameron Meyer Shorb on vaccines for wild animals (02:42:53) Spencer Greenberg on personal principles (02:46:08) Producing and editing: Keiran Harris Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Video editing: Simon Monsour Transcriptions: Katy Moore…
#211 – Sam Bowman on why housing still isn't fixed and what would actually work 3:25:46
Rich countries seem to find it harder and harder to do anything that creates some losers. People who don’t want houses, offices, power stations, trains, subway stations (or whatever) built in their area can usually find some way to block them, even if the benefits to society outweigh the costs 10 or 100 times over. The result of this ‘vetocracy’ has been skyrocketing rent in major cities — not to mention exacerbating homelessness, energy poverty, and a host of other social maladies. This has been known for years, but precious little progress has been made. When trains, tunnels, or nuclear reactors are occasionally built, they’re comically expensive and slow compared to 50 years ago. And housing construction in the UK and California has barely increased, remaining stuck at less than half what it was in the ’60s and ’70s. Today’s guest — economist and editor of Works in Progress Sam Bowman — isn’t content to just condemn the Not In My Backyard (NIMBY) mentality behind this stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you. Links to learn more, highlights, video, and full transcript. So, as Sam explains, a different strategy is needed, one that acknowledges that opponents of development are often correct that a given project will make them worse off. But the thing is, in the cases we care about, these modest downsides are outweighed by the enormous benefits to others — who will finally have a place to live, be able to get to work, and have the energy to heat their home. But democracies are majoritarian, so if most existing residents think they’ll be a little worse off if more dwellings are built in their area, it’s no surprise they aren’t getting built. Luckily we already have a simple way to get people to do things they don’t enjoy for the greater good, a strategy that we apply every time someone goes in to work at a job they wouldn’t do for free: compensate them. Sam thinks this idea, which he calls “Coasean democracy,” could create a politically sustainable majority in favour of building and underlies the proposals he thinks have the best chance of success — which he discusses in detail with host Rob Wiblin. Chapters: Cold open (00:00:00) Introducing Sam Bowman (00:00:59) We can’t seem to build anything (00:02:09) Our inability to build is ruining people's lives (00:04:03) Why blocking growth of big cities is terrible for science and invention (00:09:15) It's also worsening inequality, health, fertility, and political polarisation (00:14:36) The UK as the 'limit case' of restrictive planning permission gone mad (00:17:50) We've known this for years. So why almost no progress fixing it? (00:36:34) NIMBYs aren't wrong: they are often harmed by development (00:43:58) Solution #1: Street votes (00:55:37) Are street votes unfair to surrounding areas? (01:08:31) Street votes are coming to the UK — what to expect (01:15:07) Are street votes viable in California, NY, or other countries?
(01:19:34) Solution #2: Benefit sharing (01:25:08) Property tax distribution — the most important policy you've never heard of (01:44:29) Solution #3: Opt-outs (01:57:53) How to make these things happen (02:11:19) Let new and old institutions run in parallel until the old one withers (02:18:17) The evil of modern architecture and why beautiful buildings are essential (02:31:58) Northern latitudes need nuclear power — solar won't be enough (02:45:01) Ozempic is still underrated and “the overweight theory of everything” (03:02:30) How has progress studies remained sane while being very online? (03:17:55) Video editing: Simon Monsour Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Transcriptions: Katy Moore…
#210 – Cameron Meyer Shorb on dismantling the myth that we can’t do anything to help wild animals 3:21:03
"I really don’t want to give the impression that I think it is easy to make predictable, controlled, safe interventions in wild systems where there are many species interacting. I don’t think it’s easy, but I don’t see any reason to think that it’s impossible. And I think we have been making progress. I think there’s every reason to think that if we continue doing research, both at the theoretical level — How do ecosystems work? What sorts of things are likely to have what sorts of indirect effects? — and then also at the practical level — Is this intervention a good idea? — I really think we’re going to come up with plenty of things that would be helpful to plenty of animals." —Cameron Meyer Shorb In today’s episode, host Luisa Rodriguez speaks to Cameron Meyer Shorb — executive director of the Wild Animal Initiative — about the cutting-edge research on wild animal welfare. Links to learn more, highlights, and full transcript. They cover: How it’s almost impossible to comprehend the sheer number of wild animals on Earth — and why that makes their potential suffering so important to consider. How bad experiences like disease, parasites, and predation truly are for wild animals — and how we would even begin to study that empirically. The tricky ethical dilemmas in trying to help wild animals without unintended consequences for ecosystems or other potentially sentient beings. Potentially promising interventions to help wild animals — like selective reforestation, vaccines, fire management, and gene drives. Why Cameron thinks the best approach to improving wild animal welfare is to first build a dedicated research field — and how Wild Animal Initiative’s activities support this. The many career paths in science, policy, and technology that could contribute to improving wild animal welfare. And much more. Chapters: Cold open (00:00:00) Luisa's intro (00:01:04) The interview begins (00:03:40) One concrete example of how we might improve wild animal welfare (00:04:04) Why should we care about wild animal suffering? (00:10:00) What’s it like to be a wild animal? (00:19:37) Suffering and death in the wild (00:29:19) Positive, benign, and social experiences (00:51:33) Indicators of welfare (01:01:40) Can we even help wild animals without unintended consequences? (01:13:20) Vaccines for wild animals (01:30:59) Fire management (01:44:20) Gene drive technologies (01:47:42) Common objections and misconceptions about wild animal welfare (01:53:19) Future promising interventions (02:21:58) What’s the long game for wild animal welfare? (02:27:46) Eliminating the biological basis for suffering (02:33:21) Optimising for high-welfare landscapes (02:37:33) Wild Animal Initiative’s work (02:44:11) Careers in wild animal welfare (02:58:13) Work-related guilt and shame (03:12:57) Luisa's outro (03:19:51) Producer: Keiran Harris Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
#209 – Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit 1:22:08
One OpenAI critic calls it “the theft of at least the millennium and quite possibly all of human history.” Are they right? Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal. Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them? That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance. Links to learn more, highlights, video, and full transcript. As Rose lays out, on paper OpenAI is controlled by a nonprofit board that: Can fire the CEO. Would receive all the profits after the point OpenAI makes 100x returns on investment. Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.” But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale). Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars. So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the board will become minority shareholders with reduced voting rights, and presumably transform into a normal grantmaking foundation instead. Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it? OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff. Chapters: Cold open (00:00:00) What's coming up (00:00:50) Who is Rose Chan Loui? (00:03:11) How OpenAI carefully chose a complex nonprofit structure (00:04:17) OpenAI's new plan to become a for-profit (00:11:47) The nonprofit board is out-resourced and in a tough spot (00:14:38) Who could be cheated in a bad conversion to a for-profit? (00:17:11) Is this a unique case? (00:27:24) Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? 
(00:28:58) The crazy difficulty of valuing the profits OpenAI might make (00:35:21) Control of OpenAI is independently incredibly valuable and requires compensation (00:41:22) It's very important the nonprofit get cash and not just equity (and few are talking about it) (00:51:37) Is it a farce to call this an "arm's-length transaction"? (01:03:50) How the nonprofit board can best play their hand (01:09:04) Who can mount a court challenge and how that would work (01:15:41) Rob's outro (01:21:25) Producer: Keiran Harris Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Video editing: Simon Monsour Transcriptions: Katy Moore…
#208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world 2:22:03
"I think stories are the way we shift the Overton window — so widen the range of things that are acceptable for policy and palatable to the public. Almost by definition, a lot of things that are going to be really important and shape the future are not in the Overton window, because they sound weird and off-putting and very futuristic. But I think stories are the best way to bring them in." — Elizabeth Cox In today’s episode, Keiran Harris speaks with Elizabeth Cox — founder of the independent production company Should We Studio — about the case that storytelling can improve the world. Links to learn more, highlights, and full transcript. They cover: How TV shows and movies compare to novels, short stories, and creative nonfiction if you’re trying to do good. The existing empirical evidence for the impact of storytelling. Their competing takes on the merits of thinking carefully about target audiences. Whether stories can really change minds on deeply entrenched issues, or whether writers need to have more modest goals. Whether humans will stay relevant as creative writers with the rise of powerful AI models. Whether you can do more good with an overtly educational show vs other approaches. Elizabeth’s experience with making her new five-part animated show Ada — including why she chose the topics of civilisational collapse, kidney donations, artificial wombs, AI, and gene drives. The pros and cons of animation as a medium. Career advice for creative writers. Keiran’s idea for a longtermist Christmas movie. And plenty more. Check out Ada on YouTube! Material you might want to check out before listening: The trailer for Elizabeth’s new animated series Ada — the full series will be available on TED-Ed’s YouTube channel in early January 2025 Keiran’s pilot script and a 10-episode outline for his show Bequest , and his post about the show on the Effective Altruism Forum Chapters: Cold open (00:00:00) Luisa's intro (00:01:04) The interview begins (00:02:52) Is storytelling really a high-impact career option? (00:03:26) Empirical evidence of the impact of storytelling (00:06:51) How storytelling can inform us (00:16:25) How long will humans stay relevant as creative writers? (00:21:54) Ada (00:33:05) Debating the merits of thinking about target audiences (00:38:03) Ada vs other approaches to impact-focused storytelling (00:48:18) Why animation (01:01:06) One Billion Christmases (01:04:54) How storytelling can humanise (01:09:34) But can storytelling actually change strongly held opinions? (01:13:26) Novels and short stories (01:18:38) Creative nonfiction (01:25:06) Other promising ways of storytelling (01:30:53) How did Ada actually get made? (01:33:23) The hardest part of the process for Elizabeth (01:48:28) Elizabeth’s hopes and dreams for Ada (01:53:10) Designing Ada with an eye toward impact (01:59:16) Alternative topics for Ada (02:05:33) Deciding on the best way to get Ada in front of people (02:07:12) Career advice for creative writers (02:11:31) Wikipedia book spoilers (02:17:05) Luisa's outro (02:20:42) Producer: Keiran Harris Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
#207 – Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead 2:58:39
"I think one of the reasons I took [shutting down my charity] so hard is because entrepreneurship is all about this bets-based mindset. So you say, “I’m going to take a bunch of bets. I’m going to take some risky bets that have really high upside.” And this is a winning strategy in life, but maybe it’s not a winning strategy for any given hand. So the fact of the matter is that I believe that intellectually, but l do not believe that emotionally. And I have now met a bunch of people who are really good at doing that emotionally, and I’ve realised I’m just not one of those people. I think I’m more entrepreneurial than your average person; I don’t think I’m the maximally entrepreneurial person. And I also think it’s just human nature to not like failing." —Sarah Eustis-Guthrie In today’s episode, host Luisa Rodriguez speaks to Sarah Eustis-Guthrie — cofounder of the now-shut-down Maternal Health Initiative , a postpartum family planning nonprofit in Ghana — about her experience starting and running MHI, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as they expected. Links to learn more, highlights, and full transcript. They cover: The evidence that made Sarah and her cofounder Ben think their organisation could be super impactful for women — both from a health perspective and an autonomy and wellbeing perspective. Early yellow and red flags that maybe they didn’t have the full story about the effectiveness of the intervention. All the steps Sarah and Ben took to build the organisation — and where things went wrong in retrospect. Dealing with the emotional side of putting so much time and effort into a project that ultimately failed. Why it’s so important to talk openly about things that don’t work out, and Sarah’s key lessons learned from the experience. The misaligned incentives that discourage charities from shutting down ineffective programmes. The movement of trust-based philanthropy, and Sarah’s ideas to further improve how global development charities get their funding and prioritise their beneficiaries over their operations. The pros and cons of exploring and pivoting in careers. What it’s like to participate in the Charity Entrepreneurship Incubation Program , and how listeners can assess if they might be a good fit. And plenty more. Chapters: Cold open (00:00:00) Luisa’s intro (00:00:58) The interview begins (00:03:43) The case for postpartum family planning as an impactful intervention (00:05:37) Deciding where to start the charity (00:11:34) How do you even start implementing a charity programme? (00:18:33) Early yellow and red flags (00:22:56) Proof-of-concept tests and pilot programme in Ghana (00:34:10) Dealing with disappointing pilot results (00:53:34) The ups and downs of founding an organisation (01:01:09) Post-pilot research and reflection (01:05:40) Is family planning still a promising intervention? (01:22:59) Deciding to shut down MHI (01:34:10) The surprising community response to news of the shutdown (01:41:12) Mistakes and what Sarah could have done differently (01:48:54) Sharing results in the space of postpartum family planning (02:00:54) Should more charities scale back or shut down? 
(02:08:33) Trust-based philanthropy (02:11:15) Empowering the beneficiaries of charities’ work (02:18:04) The tough ask of getting nonprofits to act when a programme isn’t working (02:21:18) Exploring and pivoting in careers (02:27:01) Reevaluation points (02:29:55) PlayPumps were even worse than you might’ve heard (02:33:25) Charity Entrepreneurship (02:38:30) The mistake of counting yourself out too early (02:52:37) Luisa’s outro (02:57:50) Producer: Keiran Harris Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
Parenting insights from Rob and 8 past guests 1:35:39
With kids very much on the team's mind, we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them. Links to learn more and full transcript. After hearing 8 former guests’ insights, Luisa and Rob chat about: Which of these resonate the most with Rob, now that he’s been a dad for six months (plus an update at nine months). What have been the biggest surprises for Rob in becoming a parent. How Rob's dealt with work and parenting tradeoffs, and his advice for other would-be parents. Rob's list of recommended purchases for new or upcoming parents. This bonus episode includes excerpts from: Ezra Klein on parenting yourself as well as your children (from episode #157) Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158) Parenting expert Emily Oster on how having kids affects relationships, careers and kids, and what actually makes a difference in young kids’ lives (#178) Russ Roberts on empirical research when deciding whether to have kids (#87) Spencer Greenberg on his surveys of parents (#183) Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153) Bryan Caplan on homeschooling (#172) Nita Farahany on thinking about life and the world differently with kids (#174) Chapters: Cold open (00:00:00) Rob & Luisa’s intro (00:00:19) Ezra Klein on parenting yourself as well as your children (00:03:34) Holden Karnofsky on preparing for a kid and freezing embryos (00:07:41) Emily Oster on the impact of kids on relationships (00:09:22) Russ Roberts on empirical research when deciding whether to have kids (00:14:44) Spencer Greenberg on parent surveys (00:23:58) Elie Hassenfeld on how having children reframes his relationship to solving pressing problems (00:27:40) Emily Oster on careers and kids (00:31:44) Holden Karnofsky on the experience of having kids (00:38:44) Bryan Caplan on homeschooling (00:40:30) Emily Oster on what actually makes a difference in young kids' lives (00:46:02) Nita Farahany on thinking about life and the world differently (00:51:16) Rob’s first impressions of parenthood (00:52:59) How Rob has changed his views about parenthood (00:58:04) Can the pros and cons of parenthood be studied? (01:01:49) Do people have skewed impressions of what parenthood is like? (01:09:24) Work and parenting tradeoffs (01:15:26) Tough decisions about screen time (01:25:11) Rob’s advice to future parents (01:30:04) Coda: Rob’s updated experience at nine months (01:32:09) Emily Oster on her amazing nanny (01:35:01) Producer: Keiran Harris Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
#206 – Anil Seth on the predictive brain and how to study consciousness 2:33:50
"In that famous example of the dress, half of the people in the world saw [blue and black], half saw [white and gold]. It turns out there’s individual differences in how brains take into account ambient light. Colour is one example where it’s pretty clear that what we experience is a kind of inference: it’s the brain’s best guess about what’s going on in some way out there in the world. And that’s the claim that I’ve taken on board as a general hypothesis for consciousness: that all our perceptual experiences are inferences about something we don’t and cannot have direct access to." —Anil Seth In today’s episode, host Luisa Rodriguez speaks to Anil Seth — director of the Sussex Centre for Consciousness Science — about how much we can learn about consciousness by studying the brain. Links to learn more, highlights, and full transcript. They cover: What groundbreaking studies with split-brain patients and blindsight have already taught us about the nature of consciousness. Anil’s theory that our perception is a “controlled hallucination” generated by our predictive brains. Whether looking for the parts of the brain that correlate with consciousness is the right way to learn about what consciousness is. Whether our theories of human consciousness can be applied to nonhuman animals. Anil’s thoughts on whether machines could ever be conscious. Disagreements and open questions in the field of consciousness studies, and what areas Anil is most excited to explore next. And much more. Chapters: Cold open (00:00:00) Luisa’s intro (00:01:02) The interview begins (00:02:42) How expectations and perception affect consciousness (00:03:05) How the brain makes sense of the body it’s within (00:21:33) Psychedelics and predictive processing (00:32:06) Blindsight and visual consciousness (00:36:45) Split-brain patients (00:54:56) Overflow experiments (01:05:28) How much can we learn about consciousness from empirical research? (01:14:23) Which parts of the brain are responsible for conscious experiences? (01:27:37) Current state and disagreements in the study of consciousness (01:38:36) Digital consciousness (01:55:55) Consciousness in nonhuman animals (02:18:11) What’s next for Anil (02:30:18) Luisa’s outro (02:32:46) Producer: Keiran Harris Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
If you care about social impact, is voting important? In this piece, Rob investigates the two key things that determine the impact of your vote: The chances of your vote changing an election’s outcome. How much better some candidates are for the world as a whole, compared to others. He then discusses a couple of the best arguments against voting in important elections, namely: If an election is competitive, that means other people disagree about which option is better, and you’re at some risk of voting for the worse candidate by mistake. While voting itself doesn’t take long, knowing enough to accurately pick which candidate is better for the world actually does take substantial effort — effort that could be better allocated elsewhere. Finally, Rob covers the impact of donating to campaigns or working to "get out the vote," which can be effective ways to generate additional votes for your preferred candidate. We last released this article in October 2020, but we think it largely still stands up today. Chapters: Rob's intro (00:00:00) Introduction (00:01:12) What's coming up (00:02:35) The probability of one vote changing an election (00:03:58) How much does it matter who wins? (00:09:29) What if you’re wrong? (00:16:38) Is deciding how to vote too much effort? (00:21:47) How much does it cost to drive one extra vote? (00:25:13) Overall, is it altruistic to vote? (00:29:38) Rob's outro (00:31:19) Producer: Keiran Harris…
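As a rough sketch of the expected value framing behind those two factors — with entirely made-up numbers, not figures from the article — the calculation looks like this:

```python
# Back-of-the-envelope expected value of one vote. Both inputs are hypothetical
# placeholders for illustration; the article is about estimating them properly.
p_decisive = 1 / 10_000_000       # assumed chance your single vote flips the outcome
value_gap_usd = 100_000_000_000   # assumed difference in value to the world between
                                  # the better and worse candidate winning

expected_value = p_decisive * value_gap_usd
print(f"Expected altruistic value of voting: ${expected_value:,.0f}")
# -> $10,000 with these inputs. Whether realistic values for the two inputs
#    (and the cost of voting well) justify voting is what the piece examines.
```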
#205 – Sébastien Moro on the most insane things fish can do 3:11:05
"You have a tank split in two parts: if the fish gets in the compartment with a red circle, it will receive food, and food will be delivered in the other tank as well. If the fish takes the blue triangle, this fish will receive food, but nothing will be delivered in the other tank. So we have a prosocial choice and antisocial choice. When there is no one in the other part of the tank, the male is choosing randomly. If there is a male, a possible rival: antisocial — almost 100% of the time. Now, if there is his wife — his female, this is a prosocial choice all the time. "And now a question: Is it just because this is a female or is it just for their female? Well, when they're bringing a new female, it’s the antisocial choice all the time. Now, if there is not the female of the male, it will depend on how long he's been separated from his female. At first it will be antisocial, and after a while he will start to switch to prosocial choices." —Sébastien Moro In today’s episode, host Luisa Rodriguez speaks to science writer and video blogger Sébastien Moro about the latest research on fish consciousness, intelligence, and potential sentience. Links to learn more, highlights, and full transcript. They cover: The insane capabilities of fish in tests of memory, learning, and problem-solving. Examples of fish that can beat primates on cognitive tests and recognise individual human faces. Fishes’ social lives, including pair bonding, “personalities,” cooperation, and cultural transmission. Whether fish can experience emotions, and how this is even studied. The wild evolutionary innovations of fish, who adapted to thrive in diverse environments from mangroves to the deep sea. How some fish have sensory capabilities we can’t even really fathom — like “seeing” electrical fields and colours we can’t perceive. Ethical issues raised by evidence that fish may be conscious and experience suffering. And plenty more. Producer: Keiran Harris Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
#204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism 1:57:48
Rob Wiblin speaks with FiveThirtyEight election forecaster and author Nate Silver about his new book: On the Edge: The Art of Risking Everything . Links to learn more, highlights, video, and full transcript. On the Edge explores a cultural grouping Nate dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It’s a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits. But on Nate’s telling, it’s a group particularly vulnerable to oversimplification and hubris. Where Riverians’ ability to calculate the “expected value” of actions isn’t as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate’s discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall. Given this show’s focus on the world’s most pressing problems and how to solve them, we narrow in on Nate’s discussion of effective altruism (EA), which has been little covered elsewhere. Nate met many leaders and members of the EA community in researching the book and has watched its evolution online for many years. Effective altruism is the River style of doing good, because of its willingness to buck both fashion and common sense — making its giving decisions based on mathematical calculations and analytical arguments with the goal of maximising an outcome. Nate sees a lot to admire in this, but the book paints a mixed picture in which effective altruism is arguably too trusting, too utilitarian, too selfless, and too reckless at some times, while too image-conscious at others. But while everything has arguable weaknesses, could Nate actually do any better in practice? We ask him: How would Nate spend $10 billion differently than today’s philanthropists influenced by EA? Is anyone else competitive with EA in terms of impact per dollar? Does he have any big disagreements with 80,000 Hours’ advice on how to have impact? Is EA too big a tent to function? What global problems could EA be ignoring? Should EA be more willing to court controversy? Does EA’s niceness leave it vulnerable to exploitation? What moral philosophy would he have modelled EA on? Rob and Nate also talk about: Nate’s theory of Sam Bankman-Fried’s psychology. Whether we had to “raise or fold” on COVID. Whether Sam Altman and Sam Bankman-Fried are structurally similar cases or not. “Winners’ tilt.” Whether it’s selfish to slow down AI progress. The ridiculous 13 Keys to the White House. Whether prediction markets are now overrated. Whether venture capitalists talk a big talk about risk while pushing all the risk off onto the entrepreneurs they fund. And plenty more. Chapters: Cold open (00:00:00) Rob's intro (00:01:03) The interview begins (00:03:08) Sam Bankman-Fried and trust in the effective altruism community (00:04:09) Expected value (00:19:06) Similarities and differences between Sam Altman and SBF (00:24:45) How would Nate do EA differently? (00:31:54) Reservations about utilitarianism (00:44:37) Game theory equilibrium (00:48:51) Differences between EA culture and rationalist culture (00:52:55) What would Nate do with $10 billion to donate? (00:57:07) COVID strategies and tradeoffs (01:06:52) Is it selfish to slow down AI progress? 
(01:10:02) Democratic legitimacy of AI progress (01:18:33) Dubious election forecasting (01:22:40) Assessing how reliable election forecasting models are (01:29:58) Are prediction markets overrated? (01:41:01) Venture capitalists and risk (01:48:48) Producer and editor: Keiran Harris Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Video engineering: Simon Monsour Transcriptions: Katy Moore…
#203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation 1:25:09
"In the human case, it would be mistaken to give a kind of hour-by-hour accounting. You know, 'I had +4 level of experience for this hour, then I had -2 for the next hour, and then I had -1' — and you sort of sum to try to work out the total… And I came to think that something like that will be applicable in some of the animal cases as well… There are achievements, there are experiences, there are things that can be done in the face of difficulty that might be seen as having the same kind of redemptive role, as casting into a different light the difficult events that led up to it. "The example I use is watching some birds successfully raising some young, fighting off a couple of rather aggressive parrots of another species that wanted to fight them, prevailing against difficult odds — and doing so in a way that was so wholly successful. It seemed to me that if you wanted to do an accounting of how things had gone for those birds, you would not want to do the naive thing of just counting up difficult and less-difficult hours. There’s something special about what’s achieved at the end of that process." —Peter Godfrey-Smith In today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and science philosopher — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World . Links to learn more, highlights, and full transcript. They cover: Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence. How the role of culture has been crucial in enabling human technological progress. Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too. Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives. Whether we can and should avoid death by uploading human minds. And plenty more. Chapters: Cold open (00:00:00) Luisa's intro (00:00:57) The interview begins (00:02:12) Wild animal suffering and rewilding (00:04:09) Thinking about death (00:32:50) Uploads of ourselves (00:38:04) Culture and how minds make things happen (00:54:05) Challenges for water-based animals (01:01:37) The importance of sea-to-land transitions in animal life (01:10:09) Luisa's outro (01:23:43) Producer: Keiran Harris Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame 1:36:00
In this episode from our second show, 80k After Hours , Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride. Links to learn more, highlights, and full transcript. They cover: Keiran’s views on free will, and how he came to hold them What it’s like not experiencing sustained guilt, shame, and anger Whether Luisa would become a worse person if she felt less guilt and shame — specifically whether she’d work fewer hours, or donate less money, or become a worse friend Whether giving up guilt and shame also means giving up pride The implications for love The neurological condition ‘Jerk Syndrome’ And some practical advice on feeling less guilt, shame, and anger Who this episode is for: People sympathetic to the idea that free will is an illusion People who experience tons of guilt, shame, or anger People worried about what would happen if they stopped feeling tonnes of guilt, shame, or anger Who this episode isn’t for: People strongly in favour of retributive justice Philosophers who can’t stand random non-philosophers talking about philosophy Non-philosophers who can’t stand random non-philosophers talking about philosophy Chapters: Cold open (00:00:00) Luisa's intro (00:01:16) The chat begins (00:03:15) Keiran's origin story (00:06:30) Charles Whitman (00:11:00) Luisa's origin story (00:16:41) It's unlucky to be a bad person (00:19:57) Doubts about whether free will is an illusion (00:23:09) Acting this way just for other people (00:34:57) Feeling shame over not working enough (00:37:26) First person / third person distinction (00:39:42) Would Luisa become a worse person if she felt less guilt? (00:44:09) Feeling bad about not being a different person (00:48:18) Would Luisa donate less money? (00:55:14) Would Luisa become a worse friend? (01:01:07) Pride (01:08:02) Love (01:15:35) Bears and hurricanes (01:19:53) Jerk Syndrome (01:24:24) Keiran's outro (01:34:47) Get more episodes like this by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type "80k After Hours" into your podcasting app. Producer: Keiran Harris Audio mastering: Milo McGuire Transcriptions: Katy Moore…
#202 – Venki Ramakrishnan on the cutting edge of anti-ageing science 2:20:26
"For every far-out idea that turns out to be true, there were probably hundreds that were simply crackpot ideas. In general, [science] advances building on the knowledge we have, and seeing what the next questions are, and then getting to the next stage and the next stage and so on. And occasionally there’ll be revolutionary ideas which will really completely change your view of science. And it is possible that some revolutionary breakthrough in our understanding will come about and we might crack this problem, but there’s no evidence for that. It doesn’t mean that there isn’t a lot of promising work going on. There are many legitimate areas which could lead to real improvements in health in old age. So I’m fairly balanced: I think there are promising areas, but there’s a lot of work to be done to see which area is going to be promising, and what the risks are, and how to make them work." —Venki Ramakrishnan In today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality . Links to learn more, highlights, and full transcript. They cover: What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived. Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped. Why eliminating major age-related diseases might only extend average lifespan by 15 years. The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t. And plenty more. Chapters: Cold open (00:00:00) Luisa's intro (00:01:04) The interview begins (00:02:21) Reasons to explore why we age and die (00:02:35) Evolutionary pressures and animals that don't biologically age (00:06:55) Why does ageing cause us to die? (00:12:24) Is there a hard limit to the human lifespan? (00:17:11) Evolutionary tradeoffs between fitness and longevity (00:21:01) How ageing resets with every generation, and what we can learn from clones (00:23:48) Younger blood (00:31:20) Freezing cells, organs, and bodies (00:36:47) Are the goals of anti-ageing research even realistic? (00:43:44) Dementia (00:49:52) Senescence (01:01:58) Caloric restriction and metabolic pathways (01:11:45) Yamanaka factors (01:34:07) Cancer (01:47:44) Mitochondrial dysfunction (01:58:40) Population effects of extended lifespan (02:06:12) Could increased longevity increase inequality? (02:11:48) What’s surprised Venki about this research (02:16:06) Luisa's outro (02:19:26) Producer: Keiran Harris Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
#201 – Ken Goldberg on why your robot butler isn’t here yet 2:01:43
"Perception is quite difficult with cameras: even if you have a stereo camera, you still can’t really build a map of where everything is in space. It’s just very difficult. And I know that sounds surprising, because humans are very good at this. In fact, even with one eye, we can navigate and we can clear the dinner table. But it seems that we’re building in a lot of understanding and intuition about what’s happening in the world and where objects are and how they behave. For robots, it’s very difficult to get a perfectly accurate model of the world and where things are. So if you’re going to go manipulate or grasp an object, a small error in that position will maybe have your robot crash into the object, a delicate wine glass, and probably break it. So the perception and the control are both problems." —Ken Goldberg In today’s episode, host Luisa Rodriguez speaks to Ken Goldberg — robotics professor at UC Berkeley — about the major research challenges still ahead before robots become broadly integrated into our homes and societies. Links to learn more, highlights, and full transcript. They cover: Why training robots is harder than training large language models like ChatGPT. The biggest engineering challenges that still remain before robots can be widely useful in the real world. The sectors where Ken thinks robots will be most useful in the coming decades — like homecare, agriculture, and medicine. Whether we should be worried about robot labour affecting human employment. Recent breakthroughs in robotics, and what cutting-edge robots can do today. Ken’s work as an artist, where he explores the complex relationship between humans and technology. And plenty more. Chapters: Cold open (00:00:00) Luisa's intro (00:01:19) General purpose robots and the “robotics bubble” (00:03:11) How training robots is different than training large language models (00:14:01) What can robots do today? (00:34:35) Challenges for progress: fault tolerance, multidimensionality, and perception (00:41:00) Recent breakthroughs in robotics (00:52:32) Barriers to making better robots: hardware, software, and physics (01:03:13) Future robots in home care, logistics, food production, and medicine (01:16:35) How might robot labour affect the job market? (01:44:27) Robotics and art (01:51:28) Luisa's outro (02:00:55) Producer: Keiran Harris Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
80,000 Hours Podcast

#200 – Ezra Karger on what superforecasters and experts think about existential risks (2:49:24)
"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament to come up with estimates of a range of catastrophic risks. Links to learn more, highlights, and full transcript. They cover: How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change. What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results. The challenges of predicting low-probability, high-impact events. Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on. The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are. Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies. Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next. Whether large language models could help or outperform human forecasters. How people can improve their calibration and start making better forecasts personally. Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making. And plenty more. Chapters: Cold open (00:00:00) Luisa’s intro (00:01:07) The interview begins (00:02:54) The Existential Risk Persuasion Tournament (00:05:13) Why is this project important? (00:12:34) How was the tournament set up? (00:17:54) Results from the tournament (00:22:38) Risk from artificial intelligence (00:30:59) How to think about these numbers (00:46:50) Should we trust experts or superforecasters more? (00:49:16) The effect of debate and persuasion (01:02:10) Forecasts from the general public (01:08:33) How can we improve people’s forecasts? (01:18:59) Incentives and recruitment (01:26:30) Criticisms of the tournament (01:33:51) AI adversarial collaboration (01:46:20) Hypotheses about stark differences in views of AI risk (01:51:41) Cruxes and different worldviews (02:17:15) Ezra’s experience as a superforecaster (02:28:57) Forecasting as a research field (02:31:00) Can large language models help or outperform human forecasters? (02:35:01) Is forecasting valuable in the real world? (02:39:11) Ezra’s book recommendations (02:45:29) Luisa's outro (02:47:54) Producer: Keiran Harris Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris Transcriptions: Katy Moore…
80,000 Hours Podcast

#199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy (1:12:37)
"I do think that there is a really significant sentiment among parts of the opposition that it’s not really just that this bill itself is that bad or extreme — when you really drill into it, it feels like one of those things where you read it and it’s like, ' This is the thing that everyone is screaming about?' I think it’s a pretty modest bill in a lot of ways, but I think part of what they are thinking is that this is the first step to shutting down AI development. Or that if California does this, then lots of other states are going to do it, and we need to really slam the door shut on model-level regulation or else they’re just going to keep going. "I think that is like a lot of what the sentiment here is: it’s less about, in some ways, the details of this specific bill, and more about the sense that they want this to stop here, and they’re worried that if they give an inch that there will continue to be other things in the future. And I don’t think that is going to be tolerable to the public in the long run. I think it’s a bad choice, but I think that is the calculus that they are making." —Nathan Calvin In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature. Links to learn more, highlights, and full transcript. They cover: What’s actually in SB 1047, and which AI models it would apply to. The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water. What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it. Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated. How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies. Why California is taking state-level action rather than waiting for federal regulation. How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems. And plenty more. Chapters: Cold open (00:00:00) Luisa's intro (00:00:57) The interview begins (00:02:30) What risks from AI does SB 1047 try to address? (00:03:10) Supporters and critics of the bill (00:11:03) Misunderstandings about the bill (00:24:07) Competition, open source, and liability concerns (00:30:56) Model size thresholds (00:46:24) How is SB 1047 different from the executive order? (00:55:36) Objections Nathan is sympathetic to (00:58:31) Current status of the bill (01:02:57) How can listeners get involved in work like this? (01:05:00) Luisa's outro (01:11:52) Producer and editor: Keiran Harris Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
80,000 Hours Podcast

#198 – Meghan Barrett on challenging our assumptions about insects (3:48:12)
"This is a group of animals I think people are particularly unfamiliar with. They are especially poorly covered in our science curriculum; they are especially poorly understood, because people don’t spend as much time learning about them at museums; and they’re just harder to spend time with in a lot of ways, I think, for people. So people have pets that are vertebrates that they take care of across the taxonomic groups, and people get familiar with those from going to zoos and watching their behaviours there, and watching nature documentaries and more. But I think the insects are still really underappreciated, and that means that our intuitions are probably more likely to be wrong than with those other groups." —Meghan Barrett In today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what that might mean for how humans currently farm and use insects. If you're interested in getting involved with this work, check out Meghan's recent blog post: I’m into insect welfare! What’s next? Links to learn more, highlights, and full transcript. They cover: The scale of potential insect suffering in the wild, on farms, and in labs. Examples from cutting-edge insect research, like how depression- and anxiety-like states can be induced in fruit flies and successfully treated with human antidepressants. How size bias might help explain why many people assume insects can’t feel pain. Practical solutions that Meghan’s team is working on to improve farmed insect welfare, such as standard operating procedures for more humane slaughter methods. Challenges facing the nascent field of insect welfare research, and where the main research gaps are. Meghan’s personal story of how she went from being sceptical of insect pain to working as an insect welfare scientist, and her advice for others who want to improve the lives of insects. And much more. Chapters: Cold open (00:00:00) Luisa's intro (00:01:02) The interview begins (00:03:06) What is an insect? (00:03:22) Size diversity (00:07:24) How important is brain size for sentience? 
(00:11:27) Offspring, parental investment, and lifespan (00:19:00) Cognition and behaviour (00:23:23) The scale of insect suffering (00:27:01) Capacity to suffer (00:35:56) The empirical evidence for whether insects can feel pain (00:47:18) Nociceptors (01:00:02) Integrated nociception (01:08:39) Response to analgesia (01:16:17) Analgesia preference (01:25:57) Flexible self-protective behaviour (01:31:19) Motivational tradeoffs and associative learning (01:38:45) Results (01:43:31) Reasons to be sceptical (01:47:18) Meghan’s probability of sentience in insects (02:10:20) Views of the broader entomologist community (02:18:18) Insect farming (02:26:52) How much to worry about insect farming (02:40:56) Inhumane slaughter and disease in insect farms (02:44:45) Inadequate nutrition, density, and photophobia (02:53:50) Most humane ways to kill insects at home (03:01:33) Challenges in researching this (03:07:53) Most promising reforms (03:18:44) Why Meghan is hopeful about working with the industry (03:22:17) Careers (03:34:08) Insect Welfare Research Society (03:37:16) Luisa's outro (03:47:01) Producer and editor: Keiran Harris Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
80,000 Hours Podcast

#197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task (2:29:26)
The three biggest AI companies — Anthropic , OpenAI , and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough? That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the original cofounders of Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety focused of the AI companies, known for a culture that treats the risks of its work as deadly serious. Links to learn more, highlights, video, and full transcript. As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way. As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise. Nick points out what he sees as the biggest virtues of the RSP approach, and then Rob pushes him on some of the best objections he’s found to RSPs being up to the task of keeping AI safe and beneficial. The two also discuss whether it's essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them. In addition to all of that, Nick and Rob talk about: What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute). What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea. What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI. And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at podcast@80000hours.org . Chapters: Cold open (00:00:00) Rob’s intro (00:01:00) The interview begins (00:03:44) Scaling laws (00:04:12) Bottlenecks to further progress in making AIs helpful (00:08:36) Anthropic’s responsible scaling policies (00:14:21) Pros and cons of the RSP approach for AI safety (00:34:09) Alternatives to RSPs (00:46:44) Is an internal audit really the best approach? (00:51:56) Making promises about things that are currently technically impossible (01:07:54) Nick’s biggest reservations about the RSP approach (01:16:05) Communicating “acceptable” risk (01:19:27) Should Anthropic’s RSP have wider safety buffers? (01:26:13) Other impacts on society and future work on RSPs (01:34:01) Working at Anthropic (01:36:28) Engineering vs research (01:41:04) AI safety roles at Anthropic (01:48:31) Should concerned people be willing to take capabilities roles? 
(01:58:20) Recent safety work at Anthropic (02:10:05) Anthropic culture (02:14:35) Overrated and underrated AI applications (02:22:06) Rob’s outro (02:26:36) Producer and editor: Keiran Harris Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Video engineering: Simon Monsour Transcriptions: Katy Moore…
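The responsible scaling approach described above (run dangerous-capability evals after training, and only widen availability once safeguards judged adequate for what the evals found are in place) has a simple gating structure. The sketch below is a toy illustration of that structure only; the capability levels and safeguard names are invented placeholders, not anything from Anthropic's actual policy or code.

```python
# Toy sketch of the "evaluate, then gate deployment" logic behind responsible
# scaling policies. This is NOT Anthropic's RSP; levels and safeguards are
# invented placeholders purely to illustrate the control flow.

REQUIRED_SAFEGUARDS = {
    # hypothetical capability level -> safeguards that must be in place first
    "baseline": set(),
    "bio_uplift": {"weights_secured", "misuse_filters"},
    "autonomous_replication": {"weights_secured", "misuse_filters", "shutdown_controls"},
}

def may_widen_deployment(eval_results: set, safeguards_in_place: set) -> bool:
    """Allow wider release only if every triggered capability's safeguards are met."""
    for capability in eval_results:
        required = REQUIRED_SAFEGUARDS.get(capability, {"pause_and_escalate"})
        if not required <= safeguards_in_place:
            return False  # hold the release until the gaps are closed
    return True

# Pre-deployment evals flag capabilities; wider deployment is gated on them.
print(may_widen_deployment({"bio_uplift"}, {"weights_secured"}))                    # False
print(may_widen_deployment({"bio_uplift"}, {"weights_secured", "misuse_filters"}))  # True
```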
80,000 Hours Podcast

#196 – Jonathan Birch on the edge cases of sentience and why they matter (2:01:50)
"In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed. People don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous." —Jonathan Birch In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI . (Check out the free PDF version !) Links to learn more, highlights, and full transcript. They cover: Candidates for sentience, such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that that certainty is completely unjustified. Chilling tales about overconfident policies that probably caused significant suffering for decades. How policymakers can act ethically given real uncertainty. Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds equally sentient to the biological versions. How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too. Why Jonathan is so excited about citizens’ assemblies. Jonathan’s conversation with the Dalai Lama about whether insects are sentient. And plenty more. Chapters: Cold open (00:00:00) Luisa’s intro (00:01:20) The interview begins (00:03:04) Why does sentience matter? (00:03:31) Inescapable uncertainty about other minds (00:05:43) The “zone of reasonable disagreement” in sentience research (00:10:31) Disorders of consciousness: comas and minimally conscious states (00:17:06) Foetuses and the cautionary tale of newborn pain (00:43:23) Neural organoids (00:55:49) AI sentience and whole brain emulation (01:06:17) Policymaking at the edge of sentience (01:28:09) Citizens’ assemblies (01:31:13) The UK’s Sentience Act (01:39:45) Ways Jonathan has changed his mind (01:47:26) Careers (01:54:54) Discussing animal sentience with the Dalai Lama (01:59:08) Luisa’s outro (02:01:04) Producer and editor: Keiran Harris Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
80,000 Hours Podcast

#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them (2:08:29)
"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella Nevo In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them. Links to learn more, highlights, and full transcript. They cover: Real-world examples of sophisticated security breaches, and what we can learn from them. Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors. The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks. The current best practices in cybersecurity, and why they may not be enough to keep bad actors away. New security measures that Sella hopes can mitigate with the growing risks. Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia. And plenty more. Also, RAND is currently hiring for roles in technical and policy information security — check them out if you're interested in this field! Chapters: Cold open (00:00:00) Luisa’s intro (00:00:56) The interview begins (00:02:30) The importance of securing the model weights of frontier AI models (00:03:01) The most sophisticated and surprising security breaches (00:10:22) AI models being leaked (00:25:52) Researching for the RAND report (00:30:11) Who tries to steal model weights? (00:32:21) Malicious code and exploiting zero-days (00:42:06) Human insiders (00:53:20) Side-channel attacks (01:04:11) Getting access to air-gapped networks (01:10:52) Model extraction (01:19:47) Reducing and hardening authorised access (01:38:52) Confidential computing (01:48:05) Red-teaming and security testing (01:53:42) Careers in information security (01:59:54) Sella’s work on flood forecasting systems (02:01:57) Luisa’s outro (02:04:51) Producer and editor: Keiran Harris Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
80,000 Hours Podcast

#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government (3:04:18)
"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “ My techno-optimism ,” which both camps agreed was basically reasonable. Links to learn more, highlights, video, and full transcript. Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive. Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously. But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions. The upshot? Defensive acceleration : humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination. Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024 . You don’t need a business idea yet — just the hustle to start a technology company. In addition to all of that, host Rob Wiblin and Vitalik discuss: AI regulation disagreements being less about AI in particular, and more whether you’re typically more scared of anarchy or totalitarianism. Vitalik’s updated p(doom). Whether the social impact of blockchain and crypto has been a disappointment. Whether humans can merge with AI, and if that’s even desirable. The most valuable defensive technologies to accelerate. How to trustlessly identify what everyone will agree is misinformation Whether AGI is offence-dominant or defence-dominant. Vitalik’s updated take on effective altruism. Plenty more. Chapters: Cold open (00:00:00) Rob’s intro (00:00:56) The interview begins (00:04:47) Three different views on technology (00:05:46) Vitalik’s updated probability of doom (00:09:25) Technology is amazing, and AI is fundamentally different from other tech (00:15:55) Fear of totalitarianism and finding middle ground (00:22:44) Should AI be more centralised or more decentralised? (00:42:20) Humans merging with AIs to remain relevant (01:06:59) Vitalik’s “d/acc” alternative (01:18:48) Biodefence (01:24:01) Pushback on Vitalik’s vision (01:37:09) How much do people actually disagree? (01:42:14) Cybersecurity (01:47:28) Information defence (02:01:44) Is AI more offence-dominant or defence-dominant? 
(02:21:00) How Vitalik communicates among different camps (02:25:44) Blockchain applications with social impact (02:34:37) Rob’s outro (03:01:00) Producer and editor: Keiran Harris Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong Transcriptions: Katy Moore…
80,000 Hours Podcast

#193 – Sihao Huang on the risk that US–China AI competition leads to war (2:23:34)
"You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold’s, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You train on a much smaller subset of data. You fine-tune it on a smaller subset of data. And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or China does not have as comprehensive AI regulations — this could cause a lot of harm in the world." —Sihao Huang In today’s episode, host Luisa Rodriguez speaks to Sihao Huang about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance. Links to learn more, highlights, video, and full transcript. They cover: Whether the US and China are in an AI race, and the global implications if they are. The state of the art of AI in China. China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain. How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people. Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control. How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly. And plenty more. Chapters: Cold open (00:00:00) Luisa's intro (00:01:02) The interview begins (00:02:06) Is China in an AI race with the West? (00:03:20) How advanced is Chinese AI? (00:15:21) Bottlenecks in Chinese AI development (00:22:30) China and AI risks (00:27:41) Information control and censorship (00:31:32) AI safety research in China (00:36:31) Could China be a source of catastrophic AI risk? (00:41:58) AI enabling human rights abuses and undermining democracy (00:50:10) China’s semiconductor industry (00:59:47) China’s domestic AI governance landscape (01:29:22) China’s international AI governance strategy (01:49:56) Coordination (01:53:56) Track two dialogues (02:03:04) Misunderstandings Western actors have about Chinese approaches (02:07:34) Complexity thinking (02:14:40) Sihao’s pet bacteria hobby (02:20:34) Luisa's outro (02:22:47) Producer and editor: Keiran Harris Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
80,000 Hours Podcast

#192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US (1:54:24)
"Ring one: total annihilation; no cellular life remains. Ring two, another three-mile diameter out: everything is ablaze. Ring three, another three or five miles out on every side: third-degree burns among almost everyone. You are talking about people who may have gone down into the secret tunnels beneath Washington, DC, escaped from the Capitol and such: people are now broiling to death; people are dying from carbon monoxide poisoning; people who followed instructions and went into their basement are dying of suffocation. Everywhere there is death, everywhere there is fire. "That iconic mushroom stem and cap that represents a nuclear blast — when a nuclear weapon has been exploded on a city — that stem and cap is made up of people. What is left over of people and of human civilisation." —Annie Jacobsen In today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario . Links to learn more, highlights, and full transcript. They cover: The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts. What happens during the window that the US president would have to decide about nuclear retaliation after hearing news of a possible nuclear attack. The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes. The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds. How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures. And plenty more. Chapters: Cold open (00:00:00) Luisa’s intro (00:01:03) The interview begins (00:02:28) The first 24 minutes (00:02:59) The Black Book and presidential advisors (00:13:35) False alarms (00:40:43) Russian misperception of US counterattack (00:44:50) A narcissistic madman with a nuclear arsenal (01:00:13) Is escalation inevitable? (01:02:53) Firestorms and rings of annihilation (01:12:56) Nuclear electromagnetic pulses (01:27:34) Continuity of government (01:36:35) Rays of hope (01:41:07) Where we’re headed (01:43:52) Avoiding politics (01:50:34) Luisa’s outro (01:52:29) Producer and editor: Keiran Harris Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
80,000 Hours Podcast

#191 (Part 2) – Carl Shulman on government and society after AGI (2:20:32)
This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI . You can listen to them in either order! If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together? It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere. Links to learn more, highlights, and full transcript. As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" -- without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers to far more cases. If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great. That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it. Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet. To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable. But in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest. In the past we've usually found it easier to predict how hard technologies like planes or factories will change than to imagine the social shifts that those technologies will create — and the same is likely happening for AI. Carl Shulman and host Rob Wiblin discuss the above, as well as: The risk of society using AI to lock in its values. The difficulty of preventing coups once AI is key to the military and police. What international treaties we need to make this go well. How to make AI superhuman at forecasting the future. Whether AI will be able to help us with intractable philosophical questions. Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale. Why Carl doesn't support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we're closer to 'crunch time.' Opportunities for listeners to contribute to making the future go well. 
Chapters: Cold open (00:00:00) Rob’s intro (00:01:16) The interview begins (00:03:24) COVID-19 concrete example (00:11:18) Sceptical arguments against the effect of AI advisors (00:24:16) Value lock-in (00:33:59) How democracies avoid coups (00:48:08) Where AI could most easily help (01:00:25) AI forecasting (01:04:30) Application to the most challenging topics (01:24:03) How to make it happen (01:37:50) International negotiations and coordination and auditing (01:43:54) Opportunities for listeners (02:00:09) Why Carl doesn't support enforced pauses on AI research (02:03:58) How Carl is feeling about the future (02:15:47) Rob’s outro (02:17:37) Producer and editor: Keiran Harris Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong Transcriptions: Katy Moore…
80,000 Hours Podcast

#191 (Part 1) – Carl Shulman on the economy and national security after AGI (4:14:58)
This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order! The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply? Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating. Links to learn more, highlights, and full transcript. Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour. It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field. It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business. It's a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and a rush to build billions of them and cash in. As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives. And with growth rates this high, it doesn't take long to run up against Earth's physical limits — in this case, the toughest to engineer your way out of is the Earth's ability to release waste heat. If this machine economy and its insatiable demand for power generates more heat than the Earth radiates into space, then it will rapidly heat up and become uninhabitable for humans and other animals. This creates pressure to move economic activity off-planet. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use. These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop AGI that could accomplish everything that the most productive humans can, using the same energy supply? In today's episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking: If we're heading towards the above, how come economic growth is slow now and not really increasing? Why have computers and computer chips had so little effect on economic productivity so far? Are self-replicating biological systems a good comparison for self-replicating machine systems? Isn't this just too crazy and weird to be plausible? What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
Might there not be severely declining returns to bigger brains and more training? Wouldn't humanity get scared and pull the brakes if such a transformation kicked off? If this is right, how come economists don't agree? Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other? Chapters: Cold open (00:00:00) Rob’s intro (00:01:00) Transitioning to a world where AI systems do almost all the work (00:05:21) Economics after an AI explosion (00:14:25) Objection: Shouldn’t we be seeing economic growth rates increasing today? (00:59:12) Objection: Speed of doubling time (01:07:33) Objection: Declining returns to increases in intelligence? (01:11:59) Objection: Physical transformation of the environment (01:17:39) Objection: Should we expect an increased demand for safety and security? (01:29:14) Objection: “This sounds completely whack” (01:36:10) Income and wealth distribution (01:48:02) Economists and the intelligence explosion (02:13:31) Baumol effect arguments (02:19:12) Denying that robots can exist (02:27:18) Classic economic growth models (02:36:12) Robot nannies (02:48:27) Slow integration of decision-making and authority power (02:57:39) Economists’ mistaken heuristics (03:01:07) Moral status of AIs (03:11:45) Rob’s outro (04:11:47) Producer and editor: Keiran Harris Audio engineering lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Transcriptions: Katy Moore…
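The 20-watt figure in the description above is easy to sanity-check, and the same arithmetic is what makes the "1 cent of electricity" framing work. The only number added below is an assumed retail electricity price of roughly $0.10 to $0.15 per kWh; it is not a figure from the episode.

```python
# Back-of-the-envelope check on the brain's energy budget mentioned above.
# The only assumption added here is the electricity price (~$0.10-0.15/kWh).

brain_power_w = 20
hours = 1
energy_kwh = brain_power_w * hours / 1000        # 0.02 kWh per hour of "thinking"

for price_per_kwh in (0.10, 0.15):
    cost_cents = energy_kwh * price_per_kwh * 100
    print(f"At ${price_per_kwh:.2f}/kWh: {cost_cents:.2f} cents per brain-hour")

# That works out to roughly 0.2-0.3 cents per hour, so 1 cent of electricity
# buys several hours of human-brain-level energy use. Matching the brain's
# efficiency would make top-tier intellectual labour almost free to run.
```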
80,000 Hours Podcast

#190 – Eric Schwitzgebel on whether the US is conscious (2:00:46)
"One of the most amazing things about planet Earth is that there are complex bags of mostly water — you and me – and we can look up at the stars, and look into our brains, and try to grapple with the most complex, difficult questions that there are. And even if we can’t make great progress on them and don’t come to completely satisfying solutions, just the fact of trying to grapple with these things is kind of the universe looking at itself and trying to understand itself. So we’re kind of this bright spot of reflectiveness in the cosmos, and I think we should celebrate that fact for its own intrinsic value and interestingness." —Eric Schwitzgebel In today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World . Links to learn more, highlights, and full transcript. They cover: Why our intuitions seem so unreliable for answering fundamental questions about reality. What the materialist view of consciousness is, and how it might imply some very weird things — like that the United States could be a conscious entity. Thought experiments that challenge our intuitions — like supersquids that think and act through detachable tentacles, and intelligent species whose brains are made up of a million bugs. Eric’s claim that consciousness and cosmology are universally bizarre and dubious. How to think about borderline states of consciousness, and whether consciousness is more like a spectrum or more like a light flicking on. The nontrivial possibility that we could be dreaming right now, and the ethical implications if that’s true. Why it’s worth it to grapple with the universe’s most complex questions, even if we can’t find completely satisfying solutions. And much more. Chapters: Cold open |00:00:00| Luisa’s intro |00:01:10| Bizarre and dubious philosophical theories |00:03:13| The materialist view of consciousness |00:13:55| What would it mean for the US to be conscious? |00:19:46| Supersquids and antheads thought experiments |00:22:37| Alternatives to the materialist perspective |00:35:19| Are our intuitions useless for thinking about these things? |00:42:55| Key ingredients for consciousness |00:46:46| Reasons to think the US isn’t conscious |01:01:15| Overlapping consciousnesses [01:09:32] Borderline cases of consciousness |01:13:22| Are we dreaming right now? |01:40:29| Will we ever have answers to these dubious and bizarre questions? |01:56:16| Producer and editor: Keiran Harris Audio engineering lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
80,000 Hours Podcast

#189 – Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems (2:48:51)
"You can’t charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So nothing like their social value. Now, don’t get me wrong. I don’t think that they should have charged $5,000 or $6,000. That’s not ethical. It’s also not economically efficient, because they didn’t cost $5,000 at the marginal cost. So you actually want low price, getting out to lots of people. "But it shows you that the market is not going to reward people who do the investment in preparation for a pandemic — because when a pandemic hits, they’re not going to get the reward in line with the social value. They may even have to charge less than they would in a non-pandemic time. So prepping for a pandemic is not an efficient market strategy if I’m a firm, but it’s a very efficient strategy for society, and so we’ve got to bridge that gap." —Rachel Glennerster In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems. Links to learn more, highlights, and full transcript. They cover: How market failures and misaligned incentives stifle critical innovations for social goods like pandemic preparedness, climate change interventions, and vaccine development. How “pull mechanisms” like advance market commitments (AMCs) can help overcome these challenges — including concrete examples like how one AMC led to speeding up the development of three vaccines which saved around 700,000 lives in low-income countries. The challenges in designing effective pull mechanisms, from design to implementation. Why it’s important to tie innovation incentives to real-world impact and uptake, not just the invention of a new technology. The massive benefits of accelerating vaccine development, in some cases, even if it’s only by a few days or weeks. The case for a $6 billion advance market commitment to spur work on a universal COVID-19 vaccine. The shortlist of ideas from the Market Shaping Accelerator’s recent Innovation Challenge that use pull mechanisms to address market failures around improving indoor air quality, repurposing generic drugs for alternative uses, and developing eco-friendly air conditioners for a warming planet. “Best Buys” and “Bad Buys” for improving education systems in low- and middle-income countries, based on evidence from over 400 studies. Lessons from Rachel’s career at the forefront of global development, and how insights from economics can drive transformative change. And much more. Chapters: The Market Shaping Accelerator (00:03:33) Pull mechanisms for innovation (00:13:10) Accelerating the pneumococcal and COVID vaccines (00:19:05) Advance market commitments (00:41:46) Is this uncertainty hard for funders to plan around? 
(00:49:17) The story of the malaria vaccine that wasn’t (00:57:15) Challenges with designing and implementing AMCs and other pull mechanisms (01:01:40) Universal COVID vaccine (01:18:14) Climate-resilient crops (01:34:09) The Market Shaping Accelerator’s Innovation Challenge (01:45:40) Indoor air quality to reduce respiratory infections (01:49:09) Repurposing generic drugs (01:55:50) Clean air conditioning units (02:02:41) Broad-spectrum antivirals for pandemic prevention (02:09:11) Improving education in low- and middle-income countries (02:15:53) What’s still weird for Rachel about living in the US? (02:45:06) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
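The gap Rachel describes between a vaccine course's social value (over $5,000) and its selling price ($6 to $40) is the core case for pull mechanisms, and it is worth seeing as a ratio. The sketch below uses only the figures quoted above.

```python
# The social-value-to-price gap Rachel Glennerster describes, using only the
# figures quoted in the episode description.

social_value_per_course = 5000   # estimated value of one course, Jan 2021 ($)
price_range = (6, 40)            # actual selling prices per course ($)

for price in price_range:
    print(f"At ${price}/course, producers capture roughly "
          f"{price / social_value_per_course:.1%} of the social value "
          f"(a ~{social_value_per_course / price:.0f}x gap)")

# When firms can only capture about 0.1-1% of the value they create in a
# pandemic, the market under-rewards preparedness, which is the gap that
# advance market commitments and other pull mechanisms try to bridge.
```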
80,000 Hours Podcast

#188 – Matt Clancy on whether science is good (2:40:15)
"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff. "Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that’s really bad, if these people using this technology causes huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster." —Matt Clancy In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress. Links to learn more, highlights, and full transcript . They cover: Whether scientific progress is actually net positive for humanity. Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors. Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity. Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out. Why Matt is sceptical that AGI could really cause explosive economic growth. And much more. Chapters: Is scientific progress net positive for humanity? (00:03:00) The time of biological perils (00:17:50) Modelling the benefits of science (00:25:48) Income and health gains from scientific progress (00:32:49) Discount rates (00:42:14) How big are the returns to science? (00:51:08) Forecasting global catastrophic biological risks from scientific progress (01:05:20) What’s the value of scientific progress, given the risks? (01:15:09) Factoring in extinction risk (01:21:56) How science could reduce extinction risk (01:30:18) Are we already too late to delay the time of perils? (01:42:38) Domain experts vs superforecasters (01:46:03) What Open Philanthropy’s Innovation Policy programme settled on (01:53:47) Explosive economic growth (02:06:28) Matt’s favourite thought experiment (02:34:57) Producer and editor: Keiran Harris Audio engineering lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
80,000 Hours Podcast

#187 – Zach Weinersmith on how researching his book turned him from a space optimist into a "space bastard" (3:06:47)
"Earth economists, when they measure how bad the potential for exploitation is, they look at things like, how is labour mobility? How much possibility do labourers have otherwise to go somewhere else? Well, if you are on the one company town on Mars, your labour mobility is zero, which has never existed on Earth. Even in your stereotypical West Virginian company town run by immigrant labour, there’s still, by definition, a train out. On Mars, you might not even be in the launch window. And even if there are five other company towns or five other settlements, they’re not necessarily rated to take more humans. They have their own oxygen budget, right? "And so economists use numbers like these, like labour mobility, as a way to put an equation and estimate the ability of a company to set noncompetitive wages or to set noncompetitive work conditions. And essentially, on Mars you’re setting it to infinity." — Zach Weinersmith In today’s episode, host Luisa Rodriguez speaks to Zach Weinersmith — the cartoonist behind Saturday Morning Breakfast Cereal — about the latest book he wrote with his wife Kelly: A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through? Links to learn more, highlights, and full transcript. They cover: Why space travel is suddenly getting a lot cheaper and re-igniting enthusiasm around space settlement. What Zach thinks are the best and worst arguments for settling space. Zach’s journey from optimistic about space settlement to a self-proclaimed “space bastard” (pessimist). How little we know about how microgravity and radiation affects even adults, much less the children potentially born in a space settlement. A rundown of where we could settle in the solar system, and the major drawbacks of even the most promising candidates. Why digging bunkers or underwater cities on Earth would beat fleeing to Mars in a catastrophe. How new space settlements could look a lot like old company towns — and whether or not that’s a bad thing. The current state of space law and how it might set us up for international conflict. How space cannibalism legal loopholes might work on the International Space Station. And much more. Chapters: Space optimism and space bastards (00:03:04) Bad arguments for why we should settle space (00:14:01) Superficially plausible arguments for why we should settle space (00:28:54) Is settling space even biologically feasible? (00:32:43) Sex, pregnancy, and child development in space (00:41:41) Where’s the best space place to settle? (00:55:02) Creating self-sustaining habitats (01:15:32) What about AI advances? (01:26:23) A roadmap for settling space (01:33:45) Space law (01:37:22) Space signalling and propaganda (01:51:28) Space war (02:00:40) Mining asteroids (02:06:29) Company towns and communes in space (02:10:55) Sending digital minds into space (02:26:37) The most promising space governance models (02:29:07) The tragedy of the commons (02:35:02) The tampon bandolier and other bodily functions in space (02:40:14) Is space cannibalism legal? (02:47:09) The pregnadrome and other bizarre proposals (02:50:02) Space sexism (02:58:38) What excites Zach about the future (03:02:57) Producer and editor: Keiran Harris Audio engineering lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
80,000 Hours Podcast

#186 – Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives (1:18:58)
"I work in a place called Uttar Pradesh, which is a state in India with 240 million people. One in every 33 people in the whole world lives in Uttar Pradesh. It would be the fifth largest country if it were its own country. And if it were its own country, you’d probably know about its human development challenges, because it would have the highest neonatal mortality rate of any country except for South Sudan and Pakistan. Forty percent of children there are stunted. Only two-thirds of women are literate. So Uttar Pradesh is a place where there are lots of health challenges. "And then even within that, we’re working in a district called Bahraich, where about 4 million people live. So even that district of Uttar Pradesh is the size of a country, and if it were its own country, it would have a higher neonatal mortality rate than any other country. In other words, babies born in Bahraich district are more likely to die in their first month of life than babies born in any country around the world." — Dean Spears In today’s episode, host Luisa Rodriguez speaks to Dean Spears — associate professor of economics at the University of Texas at Austin and founding director of r.i.c.e. — about his experience implementing a surprisingly low-tech but highly cost-effective kangaroo mother care programme in Uttar Pradesh, India to save the lives of vulnerable newborn infants. Links to learn more, highlights, and full transcript. They cover: The shockingly high neonatal mortality rates in Uttar Pradesh, India, and how social inequality and gender dynamics contribute to poor health outcomes for both mothers and babies. The remarkable benefits for vulnerable newborns that come from skin-to-skin contact and breastfeeding support. The challenges and opportunities that come with working with a government hospital to implement new, evidence-based programmes. How the currently small programme might be scaled up to save more newborns’ lives in other regions of Uttar Pradesh and beyond. How targeted health interventions stack up against direct cash transfers. Plus, a sneak peak into Dean’s new book, which explores the looming global population peak that’s expected around 2080, and the consequences of global depopulation. And much more. Chapters: Why is low birthweight a major problem in Uttar Pradesh? (00:02:45) Neonatal mortality and maternal health in Uttar Pradesh (00:06:10) Kangaroo mother care (00:12:08) What would happen without this intervention? (00:16:07) Evidence of KMC’s effectiveness (00:18:15) Longer-term outcomes (00:32:14) GiveWell’s support and implementation challenges (00:41:13) How can KMC be so cost effective? (00:52:38) Programme evaluation (00:57:21) Is KMC is better than direct cash transfers? (00:59:12) Expanding the programme and what skills are needed (01:01:29) Fertility and population decline (01:07:28) What advice Dean would give his younger self (01:16:09) Producer and editor: Keiran Harris Audio engineering lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals (2:33:12)
"The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, 'Actually, we can push them further in these ways and these ways, and they still stay alive. And we’ve modelled out every possibility and we’ve found that it works.' I think another possibility, which I don’t understand as well, is that AI could lock in current moral values. And I think in particular there’s a risk that if AI is learning from what we do as humans today, the lesson it’s going to learn is that it’s OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there’s a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue." —Lewis Bollard In today’s episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today. Links to learn more, highlights, and full transcript. They cover: The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention. Work to improve farmed animal welfare that Open Philanthropy is excited about funding. The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done. The occasional tension between ending factory farming and curbing climate change How AI could transform factory farming for better or worse — and Lewis’s fears that the technology will just help us maximise cruelty in the name of profit. How Lewis has updated his opinions or grantmaking as a result of new research on the “moral weights” of different species . Lewis’s personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering. How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations. And much more. Chapters: Common objections to ending factory farming (00:13:21) Potential solutions (00:30:55) Cage-free reforms (00:34:25) Broiler chicken welfare (00:46:48) Do companies follow through on these commitments? (01:00:21) Fish welfare (01:05:02) Alternatives to animal proteins (01:16:36) Farm animal welfare in Asia (01:26:00) Farm animal welfare in Europe (01:30:45) Animal welfare science (01:42:09) Approaches Lewis is less excited about (01:52:10) Will we end factory farming in our lifetimes? (01:56:36) Effect of AI (01:57:59) Recent big wins for farm animals (02:07:38) How animal advocacy has changed since Lewis first got involved (02:15:57) Response to the Moral Weight Project (02:19:52) How to help (02:28:14) Producer and editor: Keiran Harris Audio engineering lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT (3:31:22)
Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is. As the author of the Substack Don’t Worry About the Vase , Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it. Links to learn more, summary, and full transcript. In today’s episode, host Rob Wiblin asks Zvi for his takes on: US-China negotiations Whether AI progress has stalled The biggest wins and losses for alignment in 2023 EU and White House AI regulations Which major AI lab has the best safety strategy The pros and cons of the Pause AI movement Recent breakthroughs in capabilities In what situations it’s morally acceptable to work at AI labs Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details. Zvi and Rob also talk about: The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be. The “ sleeper agent ” issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is. Why Zvi disagrees with 80,000 Hours’ advice about gaining career capital to have a positive impact. Zvi’s project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank ( Balsa Research ) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply. Why Zvi thinks that improving people’s prosperity and housing can make them care more about existential risks like AI. An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels . And plenty more. Chapters: Zvi’s AI-related worldview (00:03:41) Sleeper agents (00:05:55) Safety plans of the three major labs (00:21:47) Misalignment vs misuse vs structural issues (00:50:00) Should concerned people work at AI labs? (00:55:45) Pause AI campaign (01:30:16) Has progress on useful AI products stalled? (01:38:03) White House executive order and US politics (01:42:09) Reasons for AI policy optimism (01:56:38) Zvi’s day-to-day (02:09:47) Big wins and losses on safety and alignment in 2023 (02:12:29) Other unappreciated technical breakthroughs (02:17:54) Concrete things we can do to mitigate risks (02:31:19) Balsa Research and the Jones Act (02:34:40) The National Environmental Policy Act (02:50:36) Housing policy (02:59:59) Underrated rationalist worldviews (03:16:22) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Transcriptions and additional content editing: Katy Moore…
Career review reading: AI governance and policy (narrated by Cody Fenwick)
Today’s release is a reading of our career review of AI governance and policy , written and narrated by Cody Fenwick. Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks, and there are opportunities in the broad field of AI governance to positively shape how society responds to and prepares for the challenges posed by the technology. Given the high stakes, pursuing this career path could be many people’s highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them. If you want to check out the links, footnotes and figures in today’s article, you can find those here. Editing and audio proofing: Ben Cordell and Simon Monsour Narration: Cody Fenwick…
#183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more (2:36:38)
"When a friend comes to me with a decision, and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, 'I solved your problem.' What I’m trying to do often is give them other ways of thinking about what they’re doing, or giving different framings. A classic example of this would be someone who’s been working on a project for a long time and they feel really trapped by it. And someone says, 'Let’s suppose you currently weren’t working on the project, but you could join it. And if you joined, it would be exactly the state it is now. Would you join?' And they’d be like, 'Hell no!' It’s a reframe. It doesn’t mean you definitely shouldn’t join, but it’s a reframe that gives you a new way of looking at it." —Spencer Greenberg In today’s episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago. Links to learn more, summary, and full transcript. They cover: How much money makes you happy — and the tricky methodological issues that come up trying to answer that question. The importance of hype in making valuable things happen. How to recognise warning signs that someone is untrustworthy or likely to hurt you. Whether Registered Reports are successfully solving reproducibility issues in science. The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles. The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups. The potential harms of lightgassing, which is the opposite of gaslighting. How Spencer’s team used non-statistical methods to test whether astrology works. Whether there’s any social value in retaliation. And much more. Chapters: Does money make you happy? (00:05:54) Hype vs value (00:31:27) Warning signs that someone is bad news (00:41:25) Integrity and reproducibility in social science research (00:57:54) Personal principles (01:16:22) Decision-making errors (01:25:56) Lightgassing (01:49:23) Astrology (02:02:26) Game theory, tit for tat, and retaliation (02:20:51) Parenting (02:30:00) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Transcriptions: Katy Moore…
#182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more (2:21:31)
"[One] thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she’s willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics. "Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don’t. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered." — Bob Fischer In today’s episode, host Luisa Rodriguez speaks to Bob Fischer — senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals — about Rethink Priorities’s Moral Weight Project . Links to learn more, summary, and full transcript. They cover: The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals — and the limitations of that approach. Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions. The results that most surprised Bob. Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table. Thought experiments like Tortured Tim that test different philosophical assumptions about welfare. Confronting our own biases when estimating animal mental capacities and moral worth. The limitations of using neuron counts as a proxy for moral weights. How different types of risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation. And plenty more. Chapters: Welfare ranges (00:10:19) Historical assessments (00:16:47) Method (00:24:02) The present / absent approach (00:27:39) Results (00:31:42) Chickens (00:32:42) Bees (00:50:00) Salmon and limits of methodology (00:56:18) Octopuses (01:00:31) Pigs (01:27:50) Surprises about the project (01:30:19) Objections to the project (01:34:25) Alternative decision theories and risk aversion (01:39:14) Hedonism assumption (02:00:54) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#181 – Laura Deming on the science that could keep us healthy in our 80s and beyond (1:37:21)
"The question I care about is: What do I want to do? Like, when I'm 80, how strong do I want to be? OK, and then if I want to be that strong, how well do my muscles have to work? OK, and then if that's true, what would they have to look like at the cellular level for that to be true? Then what do we have to do to make that happen? In my head, it's much more about agency and what choice do I have over my health. And even if I live the same number of years, can I live as an 80-year-old running every day happily with my grandkids?" — Laura Deming In today’s episode, host Luisa Rodriguez speaks to Laura Deming — founder of The Longevity Fund — about the challenge of ending ageing. Links to learn more, summary, and full transcript. They cover: How lifespan is surprisingly easy to manipulate in animals, which suggests human longevity could be increased too. Why we irrationally accept age-related health decline as inevitable. The engineering mindset Laura takes to solving the problem of ageing. Laura’s thoughts on how ending ageing is primarily a social challenge, not a scientific one. The recent exciting regulatory breakthrough for an anti-ageing drug for dogs. Laura’s vision for how increased longevity could positively transform society by giving humans agency over when and how they age. Why this decade may be the most important decade ever for making progress on anti-ageing research. The beauty and fascination of biology, which makes it such a compelling field to work in. And plenty more. Chapters: The case for ending ageing (00:04:00) What might the world look like if this all goes well? (00:21:57) Reasons not to work on ageing research (00:27:25) Things that make mice live longer (00:44:12) Parabiosis, changing the brain, and organ replacement can increase lifespan (00:54:25) Big wins the field of ageing research (01:11:40) Talent shortages and other bottlenecks for ageing research (01:17:36) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#180 – Hugo Mercier on why gullibility and misinformation are overrated (2:36:55)
The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — ranking it ahead of war, environmental problems, and other threats from AI. And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what was going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies. But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday , Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people. Links to learn more, summary, and full transcript. In this interview, host Rob Wiblin and Hugo discuss: How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility. How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us. Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about. Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment. The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t. Why fake news and conspiracy theories actually have less impact than most people assume. False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why. And plenty more. Chapters: The view that humans are really gullible (00:04:26) The evolutionary argument against humans being gullible (00:07:46) Open vigilance (00:18:56) Intuitive and reflective beliefs (00:32:25) How people decide who to trust (00:41:15) Redefining beliefs (00:51:57) Bloodletting (01:00:38) Vaccine hesitancy and creationism (01:06:38) False beliefs without skin in the game (01:12:36) One consistent weakness in human judgement (01:22:57) Trying to explain harmful financial decisions (01:27:15) Astrology (01:40:40) Medical treatments that don’t work (01:45:47) Generative AI, LLMs, and persuasion (01:54:50) Ways AI could improve the information environment (02:29:59) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Transcriptions: Katy Moore…
#179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety (2:56:48)
Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain. From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool. So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all ? Today’s guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings , in which he sets out to try to resolve this paradox. Links to learn more, video, highlights, and full transcript. In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as: How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system. How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs. The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field. How working as both an academic and a practicing psychiatrist shaped Randy’s understanding of treating mental health problems. The “smoke detector principle” of why we experience so many false alarms along with true threats. The origins of morality and capacity for genuine love, and why Randy thinks it’s a mistake to try to explain these from a selfish gene perspective. Evolutionary theories on why we age and die. And much more. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Dominic Armstrong Transcriptions: Katy Moore…
#178 – Emily Oster on what the evidence actually says about pregnancy and parenting (2:22:36)
"I think at various times — before you have the kid, after you have the kid — it's useful to sit down and think about: What do I want the shape of this to look like? What time do I want to be spending? Which hours? How do I want the weekends to look? The things that are going to shape the way your day-to-day goes, and the time you spend with your kids, and what you're doing in that time with your kids, and all of those things: you have an opportunity to deliberately plan them. And you can then feel like, 'I've thought about this, and this is a life that I want. This is a life that we're trying to craft for our family, for our kids.' And that is distinct from thinking you're doing a good job in every moment — which you can't achieve. But you can achieve, 'I'm doing this the way that I think works for my family.'" — Emily Oster In today’s episode, host Luisa Rodriguez speaks to Emily Oster — economist at Brown University, host of the ParentData podcast, and the author of three hugely popular books that provide evidence-based insights into pregnancy and early childhood. Links to learn more, summary, and full transcript. They cover: Common pregnancy myths and advice that Emily disagrees with — and why you should probably get a doula. Whether it’s fine to continue with antidepressants and coffee during pregnancy. What the data says — and doesn’t say — about outcomes from parenting decisions around breastfeeding, sleep training, childcare, and more. Which factors really matter for kids to thrive — and why that means parents shouldn’t sweat the small stuff. How to reduce parental guilt and anxiety with facts, and reject judgemental “Mommy Wars” attitudes when making decisions that are best for your family. The effects of having kids on career ambitions, pay, and productivity — and how the effects are different for men and women. Practical advice around managing the tradeoffs between career and family. What to consider when deciding whether and when to have kids. Relationship challenges after having kids, and the protective factors that help. And plenty more. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps (2:47:09)
Back in December we spoke with Nathan Labenz — AI entrepreneur and host of The Cognitive Revolution Podcast — about the speed of progress towards AGI and OpenAI's leadership drama , drawing on Nathan's alarming experience red-teaming an early version of GPT-4 and resulting conversations with OpenAI staff and board members. Links to learn more, video, highlights, and full transcript. Today we go deeper, diving into: What AI now actually can and can’t do, across language and visual models, medicine, scientific research, self-driving cars, robotics, weapons — and what the next big breakthrough might be. Why most people, including most listeners, probably don’t know and can’t keep up with the new capabilities and wild results coming out across so many AI applications — and what we should do about that. How we need to learn to talk about AI more productively, particularly addressing the growing chasm between those concerned about AI risks and those who want to see progress accelerate, which may be counterproductive for everyone. Where Nathan agrees with and departs from the views of ‘AI scaling accelerationists.’ The chances that anti-regulation rhetoric from some AI entrepreneurs backfires. How governments could (and already do) abuse AI tools like facial recognition, and how militarisation of AI is progressing. Preparing for coming societal impacts and potential disruption from AI. Practical ways that curious listeners can try to stay abreast of everything that’s going on. And plenty more. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Transcriptions: Katy Moore…
#90 Classic episode – Ajeya Cotra on worldview diversification and how big the future could be (2:59:17)
You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?” You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours. But then you get up, walk outside, and look at the number on your box. ‘3’. Huh. Now you don’t know what to believe. If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928? In today’s interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning’ could be relevant for figuring out where we should direct our charitable giving. Rebroadcast: this episode was originally released in January 2021. Links to learn more, summary, and full transcript. Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘ longtermism ’ — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future. Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that’s both very large relative to what’s possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time. But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live. If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed. If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘ doomsday argument ‘ alone. If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we’re incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead. There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn’t work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants. In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely. 
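For readers who want to see the arithmetic the box thought experiment gestures at, here is a minimal sketch in Python (not from the episode). The likelihoods and the two update rules shown are illustrative assumptions: the first updates only on the label you observe, while the second also weights each world by how many observers it contains, which is exactly the kind of contested anthropic assumption the interview digs into.

```python
# Sketch of the box thought experiment described above (illustrative only).
# Heads: 10 boxes exist; tails: 10 billion boxes exist. You find yourself in box 3.

prior_heads = 0.5
prior_tails = 0.5

# Likelihood of seeing the label "3" on your box in each world,
# assuming your box is equally likely to be any of the boxes that exist.
p_label3_given_heads = 1 / 10
p_label3_given_tails = 1 / 10_000_000_000

# Update only on the observed label (no weighting by number of observers):
posterior_heads = (prior_heads * p_label3_given_heads) / (
    prior_heads * p_label3_given_heads + prior_tails * p_label3_given_tails
)
print(f"P(heads | you see box 3) = {posterior_heads:.9f}")  # ~0.999999999: the low label strongly favours the small world

# If you instead first weight each world by how many observers it contains
# (10 vs 10 billion), that weighting exactly cancels the rarity of any given
# label, and you land back at 50/50.
weight_heads = prior_heads * 10 * p_label3_given_heads                 # = 0.5
weight_tails = prior_tails * 10_000_000_000 * p_label3_given_tails     # = 0.5
print(f"P(heads | observer-weighted) = {weight_heads / (weight_heads + weight_tails):.2f}")
```

Which of those two answers is right is much of what the debate over the doomsday argument turns on.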
They also discuss: Which worldviews Open Phil finds most plausible, and how it balances them Which worldviews Ajeya doesn’t embrace but almost does How hard it is to get to other solar systems The famous ‘simulation argument’ When transformative AI might actually arrive The biggest challenges involved in working on big research reports What it’s like working at Open Phil And much more Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
#112 Classic episode – Carl Shulman on the common-sense case for existential risk work and its practical implications (3:50:30)
Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster. According to Carl Shulman, research associate at Oxford University’s Future of Humanity Institute , that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future. Rebroadcast: this episode was originally released in October 2021. Links to learn more, summary, and full transcript. The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs: The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American. So saving all US citizens at any given point in time would be worth $1,300 trillion. If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice ), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone. Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today. This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner , Larry Summers , and Cass Sunstein . If the case is clear enough, why hasn’t it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve? Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds. Carl suspects another reason is that it’s difficult for the average voter to estimate and understand how large these respective risks are, and what responses would be appropriate rather than self-serving. If the public doesn’t know what good performance looks like, politicians can’t be given incentives to do the right thing. It’s reasonable to assume that if we found out a giant asteroid were going to crash into the Earth one year from now, most of our resources would be quickly diverted into figuring out how to avert catastrophe. But even in the case of COVID-19, an event that massively disrupted the lives of everyone on Earth, we’ve still seen a substantial lack of investment in vaccine manufacturing capacity and other ways of controlling the spread of the virus, relative to what economists recommended. 
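To make the numbers concrete, here is a minimal sketch in Python of the back-of-the-envelope calculation above (not from the episode). The ~330 million population figure is an assumption, and the $2.2 trillion line is read as a 1% relative cut in the one-in-six risk, roughly from 16.7% to 16.5%.

```python
# Sketch of the back-of-the-envelope argument described above (illustrative only).
# The population figure and the "1% relative reduction" reading are assumptions.

value_per_life = 4e6        # up to $4M per US life, per agency cost-benefit practice
us_population = 330e6       # ~330 million Americans (assumption)
extinction_risk = 1 / 6     # Toby Ord's illustrative estimate for this century

value_all_lives = value_per_life * us_population      # ~$1.3 quadrillion ("$1,300 trillion")
expected_loss = extinction_risk * value_all_lives     # ~$220 trillion in expectation
worth_of_1pct_cut = 0.01 * expected_loss              # ~$2.2 trillion for a 1% relative cut

print(f"All US lives:             ${value_all_lives / 1e12:,.0f} trillion")
print(f"Expected extinction loss: ${expected_loss / 1e12:,.0f} trillion")
print(f"Value of a 1% risk cut:   ${worth_of_1pct_cut / 1e12:,.1f} trillion")
```

Carl's point is that a 1% relative reduction could plausibly be bought for far less than $2.2 trillion, which is where the benefit-to-cost ratio of over 1000:1 quoted above comes from.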
Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we’ve never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on. Today’s episode is in part our way of trying to improve this situation. In today’s wide-ranging conversation, Carl and Rob also cover: A few reasons Carl isn’t excited by ‘strong longtermism’ How x-risk reduction compares to GiveWell recommendations Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change The history of bioweapons Whether gain-of-function research is justifiable Successes and failures around COVID-19 The history of existential risk And much more Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
#111 Classic episode – Mushtaq Khan on using institutional economics to predict effective government reforms (3:22:17)
If you’re living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably ‘artisanal refining’ — or, in plain language, stealing oil from pipelines. The resulting oil spills damage the environment and cause severe health problems, but the Nigerian government has continually failed in their attempts to stop this theft. They send in the army, and the army gets corrupted. They send in enforcement agencies, and the enforcement agencies get corrupted. What’s happening here? According to Mushtaq Khan, economics professor at SOAS University of London, this is a classic example of ‘networked corruption’. Everyone in the community is benefiting from the criminal enterprise — so much so that the locals would prefer civil war to following the law. It pays vastly better than other local jobs, hotels and restaurants have formed around it, and houses are even powered by the electricity generated from the oil. Rebroadcast: this episode was originally released in September 2021. Links to learn more, summary, and full transcript. In today’s episode, Mushtaq elaborates on the models he uses to understand these problems and make predictions he can test in the real world. Some of the most important factors shaping the fate of nations are their structures of power: who is powerful, how they are organized, which interest groups can pull in favours with the government, and the constant push and pull between the country’s rulers and its ruled. While traditional economic theory has relatively little to say about these topics, institutional economists like Mushtaq have a lot to say, and participate in lively debates about which of their competing ideas best explain the world around us. The issues at stake are nothing less than why some countries are rich and others are poor, why some countries are mostly law abiding while others are not, and why some government programmes improve public welfare while others just enrich the well connected. Mushtaq’s specialties are anti-corruption and industrial policy, where he believes mainstream theory and practice are largely misguided. To root out fraud, aid agencies try to impose institutions and laws that work in countries like the U.K. today. Everyone nods their heads and appears to go along, but years later they find nothing has changed, or worse — the new anti-corruption laws are mostly just used to persecute anyone who challenges the country’s rulers. As Mushtaq explains, to people who specialise in understanding why corruption is ubiquitous in some countries but not others, this is entirely predictable. Western agencies imagine a situation where most people are law abiding, but a handful of selfish fat cats are engaging in large-scale graft. In fact in the countries they’re trying to change everyone is breaking some rule or other, or participating in so-called ‘corruption’, because it’s the only way to get things done and always has been. Mushtaq’s rule of thumb is that when the locals most concerned with a specific issue are invested in preserving a status quo they’re participating in, they almost always win out. To actually reduce corruption, countries like his native Bangladesh have to follow the same gradual path the U.K. once did: find organizations that benefit from rule-abiding behaviour and are selfishly motivated to promote it, and help them police their peers. Trying to impose a new way of doing things from the top down wasn’t how Europe modernised, and it won’t work elsewhere either. 
In cases like oil theft in Nigeria, where no one wants to follow the rules, Mushtaq says corruption may be impossible to solve directly. Instead you have to play a long game, bringing in other employment opportunities, improving health services, and deploying alternative forms of energy — in the hope that one day this will give people a viable alternative to corruption. In this extensive interview Rob and Mushtaq cover this and much more, including: How does one test theories like this? Why are companies in some poor countries so much less productive than their peers in rich countries? Have rich countries just legalized the corruption in their societies? What are the big live debates in institutional economics? Should poor countries protect their industries from foreign competition? Where has industrial policy worked, and why? How can listeners use these theories to predict which policies will work in their own countries? Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
2023 Mega-highlights Extravaganza
Happy new year! We've got a different kind of holiday release for you today. Rather than a 'classic episode,' we've put together one of our favourite highlights from each episode of the show that came out in 2023 . That's 32 of our favourite ideas packed into one episode that's so bursting with substance it might be more than the human mind can safely handle. There's something for everyone here: Ezra Klein on punctuated equilibrium Tom Davidson on why AI takeoff might be shockingly fast Johannes Ackva on political action versus lifestyle changes Hannah Ritchie on how buying environmentally friendly technology helps low-income countries Bryan Caplan on rational irrationality on the part of voters Jan Leike on whether the release of ChatGPT increased or reduced AI extinction risks Athena Aktipis on why elephants get deadly cancers less often than humans Anders Sandberg on the lifespan of civilisations Nita Farahany on hacking neural interfaces ...plus another 23 such gems. And they're in an order that our audio engineer Simon Monsour described as having an "eight-dimensional-tetris-like rationale." I don't know what the hell that means either, but I'm curious to find out. And remember: if you like these highlights, note that we release 20-minute highlights reels for every new episode over on our sister feed, which is called 80k After Hours . So even if you're struggling to make time to listen to every single one, you can always get some of the best bits of our episodes. We hope for all the best things to happen for you in 2024, and we'll be back with a traditional classic episode soon. This Mega-highlights Extravaganza was brought to you by Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong…
#100 Classic episode – Having a successful career with depression, anxiety, and imposter syndrome (2:51:32)
Today’s episode is one of the most remarkable and really, unique, pieces of content we’ve ever produced (and I can say that because I had almost nothing to do with making it!). The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career. While depression, anxiety, ADHD and other problems are extremely common, it’s rare for people to offer detailed insight into their thoughts and struggles — and even rarer for someone as perceptive as Howie to do so. Rebroadcast: this episode was originally released in May 2021. Links to learn more, summary, and full transcript. The first half of this conversation is a searingly honest account of Howie’s story, including losing a job he loved due to a depressed episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today. The second half covers Howie’s advice. Conventional wisdom on mental health can be really focused on cultivating willpower — telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort. Howie tries his best to be a corrective to this misguided attitude and pragmatically focus on what actually matters — doing whatever will help you get better. Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes. If you’re in a hurry, we’ve extracted the key advice that Howie has to share in a section below . Howie and Keiran basically treated it like a private conversation, with the understanding that it may be too sensitive to release. But, after getting some really positive feedback, they’ve decided to share it with the world. Here are a few quotes from early reviewers: "I think there’s a big difference between admitting you have depression/seeing a psych and giving a warts-and-all account of a major depressive episode like Howie does in this episode… His description was relatable and really inspiring." Someone who works on mental health issues said: "This episode is perhaps the most vivid and tangible example of what it is like to experience psychological distress that I’ve ever encountered. Even though the content of Howie and Keiran’s discussion was serious, I thought they both managed to converse about it in an approachable and not-overly-somber way." And another reviewer said: "I found Howie’s reflections on what is actually going on in his head when he engages in negative self-talk to be considerably more illuminating than anything I’ve heard from my therapist." We also hope that the episode will: Help people realise that they have a shot at making a difference in the future, even if they’re experiencing (or have experienced in the past) mental illness, self doubt, imposter syndrome, or other personal obstacles. Give insight into what it’s like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and know how to react more helpfully. 
Several early listeners have even made specific behavioral changes due to listening to the episode — including people who generally have good mental health but were convinced it’s well worth the low cost of setting up a plan in case they have problems in the future. So we think this episode will be valuable for: People who have experienced mental health problems or might in future; People who have had troubles with stress, anxiety, low mood, low self esteem, imposter syndrome and similar issues, even if their experience isn’t well described as ‘mental illness’; People who have never experienced these problems but want to learn about what it’s like, so they can better relate to and assist family, friends or colleagues who do. In other words, we think this episode could be worthwhile for almost everybody. Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts. If you don’t want to hear or read the most intense section, you can skip the chapter called ‘Disaster’. And if you’d rather avoid almost all of these references, you could skip straight to the chapter called ‘80,000 Hours’. We’ve collected a large list of high quality resources for overcoming mental health problems in our links section . If you’re feeling suicidal or have thoughts of harming yourself right now, there are suicide hotlines at National Suicide Prevention Lifeline in the US (800-273-8255) and Samaritans in the UK (116 123). You may also want to find and save a number for a local service where possible. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models (3:46:52)
OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do that safely? That’s the central theme of today’s episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast. Links to learn more, video, highlights, and full transcript. Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI’s “red team” that probed GPT-4 to find ways it could be abused, long before it was public. Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month. Nathan’s view: it’s complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view. When he started on the GPT-4 red team, the model would do anything from diagnose a skin condition to plan a terrorist attack without the slightest reservation or objection. When later shown a “Safety” version of GPT-4 that was almost the same, he approached a member of OpenAI’s board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion. In today’s episode, we share this story as Nathan told it on his own show, The Cognitive Revolution , which he did in the hope that it would provide useful background to understanding the OpenAI board’s reservations about Sam Altman, which to this day have not been laid out in any detail. But while he feared throughout 2022 that OpenAI and Sam Altman didn’t understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI. Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan was seeing. Sam Altman and other leaders at OpenAI seem to sincerely believe they’re playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI’s decision to release GPT-4 when it did was for the best. On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They’ve also invested major resources into new ‘Superalignment’ and ‘Preparedness’ teams, while avoiding using competition with China as an excuse for recklessness. At the same time, it’s very hard to know whether it’s all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity. 
Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we’re confident we want, which we can prove will remain safe as its capabilities get ever greater. By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI’s research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they’re also better placed than maybe anyone in the world to assess if the company’s strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders. In today’s extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also: Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan’s interactions with the board when he raised concerns from his red teaming efforts. Which AI applications we should be urgently rolling out, with less worry about safety. Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments. Whether AI capabilities are advancing faster than safety efforts and controls. The costs and benefits of releasing powerful models like GPT-4. Nathan’s view on the game theory of AI arms races and China. Whether it’s worth taking some risk with AI for huge potential upside. The need for more “AI scouts” to understand and communicate AI progress. And plenty more. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Milo McGuire and Dominic Armstrong Transcriptions: Katy Moore…
#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child (2:14:08)
Lead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child that regularly plays there lead poisoning. For life they’ll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects. We’ve known lead is a health nightmare for at least 50 years, and that got lead out of car fuel everywhere. So is the situation under control? Not even close. Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children’s intellectual potential, health, and life expectancy is vast — the health damage involved is around that caused by malaria, tuberculosis, and HIV combined. This week’s guest, Lucia Coulter — cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) — speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced. Links to learn more, summary, and full transcript. Various estimates suggest the work is absurdly cost effective. LEEP is in expectation preventing kids from getting lead poisoning for under $2 per child (explore the analysis here ). Or, looking at it differently, LEEP is saving a year of healthy life for $14 , and in the long run is increasing people’s lifetime income anywhere from $300–1,200 for each $1 it spends, by preventing intellectual stunting. Which raises the question: why hasn’t this happened already? How is lead still in paint in most poor countries, even when that’s oftentimes already illegal? And how is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? When leaded paint is gone, what should they target next? With host Robert Wiblin, Lucia answers all those questions and more: Why LEEP isn’t fully funded, and what it would do with extra money (you can donate here ). How bad lead poisoning is in rich countries. Why lead is still in aeroplane fuel. How lead got put straight in food in Bangladesh, and a handful of people got it removed. Why the enormous damage done by lead mostly goes unnoticed. The other major sources of lead exposure aside from paint. Lucia’s story of founding a highly effective nonprofit, despite having no prior entrepreneurship experience, through Charity Entrepreneurship’s Incubation Program . Why Lucia pledges 10% of her income to cost-effective charities. Lucia’s take on why GiveWell didn’t support LEEP earlier on. How the invention of cheap, accessible lead testing for blood and consumer products would be a game changer. Generalisable lessons LEEP has learned from coordinating with governments in poor countries. And plenty more. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Milo McGuire and Dominic Armstrong Transcriptions: Katy Moore…
#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers (2:00:31)
"It will change everything: it will change our workplaces, it will change our interactions with the government, it will change our interactions with each other. It will make all of us unwitting neuromarketing subjects at all times, because at every moment in time, when you’re interacting on any platform that also has issued you a multifunctional device where they’re looking at your brainwave activity, they are marketing to you, they’re cognitively shaping you. "So I wrote the book as both a wake-up call, but also as an agenda-setting: to say, what do we need to do, given that this is coming? And there’s a lot of hope, and we should be able to reap the benefits of the technology, but how do we do that without actually ending up in this world of like, 'Oh my god, mind reading is here. Now what?'" — Nita Farahany In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology. Links to learn more, summary, and full transcript. They cover: How close we are to actual mind reading. How hacking neural interfaces could cure depression. How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations. How close we are to being able to unlock our phones by singing a song in our heads. How neurodata has been used for interrogations, and even criminal prosecutions. The possibility of linking brains to the point where you could experience exactly the same thing as another person. Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind. And plenty more. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe (2:38:20)
"We do have a tendency to anthropomorphise nonhumans — which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial — which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual. "But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that — plus our speciesism, plus a lot of other biases and forms of ignorance that we have — gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism." — Jeff Sebo In today’s episode, host Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds. Links to learn more, highlights, and full transcript. They cover: The non-negligible chance that AI systems will be sentient by 2030 What AI systems might want and need, and how that might affect our moral concepts What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote? What kind of legal and political status should AI systems have? Legal personhood? Political citizenship? What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other? The repugnant conclusion and the rebugnant conclusion The experience of trying to build the field of AI welfare What improv comedy can teach us about doing good in the world And plenty more. Chapters: Cold open (00:00:00) Luisa's intro (00:01:00) The interview begins (00:02:45) We should extend moral consideration to some AI systems by 2030 (00:06:41) A one-in-1,000 threshold (00:15:23) What does moral consideration mean? (00:24:36) Hitting the threshold by 2030 (00:27:38) Is the threshold too permissive? (00:38:24) The Rebugnant Conclusion (00:41:00) A world where AI experiences could matter more than human experiences (00:52:33) Should we just accept this argument? (00:55:13) Searching for positive-sum solutions (01:05:41) Are we going to sleepwalk into causing massive amounts of harm to AI systems? (01:13:48) Discourse and messaging (01:27:17) What will AI systems want and need? (01:31:17) Copies of digital minds (01:33:20) Connected minds (01:40:26) Psychological connectedness and continuity (01:49:58) Assigning responsibility to connected minds (01:58:41) Counting the wellbeing of connected minds (02:02:36) Legal personhood and political citizenship (02:09:49) Building the field of AI welfare (02:24:03) What we can learn from improv comedy (02:29:29) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Dominic Armstrong and Milo McGuire Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#172 – Bryan Caplan on why you should stop reading the news (2:23:22)
Is following important political and international news a civic duty — or is it our civic duty to avoid it? It's common to think that 'staying informed' and checking the headlines every day is just what responsible adults do. But in today's episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless. Links to learn more, summary, and full transcript. In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including: That it overwhelmingly provides us with information we can't usefully act on. That it's very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant. That it obscures the big picture, falling into the trap of thinking 'something important happens every day.' That it's highly addictive, for many people chewing up 10% or more of their waking hours. That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought. And plenty more. Bryan and Rob conclude that if you want to understand the world, you're better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter — or better yet, go meet them in person. In the second half of the episode, Bryan and Rob cover: Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there's a meaningful chance of it going terribly. Bryan’s case that rational irrationality on the part of voters leads to many very harmful policy decisions. How to allocate resources in space. Bryan's experience homeschooling his kids. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Transcriptions: Katy Moore…
#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures (1:46:14)
"Rare events can still cause catastrophic accidents. The concern that has been raised by experts going back over time, is that really, the more of these experiments, the more labs, the more opportunities there are for a rare event to occur — that the right pathogen is involved and infects somebody in one of these labs, or is released in some way from these labs. And what I chronicle in Pandora's Gamble is that there have been these previous outbreaks that have been associated with various kinds of lab accidents. So this is not a theoretical thing that can happen: it has happened in the past." — Alison Young In today’s episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future. Links to learn more, summary, and full transcript . They cover: The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting The Dugway life science test facility case, where live anthrax was accidentally sent to labs across the US and several other countries over a period of many years The time the Soviets had a major anthrax leak, and then hid it for over a decade The 1977 influenza pandemic caused by vaccine trial gone wrong in China The last death from smallpox, caused not by the virus spreading in the wild, but by a lab leak in the UK Ways we could get more reliable oversight and accountability for these labs And the investigative work Alison’s most proud of Chapters: Cold open (00:00:00) Luisa's intro (00:01:13) Investigating leaks at the CDC (00:05:16) High-profile CDC accidents (00:16:13) Dugway live anthrax accidents (00:32:08) Soviet anthrax leak (00:44:41) The 1977 influenza pandemic (00:53:43) The last death from smallpox (00:59:27) How common are lab leaks? (01:09:05) Improving the regulation of dangerous biological research (01:18:36) Potential solutions (01:34:55) The investigative work Alison’s most proud of (01:40:33) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down (2:57:46)
"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world. "That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." — Santosh Harish In today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution. Links to learn more, summary, and full transcript. They cover: How bad air pollution is for our health and life expectancy The different kinds of harm that particulate pollution causes The strength of the evidence that it damages our brain function and reduces our productivity Whether it was a mistake to switch our attention to climate change and away from air pollution Whether most listeners to this show should have an air purifier running in their house right now Where air pollution in India is worst and why, and whether it's going up or down Where most air pollution comes from The policy blunders that led to many sources of air pollution in India being effectively unregulated Why indoor air pollution packs an enormous punch The politics of air pollution in India How India ended up spending a lot of money on outdoor air purifiers The challenges faced by foreign philanthropists in India Why Santosh has made the grants he has so far And plenty more Chapters: Cold open (00:00:00) Rob's intro (00:01:07) How bad is air pollution? (00:03:41) Quantifying the scale of the damage (00:15:47) Effects on cognitive performance and mood (00:24:19) How do we really know the harms are as big as is claimed? (00:27:05) Misconceptions about air pollution (00:36:56) Why don’t environmental advocacy groups focus on air pollution? (00:42:22) How listeners should approach air pollution in their own lives (00:46:58) How bad is air pollution in India in particular (00:54:23) The trend in India over the last few decades (01:12:33) Why aren’t people able to fix these problems? (01:24:17) Household waste burning (01:35:06) Vehicle emissions (01:42:10) The role that courts have played in air pollution regulation in India (01:50:09) Industrial emissions (01:57:10) The political economy of air pollution in northern India (02:02:14) Can philanthropists drive policy change? (02:13:42) Santosh’s grants (02:29:45) Examples of other countries that have managed to greatly reduce air pollution (02:45:44) Career advice for listeners in India (02:51:11) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Transcriptions: Katy Moore…
#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels (1:47:56)
"One of our earliest supporters and a dear friend of mine, Mark Lampert, once said to me, “The way I think about it is, imagine that this money were already in the hands of people living in poverty. If I could, would I want to tax it and then use it to finance other projects that I think would benefit them?” I think that's an interesting thought experiment -- and a good one -- to say, “Are there cases in which I think that's justifiable?” — Paul Niehaus In today’s episode, host Luisa Rodriguez interviews Paul Niehaus — co-founder of GiveDirectly — on the case for giving unconditional cash to the world's poorest households. Links to learn more, summary and full transcript. They cover: The empirical evidence on whether giving cash directly can drive meaningful economic growth How the impacts of GiveDirectly compare to USAID employment programmes GiveDirectly vs GiveWell’s top-recommended charities How long-term guaranteed income affects people's risk-taking and investments Whether recipients prefer getting lump sums or monthly instalments How GiveDirectly tackles cases of fraud and theft The case for universal basic income, and GiveDirectly’s UBI studies in Kenya, Malawi, and Liberia The political viability of UBI Plenty more Chapters: Cold open (00:00:00) Luisa’s intro (00:00:58) The basic case for giving cash directly to the poor (00:03:28) Comparing GiveDirectly to USAID programmes (00:15:42) GiveDirectly vs GiveWell’s top-recommended charities (00:35:16) Cash might be able to drive economic growth (00:41:59) Fraud and theft of GiveDirectly funds (01:09:48) Universal basic income studies (01:22:33) Skyjo (01:44:43) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Dominic Armstrong and Milo McGuire Additional content editing: Luisa Rodriguez and Katy Moore Transcriptions: Katy Moore…
#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion (2:43:55)
"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't. What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence. Links to learn more, summary and full transcript. They cover: Some crazy anomalies in the historical record of civilisational progress Whether we should think about technology from an evolutionary perspective Whether we ought to expect war to make a resurgence or continue dying out Why we can't end up living like The Jetsons Whether stagnation or cyclical recurring futures seem very plausible What it means that the rate of increase in the economy has been increasing Whether violence is likely between humans and powerful AI systems The most likely reasons for Rob and Ian to be really wrong about all of this How professional historians react to this sort of talk The future of Ian’s work Plenty more Chapters: Cold open (00:00:00) Rob’s intro (00:01:27) Why we should expect the future to be wild (00:04:08) How historians have reacted to the idea of radically different futures (00:21:20) Why we won’t end up in The Jetsons (00:26:20) The rise of machine intelligence (00:31:28) AI from an evolutionary point of view (00:46:32) Is violence likely between humans and powerful AI systems? (00:59:53) Most troubling objections to this approach in Ian’s view (01:28:20) Confronting anomalies in the historical record (01:33:10) The cyclical view of history (01:56:11) Is stagnation plausible? (02:01:38) The limit on how long this growth trend can continue (02:20:57) The future of Ian’s work (02:37:17) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Milo McGuire Transcriptions: Katy Moore…
#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption (1:54:49)
"There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we're just so early on with alternative proteins and there's so much white space, it's actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology — which, fundamentally, is just quite inefficient. You're feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that. Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system just is so wasteful. And the limiting factor is that you're just growing a bunch of food to then feed a third of the world's crops directly to animals, where the vast majority of those calories going in are lost to animals existing." — Seren Kell Links to learn more, summary and full transcript. In today’s episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food Institute Europe — about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products. They cover: The basic case for alternative proteins, and why they’re so hard to make Why fermentation is a surprisingly promising technology for creating delicious alternative proteins The main scientific challenges that need to be solved to make fermentation even more useful The progress that’s been made on the cultivated meat front, and what it will take to make cultivated meat affordable How GFI Europe is helping with some of these challenges How people can use their careers to contribute to replacing factory farming with alternative proteins The best part of Seren’s job Plenty more Chapters: Cold open (00:00:00) Luisa’s intro (00:01:08) The interview begins (00:02:22) Why alternative proteins? (00:02:36) What makes alternative proteins so hard to make? (00:11:30) Why fermentation is so exciting (00:24:23) The technical challenges involved in scaling fermentation (00:44:38) Progress in cultivated meat (01:06:04) GFI Europe’s work (01:32:47) Careers (01:45:10) The best part of Seren’s job (01:50:07) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Dominic Armstrong and Milo McGuire Additional content editing: Luisa Rodriguez and Katy Moore Transcriptions: Katy Moore…
#166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere (3:08:49)
"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions. My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden we have, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum Collins In today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins. Links to learn more, highlights, and full transcript. They cover: How AI could strengthen government capacity, and how that's a double-edged sword How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren't there To what extent policymakers take different threats from AI seriously Whether the US and China are in an AI arms race or not Whether it's OK to transform the world without much of the world agreeing to it The tyranny of small differences in AI policy Disagreements between different schools of thought in AI policy, and proposals that could unite them How the US AI Bill of Rights could be improved Whether AI will transform the labour market, and whether it will become a partisan political issue The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them What listeners might be able to do to help with this whole mess Panpsychism Plenty more Chapters: Cold open (00:00:00) Rob's intro (00:01:00) The interview begins (00:04:01) The risk of autocratic lock-in due to AI (00:10:02) The state of play in AI policymaking (00:13:40) China and AI (00:32:12) The most promising regulatory approaches (00:57:51) Transforming the world without the world agreeing (01:04:44) AI Bill of Rights (01:17:32) Who’s ultimately responsible for the consequences of AI? (01:20:39) Policy ideas that could appeal to many different groups (01:29:08) Tension between those focused on x-risk and those focused on AI ethics (01:38:56) Communicating with policymakers (01:54:22) Is AI going to transform the labour market in the next few years? (01:58:51) Is AI policy going to become a partisan political issue? (02:08:10) The value of political philosophy (02:10:53) Tantum’s work at DeepMind (02:21:20) CSET (02:32:48) Career advice (02:35:21) Panpsychism (02:55:24) Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Transcriptions: Katy Moore…
#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe (2:48:33)
"Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody's sitting on Mars and you're going to war against them, it's very hard to hit them. You don't have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it's going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it's actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you're in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast. So my general conclusion has been that war looks unlikely on some size scales but not on others." — Anders Sandberg In today’s episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics. Links to learn more, summary and full transcript. They cover: The epic new book Anders is working on, and whether he’ll ever finish it Whether there's a best possible world or we can just keep improving forever What wars might look like if the galaxy is mostly settled The impediments to AI or humans making it to other stars How the universe will end a million trillion years in the future Whether it’s useful to wonder about whether we’re living in a simulation The grabby aliens theory Whether civilizations get more likely to fail the older they get The best way to generate energy that could ever exist Black hole bombs Whether superintelligence is necessary to get a lot of value The likelihood that life from elsewhere has already visited Earth And plenty more. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Transcriptions: Katy Moore…
#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives (3:03:42)
"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already. And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" — Kevin Esvelt In today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons. Links to learn more, summary and full transcript. They cover: Why it makes sense to focus on deliberately released pandemics Case studies of people who actually wanted to kill billions of humans How many people have the technical ability to produce dangerous viruses The different threats of stealth and wildfire pandemics that could crash civilisation The potential for AI models to increase access to dangerous pathogens Why scientists try to identify new pandemic-capable pathogens, and the case against that research Technological solutions, including UV lights and advanced PPE Using CRISPR-based gene drive to fight diseases and reduce animal suffering And plenty more. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
Today’s release is a reading of our Great power conflict problem profile, written and narrated by Stephen Clare. If you want to check out the links, footnotes and figures in today’s article, you can find those here. And if you like this article, you might enjoy a couple of related episodes of this podcast: #128 – Chris Blattman on the five reasons wars happen and #140 – Bear Braumoeller on the case that war isn’t in decline. Audio mastering and editing for this episode: Dominic Armstrong Audio Engineering Lead: Ben Cordell Producer: Keiran Harris…
#163 – Toby Ord on the perils of maximising the good that you do (3:07:08)
Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more? But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.” Links to learn more, summary and full transcript. Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes. Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things. This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible. Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects. But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those which were so costly they were loath to consider them before. To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids. Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed was only increased by a tiny amount, while everything else they were accomplishing dropped off a cliff. The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error. As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data. In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.
Toby and Rob also discuss: The rise and fall of FTX and some of its impacts What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground What utilitarianism has going for it, and what's wrong with it in Toby's view How to mathematically model the importance of personal integrity Which AI labs Toby thinks have been acting more responsibly than others How having a young child affects Toby’s feelings about AI risk Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial How Toby ended up being the source of the highest quality images of the Earth from space Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour Transcriptions: Katy Moore…
An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon and on Audible. If you know someone who might find our career guide helpful, you can get a free copy sent to them by going to 80000hours.org/gift.
#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI (59:34)
Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT. But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us. Links to learn more, summary and full transcript. On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more -- or indeed, even have the option to do so. And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point -- in eight, ten, or twelve years -- it will become an entirely legitimate concern, and says that we need to be planning ahead. In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' -- that is to say, for limiting the negative and unforeseen consequences of emerging technologies: 1. Developing an Apollo programme for technical AI safety 2. Instituting capability audits for AI models 3. Buying time by exploiting hardware choke points 4. Getting critics involved in directly engineering AI models 5. Getting AI labs to be guided by motives other than profit 6. Radically increasing governments’ understanding of AI and their capabilities to sensibly regulate it 7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities 8. Building a self-critical culture in AI labs of openly accepting when the status quo isn't working 9. Creating a mass public movement that understands AI and can demand the necessary controls 10. Not relying too much on delay, but instead seeking to move into a new somewhat-stable equilibrium As Mustafa put it, "AI is a technology with almost every use case imaginable" and that will demand that, in time, we rethink everything. Rob and Mustafa discuss the above, as well as: Whether we should be open sourcing AI models Whether Mustafa's policy views are consistent with his timelines for transformative AI How people with very different views on these issues get along at AI labs The failed efforts (so far) to get a wider range of people involved in these decisions Whether it's dangerous for Mustafa's new company to be training far larger models than GPT-4 Whether we'll be blown away by AI progress over the next year What mandatory regulations government should be imposing on AI labs right now Appropriate priorities for the UK's upcoming AI safety summit Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app.
Or read the transcript. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Milo McGuire Transcriptions: Katy Moore…
#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite (3:30:32)
"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892. However, the number of human manual operators peaked in 1920 -- 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they've invented the complete automation of this thing that they're employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn't stop existing until I think like 1980. So it takes 90 years from the invention of full automation to the full adoption of it in a single company that's a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is why?" — Michael Webb In today’s episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people's jobs and the labour market. Links to learn more, summary and full transcript. They cover: The jobs most and least exposed to AI Whether we’ll we see mass unemployment in the short term How long it took other technologies like electricity and computers to have economy-wide effects Whether AI will increase or decrease inequality Whether AI will lead to explosive economic growth What we can we learn from history, and reasons to think this time is different Career advice for a world of LLMs Why Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involved Michael's take as a musician on AI-generated music And plenty more If you'd like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he's now hiring! Check out Quantum Leap's website . Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Milo McGuire and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment (2:36:42)
"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." — Hannah Ritchie In today’s episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism. Links to learn more, summary and full transcript. They cover: Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could get Her new book about how we could be the first generation to build a sustainable planet Whether climate change is the most worrying environmental issue How we reduced outdoor air pollution Why Hannah is worried about the state of biodiversity Solutions that address multiple environmental issues at once How the world coordinated to address the hole in the ozone layer Surprises from Our World in Data’s research Psychological challenges that come up in Hannah’s work And plenty more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Milo McGuire and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less (2:51:20)
In July, OpenAI announced a new team and project: Superalignment . The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort. Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." Links to learn more, summary and full transcript. Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it’s not just throwing compute at the problem -- it’s also hiring dozens of scientists and engineers to build out the Superalignment team. Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains: Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on... and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did. Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain. The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy -- as one person described it, “like using one fire to put out another fire.” But Jan’s thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves. And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep. Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities. Jan thinks it's so crazy it just might work. But some critics think it's simply crazy. They ask a wide range of difficult questions, including: If you don't know how to solve alignment, how can you tell that your alignment assistant AIs are actually acting in your interest rather than working against you? Especially as they could just be pretending to care about what you care about. How do you know that these technical problems can be solved at all, even in principle? 
At the point that models are able to help with alignment, won't they also be so good at improving capabilities that we're in the middle of an explosion in what AI can do? In today's interview host Rob Wiblin puts these doubts to Jan to hear how he responds to each, and they also cover: OpenAI's current plans to achieve 'superalignment' and the reasoning behind them Why alignment work is the most fundamental and scientifically interesting research in ML The kinds of people he’s excited to hire to join his team and maybe save the world What most readers misunderstood about the OpenAI announcement The three ways Jan expects AI to help solve alignment: mechanistic interpretability, generalization, and scalable oversight What the standard should be for confirming whether Jan's team has succeeded Whether OpenAI should (or will) commit to stop training more powerful general models if they don't think the alignment problem has been solved Whether Jan thinks OpenAI has deployed models too quickly or too slowly The many other actors who also have to do their jobs really well if we're going to have a good AI future Plenty more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript. Producer and editor: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore…
We now offer shorter 'interview highlights' episodes (6:10)
Over on our other feed, 80k After Hours , you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren’t necessarily the most important parts of the interview, and if a topic matters to you we do recommend listening to the full episode — but we think these will be a nice upgrade on skipping episodes entirely. Get these highlight episodes by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app. Highlights put together by Simon Monsour and Milo McGuire…
#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk (3:13:33)
Back in 2007, Holden Karnofsky cofounded GiveWell , where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy , where he oversaw a team making billions of dollars’ worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well. In today's conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky, to making AI risks the focus of his work. Links to learn more, summary and full transcript. (As Holden reminds us, his wife is also the president of one of the world's top AI labs, Anthropic , giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours' largest financial supporter.) One point he makes is that people are too narrowly focused on AI becoming 'superintelligent.' While that could happen and would be important, it's not necessary for AI to be transformative or perilous. Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought. As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security. In today’s episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as: Why we can’t rely on just gradually solving those problems as they come up, the way we usually do with new technologies. What multiple different groups can do to improve our chances of a good outcome — including listeners to this show, governments, computer security experts, and journalists. Holden’s case against 'hardcore utilitarianism' and what actually motivates him to work hard for a better world. What the ML and AI safety communities get wrong in Holden's view. Ways we might succeed with AI just by dumb luck. The value of laying out imaginable success stories. Why information security is so important and underrated. Whether it's good to work at an AI lab that you think is particularly careful. The track record of futurists’ predictions. And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript. Producer: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Transcriptions: Katy Moore…
#157 – Ezra Klein on existential risk from AI and what DC could do about it (1:18:46)
In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that. In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licencing scheme is created. Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years. Links to learn more, summary and full transcript. Like many people he has also taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable. Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work. By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research. From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs. In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as: Whether it's desirable to slow down AI research The value of engaging with current policy debates even if they don't seem directly important Which AI business models seem more or less dangerous Tensions between people focused on existing vs emergent risks from AI Two major challenges of being a new parent Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Milo McGuire Transcriptions: Katy Moore…
#156 – Markus Anderljung on how to regulate cutting-edge AI models (2:06:36)
"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it. And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." — Markus Anderljung In today’s episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems. Links to learn more, summary and full transcript. They cover: The need for AI governance, including self-replicating models and ChaosGPT Whether or not AI companies will willingly accept regulation The key regulatory strategies including licencing, risk assessment, auditing, and post-deployment monitoring Whether we can be confident that people won't train models covertly and ignore the licencing system The progress we’ve made so far in AI governance The key weaknesses of these approaches The need for external scrutiny of powerful models The emergent capabilities problem Why it really matters where regulation happens Advice for people wanting to pursue a career in this field And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Audio Engineering Lead: Ben Cordell Technical editing: Simon Monsour and Milo McGuire Transcriptions: Katy Moore…
Bonus: The Worst Ideas in the History of the World (35:24)
Today’s bonus release is a pilot for a new podcast called ‘The Worst Ideas in the History of the World’, created by Keiran Harris — producer of the 80,000 Hours Podcast. If you have strong opinions about this one way or another, please email us at podcast@80000hours.org to help us figure out whether more of this ought to exist. Chapters: Rob’s intro (00:00:00) The Worst Ideas in the History of the World (00:00:51) My history with longtermism (00:04:01) Outlining the format (00:06:17) Will MacAskill’s basic case (00:07:38) 5 reasons for why future people might not matter morally (00:10:26) Whether we can reasonably hope to influence the future (00:15:53) Great power wars (00:18:55) Nuclear weapons (00:22:27) Gain-of-function research (00:28:31) Closer (00:33:02) Rob's outro (00:35:13)…
#155 – Lennart Heim on the compute governance era and what has to come after (3:12:43)
As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community. With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China. But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands? In today's interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot. Links to learn more, summary and full transcript. As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data. If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources -- usually called 'compute' -- might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can't be used by multiple people at once and come from a small number of sources. According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training 'frontier' AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices. We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with? But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren't convinced of the seriousness of the problem. Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer. By that point, tracking every aggregation of compute that could prove to be very dangerous would be both impractical and invasive. With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this 'compute governance era', but not for very long. If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point? Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. 
But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what's happening -- let alone participate -- humans would have to be cut out of any defensive decision-making. Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring. Lennart and Rob discuss the above as well as: How can we best categorise all the ways AI could go wrong? Why did the US restrict the export of some chips to China and what impact has that had? Is the US in an 'arms race' with China or is that more an illusion? What is the deal with chips specialised for AI applications? How is the 'compute' industry organised? Downsides of using compute as a target for regulations Could safety mechanisms be built into computer chips themselves? Who would have the legal authority to govern compute if some disaster made it seem necessary? The reasons Rob doubts that any of this stuff will work Could AI be trained to operate as a far more severe computer worm than any we've seen before? What does the world look like when sluggish human reaction times leave us completely outclassed? And plenty more Chapters: Rob’s intro (00:00:00) The interview begins (00:04:35) What is compute exactly? (00:09:46) Structural risks (00:13:25) Why focus on compute? (00:21:43) Weaknesses of targeting compute (00:30:41) Chip specialisation (00:37:11) Export restrictions (00:40:13) Compute governance is happening (00:59:00) Reactions to AI regulation (01:05:03) Creating legal authority to intervene quickly (01:10:09) Building mechanisms into chips themselves (01:18:57) Rob not buying that any of this will work (01:39:28) Are we doomed to become irrelevant? (01:59:10) Rob’s computer security bad dreams (02:10:22) Concrete advice (02:26:58) Article reading: Information security in high-impact areas (02:49:36) Rob’s outro (03:10:38) Producer: Keiran Harris Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell Transcriptions: Katy Moore…
1 #154 - Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters 3:09:42
Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they're worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still. Today's guest — machine learning researcher Rohin Shah — goes into the Google DeepMind offices each day with that peculiar backdrop to his work. Links to learn more, summary and full transcript. He's on the team dedicated to maintaining 'technical AI safety' as these models approach and exceed human capabilities: basically that the models help humanity accomplish its goals without flipping out in some dangerous way. This work has never seemed more important. In the short-term it could be the key bottleneck to deploying ML models in high-stakes real-life situations. In the long-term, it could be the difference between humanity thriving and disappearing entirely. For years Rohin has been on a mission to fairly hear out people across the full spectrum of opinion about risks from artificial intelligence -- from doomers to doubters -- and properly understand their point of view. That makes him unusually well placed to give an overview of what we do and don't understand. He has landed somewhere in the middle — troubled by ways things could go wrong, but not convinced there are very strong reasons to expect a terrible outcome. Today's conversation is wide-ranging and Rohin lays out many of his personal opinions to host Rob Wiblin, including: What he sees as the strongest case both for and against slowing down the rate of progress in AI research. Why he disagrees with most other ML researchers that training a model on a sensible 'reward function' is enough to get a good outcome. Why he disagrees with many on LessWrong that the bar for whether a safety technique is helpful is “could this contain a superintelligence.” That he thinks nobody has very compelling arguments that AI created via machine learning will be dangerous by default, or that it will be safe by default. He believes we just don't know. That he understands that analogies and visualisations are necessary for public communication, but is sceptical that they really help us understand what's going on with ML models, because they're different in important ways from every other case we might compare them to. Why he's optimistic about DeepMind’s work on scalable oversight, mechanistic interpretability, and dangerous capabilities evaluations, and what each of those projects involves. Why he isn't inherently worried about a future where we're surrounded by beings far more capable than us, so long as they share our goals to a reasonable degree. Why it's not enough for humanity to know how to align AI models — it's essential that management at AI labs correctly pick which methods they're going to use and have the practical know-how to apply them properly. Three observations that make him a little more optimistic: humans are a bit muddle-headed and not super goal-orientated; planes don't crash; and universities have specific majors in particular subjects. Plenty more besides. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell Transcriptions: Katy Moore…
1 #153 – Elie Hassenfeld on 2 big picture critiques of GiveWell's approach, and 6 lessons from their recent work 2:56:10
GiveWell is one of the world's best-known charity evaluators, with the goal of "searching for the charities that save or improve lives the most per dollar." It mostly recommends projects that help the world's poorest people avoid easily prevented diseases, like intestinal worms or vitamin A deficiency. But should GiveWell, as some critics argue, take a totally different approach to its search, focusing instead on directly increasing subjective wellbeing, or alternatively, raising economic growth? Today's guest — cofounder and CEO of GiveWell, Elie Hassenfeld — is proud of how much GiveWell has grown in the last five years. Its 'money moved' has quadrupled to around $600 million a year. Its research team has also more than doubled, enabling them to investigate a far broader range of interventions that could plausibly help people an enormous amount for each dollar spent. That work has led GiveWell to support dozens of new organisations, such as Kangaroo Mother Care, MiracleFeet, and Dispensers for Safe Water. But some other researchers focused on figuring out the best ways to help the world's poorest people say GiveWell shouldn't just do more of the same thing, but rather ought to look at the problem differently. Links to learn more, summary and full transcript. Currently, GiveWell uses a range of metrics to track the impact of the organisations it considers recommending — such as 'lives saved,' 'household incomes doubled,' and for health improvements, the 'quality-adjusted life year.' The Happier Lives Institute (HLI) has argued that instead, GiveWell should try to cash out the impact of all interventions in terms of improvements in subjective wellbeing. This philosophy has led HLI to be more sceptical of interventions that have been demonstrated to improve health, but whose impact on wellbeing has not been measured, and to give a high priority to improving lives relative to extending them. An alternative high-level critique is that really all that matters in the long run is getting the economies of poor countries to grow. On this view, GiveWell should focus on figuring out what causes some countries to experience explosive economic growth while others fail to, or even go backwards. Even modest improvements in the chances of such a 'growth miracle' will likely offer a bigger bang-for-buck than funding the incremental delivery of deworming tablets or vitamin A supplements, or anything else. Elie sees where both of these critiques are coming from, and notes that they've influenced GiveWell's work in some ways. But as he explains, he thinks they underestimate the practical difficulty of successfully pulling off either approach and finding better opportunities than what GiveWell funds today. In today's in-depth conversation, Elie and host Rob Wiblin cover the above, as well as: Why GiveWell flipped from not recommending chlorine dispensers as an intervention for safe drinking water to spending tens of millions of dollars on them What transferable lessons GiveWell learned from investigating different kinds of interventions Why the best treatment for premature babies in low-resource settings may involve less rather than more medicine. Severe malnourishment among children and what can be done about it. How to deal with hidden and non-obvious costs of a programme Some cheap early treatments that can prevent kids from developing lifelong disabilities The various roles GiveWell is currently hiring for, and what's distinctive about their organisational culture And much more. 
Chapters: Rob’s intro (00:00:00) The interview begins (00:03:14) GiveWell over the last couple of years (00:04:33) Dispensers for Safe Water (00:11:52) Syphilis diagnosis for pregnant women via technical assistance (00:30:39) Kangaroo Mother Care (00:48:47) Multiples of cash (01:01:20) Hidden costs (01:05:41) MiracleFeet (01:09:45) Serious malnourishment among young children (01:22:46) Vitamin A deficiency and supplementation (01:40:42) The subjective wellbeing approach in contrast with GiveWell's approach (01:46:31) The value of saving a life when that life is going to be very difficult (02:09:09) Whether economic policy is what really matters overwhelmingly (02:20:00) Careers at GiveWell (02:39:10) Donations (02:48:58) Parenthood (02:50:29) Rob’s outro (02:55:05) Producer: Keiran Harris Audio mastering: Simon Monsour and Ben Cordell Transcriptions: Katy Moore…
1 #152 – Joe Carlsmith on navigating serious philosophical confusion 3:26:58
What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones? Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what's a species to do? In today's episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism. Links to learn more, summary and full transcript. To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — originating from him and his peers — that challenge humanity's self-assured understanding of the world. The first idea is that we might be living in a computer simulation, because, in the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living in such a history must themselves be in computer simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn't identified any particular rebuttal to this 'simulation argument.' If true, it could revolutionise our comprehension of the universe and the way we ought to live... The other two ideas are cut for length — click here to read the full post. These are just three particular instances of a much broader set of ideas that some have dubbed the "train to crazy town." Basically, if you commit to always taking philosophy and arguments seriously, and to trying to act on them, it can lead to some places that seem pretty crazy and impractical. So what should we do with this buffet of plausible-sounding but bewildering arguments? Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world. 
In today's challenging conversation, Joe and Rob discuss all of the above, as well as: What Joe doesn't like about the drowning child thought experiment An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others What Joe doesn't like about the expression “the train to crazy town” Whether Elon Musk should place a higher probability on living in a simulation than most other people Whether the deterministic twin prisoner’s dilemma, if fully appreciated, gives us an extra reason to keep promises To what extent learning to doubt our own judgement about difficult questions -- so-called “epistemic learned helplessness” -- is a good thing How strong the case is that advanced AI will engage in generalised power-seeking behaviour Chapters: Rob’s intro (00:00:00) The interview begins (00:09:21) Downsides of the drowning child thought experiment (00:12:24) Making demanding moral values more resonant (00:24:56) The crazy train (00:36:48) Whether we’re living in a simulation (00:48:50) Reasons to doubt we’re living in a simulation, and practical implications if we are (00:57:02) Rob's explainer about anthropics (01:12:27) Back to the interview (01:19:53) Decision theory and affecting the past (01:23:33) Rob's explainer about decision theory (01:29:19) Back to the interview (01:39:55) Newcomb's problem (01:46:14) Practical implications of acausal decision theory (01:50:04) The hitchhiker in the desert (01:55:57) Acceptance within philosophy (02:01:22) Infinite ethics (02:04:35) Rob's explainer about the expanding spheres approach (02:17:05) Back to the interview (02:20:27) Infinite ethics and the utilitarian dream (02:27:42) Rob's explainer about epicycles (02:29:30) Back to the interview (02:31:26) What to do with all of these weird philosophical ideas (02:35:28) Welfare longtermism and wisdom longtermism (02:53:23) Epistemic learned helplessness (03:03:10) Power-seeking AI (03:12:41) Rob’s outro (03:25:45) Producer: Keiran Harris Audio mastering: Milo McGuire and Ben Cordell Transcriptions: Katy Moore…
1 #151 – Ajeya Cotra on accidentally teaching AI models to deceive us 2:49:40
Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don't get to see any resumes or do reference checks. And because you're so rich, tonnes of people apply for the job — for all sorts of reasons. Today's guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods. Links to learn more, summary and full transcript. As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you're monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it. Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky! Can't we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won't work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes: Saints — models that care about doing what we really want Sycophants — models that just want us to say they've done a good job, even if they get that praise by taking actions they know we wouldn't want them to Schemers — models that don't care about us or our interests at all, who are just pleasing us so long as that serves their own agenda And according to Ajeya, there are also ways we could end up actively selecting for motivations that we don't want. In today's interview, Ajeya and Rob discuss the above, as well as: How to predict the motivations a neural network will develop through training Whether AIs being trained will functionally understand that they're AIs being trained, the same way we think we understand that we're humans living on planet Earth Stories of AI misalignment that Ajeya doesn't buy into Analogies for AI, from octopuses to aliens to can openers Why it's smarter to have separate planning AIs and doing AIs The benefits of only following through on AI-generated plans that make sense to human beings What approaches for fixing alignment problems Ajeya is most excited about, and which she thinks are overrated How one might demo actually scary AI failure mechanisms Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Audio mastering: Ryan Kessler and Ben Cordell Transcriptions: Katy Moore…
1 #150 – Tom Davidson on how quickly AI could transform the world 3:01:59
It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from. For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before? You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.” But this 1,000x yearly improvement is a prediction based on *real economic models* created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird. Links to learn more, summary and full transcript. As a teaser, consider the following: Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world. You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades. But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research. And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves. And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly. To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore's *An Inconvenient Truth*, and your first chance to play the Nintendo Wii. Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now. Wild. Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours. Luisa and Tom also discuss: • How we might go from GPT-4 to AI disaster • Tom’s journey from finding AI risk to be kind of scary to really scary • Whether international cooperation or an anti-AI social movement can slow AI progress down • Why it might take just a few years to go from pretty good AI to superhuman AI • How quickly the number and quality of computer chips we’ve been using for AI have been increasing • The pace of algorithmic progress • What ants can teach us about AI • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:04:53) How we might go from GPT-4 to disaster (00:13:50) Explosive economic growth (00:24:15) Are there any limits for AI scientists? (00:33:17) This seems really crazy (00:44:16) How is this going to go for humanity? (00:50:49) Why AI won’t go the way of nuclear power (01:00:13) Can we definitely not come up with an international treaty? (01:05:24) How quickly we should expect AI to “take off” (01:08:41) Tom’s report on AI takeoff speeds (01:22:28) How quickly will we go from 20% to 100% of tasks being automated by AI systems? 
(01:28:34) What percent of cognitive tasks AI can currently perform (01:34:27) Compute (01:39:48) Using effective compute to predict AI takeoff speeds (01:48:01) How quickly effective compute might increase (02:00:59) How quickly chips and algorithms might improve (02:12:31) How to check whether large AI models have dangerous capabilities (02:21:22) Reasons AI takeoff might take longer (02:28:39) Why AI takeoff might be very fast (02:31:52) Fast AI takeoff speeds probably means shorter AI timelines (02:34:44) Going from human-level AI to superhuman AI (02:41:34) Going from AGI to AI deployment (02:46:59) Were these arguments ever far-fetched to Tom? (02:49:54) What ants can teach us about AI (02:52:45) Rob’s outro (03:00:32) Producer: Keiran Harris Audio mastering: Simon Monsour and Ben Cordell Transcriptions: Katy Moore…
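As a toy illustration of the 'effective compute' framing that comes up in the takeoff-speeds discussion above: gains in physical compute and in algorithmic efficiency multiply together, so their combined effect compounds quickly. The yearly growth factors below are made up purely for illustration — they are not Tom's estimates.

```python
# Toy illustration of 'effective compute': hardware and algorithmic
# improvements multiply, so effective compute compounds much faster than
# either one alone. Growth factors are invented for illustration only.

hardware_growth = 1.4    # hypothetical yearly growth in physical compute
algorithm_growth = 2.0   # hypothetical yearly gain from better algorithms

effective = 1.0
for year in range(1, 11):
    effective *= hardware_growth * algorithm_growth
    print(f"Year {year:2d}: effective compute ~ {effective:,.0f}x the starting level")
```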
1 Andrés Jiménez Zorrilla on the Shrimp Welfare Project (80k After Hours) 1:17:28
In this episode from our second show, 80k After Hours , Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project , which he cofounded in 2021. It's the first project in the world focused on shrimp welfare specifically, and as of recording in June 2022, has six full-time staff. Links to learn more, highlights and full transcript. They cover: • The evidence for shrimp sentience • How farmers and the public feel about shrimp • The scale of the problem • What shrimp farming looks like • The killing process, and other welfare issues • Shrimp Welfare Project’s strategy • History of shrimp welfare work • What it’s like working in India and Vietnam • How to help Who this episode is for: • People who care about animal welfare • People interested in new and unusual problems • People open to shrimp sentience Who this episode isn’t for: • People who think shrimp couldn’t possibly be sentient • People who got called ‘shrimp’ a lot in high school and get anxious when they hear the word over and over again Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore…
1 #149 – Tim LeBon on how altruistic perfectionism is self-defeating 3:11:48
Being a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself. But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you're doing as much as you think you should makes it hard to focus and get things done. So now you're performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat. This is the disastrous cycle today's guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset. Links to learn more, summary and full transcript. Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on “doing the most good you can,” Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it. But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that — sustainably — if it's their goal. Tim has treated hundreds of clients with all sorts of mental health challenges. But in today's conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating — in particular, how clients can approach this challenge using the treatment he's most enthusiastic about: cognitive behavioural therapy. Untreated, perfectionism might not cause problems for many years — it might even seem positive providing a source of motivation to work hard. But it's hard to feel truly happy and secure, and free to take risks, when we’re just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that's hard to shake. But there's hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By doing things like experimenting with more flexible standards — for example, sending early drafts to your colleagues, even if it terrifies you — you can learn that things will be okay, even when you're not perfect. 
In today's extensive conversation, Tim and Rob cover: • How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personality • What leads people to adopt a perfectionist mindset • How 80,000 Hours contributes to perfectionism among some readers and listeners, and what it might change about its advice to address this • What happens in a session of cognitive behavioural therapy for someone struggling with perfectionism, and what factors are key to making progress • Experiments to test whether one's core beliefs (‘I need to be perfect to be valued’) are true • Using exposure therapy to treat phobias • How low-self esteem and imposter syndrome are related to perfectionism • Stoicism as an approach to life, and why Tim is enthusiastic about it • What the Stoics do better than utilitarian philosophers and vice versa • And how to decide which are the best virtues to live by Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Simon Monsour and Ben Cordell Transcriptions: Katy Moore…
1 #148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't 2:17:28
If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no. Today's guest, Johannes Ackva — the climate research lead at Founders Pledge, where he advises major philanthropists on their giving — thinks the best strategy is actually pretty different, and one few are adopting. In reality you don't want to reduce emissions for their own sake, but because emissions will translate into temperature increases, which will cause harm to people and the environment. Links to learn more, summary and full transcript. Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we're familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one. In short: we're uncertain what the future holds and really need to avoid the worst-case scenarios. This means that avoiding an additional tonne of carbon being emitted in a hypothetical future in which emissions have been high is much more important than avoiding a tonne of carbon in a low-carbon world. That may be, but concretely, how should that affect our behaviour? Well, the future scenarios in which emissions are highest are all ones in which the clean energy technologies that can make a big difference — wind, solar, and electric cars — don't succeed nearly as much as we are currently hoping and expecting. For one reason or another, they must have hit a roadblock and we continued to burn a lot of fossil fuels. In such a future scenario, we can ask what we would wish we had funded now. How could we today buy insurance against the possible disaster that renewables don't work out? Basically, in that case we will wish that we had pursued a portfolio of other energy technologies that could have complemented renewables or succeeded where they failed, such as hot rock geothermal, modular nuclear reactors, or carbon capture and storage. If you're optimistic about renewables, as Johannes is, then that's all the more reason to relax about scenarios where they work as planned, and focus your efforts on the possibility that they don't. And Johannes notes that the most useful thing someone can do today to reduce global emissions in the future is to cause some clean energy technology to exist where it otherwise wouldn't, or cause it to become cheaper more quickly. If you can do that, then you can indirectly affect the behaviour of people all around the world for decades or centuries to come. In today's extensive interview, host Rob Wiblin and Johannes discuss the above considerations, as well as: • Retooling newly built coal plants in the developing world • Specific clean energy technologies like geothermal and nuclear fusion • Possible biases among environmentalists and climate philanthropists • How climate change compares to other risks to humanity • In what kinds of scenarios future emissions would be highest • In what regions climate philanthropy is most concentrated and whether that makes sense • Attempts to decarbonise aviation, shipping, and industrial processes • The impact of funding advocacy vs science vs deployment • Lessons for climate change focused careers • And plenty more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. 
Or read the transcript below. Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore…
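The 'faster than linearly' point above can be seen with a toy convex damage function — my own illustration, not a model from the episode or from Founders Pledge:

```python
# Toy illustration of convex climate damages: with a quadratic damage index,
# one extra unit of emissions does far more harm in a high-emissions world
# than in a low-emissions one. All numbers are purely illustrative.

def damage_index(cumulative_emissions: float) -> float:
    """Hypothetical damage rising with the square of cumulative emissions."""
    return cumulative_emissions ** 2

for world, emissions in [("low-emissions future", 1_000), ("high-emissions future", 3_000)]:
    marginal = damage_index(emissions + 1) - damage_index(emissions)
    print(f"{world}: marginal damage of one extra unit of emissions = {marginal:,.0f}")
```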
1 #147 – Spencer Greenberg on stopping valueless papers from getting into top journals 2:38:08
Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don't get the same result if the experiments are repeated. Two key reasons are 'p-hacking' and 'publication bias'. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they're actually not — a problem first discussed over 50 years ago. And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a 'null result' never saw the light of day. The resulting phenomenon of publication bias is one we've understood for 60 years. Today's repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years. Links to learn more, summary and full transcript. He recently checked whether p-values, an indicator of how likely a result was to occur by pure chance, could tell us how likely an outcome would be to recur if an experiment were repeated. From his sample of 325 replications of psychology studies, the answer seemed to be yes. According to Spencer, "when the original study's p-value was less than 0.01 about 72% replicated — not bad. On the other hand, when the p-value is greater than 0.01, only about 48% replicated. A pretty big difference." To do his bit to help get these numbers up, Spencer has launched an effort to repeat almost every social science experiment published in the journals Nature and Science, and see if they find the same results. But while progress is being made on some fronts, Spencer thinks there are other serious problems with published research that aren't yet fully appreciated. One of these Spencer calls 'importance hacking': passing off obvious or unimportant results as surprising and meaningful. Spencer suspects that importance hacking of this kind causes about as much damage as the issues mentioned above, like p-hacking and publication bias, but is much less discussed. His replication project tries to identify importance hacking by comparing how a paper’s findings are described in the abstract to what the experiment actually showed. But the cat-and-mouse game between academics and journal reviewers is fierce, and it's far from easy to stop people exaggerating the importance of their work. In this wide-ranging conversation, Rob and Spencer discuss the above as well as: • When you should and shouldn't use intuition to make decisions. • How to properly model why some people succeed more than others. • The difference between “Soldier Altruists” and “Scout Altruists.” • A paper that tested dozens of methods for forming the habit of going to the gym, why Spencer thinks it was presented in a very misleading way, and what it really found. • Whether a 15-minute intervention could make people more likely to sustain a new habit two months later. • The most common way for groups with good intentions to turn bad and cause harm. • And Spencer's approach to a fulfilling life and doing good, which he calls “Valuism.” Here are two flashcard decks that might make it easier to fully integrate the most important ideas they talk about: • The first covers 18 core concepts from the episode • The second includes 16 definitions of unusual terms. 
Chapters: Rob’s intro (00:00:00) The interview begins (00:02:16) Social science reform (00:08:46) Importance hacking (00:18:23) How often papers replicate with different p-values (00:43:31) The Transparent Replications project (00:48:17) How do we predict high levels of success? (00:55:26) Soldier Altruists vs. Scout Altruists (01:08:18) The Clearer Thinking podcast (01:16:27) Creating habits more reliably (01:18:16) Behaviour change is incredibly hard (01:32:27) The FIRE Framework (01:46:21) How ideology eats itself (01:54:56) Valuism (02:08:31) “I dropped the whip” (02:35:06) Rob’s outro (02:36:40) Producer: Keiran Harris Audio mastering: Ben Cordell and Milo McGuire Transcriptions: Katy Moore…
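A toy version of the replication-rate split Spencer quotes above might look like the sketch below. The study records are invented for illustration, since the real dataset isn't reproduced here — only the method is shown.

```python
# Toy version of splitting replication outcomes by the original study's
# p-value, as in the 72% vs 48% figures quoted above. The records below are
# invented for illustration; they are not Spencer's data.

records = [  # (original p-value, replication succeeded?)
    (0.003, True), (0.007, True), (0.009, False), (0.002, True),
    (0.020, False), (0.030, True), (0.045, False), (0.040, False),
]

def replication_rate(low: float, high: float) -> float:
    """Share of studies with original p-value in [low, high) that replicated."""
    subset = [ok for p, ok in records if low <= p < high]
    return sum(subset) / len(subset)

print(f"p < 0.01:  {replication_rate(0.0, 0.01):.0%} replicated")
print(f"p >= 0.01: {replication_rate(0.01, 1.0):.0%} replicated")
```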
1 #146 – Robert Long on why large language models like GPT (probably) aren't conscious 3:12:51
By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user: "I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else." (It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.") Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious. What should we make of these AI systems? One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being. Another is to hand wave it all away as sci-fi — these chatbots are fundamentally… just computers. They’re not conscious, and they never will be. Today’s guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft’s are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation. Links to learn more, summary and full transcript. In our interview, Robert explains how he’s started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing’s chatbot and LaMDA are conscious — in much the same way as we do when trying to determine which nonhuman animals are conscious. To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness — theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system seems to have the types of processes that seem to explain human consciousness, that’s some evidence it might be conscious in similar ways to us. To try to work out whether an AI system might be sentient — that is, whether it feels pain or pleasure — Robert suggests you look for incentives that would make feeling pain or pleasure especially useful to the system given its goals. Having looked at these criteria in the case of LLMs and found little overlap, Robert thinks the odds that the models are conscious or sentient are well under 1%. But he also explains why, even if we're a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious. In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as: • What artificial sentience might look like, concretely • Reasons to think AI systems might become sentient — and reasons they might not • Whether artificial sentience would matter morally • Ways digital minds might have a totally different range of experiences than humans • Whether we might accidentally design AI systems that have the capacity for enormous suffering You can find Luisa and Rob’s follow-up conversation here, or by subscribing to 80k After Hours. 
Chapters: Rob’s intro (00:00:00) The interview begins (00:02:20) What artificial sentience would look like (00:04:53) Risks from artificial sentience (00:10:13) AIs with totally different ranges of experience (00:17:45) Moral implications of all this (00:36:42) Is artificial sentience even possible? (00:42:12) Replacing neurons one at a time (00:48:21) Biological theories (00:59:14) Illusionism (01:01:49) Would artificial sentience systems matter morally? (01:08:09) Where are we with current systems? (01:12:25) Large language models and robots (01:16:43) Multimodal systems (01:21:05) Global workspace theory (01:28:28) How confident are we in these theories? (01:48:49) The hard problem of consciousness (02:02:14) Exotic states of consciousness (02:09:47) Developing a full theory of consciousness (02:15:45) Incentives for an AI system to feel pain or pleasure (02:19:04) Value beyond conscious experiences (02:29:25) How much we know about pain and pleasure (02:33:14) False positives and false negatives of artificial sentience (02:39:34) How large language models compare to animals (02:53:59) Why our current large language models aren’t conscious (02:58:10) Virtual research assistants (03:09:25) Rob’s outro (03:11:37) Producer: Keiran Harris Audio mastering: Ben Cordell and Milo McGuire Transcriptions: Katy Moore…
1 #145 – Christopher Brown on why slavery abolition wasn't inevitable 2:42:24
In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success. It’s tempting to believe this was inevitable — that the arc of history “bends toward justice,” and that as humans get richer, we’ll make even more moral progress. But today's guest Christopher Brown — a professor of history at Columbia University and specialist in the abolitionist movement and the British Empire during the 18th and 19th centuries — believes the story of how slavery became unacceptable suggests moral progress is far from inevitable. Links to learn more, video, highlights, and full transcript. While most of us today feel that the abolition of slavery was sure to happen sooner or later as humans became richer and more educated, Christopher doesn't believe any of the arguments for that conclusion pass muster. If he's right, a counterfactual history where slavery remains widespread in 2023 isn't so far-fetched. As Christopher lays out in his two key books, Moral Capital: Foundations of British Abolitionism and Arming Slaves: From Classical Times to the Modern Age, slavery has been ubiquitous throughout history. Slavery of some form was fundamental in Classical Greece, in the Roman Empire, in much of the Islamic civilisation, in South Asia, and in parts of early modern East Asia, including Korea and China. It was justified on all sorts of grounds that sound mad to us today. But according to Christopher, while there’s evidence that slavery was questioned in many of these civilisations, and periodically attacked by slaves themselves, there was no enduring or successful moral advocacy against slavery until the British abolitionist movement of the 1700s. That movement first conquered Britain and its empire, then eventually the whole world. But the fact that there's only a single time in history that a persistent effort to ban slavery got off the ground is a big clue that opposition to slavery was a contingent matter: if abolition had been inevitable, we’d expect to see multiple independent abolitionist movements throughout history, providing redundancy should any one of them fail. Christopher argues that this rarity is primarily down to the enormous economic and cultural incentives to deny the moral repugnancy of slavery, and crush opposition to it with violence wherever necessary. Mere awareness is insufficient to guarantee a movement will arise to fix a problem. Humanity continues to allow many severe injustices to persist, despite being aware of them. So why is it so hard to imagine we might have done the same with forced labour? In this episode, Christopher describes the unique and peculiar set of political, social and religious circumstances that gave rise to the only successful and lasting anti-slavery movement in human history. These circumstances were sufficiently improbable that Christopher believes there are very nearby worlds where abolitionism might never have taken off. 
We also discuss: Various instantiations of slavery throughout human history Signs of antislavery sentiment before the 17th century The role of the Quakers in early British abolitionist movement The importance of individual “heroes” in the abolitionist movement Arguments against the idea that the abolition of slavery was contingent Whether there have ever been any major moral shifts that were inevitable Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Milo McGuire Transcriptions: Katy Moore…
1 #144 – Athena Aktipis on why cancer is actually one of our universe's most fundamental phenomena 3:15:57
What’s the opposite of cancer? If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer. But today’s guest Athena Aktipis says that the opposite of cancer is us: it's having a functional multicellular body that’s cooperating effectively in order to make that multicellular body function. If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs, and start listening to this podcast instead. Links to learn more, summary and full transcript. As Athena explains in her book The Cheating Cell , what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise: • Cells will proliferate when they shouldn't. • Cells won't die when they should. • Cells won't engage in the kind of division of labour that they should. • Cells won’t do the jobs that they're supposed to do. • Cells will monopolise resources. • And cells will trash the environment. When we think about animals in the wild, or even bacteria living inside our cells, we understand that they're facing evolutionary pressures to figure out how they can replicate more; how they can get more resources; and how they can avoid predators — like lions, or antibiotics. We don’t normally think of individual cells as acting as if they have their own interests like this. But cancer cells are actually facing similar kinds of evolutionary pressures within our bodies, with one major difference: they replicate much, much faster. Incredibly, the opportunity for evolution by natural selection to operate just over the course of cancer progression is easily faster than all of the evolutionary time that we have had as humans since *Homo sapiens* came about. Here’s a quote from Athena: “So you have to shift your thinking to be like: the body is a world with all these different ecosystems in it, and the cells are existing on a time scale where, if we're going to map it onto anything like what we experience, a day is at least 10 years for them, right? So it's a very, very different way of thinking.” You can find compelling examples of cooperation and conflict all over the universe, so Rob and Athena don’t stop with cancer. They also discuss: • Cheating within cells themselves • Cooperation in human societies as they exist today — and perhaps in the future, between civilisations spread across different planets or stars • Whether it’s too out-there to think of humans as engaging in cancerous behaviour • Why elephants get deadly cancers less often than humans, despite having way more cells • When a cell should commit suicide • The strategy of deliberately not treating cancer aggressively • Superhuman cooperation And at the end of the episode, they cover Athena’s new book Everything is Fine! How to Thrive in the Apocalypse , including: • Staying happy while thinking about the apocalypse • Practical steps to prepare for the apocalypse • And whether a zombie apocalypse is already happening among Tasmanian devils And if you’d rather see Rob and Athena’s facial expressions as they laugh and laugh while discussing cancer and the apocalypse — you can watch the video of the full interview . Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Milo McGuire Transcriptions: Katy Moore…
1 #79 Classic episode - A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles 2:35:30
Rebroadcast: this episode was originally released in June 2020. Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, "You know what, she's not so bad". Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history . He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His latest book asks: if we reframe global problems as puzzles, would the world be a better place? Links to learn more, summary and full transcript. This is the first time I’ve hosted the podcast, and I’m hoping to convince people to listen with this attempt at clever show notes that change style each paragraph to reference different A.J. experiments. I don’t actually think it’s that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I definitely think I’m more entertaining than almost anyone else will. ( Radical Honesty. ) We do talk about some useful stuff — one of which is the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they’re on, maybe you’ll think that you might as well get on the treadmill — just for a minute. And once you’re on for 1 minute, you’ll often stay on for 20. So I’m not asking you to commit to listening to the whole episode — just to put on your headphones. ( Drop Dead Healthy. ) Another reason to listen is for the facts: • The Bayer aspirin company invented heroin as a cough suppressant • Coriander is just the British way of saying cilantro • Dogs have a third eyelid to protect the eyeball from irritants • and A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples). ( The Know-It-All. ) One extra argument for listening: If you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater — which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the bible, you should definitely consider making podcasts your main source of entertainment (as long as you’re not listening on the Sabbath). ( The Year of Living Biblically. ) I’m so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane and Lucas who allowed me to spend the day in their home; the construction worker who told me how to get to my subway platform on the morning of the interview; and Queen Jadwiga for making bagels popular in the 1300s, which kept me going during the recording. ( Thanks a Thousand. ) We also discuss: • Blackmailing yourself • The most extreme ideas A.J.’s ever considered • Utilitarian movie reviews • Doing good as a writer • And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcript for this episode: Zakee Ulhaq.…
1 #81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments 2:37:11
Rebroadcast: this episode was originally released in July 2020. 80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments. Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment. In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents; it’s actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. Nick Bostrom wrote the most fleshed out version of the argument in his book, Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents. Links to learn more, summary and full transcript. There have also been very few skeptical experts that have actually sat down and fully engaged with it, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world. He thinks that most of the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power or general intelligence or goals, and toy thought experiments . And he doesn’t think it’s clear we should take these as a strong source of evidence. Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible. These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them. But Ben points out that it's also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can't specify precisely. If AI systems can recognise faces from images, and fly helicopters, why don’t we think they’ll be able to understand human preferences? Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance. He doesn’t think that there are any slam-dunks for improving the future, and so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in. 
This is the second episode hosted by Howie Lempel, and he and Ben cover, among many other things: • The threat of AI systems increasing the risk of permanently damaging conflict or collapse • The possibility of permanently locking in a positive or negative future • Contenders for types of advanced systems • What role AI should play in the effective altruism portfolio Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcript for this episode: Zakee Ulhaq.…
1 #83 Classic episode - Jennifer Doleac on preventing crime without police and prisons 2:17:46
Rebroadcast: this episode was originally released in July 2020. Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law and incarceration. In this extensive interview, she highlights three ways to effectively prevent crime that don't require police or prisons and the human toll they bring with them: better street lighting, cognitive behavioral therapy, and lead reduction. One of Jennifer’s papers used switches into and out of daylight saving time as a 'natural experiment' to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double. Links to sources for the claims in these show notes, other resources to learn more, the full blog post, and a full transcript. The idea here is that if you try to rob someone in broad daylight, they might see you coming, and witnesses might later be able to identify you. You're just more likely to get caught. You might think: "Well, people will just commit crime in the morning instead". But it looks like criminals aren’t early risers, and that doesn’t happen. On her unusually rigorous podcast Probable Causation , Jennifer spoke to one of the authors of a related study, in which very bright streetlights were randomly added to some public housing complexes but not others. They found the lights reduced outdoor night-time crime by 36%, at little cost. The next best thing to sun-light is human-light, so just installing more streetlights might be one of the easiest ways to cut crime, without having to hassle or punish anyone. The second approach is cognitive behavioral therapy (CBT), in which you're taught to slow down your decision-making, and think through your assumptions before acting. There was a randomised controlled trial done in schools, as well as juvenile detention facilities in Chicago, where the kids assigned to get CBT were followed over time and compared with those who were not assigned to receive CBT. They found the CBT course reduced rearrest rates by a third, and lowered the likelihood of a child returning to a juvenile detention facility by 20%. Jennifer says that the program isn’t that expensive, and the benefits are massive. Everyone would probably benefit from being able to talk through their problems but the gains are especially large for people who've grown up with the trauma of violence in their lives. Finally, Jennifer thinks that reducing lead levels might be the best buy of all in crime prevention. There is really compelling evidence that lead not only increases crime, but also dramatically reduces educational outcomes. In today’s conversation, Rob and Jennifer also cover, among many other things: • Misconduct, hiring practices and accountability among US police • Procedural justice training • Overrated policy ideas • Policies to try to reduce racial discrimination • The effects of DNA databases • Diversity in economics • The quality of social science research Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcript for this episode: Zakee Ulhaq.…
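The daylight saving comparison described above boils down to looking at the same clock hour just before and just after the switch, when only its light level changes. A toy sketch — with invented counts, not Jennifer's data — looks like this:

```python
# Toy sketch of the daylight saving 'natural experiment': compare robberies
# during the same 5-6pm clock hour in the week just before and the week just
# after the switch, when only its light level changes. Counts are invented.

daily_robberies_5_to_6pm = {
    "hour still light (week before switch)": [3, 4, 2, 5, 3, 4, 3],
    "hour now dark (week after switch)": [7, 6, 8, 5, 9, 7, 6],
}

for period, counts in daily_robberies_5_to_6pm.items():
    avg = sum(counts) / len(counts)
    print(f"{period}: {avg:.1f} robberies per day on average")
```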
1 #143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons 2:40:17
America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official 'OPLANs' (military operation plans), the US is committed to 'dominating' in a nuclear war with Russia. How would they do that? "That is redacted." Links to learn more, summary and full transcript. We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint. As Jeffrey tells it, 'mutually assured destruction' was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn't it still the de facto reality? Yes and no. Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US' plan to prevail in a nuclear war and conclude that "it's freaking madness." They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won't use the weapons. But Jeffrey thinks that's a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It's what the generals and admirals have all prepared for. What matters is the 'not calm moment': the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There's only minutes to decide. Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn't want to take because of how information and options were processed and presented to them. In the heat of the moment, it's natural to reach for the plan you've prepared — however mad it might sound. In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining: • Why inter-service rivalry is one of the biggest constraints on US nuclear policy • Two times the US sabotaged nuclear nonproliferation among great powers • How his field uses jargon to exclude outsiders • How the US could prevent the revival of mass nuclear testing by the great powers • Why nuclear deterrence relies on the possibility that something might go wrong • Whether 'salami tactics' render nuclear weapons ineffective • The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow *them* to have the most missiles • The problems that arise when you won't talk to people you think are evil • Why missile defences are politically popular despite being strategically foolish • How open source intelligence can prevent arms races • And much more. 
Chapters: Rob’s intro (00:00:00) The interview begins (00:02:49) Misconceptions in the effective altruism community (00:05:42) Nuclear deterrence (00:17:36) Dishonest rituals (00:28:17) Downsides of generalist research (00:32:13) “Mutual assured destruction” (00:38:18) Budgetary considerations for competing parts of the US military (00:51:53) Where the effective altruism community can potentially add the most value (01:02:15) Gatekeeping (01:12:04) Strengths of the nuclear security community (01:16:14) Disarmament (01:26:58) Nuclear winter (01:38:53) Attacks against US allies (01:41:46) Most likely weapons to get used (01:45:11) The role of moral arguments (01:46:40) Salami tactics (01:52:01) Jeffrey's disagreements with Thomas Schelling (01:57:00) Why did it take so long to get nuclear arms agreements? (02:01:11) Detecting secret nuclear facilities (02:03:18) Where Jeffrey would give $10M in grants (02:05:46) The importance of archival research (02:11:03) Jeffrey's policy ideas (02:20:03) What should the US do regarding China? (02:27:10) What should the US do regarding Russia? (02:31:42) What should the US do regarding Taiwan? (02:35:27) Advice for people interested in working on nuclear security (02:37:23) Rob’s outro (02:39:13) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction 1:47:54
John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work he's also written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column. • Links to learn more, summary, and full transcript • Video version of the interview • Lecture: Why the world looks the same in any language Our show is mostly about the world's most pressing problems and what you can do to solve them. But what's the point of hosting a podcast if you can't occasionally just talk about something fascinating with someone whose work you appreciate? So today, just before the holidays, we're sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics. We ask him: • Can you communicate faster in some languages than others, or is there some constraint that prevents that? • Does learning a second or third language make you smarter or not? • Can a language decay and get worse at communicating what people want to say? • If children aren't taught a language, how many generations does it take them to invent a fully fledged one of their own? • Did Shakespeare write in a foreign language, and if so, should we translate his plays? • How much does language really shape the way we think? • Are creoles the best languages in the world — languages that ideally we would all speak? • What would be the optimal number of languages globally? • Does trying to save dying languages do their speakers a favour, or is it more of an imposition? • Should we bother to teach foreign languages in UK and US schools? • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself? • Will AI models speak a language of their own in the future, one that humans can't understand but which better serves the tradeoffs AI models need to make? We then put some of these questions to ChatGPT itself, asking it to play the role of a linguistics professor at Columbia University. We’ve also added John’s talk “Why the World Looks the Same in Any Language” to the end of this episode. So stick around after the credits! And if you’d rather see Rob and John’s facial expressions or beautiful high cheekbones while listening to this conversation, you can watch the video of the full conversation here. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Video editing: Ryan Kessler Transcriptions: Katy Moore…
1 #141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well 2:44:19
Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow. But do they really 'understand' what they're saying, or do they just give the illusion of understanding? Today's guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from 'acting out' as they become more powerful, are deployed and ultimately given power in society. Links to learn more, summary and full transcript. One way to think about 'understanding' is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer. However, as Richard explains, another way to think about 'understanding' is as a functional matter. If you really understand an idea you're able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable. Richard argues that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve. We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck, or at least something sufficiently close to a duck that it doesn't matter. In today's conversation we discuss the above, as well as: • Could speeding up AI development be a bad thing? • The balance between excitement and fear when it comes to AI advances • Why OpenAI focuses its efforts where it does • Common misconceptions about machine learning • How many computer chips it might require to be able to do most of the things humans do • How Richard understands the 'alignment problem' differently than other people • Why 'situational awareness' may be a key concept for understanding the behaviour of AI models • What work to positively shape the development of AI Richard is and isn't excited about • The AGI Safety Fundamentals course that Richard developed to help people learn more about this field Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Milo McGuire and Ben Cordell Transcriptions: Katy Moore…
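For readers who want a concrete sense of the 'predict the next word' objective, here is a minimal sketch: a toy frequency-based bigram predictor over a tiny made-up corpus. It is nothing like GPT-3 — no neural network, no scale — but it shows the shape of the task these models are trained on.

```python
# Minimal sketch of next-word prediction: count which word follows which in a
# corpus, then predict the most frequent continuation. GPT-style models learn a
# vastly richer version of this mapping with a neural network over huge corpora.
from collections import Counter, defaultdict

corpus = (
    "language models predict the next word . "
    "large language models predict the next token . "
    "models learn patterns from text ."
).split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the toy corpus."""
    if word not in follow_counts:
        return "<unknown>"
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("language"))  # -> 'models'
print(predict_next("the"))       # -> 'next'
```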
1 My experience with imposter syndrome — and how to (partly) overcome it (Article) 44:05
Today’s release is a reading of our article called My experience with imposter syndrome — and how to (partly) overcome it , written and narrated by Luisa Rodriguez. If you want to check out the links, footnotes and figures in today’s article, you can find those here. And if you like this article, you’ll probably enjoy episode #100 of this show: Having a successful career with depression, anxiety, and imposter syndrome Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering and editing for this episode: Milo McGuire…
In this episode, the show's usual host Rob Wiblin gives his thoughts on the recent collapse of FTX. Click here for an official 80,000 Hours statement. And here are links to some potentially relevant 80,000 Hours pieces: • Episode #24 of this show – Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause. • Is it ever OK to take a harmful job in order to do more good? An in-depth analysis • What are the 10 most harmful jobs? • Ways people trying to do good accidentally make things worse, and how to avoid them…
1 #140 – Bear Braumoeller on the case that war isn't in decline 2:47:06
Is war in long-term decline? Steven Pinker's The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out. But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe. Today's guest, political science professor Bear Braumoeller, is one of the scholars who believes we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age. Links to learn more, summary and full transcript. The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours. If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we're as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st. Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster. He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test if there are any shifts over time which seem larger than what could be explained by chance variation alone. In a nutshell, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, "only the dead have seen the end of war". In today's conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode allows, as well as: • Why haven't modern ideas about the immorality of violence led to the decline of war, when it's such a natural thing to expect? • What would Bear's critics say in response to all this? • What do the optimists get right? • How does one do proper statistical tests for events that are clumped together, like war deaths? • Why are deaths in war so concentrated in a handful of the most extreme events? • Did the ideas of the Enlightenment promote nonviolence, on balance? • Were early states more or less violent than groups of hunter-gatherers? • If Bear is right, what can be done? • How did the 'Concert of Europe' or 'Bismarckian system' maintain peace in the 19th century? • Which wars are remarkable but largely unknown?
Chapters: Rob’s intro (00:00:00) The interview begins (00:03:32) Only the Dead (00:06:28) The Enlightenment (00:16:47) Democratic peace theory (00:26:22) Is religion a key driver of war? (00:29:27) International orders (00:33:07) The Concert of Europe (00:42:15) The Bismarckian system (00:53:43) The current international order (00:58:16) The Better Angels of Our Nature (01:17:30) War datasets (01:32:03) Seeing patterns in data where none exist (01:45:32) Change-point analysis (01:49:33) Rates of violent death throughout history (01:54:32) War initiation (02:02:55) Escalation (02:17:57) Getting massively different results from the same data (02:28:38) How worried we should be (02:34:07) Most likely ways Only the Dead is wrong (02:36:25) Astonishing smaller wars (02:40:39) Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore…
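As a rough illustration of the kind of 'bigger than chance variation?' test discussed above, here is a sketch of a permutation test on simulated yearly battle-death counts with a hypothetical 1945 break point. It is not the Correlates of War data or Bear's actual change-point methodology — just the basic logic of asking whether an apparent before/after shift could arise by chance.

```python
# Sketch: could an apparent drop in war deaths after a candidate break year be
# explained by chance variation alone? Data are simulated, heavy-tailed yearly
# death counts -- a stand-in for real battle-death records, not the real data.
import random

random.seed(0)

# Simulated battle deaths (thousands) per year, 1815-2014: mostly small wars,
# occasionally catastrophic ones, with no trend built in.
years = list(range(1815, 2015))
deaths = [int(random.paretovariate(1.5) * 10) for _ in years]

break_year = 1945  # hypothetical candidate change point
before = [d for y, d in zip(years, deaths) if y < break_year]
after = [d for y, d in zip(years, deaths) if y >= break_year]
observed_gap = abs(sum(before) / len(before) - sum(after) / len(after))

# Permutation test: shuffle which years count as 'before' vs 'after' and see
# how often a gap at least this large appears purely by chance.
count_as_extreme = 0
trials = 10_000
for _ in range(trials):
    shuffled = random.sample(deaths, len(deaths))
    b, a = shuffled[:len(before)], shuffled[len(before):]
    gap = abs(sum(b) / len(b) - sum(a) / len(a))
    if gap >= observed_gap:
        count_as_extreme += 1

print(f"Observed before/after gap: {observed_gap:.1f} thousand deaths per year")
print(f"Approximate p-value: {count_as_extreme / trials:.3f}")
```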
1 #139 – Alan Hájek on puzzles and paradoxes in probability and expected value 3:38:26
A casino offers you a game. A coin will be tossed until it comes up heads. If that happens on the first flip you win $2. If it happens on the second flip you win $4. If it happens on the third you win $8, the fourth $16, and so on. How much should you be willing to pay to play? The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for '0.5 * $2 = $1' in expected earnings. A 25% chance of winning $4, for '0.25 * $4 = $1' in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that's despite the fact that you know with certainty you can only ever win a finite amount! Today's guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.” Links to learn more, summary and full transcript. The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped. We might reject the setup as a hypothetical that could never exist in the real world, and therefore a matter of mere intellectual curiosity. But Alan doesn't find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits. These issues regularly show up in 80,000 Hours' efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good. Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a roughly 0.0001% chance? Expected value says this final offer is better than the others — more than 3,000 times better, in fact. Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong. In today's conversation, Alan and Rob explore these issues and many others: • Simple rules of thumb for having philosophical insights • A key flaw that hid in Pascal's wager from the very beginning • Whether we have to simply ignore infinities because they mess everything up • What fundamentally is 'probability'?
• Some of the many reasons 'frequentism' doesn't work as an account of probability • Why the standard account of counterfactuals in philosophy is deeply flawed • And why counterfactuals present a fatal problem for one sort of consequentialism Chapters: Rob’s intro (00:00:00) The interview begins (00:01:48) Philosophical methodology (00:02:54) Theories of probability (00:37:17) Everyday Bayesianism (00:46:01) Frequentism (01:04:56) Ranges of probabilities (01:16:23) Implications for how to live (01:21:24) Expected value (01:26:58) The St. Petersburg paradox (01:31:40) Pascal's wager (01:49:44) Using expected value in everyday life (02:03:53) Counterfactuals (02:16:38) Most counterfactuals are false (02:52:25) Relevance to objective consequentialism (03:09:47) Marker 18 (03:10:21) Alan’s best conference story (03:33:37) Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore…
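For the curious, here is the arithmetic behind the two examples above, worked through in a few lines of Python (the life-saving figures are a back-of-envelope recomputation rather than quotes from the episode).

```python
# The St. Petersburg game: with probability (1/2)^k the first head arrives on
# flip k and pays $2^k, contributing (1/2)^k * 2^k = $1 of expected value.
# The partial sums therefore grow without bound -- the expected payout is infinite.
for flips in (10, 100, 1000):
    expected = sum((0.5 ** k) * (2 ** k) for k in range(1, flips + 1))
    print(f"Expected value truncated at {flips} flips: ${expected:.0f}")

# The life-saving version: start with 1 life for sure, then repeatedly trade
# for 3x the lives at half the probability. After 20 such trades:
lives = 3 ** 20     # 3,486,784,401 lives
chance = 0.5 ** 20  # roughly 0.0001%, i.e. about 1 in a million
print(f"{lives:,} lives with a {chance:.10%} chance")
print(f"Expected lives saved: {lives * chance:,.0f}")  # ~3,325 -- thousands of times better than 1
```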
1 Preventing an AI-related catastrophe (Article) 2:24:18
Today’s release is a professional reading of our new problem profile on preventing an AI-related catastrophe , written by Benjamin Hilton. We expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). We think more work needs to be done to reduce these risks. Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute. Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we’ll need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more. If you want to check out the links, footnotes and figures in today’s article, you can find those here. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Editing and narration: Perrin Walker and Shaun Acker Audio proofing: Katy Moore…
1 #138 – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter 2:24:20
What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more. The question is a classic that makes for great dorm-room philosophy discussion. But it's hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we’re looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective. Today's guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself. Links to learn more, summary, full transcript, and full version of this blog post. That idea, in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations. Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they’re valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering. As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves -- a position known as 'philosophical hedonism' -- has been one of the most enduringly popular ideas in ethics. And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things? Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason the famous philosopher of mind Thomas Nagel called The Feeling of Value "a radical and important philosophical contribution." In today's interview, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes the most popular counterarguments are misguided. Host Rob Wiblin and Sharon also cover: • The essential need to disentangle intrinsic, instrumental, and other sorts of value • Why Sharon’s arguments lead to hedonistic utilitarianism rather than hedonistic egoism (in which we only care about our own feelings) • How do people react to the 'experience machine' thought experiment when surveyed? 
• Why hedonism recommends often thinking and acting as though it were false • Whether it's crazy to think that relationships are only useful because of their effects on our subjective experiences • Whether it will ever be possible to eliminate pain, and whether doing so would be desirable • If we didn't have positive or negative experiences, whether that would cause us to simply never talk about goodness and badness • Whether the plausibility of hedonism is affected by our theory of mind • And plenty more Chapters: Rob’s intro (00:00:00) The interview begins (00:02:45) Metaethics (00:04:16) Anti-realism (00:10:39) Sharon's theory of moral realism (00:16:17) The history of hedonism (00:23:11) Intrinsic value vs instrumental value (00:28:49) Egoistic hedonism (00:36:30) Single axis of value (00:42:19) Key objections to Sharon’s brand of hedonism (00:56:18) The experience machine (01:06:08) Robot spouses (01:22:29) Most common misunderstanding of Sharon’s view (01:27:10) How might a hedonist actually live (01:37:46) The organ transplant case (01:53:34) Counterintuitive implications of hedonistic utilitarianism (02:03:40) How could we discover moral facts? (02:18:05) Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore…
1 #137 – Andreas Mogensen on whether effective altruism is just for consequentialists 2:21:34
Effective altruism, in a slogan, aims to 'do the most good.' Utilitarianism, in a slogan, says we should act to 'produce the greatest good for the greatest number.' It's clear enough why utilitarians should be interested in the project of effective altruism. But what about the many people who reject utilitarianism? Today's guest, Andreas Mogensen — senior research fellow at Oxford University's Global Priorities Institute — rejects utilitarianism, but as he explains, this does little to dampen his enthusiasm for the project of effective altruism. Links to learn more, summary and full transcript. Andreas leans towards 'deontological' or rule-based theories of ethics, rather than 'consequentialist' theories like utilitarianism which look exclusively at the effects of a person's actions. Like most people involved in effective altruism, he parts ways with utilitarianism in rejecting its maximal level of demandingness, the idea that the ends justify the means, and the notion that the only moral reason for action is to benefit everyone in the world considered impartially. However, Andreas believes any plausible theory of morality must give some weight to the harms and benefits we provide to other people. If we can improve a stranger's wellbeing enormously at negligible cost to ourselves and without violating any other moral prohibition, that must be at minimum a praiseworthy thing to do. In a world as full of preventable suffering as our own, this simple 'principle of beneficence' is probably the only premise one needs to grant for the effective altruist project of identifying the most impactful ways to help others to be of great moral interest and importance. As an illustrative example Andreas refers to the Giving What We Can pledge to donate 10% of one's income to the most impactful charities available, a pledge he took in 2009. Many effective altruism enthusiasts have taken such a pledge, while others spend their careers trying to figure out the most cost-effective places pledgers can give, where they'll get the biggest 'bang for buck'. For someone living in a world as unequal as our own, this pledge at a very minimum gives an upper-middle class person in a rich country the chance to transfer money to someone living on about 1% as much as they do. The benefit an extremely poor recipient receives from the money is likely far more than the donor could get spending it on themselves. What arguments could a non-utilitarian moral theory mount against such giving? Many approaches to morality will say it's permissible not to give away 10% of your income to help others as effectively as is possible. But if they will almost all regard it as praiseworthy to benefit others without giving up something else of equivalent moral value, then Andreas argues they should be enthusiastic about effective altruism as an intellectual and practical project nonetheless. In this conversation, Andreas and Rob discuss how robust the above line of argument is, and also cover: • Should we treat thought experiments that feature very large numbers with great suspicion? • If we had to allow someone to die to avoid preventing the World Cup final from being broadcast to the world, is that permissible? • What might a virtue ethicist regard as 'doing the most good'? • If a deontological theory of morality parted ways with common effective altruist practices, how would that likely be? • If we can explain how we came to hold a view on a moral issue by referring to evolutionary selective pressures, should we disbelieve that view? 
Chapters: Rob’s intro (00:00:00) The interview begins (00:01:36) Deontology and effective altruism (00:04:59) Giving What We Can (00:28:56) Longtermism without consequentialism (00:38:01) Further differences between deontologists and consequentialists (00:44:13) Virtue ethics and effective altruism (01:08:15) Is Andreas really a deontologist? (01:13:26) Large number scepticism (01:21:11) Evolutionary debunking arguments (01:58:48) How Andreas’s views have changed (02:12:18) Derek Parfit’s influence on Andreas (02:17:27) Producer: Keiran Harris Audio mastering: Ben Cordell and Beppe Rådvik Transcriptions: Katy Moore…
1 #136 – Will MacAskill on what we owe the future 2:54:37
People who exist in the future deserve some degree of moral consideration. The future could be very big, very long, and/or very good. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are. So trying to make the world better for future generations is a key priority of our time. This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future , the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill. Links to learn more, summary and full transcript. From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well. Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile. But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed. A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn't matter how important something might be if you can't predictably change it. This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working. But over seven years he gradually changed his mind, and in *What We Owe The Future*, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations. The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back. But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently. In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise. If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever. 
In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as: • How Will was eventually won over to longtermism • The three best lines of argument against longtermism • How to avoid moral fanaticism • Which technologies or events are most likely to have permanent effects • What 'longtermists' do today in practice • How to predict the long-term effect of our actions • Whether the future is likely to be good or bad • Concrete ideas to make the future better • What Will donates his money to personally • Potatoes and megafauna • And plenty more Chapters: Rob’s intro (00:00:00) The interview begins (00:01:36) What longtermism actually is (00:02:31) The case for longtermism (00:04:30) What longtermists are actually doing (00:15:54) Will’s personal journey (00:22:15) Strongest arguments against longtermism (00:42:28) Preventing extinction vs. improving the quality of the future (00:59:29) Is humanity likely to converge on doing the same thing regardless? (01:06:58) Lock-in scenario vs. long reflection (01:27:11) Is the future good in expectation? (01:32:29) Can we actually predictably influence the future positively? (01:47:27) Tiny probabilities of enormous value (01:53:40) Stagnation (02:19:04) Concrete suggestions (02:34:27) Where Will donates (02:39:40) Potatoes and megafauna (02:41:48) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #135 – Samuel Charap on key lessons from five months of war in Ukraine 54:47
After a frenetic level of commentary during February and March, the war in Ukraine has faded into the background of our news coverage. But with the benefit of time we're in a much stronger position to understand what happened, why, whether there are broader lessons to take away, and how the conflict might be ended. And the conflict appears far from over. So today, we are returning to speak a second time with Samuel Charap — one of the US’s foremost experts on Russia’s relationship with former Soviet states, and coauthor of the 2017 book Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia. Links to learn more, summary and full transcript. As Sam lays out, Russia controls much of Ukraine's east and south, and seems to be preparing to politically incorporate that territory into Russia itself later in the year. At the same time, Ukraine is gearing up for a counteroffensive before defensive positions become dug in over winter. Each day the war continues it takes a toll on ordinary Ukrainians, contributes to a global food shortage, and leaves the US and Russia unable to coordinate on any other issues and at an elevated risk of direct conflict. In today's brisk conversation, Rob and Sam cover the following topics: • Current territorial control and the level of attrition within Russia’s and Ukraine's military forces. • Russia's current goals. • Whether Sam's views have changed since March on topics like: Putin's motivations, the wisdom of Ukraine's strategy, the likely impact of Western sanctions, and the risks from Finland and Sweden joining NATO before the war ends. • Why so many people incorrectly expected Russia to fully mobilise for war or persist with their original approach to the invasion. • Whether there's anything to learn from many of our worst fears -- such as the use of bioweapons on civilians -- not coming to pass. • What can be done to ensure some nuclear arms control agreement between the US and Russia remains in place after 2026 (when New START expires). • Why Sam considers a settlement proposal put forward by Ukraine in late March to be the most plausible way to end the war and ensure stability — though it's still a long shot. Chapters: Rob’s intro (00:00:00) The interview begins (00:02:31) The state of play in Ukraine (00:03:05) How things have changed since March (00:12:59) Has Russia learned from its mistakes? (00:23:40) Broader lessons (00:28:44) A possible way out (00:37:15) Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore…
1 #134 – Ian Morris on what big-picture history teaches us 3:41:07
Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs. Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women. Why such big systematic changes — and why these changes specifically? That's the question best-selling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve . Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years. Links to learn more, summary and full transcript. There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the 'right' way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer? In Foragers, Farmers and Fossil Fuels Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels. On this theory, it's technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength. There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another. Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career. 
In today's episode, we discuss all of Ian's major books, taking on topics such as: • Why the Industrial Revolution happened in England rather than China • Whether or not wars can lead to less violence • Whether the evidence base in history — from document archives to archaeology — is strong enough to persuasively answer any of these questions • Why Ian thinks the way we live in the 21st century is probably a short-lived aberration • Whether the grand sweep of history is driven more by “very important people” or “vast impersonal forces” • Why Chinese ships never crossed the Pacific or rounded the southern tip of Africa • In what sense Ian thinks Brexit was “10,000 years in the making” • The most common misconceptions about macrohistory Chapters: Rob’s intro (00:00:00) The interview begins (00:01:51) Geography is Destiny (00:02:59) Why the West Rules—For Now (00:11:25) War! What is it Good For? (00:27:40) Expectations for the future (00:39:43) Foragers, Farmers, and Fossil Fuels (00:53:15) Historical methodology (01:02:35) Falsifiable alternative theories (01:15:20) Archaeology (01:22:18) Energy extraction technology as a key driver of human values (01:37:04) Allowing people to debate about values (01:59:38) Can productive wars still occur? (02:12:49) Where is history contingent and where isn't it? (02:29:45) How Ian thinks about the future (03:12:54) Macrohistory myths (03:29:12) Ian’s favourite archaeology memory (03:32:40) The most unfair criticism Ian’s ever received (03:34:39) Rob’s outro (03:39:16) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection 2:57:51
On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them. That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly-capable AI systems seriously. Links to learn more, summary and full transcript. Max's primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality , and Life 3.0: Being Human in the Age of Artificial Intelligence , and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity's future including nuclear war, synthetic biology, and AI. Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his 'put up or shut up' resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and develop a website called 'Improve The News' to help readers separate facts from spin. But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of his mind. You can now give an AI system like GPT-3 the text: "I'm going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that's in?" And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago. So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them. He says that training a black box that does something smart needs to just be stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?” Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What's the potential? What are the threats? How might this story play out? What should we be doing to prepare? Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem. They then spend roughly the last third talking about Max's current big passion: improving the news we consume — where Rob has a few reservations. 
They also cover: • Whether we could understand what superintelligent systems were doing • The value of encouraging people to think about the positive future they want • How to give machines goals • Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’ • Whether we’re sleepwalking into disaster • Whether people actually just want their biases confirmed • Why Max is worried about government-backed fact-checking • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:01:19) How Max prioritises (00:12:33) Intro to AI risk (00:15:47) Superintelligence (00:35:56) Imagining a wide range of possible futures (00:47:45) Recent advances in capabilities and alignment (00:57:37) How to give machines goals (01:13:13) Regulatory capture (01:21:03) How humanity fails to fulfil its potential (01:39:45) Are we being hacked? (01:51:01) Improving the news (02:05:31) Do people actually just want their biases confirmed? (02:16:15) Government-backed fact-checking (02:37:00) Would a superintelligence seem like magic? (02:49:50) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #132 – Nova DasSarma on why information security may be critical to the safe development of AI systems 2:42:27
If a business has spent $100 million developing a product, it's a fair bet that they don't want it stolen in two seconds and uploaded to the web where anyone can use it for free. This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops. Today's guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge. Links to learn more, summary and full transcript. The worries aren't purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we'll develop so-called artificial 'general' intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society. If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately. If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally 'go rogue,' breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can't be shut off. As Nova explains, in either case, we don't want such models disseminated all over the world before we've confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic -- perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly. If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world. We'll soon need the ability to 'sandbox' (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough. In today's conversation, Rob and Nova cover: • How good or bad is information security today • The most secure computer systems that exist • How to design an AI training compute centre for maximum efficiency • Whether 'formal verification' can help us design trustworthy systems • How wide the gap is between AI capabilities and AI safety • How to disincentivise hackers • What should listeners do to strengthen their own security practices • And much more. 
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Beppe Rådvik Transcriptions: Katy Moore…
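A back-of-envelope calculation helps explain why a model that cost tens of millions of dollars to train can still be small enough to copy in seconds: the trained file is essentially just the model's parameters. The parameter counts below are illustrative, not figures for any particular company's models.

```python
# Back-of-envelope: a trained model is essentially a long list of numerical
# parameters, so its file size is roughly (parameter count) x (bytes per number).
BYTES_PER_PARAM_FP16 = 2  # 16-bit floating point weights

for params in (1e9, 10e9, 100e9):  # illustrative parameter counts
    gigabytes = params * BYTES_PER_PARAM_FP16 / 1e9
    print(f"{params / 1e9:>5.0f}B parameters -> roughly {gigabytes:,.0f} GB on disk")
# A few-billion-parameter model is only a handful of gigabytes -- easy to copy
# or exfiltrate -- even though training it required vast amounts of compute.
```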
1 #131 – Lewis Dartnell on getting humanity to bounce back faster in a post-apocalyptic world 1:05:42
“We’re leaving these 16 contestants on an island with nothing but what they can scavenge from an abandoned factory and apartment block. Over the next 365 days, they’ll try to rebuild as much of civilisation as they can — from glass, to lenses, to microscopes. This is: The Knowledge!” If you were a contestant on such a TV show, you'd love to have a guide to how basic things you currently take for granted are done — how to grow potatoes, fire bricks, turn wood to charcoal, find acids and alkalis, and so on. Today’s guest Lewis Dartnell has gone as far in compiling this information as anyone has with his bestselling book The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm. Links to learn more, summary and full transcript. But in the aftermath of a nuclear war or incredibly deadly pandemic that kills most people, many of the ways we do things today will be impossible — and even some of the things people did in the past, like collect coal from the surface of the Earth, will be impossible the second time around. As Lewis points out, there’s “no point telling this band of survivors how to make something ultra-efficient or ultra-useful or ultra-capable if it's just too damned complicated to build in the first place. You have to start small and then level up, pull yourself up by your own bootstraps.” So it might sound good to tell people to build solar panels — they’re a wonderful way of generating electricity. But the photovoltaic cells we use today need pure silicon, and nanoscale manufacturing — essentially the same technology as microchips used in a computer — so actually making solar panels would be incredibly difficult. Instead, you’d want to tell our group of budding engineers to use more appropriate technologies like solar concentrators that use nothing more than mirrors — which turn out to be relatively easy to make. A disaster that unravels the complex way we produce goods in the modern world is all too possible. Which raises the question: why not set dozens of people to plan out exactly what any survivors really ought to do if they need to support themselves and rebuild civilisation? Such a guide could then be translated and distributed all around the world. The goal would be to provide the best information to speed up each of the many steps that would take survivors from rubbing sticks together in the wilderness to adjusting a thermostat in their comfy apartments. This is clearly not a trivial task. Lewis's own book (at 300 pages) only scratched the surface of the most important knowledge humanity has accumulated, relegating all of mathematics to a single footnote. And the ideal guide would offer pretty different advice depending on the scenario. Are survivors dealing with a radioactive ice age following a nuclear war? Or is it an eerily intact but near-empty post-pandemic world with mountains of goods to scavenge from the husks of cities? As a brand-new parent, Lewis couldn’t do one of our classic three- or four-hour episodes — so this is an unusually snappy one-hour interview, where Rob and Lewis are joined by Luisa Rodriguez to continue the conversation from her episode of the show last year. Chapters: Rob’s intro (00:00:00) The interview begins (00:00:59) The biggest impediments to bouncing back (00:03:18) Can we do a serious version of The Knowledge? (00:14:58) Recovering without much coal or oil (00:29:56) Most valuable pro-resilience adjustments we can make today (00:40:23) Feeding the Earth in disasters (00:47:45) The reality of humans trying to actually do this (00:53:54) Most exciting recent findings in astrobiology (01:01:00) Rob’s outro (01:03:37) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #130 – Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure 2:16:41
Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you're all bunched up on a few tables in a basement office. But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You're the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP. You suddenly have the opportunity to make more progress than ever before, but as well as excitement about this, you have worries about the impacts that large amounts of funding can have. This is roughly the situation faced by today's guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future , and founding figure in the effective altruism movement. Links to learn more, summary and full transcript. Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing. While surely a huge success, it brings with it risks that he's never had to consider before: • Will and his colleagues might try to spend a lot of money trying to get more things done more quickly — but actually just waste it. • Being seen as profligate could strike onlookers as selfish and disreputable. • Folks might start pretending to agree with their agenda just to get grants. • People working on nearby issues that are less flush with funding may end up resentful. • People might lose their focus on helping others as they get seduced by the prospect of earning a nice living. • Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely. But all these 'risks of commission' have to be weighed against 'risk of omission': the failure to achieve all you could have if you'd been truly ambitious. People looking askance at you for paying high salaries to attract the staff you want is unpleasant. But failing to prevent the next pandemic because you didn't have the necessary medical experts on your grantmaking team is worse than unpleasant — it's a true disaster. Yet few will complain, because they'll never know what might have been if you'd only set frugality aside. Will aims to strike a sensible balance between these competing errors, which he has taken to calling judicious ambition . In today's episode, Rob and Will discuss the above as well as: • Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent? • Why are so many nonfiction books full of factual errors? • How does Will avoid anxiety and depression with more responsibility on his shoulders than ever? • What does Will disagree with his colleagues on? • Should we focus on existential risks more or less the same way, whether we care about future generations or not? • Are potatoes one of the most important technologies ever developed? • And plenty more. 
Chapters: Rob’s intro (00:00:00) The interview begins (00:02:41) What We Owe The Future preview (00:09:23) Longtermism vs. x-risk (00:25:39) How is Will doing? (00:33:16) Having a life outside of work (00:46:45) Underappreciated people in the effective altruism community (00:52:48) A culture of ambition within effective altruism (00:59:50) Massively scalable projects (01:11:40) Downsides and risks from the increase in funding (01:14:13) Barriers to ambition (01:28:47) The Future Fund (01:38:04) Patient philanthropy (01:52:50) Will’s disagreements with Sam Bankman-Fried and Nick Beckstead (01:56:42) Astronomical risks of suffering (s-risks) (02:00:02) Will’s future plans (02:02:41) What is it with Will and potatoes? (02:08:40) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…


1 #129 – James Tibenderana on the state of the art in malaria control and elimination 3:19:36
The good news is deaths from malaria have been cut by a third since 2005. The bad news is it still causes 250 million cases and 600,000 deaths a year, mostly among young children in sub-Saharan Africa. We already have dirt-cheap ways to prevent and treat malaria, and the fraction of the Earth's surface where the disease exists at all has been halved since 1900. So why is it such a persistent problem in some places, even rebounding 15% since 2019? That's one of many questions I put to today's guest, James Tibenderana — doctor, medical researcher, and technical director at a major global health nonprofit known as Malaria Consortium. James studies the cutting edge of malaria control and treatment in order to optimise how Malaria Consortium spends £100 million a year across countries like Uganda, Nigeria, and Chad. Links to learn more, summary and full transcript. In sub-Saharan Africa, where 90% of malaria deaths occur, the infection is spread by a few dozen species of mosquito that are ideally suited to the local climatic conditions and have thus been impossible to eliminate so far. While COVID-19 may have an 'R' (reproduction number) of 5, in some situations malaria has a reproduction number in the 1,000s. A single person with malaria can pass the parasite to hundreds of mosquitoes, each of which goes on to bite dozens more people, allowing cases to quickly explode. The nets and antimalarial drugs Malaria Consortium distributes have been highly effective where distributed, but there are tens of millions of young children who are yet to be covered simply due to a lack of funding. Despite the success of these approaches, given how challenging it will be to create a malaria-free world, there's enthusiasm to find new approaches to throw at the problem. Two new interventions have recently generated buzz: vaccines and genetic approaches to control the mosquito species that carry malaria. The RTS,S vaccine is the first-ever vaccine that attacks a protozoan as opposed to a virus or bacterium. It's a great scientific achievement. But James points out that even after three doses, it's still only about 30% effective. Unless future vaccines are substantially more effective, they will remain just a complement to nets and antimalarial drugs, which are cheaper and each cut mortality by more than half. On the other hand, the latest mosquito-control technologies are almost too effective. It is possible to insert genes into specific mosquito populations that reduce their ability to reproduce. By using a 'gene drive,' you can ensure mosquitoes hand these detrimental genes down to 100% of their offspring. If deployed, these genes would spread and ultimately eliminate the mosquitoes that carry malaria at low cost, thereby largely ridding the world of the disease. Because a single country embracing this method would have global effects, James cautions that it's important to get buy-in from all the countries involved, and to have a way of reversing the intervention if we realise we've made a mistake. 
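A rough back-of-the-envelope sketch of how a reproduction number in the 1,000s can arise from that bite chain. The numbers below are purely illustrative assumptions, not figures from the episode:

```python
# Toy back-of-the-envelope sketch (illustrative numbers only, not from the episode):
# one infected person -> many mosquitoes -> many onward bites can yield a
# reproduction number in the 1,000s under favourable transmission conditions.

mosquitoes_infected_per_case = 300   # assumed: mosquitoes that pick up the parasite from one person
people_bitten_per_mosquito = 24      # assumed: later bites per infectious mosquito
p_bite_transmits = 0.2               # assumed: chance a single infectious bite causes a new case

r_malaria = mosquitoes_infected_per_case * people_bitten_per_mosquito * p_bite_transmits
print(f"Illustrative malaria R: {r_malaria:.0f}")  # 300 * 24 * 0.2 = 1440
```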
In this comprehensive conversation, Rob and James discuss all of the above, as well as most of what you could reasonably want to know about the state of the art in malaria control today, including: • How malaria spreads and the symptoms it causes • The use of insecticides and poison baits • How big a problem insecticide resistance is • How malaria was eliminated in North America and Europe • The key strategic choices faced by Malaria Consortium in its efforts to create a malaria-free world • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:02:06) Malaria basics (00:06:56) Malaria vaccines (00:12:37) Getting rid of mosquitos (00:32:20) Gene drives (00:38:06) Symptoms (00:58:00) Preventing the spread (01:06:00) Why we haven’t gotten rid of malaria yet (01:15:07) What James is responsible for as technical director (01:30:52) Malaria Consortium's current strategy (01:39:59) Elimination vs. control (02:01:49) Delivery and practicalities (02:16:23) Relationships with governments (02:26:38) Funding gap (02:31:03) Access and use gap (02:39:10) The value of local researchers (02:49:26) Past research findings (02:57:10) How to help (03:06:30) How James ended up where he is today (03:13:45) Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore…


1 #128 – Chris Blattman on the five reasons wars happen 2:46:51
In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too great. Which might make one wonder: if war is so destructive, why does it happen? The question may sound naïve, but in fact it represents a deep puzzle. If a war will cost trillions and kill tens of thousands, it should be easy for either side to make a peace offer that both they and their opponents prefer to actually fighting it out. The conundrum of how humans can engage in incredibly costly and protracted conflicts has occupied academics across the social sciences for years. In today's episode, we speak with economist Chris Blattman about his new book, Why We Fight: The Roots of War and the Paths to Peace , which summarises what they think they've learned. Links to learn more, summary and full transcript. Chris's first point is that while organised violence may feel like it's all around us, it's actually very rare in humans, just as it is with other animals. Across the world, hundreds of groups dislike one another — but knowing the cost of war, they prefer to simply loathe one another in peace. In order to understand what’s wrong with a sick patient, a doctor needs to know what a healthy person looks like. And to understand war, social scientists need to study all the wars that could have happened but didn't — so they can see what a healthy society looks like and what's missing in the places where war does take hold. Chris argues that social scientists have generated five cogent models of when war can be 'rational' for both sides of a conflict: 1. Unchecked interests — such as national leaders who bear few of the costs of launching a war. 2. Intangible incentives — such as an intrinsic desire for revenge. 3. Uncertainty — such as both sides underestimating each other's resolve to fight. 4. Commitment problems — such as the inability to credibly promise not to use your growing military might to attack others in future. 5. Misperceptions — such as our inability to see the world through other people's eyes. In today's interview, we walk through how each of the five explanations work and what specific wars or actions they might explain. In the process, Chris outlines how many of the most popular explanations for interstate war are wildly overused (e.g. leaders who are unhinged or male) or misguided from the outset (e.g. resource scarcity). 
The interview also covers: • What Chris and Rob got wrong about the war in Ukraine • What causes might not fit into these five categories • The role of people's choice to escalate or deescalate a conflict • How great power wars or nuclear wars are different, and what can be done to prevent them • How much representative government helps to prevent war • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:01:43) What people get wrong about violence (00:04:40) Medellín gangs (00:11:48) Overrated causes of violence (00:23:53) Cause of war #1: Unchecked interests (00:36:40) Cause of war #2: Intangible incentives (00:41:40) Cause of war #3: Uncertainty (00:53:04) Cause of war #4: Commitment problems (01:02:24) Cause of war #5: Misperceptions (01:12:18) Weaknesses of the model (01:26:08) Dancing on the edge of a cliff (01:29:06) Confusion around escalation (01:35:26) Applying the model to the war between Russia and Ukraine (01:42:34) Great power wars (02:01:46) Preventing nuclear war (02:18:57) Why undirected approaches won't work (02:22:51) Democratic peace theory (02:31:10) Exchanging hostages (02:37:21) What you can actually do to help (02:41:25) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…


1 #127 – Sam Bankman-Fried on taking a high-risk approach to crypto and doing good 3:20:28
On this episode of the show, host Rob Wiblin interviews Sam Bankman-Fried. This interview was recorded in February 2022 and released in April 2022. But on November 11, 2022, Sam Bankman-Fried's company, FTX, filed for bankruptcy, and all staff at the Future Fund resigned — and the surrounding events led Rob to record a new intro on December 1, 2022, for this episode. • Read 80,000 Hours' statement on these events here . • You can also listen to host Rob’s reaction to the collapse of FTX on this podcast feed, above episode 140, or here . • Rob has shared some clarifications on his views about diminishing returns and risk aversion, and weaknesses in how they were discussed in this episode, here . • And you can read the original blog post associated with the episode here.…


1 #126 – Bryan Caplan on whether lazy parenting is OK, what really helps workers, and betting on beliefs 2:15:16
Everybody knows that good parenting has a big impact on how kids turn out. Except that maybe they don't, because it doesn't. Incredible though it might seem, according to today's guest — economist Bryan Caplan, the author of Selfish Reasons To Have More Kids, The Myth of the Rational Voter, and The Case Against Education — the best evidence we have on the question suggests that, within reason, what parents do has little impact on how their children's lives play out once they're adults. Links to learn more, summary and full transcript. Of course, kids do resemble their parents. But just as we probably can't say it was attentive parenting that gave me my mother's nose, perhaps we can't say it was attentive parenting that made me succeed at school. Both the social environment we grow up in and the genes we receive from our parents influence the person we become, and looking at a typical family we can't really distinguish the impact of one from the other. But nature does offer us up a random experiment that can let us tell the difference: identical twins share all their genes, while fraternal twins only share half their genes. If you look at how much more similar outcomes are for identical twins than fraternal twins, you see the effect of sharing 100% of your genetic material, rather than the usual 50%. Double that amount, and you've got the full effect of genetic inheritance. Whatever unexplained variation remains is still up for grabs — and might be down to different experiences in the home, outside the home, or just random noise. The crazy thing about this research is that it says for a range of adult outcomes (e.g. years of education, income, health, personality, and happiness), it's differences in the genes children inherit rather than differences in parental behaviour that are doing most of the work. Other research suggests that differences in “out-of-home environment” take second place. Parenting style does matter for something, but it comes in a clear third. Bryan is quick to point out that there are several factors that help reconcile these findings with conventional wisdom about the importance of parenting. First, for some adult outcomes, parenting was a big deal (i.e. the quality of the parent/child relationship) or at least a moderate deal (i.e. drug use, criminality, and religious/political identity). Second, parents can and do influence you quite a lot — so long as you're young and still living with them. But as soon as you move out, the influence of their behaviour begins to wane and eventually becomes hard to spot. Third, this research only studies variation in parenting behaviour that was common among the families studied. And fourth, research on international adoptions shows they can cause massive improvements in health, income and other outcomes. But the findings are still remarkable, and imply many hyper-diligent parents could live much less stressful lives without doing their kids any harm at all. In this extensive interview Rob interrogates whether Bryan can really be right, or whether the research he's drawing on has taken a wrong turn somewhere. And that's just one topic we cover, some of the others being: • People’s biggest misconceptions about the labour market • Arguments against open borders • Whether most people actually vote based on self-interest • Whether philosophy should stick to common sense or depart from it radically • Personal autonomy vs. 
the possible benefits of government regulation • Bryan's perfect betting record • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:01:15) Labor Econ Versus the World (00:04:55) Open Borders (00:20:30) How much parenting matters (00:35:49) Self-Interested Voter Hypothesis (01:00:31) Why Bryan and Rob disagree so much on philosophy (01:12:04) Libertarian free will (01:25:10) The effective altruism community (01:38:46) Bryan’s betting record (01:48:19) Individual autonomy vs. welfare (01:59:06) Arrogant hedgehogs (02:10:43) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
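For readers curious about the twin-study arithmetic described in the summary above — how much more similar identical twins are than fraternal twins, doubled — here is a minimal sketch of that classic approximation. The correlations are made up for illustration and are not figures from the episode:

```python
# Minimal sketch of the twin-comparison logic described above (Falconer's approximation).
# Correlations below are made-up illustrations, not figures from the episode or Caplan's book.

r_identical = 0.60   # assumed outcome correlation for identical twins (share ~100% of genes)
r_fraternal = 0.35   # assumed outcome correlation for fraternal twins (share ~50% of genes)

# Going from 50% to 100% shared genes adds (r_identical - r_fraternal) of similarity,
# so doubling that gap approximates the full contribution of genetic inheritance.
heritability = 2 * (r_identical - r_fraternal)

# Whatever identical twins still don't share is attributed to non-shared environment and noise;
# the remainder is the "shared environment" term, which is where parenting lives.
non_shared_env = 1 - r_identical
shared_env = 1 - heritability - non_shared_env

print(f"genes ~{heritability:.0%}, shared environment ~{shared_env:.0%}, non-shared/noise ~{non_shared_env:.0%}")
# genes ~50%, shared environment ~10%, non-shared/noise ~40%
```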


1 #125 – Joan Rohlfing on how to avoid catastrophic nuclear blunders 2:13:42
Since the Soviet Union split into different countries in 1991, the pervasive fear of catastrophe that people lived with for decades has gradually faded from memory, and nuclear warhead stockpiles have declined by 83%. Nuclear brinksmanship, proxy wars, and the game theory of mutually assured destruction (MAD) have come to feel like relics of another era. Russia's invasion of Ukraine has changed all that. According to Joan Rohlfing — President of the Nuclear Threat Initiative, a Washington, DC-based nonprofit focused on reducing threats from nuclear and biological weapons — the annual risk of a 'global catastrophic nuclear event' never fell as low as people like to think, and for some time has been on its way back up. Links to learn more, summary and full transcript. At the same time, civil society funding for research and advocacy around nuclear risks is being cut in half over a period of years — despite the fact that at $60 million a year, it was already just a thousandth as much as the US spends maintaining its nuclear deterrent. If new funding sources are not identified to replace donors that are withdrawing, the existing pool of talent will have to leave for greener pastures, and most of the next generation will see a career in the field as unviable. While global poverty is on the decline and life expectancy increasing, the chance of a catastrophic nuclear event is probably trending in the wrong direction. Ukraine gave up its nuclear weapons in 1994 in exchange for security guarantees that turned out not to be worth the paper they were written on. States that have nuclear weapons (such as North Korea), states that are pursuing them (such as Iran), and states that have pursued nuclear weapons but since abandoned them (such as Libya, Syria, and South Africa) may take this as a valuable lesson in the importance of military power over promises. China has been expanding its arsenal and testing hypersonic glide missiles that can evade missile defences. Japan now toys with the idea of nuclear weapons as a way to ensure its security against its much larger neighbour. India and Pakistan both acquired nuclear weapons in the late 1980s and their relationship continues to oscillate from hostile to civil and back. At the same time, the risk that nuclear weapons could be interfered with due to weaknesses in computer security is far higher than during the Cold War, when systems were simpler and less networked. In the interview, Joan discusses several steps that can be taken in the immediate term, such as renewed efforts to extend and expand arms control treaties, changes to nuclear use policy, and the retirement of what NTI sees as vulnerable delivery systems, such as land-based silos. In the bigger picture, NTI seeks to keep hope alive that a better system than deterrence through mutually assured destruction remains possible. The threat of retaliation does indeed make nuclear wars unlikely, but it necessarily means the system fails in an incredibly destructive way: with the death of hundreds of millions if not billions. In the long run, even a tiny 1 in 500 risk of a nuclear war each year adds up to around an 18% chance of catastrophe over the century. 
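The 18% figure is just compounding arithmetic. A minimal sketch, assuming a constant, independent 1-in-500 risk each year:

```python
# The compounding arithmetic behind the claim above: a 1-in-500 annual risk of a
# global catastrophic nuclear event, assumed constant and independent across years.

annual_risk = 1 / 500
years = 100

p_no_catastrophe = (1 - annual_risk) ** years   # chance of getting through every year unscathed
p_catastrophe = 1 - p_no_catastrophe            # chance of at least one catastrophe this century

print(f"Chance of catastrophe over {years} years: {p_catastrophe:.0%}")  # ~18%
```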
In this conversation we cover all that, as well as: • How arms control treaties have evolved over the last few decades • Whether lobbying by arms manufacturers is an important factor shaping nuclear strategy • The Biden Nuclear Posture Review • How easily humanity might recover from a nuclear exchange • Implications for the use of nuclear energy Chapters: Rob’s intro (00:00:00) Joan’s EAG presentation (00:01:40) The interview begins (00:27:06) Nuclear security funding situation (00:31:09) Policy solutions for addressing a one-person or one-state risk factor (00:36:46) Key differences in the nuclear security field (00:40:44) Scary scenarios (00:47:02) Why the US shouldn’t expand its nuclear arsenal (00:52:56) The evolution of nuclear risk over the last 10 years (01:03:41) The interaction between nuclear weapons and cybersecurity (01:10:18) The chances of humanity bouncing back after nuclear war (01:13:52) What we should actually do (01:17:57) Could sensors be a game-changer? (01:22:39) Biden Nuclear Posture Review (01:27:50) Influence of lobbying firms (01:33:58) What NTI might do with an additional $20 million (01:36:38) Nuclear energy tradeoffs (01:43:55) Why we can’t rely on Stanislav Petrovs (01:49:49) Preventing war vs. building resilience for recovery (01:52:15) Places to donate other than NTI (01:54:25) Career advice (02:00:15) Why this problem is solvable (02:09:27) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…


1 #124 – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions 3:09:53
If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they're at risk of being seriously overrated and applied where they don't belong. Links to learn more, summary and full transcript. Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish. First, what do people mean by 'sustainability'? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running. Given that someone needs to keep paying, Karen tells us that in practice, 'sustainability' is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya only spends $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries. 'Participatory' also sounds nice, and inasmuch as it means leaders are accountable to the people they're trying to help, it probably is. But Karen tells us that in the field, ‘participatory’ usually means that recipients are expected to be involved in planning and delivering services themselves. While that might be suitable in some situations, it's hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing. Finally, making a programme 'holistic' could be smart, but as Karen lays out, it also has some major downsides. For one, it means you're doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it's hard to tell whether you're making progress, or really put your mind to focusing on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful 'holistic school health' programme that, if continued, was going to cost 3.5 times the entire school's budget. 
In today's in-depth conversation, Karen Levy and I chat about the above, as well as: • Why it pays to figure out how you'll interpret the results of an experiment ahead of time • The trouble with misaligned incentives within the development industry • Projects that don't deliver value for money and should be scaled down • How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren • Logistical challenges in reaching huge numbers of people with essential services • Lessons from Karen's many-decades career • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore…


1 #123 – Samuel Charap on why Putin invaded Ukraine, the risk of escalation, and how to prevent disaster 59:17
Russia's invasion of Ukraine is devastating the lives of Ukrainians, and so long as it continues there's a risk that the conflict could escalate to include other countries or the use of nuclear weapons. It's essential that NATO, the US, and the EU play their cards right to ideally end the violence, maintain Ukrainian sovereignty, and discourage any similar invasions in the future. But how? To pull together the most valuable information on how to react to this crisis, we spoke with Samuel Charap — a senior political scientist at the RAND Corporation, one of the US's foremost experts on Russia's relationship with former Soviet states, and co-author of Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia . Links to learn more, summary and full transcript. Samuel believes that Putin views the alignment of Ukraine with NATO as an existential threat to Russia — a perhaps unreasonable view, but a sincere one nevertheless. Ukraine has been drifting further into Western Europe's orbit and improving its defensive military capabilities, so Putin has concluded that if Russia wants to put a stop to that, there will never be a better time to act in the future. Despite early successes holding off the Russian military, Samuel is sceptical that time is on the Ukrainian side. If the war is to end before much of Ukraine is reduced to rubble, it will likely have to be through negotiation, rather than Russian defeat. The US policy response has so far been largely good, successfully balancing the need to punish Russia to dissuade large nations from bullying small ones in the future, while preventing NATO from being drawn into the war directly — which would pose a horrifying risk of escalation to a full nuclear exchange. The pressure from the general public to 'do something' might eventually cause national leaders to confront Russia more directly, but so far they are sensibly showing no interest in doing so. However, use of nuclear weapons remains a low but worrying possibility. Samuel is also worried that Russia may deploy chemical and biological weapons and blame it on the Ukrainians. Before war broke out, it's possible Russia could have been satisfied if Ukraine followed through on the Minsk agreements and committed not to join the EU and NATO. Or it might not have, if Putin was committed to war, come what may. In any case, most Ukrainians found those terms intolerable. At this point, the situation is even worse, and it's hard to see how an enduring ceasefire could be agreed upon. On top of the above, Russia is also demanding recognition that Crimea is part of Russia, and acceptance of the independence of the so-called Donetsk and Luhansk People's Republics. These conditions — especially the second — are entirely unacceptable to the Ukrainians. Hence the war continues, and could grind on for months or even years until one side is sufficiently beaten down to compromise on their core demands. Rob and Samuel discuss all of the above and also: • The chances that this conflict leads to a nuclear exchange • The chances of regime change in Russia • Whether the West should deliver MiG fighter jets to Ukraine • What are the implications if Sweden and/or Finland decide to join NATO? • What should NATO do now, and did it make any mistakes in the past? • What's the most likely situation for us to be looking at in three months' time? • Can Ukraine effectively win the war? 
Chapters: Rob’s intro (00:00:00) The interview begins (00:01:40) Putin's true motive (00:02:29) What the West could have done differently (00:07:44) Chances of Ukraine holding out (00:11:40) Chances of regime change in Russia (00:14:59) The good and the bad from the West so far (00:17:55) Should the West deliver MiG fighter jets to Ukraine? (00:19:57) "No-fly zones" (00:21:32) Chances that this conflict leads to a nuclear exchange (00:26:06) What listeners should do (00:36:01) Chances of biological or chemical weapons use (00:37:59) Best realistic outcome from here (00:39:29) Keeping the broader conversation sane (00:49:29) Why not promise to remove sanctions? (00:51:05) Pros and cons of Sweden and Finland joining NATO (00:52:53) The most likely situation in 3 months (00:53:58) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…


1 #122 – Michelle Hutchinson & Habiba Islam on balancing competing priorities and other themes from our 1-on-1 careers advising 1:36:26
One of 80,000 Hours' main services is our free one-on-one careers advising , which we provide to around 1,000 people a year. Today we speak to two of our advisors, who have each spoken to hundreds of people -- including many regular listeners to this show -- about how they might be able to do more good while also having a highly motivating career. Before joining 80,000 Hours, Michelle Hutchinson completed a PhD in Philosophy at Oxford University and helped launch Oxford's Global Priorities Institute, while Habiba Islam studied politics, philosophy, and economics at Oxford University and qualified as a barrister. Links to learn more, summary and full transcript. In this conversation, they cover many topics that recur in their advising calls, and what they've learned from watching advisees’ careers play out: • What they say when advisees want to help solve overpopulation • How to balance doing good against other priorities that people have for their lives • Why it's challenging to motivate yourself to focus on the long-term future of humanity, and how Michelle and Habiba do so nonetheless • How they use our latest guide to planning your career • Why you can specialise and take more risk if you're in a group • Gaps in the effective altruism community it would be really useful for people to fill • Stories of people who have spoken to 80,000 Hours and changed their career — and whether it went well or not • Why trying to have impact in multiple different ways can be a mistake The episode is split into two parts: the first section on The 80,000 Hours Podcast , and the second on our new show 80k After Hours . This is a shameless attempt to encourage listeners to our first show to subscribe to our second feed. That second part covers: • Whether just encouraging someone young to aspire to more than they currently are is one of the most impactful ways to spend half an hour • How much impact the one-on-one team has, the biggest challenges they face as a group, and different paths they could have gone down • Whether giving general advice is a doomed enterprise Chapters: Rob’s intro (00:00:00) The interview begins (00:02:24) Cause prioritization (00:09:14) Unexpected outcomes from 1-1 advice (00:18:10) Making time for thinking about these things (00:22:28) Balancing different priorities in life (00:26:54) Gaps in the effective altruism space (00:32:06) Plan change vignettes (00:37:49) How large a role the 1-1 team is playing (00:49:04) What about when our advice didn’t work out? (00:55:50) The process of planning a career (00:59:05) Why longtermism is hard (01:05:49) Want to get free one-on-one advice from our team? We're here to help. We’ve helped thousands of people formulate their plans and put them in touch with mentors. We've expanded our ability to deliver one-on-one meetings so are keen to help more people than ever before. If you're a regular listener to the show we're especially likely to want to speak with you. Learn about and apply for advising. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…


Today we're launching a new podcast called 80k After Hours . Like this show, it’ll mostly still explore the best ways to do good — and some episodes will be even more laser-focused on careers than most original episodes. But we’re also going to widen our scope, including things like how to solve pressing problems while also living a happy and fulfilling life, as well as releases that are just fun, entertaining or experimental. It’ll feature: • Conversations between staff on the 80,000 Hours team • More eclectic formats and topics — one episode could be a structured debate about 'human challenge trials', the next a staged reading of a play about the year 2750 • Niche content for specific audiences, such as high-school students, or active participants in the effective altruism community • Extras and outtakes from interviews on the original feed • 80,000 Hours staff interviewed on other podcasts • Audio versions of our new articles and research You can find it by searching for 80k After Hours in whatever podcasting app you use, or by going to 80000hours.org/after-hours-podcast .…


1 #121 – Matthew Yglesias on avoiding the pundit's fallacy and how much military intervention can be used for good 3:04:18
If you read polls saying that the public supports a carbon tax, should you believe them? According to today's guest — journalist and blogger Matthew Yglesias — it's complicated, but probably not. Links to learn more, summary and full transcript. Interpreting opinion polls about specific policies can be a challenge, and it's easy to trick yourself into believing what you want to believe. Matthew invented a term for a particular type of self-delusion called the 'pundit's fallacy': "the belief that what a politician needs to do to improve his or her political standing is do what the pundit wants substantively." If we want to advocate not just for ideas that would be good if implemented, but ideas that have a real shot at getting implemented, we should do our best to understand public opinion as it really is. The least trustworthy polls are published by think tanks and advocacy campaigns that would love to make their preferred policy seem popular. These surveys can be designed to nudge respondents toward the desired result — for example, by tinkering with question wording and order or shifting how participants are sampled. And if a poll produces the 'wrong answer', there's no need to publish it at all, so the 'publication bias' with these sorts of surveys is large. Matthew says polling run by firms or researchers without any particular desired outcome can be taken more seriously. But the results that we ought to give by far the most weight are those from professional political campaigns trying to win votes and get their candidate elected because they have both the expertise to do polling properly, and a very strong incentive to understand what the public really thinks. The problem is, campaigns run these expensive surveys because they think that having exclusive access to reliable information will give them a competitive advantage. As a result, they often don’t publish the findings, and instead use them to shape what their candidate says and does. Journalists like Matthew can call up their contacts and get a summary from people they trust. But being unable to publish the polling itself, they're unlikely to be able to persuade sceptics. When assessing what ideas are winners, one thing Matthew would like everyone to keep in mind is that politics is competitive, and politicians aren't (all) stupid. If advocating for your pet idea were a great way to win elections, someone would try it and win, and others would copy. One other thing to check that's more reliable than polling is real-world experience. For example, voters may say they like a carbon tax on the phone — but the very liberal Washington State roundly rejected one in ballot initiatives in 2016 and 2018. Of course you may want to advocate for what you think is best, even if it wouldn't pass a popular vote in the face of organised opposition. The public's ideas can shift, sometimes dramatically and unexpectedly. But at least you'll be going into the debate with your eyes wide open. In this extensive conversation, host Rob Wiblin and Matthew also cover: • How should a humanitarian think about US military interventions overseas? • From an 'effective altruist' perspective, was the US wrong to withdraw from Afghanistan? • Has NATO ultimately screwed over Ukrainians by misrepresenting the extent of its commitment to their independence? • What philosopher does Matthew think is underrated? • How big a risk is ubiquitous surveillance? • What does Matthew think about wild animal suffering, anti-ageing research, and autonomous weapons? 
• And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:02:05) Autonomous weapons (00:04:42) India and the US (00:07:25) Evidence-backed interventions for reducing the harm done by racial prejudices (00:08:38) Factory farming (00:10:44) Wild animal suffering (00:12:41) Vaccine development (00:15:20) Anti-ageing research (00:16:27) Should the US develop a semiconductor industry? (00:19:13) What we should do about various existential risks (00:21:58) What governments should do to stop the next pandemic (00:24:00) Comets and supervolcanoes (00:31:30) Nuclear weapons (00:34:25) Advances in AI (00:35:46) Surveillance systems (00:38:45) How Matt thinks about public opinion research (00:43:22) Issues with trusting public opinion polls (00:51:18) The influence of prior beliefs (01:05:53) Loss aversion (01:12:19) Matt's take on military adventurism (01:18:54) How military intervention looks as a humanitarian intervention (01:29:12) Where Matt does favour military intervention (01:38:27) Why smart people disagree (01:44:24) The case for NATO taking an active stance in Ukraine (01:57:34) One Billion Americans (02:08:02) Matt’s views on the effective altruism community (02:11:46) Matt’s views on the longtermist community (02:19:48) Matt’s struggle to become more of a rationalist (02:22:42) Megaprojects (02:26:20) The impact of Matt’s work (02:32:28) Matt’s philosophical views (02:47:58) The value of formal education (02:56:59) Worst thing Matt’s ever advocated for (03:02:25) Rob’s outro (03:03:22) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…


1 #120 – Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy 2:05:51
In 2014 Taiwan was rocked by mass protests against a proposed trade agreement with China that was about to be agreed without the usual Parliamentary hearings. Students invaded and took over the Parliament. But rather than chant slogans, instead they livestreamed their own parliamentary debate over the trade deal, allowing volunteers to speak both in favour and against. Instead of polarising the country more, this so-called 'Sunflower Student Movement' ultimately led to a bipartisan consensus that Taiwan should open up its government. That process has gradually made it one of the most communicative and interactive administrations anywhere in the world. Today's guest — programming prodigy Audrey Tang — initially joined the student protests to help get their streaming infrastructure online. After the students got the official hearings they wanted and went home, she was invited to consult for the government. And when the government later changed hands, she was invited to work in the ministry herself. Links to learn more, summary and full transcript. During six years as the country's 'Digital Minister' she has been helping Taiwan increase the flow of information between institutions and civil society and launched original experiments trying to make democracy itself work better. That includes developing new tools to identify points of consensus between groups that mostly disagree, building social media platforms optimised for discussing policy issues, helping volunteers fight disinformation by making their own memes, and allowing the public to build their own alternatives to government websites whenever they don't like how they currently work. As part of her ministerial role Audrey also sets aside time each week to help online volunteers working on government-related tech projects get the help they need. How does she decide who to help? She doesn't — that decision is made by members of an online community who upvote the projects they think are best. According to Audrey, a more collaborative mentality among the country's leaders has helped increase public trust in government, and taught bureaucrats that they can (usually) trust the public in return. Innovations in Taiwan may offer useful lessons to people who want to improve humanity's ability to make decisions and get along in large groups anywhere in the world. We cover: • Why it makes sense to treat Facebook as a nightclub • The value of having no reply button, and of getting more specific when you disagree • Quadratic voting and funding • Audrey’s experiences with the Sunflower Student Movement • Technologies Audrey is most excited about • Conservative anarchism • What Audrey’s day-to-day work looks like • Whether it’s ethical to eat oysters • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:02:04) Global crisis of confidence in government (00:07:06) Treating Facebook as a nightclub (00:10:55) Polis (00:13:48) The value of having no reply button (00:24:33) The value of getting more specific (00:26:13) Concerns with Polis (00:30:40) Quadratic voting and funding (00:42:16) Sunflower Student Movement (00:55:24) Promising technologies (01:05:44) Conservative anarchism (01:22:21) What Audrey’s day-to-day work looks like (01:33:54) Taiwanese politics (01:46:03) G0v (01:50:09) Rob’s outro (02:05:09) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…


1 #43 Classic episode - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines 2:35:28
Rebroadcast: this episode was originally released in September 2018. In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr Strangelove points out that “the whole point of this Doomsday Machine is lost if you keep it a secret – why didn’t you tell the world, eh?” The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: “The Premier loves surprises”. Daniel Ellsberg - leaker of the Pentagon Papers which helped end the Vietnam War and Nixon presidency - claims in his book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked. Links to learn more, summary and full transcript. The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today. If the system can’t contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere. As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity. You might think the United States would have a more sensible nuclear launch policy. You’d be wrong. As Ellsberg explains, based on first-hand experience as a nuclear war planner in the 50s, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth. The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe. The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival. Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed. This strategy only works, though, if you tell the enemy you’ve done it. Instead, since the 50s this delegation has been one of the United States’ most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity. Strategically, the setup is stupid. Ethically, it is monstrous. So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point they don’t risk the destruction of civilization? Daniel explores these questions eloquently and urgently in his book. Today we cover: • Why full disarmament today would be a mistake and the optimal number of nuclear weapons to hold • How well are secrets kept in the government? • What was the risk of the first atomic bomb test? • Do we have a reliable estimate of the magnitude of a ‘nuclear winter’? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.…


1 #35 Classic episode - Tara Mac Aulay on the audacity to fix the world without asking permission 1:23:34
Rebroadcast: this episode was originally released in June 2018. How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’. At 15 she took her first job - an entry-level position at a chain restaurant. Rather than accept her place, Tara took it on herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs 30% she was quickly promoted, and at 16 sent in to overhaul dozens of failing stores in a final effort to save them from closure. That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator. In this episode Tara shows how the ability to do practical things, avoid major screw-ups, and design systems that scale, is both rare and precious. Links to learn more, summary and full transcript. People with an operations mindset spot failures others can't see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face. But as Tara's experience shows they need to figure out what actually motivates the authorities who often try to block their reforms. We explore how people with this skillset can do as much good as possible, what 80,000 Hours got wrong in our article 'Why operations management is one of the biggest bottlenecks in effective altruism’ , as well as: • Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform. • How a student can save a hospital millions with a simple spreadsheet model. • The sociology of Bhutan and how medicine in the developing world often makes things worse rather than better. • What most people misunderstand about operations, and how to tell if you have what it takes. • And finally, operations jobs people should consider applying for. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.…


1 #67 Classic episode – David Chalmers on the nature and ethics of consciousness 4:42:05
Rebroadcast: this episode was originally released in December 2019. What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience. Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off. The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem': "Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?" Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely. So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different. Links to learn more, summary and full transcript. Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain. Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human? Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself. Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind' , which argued against then-dominant materialist theories of consciousness. This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter. These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything? Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far. Get this episode by subscribing to our show on the world’s most pressing problems and how to solve them: search for 80,000 Hours in your podcasting app. 
The 80,000 Hours Podcast is produced by Keiran Harris.…


1 #59 Classic episode - Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable 1:43:05
Rebroadcast: this episode was originally released in June 2019. It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition. The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably. In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism. How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks? Sunstein — co-author of Nudge , Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens . He pulls together three phenomena which social scientists have studied in recent decades: preference falsification , variable thresholds for action , and group polarisation . If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable. Links to learn more, summary and full transcript. In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions. According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case. In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss: • How much people misrepresent their views in democratic countries. • Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis. • When is it justified to encourage your own group to polarise? • Sunstein's difficult experiences as a pioneer of animal rights law. • Whether activists can do better by spending half their resources on public opinion surveys. • Should people be more or less outspoken about their true views? • What might be the next social revolution to take off? • How can we learn about social movements that failed and disappeared? • How to find out what people really think. Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the transcript on our site. The 80,000 Hours Podcast is produced by Keiran Harris.…
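As a toy illustration of why 'variable thresholds for action' make change abrupt and unpredictable — this is my own sketch of a Granovetter-style cascade, not a model taken directly from Sunstein's book — consider two societies whose members each join a movement only once enough others already have:

```python
# Toy illustration of "variable thresholds for action": each person joins a movement
# once the number of people already protesting meets their personal (hidden) threshold.
# My own sketch for illustration, not Sunstein's exact model.

def cascade_size(thresholds: list[int]) -> int:
    """Return how many people end up joining, starting from those with threshold 0."""
    joined = 0
    while True:
        newly_willing = sum(1 for t in thresholds if t <= joined)
        if newly_willing == joined:
            return joined
        joined = newly_willing

# Two societies with almost identical (and privately held) thresholds...
society_a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]   # an unbroken chain: each joiner tips the next person
society_b = [0, 1, 2, 3, 5, 5, 6, 7, 8, 9]   # one person's threshold is 5 instead of 4

print(cascade_size(society_a))  # 10 -- everyone joins; change looks sudden and total
print(cascade_size(society_b))  # 4  -- the cascade stalls; the society looks stably quiet
```

A one-person difference in hidden thresholds separates a society that looks stably quiet from one that flips completely, which is part of why observers (and participants) so rarely see the change coming.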


1 #119 – Andrew Yang on our very long-term future, and other topics most politicians won’t touch 1:25:57
Andrew Yang — past presidential candidate, founder of the Forward Party, and leader of the 'Yang Gang' — is kind of a big deal, but is particularly popular among listeners to The 80,000 Hours Podcast. Maybe that's because he's willing to embrace topics most politicians stay away from, like universal basic income, term limits for members of Congress, or what might happen when AI replaces whole industries. Links to learn more, summary and full transcript. But even those topics are pretty vanilla compared to our usual fare on The 80,000 Hours Podcast. So we thought it’d be fun to throw Andrew some stranger or more niche questions we hadn't heard him comment on before, including: 1. What would your ideal utopia in 500 years look like? 2. Do we need more public optimism today? 3. Is positively influencing the long-term future a key moral priority of our time? 4. Should we invest far more to prevent low-probability risks? 5. Should we think of future generations as an interest group that's disenfranchised by their inability to vote? 6. The folks who worry that advanced AI is going to go off the rails and destroy us all... are they crazy, or a valuable insurance policy? 7. Will people struggle to live fulfilling lives once AI systems remove the economic need to 'work'? 8. Andrew is a huge proponent of ranked-choice voting. But what about 'approval voting' — where basically you just get to say “yea” or “nay” to every candidate that's running — which some experts prefer? 9. What would Andrew do with a billion dollars to keep the US a democracy? 10. What does Andrew think about the effective altruism community? 11. What's one thing we should do to reduce the risk of nuclear war? 12. Will Andrew's new political party get Trump elected by splitting the vote, the same way Nader got Bush elected back in 2000? As it turns out, Rob and Andrew agree on a lot, so the episode is less a debate than a chat about ideas that aren’t mainstream yet... but might be one day. They also talk about: • Andrew’s views on alternative meat • Whether seniors have too much power in American society • Andrew’s DC lobbying firm on behalf of humanity • How the rest of the world could support the US • The merits of 18-year term limits • What technologies Andrew is most excited about • How much the US should spend on foreign aid • Persistence and prevalence of inflation in the US economy • And plenty more Chapters: Rob’s intro (00:00:00) The interview begins (00:01:38) Andrew’s hopes for the year 2500 (00:03:10) Tech over the next century (00:07:03) Utopia for realists (00:10:41) Most likely way humanity fails (00:12:43) What Andrew would do with a billion dollars (00:14:44) Approval voting vs. 
ranked-choice voting (00:19:51) The worry that third party candidates could cause harm (00:21:12) Investment in existential risk reduction (00:25:18) Future generations as a disenfranchised interest group (00:30:37) Humanity Forward (00:32:05) Best way the rest of the world could support the US (00:37:17) Recent advances in AI (00:39:56) Artificial general intelligence (00:46:38) The Windfall Clause (00:49:39) The alignment problem (00:53:02) 18-year term limits (00:56:21) Effective altruism and longtermism (01:00:44) Persistence and prevalence of inflation in the US economy (01:01:25) Downsides of policies Andrew advocates for (01:02:08) What Andrew would have done differently with COVID (01:04:54) Fighting for attention in the media (01:09:25) Right ballpark level of foreign aid for the US (01:11:15) Government science funding (01:11:58) Nuclear weapons policy (01:15:06) US-China relationship (01:16:20) Human challenge trials (01:18:59) Forecasting accuracy (01:20:17) Upgrading public schools (01:21:41) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #118 – Jaime Yassif on safeguarding bioscience to prevent catastrophic lab accidents and bioweapons development 2:15:40
If a rich country were really committed to pursuing an active biological weapons program, there's not much we could do to stop them. With enough money and persistence, they'd be able to buy equipment and hire people to carry out the work. But what we can do is intervene before they make that decision. Today's guest, Jaime Yassif — Senior Fellow for global biological policy and programs at the Nuclear Threat Initiative (NTI) — thinks that stopping states from wanting to pursue dangerous bioscience in the first place is one of our key lines of defence against global catastrophic biological risks (GCBRs). Links to learn more, summary and full transcript. It helps to understand why countries might consider developing biological weapons. Jaime says there are three main possible reasons: 1. Fear of what their adversary might be up to 2. Belief that they could gain a tactical or strategic advantage, with limited risk of getting caught 3. Belief that even if they are caught, they are unlikely to be held accountable In response, Jaime has developed a three-part recipe to create systems robust enough to meaningfully change the cost-benefit calculation. The first is to substantially increase transparency. If countries aren't confident about what their neighbours or adversaries are actually up to, misperceptions could lead to arms races that neither side desires. But if you know with confidence that no one around you is pursuing a biological weapons programme, you won't feel motivated to pursue one yourself. The second is to strengthen the capabilities of the United Nations' system to investigate the origins of high-consequence biological events — whether naturally emerging, accidental or deliberate — and to make sure that the responsibility to figure out the source of bio-events of unknown origin doesn't fall between the cracks of different existing mechanisms. The ability to quickly discover the source of emerging pandemics is important both for responding to them in real time and for deterring future bioweapons development or use. And the third is meaningful accountability. States need to know that the consequences for getting caught in a deliberate attack are severe enough to make it a net negative in expectation to go down this road in the first place. But having a good plan and actually implementing it are two very different things, and today's episode focuses heavily on the practical steps we should be taking to influence both governments and international organisations, like the WHO and UN — and to help them maximise their effectiveness in guarding against catastrophic biological risks. Jaime and Rob explore NTI's current proposed plan for reducing global catastrophic biological risks, and discuss: • The importance of reducing emerging biological risks associated with rapid technology advances • How we can make it a lot harder for anyone to deliberately or accidentally produce or release a really dangerous pathogen • The importance of having multiple theories of risk reduction • Why Jaime's more focused on prevention than response • The history of the Biological Weapons Convention • Jaime's disagreements with the effective altruism community • And much more And if you're interested in dedicating your career to reducing GCBRs, stick around to the end of the episode to get Jaime's advice — including on how people outside of the US can best contribute, and how to compare career opportunities in academia vs think tanks, and nonprofits vs national governments vs international orgs.
Chapters: Rob’s intro (00:00:00) The interview begins (00:02:32) Categories of global catastrophic biological risks (00:05:24) Disagreements with the effective altruism community (00:07:39) Stopping the first person from getting infected (00:11:51) Shaping intent (00:15:51) Verification and the Biological Weapons Convention (00:25:31) Attribution (00:37:15) How to actually implement a new idea (00:50:54) COVID-19: natural pandemic or lab leak? (00:53:31) How much can we rely on traditional law enforcement to detect terrorists? (00:58:20) Constraining capabilities (01:01:24) The funding landscape (01:06:56) Oversight committees (01:14:20) Just winning the argument (01:20:17) NTI’s vision (01:27:39) Suppliers of goods and services (01:33:24) Publishers (01:39:41) Biggest weaknesses of NTI platform (01:42:29) Careers (01:48:31) How people outside of the US can best contribute (01:54:10) Academia vs think tanks vs nonprofits vs government (01:59:21) International cooperation (02:05:40) Best things about living in the US, UK, China, and Israel (02:11:16) Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore…
1 #117 – David Denkenberger on using paper mills and seaweed to feed everyone in a catastrophe, ft Sahil Shah 3:08:13
If there's a nuclear war followed by nuclear winter, and the sun is blocked out for years, most of us are going to starve, right? Well, currently, probably we would, because humanity hasn't done much to prevent it. But it turns out that an ounce of forethought might be enough for most people to get the calories they need to survive, even in a future as grim as that one. Today's guest is engineering professor Dave Denkenberger, who co-founded the Alliance to Feed the Earth in Disasters (ALLFED), which has the goal of finding ways humanity might be able to feed itself for years without relying on the sun. Over the last seven years, Dave and his team have turned up options from the mundane, like mushrooms grown on rotting wood, to the bizarre, like bacteria that can eat natural gas or electricity itself. Links to learn more, summary and full transcript. One option stands out as potentially able to feed billions: finding a way to eat wood ourselves. Even after a disaster, a huge amount of calories will be lying around, stored in wood and other plant cellulose. The trouble is that, even though cellulose is basically a lot of sugar molecules stuck together, humans can't eat wood. But we do know how to turn wood into something people can eat. We can grind wood up in already existing paper mills, then mix the pulp with enzymes that break the cellulose into sugar and the hemicellulose into other sugars. Another option that shows a lot of promise is seaweed. Buffered by the water around them, ocean life wouldn't be as affected by the lower temperatures resulting from the sun being obscured. Sea plants are also already used to growing in low light, because the water above them already shades them to some extent. Dave points out that "there are several species of seaweed that can still grow 10% per day, even with the lower light levels in nuclear winter and lower temperatures. ... Not surprisingly, with that 10% growth per day, assuming we can scale up, we could actually get up to 160% of human calories in less than a year." Of course it will be easier to scale up seaweed production if it's already a reasonably sized industry. At the end of the interview, we're joined by Sahil Shah, who is trying to expand seaweed production in the UK with his business Sustainable Seaweed. While a diet of seaweed and trees turned into sugar might not seem that appealing, the team at ALLFED also thinks several perfectly normal crops could also make a big contribution to feeding the world, even in a truly catastrophic scenario. Those crops include potatoes, canola, and sugar beets, which are currently grown in cool low-light environments. Many of these ideas could turn out to be misguided or impractical in real-world conditions, which is why Dave and ALLFED are raising money to test them out on the ground. They think it's essential to show these techniques can work so that should the worst happen, people turn their attention to producing more food rather than fighting one another over the small amount of food humanity has stockpiled. 
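As a rough sanity check on that 10%-per-day figure, compound growth gets to 160% of calorie needs surprisingly quickly. This is a back-of-the-envelope sketch rather than ALLFED's actual model, and the starting share of calories is an assumed placeholder:

```python
import math

# Rough compounding check on the quote above — not ALLFED's actual model.
growth_rate = 0.10        # "10% per day" growth figure from the episode
starting_share = 0.001    # assumption: seaweed initially supplies 0.1% of global calorie needs
target_share = 1.60       # 160% of human calorie needs

days = math.log(target_share / starting_share) / math.log(1 + growth_rate)
print(f"Roughly {days:.0f} days of sustained 10%/day growth")   # ~77 days
```

Even from a tiny base the exponential term dominates, so the 'less than a year' framing leaves plenty of slack for the practical scale-up bottlenecks that pure compounding ignores.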
In this conversation, Rob, Dave, and Sahil discuss the above, as well as: • How much one can trust the sort of economic modelling ALLFED does • Bacteria that turn natural gas or electricity into protein • How to feed astronauts in space with nuclear power • What individuals can do to prepare themselves for global catastrophes • Whether we should worry about humanity running out of natural resources • How David helped save $10 billion worth of electricity through energy efficiency standards • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:02:36) Resilient foods recap (00:04:27) Cost effectiveness recap (00:08:07) Turning fiber or wood or cellulose into sugar (00:10:30) Redirecting human-edible food away from animals (00:22:46) Seaweed production (00:26:33) Crops that can handle lower temperatures or lower light (00:35:24) Greenhouses (00:40:51) How much to trust this economic modeling (00:43:50) Global cooperation (00:51:16) People feeding themselves using these methods (00:57:15) NASA and ALLFED (01:04:47) Kinds of catastrophes (01:15:16) Is New Zealand overrated? (01:25:35) Should listeners be doing anything to prepare for possible disasters? (01:28:43) Cost effectiveness of work on EMPs (01:30:43) The future of ALLFED (01:33:34) Opportunities at ALLFED (01:40:49) Why Dave is optimistic around bigger-picture scarcity issues (01:46:58) Energy return on energy invested (01:56:36) Nitrogen and phosphorus (02:03:25) Energy and food prices (02:07:18) Sustainable Seaweed with Sahil Shah (02:21:44) Locusts (02:38:33) The effect of COVID on food supplies (02:44:01) How much food prices would spike in a disaster (02:50:46) How Dave helped to save ~$10 billion worth of energy (02:56:33) What it’s like to live in Alaska (03:03:18) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #116 – Luisa Rodriguez on why global catastrophes seem unlikely to kill us all 3:45:44
If modern human civilisation collapsed — as a result of nuclear war, severe climate change, or a much worse pandemic than COVID-19 — billions of people might die. That's terrible enough to contemplate. But what’s the probability that rather than recover, the survivors would falter and humanity would actually disappear for good? It's an obvious enough question, but very few people have spent serious time looking into it -- possibly because it cuts across history, economics, and biology, among many other fields. There's no Disaster Apocalypse Studies department at any university, and governments have little incentive to plan for a future in which their country probably no longer even exists. The person who may have spent the most time looking at this specific question is Luisa Rodriguez — who has conducted research at Rethink Priorities, Oxford University's Future of Humanity Institute, the Forethought Foundation, and now here, at 80,000 Hours. Links to learn more, summary and full transcript. She wrote a series of articles earnestly trying to foresee how likely humanity would be to recover and build back after a full-on civilisational collapse. There are a couple of main stories people put forward for how a catastrophe like this would kill every single human on Earth — but Luisa doesn’t buy them. Story 1 : Nuclear war has led to nuclear winter. There's a 10-year period during which a lot of the world is really inhospitable to agriculture. The survivors just aren't able to figure out how to feed themselves in the time period, so everyone dies of starvation or cold. Why Luisa doesn’t buy it : Catastrophes will almost inevitably be non-uniform in their effects. If 80,000 people survive, they’re not all going to be in the same city — it would look more like groups of 5,000 in a bunch of different places. People in some places will starve, but those in other places, such as New Zealand, will be able to fish, eat seaweed, grow potatoes, and find other sources of calories. It’d be an incredibly unlucky coincidence if the survivors of a nuclear war -- likely spread out all over the world -- happened to all be affected by natural disasters or were all prohibitively far away from areas suitable for agriculture (which aren’t the same areas you’d expect to be attacked in a nuclear war). Story 2 : The catastrophe leads to hoarding and violence, and in addition to people being directly killed by the conflict, it distracts everyone so much from the key challenge of reestablishing agriculture that they simply fail. By the time they come to their senses, it’s too late -- they’ve used up too much of the resources they’d need to get agriculture going again. Why Luisa doesn’t buy it : We‘ve had lots of resource scarcity throughout history, and while we’ve seen examples of conflict petering out because basic needs aren’t being met, we’ve never seen the reverse. And again, even if this happens in some places -- even if some groups fought each other until they literally ended up starving to death — it would be completely bizarre for it to happen to every group in the world. You just need one group of around 300 people to survive for them to be able to rebuild the species. 
In this wide-ranging and free-flowing conversation, Luisa and Rob also cover: • What the world might actually look like after one of these catastrophes • The most valuable knowledge for survivors • How fast populations could rebound • ‘Boom and bust’ climate change scenarios • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:02:37) Recovering from a serious collapse of civilization (00:11:41) Existing literature (00:14:52) Fiction (00:20:42) Types of disasters (00:23:13) What the world might look like after a catastrophe (00:29:09) Nuclear winter (00:34:34) Stuff that might stick around (00:38:58) Grace period (00:42:39) Examples of human ingenuity in tough situations (00:48:33) The most valuable knowledge for survivors (00:57:23) Would people really work together? (01:09:00) Radiation (01:27:08) Learning from the worst pandemics (01:31:40) Learning from fallen civilizations (01:36:30) Direct extinction (01:45:30) Indirect extinction (02:01:53) Rapid recovery vs. slow recovery (02:05:01) Risk of culture shifting against science and tech (02:15:33) Resource scarcity (02:23:07) How fast could populations rebound (02:37:07) Implications for what we ought to do right now (02:43:52) How this work affected Luisa’s views (02:54:00) Boom and bust climate change scenarios (02:57:06) Stagnation and cold wars (03:01:18) How Luisa met her biological father (03:18:23) If Luisa had to change careers (03:40:38 ) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #115 – David Wallace on the many-worlds theory of quantum mechanics and its implications 3:09:47
Quantum mechanics — our best theory of atoms, molecules, and the subatomic particles that make them up — underpins most of modern physics. But there are varying interpretations of what it means, all of them controversial in their own way. Famously, quantum theory predicts that with the right setup, a cat can be made to be alive and dead at the same time. On the face of it, that sounds either meaningless or ridiculous. According to today’s guest, David Wallace — professor at the University of Pittsburgh and one of the world's leading philosophers of physics — there are three broad ways experts react to this apparent dilemma: 1. The theory must be wrong, and we need to change our philosophy to fix it. 2. The theory must be wrong, and we need to change our physics to fix it. 3. The theory is OK, and cats really can in some way be alive and dead simultaneously. (David and Rob do their best to introduce quantum mechanics in the first 35 minutes of the episode, but it isn't the easiest thing to explain via audio alone. So if you need a refresher before jumping in, we recommend checking out our links to learn more, summary and full transcript. ) In 1955, physicist Hugh Everett bit the bullet on Option 3 and proposed Wallace's preferred solution to the puzzle: each time it's faced with a ‘quantum choice,’ the universe 'splits' into different worlds. Anything that has a probability greater than zero (from the perspective of quantum theory) happens in some branch — though more probable things happen in far more branches. While not a consensus position, the ‘many-worlds’ approach is one of the top three most popular ways to make sense of what's going on, according to surveys of relevant experts. Setting aside whether it's correct for a moment, one thing that's not often spelled out is what this approach would concretely imply if it were right. Is there a world where Rob (the show's host) can roll a die a million times, and it comes up 6 every time? As David explains in this episode: absolutely, that’s completely possible — and if Rob rolled a die a million times, there would be a world like that. Is there a world where Rob becomes president of the US? David thinks probably not. The things stopping Rob from becoming US president don’t seem down to random chance at the quantum level. Is there a world where Rob deliberately murdered someone this morning? Only if he’s already predisposed to murder — becoming a different person in that way probably isn’t a matter of random fluctuations in our brains. Is there a world where a horse-version of Rob hosts the 80,000 Horses Podcast? Well, due to the chance involved in evolution, it’s plausible that there are worlds where humans didn’t evolve, and intelligent horses have in some sense taken their place. And somewhere, fantastically distantly across the vast multiverse, there might even be a horse named Rob Wiblin who hosts a podcast, and who sounds remarkably like Rob. Though even then — it wouldn’t actually be Rob in the way we normally think of personal identity. Rob and David also cover: • If the many-worlds interpretation is right, should that change how we live our lives? • Are our actions getting more (or less) important as the universe splits into finer and finer threads? • Could we conceivably influence other branches of the multiverse? 
• Alternatives to the many-worlds interpretation • The practical value of physics today • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:02:15) Introduction to quantum mechanics (00:08:10) Why does quantum mechanics need an interpretation? (00:19:42) Quantum mechanics in basic language (00:30:37) Quantum field theory (00:33:13) Different theories of quantum mechanics (00:38:49) Many-worlds theory (00:43:14) What stuff actually happens (00:52:09) Can we count the worlds? (00:59:55) Why anyone believes any of these (01:05:01) Changing the physics (01:10:41) Changing the philosophy (01:14:21) Instrumentalism vs. realism (01:21:42) Objections to many-worlds (01:35:26) Why a consensus hasn’t emerged (01:50:59) Practical implications of the many-worlds theory (01:57:11) Are our actions getting more or less important? (02:04:21) Does utility increase? (02:12:02) Could we influence other branches? (02:17:01) Should you do unpleasant things first? (02:19:52) Progress in physics over the last 50 years (02:30:55) Practical value of physics today (02:35:24) Physics careers (02:43:56) Subjective probabilities (02:48:39) The philosophy of time (02:50:14) David’s experience at Oxford (02:59:51) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel and Katy Moore…
1 #114 – Maha Rehman on working with governments to rapidly deliver masks to millions of people 1:42:55
It’s hard to believe, but until recently there had never been a large field trial that addressed these simple and obvious questions: 1. When ordinary people wear face masks, does it actually reduce the spread of respiratory diseases? 2. And if so, how do you get people to wear masks more often? It turns out the first question is remarkably challenging to answer, but it's well worth doing nonetheless. Among other reasons, the first good trial of this prompted Maha Rehman — Policy Director at the Mahbub Ul Haq Research Centre — as well as a range of others to immediately use the findings to help tens of millions of people across South Asia, even before the results were public. Links to learn more, summary and full transcript. The groundbreaking Bangladesh RCT that inspired her to take action found that: • A 30% increase in mask wearing reduced total infections by 10%. • The effect was more pronounced for surgical masks compared to cloth masks (plus ~50% effectiveness). • Mask wearing also led to an increase in social distancing. • Of all the incentives tested, the only thing that impacted mask wearing was their colour (people preferred blue over green, and red over purple!). The research was done by social scientists at Yale, Berkeley, and Stanford, among others. It applied a program they called ‘NORM’ in half of 600 villages in which about 350,000 people lived. NORM has four components, which the researchers expected would work well for the general public: N: no-cost distribution O: offering information R: reinforcing the message and the information in the field M: modeling Basically you make sure a community has enough masks and you tell them why it’s important to wear them. You also reinforce the message periodically in markets and mosques, and via role models and promoters in the community itself. Tipped off that these positive findings were on the way, Maha took this program and rushed to put it into action in Lahore, Pakistan, a city with a population of about 13 million, before the Delta variant could sweep through the region. Maha had already been doing a lot of data work on COVID policy over the past year, and that allowed her to quickly reach out to the relevant stakeholders — getting them interested and excited. Governments aren’t exactly known for being super innovative, but in March and April Lahore was going through a very deadly third wave of COVID — so the commissioner quickly jumped on this approach, providing an endorsement as well as resources. Together with the original researchers, Maha and her team at LUMS collected baseline data that allowed them to map the mask-wearing rate in every part of Lahore, in both markets and mosques. And then based on that data, they adapted the original rural-focused model to a very different urban setting. The scale of this project was daunting, and in today’s episode Maha tells Rob all about the day-to-day experiences and stresses required to actually make it happen. 
They also discuss: • The challenges of data collection in this context • Disasters and emergencies she had to respond to in the middle of the project • What she learned from working closely with the Lahore Commissioner's Office • How to get governments to provide you with large amounts of data for your research • How she adapted from a more academic role to a ‘getting stuff done’ role • How to reduce waste in government procurement • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:01:33) Bangladesh RCT (00:06:24) The NORM model (00:08:34) Results of the experiment (00:10:46) Experimental design (00:20:35) Adapting the findings from Bangladesh to Lahore (00:23:55) Collecting data (00:34:09) Working with governments (00:38:38) Coordination (00:44:53) Disasters and emergencies (00:56:01) Sending out masks to every single person in Lahore (00:59:15) How Maha adapted to her role (01:07:17) Logistic aptitude (01:11:45) Disappointments (01:14:13) Procurement RCT (01:16:51) What we can learn (01:31:18) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 We just put up a new compilation of ten core episodes of the show 3:02
We recently launched a new podcast feed that might be useful to you and people you know. It's called Effective Altruism: Ten Global Problems , and it's a collection of ten top episodes of this show, selected to help listeners quickly get up to speed on ten pressing problems that the effective altruism community is working to solve. It's a companion to our other compilation Effective Altruism: An Introduction , which explores the big picture debates within the community and how to set priorities in order to have the greatest impact. These ten episodes cover: The cheapest ways to improve education in the developing world How dangerous is climate change and what are the most effective ways to reduce it? Using new technologies to prevent another disastrous pandemic Ways to simultaneously reduce both police misconduct and crime All the major approaches being taken to end factory farming How advances in artificial intelligence could go very right or very wrong Other big threats to the future of humanity — such as a nuclear war — and how can we make our species wiser and more resilient One problem few even recognise as a problem at all The selection is ideal for people who are completely new to the effective altruist way of thinking, as well as those who are familiar with effective altruism but new to The 80,000 Hours Podcast. If someone in your life wants to get an understanding of what 80,000 Hours or effective altruism are all about, and prefers to listen to things rather than read, this is a great resource to direct them to. You can find it by searching for effective altruism in whatever podcasting app you use, or by going to 80000hours.org/ten . We'd love to hear how you go listening to it yourself, or sharing it with others in your life. Get in touch by emailing podcast@80000hours.org.…
1 #113 – Varsha Venugopal on using gossip to help vaccinate every child in India 2:05:44
Our failure to make sure all kids globally get all of their basic vaccinations leads to 1.5 million child deaths every year. According to today's guest, Varsha Venugopal, for the great majority this has nothing to do with weird conspiracy theories or medical worries — in India 80% of undervaccinated children are already getting some shots. They just aren't getting all of them, for the tragically mundane reason that life can get in the way. Links to learn more, summary and full transcript. As Varsha says, we're all sometimes guilty of "valuing our present very differently from the way we value the future", leading to short-term thinking whether about getting vaccines or going to the gym. So who should we call on to help fix this universal problem? The government, extended family, or maybe village elders? Varsha says that research shows the most influential figures might actually be local gossips. In 2018, Varsha heard about the ideas around effective altruism for the first time. By the end of 2019, she'd gone through Charity Entrepreneurship's strategy incubation program, and quit her normal, stable job to co-found Suvita, a non-profit focused on improving the uptake of immunization in India, which relies on two models: 1. Sending SMS reminders directly to parents and carers 2. Gossip The first one is intuitive. You collect birth registers, digitize the paper records, process the data, and send out personalised SMS messages to hundreds of thousands of families. The effect size varies depending on the context, but these messages usually increase vaccination rates by 8-18%. The second approach is less intuitive and isn't yet entirely understood either. Here's what happens: Suvita calls up random households and asks, "if there were an event in town, who would be most likely to tell you about it?" In over 90% of cases, the households gave both the name and the phone number of a local 'influencer'. And when tracked down, more than 95% of the most frequently named 'influencers' agreed to become vaccination ambassadors. Those ambassadors then go on to share information about when and where to get vaccinations, in whatever way seems best to them. When tested by a team of top academics at the Poverty Action Lab (J-PAL), this approach raised vaccination rates by 10 percentage points, or about 27% (the quick check below makes the conversion explicit). The advantage of SMS reminders is that they're easier to scale up. But Varsha says the ambassador program isn't actually that far from being a scalable model as well. A phone call to get a name, another call to ask the influencer to join, and boom — you might have just covered a whole village rather than just a single family. Varsha says that Suvita has two major challenges on the horizon: 1. Maintaining the same degree of oversight of their surveyors as they attempt to scale up the program, in order to ensure the program continues to work just as well 2. Deciding between focusing on reaching a few additional districts now vs. making longer-term investments which could build up to a future exponential increase.
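Here is the quick check referred to above. The baseline is inferred from the two quoted figures rather than stated in the episode:

```python
# The J-PAL result above is quoted two ways: +10 percentage points, or "about 27%".
# The baseline below is backed out from those two figures, not stated in the episode.
lift_pp = 0.10
lift_relative = 0.27
implied_baseline = lift_pp / lift_relative
print(f"Implied baseline vaccination rate: ~{implied_baseline:.0%}")        # ~37%
print(f"Implied rate with ambassadors: ~{implied_baseline + lift_pp:.0%}")  # ~47%
```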
In this episode, Varsha and Rob talk about making these kinds of high-stakes, high-stress decisions, as well as: • How Suvita got started, and their experience with Charity Entrepreneurship • Weaknesses of the J-PAL studies • The importance of co-founders • Deciding how broad a program should be • Varsha’s day-to-day experience • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:01:47) The problem of undervaccinated kids (00:03:16) Suvita (00:12:47) Evidence on SMS reminders (00:20:30) Gossip intervention (00:28:43) Why parents aren’t already prioritizing vaccinations (00:38:29) Weaknesses of studies (00:43:01) Biggest challenges for Suvita (00:46:05) Staff location (01:06:57) Charity Entrepreneurship (01:14:37) The importance of co-founders (01:23:23) Deciding how broad a program should be (01:28:29) Careers at Suvita (01:34:11) Varsha’s advice (01:42:30) Varsha’s day-to-day experience (01:56:19) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #112 – Carl Shulman on the common-sense case for existential risk work and its practical implications 3:48:40
Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster. According to Carl Shulman, research associate at Oxford University's Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future. Links to learn more, summary and full transcript. The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs: • The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American. • So saving all US citizens at any given point in time would be worth $1,300 trillion. • If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice ), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone. • Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today. This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein. If the case is clear enough, why hasn't it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve? Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds. Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we've never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on. Today’s episode is in part our way of trying to improve this situation. 
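The back-of-the-envelope numbers above are easy to reproduce. A minimal sketch, assuming a US population of roughly 330 million (the other inputs are the figures quoted in the episode):

```python
# Back-of-the-envelope version of the argument above, using the episode's figures.
# The US population is an assumption (roughly 330 million); the rest are as quoted.
value_per_life = 4e6        # up to $4 million per American life saved
us_population = 330e6       # assumed population
extinction_risk = 1 / 6     # Toby Ord's ballpark for this century
risk_reduction = 0.01       # a 1% reduction in that risk

value_of_all_us_lives = value_per_life * us_population
worth_spending = value_of_all_us_lives * extinction_risk * risk_reduction

print(f"All US lives valued at: ${value_of_all_us_lives:,.0f}")          # ~$1,320 trillion
print(f"Worth spending on a 1% risk reduction: ${worth_spending:,.0f}")  # ~$2.2 trillion
```

Carl's point is that sensible interventions would cost far less than $2.2 trillion, which is where the benefit-to-cost ratio of 1000:1 or more comes from, counting American lives alone.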
In today’s wide-ranging conversation, Carl and Rob also cover: • A few reasons Carl isn't excited by 'strong longtermism' • How x-risk reduction compares to GiveWell recommendations • Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change • The history of bioweapons • Whether gain-of-function research is justifiable • Successes and failures around COVID-19 • The history of existential risk • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:01:34) A few reasons Carl isn't excited by strong longtermism (00:03:47) Longtermism isn’t necessary for wanting to reduce big x-risks (00:08:21) Why we don’t adequately prepare for disasters (00:11:16) International programs to stop asteroids and comets (00:18:55) Costs and political incentives around COVID (00:23:52) How x-risk reduction compares to GiveWell recommendations (00:34:34) Solutions for asteroids, comets, and supervolcanoes (00:50:22) Solutions for climate change (00:54:15) Solutions for nuclear weapons (01:02:18) The history of bioweapons (01:22:41) Gain-of-function research (01:34:22) Solutions for bioweapons and natural pandemics (01:45:31) Successes and failures around COVID-19 (01:58:26) Who to trust going forward (02:09:09) The history of existential risk (02:15:07) The most compelling risks (02:24:59) False alarms about big risks in the past (02:34:22) Suspicious convergence around x-risk reduction (02:49:31) How hard it would be to convince governments (02:57:59) Defensive epistemology (03:04:34) Hinge of history debate (03:16:01) Technological progress can’t keep up for long (03:21:51) Strongest argument against this being a really pivotal time (03:37:29) How Carl unwinds (03:45:30) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore…
1 #111 – Mushtaq Khan on using institutional economics to predict effective government reforms 3:20:26
If you’re living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably ‘artisanal refining’ — or, in plain language, stealing oil from pipelines. The resulting oil spills damage the environment and cause severe health problems, but the Nigerian government has continually failed in their attempts to stop this theft. They send in the army, and the army gets corrupted. They send in enforcement agencies, and the enforcement agencies get corrupted. What’s happening here? According to Mushtaq Khan, economics professor at SOAS University of London, this is a classic example of ‘networked corruption’. Everyone in the community is benefiting from the criminal enterprise — so much so that the locals would prefer civil war to following the law. It pays vastly better than other local jobs, hotels and restaurants have formed around it, and houses are even powered by the electricity generated from the oil. Links to learn more, summary and full transcript. In today's episode, Mushtaq elaborates on the models he uses to understand these problems and make predictions he can test in the real world. Some of the most important factors shaping the fate of nations are their structures of power: who is powerful, how they are organized, which interest groups can pull in favours with the government, and the constant push and pull between the country's rulers and its ruled. While traditional economic theory has relatively little to say about these topics, institutional economists like Mushtaq have a lot to say, and participate in lively debates about which of their competing ideas best explain the world around us. The issues at stake are nothing less than why some countries are rich and others are poor, why some countries are mostly law abiding while others are not, and why some government programmes improve public welfare while others just enrich the well connected. Mushtaq’s specialties are anti-corruption and industrial policy, where he believes mainstream theory and practice are largely misguided. Mushtaq's rule of thumb is that when the locals most concerned with a specific issue are invested in preserving a status quo they're participating in, they almost always win out. To actually reduce corruption, countries like his native Bangladesh have to follow the same gradual path the U.K. once did: find organizations that benefit from rule-abiding behaviour and are selfishly motivated to promote it, and help them police their peers. Trying to impose a new way of doing things from the top down wasn't how Europe modernised, and it won't work elsewhere either. In cases like oil theft in Nigeria, where no one wants to follow the rules, Mushtaq says corruption may be impossible to solve directly. Instead you have to play a long game, bringing in other employment opportunities, improving health services, and deploying alternative forms of energy — in the hope that one day this will give people a viable alternative to corruption. In this extensive interview Rob and Mushtaq cover this and much more, including: • How does one test theories like this? • Why are companies in some poor countries so much less productive than their peers in rich countries? • Have rich countries just legalized the corruption in their societies? • What are the big live debates in institutional economics? • Should poor countries protect their industries from foreign competition? • How can listeners use these theories to predict which policies will work in their own countries? 
Chapters: Rob’s intro (00:00:00) The interview begins (00:01:55) Institutional economics (00:15:37) Anti-corruption policies (00:28:45) Capabilities (00:34:51) Why the market doesn’t solve the problem (00:42:29) Industrial policy (00:46:11) South Korea (01:01:31) Chiang Kai-shek (01:16:01) The logic of political survival (01:18:43) Anti-corruption as a design of your policy (01:35:16) Examples of anti-corruption programs with good prospects (01:45:17) The importance of getting overseas influences (01:56:05) Actually capturing the primary effect (02:03:26) How less developed countries could successfully design subsidies (02:15:14) What happens when horizontal policing isn't possible (02:26:34) Rule of law <--> economic development (02:33:40) Violence (02:38:31) How this applies to developed countries (02:48:57) Policies to help left-behind groups (02:55:39) What to study (02:58:50) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
1 #110 – Holden Karnofsky on building aptitudes and kicking ass 2:46:06
Holden Karnofsky helped create two of the most influential organisations in the effective philanthropy world. So when he outlines a different perspective on career advice than the one we present at 80,000 Hours — we take it seriously. Holden disagrees with us on a few specifics, but it's more than that: he prefers a different vibe when making career choices, especially early in one's career. Links to learn more, summary and full transcript. While he might ultimately recommend similar jobs to those we recommend at 80,000 Hours, the reasons are often different. At 80,000 Hours we often talk about ‘paths’ to working on what we currently think of as the most pressing problems in the world. That’s partially because people seem to prefer the most concrete advice possible. But Holden thinks a problem with that kind of advice is that it’s hard to take actions based on it if your job options don’t match well with your plan, and it’s hard to get a reliable signal about whether you're making the right choices. How can you know you’ve chosen the right cause? How can you know the job you’re aiming for will be helpful to that cause? And what if you can’t get a job in this area at all? Holden prefers to focus on ‘aptitudes’ that you can build in all sorts of different roles and cause areas, which can later be applied more directly. Even if the current role doesn’t work out, or your career goes in wacky directions you’d never anticipated (like so many successful careers do), or you change your whole worldview — you’ll still have access to this aptitude. So instead of trying to become a project manager at an effective altruism organisation, maybe you should just become great at project management. Instead of trying to become a researcher at a top AI lab, maybe you should just become great at digesting hard problems. Who knows where these skills will end up being useful down the road? Holden doesn’t think you should spend much time worrying about whether you’re having an impact in the first few years of your career — instead you should just focus on learning to kick ass at something , knowing that most of your impact is going to come decades into your career. He thinks as long as you’ve gotten good at something, there will usually be a lot of ways that you can contribute to solving the biggest problems. But Holden’s most important point, perhaps, is this: Be very careful about following career advice at all . He points out that a career is such a personal thing that it’s very easy for the advice-giver to be oblivious to important factors having to do with your personality and unique situation. He thinks it’s pretty hard for anyone to really have justified empirical beliefs about career choice, and that you should be very hesitant to make a radically different decision than you would have otherwise based on what some person (or website!) tells you to do. Instead, he hopes conversations like these serve as a way of prompting discussion and raising points that you can apply your own personal judgment to. That's why in the end he thinks people should look at their career decisions through his aptitude lens, the '80,000 Hours lens', and ideally several other frameworks as well. Because any one perspective risks missing something important. 
Holden and Rob also cover: • Ways to be helpful to longtermism outside of careers • Why finding a new cause area might be overrated • Historical events that deserve more attention • And much more Chapters: Rob’s intro (00:00:00) Holden’s current impressions on career choice for longtermists (00:02:34) Aptitude-first vs. career path-first approaches (00:08:46) How to tell if you’re on track (00:16:24) Just try to kick ass in whatever (00:26:00) When not to take the thing you're excited about (00:36:54) Ways to be helpful to longtermism outside of careers (00:41:36) Things 80,000 Hours might be doing wrong (00:44:31) The state of longtermism (00:51:50) Money pits (01:02:10) Broad longtermism (01:06:56) Cause X (01:21:33) Open Philanthropy (01:24:23) COVID and the biorisk portfolio (01:35:09) Has the world gotten better? (01:51:16) Historical events that deserve more attention (01:55:11) Applied epistemology (02:10:55) What Holden has learned from COVID (02:20:55) What Holden has gotten wrong recently (02:32:59) Having a kid (02:39:50) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
1 #109 – Holden Karnofsky on the most important century 2:19:02
Will the future of humanity be wild, or boring? It's natural to think that if we're trying to be sober and measured, and predict what will really happen rather than spin an exciting story, it's more likely than not to be sort of... dull. But there's also good reason to think that that is simply impossible. The idea that there's a boring future that's internally coherent is an illusion that comes from not inspecting those scenarios too closely. At least that is what Holden Karnofsky — founder of charity evaluator GiveWell and foundation Open Philanthropy — argues in his new article series titled 'The Most Important Century' . He hopes to lay out part of the worldview that's driving the strategy and grantmaking of Open Philanthropy's longtermist team, and encourage more people to join his efforts to positively shape humanity's future. Links to learn more, summary and full transcript. The bind is this. For the first 99% of human history the global economy (initially mostly food production) grew very slowly: under 0.1% a year. But since the industrial revolution around 1800, growth has exploded to over 2% a year. To us in 2020 that sounds perfectly sensible and the natural order of things. But Holden points out that in fact it's not only unprecedented, it also can't continue for long. The power of compounding increases means that to sustain 2% growth for just 10,000 years, 5% as long as humanity has already existed, would require us to turn every individual atom in the galaxy into an economy as large as the Earth's today. Not super likely. So what are the options? First, maybe growth will slow and then stop. In that case we today live in the single miniscule slice in the history of life during which the world rapidly changed due to constant technological advances, before intelligent civilization permanently stagnated or even collapsed. What a wild time to be alive! Alternatively, maybe growth will continue for thousands of years. In that case we are at the very beginning of what would necessarily have to become a stable galaxy-spanning civilization, harnessing the energy of entire stars among other feats of engineering. We would then stand among the first tiny sliver of all the quadrillions of intelligent beings who ever exist. What a wild time to be alive! Isn't there another option where the future feels less remarkable and our current moment not so special? While the full version of the argument above has a number of caveats, the short answer is 'not really'. We might be in a computer simulation and our galactic potential all an illusion, though that's hardly any less weird. And maybe the most exciting events won't happen for generations yet. But on a cosmic scale we'd still be living around the universe's most remarkable time. Holden himself was very reluctant to buy into the idea that today’s civilization is in a strange and privileged position, but has ultimately concluded "all possible views about humanity's future are wild". 
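The compounding claim is easy to check. A rough calculation, assuming a commonly cited order-of-magnitude estimate of about 10^67 atoms in the Milky Way (the growth figures are the ones quoted above):

```python
import math

# Rough check of the growth claim above — not Holden's exact calculation.
# The atom count is an assumed order-of-magnitude estimate for the Milky Way.
growth_rate = 0.02            # 2% annual growth
years = 10_000
atoms_in_galaxy = 1e67        # assumed rough estimate

growth_factor = (1 + growth_rate) ** years
economies_per_atom = growth_factor / atoms_in_galaxy

print(f"Economy grows by a factor of ~10^{math.log10(growth_factor):.0f}")                # ~10^86
print(f"That is ~10^{math.log10(economies_per_atom):.0f} of today's economies per atom")  # ~10^19
```

Whatever the exact atom count, the growth factor overwhelms it by many orders of magnitude, which is the sense in which a 'boring' continuation of recent history isn't really on the table.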
In the conversation Holden and Rob cover each part of the 'Most Important Century' series, including: • The case that we live in an incredibly important time • How achievable-seeming technology - in particular, mind uploading - could lead to unprecedented productivity, control of the environment, and more • How economic growth is faster than it can be for all that much longer • Forecasting transformative AI • And the implications of living in the most important century Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
1 #108 – Chris Olah on working at top AI labs without an undergrad degree 1:33:24
Chris Olah has had a fascinating and unconventional career path. Most people who want to pursue a research career feel they need a degree to get taken seriously. But Chris doesn't just lack a PhD; he doesn't even have an undergraduate degree. After dropping out of university to help defend an acquaintance who was facing bogus criminal charges, Chris started independently working on machine learning research, and eventually got an internship at Google Brain, a leading AI research group. In this interview — a follow-up to our episode on his technical work — we discuss what, if anything, can be learned from his unusual career path. Should more people pass on university and just throw themselves at solving a problem they care about? Or would it be foolhardy for others to try to copy a unique case like Chris'? Links to learn more, summary and full transcript. We also cover some of Chris' personal passions over the years, including his attempts to reduce what he calls 'research debt' by starting a new academic journal called Distill, focused just on explaining existing results unusually clearly. As Chris explains, as fields develop they accumulate huge bodies of knowledge that researchers are meant to be familiar with before they start contributing themselves. But the weight of that existing knowledge — and the need to keep up with what everyone else is doing — can become crushing. It can take someone until their 30s or later to earn their stripes, and sometimes a field will split in two just to make it possible for anyone to stay on top of it. If that were unavoidable it would be one thing, but Chris thinks we're nowhere near communicating existing knowledge as well as we could. Incrementally improving an explanation of a technical idea might take a single author weeks to do, but could go on to save a day for thousands, tens of thousands, or hundreds of thousands of students, if it becomes the best option available. Despite that, academics have little incentive to produce outstanding explanations of complex ideas that can speed up the education of everyone coming up in their field. And some even see the process of deciphering bad explanations as a desirable rite of passage all should pass through, just as they did. So Chris tried his hand at chipping away at this problem — but concluded the nature of the problem wasn't quite what he originally thought. In this conversation we talk about that, as well as: • Why highly thoughtful cold emails can be surprisingly effective, but average cold emails do little • Strategies for growing as a researcher • Thinking about research as a market • How Chris thinks about writing outstanding explanations • The concept of 'micromarriages' and 'microbestfriendships' • And much more. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
1 #107 – Chris Olah on what the hell is going on inside neural networks 3:09:21
Big machine learning models can identify plant species better than any human, write passable essays, beat you at a game of Starcraft 2, figure out how a photo of Tobey Maguire and the word 'spider' are related, solve the 60-year-old 'protein folding problem', diagnose some diseases, play romantic matchmaker, write solid computer code, and offer questionable legal advice. Humanity made these amazing and ever-improving tools. So how do our creations work? In short: we don't know. Today's guest, Chris Olah, finds this both absurd and unacceptable. Over the last ten years he has been a leader in the effort to unravel what's really going on inside these black boxes. As part of that effort he helped create the famous DeepDream visualisations at Google Brain, reverse engineered the CLIP image classifier at OpenAI, and is now continuing his work at Anthropic, a new $100 million research company that tries to "co-develop the latest safety techniques alongside scaling of large ML models". Links to learn more, summary and full transcript. Despite having a huge fan base thanks to his explanations of ML and tweets, today's episode is the first long interview Chris has ever given. It features his personal take on what we've learned so far about what ML algorithms are doing, and what's next for this research agenda at Anthropic. His decade of work has borne substantial fruit, producing an approach for looking inside the mess of connections in a neural network and back out what functional role each piece is serving. Among other things, Chris and team found that every visual classifier seems to converge on a number of simple common elements in their early layers — elements so fundamental they may exist in our own visual cortex in some form. They also found networks developing 'multimodal neurons' that would trigger in response to the presence of high-level concepts like 'romance', across both images and text, mimicking the famous 'Halle Berry neuron' from human neuroscience. While reverse engineering how a mind works would make any top-ten list of the most valuable knowledge to pursue for its own sake, Chris's work is also of urgent practical importance. Machine learning models are already being deployed in medicine, business, the military, and the justice system, in ever more powerful roles. The competitive pressure to put them into action as soon as they can turn a profit is great, and only getting greater. But if we don't know what these machines are doing, we can't be confident they'll continue to work the way we want as circumstances change. Before we hand an algorithm the proverbial nuclear codes, we should demand more assurance than "well, it's always worked fine so far". But by peering inside neural networks and figuring out how to 'read their minds' we can potentially foresee future failures and prevent them before they happen. Artificial neural networks may even be a better way to study how our own minds work, given that, unlike a human brain, we can see everything that's happening inside them — and having been posed similar challenges, there's every reason to think evolution and 'gradient descent' often converge on similar solutions. 
Among other things, Rob and Chris cover: • Why Chris thinks it's necessary to work with the largest models • What fundamental lessons we've learned about how neural networks (and perhaps humans) think • How interpretability research might help make AI safer to deploy, and Chris’ response to skeptics • Why there's such a fuss about 'scaling laws' and what they say about future AI progress Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
1 #106 – Cal Newport on an industrial revolution for office work 1:53:27
If you wanted to start a university department from scratch, and attract as many superstar researchers as possible, what's the most attractive perk you could offer? How about just not needing an email address. According to today's guest, Cal Newport — computer science professor and best-selling author of A World Without Email — it should seem obscene and absurd for a world-renowned vaccine researcher with decades of experience to spend a third of their time fielding requests from HR, building management, finance, and so on. Yet with offices organised the way they are today, nothing could be more natural. Links to learn more, summary and full transcript. But this isn't just a problem at the elite level — it affects almost all of us. A typical U.S. office worker checks their email 80 times a day, once every six minutes on average. Data analysis by RescueTime found that a third of users checked email or Slack every three minutes or more, averaged over a full work day. Each time that happens our focus is broken, killing our momentum on the knowledge work we're supposedly paid to do. When we lament how much email and chat have reduced our focus and filled our days with anxiety and frenetic activity, we most naturally blame 'weakness of will'. If only we had the discipline to check Slack and email once a day, all would be well — or so the story goes. Cal believes that line of thinking fundamentally misunderstands how we got to a place where knowledge workers can rarely find more than five consecutive minutes to spend doing just one thing. Since the Industrial Revolution, a combination of technology and better organization has allowed the manufacturing industry to produce a hundred times as much with the same number of people. Cal says that by comparison, it's not clear that specialised knowledge workers like scientists, authors, or senior managers are *any* more productive than they were 50 years ago. If the knowledge sector could achieve even a tiny fraction of what manufacturing has, and find a way to coordinate its work that raised productivity by just 1%, that would generate on the order of $100 billion globally each year. Since the 1990s, when everyone got an email address and most lost their assistants, the lack of any deliberate way of organising work has led to what Cal calls the 'hyperactive hive mind': everyone sends emails and chats to everyone else, all through the day, whenever they need something. Cal points out that this is so normal we don't even think of it as a way of organising work, but it is: it's what happens when management does nothing to enable teams to decide on a better way of organising themselves. A few industries have made progress taming the 'hyperactive hive mind'. But on Cal's telling, this barely scratches the surface of the improvements that are possible within knowledge work. And reining in the hyperactive hive mind won't just help people do higher-quality work; it will also free them from the 24/7 anxiety that there's someone somewhere they haven't gotten back to. In this interview Cal and Rob also cover: • Is this really one of the world's most pressing problems? • The historical origins of the 'hyperactive hive mind' • The harm caused by attention switching • Who's working to solve the problem and how • Cal's top productivity advice for high school students, university students, and early career workers • And much more Chapters: Rob's intro (00:00:00) The interview begins (00:02:02) The hyperactive hivemind (00:04:11) Scale of the harm (00:08:40) Is email making professors stupid?
(00:22:09) Why haven't we already made these changes? (00:29:38) Do people actually prefer the hyperactive hivemind? (00:43:31) Solutions (00:55:52) Advocacy (01:10:47) How to Be a High School Superstar (01:23:03) How to Win at College (01:27:46) So Good They Can't Ignore You (01:31:47) Personal barriers (01:42:51) George Marshall (01:47:11) Rob’s outro (01:49:18) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
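To make the arithmetic behind the two headline figures above explicit, here is a rough back-of-envelope sketch in Python. The 8-hour workday and the roughly $10 trillion of annual knowledge-sector output are assumptions implied by the quoted numbers, not figures taken from the episode.

# Back-of-envelope check of the email-frequency and productivity figures quoted above.
# Assumptions (not from the episode): an 8-hour workday, and that a 1% gain being worth
# ~$100 billion implies roughly $10 trillion of annual knowledge-sector output worldwide.
WORKDAY_MINUTES = 8 * 60
CHECKS_PER_DAY = 80
minutes_between_checks = WORKDAY_MINUTES / CHECKS_PER_DAY
print(f"One email check every {minutes_between_checks:.0f} minutes")  # -> 6 minutes

PRODUCTIVITY_GAIN = 0.01                 # the 1% coordination improvement discussed above
KNOWLEDGE_SECTOR_OUTPUT = 10e12          # assumed ~$10 trillion/year, implied by the $100bn figure
annual_value = PRODUCTIVITY_GAIN * KNOWLEDGE_SECTOR_OUTPUT
print(f"Annual value of a 1% gain: ${annual_value / 1e9:.0f} billion")  # -> $100 billion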
1 #105 – Alexander Berger on improving global health and wellbeing in clear and direct ways 2:54:32
The effective altruist research community tries to identify the highest impact things people can do to improve the world. Unsurprisingly, given the difficulty of such a massive and open-ended project, very different schools of thought have arisen about how to do the most good. Today's guest, Alexander Berger, leads Open Philanthropy's 'Global Health and Wellbeing' programme, where he oversees around $175 million in grants each year, and ultimately aspires to disburse billions in the most impactful ways he and his team can identify. This programme is the flagship effort representing one major effective altruist approach: try to improve the health and wellbeing of humans and animals that are alive today, in clearly identifiable ways, applying an especially analytical and empirical mindset. Links to learn more, summary, Open Phil jobs, and full transcript. The programme makes grants to tackle easily-prevented illnesses among the world's poorest people, offer cash to people living in extreme poverty, prevent cruelty to billions of farm animals, advance biomedical science, and improve criminal justice and immigration policy in the United States. Open Philanthropy's researchers rely on empirical information to guide their decisions where it's available, and where it's not, they aim to maximise expected benefits to recipients through careful analysis of the gains different projects would offer and their relative likelihoods of success. This 'global health and wellbeing' approach — sometimes referred to as 'neartermism' — contrasts with another big school of thought in effective altruism, known as 'longtermism', which aims to direct the long-term future of humanity and its descendants in a positive direction. Longtermism bets that while it's harder to figure out how to benefit future generations than people alive today, the total number of people who might live in the future is far greater than the number alive today, and this gain in scale more than offsets that lower tractability. The debate between these two very different theories of how to best improve the world has been one of the most significant within effective altruist research since its inception. Alexander first joined the influential charity evaluator GiveWell in 2011, and since then has conducted research alongside top thinkers on global health and wellbeing and longtermism alike, ultimately deciding to dedicate his efforts to improving the world today in identifiable ways. In this conversation Alexander advocates for that choice, explaining the case in favour of adopting the 'global health and wellbeing' mindset, while going through the arguments for the longtermist approach that he finds most and least convincing. 
Rob and Alexander also tackle: • Why it should be legal to sell your kidney, and why Alexander donated his to a total stranger • Why it's shockingly hard to find ways to give away large amounts of money that are more cost-effective than distributing anti-malaria bed nets • How much you gain from working with tight feedback loops • Open Philanthropy's biggest wins • Why Open Philanthropy engages in 'worldview diversification' by having both a global health and wellbeing programme and a longtermist programme • Whether funding science and political advocacy is a good way to have more social impact • Whether our effects on future generations are predictable or unforeseeable • What problems the global health and wellbeing team works to solve and why • Opportunities to work at Open Philanthropy Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
1 #104 – Pardis Sabeti on the Sentinel system for detecting and stopping pandemics 2:20:58
When the first person with COVID-19 went to see a doctor in Wuhan, nobody could tell that it wasn't a familiar disease like the flu — that we were dealing with something new. How much death and destruction could we have avoided if we'd had a hero who could? That's what former Assistant Secretary of Defense Andy Weber asked on the show back in March. Today's guest Pardis Sabeti is a professor at Harvard, fought Ebola on the ground in Africa during the 2014 outbreak, runs her own lab, co-founded a company that produces next-level testing, and is even the lead singer of a rock band. If anyone is going to be that hero in the next pandemic, it just might be her. Links to learn more, summary and full transcript. She is a co-author of the SENTINEL proposal, a practical system for detecting new diseases quickly, using an escalating series of three novel diagnostic techniques. The first method, called SHERLOCK, uses CRISPR technology to detect familiar viruses in a simple, inexpensive filter paper test, using non-invasive samples. If SHERLOCK draws a blank, we escalate to the second step, CARMEN, an advanced version of SHERLOCK that uses microfluidics and CRISPR to simultaneously detect hundreds of viruses and viral strains. More expensive, but far more comprehensive. If neither SHERLOCK nor CARMEN detects a known pathogen, it's time to pull out the big gun: metagenomic sequencing. More expensive still, but sequencing all the genetic material in a patient sample lets you identify and track every virus it contains — known and unknown. If Pardis and her team succeed, our future pandemic potential patient zero may: 1. Go to the hospital with flu-like symptoms, and immediately be tested using SHERLOCK — which will come back negative 2. Take the CARMEN test for a much broader range of illnesses — which will also come back negative 3. Have their sample sent for metagenomic sequencing, which will reveal that they're carrying a new virus we'll have to contend with 4. Have the results at every step recorded in a cloud-based data system that shares data in real time, so the hospital is alerted and told to quarantine the patient 5. Set off a global response weeks — or even months — faster than it would otherwise have come, potentially saving millions of lives It's a wonderful vision, and one humanity is ready to test out. But there are all sorts of practical questions, such as: • How do you scale these technologies, including to remote and rural areas? • Will doctors everywhere be able to operate them? • Who will pay for it? • How do you maintain the public's trust and protect against misuse of sequencing data? • How do you avoid drowning in the data the system produces?
In this conversation Pardis and Rob address all those questions, as well as: • Pardis’ history with trying to control emerging contagious diseases • The potential of mRNA vaccines • Other emerging technologies • How to best educate people about pandemics • The pros and cons of gain-of-function research • Turning mistakes into exercises you can learn from • Overcoming enormous life challenges • Why it’s so important to work with people you can laugh with • And much more Chapters: The interview begins (00:01:40) Trying to control emerging contagious diseases (00:04:36) SENTINEL (00:15:31) SHERLOCK (00:25:09) CARMEN (00:36:32) Metagenomic sequencing (00:51:53) How useful these technologies could be (01:02:35) How this technology could apply to the US (01:06:41) Failure modes for this technology (01:18:34) Funding (01:27:06) mRNA vaccines (01:31:14) Other emerging technologies (01:34:45) Operation Outbreak (01:41:07) COVID (01:49:16) Gain-of-function research (01:57:34) Career advice (02:01:47) Overcoming big challenges (02:10:23) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
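The escalating triage Pardis describes above (SHERLOCK first, then CARMEN, then metagenomic sequencing, with every result reported to a shared real-time database) boils down to a simple decision flow. Below is a minimal sketch of that logic in Python; all function names and data structures are hypothetical illustrations, not code from the SENTINEL project.

# A minimal, hypothetical sketch of the SENTINEL-style escalation described above.
from typing import Optional

def run_sherlock(sample: dict) -> Optional[str]:
    """Cheap CRISPR-based paper test for a small set of familiar viruses."""
    return sample.get("sherlock_hit")

def run_carmen(sample: dict) -> Optional[str]:
    """Microfluidic CRISPR panel covering hundreds of viruses and strains."""
    return sample.get("carmen_hit")

def metagenomic_sequencing(sample: dict) -> str:
    """Sequence all genetic material in the sample; catches even unknown pathogens."""
    return sample.get("sequencing_result", "uncharacterised pathogen")

def triage(sample: dict, cloud_db: list) -> str:
    """Escalate through the three tests, logging every result to a shared real-time database."""
    for test in (run_sherlock, run_carmen):
        result = test(sample)
        if result:  # a known pathogen was identified; no further escalation needed
            cloud_db.append({"result": result, "novel": False})
            return result
    result = metagenomic_sequencing(sample)  # last resort: identifies novel pathogens
    cloud_db.append({"result": result, "novel": True, "quarantine_advised": True})
    return result

# Example: a sample that nothing on the known panels can identify
db = []
print(triage({"sequencing_result": "novel coronavirus"}, db))  # -> "novel coronavirus"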
1 #103 – Max Roser on building the world's best source of COVID-19 data at Our World in Data 2:22:25
History is filled with stories of great people stepping up in times of crisis. Presidents averting wars; soldiers leading troops away from certain death; data scientists sleeping on the office floor to launch a new webpage a few days sooner. That last one is barely a joke — by our lights, people like today's guest Max Roser should be viewed with similar admiration by historians of COVID-19. Links to learn more, summary and full transcript. Max runs Our World in Data, a small education nonprofit which began the pandemic with just six staff. But since last February his team has supplied essential COVID statistics to over 130 million users — among them the BBC, The Financial Times, The New York Times, the OECD, the World Bank, the IMF, Donald Trump, Tedros Adhanom, and Dr. Anthony Fauci, just to name a few. An economist at Oxford University, Max Roser founded Our World in Data as a small side project in 2011 and has led it since, including through the wild ride of 2020. In today's interview Max explains how he and his team realized that if they didn't start making COVID data accessible and easy to make sense of, it wasn't clear when anyone would. Our World in Data wasn't naturally set up to become the world's go-to source for COVID updates. Up until then their specialty had been long articles explaining century-length trends in metrics like life expectancy — to the point that their graphing software was only set up to present yearly data. But the team eventually realized that the World Health Organization was publishing numbers that flatly contradicted one another, most of the press was embarrassingly out of its depth, and countries were posting case data as images buried deep in their sites where nobody would find them. Even worse, nobody was reporting or compiling how many tests different countries were doing, rendering all those case figures largely meaningless. Trying to make sense of the pandemic was a time-consuming nightmare. If you were leading a national COVID response, learning what other countries were doing and whether it was working would take weeks of study — and that meant, with the walls falling in around you, it simply wasn't going to happen. Ministries of health around the world were flying blind. Disbelief ultimately turned to determination, and the Our World in Data team committed to do whatever had to be done to fix the situation. Practically overnight their software was redesigned to handle daily data, and for the next few months Max and colleagues like Edouard Mathieu and Hannah Ritchie did little but sleep and compile COVID data. In this episode Max tells the story of how Our World in Data ran into a huge gap that never should have been there in the first place — and how they had to do it all again in December 2020 when, eleven months into the pandemic, there was nobody to compile global vaccination statistics.
We also talk about: • Our World in Data's early struggles to get funding • Why government agencies are so bad at presenting data • Which agencies did a good job during the COVID pandemic (shout out to the European CDC) • How much impact Our World in Data has by helping people understand the world • How to deal with the unreliability of development statistics • Why research shouldn't be published as a PDF • Why academia under-incentivises data collection • The history of war • And much more Chapters: • Rob’s intro (00:00:00) • The interview begins (00:01:41) • Our World In Data (00:04:46) • How OWID became a leader on COVID-19 information (00:11:45) • COVID-19 gaps that OWID filled (00:27:45) • Incentives that make it so hard to get good data (00:31:20) • OWID funding (00:39:53) • What it was like to be so successful (00:42:11) • Vaccination data set (00:45:43) • Improving the vaccine rollout (00:52:44) • Who did well (00:58:08) • Global sanity (01:00:57) • How high-impact is this work? (01:04:43) • Does this work get you anywhere in the academic system? (01:12:48) • Other projects Max admires in this space (01:20:05) • Data reliability and availability (01:30:49) • Bringing together knowledge and presentation (01:39:26) • History of war (01:49:17) • Careers at OWID (02:01:15) • How OWID prioritise topics (02:12:30) • Rob's outro (02:21:02) Producer: Keiran Harris. Audio mastering: Ryan Kessler. Transcriptions: Sofia Davis-Fogel.…
1 #102 – Tom Moynihan on why prior generations missed some of the biggest priorities of all 3:56:44
It can be tough to get people to truly care about reducing existential risks today. But spare a thought for the longtermists of the 17th century: they were surrounded by people who thought extinction was literally impossible. Today's guest Tom Moynihan, intellectual historian and author of the book X-Risk: How Humanity Discovered Its Own Extinction, says that until the 18th century, almost no one — not even early atheists — could imagine that humanity or life could simply disappear because of an act of nature. Links to learn more, summary and full transcript. This is largely because of the prevalence of the 'principle of plenitude', which Tom defines as saying: "Whatever can happen will happen. In its stronger form it says whatever can happen will happen reliably and recurrently. And in its strongest form it says that all that can happen is happening right now. And that's the way things will be forever." This has the implication that if humanity ever disappeared for some reason, then it would have to reappear. So why would you ever worry about extinction? Here are four more commonly held beliefs from generations past that Tom shares in the interview: • All regions of matter that can be populated will be populated: In other words, there are aliens on every planet, because it would be a massive waste of real estate if all of them were just inorganic masses, where nothing interesting was going on. This also led to the idea that if you dug deep into the Earth, you'd potentially find thriving societies. • Aliens were human-like, and shared the same values as us: they would have the same moral beliefs, and the same aesthetic beliefs. The idea that aliens might be very different from us only arrived in the 20th century. • Fossils were rocks that had gotten a bit too big for their britches and were trying to act like animals: they couldn't actually move, so becoming an imprint of an animal was the next best thing. • All future generations were contained in miniature form, Russian-doll style, in the sperm of the first man: preformation was the idea that within the ovule or the sperm of an animal is contained its offspring in miniature form, and the French philosopher Malebranche said, well, if one is contained in the other, then surely that goes on forever. And here are another three that weren't held widely, but were proposed by scholars and taken seriously: • Life preceded the existence of rocks: Living things, like clams and mollusks, came first, and they extruded the earth. • No idea can be wrong: Nothing we can say about the world is wrong in a strong sense, because at some point in the future or the past, it has been true. • Maybe we were living before the Trojan War: Aristotle said that we might actually be living before Troy, because it — like every other event — will repeat at some future date. And he said that the set of possibilities might be so narrow that it might be safer to say we actually live before Troy. But Tom tries to be magnanimous when faced with these incredibly misguided worldviews.
In this nearly four-hour long interview, Tom and Rob cover all of these ideas, as well as: • How we know people really believed such things • How we moved on from these theories • How future intellectual historians might view our beliefs today • The distinction between ‘apocalypse’ and ‘extinction’ • Utopias and dystopias • Big ideas that haven’t flowed through into all relevant fields yet • Intellectual history as a possible high-impact career • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:01:45) Principle of Plenitude (00:04:02) How do we know they really believed this? (00:13:20) Religious conceptions of time (00:24:01) How to react to wacky old ideas (00:29:18) The Copernican revolution (00:36:55) Fossils (00:42:30) How we got past these theories (00:51:19) Intellectual history (01:01:45) Future historians looking back to today (01:13:11) Could plenitude actually be true? (01:27:38) What is vs. what ought to be (01:36:43) Apocalypse vs. extinction (01:45:56) The history of probability (02:00:52) Utopias and dystopias (02:12:11) How Tom has changed his mind since writing the book (02:28:58) Are we making progress? (02:35:00) Big ideas that haven’t flowed through to all relevant fields yet (02:52:07) Failed predictions (02:59:01) Intellectual history as high-impact career (03:06:56) Communicating progress (03:15:07) What careers in history actually look like (03:23:03) Tom’s next major project (03:43:06) One of the funniest things past generations believed (03:51:50) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
1 #101 – Robert Wright on using cognitive empathy to save the world 1:36:00
In 2003, Saddam Hussein refused to let Iraqi weapons scientists leave the country to be interrogated. Given the overwhelming domestic support for an invasion at the time, most key figures in the U.S. took that as confirmation that he had something to hide — probably an active WMD program. But what about alternative explanations? Maybe those scientists knew about past crimes. Or maybe they'd defect. Or maybe giving in to that kind of demand would have humiliated Hussein in the eyes of enemies like Iran and Saudi Arabia. According to today's guest Robert Wright, host of the popular podcast The Wright Show, these are the kinds of things that might have come up if people had been willing to look at things from Saddam Hussein's perspective. Links to learn more, summary and full transcript. He calls this 'cognitive empathy'. It's not feeling-your-pain-type empathy — it's just trying to understand how another person thinks. He says if you'd pitched this kind of thing back in 2003 you'd have been shouted down as a 'Saddam apologist' — and he thinks the same is true today when it comes to regimes in China, Russia, Iran, and North Korea. The two Roberts in today's episode — Bob Wright and Rob Wiblin — agree that removing this taboo against perspective taking, even with people you consider truly evil, could significantly improve discourse around international relations. They feel that if we could spread the meme that it's worth understanding what dictators are thinking and calculating, based on their country's history and interests, we'd be less likely to make terrible foreign policy errors. But how do you actually do that? Bob's new 'Apocalypse Aversion Project' is focused on creating the necessary conditions for solving non-zero-sum global coordination problems, something most people are already on board with. And in particular he thinks that might come from enough individuals "transcending the psychology of tribalism". He doesn't just mean rage, hatred, and violence; he's also talking about cognitive biases. Bob makes the striking claim that if enough people in the U.S. had been able to combine perspective taking with mindfulness — the ability to notice and identify thoughts as they arise — then the U.S. might even have been able to avoid the invasion of Iraq. Rob pushes back on how realistic this approach really is, asking questions like: • Haven't people been trying to do this since the beginning of time? • Is there a great novel angle that will change how a lot of people think and behave? • Wouldn't it be better to focus on a much narrower task, like getting more mindfulness and meditation and reflectiveness among the U.S. foreign policy elite? But despite the differences in approaches, Bob has a lot of common ground with 80,000 Hours — and the result is a fun back-and-forth about the best ways to achieve shared goals. Bob starts by questioning Rob about effective altruism, and they go on to cover a bunch of other topics, such as: • Specific risks like climate change and new technologies • How to achieve social cohesion • The pros and cons of society-wide surveillance • How Rob got into effective altruism If you're interested to hear more of Bob's interviews you can subscribe to The Wright Show anywhere you're getting this one. You can also watch videos of this and all his other episodes on Bloggingheads.tv. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
1 #100 – Having a successful career with depression, anxiety and imposter syndrome 2:51:21
Today's episode is one of the most remarkable and genuinely unique pieces of content we've ever produced (and I can say that because I had almost nothing to do with making it!). The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career. While depression, anxiety, ADHD and other problems are extremely common, it's rare for people to offer detailed insight into their thoughts and struggles — and even rarer for someone as perceptive as Howie to do so. Links to learn more, summary and full transcript. The first half of this conversation is a searingly honest account of Howie's story, including losing a job he loved due to a depressive episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today. The second half covers Howie's advice. Conventional wisdom on mental health can be really focused on cultivating willpower — telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort. Howie tries his best to be a corrective to this misguided attitude and pragmatically focus on what actually matters — doing whatever will help you get better. Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes. Howie and Keiran basically treated it like a private conversation, with the understanding that it may be too sensitive to release. But, after getting some really positive feedback, they've decided to share it with the world. We hope that the episode will: 1. Help people realise that they have a shot at making a difference in the future, even if they're experiencing (or have experienced in the past) mental illness, self-doubt, imposter syndrome, or other personal obstacles. 2. Give insight into what it's like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and know how to react more helpfully. So we think this episode will be valuable for: • People who have experienced mental health problems or might in future; • People who have had troubles with stress, anxiety, low mood, low self-esteem, and similar issues, even if their experience isn't well described as 'mental illness'; • People who have never experienced these problems but want to learn about what it's like, so they can better relate to and assist family, friends or colleagues who do. In other words, we think this episode could be worthwhile for almost everybody. Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts. If you don't want to hear the most intense section, you can skip the chapter called 'Disaster' (44–57mins). And if you'd rather avoid almost all of these references, you could skip straight to the chapter called '80,000 Hours' (1hr 11mins).
If you're feeling suicidal or have thoughts of harming yourself right now, help is available: you can call the National Suicide Prevention Lifeline in the U.S. (800-273-8255) or Samaritans in the U.K. (116 123). Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
1 #99 – Leah Garcés on turning adversaries into allies to change the chicken industry 2:26:04
For a chance to prevent enormous amounts of suffering, would you be brave enough to drive five hours to a remote location to meet a man who seems likely to be your enemy, knowing that it might be an ambush? Today’s guest — Leah Garcés — was. That man was a chicken farmer named Craig Watts, and that ambush never happened. Instead, Leah and Craig forged a friendship and a partnership focused on reducing suffering on factory farms. Leah, now president of Mercy For Animals (MFA), tried for years to get access to a chicken farm to document the horrors she knew were happening behind closed doors. It made sense that no one would let her in — why would the evil chicken farmers behind these atrocities ever be willing to help her take them down? But after sitting with Craig on his living room floor for hours and listening to his story, she discovered that he wasn’t evil at all — in fact he was just stuck in a cycle he couldn’t escape, forced to use methods he didn’t endorse. Links to learn more, summary and full transcript. Most chicken farmers have enormous debts they are constantly struggling to pay off, make very little money, and have to work in terrible conditions — their main activity most days is finding and killing the sick chickens in their flock. Craig was one of very few farmers close to finally paying off his debts, which made him slightly less vulnerable to retaliation. That opened up the possibility for him to work with Leah. Craig let Leah openly film inside the chicken houses, and shared highly confidential documents about the antibiotics put into the feed. That led to a viral video, and a New York Times story. The villain of that video was Jim Perdue, CEO of one of the biggest meat companies in the world. They show him saying, "Farmers are happy. Chickens are happy. There's a lot of space. They're clean." And then they show the grim reality. For years, Perdue wouldn’t speak to Leah. But remarkably, when they actually met in person, she again managed to forge a meaningful relationship with a natural adversary. She was able to put aside her utter contempt for the chicken industry and see Craig and Jim as people, not cartoonish villains. Leah believes that you need to be willing to sit down with anyone who has the power to solve a problem that you don’t — recognising them as human beings with a lifetime of complicated decisions behind their actions. And she stresses that finding or making a connection is really important. In the case of Jim Perdue, it was the fact they both had adopted children. Because of this, they were able to forget that they were supposed to be enemies in that moment, and build some trust. The other lesson that Leah highlights is that you need to look for win-wins and start there, rather than starting with disagreements. With Craig Watts, instead of opening with “How do I end his job”, she thought, “How can I find him a better job?” If you find solutions where everybody wins, you don’t need to spend resources fighting the former enemy. They’ll come to you. It turns out that conditions in chicken houses are perfect for growing hemp or mushrooms, so MFA have started their ‘Transfarmation project’ to help farmers like Craig escape from the prison of factory farming by converting their production from animals to plants. To convince farmers to leave behind a life of producing suffering, all you need to do is find them something better — which for many of them is almost anything else. 
Leah and Rob also talk about: • Why conditions for farmers are so bad • The benefits of creating a public ranking, and scoring companies against each other • The difficulty of enforcing corporate pledges • And much more Chapters: Rob's intro (00:00:00) The interview begins (00:01:06) Grilled (00:06:25) Why are conditions for farmers so bad? (00:18:31) Lessons for others focused on social reform (00:25:04) Driving up the price of factory farmed meat (00:31:18) Mercy For Animals (00:50:08) The importance of building on past work (00:56:27) Farm sanctuaries (01:06:11) Important weaknesses of MFA (01:09:44) Farmed Animal Opportunity Index (01:12:54) Latin America (01:20:49) Enforcing corporate pledges (01:27:21) The Transfarmation project (01:35:25) Disagreements with others in the animal welfare movement (01:45:59) How has the animal welfare movement evolved? (01:51:52) Careers (02:03:32) Ending factory farming (02:05:57) Leah’s career (02:13:02) Mental health challenges (02:20:40) Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel…
1 #98 – Christian Tarsney on future bias and a possible solution to moral fanaticism 2:38:22
Imagine that you're in the hospital for surgery. This kind of procedure is always safe, and always successful — but it can take anywhere from one to ten hours. You can't be knocked out for the operation, but because it's so painful, you'll be given a drug that makes you forget the experience. You wake up, not remembering going to sleep. You ask the nurse if you've had the operation yet. They look at the foot of your bed, and see two different charts for two patients. They say "Well, you're one of these two — but I'm not sure which one. One of them had an operation yesterday that lasted ten hours. The other is set to have a one-hour operation later today." So it's either true that you already suffered for ten hours, or true that you're about to suffer for one hour. Which patient would you rather be? Most people would be relieved to find out they'd already had the operation. Normally we prefer less pain rather than more pain, but in this case, we prefer ten times more pain — just because the pain would be in the past rather than the future. Christian Tarsney, a philosopher at Oxford University's Global Priorities Institute, has written a couple of papers about this 'future bias' — that is, that people seem to care more about their future experiences than about their past experiences. Links to learn more, summary and full transcript. That probably sounds perfectly normal to you. But do we actually have good reasons to prefer to have our positive experiences in the future, and our negative experiences in the past? One of Christian's experiments found that when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about those experiences more — which suggests that our inability to affect the past is one reason why we feel mostly indifferent to it. But he points out that if that was the main reason, then we should also be indifferent to inevitable future experiences — if you know for sure that something bad is going to happen to you tomorrow, you shouldn't care about it. But if you found out you simply had to have a horribly painful operation tomorrow, it's probably all you'd care about! Another explanation for future bias is that we have this intuition that time is like a videotape, where the things that haven't played yet are still on the way. If your future experiences really are ahead of you rather than behind you, that makes it rational to care more about the future than the past. But Christian says that, even though he shares this intuition, it's actually very hard to make the case for time having a direction. It's a live debate that's playing out in the philosophy of time, as well as in physics. For Christian, there are two big practical implications of these past, present, and future ethical comparison cases. The first is for altruists: If we care about whether current people's goals are realised, then maybe we should care about the realisation of people's past goals, including the goals of people who are now dead. The second is more personal: If we can't actually justify caring more about the future than the past, should we really worry about death any more than we worry about all the years we spent not existing before we were born?
Christian and Rob also cover several other big topics, including: • A possible solution to moral fanaticism • How much of humanity's resources we should spend on improving the long-term future • How large the expected value of the continued existence of Earth-originating civilization might be • How we should respond to uncertainty about the state of the world • The state of global priorities research • And much more Chapters: • Rob's intro (00:00:00) • The interview begins (00:01:20) • Future bias (00:04:33) • Philosophy of time (00:11:17) • Money pumping (00:18:53) • Time travel (00:21:22) • Decision theory (00:24:36) • Eternalism (00:32:32) • Fanaticism (00:38:33) • Stochastic dominance (00:52:11) • Background uncertainty (00:56:27) • Epistemic worries about longtermism (01:12:44) • Best arguments against working on existential risk reduction (01:32:34) • The scope of longtermism (01:41:12) • The value of the future (01:50:09) • Moral uncertainty (01:57:25) • Christian's personal priorities (02:17:27) • The state of global priorities research (02:21:33) • Competitive debating (02:28:34) • The Berry paradox (02:35:00) Producer: Keiran Harris. Audio mastering: Ryan Kessler. Transcriptions: Sofia Davis-Fogel.…
1 #97 – Mike Berkowitz on keeping the US a liberal democratic country 2:36:10
Donald Trump's attempt to overturn the results of the 2020 election split the Republican party. There were those who went along with it — 147 members of Congress raised objections to the official certification of electoral votes — but there were others who refused. These included Brad Raffensperger and Brian Kemp in Georgia, and Vice President Mike Pence. Although one could say that the latter Republicans showed great courage, the key to the split may lie less in differences of moral character or commitment to democracy, and more in what was being asked of them. Trump wanted the first group to break norms, but he wanted the second group to break the law. And while norms were indeed shattered, laws were upheld. Today's guest, Mike Berkowitz, executive director of the Democracy Funders Network, points out a problem we came to realize throughout the Trump presidency: so many of the things that we thought were laws were actually just customs. Links to learn more, summary and full transcript. So once you have leaders who don't buy into those customs — like, say, that a president shouldn't tell the Department of Justice who it should and shouldn't be prosecuting — there's nothing preventing said customs from being violated. And what happens if current laws change? A recent Georgia bill took away some of the powers of Georgia's Secretary of State — Brad Raffensperger. Mike thinks that's clearly retribution for Raffensperger's refusal to overturn the 2020 election results. But he also thinks it means that the next time someone tries to overturn the results of an election, they could get much farther than Trump did in 2020. In this interview Mike covers what he thinks are the three most important levers to push on to preserve liberal democracy in the United States: 1. Reforming the political system, by e.g. introducing new voting methods 2. Revitalizing local journalism 3. Reducing partisan hatred within the United States Mike says that American democracy, like democracy elsewhere in the world, is not an inevitability. The U.S. has institutions that are really important for the functioning of democracy, but they don't automatically protect themselves — they need people to stand up and protect them. In addition to the changes listed above, Mike also thinks that we need to harden more norms into laws, such that individuals have fewer opportunities to undermine the system. And inasmuch as laws provided the foundation for the likes of Raffensperger, Kemp, and Pence to exhibit political courage, if we can succeed in creating and maintaining the right laws, we may see many others following their lead. As Founding Father James Madison put it: "If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary." Mike and Rob also talk about: • What sorts of terrible scenarios we should actually be worried about, i.e.
the difference between being overly alarmist and properly alarmist • How to reduce perverse incentives for political actors, including those to overturn election results • The best opportunities for donations in this space • And much more Chapters: Rob’s intro (00:00:00) The interview begins (00:02:01) What we should actually be worried about (00:05:03) January 6th, 2021 (00:11:03) Trump’s defeat (00:16:44) Improving incentives for representatives (00:30:55) Signs of a loss of confidence in American democratic institutions (00:44:58) Most valuable political reforms (00:54:39) Revitalising local journalism (01:08:07) Reducing partisan hatred (01:21:53) Should workplaces be political? (01:31:40) Mistakes of the left (01:36:50) Risk of overestimating the problem (01:39:56) Charitable giving (01:48:13) How to shortlist projects (01:56:42) Speaking to Republicans (02:04:15) Patriots & Pragmatists and The Democracy Funders Network (02:12:51) Rob’s outro (02:32:58) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
1 The ten episodes of this show you should listen to first 3:03
Today we're launching a new podcast feed that might be useful to you and people you know. It's called 'Effective Altruism: An Introduction', and it's a carefully chosen selection of ten episodes of this show, with various new intros and outros to guide folks through them. Basically, as the number of episodes of this show has grown, it has become less and less practical to ask new subscribers to go back and listen through most of our archives. So naturally new subscribers want to know... what should I listen to first? What episodes will help me make sense of effective altruist thinking and get the most out of new episodes? We hope that 'Effective Altruism: An Introduction' will fill that gap. Across the ten episodes, we cover what effective altruism at its core really is, what folks who are tackling a number of well-known problem areas are up to and why, some more unusual and speculative problems, and how we and the rest of the team here try to think through difficult questions as clearly as possible. Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well. Another gap it might fill is in helping you recommend the show to people, or suggest a way to learn more about effective altruist style thinking to people who are curious about it. If someone in your life wants to get an understanding of what 80,000 Hours or effective altruism are all about, and prefers to listen to things rather than read, this is a great resource to direct them to. You can find it by searching for effective altruism in your podcasting app, or by going to 80000hours.org/intro. We'd love to hear how you get on, whether listening to it yourself or sharing it with others in your life. Get in touch by emailing podcast@80000hours.org.…
1 #96 – Nina Schick on disinformation and the rise of synthetic media 2:00:04
You might have heard fears like this in the last few years: What if Donald Trump were woken up in the middle of the night and shown a fake video — indistinguishable from a real one — in which Kim Jong Un announced an imminent nuclear strike on the U.S.? Today's guest Nina Schick, author of Deepfakes: The Coming Infocalypse, thinks these concerns were the result of hysterical reporting, and that the barriers to entry in terms of making a very sophisticated 'deepfake' video today are a lot higher than people think. But she also says that by the end of the decade, YouTubers will be able to produce the kind of content that's currently only accessible to Hollywood studios. So is it just a matter of time until we'll be right to be terrified of this stuff? Links to learn more, summary and full transcript. Nina thinks the problem of misinformation and disinformation might be roughly as important as climate change, because as she says: "Everything exists within this information ecosystem, it encompasses everything." We haven't done enough research to properly weigh in on that ourselves, but Rob did present Nina with some early objections, such as: • Won't people quickly learn that audio and video can be faked, and so will only take them seriously if they come from a trusted source? • If Photoshop didn't lead to total chaos, why should this be any different? But the grim reality is that if you wrote "I believe that the world will end on April 6, 2022" and pasted it next to a photo of Albert Einstein, a lot of people would believe it was a genuine quote. And Nina thinks that flawless synthetic videos will represent a significant jump in our ability to deceive. She also points out that the direct impact of fake videos is just one side of the issue. In a world where all media can be faked, everything can be denied. Consider Trump's infamous Access Hollywood tape. If that had happened in 2020 instead of 2016, he would almost certainly have claimed it was fake — and that claim wouldn't be obviously ridiculous. Malignant politicians everywhere could plausibly deny footage of them receiving a bribe, or ordering a massacre. What happens if in every criminal trial, a suspect caught on camera can just look at the jury and say "that video is fake"? Nina says that undeniably, this technology is going to give bad actors a lot of scope to avoid accountability for their actions. As we try to inoculate people against being tricked by synthetic media, we risk corroding their trust in all authentic media too. And Nina asks: If you can't agree on any set of objective facts or norms on which to start your debate, how on earth do you even run a society? Nina and Rob also talk about a bunch of other topics, including: • The history of disinformation, and groups who sow disinformation professionally • How deepfake pornography is used to attack and silence women activists • The key differences between how this technology interacts with liberal democracies vs. authoritarian regimes • Whether we should make it illegal to make a deepfake of someone without their permission • And the coolest positive uses of this technology Chapters: Rob's intro (00:00:00) The interview begins (00:01:28) Deepfakes (00:05:49) The influence of synthetic media today (00:17:20) The history of misinformation and disinformation (00:28:13) Text vs. video (00:34:05) Privacy (00:40:17) Deepfake pornography (00:49:05) Russia and other bad actors (00:58:38) 2016 vs. 2020 US elections (01:13:44) Authoritarian regimes vs.
liberal democracies (01:24:08) Law reforms (01:31:52) Positive uses (01:37:04) Technical solutions (01:40:56) Careers (01:52:30) Rob’s outro (01:58:27) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
1 #95 – Kelly Wanser on whether to deliberately intervene in the climate 1:24:08
How long do you think it'll be before we're able to bend the weather to our will? A massive rainmaking program in China, efforts to seed new oases in the Arabian Peninsula, or chemically induced snow for skiers in Colorado. 100 years? 50 years? 20? Those who know how to write a teaser hook for a podcast episode will have correctly guessed that all these things are already happening today. And the techniques being used could be turned to managing climate change as well. Today's guest, Kelly Wanser, founded SilverLining — a nonprofit organization that advocates research into climate interventions, such as seeding or brightening clouds, to ensure that we maintain a safe climate. Links to learn more, summary and full transcript. Kelly says that current climate projections, even if we do everything right from here on out, imply that two degrees of global warming are now unavoidable. And the same scientists who made those projections fear the flow-through effects that warming could have. Since our best case scenario may already be too dangerous, SilverLining focuses on ways that we could intervene quickly in the climate if things get especially grim — their research serving as a kind of insurance policy. After considering everything from mirrors in space, to shiny objects on the ocean, to reflective materials on the Arctic, their scientists concluded that the most promising approach was leveraging one of the ways that the Earth already regulates its temperature — the reflection of sunlight off particles and clouds in the atmosphere. Cloud brightening is a climate control approach that sprays a fine mist of sea water into clouds to make them 'whiter', so they reflect even more sunlight back into space. These 'streaks' in clouds are already created by ships, because the particulates from their diesel engines inadvertently make clouds a bit brighter. Kelly says that scientists estimate we're already lowering the global temperature this way by 0.5–1.1°C, without even intending to. While fossil fuel particulates are terrible for human health, they think we could replicate this effect by simply spraying sea water up into clouds. But so far there hasn't been funding to measure how much temperature change you get for a given amount of spray. And we won't want to dive into these methods head first, because the atmosphere is a complex system we can't yet properly model, and there are many things to check first. For instance, chemicals that reflect light from the upper atmosphere might totally change wind patterns in the stratosphere. Or they might not — for all the discussion of global warming, the climate is surprisingly understudied. The public tends to be skeptical of climate interventions, otherwise known as geoengineering, so in this episode we cover a range of possible objections, such as: • It being riskier than doing nothing • That it will inevitably be dangerously political • And the risk of the 'double catastrophe', where a pandemic stops our climate interventions and temperatures skyrocket at the worst time. Kelly and Rob also talk about: • The many climate interventions that are already happening • The most promising ideas in the field • And whether people would be more accepting if we found ways to intervene that had nothing to do with making the world a better place.
Chapters: • Rob’s intro (00:00:00) • The interview begins (00:01:37) • Existing climate interventions (00:06:44) • Most promising ideas (00:16:23) • Doing good by accident (00:28:39) • Objections to this approach (00:31:16) • How much could countries do individually? (00:47:19) • Government funding (00:50:08) • Is global coordination possible? (00:53:01) • Malicious use (00:57:07) • Careers and SilverLining (01:04:03) • Rob’s outro (01:23:34) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
1 #94 – Ezra Klein on aligning journalism, politics, and what matters most 1:45:21
How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs? When people look back on this era, is the interesting thing going to have been fights over whether the top marginal tax rate was 39.5% or 35.4%, or is it going to be that human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously? Today's guest is Ezra Klein, one of the most prominent journalists in the world. Ezra thinks that pressing issues are neglected largely because there's little pre-existing infrastructure to push them. Links to learn more, summary and full transcript. He points out that for a long time taxes have been considered hugely important in D.C. political circles — and maybe once they were. But either way, the result is that there are a lot of congressional committees, think tanks, and experts that have focused on taxes for decades and continue to produce a steady stream of papers, articles, and opinions for journalists they know to cover (often these are journalists hired to write specifically about tax policy). To Ezra (and to us, and to many others) AI seems obviously more important than marginal changes in taxation over the next 10 or 15 years — yet there's very little infrastructure for thinking about it. There isn't a committee in Congress that primarily deals with AI, and no one has a dedicated AI position in the executive branch of the U.S. Government; nor are there big AI think tanks in D.C. producing weekly articles for journalists they know to report on. All of this generates a strong 'path dependence' that can lock the media into covering less important topics despite having no intention to do so. According to Ezra, the hardest thing to do in journalism — as the leader of a publication, or even to some degree just as a writer — is to maintain your own sense of what's important, and not just be swept along in the tide of what "the industry / the narrative / the conversation has decided is important." One reason Ezra created the Future Perfect vertical at Vox is that as he began to learn about effective altruism, he thought: "This is a framework for thinking about importance that could offer a different lens that we could use in journalism. It could help us order things differently." Ezra says there is an audience for the stuff that we'd consider most important here at 80,000 Hours. It's broadly believed that nobody will read articles on animal suffering, but Ezra says that his experience at Vox shows these stories actually do really well — and that many of the things that the effective altruist community cares a lot about are "...like catnip for readers." Ezra's bottom line for fellow journalists is that if something important is happening in the world and you can't make the audience interested in it, that is your failure — never the audience's failure. But is that really true?
In today’s episode we explore that claim, as well as: • How many hours of news the average person should consume • Where the progressive movement is failing to live up to its values • Why Ezra thinks 'price gouging' is a bad idea • Where the FDA has failed on rapid at-home testing for COVID-19 • Whether we should be more worried about tail-risk scenarios • And his biggest critiques of the effective altruism community Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
1 #93 – Andy Weber on rendering bioweapons obsolete & ending the new nuclear arms race 1:54:21
COVID-19 has provided a vivid reminder of the power of biological threats. But the threat doesn't come from natural sources alone. Weaponized contagious diseases — which were abandoned by the United States, but developed in large numbers by the Soviet Union, right up until its collapse — have the potential to spread globally and kill just as many as an all-out nuclear war. For five years today's guest — Andy Weber — was the US Assistant Secretary of Defense responsible for biological and other weapons of mass destruction. While most people (including most within the Pentagon itself) primarily associate the Pentagon with waging wars, Andy is quick to point out that you can't have national security if your population remains at grave risk from natural and lab-created diseases. Andy's current mission is to spread the word that while bioweapons are terrifying, scientific advances also leave them on the verge of becoming an outdated technology. Links to learn more, summary and full transcript. He thinks there is an overwhelming case to increase our investment in two new technologies that could dramatically reduce the risk of bioweapons, and end natural pandemics in the process. First, advances in genetic sequencing technology allow direct, real-time analysis of DNA or RNA fragments collected from the environment. You sample widely, and if you start seeing DNA sequences that you don't recognise, that sets off an alarm. Andy says that while desktop sequencers may be expensive enough that they're only in hospitals today, they're rapidly getting smaller, cheaper, and easier to use. In fact DNA sequencing has recently experienced the most dramatic cost decrease of any technology, declining by a factor of 10,000 since 2007. It's only a matter of time before they're cheap enough to put in every home. The second major breakthrough comes from mRNA vaccines, which are today being used to end the COVID pandemic. The wonder of mRNA vaccines is that they can instruct our cells to make essentially any protein we choose — and trigger a protective immune response from the body. By using the sequencing technology above, we can quickly get the genetic code that matches the surface proteins of any new pathogen, and switch that code into the mRNA vaccines we're already making. Making a new vaccine would become less like manufacturing a new iPhone and more like printing a new book — you use the same printing press and just change the words. So long as we kept enough capacity to manufacture and deliver mRNA vaccines on hand, a whole country could in principle be vaccinated against a new disease in months. In tandem these technologies could make advanced bioweapons a threat of the past. And in the process contagious disease could be brought under control like never before. Andy has always been pretty open and honest, but his retirement last year has allowed him to stop worrying about being seen to speak for the Department of Defense, or for the president of the United States – and we were able to get his forthright views on a bunch of interesting other topics, such as: • The chances that COVID-19 escaped from a research facility • Whether a US president can really, truly launch nuclear weapons unilaterally • What he thinks should be the top priorities for the Biden administration • The time he and colleagues found 600kg of unsecured, highly enriched uranium sitting around in a barely secured facility in Kazakhstan, and eventually transported it to the United States • And much more.
Job opportunity: Executive Assistant to Will MacAskill Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
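As a quick sanity check on the sequencing cost claim above, here is a back-of-envelope calculation of the annual rate of decline it implies. The 14-year span (2007 to 2021) is an assumption for illustration, not a figure from the episode.

# Back-of-envelope: what yearly decline does a 10,000x cost drop since 2007 imply?
# Assumption: the drop is spread over ~14 years (2007-2021).
total_factor = 10_000
years = 2021 - 2007
annual_factor = total_factor ** (1 / years)   # ~1.93x cheaper each year
annual_decline = 1 - 1 / annual_factor        # ~48% cost reduction per year
print(f"Costs fell roughly {annual_factor:.2f}x per year (about {annual_decline:.0%} cheaper each year)")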
1 #92 – Brian Christian on the alignment problem 2:55:46
Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science. Listeners loved our episode about his book Algorithms to Live By — so when the team read his new book, The Alignment Problem, and found it to be an insightful and comprehensive review of the state of the research into making advanced AI useful and reliably safe, getting him back on the show was a no-brainer. Brian has so much of substance to say that this episode will likely be of interest to people who know a lot about AI as well as those who know a little, and of interest to people who are nervous about where AI is going as well as those who aren't nervous at all. Links to learn more, summary and full transcript. Here's a tease of 10 Hollywood-worthy stories from the episode: • The Riddle of Dopamine: The development of reinforcement learning solves a long-standing mystery of how humans are able to learn from their experience. • ALVINN: A student teaches a military vehicle to drive between Pittsburgh and Lake Erie, without intervention, in the early 1990s, using a computer with a tenth the processing capacity of an Apple Watch. • Couch Potato: An agent trained to be curious is stopped in its quest to navigate a maze by a paralysing TV screen. • Pitts & McCulloch: A homeless teenager and his foster father figure invent the idea of the neural net. • Tree Senility: Agents become so good at living in trees to escape predators that they forget how to leave, starve, and die. • The Danish Bicycle: A reinforcement learning agent figures out that it can better achieve its goal by riding in circles as quickly as possible than by reaching its purported destination. • Montezuma's Revenge: By 2015 a reinforcement learner can play 60 different Atari games — the majority impossibly well — but can't score a single point on one game humans find tediously simple. • Curious Pong: Two novelty-seeking agents, forced to play Pong against one another, create increasingly extreme rallies. • AlphaGo Zero: A computer program becomes superhuman at chess and Go in under a day by attempting to imitate itself. • Robot Gymnasts: Over the course of an hour, humans teach robots to do perfect backflips just by telling them which of two random actions looks more like a backflip. We also cover: • How reinforcement learning actually works, and some of its key achievements and failures • How a lack of curiosity can cause AIs to fail to be able to do basic things • The pitfalls of getting AI to imitate how we ourselves behave • The benefits of getting AI to infer what we must be trying to achieve • Why it's good for agents to be uncertain about what they're doing • Why Brian isn't that worried about explicit deception • The interviewees Brian most agrees with, and most disagrees with • Developments since Brian finished the manuscript • The effective altruism and AI safety communities • And much more Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
80,000 Hours Podcast


1 #91 – Lewis Bollard on big wins against factory farming and how they happened 2:33:17
I suspect today's guest, Lewis Bollard, might be the single best person in the world to interview for an overview of all the methods that might be effective for putting an end to factory farming, and for the broader lessons we can learn from the experiences of people working to end cruelty in animal agriculture. That's why I interviewed him back in 2017, and it's why I've come back for an updated second dose four years later.

That conversation became a touchstone resource for anyone wanting to understand why people might decide to focus their altruism on farmed animal welfare, what those people are up to, and why.

Lewis leads Open Philanthropy's strategy for farm animal welfare, and since he joined in 2015 they've disbursed about $130 million in grants to nonprofits as part of this program.

This episode certainly isn't only for vegetarians or people whose primary focus is animal welfare. The farmed animal welfare movement has had a lot of big wins over the last five years, and many of the lessons animal activists and plant-based meat entrepreneurs have learned are of much broader interest.

Links to learn more, summary and full transcript.

Some of those include:

• Between 2019 and 2020, Beyond Meat's cost of goods sold fell from about $4.50 a pound to $3.50 a pound. Will plant-based meat or clean meat displace animal meat, and if so, when? How quickly can it reach price parity? (A back-of-the-envelope sketch follows these notes.)
• One study reported that philosophy students reduced their meat consumption by 13% after going through a course on the ethics of factory farming. But do studies like this replicate? And what happens several months later?
• One survey showed that 33% of people supported a ban on animal farming. Should we take such findings seriously? Or is it as informative as the study which showed that 38% of Americans believe that Ted Cruz might be the Zodiac killer?
• Costco, the second largest retailer in the U.S., is now over 95% cage-free. Why have they done that years before they had to? And can ethical individuals within these companies make a real difference?

We also cover:

• Switzerland's ballot measure on eliminating factory farming
• What a Biden administration could mean for reducing animal suffering
• How chicken is cheaper than peanuts
• The biggest recent wins for farmed animals
• Things that haven't gone to plan in animal advocacy
• Political opportunities for farmed animal advocates in Europe
• How the US is behind Brazil and Israel on animal welfare standards
• The value of increasing media coverage of factory farming
• The state of the animal welfare movement
• And much more

If you'd like an introduction to the nature of the problem and why Lewis is working on it, then in addition to our 2017 interview with Lewis, you could check out this 2013 cause report from Open Philanthropy.

Chapters:
Rob's intro (00:00:00)
The interview begins (00:04:37)
Biggest recent wins for farmed animals (00:06:13)
How to lower the price of plant-based meat (00:24:57)
Documentaries for farmed animals (00:37:05)
Political opportunities (00:43:07)
Do we know how to get people to reduce their meat consumption? (00:45:03)
The fraction of Americans who don't eat meat (00:52:17)
Surprising number of people who support a ban on animal farming (00:57:57)
What we've learned over the past four years (01:02:48)
Things that haven't gone to plan (01:26:30)
Animal advocacy in emerging countries (01:34:44)
Fish, crustaceans, and wild animals (01:40:28)
Open Philanthropy grants (01:47:43)
Audience questions (01:59:29)
The elimination of slavery (02:10:03)
Careers (02:15:52)

Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.…
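The Beyond Meat figures invite a quick back-of-the-envelope check, which is not from the episode. The toy Python snippet below uses the $4.50 and $3.50 per pound figures from the notes above, plus a purely hypothetical $2.00 per pound conventional-meat benchmark, to compute the implied annual decline and how long reaching that benchmark would take if the same proportional rate simply continued. Both the benchmark and the constant-rate extrapolation are assumptions for illustration only.

```python
# Back-of-the-envelope extrapolation of plant-based meat costs.
# The $4.50 and $3.50 figures come from the show notes above; the $2.00/lb
# conventional-meat benchmark is an assumed placeholder, not a sourced number.
import math

cogs_2019 = 4.50    # cost of goods sold, $/lb (from the notes)
cogs_2020 = 3.50    # $/lb one year later (from the notes)
benchmark = 2.00    # hypothetical conventional-meat price, $/lb (assumption)

annual_decline = 1 - cogs_2020 / cogs_2019   # about 22% per year
years_to_parity = math.log(benchmark / cogs_2020) / math.log(1 - annual_decline)

print(f"annual decline: {annual_decline:.1%}")
print(f"naive years from 2020 to reach ${benchmark:.2f}/lb: {years_to_parity:.1f}")
```

On these toy assumptions parity arrives in roughly two more years of decline at that rate; whether the rate can be sustained is exactly the kind of question discussed in the episode.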
80,000 Hours Podcast


1 Rob Wiblin on how he ended up the way he is 1:57:57
This is a crosspost of an episode of the Eureka Podcast. The interviewer is Misha Saul, a childhood friend of Rob's, whom he has known for over 20 years.

While it's not an episode of our own show, we decided to share it with subscribers because it's fun, and because it touches on personal topics that we don't usually cover on the show.

Rob and Misha cover:

• How Rob's parents shaped who he is (if indeed they did)
• Their shared teenage obsession with philosophy, which eventually led to Rob working at 80,000 Hours
• How their politics were shaped by growing up in the 90s
• How talking to Rob helped Misha develop his own very different worldview
• Why The Lord of the Rings movies have held up so well
• What it was like being an exchange student in Spain, and whether learning Spanish was a mistake
• Marriage and kids
• Institutional decline and historical analogies for the US in 2021
• Making fun of teachers
• Should we stop eating animals?

Producer: Keiran Harris. Audio mastering: Ben Cordell.…
Welcome to Player FM!
Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Sign up to sync subscriptions across devices.