The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.
You can learn mo...
In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficu...
On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai
AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to ...
On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts they might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the li...
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us.
We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check o...
Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts.
You can learn more about Ann's work here:
https://www.wiseancestors.org
Timestamps:
00:00 What is Wise Ancestors?
04:27 Recovering a...
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI.
You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot
Timestamps:
00:00 Meta-narratives...
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.
You can learn more about David's work at ARIA here:
https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-a...
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com
Timestamps:
00:00 What is GiveDirectly?
15:04 AI for targeting cash transfers
29:39 AI for predicting natural disasters
46:04 How scalable is GiveDirectly's AI approach?
58:10 Decentralized vs. centralized data col...
Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4.
You can find Nathan's podcast here: https://www.cognitiverevolution.ai
Timestamps:
00:00 AI progress since GPT-4
10:50 Multimodality
19:06 Low-cost models
27:58 Coding versus medicine/law
36:09 AI agents
45:29 How much are people using AI?
53:39 Open source
01...
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.
Here's the document we discuss in the episode:
https://www.thecompendium.ai
Timestamps:
00:00 The Compendium ...
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world.
Here's Writing Doom: https://www.youtube.com/watch?v=xfMQ7hzyFW4
Timestamps:
00:00 Writing Doo...
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like.
Here's the document we discuss in the episode: ...
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode:
https://epochai.org/blog/can-ai-scaling-continue-through-2030
Timestamps:
00:00 How important is scaling?
08:03 How capable will AIs be in 2030? ...
Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI.
You can learn more about Ryan's work here: https://www.redwoodresearch.org/team/ryan-greenblatt
Timestamps:
00:00 AI control
09:35 Challenges to AI control
23:48 AI control as a bridge to alignment
26:54 Policy and coordination for AI safety
29:25 Slowing dow...
Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world.
Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence
Timestamps:
00:00 Spending on safety vs capabilities
09:06 Racing dynamics - is t...
Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more.
Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai
Timestamps:
00:00 Is AI ...
Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home
Timestamps:
00:00 Innovation prizes at XPRIZE
08:25 Deciding which prizes to create
19:00 Creat...
Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org
Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:...
Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation.
Apply for our RFP here: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/
Timestamps:
00:00 Power concentration
07:43 RFP: Mitigating AI-driven power concentration
14:15 Open source AI
26:50 Institut...