The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.
Owain Evans is an AI Alignment researcher, research associate at the Center for Human-Compatible AI at UC Berkeley, and now leading...
This is a special crosspost episode where Adam Gleave is interviewed by Nathan Labenz from The Cognitive Revolution. At the...
Ethan Perez is a Research Scientist at Anthropic, where he leads a team working on developing model organisms of misalignment. YouTube: ...
Emil is the co-founder of palette.fm (colorizing B&W pictures with generative AI) and previously worked in deep learning for...
Evan Hubinger leads the Alignment Stress-Testing team at Anthropic and recently published "Sleeper Agents: Training Deceptive LLMs That Persist Through Safety...
Jeffrey Ladish is the Executive Director of Palisade Research, which aims to "study the offensive capabilities of AI systems today...
Holly Elmore is an AI Pause Advocate who has organized two protests in the past few months (against Meta's open...
https://youtu.be/Fk2MrpuWinc
YouTube: https://youtu.be/JXYcLQItZsk Paul Christiano's post: https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom
Neel Nanda is a researcher at Google DeepMind working on mechanistic interpretability. He is also known for his YouTube channel...