The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.
Owain Evans is an AI Alignment researcher, research associate at the Center for Human-Compatible AI at UC Berkeley, and now leading...
This is a special crosspost episode where Adam Gleave is interviewed by Nathan Labenz from the Cognitive Revolution. At the...
Ethan Perez is a Research Scientist at Anthropic, where he leads a team working on developing model organisms of misalignment. Youtube:...
Emil is the co-founder of palette.fm (colorizing B&W pictures with generative AI) and previously worked in deep learning for...
Evan Hubinger leads the Alignment Stress-Testing team at Anthropic and recently published "Sleeper Agents: Training Deceptive LLMs That Persist Through Safety...
Jeffrey Ladish is the Executive Director of Palisade Research, which aims to "study the offensive capabilities of AI systems today...
Holly Elmore is an AI Pause Advocate who has organized two protests in the past few months (against Meta's open...
https://youtu.be/Fk2MrpuWinc
Youtube: https://youtu.be/JXYcLQItZsk Paul Christiano's post: https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom
Neel Nanda is a researcher at Google DeepMind working on mechanistic interpretability. He is also known for his YouTube channel...
Joscha Bach (who describes himself as an AI researcher/cognitive scientist) has recently been debating existential risk from AI with Connor...
Erik is a PhD student at Berkeley working with Jacob Steinhardt, interested in making generative machine learning systems more robust, reliable,...
Dylan Patel is Chief Analyst at SemiAnalysis, a boutique semiconductor research and consulting firm specializing in the semiconductor supply chain from...
Tony is a PhD student at MIT, and author of "Adversarial Policies Beat Superhuman Go AIs", accepted as an Oral at...
David Bau is an Assistant Professor studying the structure and interpretation of deep networks, and a co-author of "Locating and...
I've talked to Alexander Pan, a first-year PhD student at Berkeley working with Jacob Steinhardt, about his paper "Measuring Trade-Offs Between Rewards...
Vincent is currently spending his time supporting AI alignment efforts, as well as investing across AI, semi, energy, crypto, bio...
Aran Komatsuzaki is an ML PhD student at GaTech and lead researcher at EleutherAI, where he was one of the...
Curtis, also known on the internet as AI_WAIFU, is the head of Alignment at EleutherAI. In this episode we discuss...
Eric is a PhD student in the Department of Physics at MIT working with Max Tegmark on improving our scientific/theoretical...
Jesse Hoogland is a research assistant at David Krueger's lab in Cambridge studying AI Safety. More recently, Jesse has been...
Explainer podcast for Richard Ngo's "Clarifying and predicting AGI" post on LessWrong, which introduces the t-AGI framework to evaluate AI...
Max Kaufmann and Alan Chan discuss the evaluation of large language models, AI Governance and more generally the impact of...
Breandan Considine is a PhD student at the School of Computer Science at McGill University, under the supervision of Jin...
Christoph Schuhmann is the co-founder and organizational lead at LAION, the non-profit that released LAION-5B, a dataset of 5.85 billion...
Siméon Campos is the founder of EffiSciences and SaferAI, mostly focusing on alignment field building and AI Governance. More recently,...
Collin Burns is a second-year ML PhD at Berkeley, working with Jacob Steinhardt on making language models honest, interpretable, and...
Victoria Krakovna is a Research Scientist at DeepMind working on AGI safety and a co-founder of the Future of Life Institute...
David Krueger is an assistant professor at the University of Cambridge and received his PhD from Mila. His research group...
Ethan Caballero is a PhD student at Mila interested in how best to scale Deep Learning models according to all...