The Inside View

By Michaël Trazzi

Category: Technology


Description

The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.

Episodes (52)

Owain Evans - AI Situational Awareness, Out-of-Context Reasoning (Aug 23, 2024)
[Crosspost] Adam Gleave on Vulnerabilities in GPT-4 APIs (+ extra Nathan Labenz interview) (May 17, 2024)
Ethan Perez on Selecting Alignment Research Projects (ft. Mikita Balesni & Henry Sleight) (Apr 09, 2024)
Emil Wallner on Sora, Generative AI Startups and AI optimism (Feb 20, 2024)
Evan Hubinger on Sleeper Agents, Deception and Responsible Scaling Policies (Feb 12, 2024)
[Jan 2023] Jeffrey Ladish on AI-Augmented Cyberwarfare and compute monitoring (Jan 27, 2024)
Holly Elmore on pausing AI (Jan 22, 2024)
Podcast Retrospective and Next Steps (Jan 09, 2024)
Paul Christiano's views on "doom" (ft. Robert Miles) (Sep 29, 2023)
Neel Nanda on mechanistic interpretability, superposition and grokking (Sep 21, 2023)
Joscha Bach on how to stop worrying and love AI (Sep 08, 2023)
Erik Jones on Automatically Auditing Large Language Models (Aug 11, 2023)
Dylan Patel on the GPU Shortage, Nvidia and the Deep Learning Supply Chain (Aug 09, 2023)
Tony Wang on Beating Superhuman Go AIs with Adversarial Policies (Aug 04, 2023)
David Bau on Editing Facts in GPT, AI Safety and Interpretability (Aug 01, 2023)
Alexander Pan on the MACHIAVELLI benchmark (Jul 26, 2023)
Vincent Weisser on Funding AI Alignment Research (Jul 24, 2023)
[June 2022] Aran Komatsuzaki on Scaling, GPT-J and Alignment (Jul 19, 2023)
Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI (Jul 16, 2023)
Eric Michaud on scaling, grokking and quantum interpretability (Jul 12, 2023)
Jesse Hoogland on Developmental Interpretability and Singular Learning Theory (Jul 06, 2023)
Clarifying and predicting AGI by Richard Ngo (May 09, 2023)
Alan Chan and Max Kauffman on Model Evaluations, Coordination and AI Safety (May 06, 2023)
Breandan Considine on Neuro-Symbolic AI, Coding AIs and AI Timelines (May 04, 2023)
Christoph Schuhmann on Open Source AI, Misuse and Existential Risk (May 01, 2023)
Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building (Apr 29, 2023)
Collin Burns on Discovering Latent Knowledge in Language Models Without Supervision (Jan 17, 2023)
Victoria Krakovna – AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment (Jan 12, 2023)
David Krueger – Coordination, Alignment, Academia (Jan 07, 2023)
Ethan Caballero – Broken Neural Scaling Laws (Nov 03, 2022)
Irina Rish – AGI, Scaling and Alignment (Oct 18, 2022)
Shahar Avin – Intelligence Rising, AI Governance (Sep 23, 2022)
Katja Grace on Slowing Down AI, AI Expert Surveys and Estimating AI Risk (Sep 16, 2022)
Markus Anderljung – AI Policy (Sep 09, 2022)
Alex Lawsen – Forecasting AI Progress (Sep 06, 2022)
Robert Long – Artificial Sentience (Aug 28, 2022)
Ethan Perez – Inverse Scaling, Language Feedback, Red Teaming (Aug 24, 2022)
Robert Miles – YouTube, AI Progress and Doom (Aug 19, 2022)
Connor Leahy – EleutherAI, Conjecture (Jul 22, 2022)
Raphaël Millière Contra Scaling Maximalism (Jun 24, 2022)
Blake Richards – AGI Does Not Exist (Jun 14, 2022)
Ethan Caballero – Scale is All You Need (May 05, 2022)
10. Peter Wildeford on Forecasting (Apr 13, 2022)
9. Emil Wallner on Building a €25,000 Machine Learning Rig (Mar 23, 2022)
8. Sonia Joseph on NFTs, Web 3 and AI Safety (Dec 22, 2021)
7. Phil Trammell on Economic Growth under Transformative AI (Oct 24, 2021)
6. Slava Bobrov on Brain-Computer Interfaces (Oct 06, 2021)
5. Charlie Snell on DALL-E and CLIP (Sep 16, 2021)
4. Sav Sidorov on Learning, Contrarianism and Robotics (Sep 05, 2021)
3. Evan Hubinger on Takeoff Speeds, Risks from Learned Optimization & Interpretability (Jun 08, 2021)
2. Connor Leahy on GPT-3, EleutherAI and AI Alignment (May 04, 2021)
1. Does the world really need another podcast? (Apr 25, 2021)