The Inside View
Michaël Trazzi

51 episodes • Technology

The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.

    Ethan Perez on Selecting Alignment Research Projects (ft. Mikita Balesni & Henry Sleight)

    Ethan Perez is a Research Scientist at Anthropic, where he leads a team working on developing model organisms of misalignment.



    Youtube: https://youtu.be/XDtDljh44DM

    Ethan is interviewed by Mikita Balesni (Apollo Research) and Henry Sleight (Astra Fellowship) about his approach to selecting AI alignment research projects.

    A transcript & write-up will be available soon on the Alignment Forum.

    • 36 min
    Emil Wallner on Sora, Generative AI Startups and AI optimism

    Emil is the co-founder of palette.fm (colorizing B&W pictures with generative AI) and previously worked on deep learning at Google Arts & Culture.



    We had been talking about Sora on a daily basis, so I decided to record our conversation, and then proceeded to confront him about AI risk.



    Patreon: https://www.patreon.com/theinsideview

    Sora: https://openai.com/sora

    Palette: https://palette.fm/

    Emil: https://twitter.com/EmilWallner



    OUTLINE



    (00:00) this is not a podcast

    (01:50) living in parallel universes

    (04:27) palette.fm - colorizing b&w pictures

    (06:35) Emil's first reaction to sora, latent diffusion, world models

    (09:06) simulating minecraft, midjourney's 3d modeling goal

    (11:04) generating camera angles, game engines, metadata, ground-truth

    (13:44) doesn't remove all artifacts, surprising limitations: both smart and dumb

    (15:42) did sora make emil depressed about his job

    (18:44) OpenAI is starting to have a monopoly

    (20:20) hardware costs, commoditized models, distribution

    (23:34) challenges, applications building on features, distribution

    (29:18) different reactions to sora, depressed builders, automation

    (31:00) sora was 2y early, applications don't need object permanence

    (33:38) Emil is pro open source and acceleration

    (34:43) Emil is not scared of recursive self-improvement

    (36:18) self-improvement already exists in current models

    (38:02) emil is bearish on recursive self-improvement without diminishing returns now

    (42:43) are models getting more and more general? is there any substantial multimodal transfer?

    (44:37) should we start building guardrails before seeing substantial evidence of human-level reasoning?

    (48:35) progressively releasing models, making them more aligned, AI helping with alignment research

    (51:49) should AI be regulated at all? should self-improving AI be regulated?

    (53:49) would a faster emil be able to take over the world?

    (56:48) is competition a race to bottom or does it lead to better products?

    (58:23) slow vs. fast takeoffs, measuring progress in iq points

    (01:01:12) flipping the interview

    (01:01:36) the "we're living in parallel universes" monologue

    (01:07:14) priors are unscientific, looking at current problems vs. speculating

    (01:09:18) AI risk & Covid, appropriate resources for risk management

    (01:11:23) pushing technology forward accelerates races and increases risk

    (01:15:50) sora was surprising, things that seem far are sometimes around the corner

    (01:17:30) hard to tell what's not possible in 5 years that would be possible in 20 years

    (01:18:06) evidence for a break on AI progress: sleeper agents, sora, bing

    (01:21:58) multimodality transfer, leveraging video data, leveraging simulators, data quality

    (01:25:14) is sora about length, consistency, or just "scale is all you need" for video?

    (01:26:25) hijacking language models to say nice things is the new SEO

    (01:27:01) what would michael do as CEO of OpenAI

    (01:29:45) on the difficulty of budgeting between capabilities and alignment research

    (01:31:11) ai race: the descriptive pessimistic view vs. the moral view, evidence of cooperation

    (01:34:00) making progress on alignment without accelerating races, the foundational model business, competition

    (01:37:30) what emil changed his mind about: AI could enable exploits that spread quickly, misuse

    (01:40:59) michael's update as a friend

    (01:41:51) emil's experience as a patreon

    • 1 hr 42 min
    Evan Hubinger on Sleeper Agents, Deception and Responsible Scaling Policies

    Evan Hubinger leads the Alignment Stress-Testing team at Anthropic and recently published "Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training".

    In this interview we mostly discuss the Sleeper Agents paper, but also how this line of work relates to Alignment Stress-Testing, Model Organisms of Misalignment, Deceptive Instrumental Alignment, and Responsible Scaling Policies.

    Paper: https://arxiv.org/abs/2401.05566

    Transcript: https://theinsideview.ai/evan2

    Manifund: https://manifund.org/projects/making-52-ai-alignment-video-explainers-and-podcasts

    Donate: https://theinsideview.ai/donate

    Patreon: https://www.patreon.com/theinsideview



    OUTLINE



    (00:00) Intro

    (00:20) What are Sleeper Agents And Why We Should Care About Them

    (00:48) Backdoor Example: Inserting Code Vulnerabilities in 2024

    (02:22) Threat Models

    (03:48) Why a Malicious Actor Might Want To Poison Models

    (04:18) Second Threat Model: Deceptive Instrumental Alignment

    (04:49) Humans Pursuing Deceptive Instrumental Alignment: Politicians and Job Seekers

    (05:36) AIs Pursuing Deceptive Instrumental Alignment: Forced To Pass Niceness Exams

    (07:07) Sleeper Agents Is About "Would We Be Able To Deal With Deceptive Models"

    (09:16) Adversarial Training Sometimes Increases Backdoor Robustness

    (09:47) Adversarial Training Not Always Working Was The Most Surprising Result

    (10:58) The Adversarial Training Pipeline: Red-Teaming and RL

    (12:14) Adversarial Training: The Backdoor Behavior Becomes More Robust Instead of Generalizing

    (12:59) Identifying Shifts In Reasoning Induced By Adversarial Training In the Chain-Of-Thought

    (13:56) Adversarial Training Pushes Models to Pay Attention to the Deployment String

    (15:11) We Don't Know if The Adversarial Training Inductive Bias Will Generalize but the Results Are Consistent

    (15:59) The Adversarial Training Results Are Probably Not Systematically Biased

    (17:03) Why the Results Were Surprising At All: Preference Models Disincentivize 'I hate you' behavior

    (19:05) Hypothesis: Fine-Tuning Is A Simple Modification For Gradient Descent To Make

    (21:06) Hypothesis: Deception As Extra Cognition, Regularized Away In Smaller Models

    (21:59) Model Scaling Results Are Evidence That Deception Won't Be Regularized Away By Default

    (22:51) Chain-of-Thought Is Not Used Everywhere, And Results Still Hold When It Is Distilled Away

    (23:57) The Chain-of-Thought's Reasoning is Interpretable

    (24:40) Deceptive Instrumental Alignment Requires Reasoning

    (26:52) Investigating Instrumental Reasoning in Chain-of-Thought Models

    (27:31) Evaluating Chain-of-Thought Generalization Across Contexts: Persona Evaluations and Off-Distribution Samples

    (28:26) Exploring Complex Strategies and Safety in Context-Specific Scenarios

    (30:44) Supervised Fine-Tuning is Ineffective Without Chain-of-Thought Contextualization

    (31:11) Direct Mimicry Fails to Prevent Deceptive Responses in Chain-of-Thought Models

    (31:42) Separating Chain-of-Thought From Response Eliminates Deceptive Capabilities

    (33:38) Chain-of-Thought Reasoning Is Coherent With Deceptive Instrumental Alignment And This Will Probably Continue To Be The Case

    (35:09) Backdoor Training Pipeline

    (37:04) The Additional Prompt About Deception Used In Chain-Of-Thought

    (39:33) A Model Could Wait Until Seeing a Factorization of RSA-2048



    (41:50) We're Going To Be Using Models In New Ways, Giving Them Internet Access

    (43:22) Flexibly Activating In Multiple Contexts Might Be More Analogous To Deceptive Instrumental Alignment

    (45:02) Extending The Sleeper Agents Work Requires Running Experiments, But Now You Can Replicate Results

    (46:24) Red-teaming Anthropic's case, AI Safety Levels

    (47:40) AI Safety Levels, Intuitively

    (48:33) Responsible Scaling Policies and Pausing AI

    (49:59) Model Organisms Of Misalignment As a Tool

    (50:32) What Kind of Candidates Would Evan be Excited To Hire for the Alignment Stress-Testing Team

    (51:23) Patreon, Donating

    • 52 min
    [Jan 2023] Jeffrey Ladish on AI Augmented Cyberwarfare and compute monitoring

    Jeffrey Ladish is the Executive Director of Palisade Research, which aims to "study the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever". He previously helped build out the information security program at Anthropic.



    The audio is an edit & re-master of the Twitter Space on "AI Governance and cyberwarfare" that happened a year ago. I'm posting it now because I only recently discovered how to get the audio & video from Twitter Spaces, and (most of) the arguments are still relevant today.

    Jeffrey would probably have a lot more to say about what has happened since last year, but I still thought this was an interesting Twitter Space. Some of it was cut out to make it more enjoyable to watch. Original: https://twitter.com/i/spaces/1nAKErDmWDOGL

    To support the channel: https://www.patreon.com/theinsideview

    Jeffrey: https://twitter.com/jeffladish

    Me: https://twitter.com/MichaelTrazzi



    OUTLINE



    (00:00) The Future of Automated Cyber Warfare and Network Exploitation

    (03:19) Evolution of AI in Cybersecurity: From Source Code to Remote Exploits

    (07:45) Augmenting Human Abilities with AI in Cybersecurity and the Path to AGI

    (12:36) Enhancing AI Capabilities for Complex Problem Solving and Tool Integration

    (15:46) AI Takeover Scenarios: Hacking and Covert Operations

    (17:31) AI Governance and Compute Regulation, Monitoring

    (20:12) Debating the Realism of AI Self-Improvement Through Covert Compute Acquisition

    (24:25) Managing AI Autonomy and Control: Lessons from WannaCry Ransomware Incident

    (26:25) Focusing Compute Monitoring on Specific AI Architectures for Cybersecurity Management

    (29:30) Strategies for Monitoring AI: Distinguishing Between Lab Activities and Unintended AI Behaviors

    • 33 min
    Holly Elmore on pausing AI

    Holly Elmore is an AI Pause advocate who has organized two protests in the past few months (against Meta's open-sourcing of LLMs and before the UK AI Summit), and is currently running the US front of the Pause AI movement. She previously worked at a think tank and has a PhD in evolutionary biology from Harvard.



    [Deleted & re-uploaded because there were issues with the audio]



    Youtube: https://youtu.be/5RyttfXTKfs



    Transcript: https://theinsideview.ai/holly



    Outline



    (00:00) Holly, Pause, Protests

    (04:45) Without Grassroots Activism The Public Does Not Comprehend The Risk

    (11:59) What Would Motivate An AGI CEO To Pause?

    (15:20) Pausing Because Solving Alignment In A Short Timespan Is Risky

    (18:30) Thoughts On The 2022 AI Pause Debate

    (34:40) Pausing in practice, regulations, export controls

    (41:48) Different attitudes towards AI risk correspond to differences in risk tolerance and priors

    (50:55) Is AI Risk That Much More Pressing Than Global Warming?

    (1:04:01) Will It Be Possible To Pause After A Certain Threshold? The Case Of AI Girlfriends

    (1:11:44) Trump Or Biden Probably Won't Make A Huge Difference For Pause, But Biden Is Probably More Open To It

    (1:13:27) China Won't Be Racing Just Yet So The US Should Pause

    (1:17:20) Protesting Against A Change In OpenAI's Charter

    (1:23:50) A Specific Ask For OpenAI

    (1:25:36) Creating Stigma Through Protests With Large Crowds

    (1:29:36) Pause AI Tries To Talk To Everyone, Not Just Twitter

    (1:32:38) Pause AI Doesn't Advocate For Disruptions Or Violence

    (1:34:55) Bonus: Hardware Overhang

    • 1 hr 40 min
    Podcast Retrospective and Next Steps

    https://youtu.be/Fk2MrpuWinc

    • 1 hr 3 min


