pod.link/1630783021
LessWrong (Curated & Popular)
LessWrong

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong...

Listen now on

Apple Podcasts
Spotify
Overcast
Podcast Addict
Pocket Casts
Castbox
Podbean
iHeartRadio
Player FM
Podcast Republic
Castro
RSS

Episodes

“The Most Forbidden Technique” by Zvi

The Most Forbidden Technique is training an AI using interpretability techniques. An AI produces a final output [X] via...

14 Mar 2025 · 32 minutes
“Trojan Sky” by Richard_Ngo

You learn the rules as soon as you’re old enough to speak. Don’t talk to jabberjays. You recite them as...

13 Mar 2025 · 22 minutes
“OpenAI:” by Daniel Kokotajlo

Exciting Update: OpenAI has released this blog post and paper which makes me very happy. It's basically the first steps...

11 Mar 2025 · 7 minutes
“How Much Are LLMs Actually Boosting Real-World Programmer Productivity?” by Thane Ruthenis

LLM-based coding-assistance tools have been out for ~2 years now. Many developers have been reporting that this is dramatically increasing...

09 Mar 2025 · 7 minutes
“So how well is Claude playing Pokémon?” by Julian Bradshaw

Background: After the release of Claude 3.7 Sonnet,[1] an Anthropic employee started livestreaming Claude trying to play through Pokémon Red....

09 Mar 2025 · 9 minutes
“Methods for strong human germline engineering” by TsviBT

Note: an audio narration is not available for this article. Please see the original text. The original...

07 Mar 2025
“Have LLMs Generated Novel Insights?” by abramdemski, Cole Wyeth

In a recent post, Cole Wyeth makes a bold claim: . . . there is one crucial test (yes...

06 Mar 2025 · 3 minutes
“A Bear Case: My Predictions Regarding AI Progress” by Thane Ruthenis

This isn't really a "timeline", as such – I don't know the timings – but this is my current, fairly...

06 Mar 2025 · 18 minutes
“Statistical Challenges with Making Super IQ babies” by Jan Christian Refsgaard

This is a critique of How to Make Superbabies on LessWrong. Disclaimer: I am not a geneticist[1], and I've...

05 Mar 2025 · 17 minutes
“Self-fulfilling misalignment data might be poisoning our AI models” by TurnTrout

This is a link post. Your AI's training data might make it more “evil” and more able to circumvent your security,...

04 Mar 2025 · 1 minute