pod.link/1580097837
Cold Takes Audio
coldtakes

Amateur read-throughs of blog posts on Cold-Takes.com, for those who prefer listening to reading. Available on Apple, Spotify, Google Podcasts, and anywhere…

Listen now on

Apple Podcasts
Spotify
Overcast
Podcast Addict
Pocket Casts
Castbox
Podbean
iHeartRadio
Player FM
Podcast Republic
Castro
RSS

Episodes

What AI companies can do today to help with the most important century

Major AI companies can increase or reduce global catastrophic risks.
https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/

20 Feb 2023 · 18 minutes
Jobs that can help with the most important century

People are far better at their jobs than at anything else. Here are the best ways to help the most…

10 Feb 2023 · 30 minutes
Spreading messages to help with the most important century

For people who want to help improve our prospects for navigating transformative AI, and have an audience (even a small…

25 Jan 2023 · 20 minutes
How we could stumble into AI catastrophe

Hypothetical stories where the world tries, but fails, to avert a global disaster.
https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe

13 Jan 2023 · 28 minutes
Transformative AI issues (not just misalignment): an overview

An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI.
https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/

05 Jan 2023 · 25 minutes
Racing Through a Minefield: the AI Deployment Problem

Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is…

22 Dec 2022 · 21 minutes
High-level hopes for AI alignment

A few ways we might get very powerful AI systems to be safe.
https://www.cold-takes.com/high-level-hopes-for-ai-alignment/

15 Dec 2022 · 23 minutes
AI safety seems hard to measure

Four analogies for why "We don't see any misbehavior by this AI" isn't enough.
https://www.cold-takes.com/ai-safety-seems-hard-to-measure/

08 Dec 2022 · 22 minutes
Why Would AI "Aim" To Defeat Humanity?

Today's AI development methods risk training AIs to be deceptive, manipulative, and ambitious. This might not be easy to fix…

29 Nov 2022 · 46 minutes
The Track Record of Futurists Seems ... Fine

We scored mid-20th-century sci-fi writers on nonfiction predictions. They weren't great, but weren't terrible either. Maybe doing futurism works fine.
https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/

30 Jun 2022 · 21 minutes