Amateur read-throughs of blog posts on Cold-Takes.com, for those who prefer listening to reading. Available on Apple Podcasts, Spotify, Google Podcasts, and anywhere else you get podcasts.
Major AI companies can increase or reduce global catastrophic risks. https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/
People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well.
For people who want to help improve our prospects for navigating transformative AI, and have an audience (even a small one).
Hypothetical stories where the world tries, but fails, to avert a global disaster. https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe
An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI. https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it.
A few ways we might get very powerful AI systems to be safe. https://www.cold-takes.com/high-level-hopes-for-ai-alignment/
Four analogies for why "We don't see any misbehavior by this AI" isn't enough. https://www.cold-takes.com/ai-safety-seems-hard-to-measure/
Today's AI development methods risk training AIs to be deceptive, manipulative, and ambitious. This might not be easy to fix.
We scored mid-20th-century sci-fi writers on nonfiction predictions. They weren't great, but weren't terrible either. Maybe doing futurism works fine. https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/