The Nonlinear Library: LessWrong Daily

By The Nonlinear Fund



Category: Education

Subscribers: 0
Reviews: 0
Episodes: 81

Description

The Nonlinear Library lets you listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

Episode (Date)

LW - Can I take ducks home from the park? by dynomight (Sep 14, 2023)
LW - Highlights: Wentworth, Shah, and Murphy on "Retargeting the Search" by RobertM (Sep 14, 2023)
LW - UDT shows that decision theory is more puzzling than ever by Wei Dai (Sep 13, 2023)
LW - PSA: The community is in Berkeley/Oakland, not "the Bay Area" by maia (Sep 11, 2023)
LW - US presidents discuss AI alignment agendas by TurnTrout (Sep 09, 2023)
LW - Sum-threshold attacks by TsviBT (Sep 08, 2023)
LW - Sharing Information About Nonlinear by Ben Pace (Sep 07, 2023)
LW - Find Hot French Food Near Me: A Follow-up by aphyer (Sep 06, 2023)
LW - Text Posts from the Kids Group: 2023 I by jefftk (Sep 05, 2023)
LW - Defunding My Mistake by ymeskhout (Sep 04, 2023)
LW - The goal of physics by Jim Pivarski (Sep 03, 2023)
LW - The smallest possible button by Neil (Sep 02, 2023)
LW - A Golden Age of Building? Excerpts and lessons from Empire State, Pentagon, Skunk Works and SpaceX by jacobjacob (Sep 01, 2023)
LW - Responses to apparent rationalist confusions about game / decision theory by Anthony DiGiovanni (Aug 31, 2023)
LW - Biosecurity Culture, Computer Security Culture by jefftk (Aug 30, 2023)
LW - Introducing the Center for AI Policy (& we're hiring!) by Thomas Larsen (Aug 28, 2023)
LW - Dear Self; we need to talk about ambition by Elizabeth (Aug 28, 2023)
LW - Aumann-agreement is common by tailcalled (Aug 27, 2023)
LW - Digital brains beat biological ones because diffusion is too slow by GeneSmith (Aug 26, 2023)
LW - Assume Bad Faith by Zack M Davis (Aug 25, 2023)
LW - The Low-Hanging Fruit Prior and sloped valleys in the loss landscape by Dmitry Vaintrob (Aug 24, 2023)
LW - A Theory of Laughter by Steven Byrnes (Aug 23, 2023)
LW - Large Language Models will be Great for Censorship by Ethan Edwards (Aug 22, 2023)
LW - Steven Wolfram on AI Alignment by Bill Benzon (Aug 21, 2023)
LW - AI Forecasting: Two Years In by jsteinhardt (Aug 20, 2023)
LW - The U.S. is mildly destabilizing by lc (Aug 18, 2023)
LW - 6 non-obvious mental health issues specific to AI safety by Igor Ivanov (Aug 18, 2023)
LW - Book Launch: "The Carving of Reality," Best of LessWrong vol. III by Raemon (Aug 17, 2023)
LW - Ten Thousand Years of Solitude by agp (Aug 16, 2023)
LW - Decomposing independent generalizations in neural networks via Hessian analysis by Dmitry Vaintrob (Aug 14, 2023)
LW - We Should Prepare for a Larger Representation of Academia in AI Safety by Leon Lang (Aug 13, 2023)
LW - [Linkpost] Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks by Bogdan Ionut Cirstea (Aug 13, 2023)
LW - Biological Anchors: The Trick that Might or Might Not Work by Scott Alexander (Aug 12, 2023)
LW - AI #24: Week of the Podcast by Zvi (Aug 11, 2023)
LW - Inflection.ai is a major AGI lab by nikola (Aug 09, 2023)
LW - A plea for more funding shortfall transparency by porby (Aug 08, 2023)
LW - Feedbackloop-first Rationality by Raemon (Aug 07, 2023)
LW - Stomach Ulcers and Dental Cavities by Metacelsus (Aug 06, 2023)
LW - Private notes on LW? by Raemon (Aug 04, 2023)
LW - Password-locked models: a stress case for capabilities evaluation by Fabien Roger (Aug 03, 2023)
LW - My current LK99 questions by Eliezer Yudkowsky (Aug 01, 2023)
LW - Apollo Neuro Results by Elizabeth (Jul 30, 2023)
LW - UK Foundation Model Task Force - Expression of Interest by ojorgensen (Jun 18, 2023)
LW - Jaan Tallinn's 2022 Philanthropy Overview by jaan (May 14, 2023)
LW - Talking publicly about AI risk by Jan Kulveit (Apr 21, 2023)
LW - Financial Times: We must slow down the race to God-like AI by trevor (Apr 14, 2023)
LW - On AutoGPT by Zvi (Apr 13, 2023)
LW - Abstracts should be either Actually Short™, or broken into paragraphs by Raemon (Mar 24, 2023)
LW - We have to Upgrade by Jed McCaleb (Mar 23, 2023)
LW - You Don't Exist, Duncan by Duncan Sabien (Feb 02, 2023)
LW - Inner Misalignment in "Simulator" LLMs by Adam Scherlis (Jan 31, 2023)
LW - On not getting contaminated by the wrong obesity ideas by Natália Coelho Mendonça (Jan 28, 2023)
LW - Basics of Rationalist Discourse by Duncan Sabien (Jan 27, 2023)
LW - Thoughts on the impact of RLHF research by paulfchristiano (Jan 25, 2023)
LW - Large language models learn to represent the world by gjm (Jan 22, 2023)
LW - Transcript of Sam Altman's interview touching on AI safety by Andy McKenzie (Jan 20, 2023)
LW - Book Review: Worlds of Flow by remember (Jan 17, 2023)
LW - How does GPT-3 spend its 175B parameters? by Robert AIZI (Jan 15, 2023)
LW - How we could stumble into AI catastrophe by HoldenKarnofsky (Jan 13, 2023)
LW - A Year of AI Increasing AI Progress by ThomasW (Dec 30, 2022)
LW - Things that can kill you quickly: What everyone should know about first aid by jasoncrawford (Dec 27, 2022)
LW - It's time to worry about online privacy again by Malmesbury (Dec 26, 2022)
LW - Shared reality: a key driver of human behavior by kdbscott (Dec 24, 2022)
LW - On sincerity by Joe Carlsmith (Dec 23, 2022)
LW - Sazen by Duncan Sabien (Dec 21, 2022)
LW - How to Convince my Son that Drugs are Bad by concerned dad (Dec 17, 2022)
LW - How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme by Collin (Dec 15, 2022)
LW - Trying to disambiguate different questions about whether RLHF is “good” by Buck (Dec 14, 2022)
LW - AI alignment is distinct from its near-term applications by paulfchristiano (Dec 13, 2022)
LW - The Plan - 2022 Update by johnswentworth (Dec 01, 2022)
LW - Be less scared of overconfidence by benkuhn (Nov 30, 2022)
LW - Geometric Rationality is Not VNM Rational by Scott Garrabrant (Nov 27, 2022)
LW - Respecting your Local Preferences by Scott Garrabrant (Nov 26, 2022)
LW - Planes are still decades away from displacing most bird jobs by guzey (Nov 25, 2022)
LW - Tyranny of the Epistemic Majority by Scott Garrabrant (Nov 22, 2022)
LW - Career Scouting: Dentistry by koratkar (Nov 21, 2022)
LW - When should we be surprised that an invention took “so long”? by jasoncrawford (Nov 19, 2022)
LW - Announcing the Progress Forum by jasoncrawford (Nov 17, 2022)
LW - Will we run out of ML data? Evidence from projecting dataset size trends by Pablo Villalobos (Nov 14, 2022)