Future Matters
Content provided by Matthew van der Merwe and Pablo Stafforini. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Matthew van der Merwe and Pablo Stafforini or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe & Pablo Stafforini.
9 episodes
All episodes

#8: Bing Chat, AI labs on safety, and pausing Future Matters (41:48)
Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini . Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack , read on the EA Forum and follow on Twitter . Future Matters is also available in Spanish . 00:00 Welcome to Future Matters. 00:44 A message to our readers. 01:09 All things Bing. 05:27 Summaries. 14:20 News. 16:10 Opportunities. 17:19 Audio & video. 18:16 Newsletters. 18:50 Conversation with Tom Davidson. 19:13 The importance of understanding and forecasting AI takeoff dynamics. 21:55 Start and end points of AI takeoff. 24:25 Distinction between capabilities takeoff and impact takeoff. 25:47 The ‘compute-centric framework’ for AI forecasting. 27:12 How the compute centric assumption could be wrong. 29:26 The main lines of evidence informing estimates of the effective FLOP gap. 34:23 The main drivers of the shortened timelines in this analysis. 36:52 The idea that we'll be "swimming in runtime compute" by the time we’re training human-level AI systems. 37:28 Is the ratio between the compute required for model training vs. model inference relatively stable? 40:37 Improving estimates of AI takeoffs.…

Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini . Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack , read on the EA Forum and follow on Twitter . Future Matters is also available in Spanish . 00:00 Welcome to Future Matters. 00:57 Davidson — What a compute-centric framework says about AI takeoff speeds. 02:19 Chow, Halperin & Mazlish — AGI and the EMH. 02:58 Hatfield-Dodds — Concrete reasons for hope about AI. 03:37 Karnofsky — Transformative AI issues (not just misalignment). 04:08 Vaintrob — Beware safety-washing. 04:45 Karnofsky — How we could stumble into AI catastrophe. 05:21 Liang & Manheim — Managing the transition to widespread metagenomic monitoring. 05:51 Crawford — Technological stagnation: why I came around. 06:38 Karnofsky — Spreading messages to help with the most important century. 07:16 Wynroe Atkinson & Sevilla — Literature review of transformative artificial intelligence timelines. 07:50 Yagudin, Mann & Sempere — Update to Samotsvety AGI timelines. 08:15 Dourado — Heretical thoughts on AI. 08:43 Browning & Veit — Longtermism and animals. 09:04 One-line summaries. 10:28 News. 14:13 Conversation with Lukas Finnveden. 14:37 Could you clarify what you mean by AGI and lock-in? 16:36 What are the five claims one could make about the long run trajectory of intelligent life? 18:26 What are the three claims about lock-in, conditional on the arrival of AGI? 20:21 Could lock-in still happen without whole brain emulation? 21:32 Could you explain why the form of alignment required for lock-in would be easier to solve? 23:12 Could you elaborate on the stability of the postulated long-lasting institutions and on potential threats? 26:02 Do you have any thoughts on the desirability of long-term lock-in? 28:24 What’s the story behind this report?…

#6: FTX collapse, value lock-in, and counterarguments to AI x-risk (37:47)
Future Matters is a newsletter about longtermism by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish. 00:00 Welcome to Future Matters. 01:05 A message to our readers. 01:54 Finnveden, Riedel & Shulman — Artificial general intelligence and lock-in. 02:33 Grace — Counterarguments to the basic AI x-risk case. 03:17 Grace — Let's think about slowing down AI. 04:18 Piper — Review of What We Owe the Future. 05:04 Clare & Martin — How bad could a war get? 05:26 Rodríguez — What is the likelihood that civilizational collapse would cause technological stagnation? 06:28 Ord — What kind of institution is needed for existential security? 07:00 Ezell — A lunar backup record of humanity. 07:37 Tegmark — Why I think there's a one-in-six chance of an imminent global nuclear war. 08:31 Hobbhahn — The next decades might be wild. 08:54 Karnofsky — Why would AI "aim" to defeat humanity? 09:44 Karnofsky — High-level hopes for AI alignment. 10:27 Karnofsky — AI safety seems hard to measure. 11:10 Karnofsky — Racing through a minefield. 12:07 Barak & Edelman — AI will change the world, but won't take it over by playing "3-dimensional chess". 12:53 Our World in Data — New page on artificial intelligence. 14:06 Luu — Futurist prediction methods and accuracy. 14:38 Kenton et al. — Clarifying AI x-risk. 15:39 Wyg — A theologian's response to anthropogenic existential risk. 16:12 Wilkinson — The unexpected value of the future. 16:38 Aaronson — Talk on AI safety. 17:20 Tarsney & Wilkinson — Longtermism in an infinite world. 18:13 One-line summaries. 25:01 News. 28:29 Conversation with Katja Grace. 28:42 Could you walk us through the basic case for existential risk from AI? 29:42 What are the most important weak points in the argument? 30:37 Comparison between misaligned AI and corporations. 32:07 How do you think people in the AI safety community are thinking about this basic case wrong? 33:23 If these arguments were supplemented with clearer claims, does that rescue some of the plausibility? 34:30 Does the disagreement about the basic intuitive case for AI risk undermine the case itself? 35:34 Could you describe how your views on AI risk have changed over time? 36:14 Could you quantify your credence in the probability of existential catastrophe from AI? 36:52 When you reached that number, did it surprise you?…

#5: supervolcanoes, AI takeover, and What We Owe the Future (31:26)
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini . Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack , read on the EA Forum and follow on Twitter . 00:00 Welcome to Future Matters. 01:08 MacAskill — What We Owe the Future. 01:34 Lifland — Samotsvety's AI risk forecasts. 02:11 Halstead — Climate Change and Longtermism. 02:43 Good Judgment — Long-term risks and climate change. 02:54 Thorstad — Existential risk pessimism and the time of perils. 03:32 Hamilton — Space and existential risk. 04:07 Cassidy & Mani — Huge volcanic eruptions. 04:45 Boyd & Wilson — Island refuges for surviving nuclear winter and other abrupt sun-reducing catastrophes. 05:28 Hilton — Preventing an AI-related catastrophe. 06:13 Lewis — Most small probabilities aren't Pascalian. 07:04 Yglesias — What's long-term about "longtermism”? 07:33 Lifland — Prioritizing x-risks may require caring about future people. 08:40 Karnofsky — AI strategy nearcasting. 09:11 Karnofsky — How might we align transformative AI if it's developed very soon? 09:51 Matthews — How effective altruism went from a niche movement to a billion-dollar force. 10:28 News. 14:28 Conversation with Ajeya Cotra. 15:02 What do you mean by human feedback on diverse tasks (HFDT) and what made you focus on it? 18:08 Could you walk us through the three assumptions you make about how this scenario plays out? 20:49 What are the key properties of the model you call Alex? 22:55 What do you mean by “playing the training game”, and why would Alex behave in that way? 24:34 Can you describe how deploying Alex would result in a loss of human control? 29:40 Can you talk about the sorts of specific countermeasures to prevent takeover?…

#4: AI timelines, AGI risk, and existential risk from climate change (31:13)
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini . Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack , read on the EA Forum and follow on Twitter . 00:00 Welcome to Future Matters 01:11 Steinhardt — AI forecasting: one year in 01:52 Davidson — Social returns to productivity growth 02:26 Brundage — Why AGI timeline research/discourse might be overrated 03:03 Cotra — Two-year update on my personal AI timelines 03:50 Grace — What do ML researchers think about AI in 2022? 04:43 Leike — On the windfall clause 05:35 Cotra — Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover 06:32 Maas — Introduction to strategic perspectives on long-term AI governance 06:52 Hadshar — How moral progress happens: the decline of footbinding as a case study 07:35 Trötzmüller — Why EAs are skeptical about AI safety 08:08 Schubert — Moral circle expansion isn’t the key value change we need 08:52 Šimčikas — Wild animal welfare in the far future 09:51 Heikkinen — Strong longtermism and the challenge from anti-aggregative moral views 10:28 Rational Animations — Video on Karnofsky's Most important century 11:23 Other research 12:47 News 15:00 Conversation with John Halstead 15:33 What level of emissions should we reasonably expect over the coming decades? 18:11 What do those emissions imply for warming? 20:52 How worried should we be about the risk of climate change from a longtermist perspective? 26:53 What is the probability of an existential catastrophe due to climate change? 27:06 Do you think EAs should fund modelling work of tail risks from climate change? 28:45 What would be the best use of funds?…

#3: digital sentience, AGI ruin, and forecasting track records (34:05)
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. 00:00 Welcome to Future Matters 01:11 Long — Lots of links on LaMDA 01:48 Lovely — Do we need a better understanding of 'progress'? 02:11 Base — Things usually end slowly 02:47 Yudkowsky — AGI ruin: a list of lethalities 03:38 Christiano — Where I agree and disagree with Eliezer 04:31 Garfinkel — On deference and Yudkowsky's AI risk estimates 05:13 Karnofsky — The track record of futurists seems … fine 06:08 Aaronson — Joining OpenAI to work on AI safety 06:52 Shiller — The importance of getting digital consciousness right 07:53 Pilz — German opinions on translations of "longtermism" 08:33 Karnofsky — AI could defeat all of us combined 09:36 Beckstead — Future Fund June 2022 update 11:02 News 14:45 Conversation with Robert Long 15:05 What artificial sentience is and why it's important 16:56 "The Big Question" and the assumptions on which it depends 19:30 How problems arising from AI agency and AI sentience compare in terms of importance, neglectedness, tractability 21:57 AI sentience and the alignment problem 24:01 The Blake Lemoine saga and the quality of the ensuing public discussion 26:29 The risks of AI sentience becoming lumped in with certain other views 27:55 How to deal with objections coming from different frameworks 28:50 The analogy between AI sentience and animal welfare 30:10 The probability of large language models like LaMDA and GPT-3 being sentient 32:41 Are verbal reports strong evidence for sentience?…

#2: Clueless skepticism, 'longtermist' as an identity, and nanotechnology strategy research (23:07)
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini . Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack , read on the EA Forum and follow on Twitter . 00:01 Welcome to Future Matters 01:25 Schubert — Against cluelessness 02:23 Carlsmith — Presentation on existential risk from power-seeking AI 03:45 Vaintrob — Against "longtermist" as an identity 04:30 Bostrom & Shulman — Propositions concerning digital minds and society 05:02 MacAskill — EA and the current funding situation 05:51 Beckstead — Some clarifications on the Future Fund's approach to grantmaking 06:46 Caviola, Morrisey & Lewis — Most students who would agree with EA ideas haven't heard of EA yet 07:32 Villalobos & Sevilla — Potatoes: A critical review 08:09 Ritchie — How we fixed the ozone layer 08:57 Snodin — Thoughts on nanotechnology strategy research 09:31 Cotton-Barratt — Against immortality 09:50 Smith & Sandbrink — Biosecurity in the age of open science 10:30 Cotton-Barratt — What do we want the world to look like in 10 years? 10:52 Hilton — Climate change: problem profile 11:30 Ligor & Matthews — Outer space and the veil of ignorance 12:21 News 14:46 Conversation with Ben Snodin…

#1: AI takeoff, longtermism vs. existential risk, and probability discounting (29:55)
The remedies for all our diseases will be discovered long after we are dead; and the world will be made a fit place to live in. It is to be hoped that those who live in those days will look back with sympathy to their known and unknown benefactors. — John Stuart Mill

Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

Research

Scott Alexander's "Long-termism" vs. "existential risk" worries that "longtermism" may be a worse brand (though not necessarily a worse philosophy) than "existential risk". It seems much easier to make someone concerned about transformative AI by noting that it might kill them and everyone else, than by pointing out its effects on people in the distant future. We think that Alexander raises a valid worry, although we aren't sure the worry favors the "existential risk" branding over the "longtermism" branding as much as he suggests: existential risks are, after all, defined as risks to humanity's long-term potential. Both of these concepts, in fact, attempt to capture the core idea that what ultimately matters is mostly located in the far future: existential risk uses the language of "potential" and emphasizes threats to it, whereas longtermism instead expresses the idea in terms of value and the duties it creates. Maybe the "existential risk" branding seems to address Alexander's worry better because it draws attention to the threats to this value, which are disproportionately (but not exclusively) located in the short-term, while the "longtermism" branding emphasizes instead the determinants of value, which are in the far future.

In General vs AI-specific explanations of existential risk neglect, Stefan Schubert asks why we systematically neglect existential risk. The standard story invokes general explanations, such as cognitive biases and coordination problems. But Schubert notes that people seem to have specific biases that cause them to underestimate AI risk, e.g. it sounds outlandish and counter-intuitive. If unaligned AI is the greatest source of existential risk in the near-term, then these AI-specific biases could explain most of our neglect.

Max Roser's The future is vast is a powerful new introduction to longtermism. His graphical representations do well to convey the scale of humanity's potential, and have made it onto the Wikipedia entry for longtermism.

Thomas Kwa's Effectiveness is a conjunction of multipliers makes the important observation that (1) a person's impact can be decomposed into a series of impact "multipliers" and that (2) these terms interact multiplicatively, rather than additively, with each other. For example, donating 80% instead of 10% multiplies impact by a factor of 8 and earning $1m/year instead of $250k/year multiplies impact by a factor of 4; but doing both of these things multiplies impact by a factor of 32. Kwa shows that many other common EA choices are best seen as multipliers of impact, and notes that multipliers related to judgment and ambition are especially important for longtermists.
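To make the multiplicative point concrete, here is a minimal illustrative sketch in Python; the two factor values are the examples from the summary above, and the "additive" figure is only a contrast case introduced here, not a claim from Kwa's post.

```python
from math import prod

# Impact multipliers from the two examples in the summary above:
# donating 80% of income instead of 10% (x8), and earning $1m/year
# instead of $250k/year (x4). Kwa's observation is that such factors
# compose multiplicatively, not additively.
multipliers = {
    "donate 80% instead of 10%": 80 / 10,                         # x8
    "earn $1m/year instead of $250k/year": 1_000_000 / 250_000,   # x4
}

# Contrast case (hypothetical): what the total would be if the gains merely added.
additive_total = 1 + sum(m - 1 for m in multipliers.values())     # 1 + 7 + 3 = 11
multiplicative_total = prod(multipliers.values())                 # 8 * 4 = 32

print(f"If the gains merely added: x{additive_total:.0f}")
print(f"Because they multiply:     x{multiplicative_total:.0f}")
```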
The first installment in a series on "learning from crisis", Jan Kulveit's Experimental longtermism: theory needs data (co-written with Gavin Leech) recounts the author's motivation to launch Epidemic Forecasting, a modelling and forecasting platform that sought to present probabilistic data to decisionmakers and the general public. Kulveit realized that his "longtermist" models had relatively straightforward implications for the COVID pandemic, such that trying to apply them to this case (1) had the potential to make a direct, positive difference to the crisis and (2) afforded an opportunity to experimentally test those models. While the first of these effects had obvious appeal, Kulveit considers the second especially important from a longtermist perspective: attempts to think about the long-term future lack rapid feedback loops, and disciplines that aren't tightly anchored to empirical reality are much more likely to go astray. He concludes that longtermists should engage more often in this type of experimentation, and generally pay more attention to the longtermist value of information that "near-termist" projects can sometimes provide.

Rhys Lindmark's FTX Future Fund and Longtermism considers the significance of the Future Fund within the longtermist ecosystem by examining trends in EA funding over time. Interested readers should look at the charts in the original post for more details, but roughly it looks like Open Philanthropy has allocated about 20% of its budget to longtermist causes in recent years, accounting for about 80% of all longtermist grantmaking. On the assumption that Open Phil gives $200 million to longtermism in 2022, the Future Fund lower bound target of $100 million already positions it as the second-largest longtermist grantmaker, with roughly a 30% share (a rough check of this arithmetic is sketched below). Lindmark's analysis prompted us to create a Metaculus question on whether the Future Fund will give more than Open Philanthropy to longtermist causes in 2022. At the time of publication (22 April 2022), the community predicts that the Future Fund is 75% likely to outspend Open Philanthropy.

Holden Karnofsky's Debating myself on whether "extra lives lived" are as good as "deaths prevented" is an engaging imaginary dialogue between a proponent and an opponent of Total Utilitarianism. Karnofsky manages to cover many of the key debates in population ethics—including those surrounding the Intuition of Neutrality, the Procreation Asymmetry, the Repugnant and Very Repugnant Conclusions, and the impossibility of Theory X—in a highly accessible yet rigorous manner. Overall, this blog post struck us as one of the best popular, informal introductions to the topic currently available.

Matthew Barnett shares thoughts on the risks from SETI. People underestimate the risks from passive SETI—scanning for alien signals without transmitting anything. We should consider the possibility that alien civilizations broadcast messages designed to hijack or destroy their recipients. At a minimum, we should treat alien signals with as much caution as we would a strange email attachment. However, current protocols are to publicly release any confirmed alien messages, and no one seems to have given much thought to managing downside risk. Overall, Barnett estimates a 0.1–0.2% chance of extinction from SETI over the next 1,000 years. Now might be a good opportunity for longtermists to figure out, and advocate for, some more sensible policies.
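Here is the rough arithmetic check promised in the Lindmark item above: a back-of-the-envelope sketch using only that post's stated assumptions (the dollar figures and shares are the summary's assumptions, not independent data).

```python
# Back-of-the-envelope check of the "~30% share" figure from the Lindmark
# summary above, using only that post's assumptions (not independent data).
open_phil_longtermist = 200e6      # assumed Open Phil longtermist giving in 2022
open_phil_field_share = 0.80       # Open Phil said to account for ~80% of longtermist grantmaking
future_fund_lower_bound = 100e6    # Future Fund's stated lower-bound target

field_without_ff = open_phil_longtermist / open_phil_field_share   # ~$250m
field_with_ff = field_without_ff + future_fund_lower_bound         # ~$350m
future_fund_share = future_fund_lower_bound / field_with_ff

print(f"Future Fund share of longtermist grantmaking: {future_fund_share:.0%}")  # ~29%
```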
Scott Alexander provides an epic commentary on the long-running debate about AI Takeoff Speeds. Paul Christiano thinks it more likely that improvements in AI capabilities, and the ensuing transformative impacts on the world, will happen gradually. Eliezer Yudkowsky thinks there will be a sudden, sharp jump in capabilities, around the point we build AI with human-level intelligence. Alexander presents the two perspectives with more clarity than their main proponents, and isolates some of the core disagreements. It's the best summary of the takeoff debate we've come across.

Buck Shlegeris points out that takeoff speeds have a huge effect on what it means to work on AI x-risk. In fast takeoff worlds, AI risk will never be much more widely accepted than it is today, because everything will look pretty normal until we reach AGI. The majority of AI alignment work that is done before this point will be from the sorts of existential risk–motivated people working on alignment now. In slow takeoff worlds, by contrast, AI researchers will encounter and tackle many aspects of the alignment problem "in miniature", before AI is powerful enough to pose an existential risk. So a large fraction of alignment work will be done by researchers motivated by normal incentives, because making AI systems that behave well is good for business. In these worlds, existential risk–motivated researchers today need to be strategic, and identify and prioritise aspects of alignment that won't be solved "by default" in the course of AI progress. In the comments, John Wentworth argues that there will be stronger incentives to conceal alignment problems than to solve them. Therefore, contra Shlegeris, he thinks AI risk will remain neglected even in slow takeoff worlds.

Linchuan Zhang's Potentially great ways forecasting can improve the longterm future identifies several different paths via which short-range forecasting can be useful from a longtermist perspective. These include (1) improving longtermist research by outsourcing research questions to skilled forecasters; (2) improving longtermist grantmaking by predicting how potential grants will be assessed by future evaluators; (3) improving longtermist outreach by making claims more legible to outsiders; and (4) improving the longtermist training and vetting pipeline by tracking forecasting performance in large-scale public forecasting tournaments. Zhang's companion post, Early-warning Forecasting Center: What it is, and why it'd be cool, proposes the creation of an organization whose goal is to make short-range forecasts on questions of high longtermist significance. A foremost use case is early warning for AI risks, biorisks, and other existential risks. Besides outlining the basic idea, Zhang discusses some associated questions, such as why the organization should focus on short- rather than long-range forecasting, why it should be a forecasting center rather than a prediction market, and how the center should be structured.

Dylan Matthews's The biggest funder of anti-nuclear war programs is taking its money away looks at the reasons prompting the MacArthur Foundation to announce its exit from grantmaking in nuclear security. (For reference: in 2018, the Foundation accounted for 45% of all philanthropic funding in the field.) The decision was partly based on the conclusions of what appears to be a flawed report by the consulting firm ORS Impact, which "repeatedly seemed to blame the MacArthur strategy for not overcoming structural forces that one foundation could never overcome". Fortunately, there are some hopeful developments in this space, as we report in the next section.

Matthews also examines Congress's epic pandemic funding failure. Per one recent estimate, COVID-19 cost the US upwards of $10 trillion. The Biden administration proposed spending $65 billion to reduce the risk of future pandemics, including major investments in vaccine manufacturing capacity, therapeutics, and early-warning systems. Congress isn't keen, and is agreeing to a mere $2 billion of spending: better than nothing, but nowhere near enough to materially reduce pandemic risk.

Alene Anello's Who is protecting animals in the long-term future? describes a bizarre educational program, funded by the United States Department of Agriculture, that stimulates students to think about ways to raise chickens on Mars. Although factory farming doesn't strike us as particularly likely to persist for more than a few centuries, either on Earth or elsewhere in the universe, we do believe that other scenarios involving defenseless moral patients (including digital sentients) warrant serious longtermist concern.

Over the past few weeks, several posts on the EA Forum have raised various concerns regarding the recent influx of funding to the effective altruism community. We agree with Stefan Schubert that George Rosenfeld's Free-spending EA might be a big problem for optics and epistemics is the strongest of these critical articles. Rosenfeld's first objection ("optics") is that, realities aside, many people—including committed effective altruists—are starting to perceive lots of EA spending as not just wasteful, but also self-serving. Besides exposing the movement to damaging external criticism, this perception may repel proto-EAs and, over time, alter the composition of our community. Rosenfeld's second objection ("epistemics") is that, because one can now get plenty of money by becoming a group organizer or by participating in other EA activities, it has become more difficult to think critically about the movement. Rosenfeld concludes by sharing some suggestions on how to mitigate these problems.

News

Open Philanthropy has launched the Century Fellowship, offering generous support to early-career individuals doing longtermist-relevant work. Applications to join the 2022 cohort are open until the end of the year and will be assessed on a rolling basis.

The Centre for the Governance of AI is hiring an Operations Manager and Associate. Applications are open until May 15th.

William MacAskill's long-awaited book, What We Owe The Future, is available to pre-order. It will be released on August 16th in the United States and on September 1st in the United Kingdom.

The Cambridge Existential Risks Initiative published a collection of cause profiles to accompany their 2022 Summer Research Fellowship. It includes overviews of climate change, AI safety, nuclear risk, and meta, as well as other supplementary articles.

The 80,000 Hours Podcast released two relevant conversations: one with Joan Rohlfing on how to avoid catastrophic nuclear blunders, and one with Sam Bankman-Fried on taking a high-risk approach to entrepreneurship and altruism.
Upon learning that the MacArthur Foundation was leaving the field of nuclear security, Longview Philanthropy decided to launch its own nuclear security grantmaking program. Carl Robichaud—who until 2021 was Program Officer at the Carnegie Corporation, running the second-largest nuclear security grantmaking program—will be joining full-time next year. Provided that promising enough opportunities are found, Longview expects to make at least $10 million in grants—and this amount may grow substantially depending on what new opportunities they are able to identify. Longview is also hiring for a co-lead on the program. They are looking for applicants with a "strong understanding of the implications of longtermism" and you, dear reader of this newsletter, might be just the right candidate. Apply here.

Last month, we wrote about the Future Fund's project ideas competition. The awards have now been announced. Six submissions each received a prize of $5,000:

Infrastructure to support independent researchers
EA content translation service
A regulatory failsafe for catastrophic or existential biorisks
Datasets for AI alignment research
High-quality human data
Detailed stories about the future

A working group on civilizational refuges composed of Linchuan Zhang, Ajay Karpur and an anonymous collaborator is looking for a technically competent volunteer or short-term contractor to help them refine and sharpen their plans.

Rethink Priorities has a number of positions open in Operations and Research. Since the start of 2021, RP has grown from 15 to 40, and plans to have 60 staff by end of year.

Eli Lifland and Misha Yagudin awarded prizes to some particularly impactful forecasting writing:

Ryan Beck on whether genetic engineering will raise IQ by at least 10 points by 2050.
qassiov on whether synthetic biological weapons will infect at least 100 people by 2030.
FJehn on when carbon capture will cost less than $50 per ton.
rodeoflagellum on how many gene-edited babies will be born by 2030.

The Berlin Hub, an initiative inspired by the EA Hotel, plans to convert a full hotel or similar building into a co-living space later this year. Express your interest here.

Conversation with Petra Kosonen

Petra Kosonen is a doctoral candidate in philosophy at the University of Oxford. Her DPhil thesis, supervised by Andreas Mogensen and Teruji Thomas, focuses on population ethics and decision theory, especially issues surrounding probability fanaticism. Previously, she studied at the University of Glasgow and the University of Edinburgh. Later this year, she will be starting a postdoc at the newly launched Population Wellbeing Initiative, which aspires to be the world's leading centre for research on utilitarianism. She is also a Global Priorities Fellow at the Forethought Foundation and a participant of the FTX fellowship.

Future Matters: Some of your research focuses on what you call "probability discounting" and whether it undermines longtermism. Could you tell us what you mean by "probability discounting" and your motivation for looking at this?

Petra Kosonen: Probability discounting is the idea that we should ignore tiny probabilities in practical decision-making. Probability discounting has been proposed in response to cases that involve very small probabilities of huge payoffs, like Pascal's Mugging.
For those who're not familiar with this case, it goes like this: A stranger approaches you and promises to use magic that will give you a thousand quadrillion happy days in the seventh dimension if you pay him a small amount of money. Should you do that? Well, there is a very small, but non-zero, probability that the stranger is telling the truth. And if he is telling the truth, then the payoff is enormous. Provided that the payoff is sufficiently great, the offer has positive expected utility, or at least that's the idea. Also, the mugger points out that if you have a non-zero credence in the mugger being able to deliver any finite amount of utility, then the mugger can always increase the payoff until the offer has positive expected utility—at least if your utilities are unbounded.

Probability discounting avoids the counterintuitive implication that you should pay the mugger by discounting the tiny probability of the mugger telling the truth down to zero. More generally, probability discounting is one way to avoid fanaticism, a term used to refer to the philosophical view that for every bad outcome, there is a tiny probability of a horrible outcome that is worse, and that for every good outcome, there is a tiny probability of a great payoff that is better. Other possible ways of avoiding fanaticism are, for example, having bounded utilities or conditionalising on knowledge before maximising expected utility.

Future Matters: Within probability discounting, you distinguish between "naive discounting" and other forms of discounting. What do you mean by "naive discounting"?

Petra Kosonen: Naive discounting is one of the simplest ways of cashing out probability discounting. On this view, there is some threshold probability such that outcomes whose probabilities are below this threshold are ignored by conditionalising on not obtaining these outcomes.

One obvious problem with naive discounting is where this threshold is located. When are probabilities small enough to be discounted? Some have suggested possible thresholds. For example, Buffon suggested that the threshold should be one-in-ten-thousand. And Condorcet gave an amusingly specific threshold: 1 in 144,768. Buffon chose his threshold because it was the probability of a 56-year-old man dying in one day—an outcome reasonable people usually ignore. Condorcet had a similar justification. More recently, Monton has suggested a threshold of 1 in 2 quadrillion—significantly lower than the thresholds given by the historical thinkers. Monton thinks that the threshold is subjective within reason: there is no single threshold for everybody.

Another problem for naive discounting comes from individuating outcomes. The problem is that if we individuate outcomes very finely by giving a lot of information about them, then all outcomes will have probabilities that are below the threshold. One possible solution is to individuate outcomes by utilities. The idea is that outcomes are considered "the same outcome" if their utilities are the same. This doesn't fully solve the problem though. In some cases, all outcomes might have zero probability. Imagine for example an ideally shaped dart that is thrown on a dartboard. The probability that it hits a particular point may be zero.

Lastly, one problem for naive discounting is that it violates dominance. Imagine a lottery that gives you a tiny probability of some prize and otherwise nothing, and compare this to a lottery that surely gives you nothing. The former lottery dominates the latter one, but naive discounting says they are equally good.

Future Matters: Are there forms of probability discounting that avoid the problems of naive discounting?

Petra Kosonen: One could solve the previous dominance violation by considering very-small-probability outcomes as tie-breakers in cases where the prospects are otherwise equally good. This is not enough to avoid violating dominance though, because the resulting view still violates dominance in a more complicated case.

There are also many other ways of cashing out probability discounting. Naive discounting ignores very-small-probability outcomes. Instead, one could ignore states of the world that have tiny probabilities of occurring. The different versions of this kind of "state discounting" have other problems, though. For example, they give cyclic preference orderings or violate dominance principles in other ways.

There is also tail discounting. On this view, you should first order all the possible outcomes of a prospect in terms of betterness. Then you should ignore the edges, that is, the very best and the very worst outcomes. Tail discounting solves the problems with individuating outcomes and dominance violations. But it also has one big problem: it can be money-pumped. This means that someone with this view would end up paying for something they could have kept for free—which makes it less plausible as a theory of instrumental rationality.

Future Matters: Why do you think that probability discounting, in any of its forms, does not undermine longtermism?

Petra Kosonen: In one of my papers I go through three arguments against longtermism from discounting small probabilities. I focus on existential risk mitigation as a longtermist intervention. The first argument is a very obvious one: that the probabilities of existential risks are so tiny that we should just ignore existential risks. This is the "Low Risks Argument". But, it does not seem to be the case that the risks are so small. Even in the next 100 years many existential risks are estimated to be above any reasonable discounting thresholds. For example, Toby Ord has estimated that net existential risk in the next 100 years is 1/6. The British astronomer Sir Martin Rees has an even more pessimistic view. He thinks that the odds are no better than fifty-fifty that our present civilisation survives to the end of the century. And the risks from specific sources also seem to be relatively high. Some estimates Ord gives include, for example, 1 in 30 risk from engineered pandemics and 1 in 10 risk from unaligned artificial intelligence. (See Michael Aird's database for many more estimates.)

But now we come to the problem of how outcomes should be individuated. Although the risks in the next 100 years are above any reasonable discounting thresholds, the probability of an existential catastrophe due to a pandemic on the 4th of January 2055 at 13:00-14:00 might be tiny. Similarly the risk might be tiny at 14:00-15:00, and so on. Of course ignoring a high net existential risk on the basis of individuating outcomes this finely would be mad. But it is difficult to see how naive discounting can avoid this implication. Even if we individuate outcomes by utilities, we might end up individuating outcomes too finely because every second that passes could add a little bit of utility to the world.

I mentioned earlier that tail discounting can solve the problem of outcome individuation. But what does it say about existential risk mitigation?
Consider one type of existential risk: human extinction. Tail discounting probably wouldn't tell us to ignore the possibility of a near-term human extinction even if its probability was tiny. Recall that tail discounting only ignores the very best and the very worst outcomes, provided that their probabilities are tiny. As long as there are sufficiently high probabilities of both better and worse outcomes than human extinction, human extinction will be a "normal" outcome in terms of value. So we should not ignore it on this view.

The second argument against longtermism that I discuss in the paper concerns the size of the future. For longtermism to be true, it also needs to be true that there is in expectation a great number of individuals in the far future—otherwise it would not be the case that relatively small changes in the probability of an existential catastrophe have great expected value. The "small future argument" states that once we ignore very-small-probability scenarios, such as space settlement and digital minds, the expected number of individuals in the far future is too small for longtermism to be true.

Again, consider tail discounting. Space settlement and digital minds might be the kind of unlikely best-case scenarios that tail discounting ignores. So is the small future argument right if you accept tail discounting? No, it does not seem so. Even if you ignore these scenarios, in expectation there seem to be enough individuals in the far future, at least if we take the far future to start in 100 years. This is true even on the relatively conservative numbers that Hilary Greaves and Will MacAskill use in their paper "The case for strong longtermism".

The final argument against longtermism that I discuss in the paper states that the probability of making a difference to whether an existential catastrophe occurs or not is so small that we should ignore it. This is the "no difference argument". Earlier I mentioned the idea of state discounting on which you should ignore states that are associated with tiny probabilities. State discounting captures the idea of the no difference argument naturally: there is one state in which an existential catastrophe happens no matter what you do, one state in which an existential catastrophe does not happen no matter what you do, and a third state in which your actions can make a difference to whether the catastrophe happens or not. And, if the third state is associated with a tiny probability, then you should ignore it.

I think the no difference argument is the strongest of the three arguments against longtermism that I discuss. Plausibly, at least for many of us, the probability of making a difference is indeed small, possibly less than some reasonable discounting thresholds. But there are some responses to this argument. First, as I mentioned earlier, the different versions of state discounting face problems like cyclic preference orderings and dominance violations. So we might want to reject state discounting for these reasons.

Secondly, state discounting faces collective action type problems. For example, imagine an asteroid heading towards the Earth. There are multiple asteroid defence systems and (unrealistically) each has a tiny probability of hitting the asteroid and preventing a catastrophe. But the probability of preventing a catastrophe is high if enough of them try. Suppose that attempting to stop the asteroid involves some small cost.
State discounting would then recommend against attempting to stop the asteroid, because the probability of making a difference is tiny for each individual. Consequently, the asteroid will almost certainly hit the Earth. To solve these kinds of cases, state discounting would need to somehow take into account the choices other people face. But if it does so, then it no longer undermines longtermism. This is because plausibly we collectively, for example the Effective Altruism movement, can make a non-negligible difference to whether an existential catastrophe happens or not.

So, my response to the no difference argument is that if there is a solution to the collective action problems, then this solution will also block the argument against longtermism. But if there is no solution to these problems, then state discounting is significantly less plausible as a theory. Either way, we don't need to worry about the no difference argument. To sum up, my overall conclusion is that discounting small probabilities doesn't undermine longtermism.

Future Matters: Thanks, Petra!

For helpful feedback on our first issue, we thank Sawyer Bernath, Ryan Carey, Evelyn Ciara, Alex Lawsen, Howie Lempel, Garrison Lovely and David Mears. We owe a special debt of gratitude to Fin Moorhouse for invaluable technical advice and assistance.…
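As an illustration of the "naive discounting" rule described in the conversation with Petra Kosonen above, here is a minimal sketch in Python: outcomes with probability below a threshold are dropped and the rest renormalized before taking an expectation. The threshold shown is Buffon's one-in-ten-thousand; the payoff numbers for the Pascal's Mugging example are made up for illustration, not figures from Kosonen's papers.

```python
# Naive probability discounting, as described in the conversation above:
# drop outcomes whose probability falls below a threshold, conditionalize
# on not obtaining them (renormalize), then take the expectation.
def discounted_expected_utility(outcomes, threshold=1e-4):
    """outcomes: list of (probability, utility) pairs whose probabilities sum to 1."""
    kept = [(p, u) for p, u in outcomes if p >= threshold]
    total = sum(p for p, _ in kept)
    if total == 0:
        raise ValueError("every outcome fell below the threshold")
    return sum((p / total) * u for p, u in kept)

# Pascal's Mugging, stylized with made-up numbers: a one-in-a-billion chance
# of an astronomically large payoff versus the small certain cost of paying.
mugging = [(1e-9, 1e18), (1 - 1e-9, -1.0)]

print(sum(p * u for p, u in mugging))        # ~1e9: plain expected utility says pay
print(discounted_expected_utility(mugging))  # ~-1.0: the discounted rule says ignore the offer
```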
We think our civilization near its meridian, but we are yet only at the cock-crowing and the morning star. — Ralph Waldo Emerson

Welcome to Future Matters, a newsletter about longtermism brought to you by Matthew van der Merwe & Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. Future Matters is crossposted to the Effective Altruism Forum and available as a podcast.

Research

We are typically confident that some things are conscious (humans), and that some things are not (rocks); other things we're very unsure about (insects). In this post, Amanda Askell shares her views about AI consciousness. It seems unlikely that current AI systems are conscious, but they are improving and there's no great reason to think we will never create conscious AI systems. This matters because consciousness is morally-relevant, e.g. we tend to think that if something is conscious, we shouldn't harm it for no good reason. Since it's much worse to mistakenly deny something moral status than to mistakenly attribute it, we should take a cautious approach when it comes to AI: if we ever have reason to believe some AI system is conscious, we should start to treat it as a moral patient. This makes it important and urgent that we develop tools and techniques to assess whether AI systems are conscious, and related questions, e.g. whether they are suffering.

The leadership of the Global Catastrophic Risk Institute issued a Statement on the Russian invasion of Ukraine. The authors consider the effects of the invasion on (1) risks of nuclear war and (2) other global catastrophic risks. They argue that the conflict increases the risk of both intentional and inadvertent nuclear war, and that it may increase other risks primarily via its consequences on climate change, on China, and on international relations.

Earlier this year, Hunga Tonga-Hunga Ha'apai—a submarine volcano in the South Pacific—produced what appears to be the largest volcanic eruption of the last 30 years. In What can we learn from a short preview of a super-eruption and what are some tractable ways of mitigating, Mike Cassidy and Lara Mani point out that this event and its cascading impacts provide a glimpse into the possible effects of a much larger eruption, which could be comparable in intensity but much longer in duration. The main lessons the authors draw are that humanity was unprepared for the eruption and that its remote location dramatically minimized its impacts. To better prepare for these risks, the authors propose better identifying the volcanoes capable of large enough eruptions and the regions most affected by them; building resilience by investigating the role that technology could play in disaster response and by enhancing community-led resilience mechanisms; and mitigating the risks by research on removal of aerosols from large explosive eruptions and on ways to reduce the explosivity of eruptions by fracking or drilling.

The second part in a three-part series on great power conflict, Stephen Clare's How likely is World War III? attempts to estimate the probability of great power conflict this century as well as its severity, should it occur. Tentatively, Clare assigns a 45% chance to a confrontation between great powers by 2100, an 8% chance of a war much worse than World War II, and a 1% chance of a war causing human extinction.
Note that some of the key sources in Clare's analysis rely on the Correlates of War dataset, which is less informative about long-run trends in global conflict than is generally assumed; see Ben Garfinkel's comment for discussion.

Holden Karnofsky emails Tyler Cowen to make a very concise case that there's at least a 1 in 3 chance we develop transformative AI this century (summarizing his earlier blogpost). There are some very different approaches to AI forecasting all pointing to a significant probability of TAI this century: forecasting based on 'biological anchors'; forecasts by AI experts and Metaculus; analyses of long-run economic growth; and very outside-view arguments. On the other hand, there are few, if any, arguments that we should confidently expect TAI much later.

Malevolent nonstate actors with access to advanced technology can increase the probability of an existential catastrophe either by directly posing a risk of collapse or extinction, or indirectly, by creating instability and thereby undermining humanity's capacity to handle other risks. The magnitude of the risk posed by these actors is a function of both their ability and their willingness to cause harm. In How big are risks from non-state actors? Base rates for terrorist attacks, Rose Hadshar attempts to inform estimates of the second of these two factors by examining base rates of terrorist attacks. She finds that attacks occur at a rate of one per 700,000 people worldwide and one per 3,000,000 people in the West. Most attacks are not committed with omnicidal intent.

Population dynamics are an important force shaping humanity over decades and centuries. In Retrospective on Shall the Religious Inherit The Earth, Isabel Juniewicz evaluates predictions in a 2010 book which claimed that: (1) within religions, fundamentalist growth will outpace moderate growth; (2) within regions, fundamentalist growth will outpace growth among non-religious and moderates; (3) globally, religious growth will outpace non-religious growth. She finds strongest evidence for (1). Evidence for (2) is much weaker—in the US, the non-religious share of the population has increased over the last decade. Secularization and deconversion are more than counterbalancing the fertility advantage of religious groups, and the fertility gap between religious and non-religious populations has been narrowing. Haredi Jews are one notable exception, and continue to grow as a share of the population in the US and Israel. (3) seems true in the medium-term, but due primarily to dynamics overlooked in the book: population decline in (more irreligious) East Asia, and population growth in (increasingly religious) Africa. Fertility rates in predominantly Muslim countries, on which the book's argument for (3) is largely based, have been declining substantially, to near-replacement levels in many cases. For the most part, religious populations are experiencing declining fertility in parallel with secular groups. Overall it looks like the most significant trend in the coming decades and centuries will not be increasing global religiosity, but the continued convergence of global fertility rates to below-replacement levels.

We're appalled by many of the attitudes and practices of past generations. How can we avoid making the same sort of mistakes?
In Future-proof ethics (EA Forum discussion), Holden Karnofsky suggests three features of ethical systems capable of meeting this challenge: (1) Systematization: rather than relying on bespoke, intuition-based judgements, we should look for a small set of principles we are very confident in, and derive everything else from these; (2) Thin utilitarianism: our ethical system should be based on the needs and wants of others rather than our personal preferences, and therefore requires a system of consistent ethical weights for comparing any two harms and benefits; and (3) Sentientism: the key ingredient in determining how much to weigh someone's interests should be the extent to which they have the capacity for pleasure and suffering. Combining these elements leads to the sort of ethical stance that has a good track record of being 'ahead of the curve.'

Progress on shaping long-term prospects for humanity is to a significant degree constrained by insufficient high-quality research with the potential to answer important and actionable questions. Holden Karnofsky's Important, actionable research questions for the most important century offers a list of questions of this type that he finds most promising, in the following three areas: AI alignment, AI governance, and AI takeoff dynamics. Karnofsky also describes a process for assessing whether one is a good fit for conducting this type of research, and draws a contrast between this and two other types of research: research focused on identifying "cause X" candidates or uncovering new crucial considerations; and modest incremental research intended not to cause a significant update but rather to serve as a building block for other efforts.

Andreas Mogensen and David Thorstad's Tough enough? Robust satisficing as a decision norm for long-term policy analysis attempts to open a dialogue between philosophers working in decision theory and decision-making under deep uncertainty (DMDU), a field developed by operations researchers and engineers mostly neglected by the philosophical community. The paper focuses specifically on robust satisficing as a decision norm, and discusses decision-theoretic and voting-theoretic motivations for it. The paper may be seen as an attempt to address complaints raised by some members of the effective altruism community that longtermist researchers routinely ignore the tools and insights developed by DMDU and other relevant fields.

Fin Moorhouse wrote an in-depth profile on space governance—the most comprehensive examination of this topic by the EA community so far. His key points may be summarized as follows:

Space, not Earth, is where almost everyone will likely live if our species survives the next centuries or millennia. Shaping how this future unfolds is potentially enormously important.

Historically, a significant determinant of quality of life has been quality of governance: people tend to be happy when countries are well-governed and unhappy when countries are poorly governed. It seems plausible that the kind of space governance that ultimately prevails will strongly influence the value of humanity's long-term future.

Besides shaping how the future unfolds, space governance could make a difference to whether there is a future at all: effective arms control in space has the potential to significantly reduce the risk of armed conflict back on Earth, especially of great power conflict (which plausibly constitutes an existential risk factor).
- The present and near future appear to offer unusual opportunities for shaping space governance, because of the obsolescence of the current frameworks and the growing size of the private space sector.
- There are identifiable areas in which to make progress on space governance, such as reducing the risk of premature lock-in; addressing worries about the weaponization of asteroid deflection technology; establishing rules for managing space debris; and exploring possible mechanisms for deciding questions of ownership.
- Reasons against working on space governance include the possibility that early efforts might either be nullified by later developments or entrench worse forms of governance than later generations could otherwise have put in place; that the problem may be intractable, unnecessary to work on, or even undesirable to work on; and that there may be even more pressing problems, such as positively influencing the development of transformative AI.
- The main opportunities for individuals interested in working on this problem are doing research in academia or for a think tank, and shaping national or international decisions as a diplomat or civil servant, or at a private space company.

Neel Nanda's Simplify EA pitches to "holy shit, x-risk" (EA Forum discussion) argues that one doesn't need to accept total utilitarianism, reject person-affecting views, have a zero rate of pure time preference, or self-identify as a longtermist in order to prioritize existential risk reduction from artificial intelligence or biotechnology. It is enough to think that AI and bio have at least a 1% and a 0.1% chance, respectively, of causing human extinction in our lifetimes. These are still "weird" claims, however, and Nanda proposes some framings for making them more plausible.

YouGov surveyed Britons on human extinction. 3% think humanity will go extinct in the next 100 years; 23% think humanity will never go extinct. Asked to choose the three most likely causes of extinction from a list, respondents most often picked nuclear war (chosen by 43%), climate change (42%), and a pandemic (30%). One interesting change since the survey was last conducted: 43% now think the UK government should be doing more to prepare for AI risk, vs. 27% in 2016.

In The value of small donations from a longtermist perspective (EA Forum discussion), Michael Townsend argues that, despite recent substantial increases in funding, small longtermist donors can still have a significant impact. This conclusion follows from the simple argument that a donation to a GiveWell-recommended charity has a significant impact, and that the most promising longtermist donation opportunities are plausibly even more cost-effective than GiveWell-recommended charities. As Townsend notes, this conclusion is by itself silent on the value of earning to give, and can be accepted even by those who believe that longtermists should overwhelmingly focus on direct work.

Open Philanthropy's Longtermist EA Movement-Building team works to increase the amount of attention and resources devoted to problems that threaten the future of sentient life. The team funds established projects such as 80,000 Hours, Lightcone Infrastructure, and the Centre for Effective Altruism, and in 2021 directed approximately $60 million in giving. In Update from Open Philanthropy's longtermist EA movement-building team, Claire Zabel summarizes the team's activities and accomplishments so far and shares a number of updates, two of which seem especially important.
One is that the team will be spending less time evaluating opportunities and more time generating them. The other is that it is shifting from "cost-effectiveness" to "time-effectiveness" as its primary impact metric. This shift was prompted by the realization that grantee and grantmaker time, rather than money available for grantmaking, was the scarcer of the two resources.

News

Open Philanthropy's Longtermist EA Movement-Building team is hiring for several roles to scale up and expand its activities (EA Forum discussion). Applications close at 5 pm Pacific Time on March 25, 2022.

The Forethought Foundation is hiring for several roles: a Director of Special Projects, a Program Associate, and a Research Analyst supporting Philip Trammell. Applications for the first two roles are due by April 4, 2022.

The Policy Ideas Database is an effort to collect and categorize policy ideas within the field of existential risk studies. It currently features over 250 policies, and the authors estimate they have mined only 15% of the relevant sources. Read the EA Forum announcement for further details.

The Eon Essay Contest offers over $40,000 in prizes, including a top prize of $15,000, for outstanding essays on Toby Ord's The Precipice written by high school, undergraduate, and graduate students from any country. Essays are due June 15, 2022.

The FTX Foundation's Future Fund has officially launched. It hopes to distribute at least $100m this year to ambitious projects aimed at improving humanity's long-term prospects, roughly doubling the level of funding available for longtermist projects. The team is made up of Nick Beckstead (CEO), Leopold Aschenbrenner, William MacAskill, and Ketan Ramakrishnan. The FTX Foundation is funded primarily by Sam Bankman-Fried. The Future Fund also announced a prize of $5,000 for any project idea they like enough to add to the project ideas section of their website; the EA Forum linkpost attracted over 700 comments and hundreds of submissions. Note that the deadline for submitting projects has now passed. Finally, the Future Fund announced a regranting program offering discretionary budgets to independent part-time grantmakers. The budgets are to be spent in the next six months or so, and will typically range from $250k to a few million dollars. You can apply to be a regrantor, or recommend someone else for this role, here.

The UK government is soliciting feedback from biosecurity experts to update its biological security strategy, which seeks to protect the country from domestic and global biological risks, including emerging infectious diseases and potential biological attacks. There is some discussion on the EA Forum.

Effective Ideas is a project aiming to build an ecosystem of public-facing writing on effective altruism and longtermism. To this end, it is offering five $100,000 prizes for the very best new blogs on these themes, and further grants for promising young writers. Read more and apply on their website.

Conversation with Nick Beckstead

Nick Beckstead is an American philosopher and CEO of the FTX Foundation. Nick majored in mathematics and philosophy, and then completed a PhD in philosophy in 2013. During his studies, he co-founded the first US chapter of Giving What We Can, pledging to donate half of his post-tax income to the most cost-effective organizations fighting global poverty in the developing world.
Prior to joining the FTX Foundation last year, Nick worked as a Program Officer at Open Philanthropy, where he oversaw much of that organization's research and grantmaking related to global catastrophic risk reduction. Before that, he was a Research Fellow at Oxford's Future of Humanity Institute.

Future Matters: Your doctoral dissertation, On the Overwhelming Importance of Shaping the Far Future, is one of the earliest attempts to articulate the case for longtermism: the view that "what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years" (p. ii). We are curious about the intellectual journey that finally led you to embrace this position.

Nick Beckstead: I guess I became sympathetic to utilitarianism in college through reading Peter Singer and John Stuart Mill, and taking a History of Moral Philosophy class. I had this sense that the utilitarian version was the only one that turns out sensible answers in a systematic fashion, as if turning a crank or something. And so I got interested in that. I was interested in farm animal welfare and global health due to Peter Singer's influence, but had a sense of open-mindedness: is this the best thing? I don't know. My conclusion reading Famine, affluence, and morality was—well, you can do at least this [much] good surprisingly easily; who knows what the best thing is.

I guess there were a number of influences there, but maybe most notably I remember I came across Nick Bostrom's work, first on infinite ethics, and then I read Astronomical waste and had a sense of "wow, this seems kinda crazy but could be right; seems like maybe there's a lot of claims here that seem iffy to me, about exactly how nanotechnology is going to go down or how many people you can fit onto a planet." But it seemed like there was an argument there that has some legs. And then I thought about exactly how that argument worked, what the best version of it [was], and exactly what it required, and ended up in a place that's slightly different from what Bostrom said, but very much inspired by it—I think of it as a slight generalization.

Future Matters: How have your views on this topic evolved over the decade or so since writing your dissertation?

Nick Beckstead: So there's the baseline philosophy, and then there are the speculative details about what matters for the long-term future. I think the main way my views have changed on the speculative details side is that I've got more confidence in the AI story and some of the radical claims about what is possible with technology, and maybe possible with technology this century, and how that affects the long-term future. In some sense, the main way that developed is that when I was writing this dissertation it was an extremely weird and niche thing to be talking about, and I had some sense of "well, I don't know, this seems kinda crazy but could be right; I wonder if it will survive scrutiny from relevant experts?" I think the expert engagement it has received has not been totally ideal, because there isn't a particular field where you can have somebody [e.g.] read Superintelligence and say "this is the definitive evaluation of exactly what is true and false in it." It's not exactly computer science, it's not exactly economics, it's not exactly philosophy—it's a lot of judgment, and no one is really authoritative.
A thing that hasn't happened, but could well have happened, is that it was strongly refuted by excellent arguments. I think that decisively has not happened. It's a pretty interesting fact, and it's updated my belief in a bunch of that picture. So that's the main way that has changed.

And then a little thing, principles-wise: when I started writing my dissertation I was very intensely utilitarian and I was like "this is the way", and toward the end of writing the dissertation I became more like "well, this is the best systematic framework that I know of, but I think it has issues around infinities and I don't know how to resolve them in any nice systematic fashion, and maybe there is no nice systematic resolution of that." And what does that mean for thinking about ethics? I think it's made me a little bit more antirealist, a little bit more reluctant to give up common-sense (to me) things that are inconsistent with [utilitarianism] and deeply held, while still retaining it as a powerful generative framework for identifying important conclusions and actions—that's kinda how I use it now.

Future Matters: Open Philanthropy has been the major longtermist funder for ten or so years. Now the Future Fund is on track to be the biggest funder this coming year. We are curious if you could say a bit about the differences in grantmaking philosophy between them, and how you expect the Future Fund to differ in its approach to longtermist world-improvement.

Nick Beckstead: I don't know whether it will be larger in total grantmaking than Open Phil—I don't actually have the numbers to hand of exactly how much Open Phil spent this year, and maybe they will spend more than $100 million this year. I think it's a good question how they are going to be different. I loved working at Open Phil and have the utmost respect for the team, so a lot of things are going to be really similar, and I'm going to try to carry over a lot of the things that I thought were best about what Open Phil is doing. And then it's to be determined what exactly is going to be different. The Future Fund website is pretty straightforward about what it is and what it's trying to do right now.

So if I just think about what's different, one thing is that we're experimenting with some very broad, decentralized grantmaking programs, and we're taking a "bold and decisive tests" approach to engaging with them. Our current plan is this: we launched a regranting program for about 20 people, who are deputized as grantmakers who can make grants on our behalf. I think that's exciting, because if we are making big mistakes, these people may be able to fill in the gaps and bring us good ideas that we would have missed. It could be a thing that, if it goes well, could scale a lot with limited oversight, which is great if you are a major funder. So it's something it makes sense to experiment with.

Another one is open calls for proposals. And I don't know if this will be the way we will always do things. All of these things are experiments we are doing this year. The intention is to test them in a really big way, where there's a big enough test that if it doesn't work you are, like, "eh, I guess that thing doesn't work", and you are not, like, "oh, well, maybe if we had made it 50% larger it would have been good". So, yeah, trying to boldly and decisively test these things. Open Phil has had some open calls for proposals, but ours was very widely promoted, and has very wide scope in terms of what people can apply for and the range of funding.
So that's an interesting experiment and we'll see what comes of that. We're interested in experimenting with prizes this year; our thoughts there are less developed in terms of exactly what form they will take. And then the other big thing we want to do some of this year is just trying to get people to launch new projects. We've written down this list of 35–40 things that we are interested in seeing people do, as examples and as a way to get people started. So we are trying to connect founders with those projects and see where that goes. Open Phil does some of that too, so I don't know if that's incredibly different. But it's a bit different in that we have a longer list, it's all in one place, and we're doing a concentrated push around it. So we'll see how these things go. And if one of these things is great, maybe we'll do a lot more of it. If none of them are great, then maybe we'll be a whole lot like Open Phil. [laughs]

Oh, I have one other way in which we are interestingly different. We don't have Program Officers who are dedicated to specific things right now. We may, and probably will, in the future, but right now we are testing mechanisms more than we are saying "we are hiring a Program Officer who does AI alignment, and we are going to make AI alignment grants under the following structure for a while". It's more like we are testing these mechanisms, and people can participate through them in many of our areas of interest, or in anything else they think is particularly relevant to the long-term future that we might have overlooked.

Future Matters: Where do you imagine the Future Fund being in ten years' time? Given what you have said, there's a lot of uncertainty here, contingent on the results of these experiments. But can you see the outlines of how things may look a decade from now?

Nick Beckstead: I think I can't, really. It probably depends mostly on how these experiments go. We'll hopefully know a lot more about how each of these types of grantmaking has gone. One answer is that if one of these things is great, we'll do a bunch more of it. But I think it's too early to say.

Future Matters: And this experimentation over the next year: is there a hard limit there, or could you imagine keeping this exploratory phase going for many years?

Nick Beckstead: Good question. I think I am mostly just thinking about the next year, and then we'll see where it goes. I think there will be some interesting tests. I'm excited about that: the intention is really to learn as much as possible, so we are optimizing around that. And for that reason, there's a lot of openness.

Future Matters: Thanks, Nick!…