Future Matters Reader

Author: Matthew van der Merwe, Pablo Stafforini


Description

Future Matters Reader releases audio versions of most of the writings summarized in the Future Matters newsletter.
125 Episodes
Success without dignity: a nearcasting story of avoiding catastrophe by luck, by Holden Karnofsky. https://forum.effectivealtruism.org/posts/75CtdFj79sZrGpGiX/success-without-dignity-a-nearcasting-story-of-avoiding Note: Footnotes in the original article have been omitted.
In this post, Larks argues that the proposal to have AI firms promise to donate a large fraction of their profits if they become extremely profitable would primarily benefit the management of those firms, giving managers an incentive to move fast, aggravating race dynamics and in turn increasing existential risk. https://forum.effectivealtruism.org/posts/ewroS7tsqhTsstJ44/a-windfall-clause-for-ceo-could-worsen-ai-race-dynamics
This is Otto Barten's summary of 'The effectiveness of AI existential risk communication to the American and Dutch public' by Alexia Georgiadis. In this paper, Georgiadis measures changes in participants' awareness of AGI risks after they consume various media interventions. Summary: https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary-the-effectiveness-of-ai-existential-risk Original paper: https://existentialriskobservatory.org/papers_and_reports/The_Effectiveness_of_AI_Existential_Risk_Communication_to_the_American_and_Dutch_Public.pdf Note: Some tables in the summary have been omitted in this audio version.
Carl Shulman & Elliott Thornley argue that the goal of longtermists should be to get governments to adopt global catastrophic risk policies based on standard cost-benefit analysis rather than arguments that stress the overwhelming importance of the future. https://philpapers.org/archive/SHUHMS.pdf Note: Tables, notes and references in the original article have been omitted.
"The field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy." https://forum.effectivealtruism.org/posts/HCuoMQj4Y5iAZpWGH/advice-on-communicating-in-and-around-the-biosecurity-policy Note: Some footnotes in the original article have been omitted.
The Global Priorities Institute has published a new paper summary: 'Are we living at the hinge of history?' by William MacAskill. https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/ Note: Footnotes and references in the original article have been omitted.
The Global Priorities Institute has published a new paper summary: 'Longtermist institutional reform' by Tyler John & William MacAskill. https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/ Note: Footnotes and references in the original article have been omitted.
The Global Priorities Institute has released Hayden Wilkinson's presentation on global priorities research. (The talk was given in mid-September last year but remained unlisted until now.) https://globalprioritiesinstitute.org/hayden-wilkinson-global-priorities-research-why-how-and-what-have-we-learned/
New rules around gain-of-function research make progress in striking a balance between scientific reward and catastrophic risk. https://www.vox.com/future-perfect/2023/2/1/23580528/gain-of-function-virology-covid-monkeypox-catastrophic-risk-pandemic-lab-accident
"One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough." https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html
Victoria Krakovna makes the point that you don't have to be a longtermist to care about AI alignment.
Anthropic shares a summary of their views about AI progress and its associated risks, as well as their approach to AI safety. https://www.anthropic.com/index/core-views-on-ai-safety Note: Some footnotes in the original article have been omitted.
Noah Smith argues that, although AGI might eventually kill humanity, large language models are not AGI, may not be a step toward AGI, and there's no plausible way they could cause extinction. https://noahpinion.substack.com/p/llms-are-not-going-to-destroy-the
Peter Eckersley did groundbreaking work to encrypt the web. After his sudden death, a new organization he founded is carrying out his vision to steer artificial intelligence toward “human flourishing.” https://www.wired.com/story/peter-eckersley-ai-objectives-institute/
A working paper by Shakked Noy and Whitney Zhang examines the effects of ChatGPT on production and labor markets. https://economics.mit.edu/sites/default/files/inline-files/Noy_Zhang_1.pdf Note: Some tables and footnotes in the original article have been omitted.
Robin Hanson restates his views on AI risk. https://www.overcomingbias.com/p/ai-risk-again
In an Institute for Progress report, Bridget Williams and Rowan Kane make five policy recommendations to mitigate risks of catastrophic pandemics from synthetic biology. https://progress.institute/preventing-the-misuse-of-dna-synthesis/
Kevin Collier on how ChatGPT and advanced AI might change how we define consciousness. https://www.nbcnews.com/tech/tech-news/chatgpt-ai-consciousness-rcna71777
Eric Landgrebe, Beth Barnes and Marius Hobbhahn discuss a survey of 1,000 participants on their views about what values should be put into powerful AIs. https://www.lesswrong.com/posts/4iAkmnhhqNZe8JzrS/reflection-mechanisms-as-an-alignment-target-attitudes-on Note: Some tables in the original article have been omitted.
Risto Uuk published the EU AI Act Newsletter #24. https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-24