The Nonlinear Library: EA Forum Daily

Author: The Nonlinear Fund


Description

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
73 Episodes
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EffectiveAltruismData.com is now a spreadsheet, published by Hamish Doodles on July 23, 2023 on The Effective Altruism Forum. A few years ago I built EffectiveAltruismData.com, which looked like this: A few people told me they liked the web app. Some even said they found it useful, especially the bits that made the funding landscape more legible. But anyway, I never got around to automating the data scraping, and the website ended up hopelessly out of date. So I killed it. But it recently occurred to me that I could do the data scraping, data aggregation, and data visualisation, all within Google Sheets. So with a bit of help from Chatty G, I put together a spreadsheet which: Downloads the latest grant data from the Open Philanthropy website every 24 hours (via Google Apps Scripts). Aggregates funding by cause area. Aggregates funding by organization. Visualises all grant data in a pivot table that lets you expand/aggregate by Cause Area, then Organization Name, then individual grants. But note that expanding/collapsing counts as editing the spreadsheet, so you'll have to make a copy to be able to do this. You can also change the scale of the bar chart using the dropdown. And you can sort grants by size or by date using the "Sort Sheet Z to A" option on the Amount or Date columns. Here's a link to the spreadsheet. You can also find it at www.effectivealtruismdata.com. Other funding legibility projects: Here's another thing I made. It gives time series and cumulative bar charts for funding based on funder and cause area. You can hover over points on the time series to get the total funding per cause/org per year. The data comes from this spreadsheet by TylerMaule. Another thing which may be of interest is openbook.fyi by Rachel Weinberg & Austin Chen, which lets you search/view individual grants from a range of EA-flavoured sources. Openbook gets its data from donations.vipulnaik.com/ by Vipul Naik. I'm currently working on another spreadsheet which scrapes, aggregates, and visualises all of Vipul Naik's data. Feedback & Requests: I enjoy working on spreadsheets and data viz and stuff. Let me know if you can think of any other stuff in this area which would be useful. This is a joke. This is also a joke. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
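For readers who want to replicate the pipeline described above outside of Google Sheets, here is a minimal Python sketch of the same idea: download a grants export, then aggregate amounts by cause area or organization. The author's actual version runs as a Google Apps Script inside the spreadsheet; the export URL and column names below (GRANTS_CSV_URL, "Amount", "Focus Area") are placeholders and assumptions, not the real ones.

```python
# Rough sketch of the scrape-and-aggregate pipeline described in the post.
# The URL and column names are assumptions; adapt them to the actual export.
import csv
import io
import urllib.request
from collections import defaultdict

GRANTS_CSV_URL = "https://example.org/openphil-grants.csv"  # placeholder: assumed export URL


def fetch_grants(url: str) -> list[dict]:
    """Download the grants CSV and parse it into a list of row dicts."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))


def aggregate(grants: list[dict], key: str) -> dict[str, float]:
    """Sum grant amounts by a chosen column, e.g. cause area or organization."""
    totals: dict[str, float] = defaultdict(float)
    for row in grants:
        raw = row.get("Amount", "0").replace("$", "").replace(",", "")
        totals[row.get(key, "Unknown")] += float(raw or 0)
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))


if __name__ == "__main__":
    grants = fetch_grants(GRANTS_CSV_URL)
    for cause, total in aggregate(grants, "Focus Area").items():
        print(f"{cause}: ${total:,.0f}")
```

The same aggregation, pointed at an organization-name column instead of the cause-area column, gives the second view the spreadsheet offers.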
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on yesterday's UN Security Council meeting on AI, published by Greg Colbourn on July 22, 2023 on The Effective Altruism Forum. Firstly, it's encouraging that AI is being discussed as a threat at the highest global body dedicated to ensuring global peace and security. This seemed like a remote possibility just 4 months ago. However, throughout the meeting, (possibly near term) extinction risk from uncontrollable superintelligent AI was the elephant in the room. ~1% air time, when it needs to be ~99%, given the venue and its power to stop it. Let's hope future meetings improve on this. Ultimately we need the UNSC to put together a global non-proliferation treaty on AGI, if we are to stand a reasonable chance of making it out of this decade alive. There was plenty of mention of using AI for peacekeeping. However, this seems naive in light of the offence-defence asymmetry facilitated by generative AI (especially when it comes to threats like bio-terror/engineered pandemics, and cybercrime/warfare). And in the limit of outsourcing intelligence gathering and strategy recommendations to AI (whilst still keeping a human in the loop), you get scenarios like this. Highlights: China mentioned Pause: "The international community needs to... ensure that risks beyond human control don't occur. We need to strengthen the detection and evaluation of the entire lifecycle of AI, ensuring that mankind has the ability to press the pause button at critical moments." (Zhang Jun, representing China at the UN Security Council meeting on AI). Mozambique mentioned the Sorcerer's Apprentice, human loss of control, recursive self-improvement, accidents, catastrophic and existential risk: "In the event that credible evidence emerges indicating that AI poses an existential risk, it's crucial to negotiate an intergovernmental treaty to govern and monitor its use." (Manuel Gonçalves, Deputy Minister for Foreign Affairs of Mozambique, at the UN Security Council meeting on AI) (A bunch of us protesting about this outside the UK Foreign Office last week.) (PauseAI's comments on the meeting on Twitter.) (Discussion with Jack Clark on Twitter re his lack of mention of x-risk. Note that the post-war atomic settlement - the Baruch Plan - would probably have been quite different if the first nuclear detonation was assessed to have a significant chance of igniting the entire atmosphere!) (My Tweet version of this post. I'm Tweeting more as I think it's time for mass public engagement on AGI x-risk.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA EDA: What do Forum Topics tell us about changes in EA?, published by JWS on July 15, 2023 on The Effective Altruism Forum. tl;dr2: Data on EA Forum posts and topics doesn't show clear 'waves' of EA. tl;dr: I used the Forum API to collect data on the trends of EA Forum topics over time. While this analysis is by no means definitive, it doesn't support the simple narrative that there was a golden age of EA that has been abandoned for a much worse one. There has been a rise in AI Safety posts, but that has also been fairly recent (within the last ~2 years). 1. Introduction. I really liked Ben West's recent post about 'Third Wave Effective Altruism', especially for its historical reflection on what First and Second Wave EA looked like. This characterisation of EA's history seemed to strike a chord with many Forum users, and has been reflected in recent critical coverage of EA that claims the movement has abandoned its well-intentioned roots (e.g. donations for bed nets) and decided to focus fully on bizarre risks to save a distant, hypothetical future. I've always been a bit sceptical of how common this sort of framing seems to be, especially since the best evidence we have from funding for the overall EA picture shows that most funding is still going to Global Health areas. As something of a (data) scientist myself, I thought I'd turn to one of the primary sources of information for what EAs think to shed some more light on this problem - the Forum itself! This post is a write-up of the initial data collection and analysis that followed. It's not meant to be the definitive word on either how EA, or use of the EA Forum, has changed over time. Instead, I hope it will challenge some assumptions and intuitions, prompt some interesting discussion, and hopefully lead to future posts in a similar direction either from myself or others. 2. Methodology (Feel free to skip this section if you're not interested in all the caveats). You may not be aware, but the Forum has an API! While I couldn't find clear documentation on how to use it or a fully defined schema, people have used it in the past for interesting projects and some have very kindly shared their results & methods. I found the following three especially useful (the first two have linked GitHubs with their code): The Tree of Tags by Filip Sondej. Effective Altruism Data from Hamish. This LessWrong tutorial from Issa Rice. With these examples to help me, I created my own code to get every post made on the EA Forum to date (excluding posts that have since been deleted). There are various caveats to make about the data representation and data quality. These include: I extracted the data on July 7th - so any totals (e.g. number of posts, post score etc) or other details are only correct as of that date. I could only extract the postedAt date - which isn't always when the post in question was actually posted. A case in point: I'm pretty sure this post wasn't actually posted in 1972. However, it's the best data I could find, so hopefully for the vast majority of posts the display date is the posted date. In looking for a starting point for the data, there was a discontinuity between August and September 2014, but the data was a lot more continuous after that. I analyse the data in terms of monthly totals, so I threw out the one week of data I had for July.
The final dataset is therefore 106 months from September 2014 to June 2023 (inclusive). There are around ~950 distinct tags/topics in my data, which are far too many to plot concisely and share useful information. I've decided to take the top 50 topics in terms of times used, which collectively account for 56% of all Forum tags and 92% of posts in the above time period. I only extracted the first listed Author of a post - however, only 1 graph shared below relies on a user-level aggregat...
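For anyone who wants to try the data collection described in this episode, here is a rough Python sketch of querying the Forum's GraphQL API for posts and their postedAt dates. The endpoint shown is the one the Forum exposes for GraphQL queries, but, as the author notes, there is no clearly documented schema, so the exact query shape and field names used here (posts, postedAt, baseScore) are assumptions that may need adjusting against real responses.

```python
# Minimal sketch of pulling recent posts from the EA Forum's GraphQL API.
# Query shape and field names are assumptions; inspect real responses and adapt.
import requests

ENDPOINT = "https://forum.effectivealtruism.org/graphql"

QUERY = """
{
  posts(input: {terms: {limit: 50, offset: 0, sortedBy: "new"}}) {
    results {
      title
      postedAt
      baseScore
    }
  }
}
"""


def fetch_posts() -> list[dict]:
    """POST the GraphQL query and return the list of post records."""
    resp = requests.post(ENDPOINT, json={"query": QUERY}, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["posts"]["results"]


if __name__ == "__main__":
    for post in fetch_posts():
        print(post["postedAt"], post["baseScore"], post["title"])
```

Paging through the full post history would mean looping over increasing offset values (or whatever pagination mechanism the schema actually provides) and accumulating the results before aggregating by month and topic.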
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider Earning Less, published by ElliotJDavies on July 1, 2023 on The Effective Altruism Forum. This post is aimed at those working in jobs which are funded by EA donors who might be interested in voluntarily earning less. This post isn't aimed at influencing pay scales at organisations, or at those not interested in earning less. When the Future Fund was founded in 2022, there was a simultaneous upwards pressure on both ambitiousness and net earnings in the wider EA community. The pressure to be ambitious resulted in EAs really considering the opportunity cost of key decisions. Meanwhile, the discussions around why EAs should consider ordering food or investing in a new laptop pointed towards a common solution: EAs in direct work earning more. The funding situation has shifted significantly since then, as has the supply-demand curve for EA jobs. This should put a deflationary pressure on EAs' salaries, but I'd argue we largely haven't seen this effect, likely because people's salaries are "sticky". One result of this is that there are a lot of impactful projects which are unable to find funding right now, and in a similar vein, there are a lot of productive potential employees who are unable to get hired right now. There's even a significant proportion of employees who will be made redundant. This seems a shame, since there are no good reasons for salaries to be sticky. It seems especially bad if we do in fact see significant redundancies, since under a "veil of ignorance" the optimal behaviour would be to voluntarily lower your salary (assuming you could get your colleagues to do the same). Members of German labour unions quite commonly do something similar (Kurzarbeit) during economic downturns, to avoid layoffs and enable faster growth during an upturn. Some Reasons you Might Want to Earn Less: You want to do as much good as possible, and suspect your organisation will do more good if it had more money at hand. Your organisation is likely to make redundancies, which could include you. You have short timelines, and you suspect that by earning less, more people could work on alignment. You can consider your voluntary pay cut a donation, which you can report on your GWWC account. (The great thing about pay-cut donations is you essentially get a 100% tax refund, which is particularly nice if you live somewhere with high income tax). Some Reasons you May Not Want to Earn Less: It would cause you financial hardship. You would experience a significant drop in productivity. You suspect it would promote an unhealthy culture in your organisation. You expect you're much better than the next-best candidate, and you'd be less likely to work in a high-impact role if you had to earn less. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
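A worked version of the "100% tax refund" point in the post above, under assumed numbers: a 40% marginal income tax rate and no deductibility for ordinary donations. Tax treatment varies a lot by country, so treat this as purely illustrative rather than financial guidance.

```python
# Illustrative comparison of donating post-tax income vs. taking a pay cut.
# All figures are assumptions for the sake of the example.
MARGINAL_TAX_RATE = 0.40
TRANSFER = 1_000  # amount the organisation ends up keeping or receiving

# Option A: donate out of take-home pay (assuming no deduction applies):
# you give up the full post-tax amount.
cost_of_donation = TRANSFER

# Option B: take a pay cut of the same gross size:
# the org keeps the full amount, but your take-home only falls by the post-tax
# portion, because you never pay income tax on salary you don't receive.
cost_of_pay_cut = TRANSFER * (1 - MARGINAL_TAX_RATE)

print(f"Org receives/keeps:               {TRANSFER}")
print(f"Cost to you if donated post-tax:  {cost_of_donation}")
print(f"Cost to you via pay cut:          {cost_of_pay_cut:.0f}")
```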
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive rates Open-Phil-funded charity NTI “too rich”, published by Sanjay on June 18, 2023 on The Effective Altruism Forum. Exec summary Under SoGive’s methodology, charities holding more than 1.5 years’ expenditure are typically rated “too rich”, in the absence of a strong reason to judge otherwise. (more) Our level of confidence in the appropriateness of this policy depends on fundamental ethical considerations, and could be “clearly (c.95%) very well justified” or “c.50% to c.90% confident in this policy, depending on the charity” (more) We understand that the Nuclear Threat Initiative (NTI) holds > 4 years of spend (c$85m), as at the most recently published Form 990, well in excess of our warning threshold. (more) We are now around 90% confident that NTI’s reserves are well in excess of our warning threshold, indeed >3x annual spend, although there are some caveats. (more) Our conversation with NTI about this provides little reason to believe that we should deviate from our default rating of “too rich”. (more) It is possible that NTI could show us forecasts of their future income and spend that might make us less likely to be concerned about the value of donations to NTI, although this seems unlikely since they have already indicated that they do not wish to share this. (more) We do not typically recommend that donors donate to NTI. However we do think it’s valuable for donors to communicate that they are interested in supporting their work, but are avoiding donating to NTI because of their high reserves. (more) Although this post is primarily to help donors decide whether to donate to NTI, readers may find it interesting for understanding SoGive's approach to charities which are too rich, and how this interacts with different ethical systems. We thank NTI for agreeing to discuss this with us knowing that there was a good chance that we might publish something on the back of the discussion. We showed them a draft of this post before publishing; they indicated that they disagree with the premise of the piece, but declined to indicate what specifically they disagreed with. 0. Intent of this post Although this post highlights the fact that NTI has received funding from Open Philanthropy (Open Phil), the aim is not to put Open Philanthropy on the spot or demand any response from them. Rather, we have argued that it is often a good idea for donors to “coattail” (i.e. copy) donations made by Open Phil. For donors doing this, including donors supported by SoGive, we think it’s useful to know which Open Phil grantees we might give lower or higher priority to. 1. Background on SoGive’s methodology for assessing reserves The SoGive ratings scale has a category called “too rich”. It is used for charities which we deem to have a large enough amount of money that it no longer makes sense for donors to provide them with funds. We set this threshold at 18 months of spend (i.e. if the amount of unrestricted reserves is one and a half times as big as its annual spend then we typically deem the charity “too rich”). To be clear, this allows the charity carte blanche to hold as much money as it likes as long as it indicates that it has a non-binding plan for that money. So, having generously ignored the designated reserves, we then notionally apply the (normally severe) stress of all the income disappearing overnight. 
Our threshold considers the scenario where the charity has so much reserves that it could go for one and a half years without even having to take management actions such as downsizing its activities. In this scenario, we think it is likely better for donors to send their donations elsewhere, and allow the charity to use up its reserves. Originally we considered a different, possibly more lenient policy. We considered that charities should be considered too rich if they...
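As a quick illustration of the reserves rule this episode describes, here is a tiny sketch of the 1.5-years-of-spend threshold. The example figures are round numbers loosely consistent with the post's description of NTI (reserves of roughly $85m against annual spend on the order of $20m); they are illustrative only, not SoGive's actual inputs or rating process.

```python
# Sketch of SoGive's "too rich" reserves rule: unrestricted reserves exceeding
# 1.5x annual spend triggers the rating, absent a strong reason to judge otherwise.
THRESHOLD_YEARS = 1.5


def reserves_rating(unrestricted_reserves: float, annual_spend: float) -> str:
    """Return a crude rating based only on the reserves-to-spend ratio."""
    ratio = unrestricted_reserves / annual_spend
    if ratio > THRESHOLD_YEARS:
        return f"too rich ({ratio:.1f} years of spend held in reserve)"
    return f"within threshold ({ratio:.1f} years of spend held in reserve)"


# Illustrative numbers only: ~$85m reserves against a hypothetical $20m annual spend.
print(reserves_rating(85_000_000, 20_000_000))
```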
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Driving Education on EA Topics Through Khan Academy, published by Hailey Dickson on August 27, 2022 on The Effective Altruism Forum. What would you teach students in <1 min to prepare them to change the world? Hello everyone, I'm Hailey and I manage the social content strategy for Khan Academy, one of the world's largest nonprofit EdTech platforms, providing free Pre-K through college curricula in >50 languages to over 140 million users worldwide. You may know us from our original YouTube channel, but we're now a global organization partnered with school districts across the US, Brazil, and India to try to improve learning outcomes in critical areas. In particular, we are now trying to move the needle on STEM education by accelerating learning in historically under-resourced communities. I'm tasked with launching our social content strategy to drive meaningful learning to as many students as possible. I'm trying to conceptualize a TikTok/YouTube Shorts series with advice, tips, lessons, etc. that will get young learners excited about EA topics. I would love to curate a list of topics from the EA community that you think would give students the best chance of creating a better future. I think there is a unique opportunity through Khan Academy to drive millions of learners toward the topics that could build a better future, so I'd love to make the most of our social platforms with your insights! If you'd like to connect, feel free to shoot me an email at hailey@khanacademy.org Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A critical review of GiveWell's 2022 cost-effectiveness model, published by Froolow on August 25, 2022 on The Effective Altruism Forum. Section 1 – Introduction. 1.1 Summary. This is an entry into the ‘Effective Altruism Red Teaming Contest’ – it looks critically at GiveWell’s current cost-effectiveness model. The goal of the essay is to change GiveWell’s mind about the appropriateness of specific features of their cost-effectiveness model. However, I have tried to avoid writing an exhaustive point-by-point deconstruction of GiveWell’s model and instead tried to shine a light on common ‘blind spots’ that frequently occur in economic modelling and which the EA community has apparently not error-corrected until this point. In that respect a secondary goal of the essay is to be more broadly applicable and change the EA community’s view about what the gold standard in economic evaluation looks like and help provide a framework for error-correcting future economic models. This contributes to a larger strategic ambition I think EA should have, which is improving modelling capacity to the point where economic models can be used as reliable guides to action. Economic models are the most transparent and flexible framework we have invented for difficult decisions taken under resource constraint (and uncertainty), and in utilitarian frameworks a cost-effectiveness model is an argument in its own right (and debatably the only kind of argument that has real meaning in this framework). Despite this, EA appears much more bearish on the use of economic models than sister disciplines such as Health Economics. My conclusion in this piece is that there is scope for a paradigm shift in EA modelling, which would improve decision-making around contentious issues. In general, GiveWell’s model is of very high quality. It has few errors, and almost no errors that substantially change conclusions. I would be delighted if professional modellers I work with had paid such care and attention to a piece of cost-effectiveness analysis. However, it has a number of ‘architectural’ features which could be improved with further effort. For example, the structure of the model is difficult to follow (and likely prone to error) and data sources are used in a way which appears inappropriate at times. A summary of the issues considered in this essay is presented below: In my view, all of these issues except the issue of uncertainty analysis could be trivially fixed (trivial for people as intelligent as the GiveWell staff, anyway!). The issue of uncertainty analysis is much more serious; no attempt is made in the model to systematically investigate uncertainty and this potentially leads to the model being underutilised by GiveWell. This failure to conduct uncertainty analysis is not limited to GiveWell, but is instead low-hanging fruit for greatly improving the impact of future cost-effectiveness modelling across the whole of EA. I will write an essay on this topic specifically in the very near future. 1.2 Context. This essay is an attempt to ‘red team’ the current state of cost-effectiveness modelling in Effective Altruism.
I have done this by picking a cost-effectiveness model which I believe to be close to the current state-of-the-art in EA cost-effectiveness modelling – GiveWell’s 2022 cost-effectiveness analysis spreadsheet – and applied the same level of scrutiny I would as if I was peer reviewing an economic model in my own discipline of Health Economics. For various reasons I’ll address below, it was easier for me to give this critique after completely refactoring the original model, and therefore much of what follows is based on my own analysis of GiveWell’s input data. You might find it helpful to have my refactored version of the model open as a companion piece to this essay. If so, it is down...
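Since the review's central complaint is the absence of systematic uncertainty analysis, here is a minimal, generic illustration of the kind of probabilistic sensitivity analysis the author is pointing at. This is not GiveWell's actual model: the two uncertain inputs and their distributions below are made up purely to show how parameter uncertainty can be propagated to a distribution over the result instead of a single point estimate.

```python
# Toy probabilistic sensitivity analysis: sample uncertain inputs, propagate them
# through a simple cost-effectiveness ratio, and report the spread of the result.
import random
import statistics

random.seed(0)
N = 10_000

samples = []
for _ in range(N):
    cost_per_person = random.lognormvariate(mu=1.6, sigma=0.3)    # roughly $5 per person, uncertain
    effect_per_person = random.betavariate(alpha=2, beta=198)     # roughly 1% chance of averting the bad outcome
    samples.append(cost_per_person / effect_per_person)           # cost per outcome averted

samples.sort()
print(f"median cost per outcome averted: ${statistics.median(samples):,.0f}")
print(f"90% interval: ${samples[int(0.05 * N)]:,.0f} - ${samples[int(0.95 * N)]:,.0f}")
```

The point of the exercise is the interval, not the point estimate: if the decision would flip somewhere inside that interval, the model is telling you where more evidence is worth gathering.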
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critique of MacAskill’s “Is It Good to Make Happy People?”, published by Magnus Vinding on August 23, 2022 on The Effective Altruism Forum. In What We Owe the Future, William MacAskill delves into population ethics in a chapter titled “Is It Good to Make Happy People?” (Chapter 8). As he writes at the outset of the chapter, our views on population ethics matter greatly for our priorities, and hence it is important that we reflect on the key questions of population ethics. Yet it seems to me that the book skips over some of the most fundamental and most action-guiding of these questions. In particular, the book does not broach questions concerning whether any purported goods can outweigh extreme suffering — and, more generally, whether happy lives can outweigh miserable lives — even as these questions are all-important for our priorities. The Asymmetry in population ethics A prominent position that gets a very short treatment in the book is the Asymmetry in population ethics (roughly: bringing a miserable life into the world has negative value while bringing a happy life into the world does not have positive value — except potentially through its instrumental effects and positive roles). The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172): If we think it’s bad to bring into existence a life of suffering, why should we not think that it’s good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second. This claim about “any argument” seems unduly strong and general. Specifically, there are many arguments that support the intrinsic badness of bringing a miserable life into existence that do not support any intrinsic goodness of bringing a flourishing life into existence. Indeed, many arguments support the former while positively denying the latter. One such argument is that the presence of suffering is bad and morally worth preventing while the absence of pleasure is not bad and not a problem, and hence not morally worth “fixing” in a symmetric way (provided that no existing beings are deprived of that pleasure). A related class of arguments in favor of an asymmetry in population ethics is based on theories of wellbeing that understand happiness as the absence of cravings, preference frustrations, or other bothersome features. According to such views, states of untroubled contentment are just as good — and perhaps even better than — states of intense pleasure. These views of wellbeing likewise support the badness of creating miserable lives, yet they do not support any supposed goodness of creating happy lives. On these views, intrinsically positive lives do not exist, although relationally positive lives do. Another point that MacAskill raises against the Asymmetry is an example of happy children who already exist, about which he writes (p. 172): if I imagine this happiness continuing into their futures—if I imagine they each live a rewarding life, full of love and accomplishment—and ask myself, “Is the world at least a little better because of their existence, even ignoring their effects on others?” it becomes quite intuitive to me that the answer is yes. However, there is a potential ambiguity in this example. 
The term “existence” may here be understood to either mean “de novo existence” or “continued existence”, and interpreting it as the latter is made more tempting by the fact that 1) we are talking about already existing beings, and 2) the example mentions their happiness “continuing into their futures”. This is relevant because many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) versus the potential value of bringing ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The community health team’s work on interpersonal harm in the community, published by Julia Wise on August 19, 2022 on The Effective Altruism Forum. This post aims to explain the work the community health team at the Centre for Effective Altruism does about particular kinds of community problems. The team does several kinds of work aimed at supporting the EA community and reducing risks to EA’s ability to have positive impact. We spend most of our time on those other kinds of work, but this post only focuses on work on interpersonal harm. We think this is likely the part of our work people have the most questions and confusion about, so we wanted to share more about it. In short, we try to reduce risk of harm to members of the community while being fair to people who are accused of wrongdoing. That’s a tricky balance, particularly when the need for confidentiality limits our ability to speak to everyone involved, and we sometimes get parts wrong. This post describes both some general principles and a year’s worth of specific examples. I’m not writing this now because anything in particular is going on. My goal here is to provide some transparency about how these things work in general, rather than commenting on any particular current situation. What kind of situation is this post about? It’s about actions people sometimes take that cause harm to others in the community, for example: Someone pushes past another person’s boundaries. This ranges from accidental discomfort, to sexual harassment, to deliberate sexual assault. Needlessly harsh or mean behavior. Erratic behavior that causes disruption or harm for others. Deception / dishonesty. Internally, we call this “risky actor” work. Concrete examples are below. Responses the community health team might make: no action; talking with the person about how to improve their behavior; restricting them from CEA events; informing other EA groups / projects / organizations about the problem; (very rarely) publicly warning others about the problem. Often it’s very unclear what the best response would be, and people will disagree about whether we handled something well. Who works on these situations? The community health team is Nicole Ross (manager), Catherine Low, Chana Messinger, Eve McCormick, and me. I’ve done this kind of work at CEA since 2016. Currently Catherine and I are the main people on the team handling these kinds of situations. More background. Other people including group organizers also end up handling such situations.
Difficult trade-offs. Balances where I think both sides are valuable, and I’m not sure if we’ve got the balance right: Avoid false negatives: take action if there’s reason to think someone is causing problems. Avoid false positives: don’t unfairly harm someone’s reputation / ability to participate in EA. Keep the community health team’s scope within what we can realistically handle; don’t take on too much. Don’t stand idly by while people do harm in the community. Encourage the sharing of research and other work, even if the people producing it have done bad stuff personally. Don’t let people use EA to gain social status that they’ll use to do more bad stuff. Take the talent bottleneck seriously; don’t hamper hiring / projects too much. Take culture seriously; don’t create a culture where people can predictably get away with bad stuff if they’re also producing impact. Try to improve the gender balance / not make it worse; take strong action on behavior that makes women uncomfortable in EA spaces. Don’t crack down too much on spontaneity / dating / socializing; don’t make men feel that a slip-up or distorted accusation will ruin their life. Let people know we take action against bad behavior and we care about this. Don’t create the impression that EA spaces are fully screened and safe - that’s not the case. Give people a second o...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rhodri Davies on why he's not an EA, published by Sanjay on August 18, 2022 on The Effective Altruism Forum. Rhodri Davies is a smart, reasonable, and well-respected commentator on philanthropy. Many people who follow charity and philanthropy in the UK (outside of EA) are familiar with his blog. He also has a background in maths and philosophy at Oxford (if I remember correctly) so he's exactly the sort of person that EA might attract, so it should be of interest to the EA movement to know why he didn't want to sign up. The critique that I most liked was the one entitled "Is EA just another in a long line of attempts to “rationalise” philanthropy?" I've copied and pasted it below. Rhodri has spent a lot of time thinking about the history of philanthropy, so his perspective is really valuable. Is EA just another in a long line of attempts to “rationalise” philanthropy? The dose of historical perspective at the end of the last section brings me to another one of my issues with EA: a nagging suspicion that it is in fact just another in a very long line of efforts to make philanthropy more “rational” or “effective” throughout history. The C18th and early C19th, for instance, saw efforts to impose upon charity the principles of political economy (the precursor to modern economics which focused on questions of production, trade and distribution of national wealth – as exemplified in the work of writers such as Adam Smith, Thomas Malthus and David Ricardo). Then in the C19th and early C20th the Charity Organisation Society and Scientific Philanthropy movements waged war on the perceived scourge of emotionally driven “indiscriminate giving”. Charity Organization Society, by Henry Tonks 1862-1937. (Made available by the Tate Gallery under a CC 3.0 license http://www.tate.org.uk/art/work/T11004) This perhaps bothers me more than most people because I spend so much of my time noodling around in the history of philanthropy. It also isn’t a reason to dismiss EA out of hand: the fact that it might have historical precedents doesn’t invalidate it, it just means that we should be more critical in assessing claims of novelty and uniqueness. It also suggests to me that there would be value in providing greater historical context for the movement and its ideas. Doing so may well show that EA is genuinely novel in at least some regards (the idea of total cause agnosticism, for instance, is something that one might struggle to find in previous attempts to apply utilitarian thinking to philanthropy). But the other thing the history of philanthropy tends to show is that everyone thinks at the time that their effort to make giving “better” or “more rational” is inherently and objectively right, and it is often only with the benefit of hindsight that it becomes clear quite how ideologically driven and of their time they actually are. For my money, it is still an open question as to whether future historians will look back on EA in the same way that we look back on the Charity Organisation movement today. The other thing that historical perspective brings is the ability to trace longer-term consequences. And this is particularly important here, because efforts to make charity more “rational” have historically had an unfortunate habit of producing unintended consequences. 
The “scientific philanthropy” movement of the early 20th century, for instance (which counted many of the biggest donors and foundations of the era among its followers) had its roots in the 19th century charity organisation societies, which were primarily concerned with addressing inefficiency and duplication of charitable effort at a local level, and ensuring that individual giving was sufficiently careful to distinguish between ‘deserving’ and undeserving’ cases (as outlined further in this previous article). Over time, how...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What We Owe The Future is out today, published by William MacAskill on August 16, 2022 on The Effective Altruism Forum. So, as some of you might have noticed, there’s been a little bit of media attention about effective altruism / longtermism / me recently. This was all in the run up to my new book, What We Owe The Future, which is out today! I think I’ve worked harder on this book than I’ve worked on any other single project in my life. I personally spent something like three and a half times as much work on it as Doing Good Better, and I got enormous help from my team, who contributed more work in total than I did. At different times, that team included (in alphabetical order): Frankie Andersen-Wood, Leopold Aschenbrenner, Stephen Clare, Max Daniel, Eirin Evjen, John Halstead, Laura Pomarius, Luisa Rodriguez, and Aron Vallinder. Many more people helped immensely, such as Joao Fabiano with fact checking and the bibliography, Taylor Jones with graphic design, AJ Jacobs with storytelling, Joe Carlsmith with strategy and style, and Fin Moorhouse and Ketan Ramakrishnan with writing around launch. I also benefited from the in-depth advice of dozens of academic consultants and advisors, and dozens more expert reviewers. I want to give a particular thank-you and shout out to Abie Rohrig, who joined after the book was written, to run the publicity campaign. I’m immensely grateful to everyone who contributed; the book would have been a total turd without them. The book is not perfect — reading the audiobook made vivid to me how many things I’d already like to change — but I’m overall happy with how it turned out. The primary aim is to introduce the idea of longtermism to a broader audience, but I think there are hopefully some things that’ll be of interest to engaged EAs, too: there are deep dives on moral contingency, value lock-in, civilisation collapse and recovery, stagnation, population ethics and the value of the future. It also tries to bring a historical perspective to bear on these issues more often than is usual in the standard discussions. The book is about longtermism (in its “weak” form) — the idea that we should be doing much more to protect the interests of future generations. (Alt: that protecting the interests of future generations should be a key moral priority of our time.). Some of you have worried (very reasonably!) that we should simplify messages to “holy shit, x-risk!”. I respond to that worry here: I think the line of argument is a good one, but I don't see promoting concern for future generations as inconsistent with also talking about how grave the catastrophic risks we face in the next few decades are. In the comments, please AMA - questions don’t just have to be about the book, can be about EA, philosophy, fire raves, or whatever you like! (At worst, I’ll choose to not reply.) Things are pretty busy at the moment, but I’ll carve out a couple of hours next week to respond to as many questions as I can. If you want to buy the book, here’s the link I recommend:. (I’m using different links in different media because bookseller diversity helps with bestseller lists.) If you’d like to help with the launch, please also consider leaving an honest review on Amazon or Good Reads! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Parable of the Boy Who Cried 5% Chance of Wolf, published by Kat Woods on August 15, 2022 on The Effective Altruism Forum. Epistemic status: a parable making a moderately strong claim about statistics Once upon a time, there was a boy who cried "there's a 5% chance there's a wolf!" The villagers came running, saw no wolf, and said "He said there was a wolf and there was not. Thus his probabilities are wrong and he's an alarmist." On the second day, the boy heard some rustling in the bushes and cried "there's a 10% chance there's a wolf!" Some villagers ran out and some did not. There was no wolf. The wolf-skeptics who stayed in bed felt smug. "That boy is always saying there is a wolf, but there isn't." "I didn't say there was a wolf!" cried the boy. "I was estimating the probability at low, but high enough. A false alarm is much less costly than a missed detection when it comes to dying! The expected value is good!" The villagers didn't understand the boy and ignored him. On the third day, the boy heard some sounds he couldn't identify but seemed wolf-y. "There's a 15% chance there's a wolf!" he cried. No villagers came. It was a wolf. They were all eaten. Because the villagers did not think probabilistically. The moral of the story is that we should expect to have a large number of false alarms before a catastrophe hits and that is not strong evidence against impending but improbable catastrophe. Each time somebody put a low but high enough probability on a pandemic being about to start, they weren't wrong when it didn't pan out. H1N1 and SARS and so forth didn't become global pandemics. But they could have. They had a low probability, but high enough to raise alarms. The problem is that people then thought to themselves "Look! People freaked out about those last ones and it was fine, so people are terrible at predictions and alarmist and we shouldn't worry about pandemics" And then COVID-19 happened. This will happen again for other things. People will be raising the alarm about something, and in the media, the nuanced thinking about probabilities will be washed out. You'll hear people saying that X will definitely fuck everything up very soon. And it doesn't. And when the catastrophe doesn't happen, don't over-update. Don't say, "They cried wolf before and nothing happened, thus they are no longer credible." Say "I wonder what probability they or I should put on it? Is that high enough to set up the proper precautions?" When somebody says that nuclear war hasn't happened yet despite all the scares, when somebody reminds you about the AI winter where nothing was happening in it despite all the hype, remember the boy who cried a 5% chance of wolf. Originally posted on my Twitter and personal blog. Reminder that if this reaches 25 upvotes, you can listen to this post on your podcast player using the Nonlinear Library. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
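A back-of-the-envelope version of the boy's expected-value argument, with made-up costs: when a missed wolf is vastly more costly than a wasted trip, responding is worthwhile even at low probabilities. The specific numbers below are illustrative assumptions, not anything from the original post.

```python
# Expected-loss comparison for responding to vs. ignoring a low-probability alarm.
COST_OF_RESPONDING = 1        # a wasted trip to the pasture (assumed)
COST_OF_MISSED_WOLF = 1_000   # everyone gets eaten (assumed)

for p_wolf in (0.05, 0.10, 0.15):
    expected_loss_if_respond = COST_OF_RESPONDING
    expected_loss_if_ignore = p_wolf * COST_OF_MISSED_WOLF
    decision = "respond" if expected_loss_if_respond < expected_loss_if_ignore else "ignore"
    print(f"P(wolf)={p_wolf:.0%}: responding costs {expected_loss_if_respond}, "
          f"ignoring costs {expected_loss_if_ignore:.0f} in expectation -> {decision}")
```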
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Historical EA funding data, published by TylerMaule on August 14, 2022 on The Effective Altruism Forum. Summary: I have consolidated publicly available grants data from EA organizations into a spreadsheet, which I intend to update periodically. Totals pictured below. Observations: $2.6Bn in grants on record since 2012, about 63% of which went to Global Health. With the addition of FTX and impressive fundraising by GiveWell, Animal Welfare looks even more neglected in relative terms—effective animal charities will likely receive something like 5% of EA funding in 2022, the smallest figure since 2015 by a wide margin. Notes on the data. NB: This is just one observer's tally of public data. Sources are cited in the spreadsheet; I am happy to correct any errors as they are pointed out. GiveWell: GiveWell uses a 'metrics year' starting 1 Feb (all other sources were tabulated by calendar year). GiveWell started breaking out 'funds directed' vs 'funds raised' for metrics year 2021. Previous years refer to 'money moved', which is close but not exactly the same. I have excluded funds directed through GiveWell by Open Phil and EA Funds, as those are already included in this data set. Open Phil: Open Phil labels their grants using 25 'focus areas'. My subjective mapping to broader cause area is laid out in the spreadsheet. Note that about 20% of funds granted by Open Phil have gone to 'other' areas such as Criminal Justice Reform; these are omitted from the summary figures but still tabulated elsewhere in the spreadsheet. General: 2022 estimates are a bit speculative, but a reasonable guess as to how funding will look with the addition of the Future Fund. The total Global Health figure for 2021 (~$400M) looks surprisingly low considering e.g. that GiveWell just reported over $500M funds directed for 2021 (including Open Phil and EA Funds). I think that this is accounted for by (a) GiveWell's metrics year extending through Jan '22 (Open Phil reported $26M of Global Health grants that month), and (b) the possibility that some of this was 'directed' i.e. 'firm commitment of $X to org Y' by Open Phil in 2021, but paid out or recorded to the grants database months later; still seeking explicit confirmation here. Future work: If there is any presently available data that seems worth adding, let me know and I may consider it. I may be interested in a more comprehensive analysis on this topic, e.g. using the full budget of every GiveWell-recommended charity. I'd be interested to hear if anyone has access to this type of data, or if this type of project seems particularly valuable. Thanks to Niel Bowerman for helpful comments. Currently the bottleneck to synchronizing data is the GiveWell annual metrics report, which is typically published in the second half of the following year. I may update more often if that is useful. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the EA Gather Town Event Hall as a global hub for online events, published by Emrik on August 13, 2022 on The Effective Altruism Forum. The EAGT Event Hall (contact) is a virtual space neighbouring the main EA Gather Town (EAGT) for hosting conferences, talks, discussion groups, and more. It looks kinda like a game, but whatever, I think there are strong reasons to prefer this platform over alternatives. More on that later. We're testing the space for the first time with our unconference on Sunday 12:30 UTC. We'd appreciate it if you joined to help us test it out! You'll also be helping us kickstart something that might turn into a noteworthy part of EA infrastructure. Our hope is that this will be one of many online events organically hosted here by various EA groups around the world. The main point of this post is that having a central venue for online events will help build and maintain an always-active meeting place for the EA community. Events hosted in the Hall will help familiarise people with using the space generally for coworking together, making connections, and having valuable conversations. Insofar as EAG conferences produce value by connecting people from across the world who then fly back to where they live, EAGT produces value for many of the same reasons without having to pay for the flight ticket in advance. How does it work? When I step near another person's character, our audio and video connects and I can engage them in a heated debate on the monistic ontology of blueberry muffins. Alternatively, I could book Auditorium A and lecture an entire hall about it. When I'm next to one of the microphones, I can broadcast to the entire room, but I cannot hear anyone in the audience unless I right-click to "spotlight" them. This comes in handy when e.g. they have their hand raised and I want their question to be heard by everyone. The audience hearing range is limited to include the speaker and the two people sitting right next to them. This allows them to speak to their neighbours if they wish without disturbing anyone else. Example event: The Unconference The easiest way to explain what you can do in the space is probably just to take you through our agenda for the unconference as an example. Feel free to take notes, or contact us and we'd be happy to help organise something here or just answer your questions. Before the event, we've emailed a document with some information for the speakers to help prepare them for the event. 1. Courtyard & intro When first logging into the space, everyone will spawn near the Earth portal (see image) that links the EAGT Event Hall and the EA coworking and lounge space, where the community is usually found coworking, talking, or socialising. When the event starts, Nguyên will give an introduction (broadcasting from the stage in the image) and share some practical information (like the fact that you can raise your hand with 6 and put it down again with 0). We will then proceed through the door northeast of the stage. 2. Foyer & browsing From the Foyer, all the session rooms can be found. The full schedule is linked via the big blue poster but you can also see it on the signs above each room. Two sessions will be held in parallel, and the participants may decide for themselves which room to enter. 
They are also encouraged to just roam, connect, and explore the main space through the portal. 3. Session rooms & group discussions & embeddable Padlets Each session lasts 30 minutes, but how it's structured is up to the speakers. For example, they could talk for 5 minutes, have 5 minutes for taking questions directly from the audience, and then 20 minutes where each table discusses among themselves and jots ideas/feedback down on their Padlets (unique per table). There are a maximum of 5 chairs per table, because bigg...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the GovAI Policy Team, published by MarkusAnderljung on August 1, 2022 on The Effective Altruism Forum. The AI governance space needs more rigorous work on what influential actors (e.g. governments and AI labs) should do in the next few years to prepare the world for advanced AI. We’re setting up a Policy Team at the Centre for the Governance of AI (GovAI) to help address this gap. The team will primarily focus on AI policy development from a long-run perspective. It will also spend some time on advising and advocating for recommendations, though we expect to lean heavily on other actors for that. Our work will be most relevant for the governments of the US, UK, and EU, as well as AI labs. We plan to focus on a handful of bets at a time. Initially, we are likely to pursue: Compute governance: Is compute a particularly useful governance node for AI? If so, how can this tool be used to meet various AI governance goals? Potential goals for compute governance include monitoring capabilities, restricting access to capabilities, and identifying high-risk systems such that they can be put to significant scrutiny. Corporate governance: What kinds of corporate governance measures should frontier labs adopt? Questions include: What can we learn from other industries to improve risk management practices? How can the board of directors most effectively oversee management? How should ethics boards be designed? AI regulation: What present-day AI regulation would be most helpful for managing risks from advanced AI systems? Example questions include: Should foundation models be a regulatory target? What features of AI systems should be mandated by AI regulation? How can we help create more adaptive and expert regulatory ecosystems? We’ll try several approaches to AI policy development, such as: Back-chaining from desirable outcomes to concrete policy recommendations (e.g. how can we increase the chance there are effective international treaties on AI in the future?); Considering what should be done today to prepare for some particular event (e.g. the US government makes an Apollo Program-level investment in AI); Articulating and evaluating intermediate policy goals (e.g. “ensure the world’s most powerful AI models receive external scrutiny by experts without causing diffusion of capabilities”); Analyzing what can and should be done with specific governance levers (e.g. the three bets outlined above); Evaluating existing policy recommendations (e.g. increasing high-skilled immigration to the US and UK); Providing concrete advice to decision-makers (e.g. providing input on the design of the US National AI Research Resource). Over time, we plan to evaluate which bets and approaches are most fruitful and refine our focus accordingly. The team currently consists of Jonas Schuett (specialization: corporate governance), Lennart Heim (specialization: compute governance), and myself (team lead). We’ll also collaborate with the rest of GovAI and people at other organizations. We’re looking to grow the team. We’re hiring Research Scholars (deadline: August 7th), hoping to add 2 people to the team. 
We’re also planning to work with people in the GovAI 3-month Fellowship (Winter Fellowship deadline: August 4th) and are likely to open applications for Research Fellows in the near future (you can submit expressions of interest now). We’re happy for new staff to work out of Oxford (where most of GovAI is based), the Bay Area (where I am based), or remotely. If you’d like to learn more, feel free to leave a comment below or reach out to me at markus.anderljung@governance.ai. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA in the mainstream media: if you're not at the table, you're on the menu, published by Jack Lewars on July 30, 2022 on The Effective Altruism Forum. Another article just dropped criticising Effective Altruism, this time in the Wall Street Journal. I'm not linking to it here because it's garbage, but if you really want to look it up, the headline is 'Effective Altruism is neither'. I'd encourage you not to, so as not to help its readership figures. Other people have written very good forum posts about why we should care about perception of EA, collecting other examples of damaging commentary and suggesting some things we could do to help improve the community's image. However, I haven't seen anyone suggesting that we should create an EA PR agency, or hire a team at a PR firm to influence perception of EA. I think this seems like a very good idea. It seems at the moment like EA is leaving a vacuum, which is being filled by criticism. This is happening in multiple languages. Much of it could easily be countered but we're not in the game. There are all sorts of reasons not to worry too much about this particular opinion piece. Its criticisms are transparently bad, I suspect even to the audience it's written for - suggesting that pandemic preparedness is 'shutting the door after the horse has bolted' is self-evidently stupid. I doubt the readers of the WSJ opinion page are a high priority, high potential audience for EA. Even if it was devastating criticism aimed a key audience, it might have bad reach and we'd only amplify it by responding. However, the point is that we should have some experts deciding this, rather than the current situation where no one seems to be monitoring this or trying to respond on our behalf. It seems to me that some dedicated PR professionals could fairly quickly move to a) place more positive pieces about EA in the mainstream media; b) give more exposure to high fidelity, convincing messengers from the community (e.g. Will MacAskill); c) become the go-to place for comment on damaging pieces (which currently don't ever seem to involve a response from anyone in the community); and even d) manage to head-off some of the most illogical, most bad-faith criticisms before they're published. I've been advised by people in PR that the most cost-effective way to do this would be to hire a team of 2-3 full-time people from the PR sector and pay them at market rates (so I guess ~$500k/year). It's possible that it would be better to do this by hiring a PR agency with a pre-existing team (which has fewer start up costs) but people who work in PR say that, over time, you just end up paying exorbitant fees if you take this approach. I'd be happy with either, but instinctively lean towards the first. In some ways, I think EA has already missed several golden PR opportunities, not least the release of several high profile books (where there has been some decent PR but I feel there probably could have been more); and the recent pandemic, which validated much of what the community has been saying for a long time. 
It would be good to avoid missing future opportunities; and also satisfying to see some counter-programming to take on these sporadic poor-quality/bad-faith critiques. Call to action: if you agree, please comment or upvote; but, more importantly, send this on to people who might be able to fund this or otherwise make it happen. If you want to discuss the idea or think you can help, please DM me. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: the Prague Fall Season, published by IrenaK on July 29, 2022 on The Effective Altruism Forum. tl;dr: We invite the broader EA/Rationality community to spend this fall in Prague. There will be a high concentration of global EA/Rationality events, workshops, retreats and seminars happening in and around Prague. While the events alone are interesting, we believe there will be additional benefits to staying around for a longer period of time, and an opportunity to talk, create and meet caused by the higher density of people in one place. We will have a large coworking space available in Prague for people to work from and socialize. We also want to share with the world what we like about Prague and the local community. Prague seems to be a good place for thinking about hard problems - both Albert Einstein and Johannes Kepler made substantial progress on their fundamental research while living in Prague. We think you would enjoy being part of the Prague Fall Season if you: Want to spend an extended period of time this fall with other like-minded people, concentrated in one area, building momentum together. Are interested in exploring cities, cultures, and aesthetics different from the US or UK hubs. Are curious about the Czech EA/Rationality culture and want to spend some time with us. Want to work on some of the projects based in Prague. Want to experience what it’s like to live and work in Prague. If you are interested you can: Apply for a residency. Apply for one of the jobs based out of Prague. Apply to work from our new office space. Apply for a CFAR workshop. ...or just visit us and we will figure it out! Why Prague? The events happening in Prague this autumn provide a Schelling point. Prague is a second-tier EA Hub, smaller than London or the Bay Area, but comparable or larger than probably any other city in continental Europe. Prague has a thriving local EA community, a newly established alignment research group, ~20 full-time people working in/with high-impact organizations (e.g. Metaculus, ESPR, ALLFED, CFAR), and about a hundred people in the broader EA and rationalist communities. We aim for the Season to bring in, on average, an additional 30-60 people staying for longer, and a few hundred shorter-term visitors, who will participate in some event and stay for a few days before or after. In our experience, Prague is a very good place to live - it has the benefits of a modern large city, while being walkable or bikeable, offering a high quality of living, and overall unique aesthetics and vibes (meaningfully different from other similar hubs); a likely fit for some creative high-impact people who seek a change in their environment. I often get questions about why there is such a big concentration of successful and interesting people in the Czech Republic, especially in the EA/Rationality community. My answer usually goes something like this. On a more serious note, we are quite excited about some of the virtues of the local EA/Rationality culture and would like to share them with the global community. If I were to summarize some of the key ones, they would be: Doing what's needed - one secret superpower we aim for is to do what needs to be done, even if the quests are not shiny and glittering with status.
Sanity - Prague often feels like a saner and more grounded place, which has distinct advantages and disadvantages; for example, if you feel you are too steady and unambitious, or want to upend everything in your life, the vibes of Silicon Valley or the Bay Area are likely better for moving in that direction. In contrast, if you feel that the competition for attention, networking, or similar sources of stress is distracting you from useful work, and you would benefit from having some time with less of that, Prague may be better for a few month...
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interesting vs. Important Work - A Place EA is Prioritizing Poorly, published by Davidmanheim on July 28, 2022 on The Effective Altruism Forum. There are many important issues in the world, and many interesting topics. But these are not the same thing, and we should beware of suspicious convergence. Given that, our assumption should be that the most interesting topics we hear about are far less important than the attention they receive would suggest. Heeding Scott Alexander's recent warning, I'll therefore ask much more specifically: what are the most intellectually interesting topics in Effective Altruism? I'll then suggest that we should be doing less work on them, and list a few concrete suggestions for how to do that. What are the interesting things? Here are some of my concrete candidates for the most interesting work: infinite ethics, theoretical AI safety, rationality techniques, and writing high-level critiques of EA. And to be clear, all of these ARE important. But the number of people we need working on them should probably be more limited than the current trajectory implies, and we should probably de-emphasize status for the most theoretical work. To be clear, I love GPI, FHI, CSER, MIRI, and many other orgs doing this work. The people I know at each org are great, and I think that many of the things they do are, in fact, really important. And I like the work they do - not only do I think it's important, it's also SUPER interesting, especially to people who like philosophy, math, and/or economics. But the convergence between important and interesting is exactly the problem I'm pointing towards. Motivating Theoretical Model: Duncan Sabien talks about Monks of Magnitude, where different people work on things that have different feedback-loop lengths, from 1 day, to 10 days, to people who spend 10,000 days thinking. He more recently mentioned that he noticed "people continuously vanishing higher into the tower," that is, focusing on more abstract and harder-to-evaluate issues, and that very few people have done the opposite. One commenter, Ben Weinstein-Raun, suggested several reasons, among them that longer-loop work is more visible and higher status. I think this critique fits the same model, where we should be suspicious that such long-loop work is over-produced. (Another important issue is that "it's easier to pass yourself off as a long-looper when you're really doing nothing," but that's a different discussion.) The natural tendency to do work that is more conceptual and/or harder to pin to a concrete, measurable outcome is one we should fight back against, since by default it is overproduced. The basic reason it is overproduced is that people who are even slightly affected by status or interesting research, i.e. everyone, will give it at least slightly more attention than warranted; and further, because others are already focused on it, the marginal value is lower. This is not to say that the optimal amount of fun and interesting research is zero, nor that all fun and interesting work is unimportant. We do need 10,000-day monks - and lots of interesting questions exist for long-termism that make them significant moral priorities. And I agree with the argument for a form of long-termism.
But this isn't a contradiction - work on long-termism can be concrete and visible, isn't necessarily conceptual, and doesn't necessarily involve slow feedback loops. Towards fixing the problem: Effective Altruism needs to be effective, and that means we need evaluable outputs wherever possible. First, anyone and everyone attempting to be impactful needs a theory of change, and an output that has some way of impacting the world. That means everyone, especially academics and researchers, should make this model clear, at least to themselves, but ideally also to others. If you're writing f...
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The value of content density, published by Joey on July 20, 2022 on The Effective Altruism Forum. Different communities have different norms when it comes to writing and content production. One stylistic difference of the EA community is that forum posts are often comparatively long, deep, and advanced. I really like aspects of this, and I think that having a relatively high level of depth can be very nice to offset the other mediums of communication EA uses (e.g., Facebook). However, I wish EA Forum authors would give content density a little more consideration. Different books have different levels of content density. Some have a strong concept on almost every page (a good example of this would be Ray Dalio's Principles), while other books could be summarized fairly easily into a single, short blog post (I think Carol Dweck's Mindset falls into this category). There is often a strong social pressure to make something ~book length, so even if it's a bit concept-light it will get stretched to meet a word count. In the world of blog posts no such limit exists, but there are social norms that tend to affect the average length. I think for entertainment blog posts there can be a lot of value in longer-form writing (e.g., Wait But Why's writing), but for blog posts that are aiming to get a concept across (like this one) a leaner form of writing would be better. I do agree we should try to make writing entertaining, just not with huge length increases as a trade-off. (A good example of both high content density + low word count.) Why we should aim for higher content density: Time limited: Many EA Forum readers are time limited, and thus it is often a worthwhile trade to write a post that covers 90% of the value and is half as long. I often see blog posts that have a good concept or two, but due to how long they are, they get far less engagement and readership than they would otherwise. Counterfactuals: There is quite a volume of posts on the EA Forum, and this directly ties into most readers' opportunity cost. Oftentimes the trade-off might literally be reading two EA Forum posts vs. one that is double the length. Approachability: EAs are a fairly technical bunch, and although the use of jargon has been written about, having a high volume of content with relatively sparse concepts also makes it harder for new people to get up to speed. It can make the forum as a whole seem more intimidating than it has to be. Possible heuristics: Length: The rule of thumb I use for blog posts is ~0.5-1 page per major concept and 1-3 pages per blog post. This kind of cap creates a strong force toward brevity and conveying concepts concisely. It does sometimes require major editing (the quote "I would have written you a shorter letter but did not have the time" comes to mind). TL;DR and summaries: I think the EA Forum makes great use of these, and they are worth including in almost every post that is over one page. Top concept leveraging: I think blog posts often try to cover a large number of topics that could easily be broken into multiple posts. For example, having this data on value drift post be separate from this strategies for preventing value drift post makes each piece of writing much more individually referable and digestible.
More information: Some examples of content-dense writing include: EA Dedicates; Critiques of EA that I want to read; A summary of every Replacing Guilt post. Further reading: Steven Pinker has some really well-articulated and pointed suggestions for community writing norms, many of which I think would apply directly to the EA Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reducing nightmares as a cause area, published by Drew Housman on July 18, 2022 on The Effective Altruism Forum. I am someone who suffers from truly terrible nightmares. I've spent many hours running from murderous villains, being killed by monsters, and being forced to watch my loved ones suffer. If you haven't had a vivid nightmare, you must understand that they feel 100% real. And if you're open to the idea that every moment of experience matters, even when we're asleep, then you've gotta care about nightmares. To me, nightmares fit the classic EA criteria of being important, neglected, and tractable. Nightmares are Important. Nightmares tend to have a negative effect on those who experience them. When they are related to PTSD, they lead to "decreased psychological and physiological functioning." Other studies link them to "waking psychopathology." And they negatively affect overall sleep quality. According to the American Academy of Sleep Medicine, 50-85% of people have the occasional nightmare. (They don't give a source for that though 🤨) One thing researchers seem to agree on is that traumatic life experiences cause nightmares. I think it's safe to assume that there are millions of people around the world suffering through traumatic experiences on a semi-regular basis. How tragic that these people don't just suffer while awake, but also must be terrorized in their sleep. As for me, I've seen and done things in nightmares that are so horrific that they leave me shaken throughout the day. I can't even tell most people about them because I feel like they'd look at me differently. I can't go into my tech job and be like, "I don't think I'm going to be as productive today because I just had to guillotine my little sister in a dream and it felt very real and it's kind of messing with my head." And I've lived a pretty sheltered life! I can't imagine what I'd be like if I had fought in a war. Someone who goes off to battle and then gets nightmares of being back on the battlefield really, actually suffers during those dreams. Night after night after night. I keep hammering the point about the suffering being real because I think it's easy to dismiss nightmares as little aberrations that can be easily shrugged off. It feels like the cultural consensus around nightmares, at least in the USA, is, "It didn't really happen, so what's the big deal?" Or people will say, "They serve an evolutionary purpose." I want to be like, you don't get it. What happened in that nightmare really did happen to me! And there are a whole bunch of quirks of evolution that cause needless suffering. I put nightmares into the category of things we should fight to get rid of, evolution be damned. Finally, if you buy into the idea that suffering is on a log scale, then it might be really, really bad to be a soldier who has to relive their torture several times a week. Nightmares are Neglected. I searched the EA Forum for posts on nightmares and didn't find a single one. Anecdotally, when I bring this up with people they seem highly skeptical that nightmares are a problem. Perhaps I'm the crazy one, or maybe they've just never had a bad enough nightmare.
The Nightmare Problem is Tractable. From what I can tell, Imagery Rehearsal Treatment shows great promise (68% of subjects in one study decreased their nightmares), so perhaps finding ways to publicize that treatment more would have a big effect. What if someone made a free website or app that walks people through the steps of imagery rehearsal treatment? Seems relatively low effort with a potentially high payoff. What I'd like to see: Clearer research into how many people suffer from nightmares, and how bad that suffering is. Clearer research into ways to prevent nightmares. We should eradicate the worst forms of nightmares. I know this might seem a little ...