The Nonlinear Library: EA Forum Top Posts

Author: The Nonlinear Fund


Description

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
438 Episodes
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: My mistakes on the path to impact, published by Denise_Melchin on the effective altruism forum. Doing a lot of good has been a major priority in my life for several years now. Unfortunately I made some substantial mistakes which have lowered my expected impact a lot, and I am on a less promising trajectory than I would have expected a few years ago. In the hope that other people can learn from my mistakes, I thought it made sense to write them up here! I will attempt to list the mistakes which lowered my impact most over the past several years in this post and then analyse their causes. Writing this post and previous drafts has also been very personally useful to me, and I can recommend undertaking such an analysis. Please keep in mind that my analysis of my mistakes is likely at least a bit misguided and incomplete. It would have been nice to condense the post a bit more and structure it better, but having already spent a lot of time on it and wanting to move on to other projects, I thought it would be best not to let the perfect be the enemy of the good! To put my mistakes into context, I will give a brief outline of what happened in my career-related life in the past several years before discussing what I consider to be my main mistakes. Background I came across the EA Community in 2012, a few months before I started university. Before that point my goal had always been to become a researcher. Until early 2017, I did a mathematics degree in Germany and received a couple of scholarships. I did a lot of ‘EA volunteering’ over the years, mostly community building and large-scale grantmaking. I also did two unpaid internships at EA orgs, one during my degree and one after graduating, in summer 2017. After completing my summer internship, I started to try to find a role at an EA org. I applied to ~7 research and grantmaking roles in 2018. I got to the last stage 4 times, but received no offers. The closest I got was receiving a 3-month-trial offer as a Research Analyst at Open Phil, but it turned out they were unable to provide visas. In 2019, I worked as a Research Assistant for a researcher at an EA-aligned university institution on a grant for a few hundred hours. I stopped as there seemed to be no route to a secure position and the role did not seem like a good fit. In late 2019 I applied for jobs suitable for STEM graduates with no experience. I also stopped doing most of my EA volunteering. In January 2020 I began to work in an entry-level data analyst role in the UK Civil Service which I have been really happy with. In November, after 6.5 months of full-time-equivalent work, I received a promotion to a more senior role with management responsibility and a significant pay rise. First I am going to discuss what I think I did wrong from a first-order practical perspective. Afterwards I will explain which errors in my decision-making process I consider the likely culprits for these mistakes - the patterns of behaviour which need to be changed to avoid similar mistakes in the future. A lot of the following seems pretty silly to me now, and I struggle to imagine how I ever fully bought into the mistakes and systematic errors in my thinking in the first place. But here we go! What did I get wrong? I did not build broad career capital nor keep my options open.
During my degree, I mostly focused on EA community building efforts as well as making good donation decisions. I made few attempts to build skills for the type of work I was most interested in doing (research) or skills that would be particularly useful for higher earning paths (e.g. programming), especially later on. My only internships were at EA organisations in research roles. I also stopped trying to do well in my degree later on, and stopped my previously-substantial involvement in political work. In my firs...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Growth and the case against randomista development, published by HaukeHillebrandt, John G. Halstead on the effective altruism forum. Update, 3/8/2021: I (Hauke) gave a talk at Effective Altruism Global on this post: Summary Randomista development (RD) is a form of development economics which evaluates and promotes interventions that can be tested by randomised controlled trials (RCTs). It is exemplified by GiveWell (which primarily works in health) and the randomista movement in economics (which primarily works in economic development). Here we argue for the following claims, which we believe to be quite weak: 1. Prominent economists make plausible arguments which suggest that research on and advocacy for economic growth in low- and middle-income countries is more cost-effective than the things funded by proponents of randomista development. 2. Effective altruists have devoted too little attention to these arguments. 3. Assessing the soundness of these arguments should be a key focus for current-generation-focused effective altruists over the next few years. We hope to start a conversation on these questions, and potentially to cause a major reorientation within EA. We also believe the following stronger claims: 4. Improving health is not the best way to increase growth. 5. A ~4 person-year research effort will find donation opportunities working on economic growth in LMICs which are substantially better than GiveWell’s top charities from a current-generation human welfare-focused point of view. However, economic growth is not all that matters. GDP misses many crucial determinants of human welfare, including leisure time, inequality, foregone consumption from investment, public goods, social connection, life expectancy, and so on. A top priority for effective altruists should be to assess the best way to increase human welfare outside of the constraints of randomista development, i.e. allowing interventions that have not been or cannot be tested by RCTs. We proceed as follows: We define randomista development and contrast it with research and advocacy for growth-friendly policies in low- and middle-income countries. We show that randomista development is overrepresented in EA, and that, in contradistinction, research on and advocacy for growth-friendly economic policy (we refer to this as growth throughout) is underrepresented. We then show why some prominent economists believe that, a priori, growth is much more effective than most RD interventions. We present a quantitative model that tries to formalize these intuitions and allows us to compare global development interventions with economic growth interventions. The model suggests that under plausible assumptions a hypothetical growth intervention can be thousands of times more cost-effective than typical RD interventions such as cash transfers. However, when these assumptions are relaxed and compared to the very good RD interventions, growth interventions are on a similar level of effectiveness as RD interventions. We consider various possible objections and qualifications to our argument. Acknowledgements Thanks to Stefan Schubert, Stephen Clare, Greg Lewis, Michael Wiebe, Sjir Hoeijmakers, Johannes Ackva, Gregory Thwaites, Will MacAskill, Aidan Goth, Sasha Cooper, and Carl Shulman for comments. Any mistakes are our own. Opinions are ours, not those of our employers.
Marinella Capriati at GiveWell commented on this piece, and the piece does not represent her views or those of GiveWell. 1. Defining Randomista Development We define randomista development (RD) as an approach to development economics which investigates, evaluates and recommends only interventions which can be tested by randomised controlled trials (RCTs). RD can take low-risk or more “hits-based” forms. Effective altruists have especially focused on the low-risk for...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Announcing my retirement, published by Aaron Gertler on the effective altruism forum. A few sharp-eyed readers noticed my imminent departure from CEA in our last quarterly report. Gold stars all around! My last day as our content specialist — and thus, my last day helping to run the Forum — is December 10th. The other moderators will continue to handle the basics, and we’re in the process of hiring my replacement. (Let me know if anyone comes to mind!) Managing this place was fun. It wasn’t always fun, but — on the whole, a good time. I’ve enjoyed giving feedback to a few hundred people, organizing some interesting AMAs, running a writing contest, building up the Digest, hosting workshops for EA groups around the world, and deleting a truly staggering number of comments advertising escort services (I’ll spare you the link). More broadly, I’ve felt a continual sense of admiration for everyone who cares about the Forum and tries to make it better — by reading, voting, posting, crossposting, commenting, tagging, Wiki-editing, bug-reporting, and/or moderating. Collectively, you’ve put in tens of thousands of hours of work to develop our strange, complicated, unique website, with scant compensation besides karma. (Now that I’m leaving, it’s time to be honest — despite the rumors, our karma isn’t the kind that gets you a better afterlife.) Thank you for everything you’ve done to make this job what it was. What’s next? In January, I’ll join Open Philanthropy as their communications officer, working to help their researchers publish more of their work. I’ll also be joining Effective Giving Quest as their first partnered streamer. Wish me luck: moderating this place sometimes felt like herding cats, but it’s nothing compared to Twitch chat. My Forum comments will be less frequent, but probably spicier. thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: My current impressions on career choice for longtermists, published by Holden Karnofsky on the effective altruism forum. This post summarizes the way I currently think about career choice for longtermists. I have put much less time into thinking about this than 80,000 Hours, but I think it's valuable for there to be multiple perspectives on this topic out there. Edited to add: see below for why I chose to focus on longtermism in this post. While the jobs I list overlap heavily with the jobs 80,000 Hours lists, I organize them and conceptualize them differently. 80,000 Hours tends to emphasize "paths" to particular roles working on particular causes; by contrast, I emphasize "aptitudes" one can build in a wide variety of roles and causes (including non-effective-altruist organizations) and then apply to a wide variety of longtermist-relevant jobs (often with options working on more than one cause). Example aptitudes include: "helping organizations achieve their objectives via good business practices," "evaluating claims against each other," "communicating already-existing ideas to not-yet-sold audiences," etc. (Other frameworks for career choice include starting with causes (AI safety, biorisk, etc.) or heuristics ("Do work you can be great at," "Do work that builds your career capital and gives you more options.") I tend to feel people should consider multiple frameworks when making career choices, since any one framework can contain useful insight, but risks being too dogmatic and specific for individual cases.) For each aptitude I list, I include ideas for how to explore the aptitude and tell whether one is on track. Something I like about an aptitude-based framework is that it is often relatively straightforward to get a sense of one's promise for, and progress on, a given "aptitude" if one chooses to do so. This contrasts with cause-based and path-based approaches, where there's a lot of happenstance in whether there is a job available in a given cause or on a given path, making it hard for many people to get a clear sense of their fit for their first-choice cause/path and making it hard to know what to do next. This framework won't make it easier for people to get the jobs they want, but it might make it easier for them to start learning about what sort of work is and isn't likely to be a fit. I’ve tried to list aptitudes that seem to have relatively high potential for contributing directly to longtermist goals. I’m sure there are aptitudes I should have included and didn’t, including aptitudes that don’t seem particularly promising from a longtermist perspective now but could become more so in the future. In many cases, developing a listed aptitude is no guarantee of being able to get a job directly focused on top longtermist goals. Longtermism is a fairly young lens on the world, and there are (at least today) a relatively small number of jobs fitting that description. However, I also believe that even if one never gets such a job, there are a lot of opportunities to contribute to top longtermist goals, using whatever job and aptitudes one does have. To flesh out this view, I lay out an "aptitude-agnostic" vision for contributing to longtermism. Some longtermism-relevant aptitudes "Organization building, running, and boosting" aptitudes[1] Basic profile: helping an organization by bringing "generally useful" skills to it. 
By "generally useful" skills, I mean skills that could help a wide variety of organizations accomplish a wide variety of different objectives. Such skills could include: Business operations and project management (including setting objectives, metrics, etc.) People management and management coaching (some manager jobs require specialized skills, but some just require general management-associated skills) Executive leadership (setting and enfo...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation, published by EA applicant on the effective altruism forum. (I am writing this post under a pseudonym because I don’t want potential future non-EA employers to find this with a quick Google search. Initially my name could be found on the CV linked in the text, but after this post was shared much more widely than I had expected, I got cold feet and removed it.) In the past 12 months, I applied for 20 positions in the EA community. I didn’t get any offer. At the end of this post, I list all those positions, and how much time I spent in the application process. Before that, I write about why I think more posts like this could be useful. Please note: The positions were all related to long-termism, EA movement building, or meta-activities (e.g. grant-making). To stress this again, I did not apply for any positions in e.g. global health or animal welfare, so what I’m going to say might not apply to these fields. Costs of applications Applying has considerable time-costs. Below, I estimate that I spent 7-8 weeks of full-time work in application processes alone. I guess it would be roughly twice as much if I factored in things like searching for positions, deciding which positions to apply for, or researching visa issues. (Edit: Some organisations reimburse for time spent in work tests/trials. I got paid in 4 of the 20 application processes. I might have gotten paid in more processes if I had advanced further). At least for me, handling multiple rejections was mentally challenging. Additionally, the process may foster resentment towards the EA community. I am aware the following statement is super inaccurate and no one is literally saying that, but sometimes this is the message I felt I was getting from the EA community: “Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constrained. (20 applications later.) Yeah, when we said that we need people, we meant capable people. Not you. You suck.” Why I think more posts like this would have been useful for me Overall, I think it would have helped me to know just how competitive jobs in the EA community (long-termism, movement building, meta-stuff) are. I think I would have been more careful in selecting the positions I applied for and I would probably have started exploring other ways to have an impactful career earlier. Or maybe I would have applied to the same positions, but with lower expectations and less of a feeling of being a total loser that will never contribute anything towards making the world a better place after being rejected once again 😊 Of course, I am just one example, and others will have different experiences. For example, I could imagine that it is easier to get hired by an EA organisation if you have work experience outside of research and hospitals (although many of the positions I applied for were in research or research-related). However, I don’t think I am a very special case.
I know several people who fulfil all of the following criteria: - They studied/are studying at postgraduate level at a highly competitive university (like Oxford) or in a highly competitive subject (like medical school) - They are within the top 5% of their course - They have impressive extracurricular activities (like leading a local EA chapter, having organised successful big events, peer-reviewed publications while studying, .) - They are very motivated and EA aligned - They applied for at least 5 positi...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: EAF’s ballot initiative doubled Zurich’s development aid, published by Jonas Vollmer on the effective altruism forum. Summary In 2016, the Effective Altruism Foundation (EAF), then based in Switzerland, launched a ballot initiative asking to increase the city of Zurich’s development cooperation budget and to allocate it more effectively. In 2018, we coordinated a counterproposal with the city council that preserved the main points of our original initiative and had a high chance of success. In November 2019, the counterproposal passed with a 70% majority. Zurich’s development cooperation budget will thus increase from around $3 million to around $8 million per year. The city will aim to allocate it “based on the available scientific research on effectiveness and cost-effectiveness.” This seems to be the first time that Swiss legislation on development cooperation mentions effectiveness requirements. The initiative cost around $25,000 in financial costs and around $190,000 in opportunity costs. Depending on the assumptions, it raised a present value of $20–160 million in development funding. EAs should consider launching similar initiatives in other Swiss cities and around the world. Initial proposal and signature collection In spring 2016, the Effective Altruism Foundation (EAF), then still based in Basel, Switzerland, launched a ballot initiative asking for the city of Zurich’s development cooperation budget to be increased and to be allocated more effectively. (For information on EAF’s current focus, see this article.) We chose Zurich due to its large budget and leftist/centrist majority. I published an EA Forum post introducing the initiative and a corresponding policy paper (see English translation). (Note: In the EA Forum post, I overestimated the publicity/movement-building benefits and the probability that the original proposal would pass. I overemphasized the quantitative estimates, especially the point estimates, which don’t adequately represent the uncertainty. I underestimated the success probability of a favorable counterproposal. Also, the policy paper should have had a greater focus on hits-based, policy-oriented interventions because I think these have a chance of being even more cost-effective than more “straightforward” approaches and also tend to be viewed more favorably by professionals.) We hired people and coordinated volunteers (mostly animal rights activists we had interacted with before) to collect the required 3,000 signatures (plus 20% safety margin) over six months to get a binding ballot vote. Signatures had to be collected in person in handwritten form. For city-level initiatives, people usually collect about 10 signatures per hour, and paying people to collect signatures costs about $3 per signature on average. Picture: Start of signature collection on 25 May 2016. Picture: Submission of the initiative at Zurich’s city hall on 22 November 2016. The legislation we proposed (see the appendix) focused too strongly on Randomized Controlled Trials (RCTs) and demanded too much of a budget increase (from $3 million to $87 million per year). We made these mistakes because we had internal disagreements about the proposal and did not dedicate enough time to resolving them.
This led to negative initial responses from the city council and influential charities (who thought the budget increase was too extreme, were pessimistic about the odds of success, and disliked the RCT focus), implying a <1% success probability at the ballot because public opinion tends to be heavily influenced by the city council’s official vote recommendation. At that point, we planned to retract the initiative before the vote to prevent negative PR for EA, while still aiming for a favorable counterproposal. Counterproposal As is common for Swiss ballot initiatives, the city d...
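The leverage figures quoted in the summary of that post lend themselves to a simple back-of-the-envelope check. The sketch below only restates the post's rounded numbers ($25,000 in financial costs, $190,000 in opportunity costs, and a present value of $20–160 million raised); the variable names and the resulting ratios are illustrative and are not figures reported by EAF.

```python
# Rough leverage estimate for the Zurich ballot initiative, using the rounded
# figures quoted in the post above. Derived ratios are illustrative only.

financial_cost = 25_000        # USD, per the post
opportunity_cost = 190_000     # USD, per the post
total_cost = financial_cost + opportunity_cost

# Present value of the additional development funding raised, per the post.
value_raised_low, value_raised_high = 20e6, 160e6

leverage_low = value_raised_low / total_cost    # roughly 90x
leverage_high = value_raised_high / total_cost  # roughly 740x
print(f"Roughly {leverage_low:.0f}x to {leverage_high:.0f}x development funding per dollar of cost")
```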
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Is effective altruism growing? An update on the stock of funding vs. people, published by Benjamin_Todd on the effective altruism forum. This is a cross-post from 80,000 Hours. See part 2 on the allocation across cause areas. In 2015, I argued that funding for effective altruism – especially within meta and longtermist areas – had grown faster than the number of people interested in it, and that this was likely to continue. As a result, there would be a funding ‘overhang’, creating skill bottlenecks for the roles needed to deploy this funding. A couple of years ago, I wondered if this trend was starting to reverse. There hadn’t been any new donors on the scale of Good Ventures (the main partner of Open Philanthropy), which meant that total committed funds were growing slowly, giving the number of people a chance to catch up. However, the spectacular asset returns of the last few years and the creation of FTX, seem to have shifted the balance back towards funding. Now the funding overhang seems even larger in both proportional and absolute terms than 2015. In the rest of this post, I make some rough guesses at total committed funds compared to the number of interested people, to see how the balance of funding vs. talent might have changed over time. This will also serve as an update on whether effective altruism is growing – with a focus on what I think are the two most important metrics: the stock of total committed funds, and of committed people. This analysis also made me make a small update in favour of giving now vs. investing to give later. Here’s a summary of what’s coming up: How much funding is committed to effective altruism (going forward)? Around $46 billion. How quickly have these funds grown? About 37% per year since 2015, with much of the growth concentrated in 2020–2021. How much is being donated each year? Around $420 million, which is just over 1% of committed capital, and has grown maybe about 21% per year since 2015. How many committed community members are there? About 7,400 active members and 2,600 ‘committed’ members, growing 10–20% per year 2018–2020, and growing faster than that 2015–2017. Has the funding overhang grown or shrunk? Funding seems to have grown faster than the number of people, so the overhang has grown in both proportional and absolute terms. What might be the implications for career choice? Skill bottlenecks have probably increased for people able to think of ways to spend lots of funding effectively, run big projects, and evaluate grants. To caveat, all of these figures are extremely rough, and are mainly estimated off the top of my head. I haven’t checked them with the relevant donors, so they might not endorse these estimates. However, I think they’re better than what exists currently, and thought it was important to try to give some kind of rough update on how my thinking has changed. There are likely some significant mistakes; I’d be keen to see a more thorough version of this analysis. Overall, please treat this more like notes from a podcast than a carefully researched article. Which growth metrics matter? 
Broadly, the future[1] impact of effective altruism depends on the total stock of: The quantity of committed funds The number of committed people (adjusted for skills and influence) The quality of our ideas (which determine how effectively funding and labour can be turned into impact) (In economic growth models, this would be capital, labour, and productivity.) You could consider other resources like political capital, reputation, or public support as well, though we can also think of these as being a special type of labour. In this post, I’m going to focus on funding and labour. (To do an equivalent analysis for ideas, which could easily matter more, we could try to estimate whether the expected return of our best way...
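The headline figures in that post imply a couple of simple derived quantities. The sketch below is a back-of-the-envelope reconstruction that assumes the post's rounded numbers (a ~$46 billion stock of committed funds, ~37% annual growth since 2015, ~$420 million donated per year) and treats 2021 as the endpoint; the implied 2015 stock and the spend rate are derived, illustrative values rather than figures stated in the post.

```python
# Back-of-the-envelope check of the committed-funds figures quoted in the post above.
# Inputs are the post's rounded numbers; derived values are illustrative only.

committed_funds = 46e9    # total committed funds (USD), per the post
annual_growth = 0.37      # ~37% growth per year since 2015, per the post
years = 2021 - 2015       # assumes 2021 as the endpoint

# Implied 2015 stock if growth really averaged ~37% per year (derived, not from the post).
implied_2015_stock = committed_funds / (1 + annual_growth) ** years
print(f"Implied 2015 stock: ${implied_2015_stock / 1e9:.1f}bn")  # ~$7bn

annual_donations = 420e6  # ~$420 million donated per year, per the post
spend_rate = annual_donations / committed_funds
print(f"Annual spend rate: {spend_rate:.1%} of committed capital")  # ~1%, consistent with the post given rounding
```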
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Announcing "Naming What We Can"!, published by GidonKadosh, EdoArad, Davidmanheim, ShayBenMoshe, sella, Guy Raveh, Asaf Ifergan on the effective altruism forum. We hereby announce a new meta-EA institution - "Naming What We Can". Vision We believe in a world where every EA organization and any project has a beautifully crafted name. We believe in a world where great minds are free from the shackles of the agonizing need to name their own projects. Goal To name and rename every EA organization, project, thing, or person. To alleviate any suffering caused by name-selection decision paralysis. Mission Using our superior humor and language articulation prowess, we will come up with names for stuff. About us We are a bunch of revolutionaries who believe in the power of correct naming. We translated over a quintillion distinct words from English to Hebrew. Some of us have read all of Unsong. One of us even read the whole Bible. We spent countless fortnights debating the ins and outs of our own org’s title - we Name What We Can. What Do We Do? We're here for the service of the EA community. Whatever you need to rename - we can name. Although we also rename whatever we can. Even if you didn't ask. Examples As a demonstration, we will now see some examples where NWWC has a much better name than the one currently used. 80,000 Hours => 64,620 Hours. Better fits the data and more equal toward women, two important EA virtues. Charity Entrepreneurship => Charity Initiatives. (We don't know anyone who can spell entrepreneurship on their first try. Alternatively, own all of the variations: Charity Enterpeneurship, Charity Entreprenreurshrip, Charity Entrepenurship, Charity Entepenoorship, .) Global Priorities Institute => Glomar Priorities Institute. We suggest including the dimension of time, making our globe a glome. OpenPhil => Doing Right Philanthropy. Going by Dr. Phil would give a lot more clicks. EA Israel => זולתנים יעילים בארץ הקודש (roughly: "Effective Altruists in the Holy Land"). ProbablyGood => CrediblyGood. Because in EA we usually use credence rather than probability. EA Hotel => Centre for Enabling EA Learning & Research. Giving What We Can => Guilting Whoever We Can. Because people give more when they are feeling guilty about being rich. Cause Prioritization => Toby Ordering. Max Dalton => Max Delta. This represents the endless EA effort to maximize our ever-marginal utility. Will MacAskill => will McAskill. Evidently a more common use: Peter Singer & Steven Pinker should be the same person, to avoid confusion. OpenAI => ProprietaryAI. Followed by ClosedAI, UnalignedAI, MisalignedAI, and MalignantAI. FHI => Bostrom's Squad. GiveWell => Don'tGivePlayPumps. We feel that the message could be stronger this way. Doing Good Better => Doing Right Right. Electronic Arts, also known as EA, should change its name to Effective Altruism. They should also change all of their activities to Effective Altruism activities. Impact estimation Overall, we think the impact of the project will be net negative on expectation (see our Guesstimate model). That is because we think that the impact is likely to be somewhat positive, but there is a really small tail risk that we will cause the termination of the EA movement. However, as we are risk-averse we can mostly ignore high tails in our impact assessment so there is no need to worry.
Call to action As a first step, we offer our services freely here on this very post! This is done to test the fit of the EA community to us. All you need to do is to comment on this post and ask us to name or rename whatever you desire. Additionally, we hold a public recruitment process here on this very post! If you want to apply to NWWC as a member, comment on this post with a name suggestion of your choosing! Due to our current lack of diversity in our team, we particularly encourage women, people of color, ...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Major UN report discusses existential risk and future generations (summary), published by finm, Avital Balwit on the effective altruism forum. Co-written with Avital Balwit. Introduction and Key Points On September 10th, the Secretary General of the United Nations released a report called “Our Common Agenda”. This report seems highly relevant for those working on longtermism and existential risk, and appears to signal unexpectedly strong interest from the UN. It explicitly uses longtermist language and concepts, and suggests concrete proposals for institutions to represent future generations and manage catastrophic and existential risks. In this post we've tried summarising the report for an EA audience. Some notable features of the report: It explicitly discusses “future generations”, “long-termism”, and “existential risk” It highlights biorisks, nuclear weapons, advanced technologies, and environmental disasters/climate change as extreme or even existential risks It recommends the “regulation of artificial intelligence to ensure that this is aligned with shared global values” It proposes several instruments for protecting future generations: A Futures Lab for futures impact assessments and “regularly reporting on megatrends and catastrophic risks” A Special Envoy for Future Generations to assist on “long-term thinking and foresight” and explore various international mechanisms for representing future generations, including... Repurposing the Trusteeship Council to represent the interests of future generations (a major but long-inactive organ of the UN) A Declaration on Future Generations It proposes instruments for addressing major risks: An Emergency Platform to convene key actors in response to complex global crises A Strategic Foresight and Global Risk Report to be released every 5 years It also calls for a 2023 Summit of the Future to discuss topics including these proposals addressing major risks and future generations Other topics discussed which might be of interest: Protecting and regulating the ‘digital commons’ and an internet-enabled ‘infodemic’ The governance of outer space Lethal autonomous weapons Improving pandemic response and preparedness Developing well-being indices to complement GDP Context A year ago, on the 75th anniversary of the formation of the UN, member nations asked the Secretary General, António Guterres, to produce a report with recommendations to advance the agenda of the UN. This report is his response. The report also coincides with Guterres’ re-election for his second term as Secretary General, which will begin in January 2022 and will likely last 5 years. The report was informed by consultations, listening exercises, and input from outside experts. Toby Ord (author of The Precipice) was asked to contribute to the report as such an ‘outside expert’. Among other things he underlined that ‘future generations’ does not (just) mean ‘young people’, and that international institutions should begin to address risks even more severe than COVID-19, up to and including existential risks. All of the new instruments and institutions described in the report are proposals made to the General Assembly of member nations. It remains to be seen how many of them will ultimately be implemented, and in what eventual form.
Summary of the Report The report is divided into five main sections, with sections 3 and 4 being of greatest relevance from an EA or longtermist perspective. The first section situates the report in the context of the pandemic, suggesting that now is an unusually “pivotal moment” between “breakdown” and “breakthrough”. It highlights major past successes (the Montreal Protocol, the eradication of smallpox) and notes how the UN was established in the aftermath of WWII to “save succeeding generations” from war. It then calls for a “new globa...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Don't Be Bycatch, published by AllAmericanBreakfast on the effective altruism forum. It's a common story. Someone who's passionate about EA principles, but has little in the way of resources, tries and fails to do EA things. They write blog posts, and nothing happens. They apply to jobs, and nothing happens. They do research, and don't get that grant. Reading articles no longer feels exciting, but like a chore, or worse: a reminder of their own inadequacy. Anybody who comes to this place, I heartily sympathize, and encourage them to disentangle themselves from this painful situation any way they can. Why does this happen? Well, EA has two targets. Subscribers to EA principles who the movement wants to become big donors or effective workers. Big donors and effective workers who the movement wants to subscribe to EA principles. I won't claim what weight this community and its institutions give to (1) vs. (2). But when we set out to catch big fish, we risk turning the little fish into bycatch. The technical term for this is churn. Part of the issue is the planner's fallacy. When we're setting out, we underestimate how long and costly it will be to achieve an impact, and overestimate what we'll accomplish. The higher above average you aim for, the more likely you are to fall short. And another part is expectation-setting. If the expectation right from the get-go is that EA is about quickly achieving big impact, almost everyone will fail, and think they're just not cut out for it. I wish we had a holiday that was the opposite of Petrov Day, where we honored somebody who went a little bit out of their comfort zone to try and be helpful in a small and simple way. Or whose altruistic endeavor was passionate, costly, yet ineffective, and who tried it anyway, changed their mind, and valued it as a learning experience. EA organizations and writers are doing us a favor by presenting a set of ideas that speak to us. They can't be responsible for addressing all our needs. That's something we need to figure out for ourselves. EA is often criticized for its "think global" approach. But the EA is our local, our global local. How do we help each other to help others? From one little fish in the sEA to another, this is my advice: Don't aim for instant success. Aim for 20 years of solid growth. Alice wants to maximize her chance of a 1,000% increase in her altruistic output this year. Zahara's trying to maximize her chance of a 10% increase in her altruistic output. They're likely to do very different things to achieve these goals. Don't be like Alice. Be like Zahara. Start small, temporary, and obvious. Prefer the known, concrete, solvable problem to the quest for perfection. Yes, running an EA book club or, gosh darn it, picking up trash in the park is a fine EA project to cut our teeth on. If you donate 0% of your income, donating 1% of your income is moving in the right direction. Offer an altruistic service to one person. Interview one person to find out what their needs are. Ask, don't tell. When entrepreneurs do market research, it's a good idea to avoid telling the customer about the idea. Instead, they should ask the customer about their needs and problems. How do they solve their problems right now? Then they can go back to the Batcave and consider whether their proposed solution would be an improvement. 
Let yourself become something, just do it a little more gradually. It's good to keep your options open, but EA can be about slowing and reducing the process of commitment, increasing the ability to turn and bend. It doesn't have to be about hard stops and hairpin turns. It's OK to take a long time to make decisions and figure things out. Build each other up. Do zoom calls. Ask each other questions. Send a message to a stranger whose blog posts you like. Form relationships, and...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Reducing long-term risks from malevolent actors, published by David_Althaus, Tobias_Baumann on the effective altruism forum. Summary Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. (More) Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. (More) Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks. (More) We therefore consider interventions to reduce the expected influence of malevolent humans on the long-term future. The development of manipulation-proof measures of malevolence seems valuable, since they could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs. (More) We also explore possible future technologies that may offer unprecedented leverage to mitigate against malevolent traits. (More) Selecting against psychopathic and sadistic tendencies in genetically enhanced, highly intelligent humans might be particularly important. However, risks of unintended negative consequences must be handled with extreme caution. (More) We argue that further work on reducing malevolence would be valuable from many moral perspectives and constitutes a promising focus area for longtermist EAs. (More) What do we mean by malevolence? Before we make any claims about the causal effects of malevolence, we first need to explain what we mean by the term. To this end, consider some of the arguably most evil humans in history—Hitler, Mao, and Stalin—and the distinct personality traits they seem to have shared.[1] Stalin repeatedly turned against former comrades and friends (Hershman & Lieb, 1994, ch. 15, ch. 18), gave detailed instructions on how to torture his victims, ordered their loved ones to watch (Glad, 2002, p. 13), and deliberately killed millions through various atrocities. Likewise, millions of people were tortured and murdered under Mao’s rule, often according to his detailed instructions (Dikötter, 2011; 2016; Chang & Halliday, ch. 8, ch. 23, 2007). He also took pleasure in watching acts of torture and imitating what his victims went through (Chang & Halliday, ch. 48, 2007). Hitler was not only responsible for the death of millions; he also engaged in personal sadism. On his specific instructions, the plotters of the 1944 assassination attempt were hung by piano wires and their agonizing deaths were filmed (Glad, 2002). According to Albert Speer, “Hitler loved the film and had it shown over and over again” (Toland, 1976, p. 818). Hitler, Mao, and Stalin—and most other dictators—also poured enormous resources into the creation of personality cults, manifesting their colossal narcissism (Dikötter, 2019). (The section Malevolent traits of Hitler, Mao, Stalin, and other dictators in Appendix B provides more evidence.) Many scientific constructs of human malevolence could be used to summarize the relevant psychological traits shared by Hitler, Mao, Stalin, and other malevolent individuals in positions of power.
We focus on the Dark Tetrad traits (Paulhus, 2014) because they seem especially relevant and have been studied extensively by psychologists. The Dark Tetrad comprises the following four traits—the more well-known Dark Triad (Paulhus & Williams, 2002) refers to the first three traits: Machiavellianism is characterized by manipulating and deceiving others to further one’s own interests, indifference to morality, and obsession with achieving power or wealth. Narcissism involves an inflated sense of one’s importance and abilities, an excessive need for admiration, a lack of emp...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Problem areas beyond 80,000 Hours' current priorities, published by Ardenlk on the effective altruism forum. Why we wrote this post At 80,000 Hours we've generally focused on finding the most pressing issues and the best ways to address them. But even if some issue is 'the most pressing'—in the sense of being the highest impact thing for someone to work on if they could be equally successful at anything—it might easily not be the highest impact thing for many people to work on, because people have various talents, experience, and temperaments. Moreover, the more people involved in a community, the more reason there is for them to spread out over different issues. There will eventually be diminishing returns as more people work on the same set of issues, and both the value of information and the value of capacity building from exploring more areas will be greater if more people are able to take advantage of that work. We're also pretty uncertain which problems are the highest impact things to work on—even for people who could work on anything equally successfully. For example, maybe we should be focusing much more on preventing great power conflict than we have been. After all, the first plausible existential risk to humanity was the creation of the atom bomb; it's easy to imagine that wars could incubate other, even riskier technological advancements. Or maybe there is some dark horse cause area—like research into surveillance—that will turn out to be way more important for improving the future than we thought. Perhaps for these reasons, many of our advisors guess that it would be ideal if 5-20% of the effective altruism community's resources were focused on issues that the community hasn't historically been as involved in, such as the ones listed below. We think we're currently well below this fraction, so it's plausible some of these areas might be better for some people to go into right now than our top priority problem areas. Who is best suited to work on these other issues? Pioneering a new problem area from an effective altruism perspective is challenging, and in some ways harder than working on a priority area, where there is better training and infrastructure. Working on a less-researched problem can require a lot of creativity and critical thinking about how you can best have a positive impact by working on the issue. For example, it likely means working out which career options within the area are the most promising for direct impact, career capital, and exploration value, and then pursuing them even if they differ from what most other people in the area tend to value or focus on. You might even eventually need to 'create your own job' if pre-existing positions in the area don't match your priorities. The ideal person would therefore be self-motivated, creative, and willing to chart the waters for others, as well as have a strong interest or relevant experience in one of these less-explored issues. We compiled the following lists by combining suggestions from 6 of our advisors with our own ideas, judgement, and research. We were looking for issues that might be very important, especially for improving the long-term future, and which might be currently neglected by people thinking from an effective altruism perspective. If something was suggested twice, we took that as a presumption in favor of including it. 
We're very uncertain about the value of working on any one of these problems, but we think it's likely that there are issues on these lists (and especially the first one) that are as pressing as our highest priority problem areas. What are the pros and cons of working in each of these areas? Which are less tractable than they appear, or more important? Which are already being covered adequately by existing groups we don't know enough about? What potentia...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: The case of the missing cause prioritisation research, published by weeatquince on the effective altruism forum. Introduction / summary In 2011 I came across Giving What We Can, which shortly blossomed into effective altruism. Call me a geek if you like but I found it exciting, like really exciting. Here were people thinking super carefully about the most effective ways to have an impact, to create change, to build a better world. Suddenly a boundless opportunity to do vast amounts of good opened up before my eyes. I had only just got involved, and by giving to fund bednets I had already magnified my impact on the world 100 times. And this was just the beginning. Obviously bednets were not the most effective charitable intervention, they were just the most effective we had found to date – with just a tiny amount of research. Imagine what topics could be explored next: the long run effects of interventions, economic growth, political change, geopolitics, conflict studies, etc. We could work out how to compare charities of vastly different cause areas, or how to do good beyond donations (some people were already starting to talk about career choices). Some people said we should care about animals (or AI risk), I didn’t buy it (back then), but imagine, we could work out what different value sets lead to different causes and the best charities for each. As far as I could tell the whole field of optimising for impact seemed vastly under-explored. This wasn’t too surprising – most people don’t seem to care that much about doing charitable giving well and anyway it was only just coming to light how truly bad our intuitions were at making charitable choices (with the early 2000s aid skepticism movement). Looking back, I was optimistic. Yet in some regards my optimism was well-placed. In terms of spreading ideas, my small group of geeky uni friends went on to create something remarkable, to shift £m if not £bn of donations to better causes, to help 1000s maybe 100,000s of people make better career decisions. I am no longer surprised if a colleague, Tinder date or complete stranger has heard of effective altruism (EA) or gives money to AMF (a bednet charity). However, in terms of the research I was so excited about, of developing the field of how to do good, there has been minimal progress. After nearly a decade, bednets and AI research still seem to be at the top of everyone’s Christmas donations wish list. I think I assumed that someone had got this covered, that GPI or FHI or whoever would have answers, or at least progress on cause research sometime soon. But last month, whilst trying to review my career, I decided to look into this topic, and, oh boy, there just appears to be a massive gaping hole. I really don’t think it is happening. I don’t particularly want to shift my career to do cause prioritisation research right now. So I am writing this piece in the hope that I can either have you, my dear reader, persuade me this work is not of utmost importance, or have me persuade you to do this work (so I don’t have to). A. The importance of cause prioritisation research What is your view on the effective altruism community and what it has achieved? What is the single most important idea to come out of the community? Feel free to take a moment to reflect. (Answers on a postcard, or comment).
It seems to me (predictably given the introduction) that far and away the most valuable thing EA has done is the development of and promotion of cause prioritisation as a concept. This idea seems (shockingly and unfortunately) unique to EA.[1] It underpins all EA thinking, guides where EA aligned foundations give and leads to people seriously considering novel causes such as animal welfare or longtermism. This post mostly focuses on the current progress of and neglectedness of this work ...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Lessons from my time in Effective Altruism, published by richard_ngo on the effective altruism forum. I’ll start with an overview of my personal story, and then try to extract more generalisable lessons. I got involved in EA around the end of 2014, when I arrived at Oxford to study Computer Science and Philosophy. I’d heard about EA a few years earlier via posts on Less Wrong, and so already considered myself EA-adjacent. I attended a few EAGx conferences, became friends with a number of EA student group organisers, and eventually steered towards a career in AI safety, starting with a masters in machine learning at Cambridge in 2017-2018. I think it’s reasonable to say that, throughout that time, I was confidently wrong (or at least unjustifiably confident) about a lot of things. In particular: I dismissed arguments about systemic change which I now find persuasive, although I don’t remember how - perhaps by conflating systemic change with standard political advocacy, and arguing that it’s better to pull the rope sideways. I endorsed earning to give without having considered the scenario which actually happened, of EA getting billions of dollars of funding from large donors. (I don’t know if this possibility would have changed my mind, but I think that not considering it meant my earlier belief was unjustified.) I was overly optimistic about utilitarianism, even though I was aware of a number of compelling objections; I should have been more careful to identify as "utilitarian-ish" rather than rounding off my beliefs to the most convenient label. When thinking about getting involved in AI safety, I took for granted a number of arguments which I now think are false, without actually analysing any of them well enough to raise red flags in my mind. After reading about the talent gap in AI safety, I expected that it would be very easy to get into the field - to the extent that I felt disillusioned when given (very reasonable!) advice, e.g. that it would be useful to get a PhD first. As it turned out, though, I did have a relatively easy path into working on AI safety - after my masters, I did an internship at FHI, and then worked as a research engineer on DeepMind’s safety team for two years. I learned three important lessons during that period. The first was that, although I’d assumed that the field would make much more sense once I was inside it, that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field. The second was that the job simply wasn’t a good fit for me (for reasons I’ll discuss later on). The third was that I’d been dramatically underrating “soft skills” such as knowing how to make unusual things happen within bureaucracies. Due to a combination of these factors, I decided to switch career paths. I’m now a PhD student in philosophy of machine learning at Cambridge, working on understanding advanced AI with reference to the evolution of humans. By now I’ve written a lot about AI safety, including a report which I think is the most comprehensive and up-to-date treatment of existential risk from AGI. I expect to continue working in this broad area after finishing my PhD as well, although I may end up focusing on more general forecasting and futurism at some point. 
Lessons I think this has all worked out well for me, despite my mistakes, but often more because of luck (including the luck of having smart and altruistic friends) than my own decisions. So while I’m not sure how much I would change in hindsight, it’s worth asking what would have been valuable to know in worlds where I wasn’t so lucky. Here are five such things. 1. EA is trying to achieve something very difficult. A lot of my initial attraction towards EA was because it seemed like a slam-dunk case: here’s an obvious i...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: EA needs consultancies, published by lukeprog on the effective altruism forum. Problem EA organizations like Open Phil and CEA could do a lot more if we had access to more analysis and more talent, but for several reasons we can't bring on enough new staff to meet these needs ourselves, e.g. because our needs change over time, so we can't make a commitment that there's much future work of a particular sort to be done within our organizations.[1] This also contributes to there being far more talented EAs who want to do EA-motivated work than there are open roles at EA organizations.[2] A partial solution? In the public and private sectors, one common solution to this problem is consultancies. They can be think tanks like the National Academies or RAND,[3] government contractors like Booz Allen or General Dynamics, generalist consulting firms like McKinsey or Deloitte, niche consultancies like The Asia Group or Putnam Associates, or other types of service providers such as UARCs or FFRDCs. At the request of their clients, these consultancies (1) produce decision-relevant analyses, (2) run projects (including building new things), (3) provide ongoing services, and (4) temporarily "loan" their staff to their clients to help with a specific project, provide temporary surge capacity, provide specialized expertise that it doesn't make sense for the client to hire themselves, or fill the ranks of a new administration.[4] (For brevity, I'll call these "analyses," "projects," "ongoing services," and "talent loans," and I'll refer to them collectively as "services.") This system works because even though demand for these services can fluctuate rapidly at each individual client, in aggregate across many clients there is a steady demand for the consultancies' many full-time employees, and there is plenty of useful but less time-sensitive work for them to do between client requests. Current state of EA consultancies Some of these services don't require EA talent, and can thus be provided for EA organizations by non-EA firms, e.g. perhaps accounting firms. But what about analyses and services that require EA talent, e.g. because they benefit from lots of context about the EA community, or because they benefit from habits of reasoning and moral intuitions that are far more common in the EA community than elsewhere?[5] Rethink Priorities (RP) has demonstrated one consultancy model: producing useful analyses specifically requested by EA organizations like Open Philanthropy across a wide range of topics.[6] If their current typical level of analysis quality can be maintained, I would like to see RP scale as quickly as they can. I would also like to see other EAs experiment with this model.[7] BERI offers another consultancy model, providing services that are difficult or inefficient for clients to handle themselves through other channels (e.g. university administration channels). There may be a few other examples, but I think not many.[8] Current demand for these services All four models require sufficient EA client demand to be sustainable. 
Fortunately, my guess is that demand for ≥RP-quality analysis from Open Phil alone (but also from a few other EA organizations I spoke to) will outstrip supply for the foreseeable future, even if RP scales as quickly as they can and several RP clones capable of ≥RP-quality analysis are launched in the next couple years.[9] So, I think more EAs should try to launch RP-style "analysis" consultancies now. However, for EAs to get the other three consultancy models off the ground, they probably need clearer evidence of sufficiently large and steady aggregate demand for those models from EA organizations. At least at first, this probably means that these models will work best for services that demand relatively "generalist" talent, perhaps corresponding ...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: The Cost of Rejection, published by Daystar Eld on the effective altruism forum. For those that don't know, I've worked as a therapist for the rationality and EA community for over two years now, first part time, then full time in early 2020. I often get asked about my observations and thoughts on what sorts of issues are particularly prevalent or unique to the community, and while any short answer to that would be oversimplifying the myriad of issues I've treated, I do feel comfortable saying that "concern with impact" is a theme that runs pretty wide and deep no matter what people come to sessions to talk about. Seeing how this plays out in various different ways has motivated me to write on some aspects of it, starting with this broad generalization: rejection hurts. Specifically, rejection from a job that's considered high impact (which, for many, implicitly includes all jobs with EA organizations) hurts a lot. And I think that hurt has a negative impact that goes beyond the suffering involved. In addition to basing this post off of my own observations, I’ve written it with the help of/on behalf of clients who have been affected by this, some of whom reviewed and commented on drafts.
I. Premises
There are a few premises that I’m taking for granted that I want to list out in case people disagree with any specific ones:
The EA population is growing, as are EA organizations in number and size. This seems overall to be a very good thing.
In absolute numbers, EA organizations are growing slower than or at pace with the overall EA population. Even with massive increases in funding this seems inevitable, and also probably good?
There are many high impact jobs outside of EA orgs that we would want people in the community to have. (By EA orgs I specifically mean organizations headed by and largely made up of people who self-identify as Effective Altruists, not just those using evidence-and-reason-to-do-the-most-good) ((Also there’s a world in which more people self-identify as EAs and therefore more organizations are considered EA and by that metric it’s bad that EA orgs are growing slower than overall population, but that’s also not what I mean))
Even with more funding being available, there will continue to be many more people applying to EA jobs than getting them. I don’t have clear numbers for this, but asking around at a few places got me estimates between ~47-124 applications for specific positions (one of which noted that ~¾ of them were from people clearly within and familiar with the EA community), and hundreds of applications for specific grants (at least once breaking a thousand). This is good for the organizations and community as a whole, but has bad side effects, such as:
Rejection hurts, and that hurt matters. For many people, rejection is easily accepted as part of trying new things, shooting for the moon, and challenging oneself to continually grow. For many others, it can be incredibly demoralizing, sometimes to the point of reducing motivation to continue even trying to do difficult things. So when I say the hurt matters, I don’t just mean that it’s suffering and we should try to reduce suffering wherever we can.
I also mean that as the number of EAs grows faster than the number of positions in EA orgs, the knock-on effects of rejection will slow community and org growth, particularly since:
The number of EAs who receive rejections from EA orgs will likely continue to grow, both absolutely and proportionally. Hence, this article.
II. Models
There are a number of models I have for all of this that could be totally wrong. I think it’s worth spelling them out a bit more so that people can point to more bits and let me know if they are important, or why they might not be as important as I think they are.
Difficulty in Self Organization
First, I think it’s import...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Concerns with ACE's Recent Behavior, published by Hypatia on the effective altruism forum. Epistemic Status: I feel pretty confident that the core viewpoint expressed in this post is correct, though I'm less confident in some specific claims. I have not shared a draft of this post with ACE, and so it’s possible I’ve missed important context from their perspective. EDIT: ACE board member Eric Herboso has responded with his personal take on this situation. He believes some points in this post are wrong or misleading. For example, he disputes my claim that ACE (as an organization) attempted to cancel a conference speaker. EDIT: Jakub Stencel from Anima International has posted a response. He clarifies a few points and offers some context regarding the CARE conference situation.
Background
In the past year, there has been some concern in EA surrounding the negative impact of “cancel culture”[1] and worsening discourse norms. Back in October, Larks wrote a post criticizing EA Munich's decision to de-platform Robin Hanson. The post was generally well-received, and there have been other posts on the forum discussing potential risks from social-justice oriented discourse norms. For example, see The Importance of Truth-Oriented Discussions in EA and EA considerations regarding increasing political polarization. I'm writing this post because I think some recent behavior from Animal Charity Evaluators (ACE) is a particularly egregious example of harmful epistemic norms in EA. This behavior includes:
Making (in my view) poorly reasoned statements about anti-racism and encouraging supporters to support or donate to anti-racist causes and organizations of dubious effectiveness
Attempting to cancel an animal rights conference speaker because of his views on Black Lives Matter, withdrawing from that conference because the speaker's presence allegedly made ACE staff feel unsafe, and issuing a public statement supporting its staff and criticizing the conference organizers
Penalizing charities in reviews for having leadership and/or staff who are deemed to be insufficiently progressive on racial equity, and stating it won't offer movement grants funding to those who disagree with its views on diversity, equity, and inclusion[2].
Because I'm worried that this post could hurt my future ability to get a job in EAA, I'm choosing to remain anonymous. My goal here is to: a) Describe ACE's behavior in order to raise awareness and foster discussion, since this doesn't seem to have attracted much attention, and b) Give a few reasons why I think ACE's behavior has been harmful, though I’ll be brief since I think similar points have been better made elsewhere. I also want to be clear that I don't think ACE is the only bad actor here, as other areas of the EAA community have also begun to embrace harmful social-justice derived discourse norms[3].
However, I'm focusing my criticism on ACE here because:
It positions itself as an effective altruism organization, rather than a traditional animal advocacy organization
It is well known and generally respected by the EA community
It occupies a powerful position within the EAA movement, directing millions of dollars in funding each year and conducting a large fraction of the movement's research
And before I get started, I'd also like to make a couple caveats:
I think ACE does a lot of good work, and in spite of this recent behavior, I think its research does a lot to help animals.
I'm also not trying to “cancel” ACE or any of its staff. But I do think the behavior outlined in this post is bad enough that ACE supporters should be vocal about their concerns and consider withholding future donations.
I am not suggesting that racism, discrimination, inequality, etc. shouldn't be discussed, or that addressing these important problems isn't EA-worthy. The EA commu...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Introducing Probably Good: A New Career Guidance Organization, published by omernevo, sella on the effective altruism forum. We’re excited to announce the launch of Probably Good, a new organization that provides career guidance intended to help people do as much good as possible.
Context
For a while, we have felt that there was a need for a more generalist careers organization than 80,000 Hours — one which is more agnostic regarding different cause areas and might provide a different entry point into the community to people who aren’t a good fit for 80K’s priority areas. Following 80,000 Hours’ post about what they view as gaps in the careers space, we contacted them about how a new organization could effectively fill some of those gaps. After a few months of planning, asking questions, writing content, and interviewing experts, we’re almost ready to go live (we aim to start putting our content online in 1-2 months) and would love to hear more from the community at large.
How You Can Help
The most important thing we’d like from you is feedback. Please comment on this post, send us personal messages on the Forum, email us (omer at probablygood dot org, sella at probablygood dot org), or set up a conversation with us via videoconference. We would love to receive as much feedback as we can get. We’re particularly interested in hearing about things that you, personally, would actually read // use // engage with, but would appreciate absolutely any suggestions or feedback.
Probably Good Overview
The most updated version of the overview is here. Following is the content of the overview at the time this announcement is posted.
Overview
Probably Good is a new organization that provides career guidance intended to help people do as much good as possible. We will start by focusing on online content and a small number of 1:1 consultations. We will later consider other forms of career guidance such as a job board, scaling up the 1:1 consultations, more in-depth research, etc. Our approach to guidance is focused on how to help each individual maximize their career impact based on their values, personal circumstances, and motivations. This means that we will accommodate a wide range of preferences (for example, different cause areas), as long as they’re consistent with our principles, and try to give guidance in accordance with those preferences. Therefore, we’ll be looking at a wide range of impactful careers under different views on what to optimize for or under various circumstantial constraints, such as how to maximize impact within specific career paths, within specific geographic regions, through earning to give, or within more specific situations (e.g. making an impact from within a large corporation). There are other organizations in this space, the most well-known being 80,000 Hours. We think our approach is complementary to 80,000 Hours’ current approach: Their guidance mostly focuses on people aiming to work on their priority problem areas, and we would be able to guide high quality candidates who aren’t. We would direct candidates to 80,000 Hours or other specialized organizations (such as Animal Advocacy Careers) if they’re a better fit for their principles and priority paths. This characterization of our target audience is very broad; this has two main motivations.
First, as part of our experimental approach: we are interested in identifying which cause areas currently have the most unserved demand. By providing preliminary value in multiple areas of expertise, we hope to more efficiently identify where our investment would be most useful, and we may specialize (in a more informed manner) in the future. The second motivation for this is that one possibility for specialization is as a “router” interface - helping individuals make preliminary decisions tailored to the...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: What Makes Outreach to Progressives Hard, published by Cullen_OKeefe on the effective altruism forum. This post summarizes some of my conclusions on things that can make EA outreach to progressives hard, as well as some tentative recommendations on techniques for making such outreach easier. To be clear, this post does not argue or assume that outreach to progressives is harder than outreach to other political ideologies.[1] Rather, the point of this post is to highlight identifiable, recurring memes/thought patterns that cause Progressives to reject or remain skeptical of EA.
My Background (Or, Why I am Qualified to Talk About This)
Nothing in here is based on systematic empirical analysis. It should therefore be treated as highly uncertain. My analysis here draws on two sources:
Reflecting on my personal journey as someone who transitioned from a very social-justice-y worldview to a more EA-aligned one (and therefore understands the former well), who is still solidly left-of-center, and who still retains contacts in the social justice (SJ) world; and
My largely failed attempts as former head of Harvard Law School Effective Altruism to get progressive law students to make very modest giving commitments to GiveWell charities.
Given that the above all took place in America, this post is most relevant to American political dynamics (especially at elite universities), and may very well be inapplicable elsewhere.[2] Readers may worry that I am being a bit uncharitable here. However, I am not trying to present the best progressive objections to EA (so as to discover the truth), but rather the most common ones (so as to persuade people better). In other words, this post is about marketing and communications, not intellectual criticisms. Since I think many of the common progressive objections to EA are bad, I will attempt to explain them in (what I take to be) their modal or undifferentiated form, not steelman them. Relatedly, when I say "progressives" through the rest of this post, I am mainly referring to the type of progressive who is skeptical of EA, not all progressives. There are many amazing progressive EAs, who do not see these two ideologies to be in conflict whatsoever. And many non-EA progressives will believe few of these things. Nevertheless, I do think I am pointing to a real set of memes that are common—but definitely not universal—among the American progressive left as of 2021. This is sufficient for understanding the messaging challenges facing EAs within progressive institutions.
Reasons Progressives May Not Like EA
Legacy of Paternalistic International Aid
Many progressives have a strong prior against international aid, especially private international aid. Progressives are steeped in—and react to—stories of paternalistic international aid,[3] much in the way that EAs are steeped in stories of ineffective aid (e.g., Playpumps). Interestingly, EAs and progressives will often (in fact, almost always) agree on what types of aid are objectionable. However, we tend to take very different lessons away from this. EAs will generally take away the lesson that we have to be super careful about which interventions to fund, because funding the wrong intervention can be ineffective or actively harmful.
We put the interests of our intended beneficiaries first by demanding that charities demonstrably advance their beneficiaries' interests as cost-effectively as possible. Progressives tend to take a very different lesson from this. They tend to see this legacy as objectionable due to the very nature of the relationship between aid donors and recipients. Roughly, they may believe that the power differential between wealthy donors from the Global North and aid recipients in developing countries makes unobjectionable foreign aid either impossible or, at the very least, extr...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Snapshot of a career choice 10 years ago, published by Julia_Wise on the effective altruism forum. Here’s a little episode from EA’s history, about how much EA career advice has changed over time. Ten years ago, I wrote an angsty LessWrong post called “Career choice for a utilitarian giver.” (“Effective altruism” and “earning to give” didn’t exist as terms at that point.) At the time, there was a lot less funding in EA, and the emphasis was very much on donation rather than direct work. Donation was the main way I was hoping to have an impact. I was studying to become a social worker, but I had become really worried that I should try for some higher-earning career so I could donate more. I thought becoming a psychiatrist was my best career option, since it paid significantly more than the social work career I was on track towards, and I thought I could be good at it. I prioritized donation very highly, and I estimated that going into medicine would allow me to earn enough to save 2500 lives more than I could by staying on the same path. (That number is pretty far wrong, but it’s what I thought at the time.) The other high-earning options I could think of seemed to require quantitative skills I didn’t have, or a level of ambition and drive I didn’t have. A few people did suggest that I might work on movement building, but for some reason it didn’t seem like a realistic option to me. There weren’t existing projects that I could slot into, and I’m not much of an entrepreneurial type. The post resulted in me talking to a career advisor from a project that would eventually become 80,000 Hours. The advisor and I talked about how I might switch fields and try to get into medical school. I was trying not to be swayed by the sunk cost of the social work education I had already completed, but I also just really didn’t want to go through medical school and residency. My strongest memory of that period is lying on the grass at my grad school, feeling awful about not being willing to put the years of work into earning more money. There were lives at stake. I was going to let thousands of people die from malaria because I didn’t want to work hard and go to medical school. I felt horribly guilty. And I also thought horrible guilt was not going to be enough to motivate me through eight years of intense study and residency. After a few days of crisis, I decided to stop thinking about it all the time. I didn’t exactly make a conclusive decision, but I didn’t take any steps to get into med school, and after a few more weeks it was clear to me that I had no real intention to change paths. So I continued to study social work, with the belief that I was doing something seriously wrong. (To be clear, nobody was telling me I should feel this way, but I wasn’t living up to my own standards.) In the meantime, I started writing Giving Gladly and continued hosting dinners at my house where people could discuss this kind of thing. The Boston EA group grew out of that. It didn’t occur to me that I could work for an EA organization without moving cities. But four years later, CEA was looking for someone to do community work in EA and was willing to consider someone remote. Because of my combination of EA organizing, writing, and experience in social work, I turned out to be a good fit. I was surprised that they were willing to hire someone remote. 
Although I struggled at first to work out what exactly I should be doing, over time it was clear to me that I could be much more useful here than either in social work or earning to give. I don’t think there’s a clear moral of the story about what this means other people should do, but here are some reflections: I look back on this and think, wow, we had very little idea how to make good use of a person like me. I wonder how many other square pegs are ou...