Effective Altruism's Implicit Epistemology by Violet Hour

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism's Implicit Epistemology, published by Violet Hour on October 18, 2022, on The Effective Altruism Forum.

Cross-posted from The Violet Hour. Note: This is a piece on the sociology of (longtermist) EA, intended to be accessible to a wider audience. While I intend the post to interest more immersed EAs as well, many readers here should feel comfortable heavily skimming the introduction, and likewise skimming (or skipping) Section 2, which provides an informal exposition of Subjective Bayesianism and Expected Value Theory. The appendix (~1.1k words) offers a preliminary discussion of some philosophical issues related to this post.

1. Introduction

The future might be very big, and we might be able to do a lot, right now, to shape it. You might have heard of a community of people who take this idea pretty seriously — effective altruists, or ‘EAs’. If you first heard of EA a few years ago and haven’t really followed it since, you might be pretty surprised at where we’ve ended up. Currently, EA consists of a variety of professional organizations, researchers, and grantmakers, all with (sometimes subtly) different approaches to doing the most good possible. Organizations which, collectively, donate billions of dollars towards interventions aiming to improve the welfare of conscious beings.

In recent years, the EA community has shifted its priorities towards an idea called longtermism — very roughly, the idea that we should primarily focus our altruistic efforts on shaping the very long-run future. Like, very long-run. At least thousands of years. Maybe more. (From here on, I’ll use ‘EA’ to talk primarily about longtermist EA. Hopefully this won’t annoy too many people.) Anyway, longtermist ideas have pushed EA to focus on a few key cause areas — in particular, ensuring that the development of advanced AI is safe, preventing the development of deliberately engineered pandemics, and (of course) promoting effective altruism itself.

I’ve been part of this community for a while, and I’ve often found outsiders bemused by some of our main priorities. I’ve also found myself puzzled by this bemusement. The explicit commitments undergirding (longtermist) EA, as many philosophers involved with EA remind us, are really not all that controversial. And those philosophers are right, I think. Will MacAskill, for instance, has listed the following three claims as forming the basic, core argument behind longtermism:

(1) Future people matter morally.
(2) There could be enormous numbers of future people.
(3) We can make a difference to the world they inhabit.

Together, these three claims all seem pretty reasonable. With that in mind, we’re left with a puzzle: given that EA’s explicitly stated core commitments are not that weird, why, to many people, do EA’s practical priorities appear so weird? In short, my answer is that EA’s priorities emerge, to a large extent, from EA’s unusual epistemic culture. So, in this essay, I’ll attempt to highlight the sociologically distinctive norms EAs adopt, in practice, concerning how to reason and how to prioritize under uncertainty. I’ll then argue that these informal norms, beyond the more explicit philosophical theories which inspire them, play a key role in driving EA’s prioritization decisions.
2. Numbers, Numbers, Numbers

Suppose I asked you, right now, to give me a probability that Earth will experience an alien invasion within the next two weeks. You might just ignore me. But suppose you’ve come across me at a party; the friend you arrived with is conspicuously absent. You look around, and, well, the other conversations aren’t any better. Also, you notice that the guy you fancy is in the corner; every so often, you catch him shyly glancing at us. Fine, you think, I could be crazy, but ...
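(A quick piece of background, stated here as standard textbook material rather than anything quoted from the post: Subjective Bayesianism treats degrees of belief, like the alien-invasion probability above, as probabilities to be updated on new evidence via Bayes’ rule, and Expected Value Theory then recommends whichever action has the highest probability-weighted value across its possible outcomes.)

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad \mathrm{EV}(A) = \sum_{i} P(o_i \mid A)\,V(o_i)$$

Here H is a hypothesis (say, “aliens invade within two weeks”), E is your evidence, and the o_i are the possible outcomes of an action A, each with value V(o_i).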
