EA - Prioritizing x-risks may require caring about future people by elifland
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritizing x-risks may require caring about future people, published by elifland on August 14, 2022 on The Effective Altruism Forum.

Introduction

Several recent popular posts (here, here, and here) have made the case that existential risks (x-risks) should be introduced without appealing to longtermism or the idea that future people have moral value. They tend to argue or imply that x-risks would still be justified as a priority without caring about future people. I felt intuitively skeptical of this claim and decided to stress-test it. In this post, I:

- Argue that prioritizing x-risks over near-term interventions and global catastrophic risks may require caring about future people. More
- Disambiguate connotations of "longtermism", and suggest a strategy for introducing the priority of existential risks. More
- Review and respond to previous articles, which mostly argued that longtermism wasn't necessary for prioritizing existential risks. More

Prioritizing x-risks may require caring about future people

I'll do some rough analyses of the value of x-risk interventions vs. (a) near-term interventions, such as global health and animal welfare, and (b) global catastrophic risk (GCR) interventions, such as reducing the risk of nuclear war. I assume a lack of caring about future people to test whether it's necessary for prioritizing x-risk above alternatives. My goal is to do a quick first pass, which I'd love for others to build on / challenge / improve!

I find that without taking into account future people, x-risk interventions are approximately as cost-effective as near-term and GCR interventions. Therefore, strongly prioritizing x-risks may require caring about future people; otherwise, it depends on non-obvious claims about the tractability of x-risk reduction and the moral weights of animals.

| Intervention | Rough estimated cost-effectiveness, current lives only ($/human-life-equivalent saved) |
| --- | --- |
| General x-risk prevention (funding bar) | $125 to $1,250 |
| AI x-risk prevention | $375 |
| Animal welfare | $450 |
| Bio x-risk prevention | $1,000 |
| Nuclear war prevention | $1,250 |
| GiveWell-style global health (e.g. bednet distribution) | $4,500 |

Estimating the value of x-risk interventions

This paper estimates that $250B would reduce biorisk by 1%. Taking Ord's estimate of 3% biorisk this century and a population of ~8 billion, we get: $250B / (8B × 0.01 × 0.03) = $104,167/life saved via biorisk interventions. The paper calls this a conservative estimate, so a more optimistic one might be 1-2 more OOMs as effective, at ~$10,000 to ~$1,000 / life saved; let's take the optimistic end of $1,000 / life saved as a rough best guess, since work on bio x-risk likely also reduces the likelihood of deaths from below-existential pandemics, and these seem substantially more likely than the most severe ones.

For AI risk, 80,000 Hours estimated several years ago that another $100M/yr (for how long? let's say 30 years) can reduce AI risk by 1%; it's unclear whether this percentage is absolute or relative, but relative seems more reasonable to me. Let's again defer to Ord and assume 10% total AI risk. This gives: ($100M × 30) / (8B × 0.01 × 0.1) = $375 / life saved.

On the funding side, Linch has ideated a .01% Fund, which would aim to reduce x-risks by 0.01% for $100M-$1B. This implies a cost-effectiveness of ($100M to $1B) / (8B × 0.0001) = $125 to $1,250 / life saved.
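To make the arithmetic above easy to check or tweak, here is a minimal Python sketch reproducing the back-of-the-envelope estimates, using only the figures quoted in this post (Ord's risk estimates, the $250B biorisk paper, 80,000 Hours' AI figure, and Linch's hypothetical .01% Fund). The helper function and its name are illustrative, not taken from any of those sources.

```python
# Rough reproduction of the post's back-of-the-envelope cost-effectiveness math.
# Only current lives (~8 billion people) are counted, per the post's assumption.

POPULATION = 8e9  # current human population

def cost_per_life_saved(total_cost, risk_reduction, total_risk=1.0):
    """Cost per (current) life saved, given spending that removes
    `risk_reduction` (a fraction of `total_risk`) of extinction risk."""
    expected_lives_saved = POPULATION * risk_reduction * total_risk
    return total_cost / expected_lives_saved

# Biorisk: $250B buys a 1% (relative) reduction of Ord's 3% biorisk estimate.
bio_conservative = cost_per_life_saved(250e9, 0.01, 0.03)   # ≈ $104,167

# AI risk: $100M/yr for ~30 years buys a 1% relative reduction of 10% AI risk.
ai = cost_per_life_saved(100e6 * 30, 0.01, 0.10)             # ≈ $375

# ".01% Fund": $100M-$1B to reduce x-risk by an absolute 0.01%.
fund_low = cost_per_life_saved(100e6, 0.0001)                # ≈ $125
fund_high = cost_per_life_saved(1e9, 0.0001)                 # ≈ $1,250

print(bio_conservative, ai, fund_low, fund_high)
```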
Comparing to near-term interventions

GiveWell estimates it costs $4,500 to save a life through global health interventions. This post estimates that animal welfare interventions may be ~10x more effective, implying ~$450 / human life-equivalent, though this is an especially rough number.

Comparing to GCR interventions

Less obviously than with near-term interventions, a potential issue with not caring about future people is over-prioritizing global catastrophic risks (that might kill a substantial percentage o...
