EA - Should strong longtermists really want to minimize existential risk? by tobycrisford
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should strong longtermists really want to minimize existential risk?, published by tobycrisford on December 4, 2022 on The Effective Altruism Forum.

Strong longtermists believe there is a non-negligible chance that the future will be enormous. For example, earth-originating life may one day fill the galaxy with 10^40 digital minds. The future therefore has enormous expected value, and concern for the long term should almost always dominate near-term considerations, at least for those decisions where our goal is to maximize expected value.

It is often stated that strong longtermism reduces in practice to the goal: “minimize existential risk at all costs”. I argue here that this is inaccurate. I claim that a more accurate way of summarising the strong longtermist goal is: “minimize existential risk at all costs, conditional on the future possibly being very big”. I believe the distinction between these two goals has important practical implications. The strong longtermist goal may actually conflict with the goal of minimizing existential risk unconditionally.

In the next section I describe a thought experiment to demonstrate my claim. In the following section I argue that this is likely to be relevant to the actual world we find ourselves in. In the final section I give some concluding remarks on what we should take away from all this.

The Anti-Apocalypse Machine

The Earth is about to be destroyed by a cosmic disaster. This disaster would end all life and snuff out all of our enormous future potential.

Fortunately, physicists have almost settled on a grand unified theory of everything that they believe will help them build a machine to save us. They are 99% certain that the world is described by Theory A, which tells us we can be saved if we build Machine A. But there is a 1% chance that the correct theory is actually Theory B, in which case we need to build Machine B. We only have the time and resources to build one machine.

It appears that our best bet is to build Machine A, but there is a catch. If Theory B is true, then the expected value of our future is many orders of magnitude larger (although it is enormous under both theories). This is because Theory B leaves open the possibility that we may one day develop slightly-faster-than-light travel, while Theory A being true would make that impossible.

Due to the spread of strong longtermism, Earth's inhabitants decide that they should build Machine B, acting as if the speculative Theory B is correct, since this is what maximizes expected value. Extinction would be far worse in the Theory B world than in the Theory A world, so they decide to take the action which would prevent extinction in that world. They deliberately choose a 99% chance of extinction over a 1% chance, risking all of humanity, and all of humanity's future potential.

The lesson here is that strong longtermism gives us the goal to minimize existential risk conditional on the future possibly being very big, and that may conflict with the goal to minimize existential risk unconditionally.

Relevance for the actual world

The implication of the above thought experiment is that strong longtermism tells us to look at the set of possible theories about the world, pick the one in which the future is largest, and, if it is large enough, act as if that theory were true.
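To make the decision rule concrete, here is a toy expected value calculation for the Anti-Apocalypse Machine. The post gives no specific figures, so the values V_A = 10^40 and V_B = 10^50 are assumptions standing in for “enormous” and “many orders of magnitude larger”:

\[
\mathbb{E}[\text{build Machine A}] = 0.99 \times V_A = 9.9 \times 10^{39},
\qquad
\mathbb{E}[\text{build Machine B}] = 0.01 \times V_B = 10^{48}.
\]

On these assumed numbers, building Machine B comes out ahead by roughly eight orders of magnitude, even though it accepts a 99% chance of extinction; the ranking would only flip if V_B were less than 99 times V_A.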
This is likely to have absurd consequences if carried to its logical conclusion, even in real-world cases. I explore some examples in this section.

The picture becomes more confusing when you consider theories which permit the future to have infinite value. In Nick Beckstead's original thesis, On the Overwhelming Importance of Shaping the Far Future, he explicitly singles out infinite value cases as examples of where we should abandon expected value maximization, and switch to using a more timid deci...
