“The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating” by Corin Katzke, Gideon Futerman

EA Forum Podcast (All audio) - A podcast by EA Forum Team

This is a link post. Below is the executive summary of our new paper, The Manhattan Trap. Please visit the link above to see the full paper. We also encourage discussion and feedback in the comments here.

This paper examines the strategic dynamics of international competition to develop Artificial Superintelligence (ASI). We argue that the same assumptions that might motivate the US to race to develop ASI also imply that such a race is extremely dangerous. A race to develop ASI is motivated by two assumptions: that ASI provides a decisive military advantage (DMA) to the first state that develops it, and that states are rational actors aware of ASI's strategic implications. However, these same assumptions make racing catastrophically dangerous for three reasons.

First, an ASI race creates a threat to strategic stability that could trigger a war between the US and its adversaries, particularly China. If ASI could [...]

---

First published: January 21st, 2025

Source: https://forum.effectivealtruism.org/posts/QxRLBuQcvv6sdKkCg/the-manhattan-trap-why-a-race-to-artificial

---

Narrated by TYPE III AUDIO.
