EA - We are fighting a shared battle (a call for a different approach to AI Strategy) by Gideon Futerman
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We are fighting a shared battle (a call for a different approach to AI Strategy), published by Gideon Futerman on March 16, 2023 on The Effective Altruism Forum.

Disclaimer 1: The following essay doesn't purport to offer many original ideas, and I am certainly not an expert on AI governance, so please don't take my word on these things too seriously. I have linked sources throughout the text and point to other similar pieces later on; this essay should be treated merely as another data point among people saying very similar things. Far smarter people than I have written on this.

Disclaimer 2: This post is quite long, so I recommend reading the sections "A choice not an inevitability" and "It's all about power" for the core of my argument.

My argument is essentially as follows: under most plausible understandings of how harms arise from very advanced AI systems, be they AGI, narrow AI, or systems somewhere in between, the actors responsible, and the actions that must be taken to reduce or avert the harm, are broadly similar whether one cares about existential harms, non-existential harms, or both. I will then go on to argue that this calls for a broad, coalitional politics of people who disagree vastly on the specifics of AI systems' harms, because we essentially share the same goals.

It's important to note that calls like these have been made before. Whilst my argument differs slightly from theirs, Prunkl & Whittlestone, Baum, Stix & Maas, and Cave & Ó hÉigeartaigh have all made arguments attempting to bridge near-term and long-term concerns. In general, these proposals (with the exception of Baum's) call for narrower cooperation between 'AI Ethics' and 'AI Safety' than I will, and all are considerably less focused on the common source of harm than I will be. None go as far as I do in suggesting that essentially all the key forms of harm we worry about are instances of the same phenomenon: the concentration of power in and through AI. These pieces are in many ways more research-focused, whilst mine is considerably more politically focused. Nonetheless, there is considerable overlap in spirit: all identify that the near-term/ethics versus long-term/safety distinction is overemphasised and not as analytically useful as it is made out to be, and all, like this piece, aim to reconcile the two factions for mutual benefit.

A choice not an inevitability

At present, there is no AI inevitably coming to harm us. Those AIs that do will be given their capabilities, and their power to cause harm, by developers. If AI companies stopped developing their AIs now, and people chose to stop deploying them, then both existential and non-existential harms would stop. These harms are in our hands, and whilst the technologies clearly act as important intermediaries, it is ultimately a human choice, a social choice, and perhaps most importantly a political choice to carry on developing ever more powerful AI systems when such dangers are apparent (or merely plausible or possible). The attempted development of AGI is far from value-neutral, far from inevitable, and very much in the realm of legitimate political contestation.
Thus far, we have simply accepted the right of powerful tech companies to decide our future for us; this is both unnecessary and dangerous. It's important to note that our current acceptance of the right of companies to legislate for our future is historically contingent. In the past, corporate power has been curbed, from colonial-era companies to Progressive Era trust-busting to postwar Germany and more, and it could be curbed again. Whilst governments have often taken the leading role, civil society has also been significant in curbing corporate power and technology development throughout history. Acceptance of corporate dominance i...
