EA - Why people want to work on AI safety (but don’t) by Emily Grundy

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why people want to work on AI safety (but don’t), published by Emily Grundy on January 24, 2023 on The Effective Altruism Forum.

Epistemic status: Based mainly on my own experience and a couple of conversations with others learning about AI safety. It’s very likely that I’m overlooking some existing resources that address my concerns - feel free to add them in the comments.

I had an ‘eugh’ response to doing work in the AI safety space. I understood the importance. I believed in the urgency. I wished I wanted to work on it. But I didn’t.

An intro AI safety course helped me unpack my hesitation. This post outlines what I found: three barriers to delving into the world of AI safety, and what could help address them.

Why read this post?

If you’re similarly conflicted, this post might be validating and evoke the comforting, "Oh, it’s not just me" feeling. It might also help you discern where your hesitation is coming from. Once you understand that, you can try to address it.
Maybe, ‘I wish I wanted to work on AI safety’ just becomes, ‘I want to work on AI safety’.

If you want to build the AI safety community, it could be helpful to understand how a newcomer, like myself, interprets the field (and what makes me less likely to get involved).

I’ll discuss three barriers (and their potential solutions):

1. A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"
2. AI can be technical, but it’s not clear how much of that you need to know: AKA "But I don’t program"
3. There’s a lot of jargon and it’s not always well explained: AKA "Can you explain that again... but like I’m 5"

Jump to the end for a visual summary.

A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"

A lot of intro AI explainers go like this:

- Here’s where we’re at with AI (cue chess, art, and ChatGPT examples)
- Here are a bunch of reasons why AI could (and probably will) become powerful
- I mean, really powerful
- And here’s how it could go wrong

What I don’t get from these explanations is an image of what it actually looks like to: 1) live in a world with advanced AI, or 2) go from our current world to that one. Below I outline what I mean by those two points, why I think they’re important, and what could help.

What does it look like to live in a world with AI?

I can regurgitate examples of how advanced AI might be used in the future - maybe it’ll be our future CEOs, doctors, politicians, or artists. What I’m missing is the ability to imagine any of these things - to understand, concretely, how that might look.
I can say things like, "AI might be the future policymakers", but have no idea how they would create or communicate policies, or how we would interact with them as policymakers.

To flesh this out a bit, I imagine there are three levels of understanding here: 1) what AI could do (roles it might adopt, things it could achieve), 2) how that would actually look (concrete, detailed descriptions), and 3) how that would work (the technical stuff). A lot of content I’ve seen focuses on the first and third levels, but skips over the middle one. Here's a figure for the visually inclined:

Why is this important?

For me, having a very surface-level understanding of something stunts thought. Because I can’t imagine how it might look, I struggle to imagine other problems, possibilities, or solutions. Plus, big risks are already hard to imagine and hard to feel, which isn’t great for our judgement of those risks or our motivation to work on them.

What could help?

I imagine the go-to response to this is - "check out some fiction stories". I think that’s a great idea if your audience is willing to invest time into finding and reading these. But I think fleshed-out examples have a place beyond fiction.

If you’re introducing people to the idea of AI (e.g., yo...
