AIAP: An Overview of Technical AI Alignment with Rohin Shah (Part 1)
Future of Life Institute Podcast - A podcast by the Future of Life Institute
The space of AI alignment research is highly dynamic, and it's often difficult to get a bird's-eye view of the landscape. This podcast is the first of two parts attempting to partially remedy this by providing an overview of the organizations participating in technical AI alignment research, their specific research directions, and how these approaches all come together to make up the state of technical AI alignment efforts. In this first part, Rohin moves sequentially through the technical research organizations in this space, carving up the field by their varying research philosophies. We also dive into the specifics of many different approaches to AI safety, explore where they disagree, discuss what properties varying approaches attempt to develop or preserve, and hear Rohin's take on these different approaches.

You can take a short (3-minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7

In this podcast, Lucas spoke with Rohin Shah. Rohin is a fifth-year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.

Topics discussed in this episode include:
- The perspectives of CHAI, MIRI, OpenAI, DeepMind, FHI, and others
- Where and why they disagree on technical alignment
- The kinds of properties and features we are trying to ensure in our AI systems
- What Rohin is excited and optimistic about
- Rohin's recommended reading and advice for improving at AI alignment research