54 Episodes

  1. Owain Evans - AI Situational Awareness, Out-of-Context Reasoning

    Published: 23/8/2024
  2. [Crosspost] Adam Gleave on Vulnerabilities in GPT-4 APIs (+ extra Nathan Labenz interview)

    Published: 17/5/2024
  3. Ethan Perez on Selecting Alignment Research Projects (ft. Mikita Balesni & Henry Sleight)

    Published: 9/4/2024
  4. Emil Wallner on Sora, Generative AI Startups and AI optimism

    Published: 20/2/2024
  5. Evan Hubinger on Sleeper Agents, Deception and Responsible Scaling Policies

    Published: 12/2/2024
  6. [Jan 2023] Jeffrey Ladish on AI Augmented Cyberwarfare and compute monitoring

    Published: 27/1/2024
  7. Holly Elmore on pausing AI

    Published: 22/1/2024
  8. Podcast Retrospective and Next Steps

    Published: 9/1/2024
  9. Kellin Pelrine on Beating the Strongest Go AI

    Published: 4/10/2023
  10. Paul Christiano's views on "doom" (ft. Robert Miles)

    Published: 29/9/2023
  11. Neel Nanda on mechanistic interpretability, superposition and grokking

    Published: 21/9/2023
  12. Joscha Bach on how to stop worrying and love AI

    Published: 8/9/2023
  13. Erik Jones on Automatically Auditing Large Language Models

    Published: 11/8/2023
  14. Dylan Patel on the GPU Shortage, Nvidia and the Deep Learning Supply Chain

    Published: 9/8/2023
  15. Tony Wang on Beating Superhuman Go AIs with Adversarial Policies

    Published: 4/8/2023
  16. David Bau on Editing Facts in GPT, AI Safety and Interpretability

    Published: 1/8/2023
  17. Alexander Pan on the MACHIAVELLI benchmark

    Published: 26/7/2023
  18. Vincent Weisser on Funding AI Alignment Research

    Published: 24/7/2023
  19. [JUNE 2022] Aran Komatsuzaki on Scaling, GPT-J and Alignment

    Published: 19/7/2023
  20. Nina Rimsky on AI Deception and Mesa-optimisation

    Published: 18/7/2023


The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.
