494 Episodes

  1. Demystifying Reinforcement Learning in Agentic Reasoning

    Published: 19/10/2025
  2. Emergent coordination in multi-agent language models

    Published: 19/10/2025
  3. Learning-to-measure: in-context active feature acquisition

    Published: 19/10/2025
  4. Andrej Karpathy's insights: AGI, Intelligence, and Evolution

    Published: 19/10/2025
  5. Front-Loading Reasoning: The Synergy between Pretraining and Post-Training Data

    Published: 18/10/2025
  6. Representation-Based Exploration for Language Models: From Test-Time to Post-Training

    Published: 18/10/2025
  7. The attacker moves second: stronger adaptive attacks bypass defenses against LLM jailbreaks and prompt injections

    Published: 18/10/2025
  8. When can in-context learning generalize out of task distribution?

    Published: 16/10/2025
  9. The Art of Scaling Reinforcement Learning Compute for LLMs

    Published: 16/10/2025
  10. A small number of samples can poison LLMs of any size

    Published: 16/10/2025
  11. Dual Goal Representations

    Published: 14/10/2025
  12. Welcome to the Era of Experience

    Published: 14/10/2025
  13. Value Flows: Flow-Based Distributional Reinforcement Learning

    Published: 14/10/2025
  14. Self-Adapting Language Models

    Published: 12/10/2025
  15. The Markovian Thinker

    Published: 12/10/2025
  16. Moloch’s Bargain: Emergent misalignment when LLMs compete for audiences

    Published: 12/10/2025
  17. Transformer Predictor Dynamics and Task Diversity

    Published: 11/10/2025
  18. Base models know how to reason, thinking models learn when

    Published: 11/10/2025
  19. Spectrum tuning: Post-training for distributional coverage and in-context steerability

    Published: 11/10/2025
  20. Understanding Prompt Tuning and In-Context Learning via Meta-Learning

    Published: 11/10/2025


Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
