550 Episodes

  1. LLM-based Conversational Recommendation Agents with Collaborative Verbalized Experience

    Published: 23/8/2025
  2. Signal and Noise: Evaluating Language Model Benchmarks

    Published: 23/8/2025
  3. Breaking Feedback Loops in Recommender Systems with Causal Inference

    Published: 21/8/2025
  4. RAG is Dead, Context Engineering is King: Building Reliable AI Systems

    Published: 20/8/2025
  5. A Survey of Personalization: From RAG to Agent

    Published: 20/8/2025
  6. Facilitating the Adoption of Causal Inference Methods Through LLM-Empowered Co-Pilot

    Published: 19/8/2025
  7. Performance Prediction for Large Systems via Text-to-Text Regression

    Published: 16/8/2025
  8. Sample More to Think Less: Group Filtered Policy Optimization for Concise Reasoning

    Published: 15/8/2025
  9. DINOv3: Vision Models for Self-Supervised Learning

    Published: 15/8/2025
  10. Agent Lightning: Training Any AI Agents with Reinforcement Learning

    Published: 14/8/2025
  11. Computational-Statistical Tradeoffs at the Next-Token Prediction Barrier

    Published: 14/8/2025
  12. From Model Weights to Agent Workflows: Charting the New Frontier of Optimization in Large Language Models

    Published: 12/8/2025
  13. Is Chain-of-Thought Reasoning a Mirage?

    Published: 12/8/2025
  14. Agentic Web: Weaving the Next Web with AI Agents

    Published: 11/8/2025
  15. The Assimilation-Accommodation Gap in LLM Intelligence

    Published: 10/8/2025
  16. The Minimalist AI Kernel: A New Frontier in Reasoning

    Published: 6/8/2025
  17. Statistical Rigor for Interpretable AI

    Published: 6/8/2025
  18. Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value

    Published: 4/8/2025
  19. A foundation model to predict and capture human cognition

    Published: 4/8/2025
  20. Generative Recommendation with Semantic IDs: A Practitioner’s Handbook

    Published: 4/8/2025


Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
