Interconnects
A podcast by Nathan Lambert
109 Episodes
Managing frontier model training organizations (or teams)
Published: 19/3/2025
Gemma 3, OLMo 2 32B, and the growing potential of open-source AI
Published: 13/3/2025
Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL
Published: 12/3/2025
Elicitation, the simplest way to understand post-training
Published: 10/3/2025
Where inference-time scaling pushes the market for AI companies
Published: 5/3/2025
GPT-4.5: "Not a frontier model"?
Published: 28/2/2025
Character training: Understanding and crafting a language model's personality
Published: 26/2/2025
Claude 3.7 thonks and what's next for inference-time scaling
Published: 24/2/2025
Grok 3 and an accelerating AI roadmap
Published: 18/2/2025
An unexpected RL Renaissance
Published: 13/2/2025
Deep Research, information vs. insight, and the nature of science
Published: 12/2/2025
Making the U.S. the home for open-source AI
Published: 5/2/2025
Why reasoning models will generalize
Published: 28/1/2025
Interviewing OLMo 2 leads: Open secrets of training language models
Published: 22/1/2025
DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs
Published: 21/1/2025
Let me use my local LMs on Meta Ray-Bans
Published: 15/1/2025
(Voiceover) DeepSeek V3 and the actual cost of training frontier AI models
Published: 9/1/2025
The state of post-training in 2025
Published: 8/1/2025
Quick recap on the state of reasoning
Published: 2/1/2025
(Voiceover) 2024 Interconnects year in review
Published: 31/12/2024
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
