AI Safety Fundamentals: Alignment
A podcast by BlueDot Impact
83 Episodes
Constitutional AI Harmlessness from AI Feedback
Published: 19/7/2024
Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Published: 19/7/2024
Illustrating Reinforcement Learning from Human Feedback (RLHF)
Published: 19/7/2024
Chinchilla’s Wild Implications
Published: 17/6/2024
Deep Double Descent
Published: 17/6/2024
Intro to Brain-Like-AGI Safety
Published: 17/6/2024
Eliciting Latent Knowledge
Published: 17/6/2024
Toy Models of Superposition
Published: 17/6/2024
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models
Published: 17/6/2024
Discovering Latent Knowledge in Language Models Without Supervision
Published: 17/6/2024
ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation
Published: 17/6/2024
Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions
Published: 17/6/2024
Imitative Generalisation (AKA ‘Learning the Prior’)
Published: 17/6/2024
An Investigation of Model-Free Planning
Published: 17/6/2024
Low-Stakes Alignment
Published: 17/6/2024
Gradient Hacking: Definitions and Examples
Published: 17/6/2024
Empirical Findings Generalize Surprisingly Far
Published: 17/6/2024
Compute Trends Across Three Eras of Machine Learning
Published: 13/6/2024
Worst-Case Thinking in AI Alignment
Published: 29/5/2024
Public by Default: How We Manage Information Visibility at Get on Board
Published: 12/5/2024
Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment