“The Short Timelines Strategy for AI Safety University Groups” by Josh Thorsteinson
EA Forum Podcast (All audio) - A podcast by the EA Forum Team

Advice for AI safety university group organizers.

Acknowledgements

I collected most of these ideas in early January while attending OASIS 4.0, a three-day workshop for AI safety university group organizers in Berkeley, CA. Thank you to everyone who gave their input, and a special thanks to Cole Salvador, Jeremy Kintana, Neav Topaz, Tzu Kit Chan, and Chris Tardy for your feedback on earlier versions of this writeup. The contents of this post don't necessarily reflect the opinion of anyone but myself. All mistakes are my own.

Summary

Given short timelines, AI safety university groups should:

- Prioritize grad students and skilled researchers who could have a meaningful impact within 2-3 years
- Run selective upskilling programs and small, high-context gatherings
- Protect researcher time for those doing impactful work
- Have a succession plan with clear documentation
- Invest in AI governance fieldbuilding and community wellbeing
- Be cautious and coordinated in advocacy

[...]

---

Outline:
(00:11) Acknowledgements
(00:45) Summary
(01:17) Context
(03:05) Resource Allocation
(03:09) High-Priority Activities
(04:06) Technical vs. Governance
(05:47) Time Management
(06:36) Succession Planning
(07:59) Community Building
(08:04) Recommendations
(08:24) Community Support and Wellbeing
(09:41) Advocacy
(09:46) A Word of Caution
(10:51) Potential Activities
(11:29) Should You Stage a Protest?

The original text contained 2 footnotes which were omitted from this narration.

---

First published: March 7th, 2025

Source: https://forum.effectivealtruism.org/posts/2H6kub6cNptYsgCA8/the-short-timelines-strategy-for-ai-safety-university-groups

---

Narrated by TYPE III AUDIO.