EA - Do short timelines impact the tractability of 80k’s advice? by smk

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do short timelines impact the tractability of 80k's advice?, published by smk on January 5, 2023 on The Effective Altruism Forum.

Epistemic status: uncertain, looking to clarify my thinking on this.

Hello,

80k recommends going into a graduate programme in machine learning to work on the alignment problem. For someone just starting out their studies, finishing a PhD will take at least 6-10 years, including undergraduate and Master's studies.

Some put AGI take-off ~4 years away, while the median Metaculus prediction for weak AGI is 2027, with the first superintelligence ~10 months after the first AGI. (The first public 'general AI' system is predicted in 2038, which makes me a bit confused. I fail to see how there could be an 11-year gap between weak and 'strong' AI, especially with superintelligence arriving ~10 months after the first AGI. Am I missing something?)

To what extent is it still worthwhile to pursue a PhD in ML to work on the alignment problem when timelines are this short?

Best, smk

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
