Resources
Start here to learn more about the field of AI Safety, and the ways you can work on this problem, both academically and from within industry.
As an emerging field, AI Safety has many definitions, and people have come to view it as important for many different reasons. We encourage you to explore a diverse set of opinions and sources in building your background understanding.
Article: The case for taking AI seriously as a threat to humanity by Kelsey Piper, Vox
Paper: The alignment problem from a deep learning perspective by Ngo et al, 2022.
Article: Preventing an AI-related catastrophe by Ben Hilton, 80,000 Hours
Video: Intro to AI Safety by Rob Miles
Report: Benefits & Risks of Artificial Intelligence by Max Tegmark, Future of Life Institute
Syllabus: AGI Safety Fundamentals Curriculum by Richard Ngo, OpenAI
Podcast: 80,000 Hours Podcast on Artificial Intelligence
A career in AI safety may be the most impactful way to spend your working hours. As a university group, much of our focus is on preparing students for such pursuits.
Career guide: How to pursue a career in AI governance from 80,000 Hours
Career guide: Guide to pursuing a career in technical AI safety from 80,000 Hours
Career guide: Your biggest opportunity to make a difference: our guide to what makes for a high-impact career from 80,000 Hours
More: Lots of Links from AI Safety Support
A growing collection of materials on AI Safety, including research, guides, and other useful resources. We will keep updating it as we find more.
Career pathway: Levelling Up in AI Safety Research Engineering by Gabriel Mukobi (Stanford AI Alignment)