What can I do?
Reducing potential risk from advanced AI systems is a difficult, unsolved problem. The communities working on it are relatively small, and it is uncertain which pathways to helping will prove most effective. However, here are some candidates for reducing risk:
- AI Alignment Research & Engineering: making progress on technical AI alignment (research and engineering)
- AI Governance: developing global norms, policies, and institutions to increase the chances that advanced AI is beneficial
- Support: providing support for others working in AI alignment (e.g. operations roles)
- Discussion: engaging in discussion about these risks with colleagues
Learn more about AI Alignment
The technical fields of AI alignment and AI governance are still in their formative stages, making it important to thoroughly understand the theoretical and empirical problems of alignment, as well as current work in these areas.
If the arguments for working to reduce risks from advanced AI systems feel substantive to you, note that the field is pre-paradigmatic and needs many more thoughtful researchers, engineers, and support staff. We encourage you to investigate the resources below. Finally, if you are interested in conducting research in AI alignment and would like guidance or connections, book a call.
Technical AI Alignment Research and Engineering
Overview of the space
There are different subareas and research approaches within the field of AI alignment, and you may be a better fit for some than others.
- One of the major rough splits is:
  - theoretical research (e.g. Alignment Research Center, MIRI) versus
  - empirical research and engineering (e.g. Redwood Research, Anthropic, DeepMind's alignment teams, OpenAI's safety team).
- Academia (e.g. UC Berkeley's CHAI, NYU's Alignment Research Group, Jacob Steinhardt, David Krueger, Dan Hendrycks) and non-profit research organizations (e.g. Cooperative AI Foundation, FAR) often fall somewhere in between theoretical and empirical alignment work. (Empirical work often needs access to state-of-the-art models and so is resource intensive.)
- Engineering aimed at AI alignment is almost always in industry; ML engineers and research engineers are especially in demand, but there is also a range of other engineering roles, particularly in security and software.
- There's also work in technical AI governance (e.g. the Governance team at OpenAI) and forecasting. Finally, some independent researchers outside of academia and formal organizations publish on the AI Alignment Forum and LessWrong, and tend to do more theoretical work.
- A provisional list of alignment/safety organizations and examples of their work, as of Fall 2022: Shortform, Longform.
- For an orientation to the space's cultural components: AI Safety and Neighboring Communities: A Quick-Start Guide (Summer 2022) by Sam Bowman (NYU).
Funding sources and job boards
- Apply for funding from Open Philanthropy and the Long-Term Future Fund
- Opportunities in AGI Safety and Job Board
Guides to getting involved
- Research: FAQ: Advice for AI Alignment Researchers by Rohin Shah (DeepMind) or How to pursue a career in technical AI alignment
- Engineering: Levelling Up in AI Safety Research Engineering
- Overall: AI safety starter pack and book a call
Interested in working in China?
AI alignment is a relatively new subfield, and alignment research is currently centered in the US and Europe. State-of-the-art AI research and development is not limited to those regions, however, and there is a particular need for new technical approaches to, and research on, the AI alignment problem in China.
- If you're interested in working in technical alignment in China, please book a call or get in contact with Tianxia 天下 and Concordia Consulting 安远咨询.
- Newsletter on China's AI landscape: ChinAI Newsletter
- Overview by 80,000 Hours
AI Governance
The Center for the Governance of AI (GovAI) describes the AI governance problem as "the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI" (GovAI Research Agenda). "AI governance" or "longtermist AI governance" is distinct from "AI policy" in its primary focus on advanced AI, a hypothetical general-purpose technology, rather than AI as it exists today. The field's premise is that focusing on advanced AI systems surfaces different actions, risks, and opportunities than focusing on more contemporary issues, though AI governance and AI policy naturally interface with each other and have overlapping domains. AI governance has historical roots in the study of existential risk and the Effective Altruism community, so much current research draws from these communities, but the field is expanding with time.
- Read through the AI Governance Curriculum (highly recommended)
- One highlight: The longtermist AI governance landscape: a basic overview (related posts)
- If you're interested in a career in US AI policy: Overview by 80,000 Hours; Fellowship by Open Philanthropy; Job Board by 80,000 Hours
- If you're interested in law: Legal Priorities Project, and Gillian Hadfield (U. Toronto)
Book a call
If you're interested in working in AI alignment and advanced AI safety, please book a call with Vael Gates, who leads this project and conducted the interviews as part of their postdoctoral work at Stanford University.