What can I do?

Reducing potential risk from advanced AI systems is an unsolved, difficult task. The communities working on this problem are relatively small, and the pathways to helping are uncertain. However, here are some candidate ways to reduce risk:

AI Alignment Research & Engineering: Making progress on technical AI alignment, through research and engineering

AI Governance: Developing global norms, policies, and institutions to increase the chances that advanced AI is beneficial

Support: Providing support for others working in AI alignment (e.g. operations roles)

Discussion: Engaging in discussion about these risks with colleagues

Learn More About AI Alignment: The fields of AI alignment and AI governance are still in their formative stages, so it is important to thoroughly understand the theoretical and empirical problems of alignment, as well as current work in these areas.


If the arguments for working to reduce risks from advanced AI systems seem substantive to you, consider getting involved: the field is pre-paradigmatic and needs many more thoughtful researchers, engineers, and support staff. We encourage you to investigate the resources below. Finally, if you would like guidance or connections and are interested in conducting research in AI alignment:

Book a call


Technical AI Alignment Research and Engineering

Overview of the space

There are different subareas and research approaches within the field of AI alignment, and you may be a better fit for some than others.

Funding sources // Job board

Guides to getting involved

Interested in working in China?

AI alignment is a relatively new subfield, and its research is currently centered in the US and Europe. State-of-the-art AI development is not limited to the US and Europe, however, and there is a particular need for new technical approaches and AI alignment research based in China.



AI Governance

The Center for the Governance of AI (GovAI) describes the AI governance problem as "the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI" (GovAI Research Agenda). "AI governance" (or "longtermist AI governance") is distinct from "AI policy" in its primary focus on advanced AI, a hypothetical general-purpose technology, rather than on AI systems as they exist today. Researchers in the field hold that focusing on advanced AI systems surfaces different actions, risks, and opportunities than focusing on more contemporary issues, though AI governance and AI policy naturally interface with each other and have overlapping domains. AI governance has historical roots in the study of existential risk and in the Effective Altruism community, so much current research draws on these communities, but the field is expanding with time.


Book a call

If you're interested in working on AI alignment and advanced AI safety, please book a call with Vael Gates, who leads this project and conducted the interviews as part of their postdoctoral work at Stanford University.