Technical Governance Team

Technical research to inform better AI governance

We are a team at MIRI focused on technical research and analysis in service of AI governance goals: avoiding catastrophic and extinction risks, and ensuring that humanity successfully navigates the development of smarter-than-human AI.

Recent research

We respond to the Request for Comment on the AI Diffusion Rule. We discuss the interaction between export controls and AI risks, considerations for designing effective export controls, and specific changes to the Diffusion Framework.

This AI governance research agenda lays out our view of the strategic landscape and actionable research questions that, if answered, would provide important insight into how to reduce catastrophic and extinction risks from AI.

We respond to the Request for Information on the Development of an Artificial Intelligence (AI) Action Plan. We discuss the need for US state capacity, export controls, research into verification mechanisms, and international coordination.

Technical Governance Team Mission

AI systems are rapidly becoming more capable, and fundamental safety problems remain unsolved. Our goal is to increase the probability that humanity can safely navigate the transition to a world with smarter-than-human AI, focusing on technical research in service of four governance goals:

1. Coordination

Strengthen international coordination to allow for effective international agreements and reduce dangerous race dynamics.

2. Security

Establish robust security standards and practices for frontier AI development to reduce harms from model misalignment, misuse, and proliferation.

3. Development Safeguards

Ensure that dangerous AI development could be shut down if there were broad international agreement on the need to do so.

4. Regulation

Improve and develop domestic and international regulation of frontier AI to prepare for coming risks and to identify when current safety plans are likely to fail.

Frequently asked questions

Common questions on AI governance, collaboration, and future risks

What do you mean by “Technical Governance”?

What is your relationship with the rest of MIRI?

What risks from AI are you trying to prevent?

When do you think smarter-than-human AI will be developed?