Technical Governance Team

Technical research to inform better AI governance

We are a team at MIRI focused on technical research and analysis in service of AI governance goals: avoiding catastrophic and extinction risks, and ensuring that humanity successfully navigates the development of smarter-than-human AI.

Recent research


This workshop paper catalogs the compute used to develop the algorithmic innovations behind open-source frontier models. We assess trends in the compute used in these development experiments and investigate the implications for compute caps.

This workshop paper discusses technical interventions that could contribute to a global halt on dangerous AI activities. We then provide a breakdown of which interventions play a role in various AI governance plans.

We respond to the Request for Comment about the AI Diffusion Rule. We discuss the interaction between export controls and AI risks, considerations for designing effective export controls, and specific changes to the Diffusion Framework.

Technical Governance Team Mission

AI systems are rapidly becoming more capable, and fundamental safety problems remain unsolved. Our goal is to increase the probability that humanity can safely navigate the transition to a world with smarter-than-human AI, focusing on technical research in service of governance goals.

We pursue four governance goals:

1. Coordination: Strengthen international coordination to allow for effective international agreements and reduce dangerous race dynamics.

2. Security: Establish robust security standards and practices for frontier AI development, to reduce harms from model misalignment, misuse, and proliferation.

3. Development Safeguards: Ensure that dangerous AI development could be shut down if there were broad international agreement on the need for this.

4. Regulation: Improve and develop domestic and international regulation of frontier AI, to prepare for coming risks and identify when current safety plans are likely to fail.

Frequently asked questions

Common questions on AI governance, collaboration, and future risks

What do you mean by “Technical Governance”?

What is your relationship with the rest of MIRI?

What risks from AI are you trying to prevent?

When do you think smarter-than-human AI will be developed?