Technical Governance Team

Technical research to inform better AI governance

We are a team at MIRI focused on technical research and analysis in service of AI governance goals: avoiding catastrophic and extinction risks and ensuring that humanity successfully navigates the development of smarter-than-human AI.

Recent research


We analyze what AI evaluations can and cannot do to prevent catastrophic risks. While evaluations can help establish lower bounds on AI capabilities, they face fundamental limitations and cannot be relied upon alone to ensure safety.

In this research report, we provide an in-depth overview of the mechanisms that could be used to verify adherence to international agreements on AI development.

We respond to the BIS Request for Comment on the Proposed Rule for Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters.

Technical Governance Team Mission

AI systems are rapidly becoming more capable, and fundamental safety problems remain unsolved. Our goal is to increase the probability that humanity can safely navigate the transition to a world with smarter-than-human AI. To that end, we focus on technical research in service of four governance goals:

1. Coordination: Strengthen international coordination to allow for effective international agreements and reduce dangerous race dynamics.

2. Security: Establish robust security standards and practices for frontier AI development to reduce harms from model misalignment, misuse, and proliferation.

3. Development Safeguards: Ensure that dangerous AI development could be shut down if there were broad international agreement on the need to do so.

4. Regulation: Improve and develop domestic and international regulation of frontier AI to prepare for coming risks and to identify when current safety plans are likely to fail.

Frequently asked questions

Common questions on AI governance, collaboration, and future risks

What do you mean by “Technical Governance”?

What is your relationship with the rest of MIRI?

What risks from AI are you trying to prevent?

When do you think smarter-than-human AI will be developed?