Mechanisms to Verify International Agreements About AI Development

Aaron Scher
November 27, 2024

International agreements about AI development may be required to reduce catastrophic risks from advanced AI systems. Agreements about such a high-stakes technology, however, must be backed by verification mechanisms: processes or tools that give one party greater confidence that another is following the agreed-upon rules, typically by detecting violations. This report gives an overview of potential verification approaches for three example policy goals, aiming to demonstrate how countries could practically verify claims about each other’s AI development and deployment. The focus is on international agreements and state-involved AI development, but these approaches could also be applied to domestic regulation of companies. While many of the ideal technical solutions for verification are not yet feasible, we emphasize that increased access (e.g., physical inspections of data centers) can often substitute for them, given sufficient political will from the relevant actors. We therefore remain hopeful that such political will could enable ambitious international coordination, backed by strong verification mechanisms, to reduce catastrophic AI risks.