This paper reflects some of our preliminary thoughts on how an AI halt could be achieved technically. As a workshop paper, its ideas are relatively nascent, and we hope they serve as a starting point for future work rather than the final word on this topic. We presented this paper at the ICML 2025 Technical AI Governance workshop.
Abstract:
The rapid development of AI systems poses unprecedented risks, including loss of control, misuse, geopolitical instability, and concentration of power. To navigate these risks and avoid worst-case outcomes, governments may proactively establish the capability for a coordinated halt on dangerous AI development and deployment. In this paper, we outline key technical interventions that could enable such a halt. We discuss how these interventions may contribute to restricting various dangerous AI activities, and show how they can form the technical foundation for potential AI governance plans.