We at the MIRI Technical Governance Team have released a report describing an example international agreement to halt the advancement towards artificial superintelligence. The agreement centers on limiting the scale of AI training and restricting certain AI research.

This post addresses a common objection to that proposed agreement: that some countries would refuse to join and might pursue ASI themselves or otherwise weaken the agreement, or that participating countries would simply cheat by pursuing covert ASI projects.
The Artificial Intelligence Risk Evaluation Act is an exciting step toward preventing catastrophic and existential risks from advanced artificial intelligence.