The Machine Intelligence Research Institute (MIRI) has submitted a response to the proposed AI Diffusion Rule from the Bureau of Industry and Security (BIS). Our comments do not apply exclusively to the Diffusion Framework; they address U.S. AI export control policy more generally, and other rules would also benefit from the considerations we discuss here. Our comments cover four topics:

1. The interaction between the Diffusion Framework and catastrophic AI risks, especially the role of a coordinated pause on dangerous AI development.

2. Important considerations for making these rules and updating them in the future, such as algorithmic progress, the focus on pre-training versus other AI workloads, and the increasing importance of AI model weight security.

3. Specific changes we believe would improve the Diffusion Framework, including changes to compute allocations and to the definition of controlled models as it relates to open-weight models.

4. Our general support for the Diffusion Framework as an approach to improving U.S. national security and reducing catastrophic AI risks.