Announcing: MIRI Technical Governance Team Research Fellowship

  • William Brewer
  • Peter Barnett
Dec 16, 2025

MIRI’s Technical Governance Team plans to run a small research fellowship program in early 2026. The program will run for eight weeks and include a $1,200/week stipend. Fellows are expected to work on their projects 40 hours per week. The program is remote by default, with an in-person kickoff week in Berkeley, CA (flights and housing provided). Participants who already live in or near Berkeley are welcome to use our office for the duration of the program.

Fellows will spend the first week either selecting a scoped project from a list provided by our team or designing an independent research project related to our overall agenda, and will then spend the remaining seven weeks working on that project under the guidance of our Technical Governance Team. One of the main goals of the program is to identify full-time hires for the team.

If you are interested in participating, please fill out this application as soon as possible (it should take 45–60 minutes). We plan to set dates for participation based on applicant availability, but we expect the fellowship to begin after February 2, 2026 and end before August 31, 2026 (i.e., some 8-week period in spring/summer 2026).

Strong applicants care deeply about existential risk, have prior experience in research or policy work, and are able to work autonomously for long stretches on topics that merge technical and political considerations.

Unfortunately, we are not able to sponsor visas for this program.

Here are a few example projects we could imagine fellows pursuing during their time in the program:

Adversarial detection of ML training on monitored GPUs: Investigate which hardware signals and side-channel measurements can most reliably distinguish ML training from other intensive workloads in an adversarial setting.
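
For a flavor of the empirical side of this one, here is a toy sketch (an illustrative heuristic, not a proposed verification mechanism) that samples a GPU's power draw through NVML and computes a crude periodicity score, on the assumption that iterative training tends to produce more regular power oscillations than many other workloads. A capable adversary could shape their workload to defeat exactly this kind of signal, which is what makes the adversarial setting the hard part.

```python
# Toy sketch: sample GPU power via NVML and score how periodic the
# trace is. Training loops often show regular per-step oscillations;
# this is an illustrative heuristic, not a robust detector.
import time
import numpy as np
import pynvml  # pip install nvidia-ml-py

def sample_power(duration_s: float = 30.0, hz: float = 10.0,
                 device_index: int = 0) -> np.ndarray:
    """Collect a power trace (in watts) from one GPU."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
    samples = []
    for _ in range(int(duration_s * hz)):
        # NVML reports power in milliwatts.
        samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
        time.sleep(1.0 / hz)
    pynvml.nvmlShutdown()
    return np.array(samples)

def periodicity_score(trace: np.ndarray) -> float:
    """Fraction of (non-DC) spectral power concentrated in the single
    strongest frequency bin; closer to 1.0 means more periodic."""
    spectrum = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
    ac = spectrum[1:]  # drop the DC component
    total = ac.sum()
    return float(ac.max() / total) if total > 0 else 0.0

if __name__ == "__main__":
    trace = sample_power()
    print(f"periodicity score: {periodicity_score(trace):.3f}")
```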

Confidence-building measures to facilitate international acceptance of an agreement: Analyze historical arms control and treaty negotiations to identify which confidence-building measures could help distrustful nations successfully collaborate on an international halt to AI development before verification mechanisms are in place.

Interconnect bandwidth limits / "fixed-sets": Flesh out the security assumptions, efficacy, and implementation details of a verification mechanism that would restrict AI cluster sizes by severely limiting the external communication bandwidth of chip pods.
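
To see why such a cap would bite, consider a back-of-envelope calculation (all numbers below are made-up assumptions for illustration): synchronous data-parallel training across pods requires exchanging gradients on the order of the model size every step, so a hard cap on pod egress bandwidth puts a floor on the time per cross-pod synchronization.

```python
# Back-of-envelope: a hard cap on a pod's external bandwidth bounds
# cross-pod synchronous training. All numbers are illustrative assumptions.
model_params = 1e12        # assumed model size: 1T parameters
bytes_per_value = 2        # fp16 gradients
egress_gbit_s = 10         # hypothetical capped pod egress bandwidth

sync_bits = model_params * bytes_per_value * 8   # one gradient exchange
sync_hours = sync_bits / (egress_gbit_s * 1e9) / 3600
print(f"lower bound per cross-pod gradient sync: {sync_hours:.2f} h")
# -> ~0.44 h per exchange at these numbers, versus seconds for a
# typical training step, making multi-pod synchronous training
# impractical by design.
```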

The security of existing AI chips for international agreement verification: Investigate whether the common assumption that current AI chips are too insecure for remote verification is actually true, or whether existing chips (potentially augmented with measures like video surveillance) could suffice without requiring years of new chip development.

Monitoring AI chip production during an AI capabilities halt: Produce detailed technical guidance for how governments and international institutions could effectively monitor AI chip production as part of an international agreement halting AI capabilities advancement.

Executive power to intervene in AI development: Analyze the legal powers relevant to the U.S. President’s ability to halt AI development or govern AI more broadly.

Subnational and non-state actor inclusion in AI governance: Analyze how international AI agreements could account for non-state actors (companies, research institutions, individuals) who control critical capabilities, drawing on precedents from environmental and cyber governance.

Mapping and preparing for potential AI warning shots: Identify the most plausible near-term AI incidents or capability demonstrations that could shift elite and public opinion toward supporting stronger AI governance measures. For each scenario, develop policy responses, communication strategies, and institutional preparations.
