Mapping IAEA Verification Tools to AI Governance: A Mechanism-by-Mechanism Analysis
Abstract: The speed of AI advancement currently outstrips governance mechanisms. Advancing common understanding among all AI stakeholders – from governments to commercial developers to independent researchers – is imperative to the successful integration of assessment and verification measures. The International Atomic Energy Agency (IAEA) provides a mature, technical precedent for nuclear-focused, multilateral verification. This research does not argue for or against an “IAEA for AI” – a topic widely covered in existing literature – but instead assesses specific verification tools and mechanisms with a track record of success within the IAEA nuclear safeguards system, analyzing which could be applicable to AI governance. While many IAEA tools depend on traditional measurements of nuclear material, several – such as open-source analysis and voluntary reporting – do not rely on measures of radioactivity or fissile material. A select number of these tools could therefore inform feasible, short-term policy changes that various stakeholders could implement in the near future.
Christina Krawec is an international security professional specializing in nuclear nonproliferation, safeguards, AI policy, and space. She is the Founder of Earthnote LLC, an independent consulting company providing open-source and satellite-imagery analytical expertise for clients spanning the US laboratory system, think tanks, tech, and academia. Previously, Christina worked at the International Atomic Energy Agency, the US Department of Defense, Google, and various other research-focused organizations. She holds a Master’s degree in Non-proliferation & International Security from King’s College London and a Bachelor’s degree in International Relations and Music (double major) from Stanford University.
Technical Contact: Brad Roberts
Event Manager: Katie Thomas, thomas94 [at] llnl.gov




