AI Safety Summit at Bletchley Park
Summary
Twenty-eight countries, including the United States and China, signed the Bletchley Declaration at the first global AI Safety Summit, acknowledging the potential risks of frontier AI and agreeing to cooperate on understanding and mitigating them. The summit also announced the creation of the UK AI Safety Institute, and major AI companies agreed to voluntary pre-deployment safety testing of their frontier models.
What Happened
On November 1-2, 2023, the United Kingdom hosted the first global AI Safety Summit at Bletchley Park, the historic site of World War II codebreaking. Representatives from 28 countries and the European Union — including the US, China, and other major AI-developing nations — gathered alongside leading AI companies and researchers.
The summit produced the Bletchley Declaration, a joint statement acknowledging that "frontier AI" — the most advanced AI systems — posed potential risks "of a serious, even catastrophic, nature." Signatories committed to international cooperation on understanding and mitigating these risks. Notably, China signed the declaration, marking a rare moment of US-China agreement on technology governance.
UK Prime Minister Rishi Sunak announced the creation of the UK AI Safety Institute (AISI), the world's first government body dedicated to evaluating frontier AI models for safety. The institute would be tasked with testing models before and after public deployment, initially through voluntary agreements with AI companies.
Major AI companies — including OpenAI, Anthropic, Google DeepMind, Meta, and others — agreed to allow pre-deployment safety testing of their frontier models by the new institute.
Why It Matters
The Bletchley summit was the first significant attempt at international AI governance, and it accomplished something many observers had considered impossible: getting both the US and China to sign a joint statement on AI risks. While the declaration was non-binding and critics dismissed it as insufficiently concrete, it established the principle that frontier AI safety was a legitimate subject for international coordination.
The creation of the UK AI Safety Institute established a template that other countries would follow. The US launched its own AI Safety Institute under NIST shortly thereafter, and other nations explored similar mechanisms. Whether these institutes would have real teeth — access to models, authority to delay deployment — remained an open question.
The summit also revealed the limits of voluntary governance. All commitments were non-binding, participation was optional, and the Declaration's language was deliberately vague enough to secure universal sign-on. Whether the Bletchley process would mature into something more substantive or remain diplomatic theater was, as of 2024, unresolved.