EXPLAINER: What’s in the Bletchley Declaration on AI?

Victor Barreiro Jr.

AI SAFETY SUMMIT. A general view during the first plenary session on Day 1 of the AI Safety Summit at Bletchley Park in Bletchley, Britain on November 1, 2023.

Leon Neal/Pool via Reuters

The Bletchley Declaration – signed by the EU and 28 countries, including the Philippines – acknowledges the need for international cooperation towards transparency and accountability in AI development and mitigating the risks posed by AI

The Bletchley Declaration on artificial intelligence (AI), released during the AI Safety Summit in the United Kingdom from November 1 to 2, acknowledges both the potential benefits and risks of AI and the need for adequate safeguards in its development.

Acknowledging the good

The Bletchley Declaration affirms that “for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.”

The declaration acknowledges that AI systems are already part of daily life across various sectors, such as housing, employment, transportation, education, health, accessibility, and justice, and that their use is likely to grow.

As such, the signatories affirmed the need to develop AI safely so as to realize the “transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally.”

“This includes,” the declaration states, “for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realize the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.”

The risks of artificial intelligence

At the same time, the signatories noted the stakes of AI development by acknowledging its risks, and affirmed the urgent need to understand those risks while working towards developing AI for good.

These risks may come from “potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict.”

“We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

“Frontier AI” was defined at the summit as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.” This includes large language models (LLMs) such as those powering popular user-facing platforms like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Bard.

Working together

The signatories agreed that “risks arising from AI are inherently international in nature, and so are best addressed through international cooperation.”

This means working together to ensure “human-centric, trustworthy and responsible AI” and recognizing that countries “should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximizes the benefits and takes into account the risks associated with AI.” 

The declaration added, “All actors have a role to play in ensuring the safety of AI: nations, international fora and other initiatives, companies, civil society and academia will need to work together.”

Looking at the frontier

Additionally, the Bletchley Declaration looks towards frontier AI development, or the development of AI capabilities and systems “which are unusually powerful and potentially harmful.”

By working together and supporting an inclusive network of scientific research on frontier AI safety, the signatories hope to develop the “best science available for policy making and the public good.”

The declaration acknowledges those working in frontier AI development have “a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures.”

Who signed the Bletchley Declaration?

The signatories were:

  • Australia
  • Brazil
  • Canada
  • Chile
  • China
  • European Union
  • France
  • Germany
  • India
  • Indonesia
  • Ireland
  • Israel
  • Italy
  • Japan
  • Kenya
  • Kingdom of Saudi Arabia
  • Netherlands
  • Nigeria
  • The Philippines
  • Republic of Korea
  • Rwanda
  • Singapore
  • Spain
  • Switzerland
  • Türkiye
  • Ukraine
  • United Arab Emirates
  • United Kingdom of Great Britain and Northern Ireland
  • United States of America

Additionally, references to “governments” and “countries” include international organizations acting in accordance with their legislative or executive competences. –

Victor Barreiro Jr.

Victor Barreiro Jr. is part of Rappler's Central Desk. An avid patron of role-playing games and science fiction and fantasy shows, he also yearns to do good in the world, and hopes his work with Rappler helps to increase the good that's out there.