US, China and scores of governments sign AI safety ‘Bletchley Declaration’
Minister hails first collective effort to rein in frontier artificial intelligence risks
01 November 2023 - 19:00
By Paul Sandle and Martin Coulter
US Secretary of Commerce Gina Raimondo and British Secretary of State for Science, Innovation and Technology Michelle Donelan listen to China's Vice-Minister of Science and Technology Wu Zhaohui speaking on Day 1 of the AI Safety Summit at Bletchley Park in Bletchley, Britain, on November 1 2023. Picture: LEON NEAL/POOL VIA REUTERS
Britain on Wednesday published a “Bletchley Declaration”, agreed with countries including the US and China, aimed at boosting global efforts to co-operate on artificial intelligence (AI) safety.
The declaration, by 28 countries and the EU, was published on the opening day of the AI Safety Summit hosted at Bletchley Park in central England.
“The declaration fulfils key summit objectives in establishing shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration,” Britain said in a separate statement accompanying the declaration.
The declaration encouraged transparency and accountability from actors developing frontier AI technology on their plans to measure, monitor and mitigate potentially harmful capabilities.
“This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI — helping ensure the long-term future of our children and grandchildren,” British Prime Minister Rishi Sunak said.
British Prime Minister Rishi Sunak delivers a speech on AI in London, England, October 26 2023. Picture: PETER NICHOLLS/REUTERS
It set out a two-pronged agenda: identifying risks of shared concern and building a scientific understanding of them, and developing cross-country policies to mitigate them.
“This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research,” the declaration said.
In a first for Western efforts to manage the safe development of AI, a Chinese vice-minister joined leaders from the US and the EU at the summit, alongside tech bosses such as Elon Musk and Sam Altman of ChatGPT maker OpenAI.
“China is willing to enhance our dialogue and communication in AI safety with all sides, contributing to an international mechanism with global participation in governance framework that needs wide consensus,” Wu Zhaohui said at the start of the summit, according to an official translation of his remarks.
“Countries regardless of their size and scale have equal rights to develop and use AI,” he added.
Elon Musk in Bletchley, Britain, November 1 2023. Picture: LEON NEAL/REUTERS
Musk, who has warned about the risks of AI, said the summit wanted to establish a “third-party referee” for companies developing the technology, so that it could sound the alarm when risks emerge and thereby instil public confidence.
British science and technology secretary Michelle Donelan said it was an achievement just to get so many key players in one room.
“For the first time, we now have countries agreeing that we need to look not just independently but collectively at the risk around frontier AI,” she said.
Meanwhile, US vice-president Kamala Harris called for urgent action to protect the public and democracy from the dangers posed by AI, announcing a series of initiatives to address safety concerns about the technology.
In a speech at the US embassy in London, Harris spoke of the dangers AI could pose for individuals and the Western political system.
The technology has the potential to create “cyberattacks at a scale beyond anything we have seen before” or “AI-formulated bioweapons that could endanger the lives of millions”, she said.
“These threats are often referred to as the ‘existential threats of AI’, because they could endanger the very existence of humanity,” Harris said.
The US on Wednesday announced plans to establish a new AI Safety Institute, which will assess potential risks. Britain announced a similar initiative last week.
Reuters