Second global AI summit discusses how to keep AI in check
Samsung, Tencent, Google, Meta and OpenAI are among the 16 companies pledging to publish safety frameworks
21 May 2024 - 13:11
by Joyce Lee
Sixteen companies involved in artificial intelligence, including Alphabet’s Google, Meta, Microsoft and OpenAI have committed to safe development of the technology. Picture: DADO RUVIC/REUTERS
Seoul — Sixteen companies involved in artificial intelligence (AI), including Alphabet’s Google, Meta, Microsoft and OpenAI, as well as companies from China, South Korea and the United Arab Emirates (UAE) have committed to safe development of the technology.
The announcement unveiled in a UK government statement on Tuesday came as South Korea and Britain host a global AI summit in Seoul at a time when the breakneck pace of AI innovation leaves governments scrambling to keep up.
The agreement marks an increase in the number of companies making such commitments since the first global AI summit held six months ago, the statement said.
Zhipu.ai, backed by Chinese tech giants Alibaba, Tencent, Meituan and Xiaomi, as well as the UAE’s Technology Innovation Institute, were among the 16 companies pledging to publish safety frameworks on how they will measure the risks of frontier AI models.
The firms, also including Amazon, IBM and Samsung Electronics, voluntarily committed to not develop or deploy AI models if the risks could not be sufficiently mitigated, and to ensure governance and transparency on approaches to AI safety, the statement said.
“It’s vital to get international agreement on the ‘red lines’ where AI development would become unacceptably dangerous to public safety,” said Beth Barnes, founder of METR, a nonprofit focused on AI model safety.
The AI summit in Seoul this week aims to build on the broad agreement reached at the first summit, held in the UK, by addressing a wider array of risks.
At the November summit, Tesla’s Elon Musk and OpenAI CEO Sam Altman mingled with some of their fiercest critics, while China cosigned the “Bletchley Declaration” on collectively managing AI risks alongside the US and others.
British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol will oversee a virtual summit later on Tuesday, followed by a ministerial session on Wednesday.
This week’s summit would address “building ... on the commitment from the companies, also looking at how the [AI safety] institutes will work together”, Britain’s technology secretary, Michelle Donelan, said on Tuesday.
Since November, discussion on AI regulation had shifted from longer-term doomsday scenarios to “practical concerns” such as how to use AI in areas like medicine or finance, said Aidan Gomez, co-founder of large language model firm Cohere.
Industry participants wanted AI regulation that would give clarity and security on where the companies should invest, while avoiding entrenching big tech, Gomez said.
With countries such as the UK and US establishing state-backed AI safety institutes for evaluating AI models and others expected to follow suit, AI firms were also concerned about the interoperability between jurisdictions, analysts said.
Representatives of the Group of Seven (G7) major democracies were expected to take part in the virtual summit, while Singapore and Australia were also invited, a South Korean presidential official said.
China would not participate in the virtual summit but is expected to attend Wednesday’s in-person ministerial session, the official said.
South Korea’s foreign ministry said Musk, former Google CEO Eric Schmidt, Samsung Electronics chair Jay Y Lee and other AI industry leaders would participate in the summit.
Reuters