Nobel laureate among group of top scientists to pen letter warning countries of AI risks
Latest AI models too powerful to let them develop without democratic oversight, researchers say
24 October 2023 - 09:53
by Supantha Mukherjee
Stockholm — Artificial intelligence (AI) companies and governments should allocate at least one-third of their AI research and development funding to ensuring the safe and ethical use of the systems, top AI researchers said in a letter on Tuesday.
The letter, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks.
Signatories to the letter include Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song and Yuval Noah Harari.
“Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented,” according to the letter, signed by three Turing Award winners, a Nobel laureate, and more than a dozen AI academics.
Currently there are no broad-based regulations focused on AI safety, and the EU's first set of AI legislation has yet to become law, as lawmakers have yet to agree on several issues.
“Recent state-of-the-art AI models are too powerful and too significant to let them develop without democratic oversight,” said Yoshua Bengio, one of the three researchers known as the godfathers of AI. “It [investment in AI safety] needs to happen fast, because AI is progressing much faster than the precautions taken,” he said.
Since the launch of OpenAI’s generative AI models, academics and prominent CEOs such as Elon Musk have warned about the risks of AI, with some calling for a six-month pause in the development of powerful AI systems.
Some companies have pushed back against such calls, saying they would face high compliance costs and disproportionate liability risks.
“Companies will complain that it’s too hard to satisfy regulations — that ‘regulation stifles innovation’ — that’s ridiculous,” said British computer scientist Stuart Russell. “There are more regulations on sandwich shops than there are on AI companies.”
Reuters