The world awakes to dangers of AI
There’s a pushback but it’s not clear who will win
It’s not often you get the Western powers of the US, UK and EU to agree with China. But all these global powerhouses know artificial intelligence (AI) poses risks to humanity.
Twenty-eight countries signed the Bletchley Declaration this month, the first international agreement of its kind on the dangers of AI. “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” reads the declaration. It is named for Bletchley Park outside London, where British codebreakers, led by the genius Alan Turing, cracked the Germans’ World War 2 Enigma code.
After the launch of generative AI chatbot ChatGPT last November, the power, potential and pace of this still nascent technology have stunned the world. It really is a game-changing technology.
After years of development by start-up OpenAI, the software has scraped and ingested reams of data from the internet to build its large language models — the vast statistical models of language that underpin its abilities.
In the year since that release, OpenAI has shipped a newer, more powerful version (GPT-4) and so-called plug-ins, or extensions, that allow the chatbot to draw on current information — as opposed to data gleaned months or years ago.
OpenAI, which counts Tesla and SpaceX CEO Elon Musk and Microsoft as early investors, faces competition from the other major cloud players (Google and Amazon, which are heavily investing in AI start-ups). Microsoft has baked OpenAI’s technology into all of its offerings, calling it Copilot.
At the Bletchley Park event, Musk, who recently announced an AI chatbot for X, called Grok, warned that “for the first time, we have a situation where there’s something that is going to be far smarter than the smartest human”.
He added: “It’s not clear to me we can actually control such a thing.”
Musk is not the only global figure worried about the potential dangers of AI.
In April a group of scientists and tech luminaries wrote an open letter calling on developers to “pause giant AI experiments” for six months.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one — not even their creators — can understand, predict, or reliably control,” the letter read. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Earlier this month Sapiens author and historian Yuval Noah Harari warned that AI could cause a global financial crisis with “catastrophic” consequences.
“With AI, what you’re talking about is a very large number of dangerous scenarios, each of them having a relatively small probability that taken together ... constitutes an existential threat to the survival of human civilisation,” Harari told The Guardian.
The Bletchley Declaration was a “very important step forward”, said Harari.
“Maybe the most encouraging or hopeful thing about it is that they could get not just the EU, the UK and the US, but also the government of China to sign the declaration,” he said.
“I think that was a very positive sign. Without global co-operation, it will be extremely difficult, if not impossible, to rein in the most dangerous potential of AI.”
Part of the summit’s agreements is that developers of the most advanced AI models will co-operate with the US, UK and EU on testing those models before they are released.
Despite this, the scientist who organised the April open letter warns that “we’re witnessing a race to the bottom that must be stopped”.
Massachusetts Institute of Technology professor Max Tegmark told The Guardian last month: “We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk.”
Regulation is “critical to safe innovation, so that a handful of AI corporations don’t jeopardise our shared future”.
Last month US President Joe Biden signed an executive order requiring AI companies to share their safety test results, develop ways to watermark content to counter deep fakes, and guard against AI models being used to engineer dangerous biological material or other harmful compounds.
“We’re going to see more technological change in the next 10, maybe next five years than we’ve seen in the past 50 years,” Biden said. “AI is all around us. Much of it is making our lives better ... but in some cases AI is making life worse.”
That is the ultimate goal: to make AI a net positive and help make humanity better, not worse. This, bizarrely, is what we humans have been trying to do to ourselves for thousands of years. In essence, humanity has to find a way to make AI less flawed than we are. Good luck, people.