EU legislators back strict rules after OpenAI CEO's ouster
Voluntary agreements brokered by visionary leaders cannot be relied on, says Brando Benifei
London — As the EU edges closer to passing a wide-ranging set of laws governing artificial intelligence, legislators and experts say the surprise ousting of OpenAI CEO Sam Altman underscores the need for strict rules.
Altman, cofounder of the start-up that last year kicked off the generative AI boom, was abruptly fired by OpenAI’s board last week, sending shock waves through the tech world and prompting employees to threaten mass resignation.
Across the Atlantic, the European Commission, the European parliament and the EU Council have been hashing out the fine print of the AI Act, a sweeping set of laws that would require some companies to complete extensive risk assessments and make data available to regulators.
In recent weeks, talks have hit stumbling blocks over the extent to which companies should be allowed to self-regulate.
Brando Benifei, one of two European parliament legislators leading negotiations on the laws, said, “The understandable drama about Altman being sacked from OpenAI and now joining Microsoft shows us that we cannot rely on voluntary agreements brokered by visionary leaders.
“Regulation, especially when dealing with the most powerful AI models, needs to be sound, transparent and enforceable to protect our society.”
On Monday, France, Germany and Italy reached an agreement on how AI should be regulated, a move expected to accelerate negotiations at the European level.
The three governments support “mandatory self-regulation through codes of conduct” for those using generative AI models, but some experts said it will not be enough.
Alexandra van Huffelen, Dutch minister for digitalisation, said the OpenAI saga underscores the need for strict rules.
“The lack of transparency and the dependence on a few influential companies … clearly underlines the necessity of regulation,” Van Huffelen said.
Gary Marcus, an AI expert at New York University, wrote on the social media platform X: “We can’t really trust the companies to self-regulate AI where even their own internal governance can be deeply conflicted.
“Please don’t gut the EU AI Act; we need it now more than before.”