EU countries thrash out their position on foundation models, access to source codes and fines
04 December 2023 - 05:00
by Supantha Mukherjee, Foo Yun Chee and Martin Coulter
Stockholm/Brussels/London — EU legislators cannot agree on how to regulate systems such as ChatGPT, in a threat to landmark legislation aimed at keeping artificial intelligence (AI) in check, six sources told Reuters.
Negotiators met on Friday for crucial discussions ahead of final talks scheduled for December 6. “Foundation models”, or generative AI, have become the main hurdle in talks over the EU’s proposed AI Act, said the sources, who declined to be identified because the discussions are confidential.
Foundation models like the one built by Microsoft-backed OpenAI are AI systems trained on large sets of data, with the ability to learn from new data to perform various tasks.
After two years of negotiations, the bill was approved by the European parliament in June. The draft AI rules now need to be agreed through meetings between representatives of the European parliament, the council and the European Commission.
Experts from EU countries on Friday thrashed out their position on foundation models, access to source codes, fines and other topics while legislators from the European parliament are also gathering to finalise their stance.
If they cannot agree, the act risks being shelved due to lack of time before European parliamentary elections next year.
While some experts and legislators have proposed a tiered approach for regulating foundation models, defined as those with more than 4-million users, others have said smaller models could be equally risky.
But the biggest challenge to getting an agreement has come from France, Germany and Italy, who favour letting makers of generative AI models self-regulate instead of having hard rules.
In a meeting of the countries’ economy ministers on October 30 in Rome, France persuaded Italy and Germany to support a proposal, sources said.
Until then, negotiations had gone smoothly, with legislators making compromises across several other conflict areas such as regulating high-risk AI, sources said.
Self-regulation?
European parliamentarians, EU commissioner Thierry Breton and scores of AI researchers have criticised self-regulation.
In an open letter this week, researchers such as Geoffrey Hinton warned self-regulation is “likely to dramatically fall short of the standards required for foundation model safety”.
France-based AI company Mistral and Germany’s Aleph Alpha have criticised the tiered approach to regulating foundation models, winning support from their respective countries.
A source close to Mistral said the company favours hard rules for products, not the technology on which they are built.
“Though the concerned stakeholders are working their best to keep negotiations on track, the growing legal uncertainty is unhelpful to European industries,” said Kirsten Rulf, partner and associate director at Boston Consulting Group.
“European businesses would like to plan for next year, and many want to see some kind of certainty around the EU AI Act going into 2024,” she said.
Other pending issues in the talks include the definition of AI, fundamental rights impact assessments, and law enforcement and national security exceptions, sources said.
Legislators have also been divided over the use of AI systems by law enforcement agencies for biometric identification of individuals in publicly accessible spaces and could not agree on several of these topics in a meeting on November 29, sources said.
Spain, which holds the EU presidency until the end of the year, has proposed compromises in a bid to speed up the process.
If a deal does not happen in December, the next presidency, Belgium, will have a couple of months to secure one before the legislation is likely shelved ahead of European elections.
“Had you asked me six or seven weeks ago, I would have said we are seeing compromises emerging on all the key issues,” said Mark Brakel, director of policy at the Future of Life Institute, a nonprofit aimed at reducing risks from advanced AI.
“This has now become a lot harder,” he said.
Reuters