Exclusive: Behind EU lawmakers’ challenge to rein in ChatGPT and generative AI

LONDON/STOCKHOLM, April 28 (Reuters) –

As recently as February, generative AI did not feature prominently in EU lawmakers’ plans to regulate AI technologies such as ChatGPT.

The bloc’s 108-page proposal for the AI Act, published two years ago, included only one mention of the word “chatbot.” References to AI-generated content largely referred to deepfakes: images or audio designed to impersonate humans.

However, by mid-April, MEPs were racing to update those rules to keep pace with the explosion of interest in generative AI, which has fueled fear and anxiety since OpenAI unveiled ChatGPT six months ago.

That scramble culminated on Thursday with a new draft of legislation that identified copyright protection as a key part of the effort to keep artificial intelligence in check.

Interviews with four lawmakers and other sources close to the debates reveal for the first time how over the course of just 11 days this small group of politicians came up with what would become landmark legislation, reshaping the regulatory landscape for OpenAI and its competitors.

The bill is not final and lawyers say it will likely take years to come into force.

Even so, the speed of their work is a rare example of consensus in Brussels, which is often criticized for the slow pace of its decision-making.

Last-minute changes

Since its launch in November, ChatGPT has become the fastest growing app in history, sparking a flurry of activity from Big Tech competitors and investment in AI startups like Anthropic and Midjourney.


The massive popularity of such applications has led EU industry chief Thierry Breton and others to call for regulation of ChatGPT-like services.

An organization backed by Elon Musk, the billionaire CEO of Tesla Inc (TSLA.O) and Twitter, raised the stakes by issuing a letter warning of existential risks from AI and calling for stricter regulation.

On April 17, dozens of MEPs involved in drafting the legislation signed an open letter endorsing some parts of Musk’s letter and urging world leaders to convene a summit to find ways to control the development of advanced artificial intelligence.

But on the same day, two of them — Dragos Tudorache and Brando Benifei — proposed changes that would force companies with generative AI systems to disclose any copyrighted material used to train their models, according to four sources present at the meetings, who requested anonymity due to the sensitivity of the discussions.

The sources said the tough new proposal had support across parties.

One proposal, by conservative MEP Axel Voss — forcing companies to request permission from rights holders before using data — was dismissed as too restrictive, a requirement that could derail the nascent industry.

Over the following week, as the details were hammered out, lawmakers settled on proposals that could impose an uncomfortable level of transparency on a notoriously secretive industry.

“I have to admit I was positively surprised by how easily we converged on what should be in the text on these models,” Tudorache told Reuters on Friday.

“It shows that there is a strong consensus, a common understanding about how to regulate at this point in time.”


The committee will vote on the deal on May 11 and, if successful, it will advance to the next stage of negotiations, the trilogue, in which EU member states will debate the contents with the European Commission and Parliament.

“We are waiting to see whether the deal holds until then,” a source familiar with the matter said.

BIG BROTHER VS. THE TERMINATOR

Until recently, members of the European Parliament were still not convinced that generative AI deserved any special consideration.

In February, Tudorache told Reuters that generative AI would “not be covered” in depth. “This is another discussion that I don’t think we will deal with in this text,” he said.

Citing data-security risks rather than warnings about human-like intelligence, he said, “I’m more afraid of Big Brother than I am of the Terminator.”

But Tudorache and his colleagues now agree that laws specifically targeting the use of generative AI are needed.

Under new proposals targeting “foundation models,” companies like OpenAI, which is backed by Microsoft Corp (MSFT.O), would have to disclose any copyrighted material — books, photographs, videos, and more — used to train their systems.

Allegations of copyright infringement have dogged AI companies in recent months: Getty Images sued Stability AI, the maker of Stable Diffusion, for using copyrighted images to train its systems, and OpenAI has faced criticism for refusing to share details of the data set used to train its software.

“There have been calls from outside and within Parliament to ban ChatGPT or classify it as high-risk,” said MEP Svenja Hahn. “The final compromise is favorable to innovation because it does not classify these models as ‘high risk’, but rather sets requirements for transparency and quality.”


Additional reporting by Martin Coulter in London and Supantha Mukherjee in Stockholm; Editing by Josephine Mason, Kenneth Li and Matthew Lewis

