EU lawmakers eye tiered approach to regulating generative AI

EU lawmakers in the European parliament are closing in on how to tackle generative AI as they work to finalize their negotiating position so that the next stage of legislative talks can kick off in the coming months.

The hope then is that a final consensus on the bloc’s draft law for regulating AI can be reached by the end of the year.

“This is the last thing still standing in the negotiation,” says MEP Dragos Tudorache, the co-rapporteur for the EU’s AI Act, discussing MEPs’ talks around generative AI in an interview with TechCrunch. “As we speak, we are crossing the last ‘T’s and dotting the last ‘I’s. And sometime next week I’m hoping that we will actually close — which means that sometime in May we will vote.”

The Council adopted its position on the regulation back in December. But where Member States largely favored deferring decisions on generative AI to additional implementing legislation, MEPs look set to propose adding hard requirements to the Act itself.

In recent months, tech giants’ lobbyists have been pushing in the opposite direction, of course, with companies such as Google and Microsoft arguing for generative AI to get a carve-out from the incoming EU AI rules.

Where things will end up remains to be seen. But discussing the parliament’s likely position on generative AI, Tudorache suggests MEPs are gravitating towards a layered approach — three layers, in fact: one to address responsibilities across the AI value chain; another to ensure foundational models get some guardrails; and a third to tackle specific content issues attached to generative models, such as OpenAI’s ChatGPT.

Under the MEPs’ current thinking, one of these three layers would apply to all general purpose AIs (GPAIs) — whether big or small, foundational or non-foundational — and be focused on regulating relationships in the AI value chain.

“We think that there needs to be a level of rules that says when ‘entity A’ puts on the market a general purpose [AI], it has an obligation towards ‘entity B’, downstream, that buys the general purpose [AI] and actually gives it a purpose,” he explains. “Because it gives it a purpose that might become high risk, it needs certain information. In order to comply [with the AI Act] it needs to explain how the model was trained. The accuracy of the data sets from biases [etc].”

A second proposed layer would address foundational models — by setting some specific obligations for makers of these base models.

“Given their power, given the way they are trained, given the versatility, we believe the providers of these foundational models need to do certain things — both ex ante… but also during the lifetime of the model,” he says. “And it has to do with transparency, it has to do, again, with how they train, how they test prior to going on the market. So basically, what is the level of diligence, the responsibility, that they have as developers of these models?”

The third layer MEPs are proposing would target generative AIs specifically — meaning a subset of GPAIs/foundational models, such as large language models or generative art and music AIs. Here lawmakers working to set the parliament’s mandate are taking the view that these tools need even more specific responsibilities, both when it comes to the type of content they can produce (with early risks arising around disinformation and defamation) and in relation to the thorny (and increasingly litigated) issue of copyrighted material used to train AIs.

“We’re not inventing a new regime for copyright because there is already copyright law out there. What we are saying… is there has to be documentation and transparency about material that was used by the developer in the training of the model,” he emphasizes. “So that afterwards the holders of those rights… can say hey, hold on, you used my data, you used my songs, you used my scientific article — well, thank you very much, that was protected by law, therefore you owe me something — or not. For that we will use the existing copyright laws. We’re not replacing that or doing that in the AI Act. We’re just bringing that inside.”

The Commission proposed the draft AI legislation a full two years ago, laying out a risk-based approach for regulating applications of artificial intelligence and setting the bloc’s co-legislators, the parliament and the Council, the no small task of passing the world’s first horizontal regulation on AI.

Adoption of this planned EU AI rulebook is still a ways off. But progress is being made and agreement between MEPs and Member States on a final text could be hashed out by the end of the year, per Tudorache — who notes that Spain, which takes up the rotating six-month Council presidency in July, is eager to deliver on the file. Although he also concedes there are still likely to be plenty of points of disagreement between MEPs and Member States that will have to be worked through. So a final timeline remains uncertain. (And predicting how the EU’s closed-door trilogues will go is never an exact science.)

One thing is clear: The effort is timely — given how AI hype has rocketed in recent months, fuelled by developments in powerful generative AI tools, like DALL-E and ChatGPT.

Excitement around the boom in generative AI tools, which let anyone produce works such as written compositions or visual imagery just by inputting a few simple instructions, has been tempered by growing concern that fast-scaling negative impacts could accompany the touted productivity benefits.

EU lawmakers have found themselves at the center of the debate — and perhaps garnering more global attention than usual — since they’re faced with the tricky task of figuring out how the bloc’s incoming AI rules should be adapted to apply to viral generative AI.  

The Commission’s original draft proposed to regulate artificial intelligence by categorizing applications into different risk bands. Under this plan, the bulk of AI apps would be categorized as low risk — meaning they escape any legal requirements. On the flip side, a handful of unacceptable risk use-cases would be outright prohibited (such as China-style social credit scoring). Then, in the middle, the framework would apply rules to a third category of apps where there are clear potential safety risks (and/or risks to fundamental rights) which are nonetheless deemed manageable.

The AI Act contains a set list of “high risk” categories which covers AI being used in a number of areas that touch safety and human rights, such as law enforcement, justice, education, employment, healthcare and so on. Apps falling in this category would be subject to a regime of pre- and post-market compliance, with a series of obligations in areas like data quality and governance, and mitigations for discrimination — with the potential for enforcement (and penalties) if they breach requirements.

The proposal also contained another middle category which applies to technologies such as chatbots and deepfakes — AI-powered tech that raises some concerns but not, in the Commission’s view, as many as high-risk scenarios. Such apps don’t attract the full sweep of compliance requirements in the draft text, but the law would apply transparency requirements that aren’t demanded of low-risk apps.

Being first to the punch drafting laws for such a fast-developing, cutting-edge tech field meant the EU was working on the AI Act long before the hype around generative AI went mainstream. And while the bloc’s lawmakers were moving rapidly in one sense, its co-legislative process can be pretty painstaking. So, as it turns out, two years on from the first draft the exact parameters of the AI legislation are still in the process of being hashed out.

The EU’s co-legislators, in the parliament and Council, hold the power to revise the draft by proposing and negotiating amendments. So there’s a clear opportunity for the bloc to address loopholes around generative AI without needing to wait for follow-on legislation to be proposed down the line, with the greater delay that would entail. 

Even so, the EU AI Act probably won’t be in force before 2025 — or even later, depending on whether lawmakers decide to give app makers one or two years before enforcement kicks in. (That’s another point of debate for MEPs, per Tudorache.)

He stresses that it will be important to give companies enough time to prepare to comply with what he says will be “a comprehensive and far-reaching regulation”. He also emphasizes the need to allow time for Member States to prepare to enforce the rules around such complex technologies, adding: “I don’t think that all Member States are prepared to play the regulator role. They need themselves time to ramp up expertise, find expertise, to convince expertise to work for the public sector.

“Otherwise, there’s going to be such a disconnect between the realities of the industry, the realities of implementation, and the regulator, and you won’t be able to force the two worlds into each other. And we don’t want that either. So I think everybody needs that lag.”

MEPs are also seeking to amend the draft AI Act in other ways — including by proposing a centralized enforcement element to act as a sort of backstop for Member State-level agencies, as well as proposing some additional prohibited use-cases (such as predictive policing, an area where the Council may well seek to push back).

“We are changing fundamentally the governance from what was in the Commission text, and also what is in the Council text,” says Tudorache on the enforcement point. “We are proposing a much stronger role for what we call the AI Office. Including the possibility to have joint investigations. So we’re trying to put as sharp teeth as possible. And also avoid silos. We want to avoid the 27 different jurisdiction effect [i.e. of fragmented enforcements and forum shopping to evade enforcement].”

The EU’s approach to regulating AI draws on how it’s historically tackled product liability. This fit is obviously a stretch, given how malleable AI technologies are and the length/complexity of the ‘AI value chain’ — i.e. how many entities may be involved in the development, iteration, customization and deployment of AI models. So figuring out liability along that chain is absolutely a key challenge for lawmakers.

The risk-based approach also raises specific questions over how to handle the particularly viral flavor of generative AI that’s blasted into mainstream consciousness in recent months, since these tools don’t necessarily have a clear-cut use-case. You can use ChatGPT to conduct research, generate fiction, write a best man’s speech, churn out marketing copy or pen lyrics to a cheesy pop song, for example — with the caveat that what it outputs may be neither accurate nor much good (and it certainly won’t be original).

Similarly, generative AI art tools could be used for different ends: As an inspirational aid to artistic production, say, to free up creatives to do their best work; or to replace the role of a qualified human illustrator with cheaper machine output.

(Some also argue that generative AI technologies are even more speculative: that they are not general purpose at all but rather inherently flawed and incapable, representing an amalgam of blunt-force investment that is being imposed upon societies without permission or consent, in a cripplingly expensive, rights-trampling, fishing-expedition-style search for profit-making solutions.)

The core concern MEPs are seeking to tackle, therefore, is ensuring that underlying generative AI models, like OpenAI’s GPT, can’t simply dodge risk-based regulation entirely by claiming they have no set purpose.

Deployers of generative AI models could also seek to argue they’re offering a tool that’s general purpose enough to escape any liability under the incoming law — unless there is clarity in the regulation about relative liabilities and obligations throughout the value chain.

One obviously unfair and dysfunctional scenario would be for all the regulated risk and liability to be pushed downstream, onto only the deployers of specific high-risk apps. These entities would, almost certainly, be utilizing generative AI models developed by others upstream — so they wouldn’t have access to the data, weights, etc. used to train the core model, which would make it impossible for them to comply with AI Act obligations, whether around data quality or mitigating bias.

There was already criticism about this aspect of the proposal prior to the generative AI hype kicking off in earnest. But the speed of adoption of technologies like ChatGPT appears to have convinced parliamentarians of the need to amend the text to make sure generative AI does not escape being regulated.

And while Tudorache isn’t in a position to know whether the Council will align with the parliamentarians’ sense of mission here, he says he has “a feeling” they will buy in — albeit, most likely seeking to add their own “tweaks and bells and whistles” to how exactly the text tackles general purpose AIs.

In terms of next steps, once MEPs close their discussions on the file there will be a few votes in the parliament to adopt the mandate. (First two committee votes and then a plenary vote.)

He predicts the latter will “very likely” end up taking place in the plenary session in early June — setting up for trilogue discussions to kick off with the Council and a sprint to get agreement on a text during the six months of the Spanish presidency. “I’m actually quite confident… we can finish with the Spanish presidency,” he adds. “They are very, very eager to make this the flagship of their presidency.”

Asked why he thinks the Commission avoided tackling generative AI in the original proposal, he suggests even just a couple of years ago very few people realized how powerful — and potentially problematic — these technologies would become, nor indeed how quickly things could develop in the field. It’s a testament to how difficult it’s getting for lawmakers to set rules around shapeshifting digital technologies which aren’t already out of date before they’ve even been through the democratic law-setting process.

Somewhat by chance, the timeline appears to be working out for the EU’s AI Act — or, at least, the region’s lawmakers have an opportunity to respond to recent developments. (Of course it remains to be seen what else might emerge over the next two years or so of generative AI which could freshly complicate these latest futureproofing efforts.)

Given the pace and disruptive potential of the latest wave of generative AI models, MEPs are sounding keen that others follow their lead — and Tudorache was one of a number of parliamentarians who put their names to an open letter earlier this week, calling for international efforts to cooperate on setting some shared principles for AI governance.

The letter also affirms MEPs’ commitment to setting “rules specifically tailored to foundational models” — with the stated goal of ensuring “human-centric, safe, and trustworthy” AI.

He says the letter was written in response to the open letter put out last month — signed by the likes of Elon Musk (who has since been reported to be trying to develop his own GPAI) — calling for a moratorium on development of any more powerful generative AI models so that shared safety protocols could be developed.

“I saw people asking, oh, where are the policymakers? Listen, the business environment is concerned, academia is concerned, and where are the policymakers — they’re not listening. And then I thought well that’s what we’re doing over here in Europe,” he tells TechCrunch. “So that’s why I then brought together my colleagues and I said let’s actually have an open reply to that.”

“We’re not saying that the response is to basically pause and run to the hills. But to actually, again, responsibly take on the challenge [of regulating AI] and do something about it — because we can. If we’re not doing it as regulators then who else would?” he adds.

Signing MEPs also believe the task of AI regulation is such a crucial one that they shouldn’t just be waiting around in the hope that adoption of the EU AI Act will lead to another ‘Brussels effect’ kicking in a few years down the line, as happened after the bloc updated its data protection regime in 2018, influencing a number of similar legislative efforts in other jurisdictions. Rather, this AI regulation mission must involve direct encouragement — because the stakes are simply too high.

“We need to start actively reaching out towards other like minded democracies [and others] because there needs to be a global conversation and a global, very serious reflection as to the role of this powerful technology in our societies, and how to craft some basic rules for the future,” urges Tudorache.
