AI Act: MEPs close in on rules for general purpose AI, foundation models

The European Parliament is set to propose stricter rules for foundation models like ChatGPT and distinguish them from general purpose AI, according to an advanced compromise text seen by EURACTIV.

The AI Act is a landmark EU law to regulate Artificial Intelligence based on its capacity to cause harm. As AI solutions designed to handle a wide variety of tasks were not covered in the original proposal, the meteoric rise of ChatGPT abruptly disrupted the debate, leading to delays.

Although the file is close to finalisation, the political meeting on Wednesday (19 April) that was meant to seal an agreement turned into a technical discussion on this part of the file, leading to the postponement of the key committee vote originally scheduled for 26 April.

Meanwhile, a revised text circulated on Thursday indicates that MEPs are close to finalising their approach to ChatGPT and similar applications.

Foundation model vs General Purpose AI

As previously reported by EURACTIV, EU lawmakers want to distinguish general purpose AI from foundation models, introducing a stricter regime for the latter.

Foundation model, a term coined at Stanford University, is defined as “an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.”

By contrast, general purpose AI is deemed an “AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.”

In other words, the difference between the two concepts hinges on the training data, the adaptability, and whether the system can be used for purposes it was not designed for. Foundation models include generative AI systems like ChatGPT and Stable Diffusion, which were trained on data scraped from across the internet.

Foundation models’ requirements

EU lawmakers want providers to comply with a series of requirements before making a foundation model available on the market.

These include testing and mitigating reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy and the rule of law with the involvement of independent experts.

Any remaining non-mitigable risks, and the reasons why they were not addressed, would have to be documented.

According to the compromise, the requirements for foundation models would apply regardless of their distribution channels, development methods, or training data type.

Data governance measures are also required, notably measures to examine the suitability of the data sources, possible biases and appropriate mitigation.

The foundation models would have to maintain appropriate levels of performance, interpretability, corrigibility, safety and cybersecurity throughout their lifecycle. The text mandates the involvement of independent experts, documented analysis and extensive testing for this purpose.

In addition, EU lawmakers want foundation model providers to implement a quality management system and keep the relevant documentation available for up to 10 years after the model is launched. Foundation models would also have to be registered in the EU database.

Foundation models that fall in the generative AI category must comply with further transparency obligations and implement adequate safeguards against generating content in breach of EU law.

Moreover, generative AI models would have to “make publicly available a summary disclosing the use of training data protected under copyright law.”

By contrast, the provision requiring the providers of foundation models to conduct ‘know your business customer’ checks on the downstream operators has been removed.

The AI Office is tasked with maintaining a regular dialogue with providers of foundation models about their compliance efforts and providing guidance on the energy consumption related to training these models.

At the same time, foundation model providers would have to disclose the computing power required and the training time of the model.

Value chain responsibilities

The MEPs recognise the growing importance of AI models for economic operators that integrate them into various applications without necessarily having control over their development.

Thus, the idea is to address this power imbalance by introducing measures to ensure the proportionate sharing of responsibilities along the AI value chain while protecting fundamental rights, health and safety.

In particular, a downstream economic operator, such as an AI deployer or importer, would become responsible for complying with the AI Act’s stricter regime if they substantially modify an AI system, including a general purpose AI system, in a way that makes it qualify as high-risk.

In these cases, the original provider would have to support the compliance process by supplying all the relevant information and documentation on the capabilities of the AI model. This obligation does not apply to developers of open-source AI components supplied for free.

The MEPs want to task the EU Commission with developing non-binding standard contractual clauses that regulate rights and obligations consistent with each party’s level of control. These contractual models would need to consider requirements for specific sectors or business cases.

At the same time, the Parliament wants to ban unfair contractual obligations unilaterally imposed on SMEs and start-ups that prevent them from protecting their legitimate commercial interests.

For foundation models provided as a service, for instance via an Application Programming Interface (API), the provider’s obligation to cooperate with the downstream operator applies for the entire duration of the service.

Alternatively, the foundation model provider can transfer the trained model to the downstream operator with appropriate information on the datasets and development process, or restrict the service so that the operator can comply with the AI rules without further support.

[Edited by Alice Taylor]
