EU policymakers enter the last mile for Artificial Intelligence rulebook

By Luca Bertuzzi | Euractiv.com
25-10-2023 (updated: 30-10-2023)

The world’s first comprehensive AI law is entering what might be its last weeks of intense negotiations. However, the EU institutions have yet to hash out their approach to the most powerful ‘foundation’ models and the provisions in the law enforcement area.
No white smoke came out of the high-level meeting between the EU Council, Parliament and Commission. The inter-institutional negotiations – or trilogue, as it is known in EU jargon – started on Tuesday evening (24 October) and dragged on until the early hours of Wednesday.
EU policymakers agreed on the provisions concerning the classification of high-risk AI applications and provided general guidance on dealing with powerful foundation models and who should supervise them, but barely scratched the surface of the chapter on prohibitions and law enforcement.
All eyes are now on the next trilogue on 6 December, where a political agreement is expected but not guaranteed. Meanwhile, nine technical meetings have been scheduled to nail down the text on some of the AI law’s most complex and consequential aspects.
The AI Act follows a risk-based approach whereby the AI models that can pose a significant risk to people’s health, safety and fundamental rights must comply with strict obligations, such as risk management and data governance.
In the original proposal, all AI solutions falling into a pre-determined list of critical use cases were classified as high-risk. However, at the previous trilogue on 2 October, a possible filter system was put on the table, allowing AI developers to be exempted from this stricter regime.
The criteria were subject to significant fine-tuning and a negative legal review from the European Parliament’s legal service. Still, the agreed text is largely in line with the one Euractiv previously reported.
The only exemption condition that was modified is the one that refers to the detection of deviations from decision-making patterns, with the specification that the AI system must not be “meant to replace or influence the previously completed human assessment, without proper human review.”
In the text resulting from the last trilogue, the Commission is tasked with developing a comprehensive list of practical examples of high-risk and non-high-risk use cases. The conditions under which the EU executive can modify the exemption criteria are still being refined by the lawyer-linguists.
In a draft seen by Euractiv, the Commission will only be able to add new filters where there is concrete and reliable evidence of AI systems falling into the high-risk category without posing a significant risk to people.
Conversely, the EU executive will only be able to delete the criteria if it is necessary to maintain the same level of protection in the EU.
The rapid, widespread uptake of ChatGPT, a chatbot powered by OpenAI’s GPT-3.5 and GPT-4, disrupted the negotiations on the AI law, forcing policymakers to figure out how to deal with these powerful models.
Last week, Euractiv reported that the approach was based on several tiers. The idea is to have horizontal obligations for all foundation models, namely transparency requirements and support for downstream economic operators to comply with the AI regulation.
The tiered approach seems to have broad support, but the main question is how to define the top tier of ‘very capable’ foundation models, like GPT-4, which will be subject to ex-ante vetting and risk mitigation measures.
The terminology might shift to ‘high impact’ models to highlight the focus on the systemic risks these models can pose. The overall agreement is to develop several criteria based on which a foundation model might be deemed ‘high impact’.
Possible criteria that have been floated concern computing power, the amount of training data and the economic resources of the AI providers, but concerns mostly relate to how to make this approach future-proof in a market changing at breakneck speed.
Researchers from leading universities like Stanford are also advising on the matter. In the coming days, EU negotiators are expected to hash out new drafts putting this approach down in black and white.
In terms of governance, there is growing consensus that, much like in the Digital Services Act (DSA), the enforcement of AI models that pose systemic risks should be centralised.
This will be one of the main tasks of the AI Office, which will sit under the Commission but with ‘functional independence’. Also in line with the DSA, the Commission has proposed introducing a management fee to finance the new staffing requirements.
However, MEPs are sceptical about the management fee and do not consider the parallel with the DSA, which focuses on services while the AI Act follows product safety legislation, to be applicable in this case.
The chapter on which AI applications should be prohibited and what exceptions should be left to law enforcement agencies is where EU countries and MEPs are furthest apart, and it might turn into a showstopper, as both institutions are extremely rigid in their positions.
The only thing that emerged clearly from the last trilogue was that the negotiators are looking at different aspects to form a package deal bundling together the bans, the law enforcement exceptions, the fundamental rights impact assessment and the environmental provisions.
[Edited by Nathalie Weatherald]