The Spanish presidency of the EU Council has asked member states to show flexibility in the sensitive area of law enforcement, ahead of a crucial political meeting for the AI law.
The AI Act is a flagship bill aimed at regulating artificial intelligence based on its capacity to cause harm. It is currently in the final phase of the legislative process, with the European Commission, Council and Parliament negotiating the final provisions in so-called trilogues.
EU policymakers are scrambling to reach a final deal at the Dec. 6 trilogue. Ahead of this crucial meeting, the Spanish presidency, which negotiates on behalf of European governments, needs a revised negotiating mandate.
On Friday, November 24, the presidency circulated the first half of the negotiating mandate, asking for flexibility and indicating possible landing zones in the area of law enforcement. The mandate is set to land on the desk of the Committee of Permanent Representatives (COREPER) on Wednesday.
The second half of the mandate will address foundation models, governance, access to source code, the sanctions regime, the entry into force of the regulation and secondary legislation. These topics will be discussed at COREPER level on Friday, December 1.
Prohibitions
MEPs significantly expanded the list of prohibited practices – AI applications considered to carry an unacceptable level of risk.
The Presidency suggests accepting the ban on untargeted facial image capture, emotion recognition in the workplace and in educational institutions, biometric categorization to infer sensitive data such as sexual orientation and religious beliefs, and predictive policing of individuals.
Furthermore, “in a spirit of compromise”, the presidency proposes moving the European Parliament’s bans that were not accepted into the list of high-risk use cases, namely all other biometric categorisation applications and emotion recognition.
Regarding remote biometric identification, parliamentarians agreed to drop the total ban on real-time use in exchange for narrower exceptions and additional safeguards. For the presidency, ex post use of this technology should be classified as high-risk.
Law enforcement exceptions
The Council’s mandate includes several exclusions allowing law enforcement to use AI tools. The presidency notes that it has succeeded in “keeping almost all of them”.
This includes making the text more flexible for police forces regarding the human oversight obligation, the reporting of high-risk systems, post-market monitoring, and confidentiality measures to avoid disclosing sensitive operational data.
The presidency also wants law enforcement to be able to use emotion recognition and biometric categorization software without informing the subjects.
The European Parliament got police forces to register high-risk systems in the EU database, albeit in a non-public section. The deadline for large-scale IT systems to comply with the obligations of the AI law has been set at 2030.
National security exception
France has pushed for a broad national security exemption in the AI law. However, the presidency noted that the European Parliament showed no flexibility in accepting the wording of the Council’s mandate.
Spain proposes to divide this provision into two paragraphs. The first states that the regulation does not apply to areas which do not fall under EU law and should in no way affect the competences of Member States in the field of national security or of any entity entrusted with tasks in this domain.
The second specifies that the AI law would not apply to systems placed on the market or put into service for defence and military purposes.
Fundamental rights impact assessment
MEPs from the left to the centre presented the fundamental rights impact assessment as a new obligation to be carried out by users of high-risk systems before deployment. For the presidency, including it is “absolutely necessary” to reach an agreement with Parliament.
A sticking point on this topic has been the scope, with parliamentarians asking to cover all users and EU countries pushing to limit the provision to public bodies. The compromise was to cover public bodies and only those private actors providing services of general interest.
Furthermore, the fundamental rights impact assessment should cover aspects that are not already covered by other legal obligations in order to avoid overlaps.
Regarding the obligations on risk management, data governance and transparency, users need only verify that the provider of the high-risk system has complied with them.
For the Presidency, the obligation to carry out a six-week consultation should be removed, even for public bodies, and replaced by a simple notification to the competent national authority.
Testing in real-world conditions
A point of contention in the negotiations was the possibility introduced by the Council to test high-risk AI systems outside of regulatory sandboxes. According to the presidency note, certain guarantees were included to make the measure acceptable to Parliament.
The text indicates that persons subjected to the testing must give informed consent and that, where consent cannot be sought in the case of police activities, the test and its outcome must not negatively affect the persons concerned.
Exemption from conformity assessment
The Council introduced an emergency procedure which allows law enforcement authorities to urgently deploy a high-risk AI tool that has not yet passed the conformity assessment procedure.
MEPs want this process to be subject to judicial authorisation, a point the presidency considers unacceptable for EU countries. As a compromise, the Spanish presidency proposed reintroducing the mechanism while allowing the Commission to review the decision.