State lobbyists roll back AI Act (Repost from The Good Lobby)

Original article here.


From 2 February 2025, EU governments will have the power to deploy AI-based technologies to track citizens in public spaces, monitor refugees in border areas in real time and use facial recognition tools against people based on their presumed political affiliation or religious beliefs. 

These are just a few examples of the sweeping national security exceptions written into the European Artificial Intelligence Act, highlighted in a recent report by Investigate Europe. The AI Act is the first comprehensive set of rules aimed at regulating the use of AI and safeguarding citizens from its possible harms. Paradoxically, however, the introduction of broad exemptions for public authorities risks enabling significant abuses under the pretence of public safety.

What is at stake in these exceptions? 

Following changes to the final text requested by certain Member States, led largely by France, law enforcement and border authorities will now be able to circumvent the prohibition on the use of real-time biometric surveillance in public spaces. Indeed, the Act broadly states that it does not ‘affect the competences of the Member States concerning national security’ (Article 2(3)). This applies, for example, to demonstrations or political protests, which can now be targeted by computer-assisted surveillance if the police invoke national security concerns, raising potential conflicts with freedom of expression and assembly. These exemptions will also extend to private companies, and possibly to third countries, that supply AI technology to police and other law enforcement agencies.

Controversial technologies such as emotion recognition systems will be banned from 2 February in workplaces, schools and universities. However, due to lobbying by Member States, these systems will remain permitted for police forces and for immigration and border authorities.

Biometric categorisation systems used to infer race, political opinions, religion, sexual orientation or even trade union membership are also prohibited. Once again, an exception is made for the police, who will be free to use these systems, collect image data on any individual or buy such data from private companies, creating room for potential overreach.

Lastly, under the original Act, the use of so-called ‘high-risk’ technologies was to be subject to requirements such as judicial authorisation, registration in a European database and an impact assessment to ensure respect for fundamental rights. However, with the addition of a new article enabling self-policing, companies will be able to fill in a self-certification form and decide for themselves whether or not their product is ‘high-risk’, which could relieve them of certain obligations and undermine accountability mechanisms.

This latest revision of the AI Act raises many questions about its implications for fundamental rights and European law, especially in relation to vulnerable populations such as refugees and marginalised groups. While civil society and NGOs have long warned of the dangers of discriminatory algorithms, digital rights advocates now denounce a well-intentioned but flawed regulation that could open the door to abuse. Only time will tell how this regulatory framework will be applied and whether it will succeed in reconciling technological innovation with respect for human rights.