On 21 April 2021, Executive Vice-President Margrethe Vestager and Commissioner Thierry Breton held a press conference to present the European approach to Artificial Intelligence: new rules and actions intended to turn Europe into the global hub for trustworthy AI.
In particular, the Commission published a Proposal for a Regulation on a European approach for AI, together with a new Coordinated Plan with Member States focused on guaranteeing the safety and fundamental rights of people and businesses while strengthening investment and innovation across the EU.
Additionally, a Proposal for a Regulation on machinery products was published to complement the AI package.
Vice-President Vestager stated that “on Artificial Intelligence, trust is a must, not a nice to have”, and added: “Europe was not at the head of the first wave of digitization, but it can lead the second one. This is the first legislation on the subject, and we are the first to have it”.
Indeed, Vestager stressed that the package represents a landmark in AI regulation and was conceived with a proportionate, risk-based approach, in full consideration of the impact such systems have on our lives.
Moreover, the Commissioner for the Internal Market said that “AI is a means, not an end” and that “it offers immense potential in areas as diverse as health, transport, energy, agriculture, cyber-security but it also presents several risks”. Commissioner Breton added that “an ecosystem of trust goes alongside with an ecosystem of excellence” and that the Regulation on AI must therefore be read together with the GDPR and the e-Privacy Regulation.
The legal framework will apply to both public and private operators inside and outside the EU, as long as the AI systems are placed on the Union market or their use may affect people in the EU. Providers and users (e.g., banks) of high-risk AI systems are not exempt. It does not apply to private, non-professional use.
The need to regulate the use of Artificial Intelligence is based on the potential benefits of such technology for our societies, from improved medical care to better regulation. However, specific AI uses may create risks to users' safety and fundamental rights.
In the Proposal for a Regulation on AI, the Commission, as above mentioned, follows a risk-based approach divided into four levels.
The first one, “Unacceptable risk”, concerns AI systems that must be banned because they threaten safety, livelihoods and fundamental rights. The prohibition covers systems with the potential to manipulate persons through subliminal techniques beyond their consciousness. Vice-President Vestager gave as examples the prohibition of AI-based social scoring for general purposes by public authorities, and of “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement, save for certain limited exceptions.
The second one is “High risk” and concerns a limited number of AI systems that create an adverse impact on EU citizens' safety and fundamental rights. It includes AI technology used in:
- Critical infrastructures (e.g., transport) that could put the life and health of citizens at risk;
- Educational or vocational training that may determine access to education and the professional course of someone's life (e.g., scoring of exams);
- Safety components of products (e.g., AI application in robot-assisted surgery);
- Employment, workers management and access to self-employment (e.g., CV-sorting software for recruitment procedures);
- Essential private and public services (e.g., credit scoring denying citizens the opportunity to obtain a loan);
- Law enforcement that may interfere with people's fundamental rights (e.g., evaluation of the reliability of evidence);
- Migration, asylum and border control management (e.g., verification of the authenticity of travel documents);
- Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).
Regarding the “High risk” level, Vice-President Vestager highlighted that systems falling into this category must be characterized by “very high-quality data”, in order to prevent bias and discrimination.
The Proposal sets mandatory requirements to ensure trust and protection, such as the quality of the data sets used; adequate risk assessment and mitigation systems; technical documentation and record-keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity.
These transparency requirements aim to give national authorities full access to the information needed to investigate whether the use of an AI system complied with the law.
During the press conference Q&A, Vice-President Vestager explained that biometric identification systems, such as facial recognition in public spaces, are “prohibited in principle”. The allowed exceptions concern the fight against terrorism, the protection of security or the search for a missing child. Such use must be subject to authorization by a judicial or other independent body through a conformity assessment, including documentation and human-oversight-by-design requirements, and must respect appropriate limits in time and space.
The third level is “Limited risk” and concerns AI systems that carry a clear risk of manipulation and are thus subject to specific transparency obligations. Vice-President Vestager mentioned chatbots as an example: users must be informed that they are interacting with a machine, so that they can freely decide whether to continue or step back.
The last level of the “pyramid” is “Minimal risk”. These systems represent the majority of AI systems currently used in the EU and are fully allowed without additional legal obligations. An example presented by Vice-President Vestager is the spam filter that keeps unwanted messages out of our email inboxes.
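The four-level pyramid described above can be summarized as a simple mapping from risk tier to regulatory consequence. This is a purely illustrative sketch: the tier names follow the Proposal, but the data structure and shorthand descriptions are this author's assumptions, not text from the Regulation:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels of the Proposal's pyramid, from most to
    least restrictive (descriptions are informal shorthand)."""
    UNACCEPTABLE = "banned (e.g., general-purpose social scoring by public authorities)"
    HIGH = "allowed, subject to mandatory requirements and oversight (e.g., credit scoring)"
    LIMITED = "allowed, subject to transparency obligations (e.g., chatbots)"
    MINIMAL = "allowed, no additional legal obligations (e.g., spam filters)"

def regulatory_consequence(tier: RiskTier) -> str:
    # Look up the informal consequence attached to a given tier.
    return tier.value
```

The point of the pyramid is that obligations scale with risk: only the top tier is banned outright, while the bottom tier carries no new duties at all.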
Regarding the enforcement phase, Member States will hold a key role and each one of them will have to designate one or more national competent authorities to supervise the application and implementation and carry out market surveillance activities. Furthermore, to increase efficiency and set an official point of contact with the public and other counterparts, each Member State should designate one national supervisory authority, which will also represent the country in the European Artificial Intelligence Board.
Such a board will comprise high-level representatives of competent national supervisory authorities, the European Data Protection Supervisor, and the Commission.
The aim is to achieve an effective and harmonized implementation of the new AI Regulation. The board will have the duty to issue recommendations and opinions to the Commission regarding high-risk AI systems and other aspects relevant to the effective and uniform implementation of the new rules.
Moreover, Member States will have to lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and to take all measures necessary to ensure that they are properly and effectively implemented.
Infringements such as non-compliance with the prohibited artificial intelligence practices and requirements may lead to administrative fines of up to EUR 30 000 000 or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
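The "whichever is higher" rule means the effective ceiling depends on the offender's size. A minimal sketch of that arithmetic follows; the function name and interface are illustrative assumptions, not anything defined in the Proposal:

```python
from typing import Optional

# Fine ceiling for the most serious infringements, as described in the
# Proposal: EUR 30 million or, for a company, 6% of total worldwide
# annual turnover for the preceding financial year, whichever is higher.
FIXED_CAP_EUR = 30_000_000
TURNOVER_RATE = 0.06

def max_fine(turnover_eur: Optional[float] = None) -> float:
    """Upper bound of the administrative fine. Pass the company's total
    worldwide annual turnover in EUR, or None for a non-company offender."""
    if turnover_eur is None:
        return FIXED_CAP_EUR
    return max(FIXED_CAP_EUR, TURNOVER_RATE * turnover_eur)
```

For example, a company with EUR 1 billion in turnover faces a ceiling of EUR 60 million (6% of turnover exceeds the fixed cap), whereas for a company with EUR 100 million in turnover the fixed EUR 30 million cap is the higher of the two.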
The President of the EU Commission, Ursula von der Leyen, stated on Twitter “AI is a fantastic opportunity for Europe and citizens deserve technologies they can trust”.
Following the Proposal analyzed above, the Regulation could enter into force in the second half of 2022, followed by a transitional period. The second half of 2024 is the earliest time the Regulation could become applicable to operators, with the standards ready.