BACK TO THE FUTURE: THE EU ARTIFICIAL INTELLIGENCE ACT

Overview

The Council of the European Union defines artificial intelligence (AI) as “the use of digital technology to create systems capable of performing tasks commonly thought to require human intelligence”.

The EU Artificial Intelligence Act (AI Act) provides a uniform framework for regulating the use of AI across the European Union’s single market. This Act was initiated to address the challenges and risks arising from the evolution of artificial intelligence in recent years. The Act is in the form of a Regulation, which will have direct effect across the EU Member States. This means that implementing measures at a national level will not be required.

Status of Legislation

Following approval by EU Member States on 2 February 2024, the AI Act was formally approved by the European Parliament's Internal Market and Consumer Protection Committee (IMCO) and Civil Liberties, Justice and Home Affairs Committee (LIBE) on 13 February 2024. This paves the way for final approval of the Act at a European Parliament plenary vote, due in April 2024.

Scope of Legislation

The AI Act applies to all public and private producers, providers, deployers and users of artificial intelligence tools developed or used in the EU, as well as to AI systems which may affect people in the EU.

Deployers and importers of artificial intelligence systems, to whom the Act applies, should be aware of their obligations to ensure that any AI tool supplied by a foreign provider has undergone the appropriate conformity assessment procedure, bears the European Conformity (CE) marking and is accompanied by the relevant documentation, including instructions for use.
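
By way of illustration only, the short Python sketch below models this pre-deployment check. The class and field names are hypothetical assumptions for this example; the Act itself prescribes no such data structure.

  from dataclasses import dataclass

  @dataclass
  class ForeignAISystem:
      """Hypothetical record of an AI tool supplied by a foreign provider."""
      conformity_assessment_done: bool  # appropriate conformity assessment completed
      ce_marking: bool                  # bears the European Conformity (CE) marking
      documentation_supplied: bool      # relevant documentation, incl. instructions for use

  def ready_to_deploy(system: ForeignAISystem) -> bool:
      # All three obligations summarised above must be satisfied before deployment.
      return (system.conformity_assessment_done
              and system.ce_marking
              and system.documentation_supplied)

  print(ready_to_deploy(ForeignAISystem(True, True, False)))  # False: documentation missing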

The Risk Based Regulatory Approach

The AI Act adopts a risk-based approach to regulation: AI systems fall into different categories of risk, each with its own compliance requirements, as outlined below.

  1. Unacceptable Risk Systems

Unacceptable risk systems are those which violate fundamental rights, such as the rights to privacy and protection of personal data, as protected by the EU Charter of Fundamental Rights. Examples of unacceptable risk systems include those involving:

  • Use of subliminal techniques;
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions;
  • Biometric categorisation of natural persons based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation;
  • Individual predictive policing;
  • Emotion recognition in the workplace and education institutions, except when for medical or safety reasons;
  • Untargeted scraping of the internet or CCTV for facial images to add to databases.

Artificial intelligence tools falling within the unacceptable risk category will be prohibited.

  2. High Risk Systems

High risk systems are those which may have an adverse impact on the safety and fundamental rights of citizens. Annex III of the AI Act provides for a list of high risk systems, which may be revised by the EU Commission in line with the evolution of artificial intelligence. Examples of high risk systems include:

  • Critical infrastructures, such as those relating to transport or water, gas, heating or electricity supplies;
  • Educational or vocational training tools which may influence access to education or learning processes;
  • Safety components of products such as those used in robot-assisted surgery;
  • Employment, management of workers and access to self-employment tools, including recruitment software and employee evaluation systems;
  • Essential private and public service provisions, such as credit-scoring facilities;
  • Law enforcement tools which may interfere with fundamental rights;
  • Migration, asylum and border control management systems;
  • Administration of justice and democratic process tools.

High risk artificial intelligence systems will need to have the following requirements in place before they may be launched on the market:

  • Risk assessment and mitigation systems;
  • High quality datasets feeding into AI systems so as to minimise risks and discriminatory outcomes;
  • Activity logging to ensure traceability of results (see the sketch after this list);
  • Detailed documentation with information regarding systems and processes so that authorities may assess compliance;
  • Clear and adequate information for users;
  • Adequate human oversight measures for risk minimisation.
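
The activity-logging requirement lends itself to a brief illustration. The Python sketch below is a hypothetical example of recording AI decisions for later traceability; the Act does not prescribe a log format, and all names and fields here are assumptions.

  import json
  import logging
  import uuid
  from datetime import datetime, timezone

  logging.basicConfig(level=logging.INFO)
  logger = logging.getLogger("ai_audit")

  def log_decision(system_id: str, inputs: dict, output: str) -> str:
      # Write one append-only audit record so the result can be traced later.
      record = {
          "record_id": str(uuid.uuid4()),
          "system_id": system_id,
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "inputs": inputs,    # what the system was asked
          "output": output,    # what the system produced
      }
      logger.info(json.dumps(record))
      return record["record_id"]

  # Example: logging one automated credit-scoring decision
  log_decision("credit-scorer-v2", {"applicant_id": "A-1001"}, "score=612")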

It should be noted that all remote biometric identification systems are considered high risk under the Act and are therefore subject to strict requirements. The exception is real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, which, as noted above, falls within the prohibited category subject to narrow exceptions.

  3. Limited Risk Systems

Limited risk systems are those which allow users interacting with the tools to make informed decisions. Examples of limited risk systems are chatbots (computer programmes which simulate human conversation) and systems that generate or manipulate content, including deepfakes (media in which image or audio content has been manipulated). These systems are subject to transparency obligations. For example, people corresponding with chatbots will need to be informed that they are interacting with a machine.
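
As a purely illustrative sketch, a chat interface might surface such a disclosure as follows. The wording, function and variable names are hypothetical; the Act requires that users be informed, but prescribes no particular form of words.

  # Hypothetical disclosure text; the Act requires disclosure, not this exact wording.
  DISCLOSURE = "You are interacting with an automated system, not a human."

  def start_chat_session(user_name: str) -> list[str]:
      # Show the transparency notice before any conversational content.
      transcript = [DISCLOSURE]
      transcript.append(f"Hello {user_name}, how can I help you today?")
      return transcript

  for line in start_chat_session("Aoife"):
      print(line)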

  4. Minimal or No Risk Systems

Systems not captured by the above categories will fall into this final category. Examples include AI-enabled spam filters and video games. Aside from risk assessment and transparency requirements, the AI Act does not stipulate any additional obligations for these systems.

Voluntary Codes of Conduct

Providers of applications which do not pose unacceptable risk may demonstrate that their AI systems are trustworthy by developing their own voluntary Codes of Conduct or by adhering to Codes of Conduct adopted by other representative associations. The European Commission will encourage industry associations and other representative organisations to adopt voluntary Codes of Conduct. These codes will apply alongside any transparency obligations applicable to the AI systems concerned.

Enforcement

Each EU Member State, including Ireland, will be required to establish or designate a national competent authority to supervise the implementation and application of the AI Act within its jurisdiction and to monitor activities in the AI market. Each Member State will also be required to designate a national supervisory authority, which will act as its representative on the European Artificial Intelligence Board. The Board will be responsible for ensuring the harmonised implementation of the AI Act across Member States. Ireland has not yet designated a national supervisory authority.

An Advisory Forum, comprised of AI industry stakeholders, including companies and civil society, will be established to provide technical expertise. In addition, a European AI Office will be set up within the EU Commission to supervise General Purpose AI (GPAI) models and assess potential systemic risks arising from their use. The Office will also accept complaints for investigation.

Penalties 

Fines for non-compliance with the EU AI Act range from €7.5 million, or 1.5% of turnover in the previous financial year, up to €35 million, or 7% of global turnover in the previous financial year. The level of a fine will depend on the nature of the infringement and the size of the company involved.
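
To illustrate how these caps scale, the short Python calculation below applies the figures quoted above to a hypothetical turnover. It assumes, for the purpose of the example, that the higher of the fixed amount and the percentage applies; the Act's detailed fining provisions (including special rules for smaller companies) are not modelled.

  def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
      # Illustrative assumption: the higher of the fixed cap and the percentage applies.
      return max(fixed_cap_eur, turnover_eur * pct)

  turnover = 2_000_000_000  # hypothetical company with EUR 2bn global annual turnover
  print(max_fine(turnover, 35_000_000, 0.07))   # top tier: 140,000,000.0
  print(max_fine(turnover, 7_500_000, 0.015))   # lowest tier: 30,000,000.0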

Implications for Businesses

Businesses should assess the applicability of the AI Act to their artificial intelligence developments. All AI systems should be classified according to the risk categorisations above.
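
As a rough sketch of such a classification exercise, the Python example below maps use cases mentioned in this note to the Act's four tiers. The mapping is hypothetical and heavily simplified; actual classification turns on the Act's prohibitions and Annex III and requires legal analysis.

  from enum import Enum

  class RiskTier(Enum):
      UNACCEPTABLE = "prohibited"
      HIGH = "strict requirements before market launch"
      LIMITED = "transparency obligations"
      MINIMAL = "no additional obligations"

  # Toy mapping of example use cases from this note to the Act's tiers.
  EXAMPLE_TIERS = {
      "individual predictive policing": RiskTier.UNACCEPTABLE,
      "recruitment software": RiskTier.HIGH,
      "customer-service chatbot": RiskTier.LIMITED,
      "spam filter": RiskTier.MINIMAL,
  }

  def classify(use_case: str) -> RiskTier | None:
      # None for anything outside the toy mapping: unknown systems need
      # case-by-case legal review, not a default tier.
      return EXAMPLE_TIERS.get(use_case)

  print(classify("recruitment software").value)  # strict requirements before market launch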

Legal obligations should be identified and compliance strategies put in place. Businesses should also be mindful of the costs of compliance, which should be provided for in financial plans.

Businesses should establish and implement AI governance procedures. These should include the maintenance of detailed documentation in relation to artificial intelligence systems.

Investment in employee oversight is fundamental as human input, scrutiny and review are essential for the implementation of AI. Companies should plan for workforce training in relation to the legal and ethical implications of artificial intelligence.

If you would like further information, please contact your usual adviser in Whitney Moore.

Authored by Brendan Ringrose, Corporate Partner and Lynda Crosbie (Lynda.Crosbie@whitneymoore.ie), Whitney Moore.