The EU AI Act – What It Means for Businesses Worldwide

By Sinéad Floody FCG, 20th November 2024

The European Union has taken a pioneering step by introducing the EU AI Act, the world’s first comprehensive law regulating artificial intelligence. This blog will help you understand the key aspects of this ground-breaking legislation and how it will impact your business.

What is the EU AI Act?

The EU AI Act defines AI in line with the recently revised OECD definition. Like the GDPR, it applies extraterritorially, extending to organisations outside the EU. However, exemptions apply to certain organisations dealing with national security, the military, and research and development, and open-source projects are only partially covered. Organisations will be given a grace period of 6 to 24 months to comply, depending on their obligations under the Act.

AI systems are categorised based on risk:

  1. Prohibited AI
  2. High-Risk AI
  3. Limited Risk AI
  4. Minimal Risk AI

The Act places stringent demands on ‘Providers’ and ‘Users’ of High-Risk AI, and generative AI systems must meet specific transparency and disclosure criteria.

Prohibited AI

The EU AI Act strictly prohibits certain AI systems due to their potential for significant harm and ethical concerns. These include social credit scoring systems, which can unfairly judge individuals based on their behaviour, and emotion recognition systems used in employment and education, which can invade personal privacy. AI that exploits people’s vulnerabilities, such as age or disability, is also banned, as is any technology that manipulates behaviour and undermines free will. The untargeted scraping of facial images to build facial recognition databases, biometric categorisation based on sensitive traits, and specific uses of predictive policing are likewise prohibited.

Additionally, law enforcement is generally barred from using real-time biometric identification in public spaces, except in restricted, pre-approved scenarios. While prohibited AI is banned outright, High-Risk AI will be allowed once the Key Requirements laid out by the Act are fulfilled.

High-Risk AI

High-Risk AI systems include those used in:

  • Medical devices and automobiles
  • Recruitment, HR and workforce supervision
  • Educational and professional training
  • Political elections and voting
  • Access to services such as insurance, banking, credit, and benefits
  • Overseeing essential infrastructure like water, gas, and electricity
  • Systems for emotion recognition and biometric identification
  • Policing and border control
  • Particular merchandise or safety elements within specific products

Key Requirements for High-Risk AI

High-Risk AI systems must meet several key requirements before they can be legally used. These include:

  • Conducting an assessment to ensure the AI system respects fundamental rights and complies with regulations
  • Registering high-risk AI systems in the public EU database
  • Establishing systems to manage risks and ensure quality
  • Implementing data governance measures, including bias mitigation and the use of representative training data
  • Enhancing transparency by providing clear instructions for use and technical documentation
  • Incorporating human oversight to ensure the AI system is explainable, has auditable logs, and involves humans in decision-making processes
  • Ensuring the AI system’s accuracy, robustness, and cybersecurity through regular testing and monitoring

General Purpose AI (GPAI)

It’s important for all GPAI to be transparent, which means providing clear technical documents, summaries of training data, and protecting copyrights and intellectual property. High-impact models that could pose significant risks need to go through thorough assessments, risk evaluations, adversarial testing, and incident reporting. Generative AI systems, like chatbots, must inform people when they are interacting with AI and ensure that AI-generated content, such as deepfakes, is clearly labelled and identifiable.

Penalties & Enforcement

The EU AI Act imposes significant penalties for non-compliance:

  • Up to 7% of global annual turnover or €35 million for breaches involving prohibited AI
  • Up to 3% of global annual turnover or €15 million for most other violations
  • Up to 1.5% of global annual turnover or €7.5 million for providing inaccurate information
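For undertakings, the Act applies each ceiling as the higher of the percentage of turnover and the fixed amount. As a minimal sketch (the function name and tier labels are illustrative, and the figures are those listed above), the caps combine like this:

```python
# Sketch of the EU AI Act fine ceilings, taking "whichever is higher"
# between the turnover percentage and the fixed amount for each tier.

def fine_ceiling(global_turnover_eur: float, tier: str) -> float:
    """Return the maximum possible fine in euro for a violation tier."""
    tiers = {
        "prohibited": (0.07, 35_000_000),      # breaches involving prohibited AI
        "other": (0.03, 15_000_000),           # most other violations
        "inaccurate_info": (0.015, 7_500_000), # providing inaccurate information
    }
    pct, fixed = tiers[tier]
    return max(pct * global_turnover_eur, fixed)

# A company with EUR 1bn global turnover breaching a prohibition:
print(fine_ceiling(1_000_000_000, "prohibited"))  # 70000000.0 (7% exceeds EUR 35m)
```

For smaller undertakings the fixed amount dominates, which is why the Act also caps fines for SMEs and start-ups separately.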

Fines for SMEs and start-ups are subject to lower caps. The Act also establishes a European ‘AI Office’ and ‘AI Board’ at EU level. Market surveillance authorities in EU member states are tasked with enforcing the AI Act, and any individual is empowered to file complaints regarding non-compliance. Ireland is set to establish its AI HQ in early 2025.

If you have any additional questions regarding the EU AI Act or how it may impact your business, do not hesitate to contact the Company Bureau team! Give us a call at +353(0)1 6461625 or fill out our online contact form.

Disclaimer: This article is for guidance purposes only. It does not constitute legal or professional advice. No liability is accepted by Company Bureau for any action taken or not taken in reliance on the information set out in this article. Professional or legal advice should be obtained before taking or refraining from any action as a result of this article. Any and all information is subject to change.