Business Insight

European Union: leading the way with a legislative framework


    Unlike the UK, the EU has cemented the regulation of AI in new, AI-specific legislation: the EU AI Act, the world's first comprehensive legal framework for AI. Following extensive negotiations and the usual adoption procedures, the AI Act was published in the Official Journal on 12 July 2024 and came into force, subject to transitional provisions, on 1 August 2024.

    The AI Act, in Article 3, defines an AI system broadly as a:

    "machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments"

    The scope of the Act is wide, not only in terms of who it covers but also, significantly, in its territorial reach. It applies to "providers" and "deployers" (see below), as defined in Articles 3(3) and 3(4) respectively, and to importers and distributors of AI systems within the EU, regardless of where those actors are established.

    In fact, the Act's territorial reach is potentially very wide – it extends to:

    "providers and deployers of AI systems that have their place of establishment or who are located in a third country [i.e., outside the EU], where the output produced by the system is used in the [EU]”

    Notably, the distinction between "providers" (entities that develop AI systems and place them on the EU market) and "deployers" (entities using AI systems under their authority in the course of professional activities) is of particular importance: each role carries differing responsibilities and legal implications, with the majority of obligations falling on providers of high-risk AI systems.

    Risk-based approach

    The AI Act takes a cross-sector, horizontal, risk-based approach to regulation. It allocates AI systems to one of four risk categories: unacceptable risk, high risk, limited risk and low/minimal risk. The category into which an AI system falls determines the legislative obligations that apply throughout its lifecycle, from training, testing and validation, through conformity assessments and risk management systems, to post-market monitoring.

    Some AI systems are banned from use altogether in the EU on the basis that they pose an unacceptable risk. These include applications involving subliminal manipulation, predictive policing or the untargeted scraping of facial images (Article 5). High-risk AI systems – such as applications intended to be used as a safety component of a product, or relating to areas such as education or law enforcement, under Article 6 or Annex III of the AI Act – are permitted, but are subject to stringent obligations before they can enter the EU market. These obligations include:

    • adequate risk assessment and mitigation systems;
    • use of high-quality datasets to feed the system, in order to minimise risks and discriminatory outcomes;
    • CE marking;
    • registration of AI systems listed in Annex III, and of their authorised representatives, in the EU database referred to in Article 71;
    • logging of activity to ensure traceability of results;
    • detailed documentation requirements;
    • provision of all information on the system and its purpose necessary for authorities to assess its compliance;
    • provision of clear and adequate information to users;
    • appropriate human oversight measures to minimise risk; and
    • high levels of robustness, security and accuracy.

    The AI Act also introduces statutory obligations for general-purpose AI (GPAI) models, also called foundation models. These are models that:

    "display significant generality, are capable to competently perform a wide range of distinct tasks [and] that can be integrated into a variety of downstream systems or applications” (see Recital 63).

    GPAI models are classified in accordance with Article 51 of the Act as posing either non-systemic or systemic risk, depending in part on the computing power used to train the model: a model trained using a cumulative amount of computation greater than 10^25 floating-point operations (FLOPs) is presumed to pose systemic risk. While all GPAI models will be required to meet transparency requirements, those posing systemic risk are subject to stricter obligations, including conducting model evaluations, adversarial testing, tracking and reporting serious incidents, and ensuring cybersecurity protections (Article 55).
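
    To make the compute threshold concrete, the short sketch below shows how the presumption operates. It is illustrative only – the function name and the sample figure are our own, not drawn from the Act.

    # Illustrative sketch of the systemic-risk presumption described
    # above: a GPAI model whose cumulative training compute exceeds
    # 1e25 floating-point operations is presumed to pose systemic risk.
    # The names and the sample figure below are hypothetical.
    SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

    def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
        """Return True if the model is presumed to pose systemic risk."""
        return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

    # A hypothetical model trained with 3e25 FLOPs would attract the
    # stricter Article 55 obligations described above.
    print(presumed_systemic_risk(3e25))  # True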

    Implementation timeline

    As mentioned above, the AI Act was published in the Official Journal of the EU on 12 July 2024 and came into force on 1 August 2024, 20 days after its publication. Most provisions of the AI Act (including those for high-risk, limited-risk and minimal-risk AI systems) will become fully applicable on 2 August 2026. There are, however, exceptions – for example, there is a longer lead time of 36 months for AI systems embedded into regulated products. Other rules have different lead times: the provisions covering prohibited AI systems will come into effect on 2 February 2025, and those relating to GPAI models, governance and sanctions will apply from 2 August 2025.

    Oversight and enforcement

    Non-compliance with the AI Act – dealt with in Article 99 – can result in substantial penalties:

    • up to €35 million or 7% of annual worldwide turnover, whichever is higher, for violations relating to prohibited AI systems (i.e. those outlined in Article 5);
    • up to €15 million or 3% of annual worldwide turnover, whichever is higher, for non-compliance with obligations other than those relating to prohibited AI systems; and
    • up to €7.5 million or 1% of annual worldwide turnover, whichever is higher, for the supply of incorrect, incomplete or misleading information.

    For SMEs and start-ups, the lower of the two figures in each tier applies.
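
    As arithmetic, each cap is simply the higher of a fixed sum and a percentage of turnover. The sketch below illustrates this under the non-SME rule; the tier names and the sample turnover figure are hypothetical, not drawn from the Act.

    # Illustrative only: the Article 99 penalty ceilings quoted above,
    # computed as the higher of a fixed amount and a share of annual
    # worldwide turnover (the non-SME rule). Tier names and the sample
    # turnover below are hypothetical.
    PENALTY_TIERS = {
        "prohibited_practices": (35_000_000, 0.07),
        "other_obligations": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }

    def penalty_ceiling(tier: str, annual_worldwide_turnover_eur: float) -> float:
        """Return the maximum fine (EUR) for a given tier and turnover."""
        fixed_cap, turnover_share = PENALTY_TIERS[tier]
        return max(fixed_cap, turnover_share * annual_worldwide_turnover_eur)

    # A hypothetical undertaking with EUR 2 billion annual worldwide turnover:
    print(penalty_ceiling("prohibited_practices", 2_000_000_000))  # 140000000.0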

    A coordinated network of regulators – some new, some already established – will oversee the enforcement and implementation of the AI Act. Each EU Member State will designate "national competent authorities" consisting of:

    • at least one notifying authority, responsible for setting up and carrying out the necessary procedures for the assessment, designation, notification and monitoring of conformity assessment bodies; and
    • at least one market surveillance authority, responsible (among other duties) for reporting to the Commission and relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of EU competition law.

    The European AI Office was established by the European Commission to monitor, supervise and enforce AI Act requirements across Member States, particularly for GPAI models. The office, which started operations on 21 February 2024, will also support the Commission's role as lead AI Act enforcer by conducting joint investigations, preparing decisions and implementing and delegated acts, and issuing standardisation requests.

    The European AI Board will assist the AI Office in supporting national competent authorities in developing regulatory sandboxes – controlled environments in which stakeholders can develop and test innovative AI systems with regulatory support. The AI Act designates the European Data Protection Supervisor (EDPS) as the notified body and market surveillance authority for high-risk AI systems developed or deployed by EU institutions, and as the competent authority for supervising AI systems used by EU institutions. Separately, the European Data Protection Board (EDPB) launched a ChatGPT Task Force to coordinate the enforcement of EU laws in relation to ChatGPT maker OpenAI, following requests from several European regulators for greater coordination on the AI chatbot.

    The AI Pact

    The European Commission has developed the AI Pact, a voluntary initiative to promote compliance with the AI Act throughout the industry. Over 100 stakeholders, including many global names, have signed up. The aim of the Pact is to encourage participants to share good practice and be proactive about meeting their obligations under the legislation.

    National strategies

    Addressing the growing presence of AI, many Member States have adopted their own strategies in addition to the AI Act. Spain established Europe's first AI regulatory agency on 22 August 2023. The Spanish Agency for the Supervision of Artificial Intelligence (AESIA) is a key element of the Digital Agenda 2026 and Spain's National Strategy for Artificial Intelligence. The creation and functions of AESIA are governed by Spanish Royal Decree 729/2023, which outlines AESIA's mandate: supervisory responsibilities such as inspection and enforcement under the EU AI Act, as well as providing guidance, raising awareness, and offering training for the proper implementation of all national and European regulations on inclusive, sustainable, society-centric AI use and development.

    Meanwhile, in France, the French data protection authority (CNIL) has established an Artificial Intelligence Department to proactively address AI system development from a data protection perspective, issuing recommendations on the convergence between AI and privacy and inviting public consultation for further refinement and feedback.

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.