| |

LLMs and the EU AI Act: What You Need to Know

Introduction

In recent years, Large Language Models (LLMs) have experienced a meteoric rise in popularity and capability. Interestingly, this rapid advancement coincided with the development of the European Union’s Artificial Intelligence Act (EU AI Act). This timing raises important questions about how these powerful AI models are addressed within the AI Act as they were not considered in the original text when it was first proposed in 2021.

This article provides an overview of how LLMs are handled within the EU AI Act. You’ll learn how to determine your role in the AI value chain and how to classify your application under the Act’s framework. Additionally, we’ll explore the specific requirements for different scenarios, helping you understand your obligations whether you’re a model provider, system developer, or both.

Key Distinctions: AI Models vs. AI Systems

The EU AI Act makes a crucial distinction between “General Purpose AI (GPAI) models” and “AI systems” Understanding this difference is essential for compliance:

To illustrate the difference:

  • GPT-4o is a General-Purpose AI Model
  • ChatGPT is an AI System that uses GPT-4o (or a variant) along with a user interface, safety measures, and other components

Risk classification of GPAI models

While AI Systems are categorized based on their specific use and potential harm (e.g., high-risk systems for healthcare), GPAI models are separately assessed for systemic risk. According to Article 51, a GPAI model is classified as “General Purpose AI Models with Systemic Risk” if it meets any of the following conditions:

  • It has high impact capabilities evaluated based on appropriate technical tools and methodologies, including indicators and benchmarks.
  • The European Commission determines it has capabilities or an impact equivalent to those with high impact capabilities.

Importantly, the Act presumes that a GPAI model has high impact capabilities if the cumulative computation used for its training exceeds 10^25 floating point operations.

Rules for General-Purpose AI model providers

The EU AI Act Article 53 introduces specific obligations for general-purpose AI models (GPAI):

  1. Technical Documentation: GPAI providers must create and maintain comprehensive technical documentation of the model, including details on the training and testing processes, as well as evaluation results, as outlined in Annex XI.
  2. Transparency for Downstream Providers: Providers must offer information and documentation to downstream system providers who intend to integrate the GPAI model into their AI systems, as specified in Annex XII.
  3. Copyright Compliance: GPAI providers are required to implement policies ensuring compliance with EU copyright law, including identifying and complying with rights reservations.
  4. Training Data Transparency: Providers must publish a detailed summary of the content used for training the GPAI model, using a template provided by the AI Office.

These obligations do not apply to open-source AI models, except those with systemic risks.

Additional Requirements for GPAI Models with Systemic Risk

According to Article 55, GPAI models deemed to have systemic risk are subject to further obligations:

  • Conducting thorough model evaluations, including adversarial testing
  • Assessing and mitigating possible systemic risks
  • Maintaining incident response and reporting procedures for serious incidents
  • Ensuring adequate cybersecurity protection for the model and its infrastructure

Requirements for downstream AI system providers

Downstream AI system providers, e.g. providers of AI systems which use GPAI models, must ensure their systems comply with the Act’s requirements, which vary depending on the risk level of the AI system. Certain AI systems such as general-purpose AI system have additional transparency obligations including:

  • Inform users they’re interacting with an AI system (unless obvious).
  • Mark AI-generated content (audio, image, video, text) as artificial.

Scenarios and Responsibilities

To make the requirements more tangible, let’s consider three common scenarios for GPAI model and AI system providers:

1. GPAI Model Provider and AI System provider

2. Provider Creates an AI System using another’s GPAI model

3. Provider Fine-tunes an Existing GPAI Model

In each scenario, providers must carefully consider their role in the AI value chain and comply with the relevant obligations under the EU AI Act.

Next Steps for Compliance

The EU AI Act provides a phased implementation timeline with first restrictions starting to apply on 2nd of February 2025. However, proactive preparation is advisable to ensure timely compliance. By adhering to these regulations, organizations can contribute to the responsible development and deployment of AI technologies while mitigating potential risks.

At Validaitor, we specialize in helping companies achieve and maintain compliance with the EU AI Act through our comprehensive AI testing and auditing platform. Our tools and expertise enable organizations to assess their AI systems’ risk levels, conduct thorough evaluations, and implement necessary safeguards, ensuring they meet regulatory requirements while fostering responsible AI development.

For the latest updates on the EU AI Act and its implementation timeline, visit the EU AI Act website by the Future of Life Institute or follow us on LinkedIn

Similar Posts