The European Union has taken a proactive stance in regulating artificial intelligence (AI), aiming to strike a balance between innovation and the protection of fundamental rights. The cornerstone of this approach is the EU AI Act, signed on 13 June 2024 and in force since 1 August 2024, which introduces harmonised rules for the development, deployment, and oversight of AI systems across Member States. The Regulation is complemented by the revised Product Liability Directive (PLD) and the proposed AI Liability Directive (AILD), together forming a comprehensive framework for the governance of AI.
A risk-based regulatory model
The AI Act categorises AI systems into four risk levels:
- Minimal risk: Includes systems such as video games and spam filters. These are largely exempt from regulation.
- Limited risk: Covers systems such as chatbots, which are subject to transparency obligations so that users know they are interacting with AI, mitigating the risk of manipulation.
- High risk: Encompasses AI used in sensitive areas such as education, employment, law enforcement, and biometric identification. These systems are subject to strict compliance obligations.
- Unacceptable risk: Refers to prohibited practices that infringe fundamental rights, such as social scoring by public or private actors.
High-risk systems must comply with detailed requirements, including transparency, traceability, and fundamental rights impact assessments. Providers and deployers are required to implement written policies, risk management systems, and technical documentation to ensure compliance.
Governance structure
The AI Act establishes a two-tier governance model:
- European level: The European Artificial Intelligence Board oversees the consistent, harmonised application of the Regulation. It works alongside the European AI Office within the Commission and is supported by an advisory forum and a scientific panel of independent experts.
- National level: Member States must designate one or more national competent authorities, including Market Surveillance Authorities (MSAs), by 2 August 2025, one year after the Act’s entry into force. These bodies are responsible for ensuring compliance and coordinating with EU institutions.
Data protection authorities are expected to play a central role in supervising high-risk AI systems, leveraging their expertise in safeguarding fundamental rights.
Liability and enforcement
The proposed AI Liability Directive was intended to harmonise non-contractual civil liability rules across the EU, introducing mechanisms for evidence disclosure and shifting the burden of proof in fault-based claims. It would have applied to all AI systems, regardless of risk level, giving individuals and organisations a route to compensation for damage caused by AI. In early 2025, however, the European Commission announced its intention to withdraw the proposal.
The revised Product Liability Directive extends strict liability to software and AI systems, enabling consumers to seek redress for harm caused by defective AI products.
National implementation
Member States are responsible for implementing the AI Act and related directives through local legislation. For example:
- Ireland: Nine national authorities have been designated to oversee AI compliance across sectors such as media, financial services, and human rights. The Data Protection Commission will monitor AI systems for data protection compliance, while other bodies will address sector-specific concerns.
- Germany: Germany is integrating the AI Act into its existing legal framework, including data protection, product safety, and labour law. The Works Constitution Act mandates co-determination by works councils when AI systems monitor employee behaviour.
- France: France has issued guidance through the CNIL to align AI development with GDPR requirements, emphasising transparency and lawful data processing.
Find out more about local adoption of AI regulation.
Interaction with GDPR
The AI Act complements the General Data Protection Regulation (GDPR). While the GDPR governs personal data processing, the AI Act sets standards for safe AI development and deployment. In many cases, both regulations apply simultaneously, particularly for high-risk systems involving personal data.
For instance, the AI Act requires certain deployers of high-risk systems to conduct fundamental rights impact assessments, while the GDPR mandates data protection impact assessments for high-risk processing. These obligations reinforce each other, promoting responsible AI use.
Applicability of the EU AI Act to UK businesses
Although the United Kingdom is no longer part of the European Union, the EU AI Act still has implications for UK-based organisations. Under Article 2 of the Regulation, the AI Act applies extraterritorially to providers that place AI systems on the EU market and to providers and deployers in third countries whose systems produce output that is used in the EU. This means that UK businesses developing or deploying AI systems for the EU market, or whose systems’ output is used within the EU, must comply with the Regulation’s requirements.
UK companies may be subject to obligations such as:
- Risk classification and compliance: AI systems must be assessed and categorised according to the EU’s risk-based framework. High-risk systems will require conformity assessments, documentation, and transparency measures.
- Fundamental rights impact assessments: Where AI systems affect people in the EU, UK businesses may need to conduct assessments to ensure compliance with EU standards on privacy, fairness, and non-discrimination.
- Appointment of EU-based representatives: Non-EU providers may be required to designate authorised representatives within the EU to ensure regulatory oversight and facilitate enforcement.
Timeline for implementation
The AI Act follows a phased implementation schedule. In 2025, the prohibitions on unacceptable AI practices took effect (February), followed by the rules for general-purpose AI models and the deadline for appointing national authorities (August). Still to come are:
- August 2026: Full application of the Regulation, including high-risk systems
- August 2027: Application of the obligations for high-risk systems that are products, or safety components of products, covered by the EU harmonisation legislation listed in Annex I
The European Commission has tasked the European standardisation organisations CEN and CENELEC with drafting harmonised standards to support compliance.
Key takeaways
- The EU’s AI regulatory framework is built around the AI Act and the revised PLD, offering a structured approach to managing AI risks; the proposed AILD has since been withdrawn.
- AI systems are regulated based on risk level, with high-risk and unacceptable-risk systems subject to stringent controls.
- Governance is split between EU-level bodies and national authorities, with data protection agencies playing a key role.
- The AI Act complements GDPR, ensuring that AI systems respect data protection and fundamental rights.
- Member States are actively implementing the regulation, with tailored approaches reflecting national legal frameworks.
- The phased rollout of the AI Act allows organisations time to adapt and comply with evolving standards.
If you have any concerns about the AI Act and its application to your business, get in touch with us.
