
Domestic European approaches to AI regulation


By DAC Beachcroft | Published 05 January 2026

Overview

As artificial intelligence (AI) continues to evolve, European countries are navigating the complex task of regulating its development and deployment. While the European Union has introduced overarching legislation such as the EU AI Act and the AI Liability Directive, individual Member States are also shaping domestic frameworks to address national priorities and sector-specific concerns. This article examines how selected jurisdictions are addressing AI regulation domestically and how these efforts intersect with EU-level initiatives.


France: Fragmented liability and data protection leadership

France does not currently have dedicated legislation governing artificial intelligence. Instead, it relies on general civil liability provisions under the French Civil Code, which present challenges when applied to AI systems: 

  • Fault-based liability: Victims must prove fault and causation, which can be difficult given the opacity of AI systems (Articles 1240–1241).
  • Strict liability for “actions of things”: This regime does not require proof of fault, but raises questions about who qualifies as the “guardian” of an AI system (Article 1242).
  • Product liability: Producers may be liable for defective AI systems, although exemptions exist for development risks, particularly where defects were undetectable due to the novelty of the technology (Articles 1245 et seq.).

In parallel, France’s data protection authority, CNIL, plays a leading role in AI governance. It has published 12 practical guidance sheets to help AI developers and deployers comply with the GDPR. These cover legal bases for processing, impact assessments, transparency obligations, and security measures.

CNIL emphasises that the AI Act complements but does not replace GDPR requirements, and both may apply depending on the risk level and use of personal data. 

France is also preparing for the implementation of the EU AI Act, which requires the designation of national market surveillance authorities. While the specific French authority has not yet been named, CNIL is expected to play a central role, particularly for high-risk AI systems. 


Germany: Adapting existing laws and preparing for EU alignment

Germany currently regulates AI through its existing legal framework, with no standalone AI legislation in place. Key areas of focus include: 

  • Data protection: The General Data Protection Regulation (GDPR) remains central, particularly Article 22, which restricts decisions based solely on automated processing. 
  • Product safety: The German Product Safety Act (ProdSG) applies to AI systems, requiring that products do not endanger users when used as intended. 
  • IT security: The Act on the Federal Office for Information Security (BSIG) requires operators of critical infrastructure to implement robust organisational and technical safeguards. 
  • Labour law: The Works Constitution Act (BetrVG) grants works councils co-determination rights when AI systems monitor employee behaviour. 

Germany is actively implementing the EU AI Act, with a risk-based approach categorising AI systems into minimal, limited, and high-risk tiers. High-risk applications, such as those in healthcare and autonomous driving, are subject to stringent transparency and safety requirements. The country is also preparing for the AI Liability Directive, which will integrate fault-based claims into national tort law. 


Ireland: Sectoral oversight and strategic planning

Ireland has not yet enacted specific AI legislation, but is preparing for the EU AI Act through public consultations and strategic initiatives. In 2024, Ireland designated nine national authorities to oversee AI compliance across sectors, including: 

  • Data Protection Commission: Ensures AI systems comply with data protection principles. 
  • Coimisiún na Meán: Monitors AI’s influence on media, including misinformation and deepfakes. 
  • Financial Services and Pensions Ombudsman: Focuses on consumer protection in financial services. 
  • Irish Human Rights and Equality Commission: Addresses discrimination and bias in AI applications. 

Ireland’s AI Strategy, refreshed in 2024, promotes regulatory sandboxes, research hubs, and digital upskilling. The National Standards Authority of Ireland is contributing to EU-wide standards, which will underpin the AI Act’s implementation. While enforcement mechanisms are still being defined, Ireland is expected to introduce domestic legislation to support penalties and compliance measures. 


Italy: A national framework that complements the EU AI Act

Italy has taken a leading role in European AI regulation with the introduction of Law No. 132, which came into force on 10 October 2025. It is the first comprehensive national framework designed to operate in tandem with the EU AI Act, rather than duplicating it. It adopts the same definitions for key terms, reducing interpretive discrepancies between national and EU-level rules. 

The law’s success will depend on effective implementation and coordination, but it positions Italy as a regulatory frontrunner in the European AI landscape. 


Key principles: Data protection and ethical safeguards 

  • Foundational principles: Article 4 sets out requirements for lawful and transparent data processing, with an emphasis on clear communication of risks and processing activities. 
  • Minors’ rights: A two-tier consent model is introduced. Parental consent is required for AI-related data processing for children under 14, while those aged 14 to 18 may consent independently, provided the information is presented in an accessible format. 
  • Healthcare data: Articles 8 and 9 establish a framework for AI-driven healthcare research, recognising its constitutional importance. Secondary use of personal data is permitted once direct identifiers are removed, subject to oversight by the Garante via a notification process. 
  • Transparency obligations: Data controllers must provide general information notices online, and specific disclosures remain mandatory for anonymisation and pseudonymisation activities. 


Governance and enforcement 

Italy adopts a dual-authority model: AgID (Agency for Digital Italy) oversees innovation and conformity assessments, while the ACN (National Cybersecurity Agency) is responsible for market surveillance, inspections, and enforcement. 

The powers of existing authorities such as the Garante and the Communications Regulatory Authority remain intact. The Government is also authorised to issue legislative decrees to expand enforcement powers and introduce criminal provisions for unlawful AI use. 


Transparency, copyright, and creative works 

Article 13 of the framework limits the use of AI in the intellectual professions and requires professionals to disclose the use of AI tools to clients to ensure transparency.

Additionally, Italy is the first EU country to explicitly extend copyright protection to works created with the assistance of AI, provided they reflect human intellectual input. 


Spain: Structured implementation and evaluation

Spain has outlined a comprehensive roadmap for AI governance, identifying 88 “responsibilities” at the national level. These include: 

  • 18 tasks aimed at establishing a national AI governance system.
  • 7 legislative items that may be introduced or supported at the national level.
  • 55 categories of implementation and control activities, some of which have already been implemented.
  • 8 tasks focused on post-implementation evaluation, extending to at least 2031. 

To date, there is no comprehensive proposal for the overall legal regulation of issues arising from the use of AI systems. Numerous regulatory working groups are analysing the risks and potential impacts of AI use in order to anticipate the legal changes that will be needed. 

Spain’s approach reflects a structured commitment to both immediate compliance and long-term oversight. The country is actively preparing for the EU AI Act’s phased implementation, including the establishment of regulatory sandboxes and mechanisms for evaluating AI systems. 


Common themes and future outlook 

Across these jurisdictions, several common themes emerge: 

  • Reliance on existing legal frameworks: Most countries are adapting current laws in areas such as data protection, product safety, and labour rights to address AI-related challenges. 
  • Gradual implementation of EU legislation: The AI Act and AI Liability Directive are driving harmonisation, but Member States retain discretion over enforcement and penalties. 
  • Sector-specific oversight: National authorities are being designated to regulate AI within their respective domains, ensuring tailored governance. 
  • Transparency and accountability: There is a shared emphasis on making AI systems explainable and ensuring users understand how decisions are made. 

As the EU’s regulatory landscape matures, domestic approaches will continue to evolve. Member States are balancing innovation with risk management, aiming to foster trustworthy AI while protecting fundamental rights. 


Key takeaways 

  • France continues to apply general civil liability principles to AI, pending dedicated legislation.
  • Germany, Ireland, and Spain are adapting existing laws and preparing for EU-level AI regulation.
  • Italy has enacted Law No. 132, the first comprehensive national AI framework designed to complement the EU AI Act.
  • The EU AI Act introduces a risk-based framework, with phased implementation across Member States.
  • Ireland has designated sectoral regulators and refreshed its national AI strategy.
  • Spain has developed a detailed roadmap for AI governance and evaluation.
  • Transparency, accountability, and sector-specific oversight are central to domestic approaches. 

For further information or advice on any of the jurisdictions mentioned and how they apply to your business, get in touch with us.