The UK currently lacks overarching legislation or regulations specifically tailored for artificial intelligence (AI). The previous Conservative government adopted a pro-innovation, principles-based approach to AI regulation. Rather than establishing a single, central AI regulator, the current framework relies on existing regulators, leveraging their expertise to oversee AI within their respective sectors.
This position was expected to change with the election of a Labour government in the 2024 general election. However, although there was reference to AI legislation in the King's Speech in July 2024, we are unlikely to see any change until after the King's Speech expected in 2026. In the meantime, the Government has expressed its commitment to the pro-innovation approach, the Data (Use and Access) Act 2025 is now in force, updating existing data protection legislation, and we wait to see whether a second reading of the Artificial Intelligence (Regulation) Bill, a Private Member's Bill, will be scheduled.
To navigate the evolving landscape of AI regulation, it is important to acknowledge that this is a fast-moving area of law which may be subject to frequent change in the near future. This memo reflects the information available as of 16 December 2025, and we encourage ongoing awareness of and adaptability to new developments in AI policy and regulation.
For further detail on the UK's approach to AI, the House of Commons research briefing Artificial intelligence: a reading list is a useful source of information.
Contents
- Where have we come from: A pro-innovation approach
- Change of government: what changed?
- Governance of AI: Government response and policy consolidation
- Delayed legislation and the shift to regulatory enablement
- Political context and future pressure for legislation
- Institutional architecture: regulators, RIO and the AI Growth Lab
- What have regulators done so far?
- Private Members' Bills
- Data (Use and Access) Act
- Large language models and generative AI: House of Lords Communications and Digital Committee report
- How have the UK courts dealt with liability for AI?
Where have we come from: A pro-innovation approach
The previous Conservative government set out its proposed approach for regulating AI within a White Paper titled: A pro-innovation approach to AI regulation published on 29th March 2023. The White Paper formally introduced the idea of a principles-based and "context specific" approach to AI regulation in the UK.
The following five principles from the White Paper set out the parameters which the UK Government expected regulators to enforce in their respective sectors:
Safety, security and robustness: AI must primarily be safe. AI Suppliers must have appropriate measures in place to ensure their AI systems are secure and robust and that risks are identified and managed accordingly.
Appropriate transparency and "explainability": Persons or entities affected need to be aware that AI is being used and to have access to (and understand) the decision-making processes of an AI system.
Fairness: AI systems must not undermine the rights of individuals or organisations, including by not discriminating unfairly or creating unfair market outcomes.
Accountability and governance: AI systems must be governed in a way that ensures effective oversight and clear accountability.
Contestability and redress: Users of AI need to be able to contest an AI decision which is harmful or creates a material risk.
These five principles were not intended to be statutory. Existing regulators were instead empowered to issue guidance on how the five principles should be interpreted and what practical measures can be taken to ensure compliance. The previous government's reasoning for taking this approach, rather than introducing AI-specific legislation, was that it would:
- Create a regulatory framework which is adaptable in the face of rapidly evolving technology
- Avoid a scenario whereby the remit of existing regulators is undermined by new legislation
Therefore, rather than looking to precisely define an AI system, as under the EU AI Act, the White Paper approach instead defines AI systems by reference to two key characteristics: adaptivity and autonomy.
In essence, the previous government acknowledged that AI systems are constantly evolving and becoming increasingly complex. So as not to have regulation lag behind the pace of development of AI systems, the White Paper focused on the outputs of a system rather than the operation of the system itself.
It is important to note the White Paper has not yet been formally superseded, and the principles-based framework remains the operative policy baseline for regulators, who refer to the principles in their AI guidance.
Change of government: what changed?
The change of government prompted widespread speculation that the UK might move towards a more robust and comprehensive legislative framework for AI. In practice, however, the initial signals suggested a more cautious and incremental shift rather than an immediate break with the existing approach.
This was evident in the first King’s Speech delivered in July 2024. Notably, no standalone AI Bill was announced, and artificial intelligence was not listed among the Government’s legislative priorities. Instead, the Speech contained a more general commitment “to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.
The wording of this commitment is significant. Rather than signalling a broad, economy-wide regulatory framework, it pointed towards a targeted focus on frontier AI systems, those models considered to be the most powerful and potentially the most risky. This emphasis on developers of advanced models, rather than users or deployers of AI more generally, suggested continuity with a risk-based, selective approach to intervention.
At the same time, the limited detail provided left open important questions about scope, timing and ambition. As later developments indicate, it remains uncertain whether this targeted focus on frontier models will endure, or whether the Government’s position on AI regulation is evolving towards a broader legislative framework. The sections that follow explore this in more detail.
Governance of AI: Government response and policy consolidation
On 10 January 2025, the Government published its response to the House of Commons Science, Innovation and Technology Committee’s report on the governance of AI (originally published in May 2024). The response did not propose new primary legislation, but instead reaffirmed the Government’s existing policy position on AI regulation.
In particular, the Government reiterated its commitment to a pro-innovation regulatory approach, emphasising the importance of enabling AI development and adoption while managing risks through existing regulatory frameworks. It confirmed that responsibility for AI oversight would continue to sit primarily with sector regulators, supported by enhanced coordination and guidance at a central level.
The response also highlighted a number of supporting themes, including:
- Improving coordination between regulators on AI-related issues
- Increasing transparency and public engagement around the use of AI
- Continuing to work closely with international partners to promote interoperability and alignment across jurisdictions
Taken together, the response signalled continuity rather than change. It confirmed that the Government did not, at that stage, intend to introduce a horizontal AI Act or a standalone statutory framework governing AI systems, and that regulatory development would continue to be incremental and regulator-led.
Delayed legislation and the shift to regulatory enablement
Shortly after publishing its response on AI governance, the Government announced that it would delay its previously signalled plans to introduce AI-specific legislation. Earlier expectations of a short, targeted AI Bill, focused primarily on frontier or foundation models, were abandoned in favour of a more ambitious and comprehensive legislative proposal.
The Government indicated that any future AI Bill would be broader in scope, potentially incorporating AI safety, copyright and wider governance issues. However, it also confirmed that this legislation would not be introduced before the next King’s Speech, expected in May 2026. At the time of writing, no draft Bill or consultation has been published, and the precise scope and structure of any future legislation remain uncertain.
In the absence of imminent legislation, the Government has placed increased emphasis on regulatory coordination, capability-building and controlled experimentation. This approach is reflected in the Regulation Action Plan, first published on 17 March 2025 and updated in October 2025. Although the Action Plan applies across the regulatory landscape, it sets out a number of themes that are central to AI oversight.
In particular, the Action Plan:
- Confirms that no new, horizontal AI regulation is planned in the near term
- Reinforces the expectation that regulators will take primary responsibility for AI risks within their existing statutory remits
- Emphasises the need for regulators to develop technical expertise, AI literacy and supervisory tools
- Supports the use of sandboxes and test-and-learn mechanisms to enable responsible AI innovation
This shift has been accompanied by a series of practical initiatives, most notably the proposed AI Growth Lab, alongside sector-specific regulatory activity and increased investment in regulatory capability. Together, these measures reflect a deliberate policy choice to prioritise regulatory enablement and evidence-gathering over immediate legislative intervention.
At the same time, this approach has attracted growing political scrutiny. Calls for binding AI regulation have increased, including from parliamentarians and through Private Members’ Bills, highlighting an emerging tension between the Government’s preference for sequencing and flexibility and demands for more formal statutory controls.
Political context and future pressure for legislation
As noted above, this approach has attracted growing political scrutiny, with pressure for more formal regulation increasing. In December 2025, more than 100 UK politicians signed a public call for binding regulation of powerful AI systems, supported by the campaign group Control AI. The campaign argues that current measures are insufficient and has called for mandatory standards to be introduced within the next one to two years.
Against this backdrop, the UK’s AI regulatory strategy can be characterised as one of deliberate sequencing: enabling innovation and experimentation first, building regulatory capability and evidence through sandboxes and pilots, and deferring comprehensive legislation until there is greater clarity on risks, use cases and international alignment.
Whether that strategy can be sustained in the face of growing political and public pressure remains an open question.
Institutional architecture: regulators, RIO and the AI Growth Lab
As it stands, a central feature of the UK approach is the expectation that regulators themselves will become “AI-ready”. This means building internal capability in AI literacy, technical understanding and risk assessment, so that existing regulatory regimes can be applied effectively to AI-enabled products and services.
The Regulatory Innovation Office (RIO) plays a key coordinating role in this ecosystem. RIO acts as a cross-government hub for regulatory innovation, supporting regulators on frontier technologies and facilitating alignment on cross-cutting AI issues.
One of the most significant new proposals in this space is the AI Growth Lab. The Lab is envisaged as a multi-sector, real-world testing environment for advanced AI systems, operating under regulatory supervision.
Within the Lab:
- Organisations would be able to test AI systems in live environments
- Regulators could temporarily modify or relax specific regulatory requirements where justified
- Experimentation would take place under defined safeguards and monitoring
The stated objectives are to accelerate innovation, reduce time-to-market, and generate evidence to inform future regulatory reform. While not a central AI regulator in the EU sense, the AI Growth Lab represents the closest the UK has come to a coordinated, system-level AI oversight mechanism - albeit one framed around enablement rather than control.
Alongside this, a number of sector-specific AI initiatives are being developed, reflecting the continued reliance on existing regulators to tailor AI oversight to the risks and realities of their respective sectors.
What have regulators done so far?
In looking at the efforts made by regulators to date, it is important to acknowledge that, while much of the regulatory action taken thus far stems from responses to the previous government's White Paper, it remains relevant in the current context. The Government's principles-based approach means that it is down to individual regulators to shape the AI landscape. As a result, most regulatory developments in the UK so far have come from individual regulators interpreting and implementing the principles according to the risks and requirements of their specific sectors, rather than from central legislation.
The key regulators that will be involved in the governance of AI include the following:
- The Information Commissioner's Office (ICO)
- The Financial Conduct Authority (FCA)
- The Prudential Regulation Authority (PRA) and Bank of England (BoE)
- The Medicines and Healthcare products Regulatory Agency (MHRA)
- The Competition and Markets Authority (CMA)
- The Office of Communications (Ofcom)
The regulators are expected to cooperate with each other and to ensure that the approaches taken and guidance issued are aligned with one another. This allows AI systems to operate across multiple regulatory remits where an AI system is sector agnostic. The risk, of course, is the issuance of competing (and potentially contradictory) guidance and the dilemma of which takes precedence.
The UK government is advancing its pro-innovation regulatory agenda through the proposed AI Growth Lab, a cross-economy regulatory sandbox announced by DSIT on 21 October 2025. Designed to allow organisations to test AI-enabled products and services in a controlled environment, the Lab would offer time-limited regulatory flexibilities so innovators can develop real-world evidence while regulators gain insight into emerging risks and necessary safeguards. To shape how the sandbox should operate, including which sectors to prioritise and what conditions should govern participation, DSIT opened a call for evidence running until 2 January 2026. If implemented following this consultation, the AI Growth Lab is expected to become a key mechanism for balancing responsible oversight with the UK’s ambition to accelerate safe and effective AI deployment across the economy.
Digital Regulation Cooperation Forum
The work of the Digital Regulation Cooperation Forum (DRCF) was seen as an example of best practice. The DRCF is a voluntary forum established in 2020 which brings together four UK regulators with responsibilities for digital regulation – the CMA, FCA, ICO and Ofcom.
These regulators recognised that, by working together, they could better respond to the global nature of large digital platforms and the speed at which they innovate. There is a suggestion that an expanded version of the DRCF should be considered to co-ordinate approaches between all regulators. However, the DRCF has indicated that there are no immediate plans to expand their membership.
The group engages on overlapping policy areas to deliver a coherent and coordinated approach to digital regulation for both consumers and business. The DRCF works with a wide range of stakeholders including industry, trade associations and technology companies. Regular engagement is undertaken with UK Government stakeholders including those in the devolved nations and a wider group of regulators outside of the members of the DRCF.
The DRCF's 2024/25 Workplan set out the priorities for the DRCF and established a three-year vision reflecting the shared priorities of the members. As previously reported, the DRCF confirmed 12 Workplan projects which focus on key AI issues:
- DRCF AI and Digital Hub: This 12-month pilot will provide regulatory clarity to AI and digital innovators across multiple regulatory remits, by making it easier to deliver digital products to market. The DRCF will publish outcomes of queries as case studies on the website and seek feedback from users to evaluate the performance and service offering of the pilot.
- Artificial Intelligence: The group will work together on a range of activities to improve regulatory coherence and business compliance in line with the UK Government's AI framework. These activities are expected to include joint research into consumer experience of generative AI and the third party auditing market; hosting events to explore the interaction of the UK Government's AI principles with regulation and raise awareness of research into responsible AI; sharing knowledge with users and the UK Government's central AI function.
- Online Safety and Data Protection (Ofcom/ICO): The regulators will continue to ensure coherence between data protection and online safety. This will involve developing existing working practices within the legal framework, along with cooperation and collaboration on issues such as Ofcom's codes of practice and guidance under the Online Safety Act and ICO guidance on user profiling and behaviour identification.
- Digital Assets (FCA/ICO): To allow continued collaboration to deepen the understanding of digital assets, the regulators will engage with the crypto asset industry and other stakeholders, and also look to align ICO guidance and FCA work in this area.
- Illegal Online Financial Promotions (FCA/Ofcom): The regulators will continue to engage and explore the interaction and coherence between Ofcom's Online Safety Act codes of practice and Financial Promotions legislation. The interventions of other regulators such as the ICO may also be considered to support data sharing.
- Promoting Competition and Data Protection (ICO/CMA): The regulators will discuss any potential or perceived tension arising from the overlapping CMA and ICO regimes, so that the interests of both business and consumers are served. This will involve joint statements on AI foundation models to guide development and updating the 2021 joint statement on competition and data protection. There will be continued cooperation on supervisory and investigation activities and continued evaluation of Google's compliance with CMA commitments on the Privacy Sandbox.
- Sharing the latest developments on cross-cutting digital issues: The group will share knowledge on areas of shared interest such as online architecture, online advertising and privacy enhancing technologies (PETS). New areas of interest will also be discussed.
- Horizon Scanning and Emerging Technology: The group will supplement the existing horizon scanning functions of the individual regulators to plan for new innovations, and the subsequent regulatory risks and opportunities. This will involve conducting and publishing research on the future of digital identity and deepfakes, as well as revisiting previous horizon scanning topics.
- Supervisory Technologies: The group will explore opportunities to collaborate in the development of technologies to assist regulatory functions. This will involve exchanging information on the use of machine learning and AI, and exploring how Large Language Models can be used to address complex regulatory issues.
- Skills and Capabilities: In order to build the regulators' digital capabilities, the group will promote joint learning and development opportunities, embed graduate placement and staff secondment pathways between regulators, and upskill staff for AI governance.
The DRCF released its 2025/26 Workplan in April 2025, which builds upon the projects in the 2024/25 edition. Notably, the original 12-month AI and Digital Hub pilot has now transitioned into a broader, ongoing "AI and Innovation Services" function. Horizon Scanning and Emerging Technology remains a core stream, and in 2025/26 there is specific mention of "agentic AI systems" among the emerging technologies to be studied, reflecting accelerating technological change and the need for forward-looking regulation. Projects on online safety, data protection, digital markets and online choice architecture, fraud and scams, and consumer protection in financial services also appear in 2025/26.
In short, the vast majority of the 2024/25 AI-related project strands are retained in 2025/26, though structured under somewhat different headings and combined within broader workstreams. The DRCF is demonstrating strong continuity in AI governance which we expect to continue through to 2026/27.
The Information Commissioner's Office
Given the significant overlap between data protection requirements and AI regulation it is no surprise that the ICO has been particularly proactive in respect of its regulation of AI.
This activity has included:
- Issuing extensive AI-focused guidance as well as practical resources including the AI and data protection risk toolkit
- Launching a consultation series on generative AI, with the ICO's response published on 13 December 2024, which looked at: i) the lawful basis for web scraping to train generative AI models; ii) purpose limitation in the generative AI lifecycle; iii) accuracy of training data and model outputs; iv) engineering individual rights into generative AI models; and v) allocating controllership across the generative AI supply chain
- Taking enforcement action. In October 2023, it issued a preliminary enforcement notice to Snap (Snap, Inc and Snap Group Limited) as a result of its purported failure to adequately assess risks posed to children by use of its generative chatbot, "My AI"
- Publishing a new "AI and biometrics" strategy in June 2025. The strategy commits the ICO to "step up our supervision of AI and biometric technologies" demonstrating an intention to increase oversight when organisations deploy AI/biometrics
- Publishing its internal AI use policy, effective August 2025, with the hope of giving organisations confidence and regulatory certainty on AI
In its response to the White Paper, as referenced above, the ICO suggests that the principles-based approach of data protection law, which provides a flexible framework enabling organisations to adapt to developments in AI, can be considered analogous to the proposed White Paper principles. The ICO therefore considers that it "has active experience of implementing the aims and objectives" of the White Paper principles.
The response reflects upon each of the five AI principles in the White Paper and highlights how these map to the existing data protection principles. Although the Data Protection and Digital Information Bill has since been superseded by the Data (Use and Access) Act, the response also commented on the Bill's progress at the time, highlighting that any future approach to AI regulation would be affected by this new legislation, but that the ICO's role in regulating AI would not.
The Group Manager recently spoke about the ICO's role in regulating the UK's AI sector, viewing that role as ensuring responsible innovation by enforcing data protection across the AI lifecycle, supporting ethical development, and balancing economic growth with public trust and safety.
The published response from the ICO in respect of its consultation series on generative AI states that it has retained its position on purpose limitation, accuracy and controllership. However, interestingly the ICO has updated its position on the availability of legitimate interests as a lawful basis for web scraping to train generative AI models and on engineering individual rights into generative AI models.
The key takeaways are that, concerning web scraping and the availability of legitimate interests for generative AI, the ICO expects generative AI developers to significantly improve their approach to transparency. Regarding data subject rights, the ICO states that organisations acting as controllers must design and build systems that implement the data protection principles effectively and integrate necessary safeguards into the processing.
However, the ICO has been regulating AI for several years, with a landmark report on Big Data, Artificial Intelligence, Machine Learning and Data Protection first being published in 2014. As referenced earlier, the ICO has since developed a suite of guidance and products to help organisations navigate the use of AI with reference to data protection law. General guidance is regularly updated, along with supplementary guidance on automated decision-making and profiling, and explaining decisions made by AI, guidance that was produced in partnership with the Alan Turing Institute. Specific applications of AI such as biometric recognition technologies are also covered.
The response also refers to the ICO's award-winning AI and Data Protection Risk Toolkit, also referenced earlier, the Regulatory Sandbox and Innovation Advice service. In addition, the ICO undertakes consensual audits to help organisations assess their processing of personal data using AI and provides advice to improve their methods.
Regulatory action to promote compliance and safeguard individuals is a significant part of the ICO's remit. There have already been a small number of regulatory decisions in relation to the use of AI which the report highlights; the £7.5 million fine issued to Clearview AI (subject to ongoing legal proceedings), the issue of enforcement notices to Serco Leisure and others relating to the use of facial recognition and fingerprint scanning for employee monitoring, and an ongoing action relating to the use of generative AI chatbots on the social media platform Snapchat.
The ICO's response, and other action taken throughout the year, makes clear that the ICO will continue to actively scrutinise AI to safeguard users and the public from harm. The communication of regulatory outcomes is highlighted as a necessary tool to drive improvements. Collaboration with other regulators is also identified by the ICO as a key element of its strategic approach.
Thematically, and perhaps unsurprisingly, the ICO's strategic approach to AI regulation indicates a particular focus on children (recognising that children are a vulnerable group and more exposed to the risks of AI) and biometric technology (including biometric recognition and behaviour classification). The three enforcement actions noted in the ICO's response align with these focus areas.
The Bank of England and Prudential Regulation Authority
As set out in their response to the White Paper, the BoE and PRA remain focused on "understanding how to support the safe and responsible adoption of AI/ML in financial services from a macro-financial and prudential perspective, given the potential benefits – including driving innovation – that AI/ML could bring to firms". For that purpose, the existing regulatory framework is considered to be 'well-equipped' for regulated firms' use of AI and ML.
However, the BoE and PRA emphasise that although a technology-agnostic approach is currently being followed, the use of certain technologies affecting statutory objectives and creating risk for both firms and regulators may warrant new guidance and clarification of existing rules in due course. The BoE and PRA have various projects underway/planned to understand, assess and manage the risks presented by AI and ML including:
- Exploring four potential areas where further clarification on the regulatory framework could be beneficial, being: (1) Data Management; (2) Model Risk Management; (3) Governance; and (4) Operational Resilience and Third-Party Risks. In addition, deeper analysis on the potential financial stability implications of AI/ML will be undertaken over the course of this year. This analysis will be considered by the Financial Policy Committee.
- Working alongside other relevant authorities both domestically and internationally, ensuring that the UK financial system is resilient to risks that may arise from widespread adoption of AI/ML. The need for regulatory cooperation will be an ongoing process. Discussions are continuing with the FCA and other regulators around the safe adoption of AI and ML in the financial services sector.
- Working with the DRCF on selected AI projects, which includes conducting joint research to better understand cross-sector adoption of generative AI technology. The BoE is also a member of the Information Commissioner's AI and Regulators Working Group and is represented on the Alan Turing Institute's Standards Hub Regulators' Forum.
- Continuing to build on established frameworks to enhance the cyber and operational resilience of the financial sector.
- Commencing the third instalment of the ‘ML in UK financial services’ survey this year, and considering the establishment of an industry-wide AI consortium to follow-up on the AI Public-Private Forum.
- Establishing a cross-organisation AI task force to ensure that progress using AI is made effectively, safely and responsibly. The three aims of the taskforce are to identify and pilot promising AI/ML use cases, to develop appropriate guiderails to ensure that risks from using AI are controlled, and to identify training needs to ensure AI/ML can be used effectively.
In early 2025, the BoE published its AI Strategy, setting out how the Bank intends to use, govern, and monitor AI technologies across its own operations and supervisory activities. This strategy aligns with the PRA’s 2025/26 Business Plan, which explicitly commits the regulator to remain “at the forefront of identifying and responding to opportunities and risks” arising from firms’ use of AI and ML. As part of this work, the PRA held two Chief Risk Officer roundtables in October 2025 with 21 PRA-regulated firms, focusing in particular on how firms are applying the expectations outlined in Supervisory Statement SS1/23, especially in relation to model governance, validation, explainability, and oversight of third-party or outsourced AI systems.
Financial Conduct Authority
The FCA continues to take a technology-neutral approach, focusing on identifying and mitigating risks to its statutory objectives rather than mandating or prohibiting the use of specific technologies. While its rules, regulations, and core principles are designed to be technology-agnostic, the FCA now provides more targeted guidance and engagement around AI to support safe and responsible adoption in UK financial markets.
In 2025, the FCA published an AI Update document outlining how it intends to facilitate the adoption of AI while balancing risks to consumers, markets, and the financial system. In particular, the FCA confirmed that it will not introduce AI-specific rules immediately, instead relying on existing regulatory frameworks such as the Consumer Duty and the Senior Managers & Certification Regime (SM&CR), which already provide governance, accountability, and consumer protection safeguards.
The FCA is also actively promoting innovation through its AI Live Testing service, first proposed in April 2025. Following feedback from its engagement paper, the FCA published a Feedback Statement in late 2025 confirming that the service aims to create a collaborative, real-world testing environment. The initiative allows firms to experiment with AI models in a controlled setting, helping them to develop innovative products while managing risks to consumers and markets.
Looking ahead, the FCA has indicated that AI providers themselves could be directly subject to UK financial services regulation in the future, signalling a willingness to adapt the regulatory perimeter as the technology evolves. These steps collectively illustrate the FCA’s strategy of fostering innovation and growth in UK financial services, while maintaining robust safeguards to protect consumers and market integrity.
Private Members' Bills
Two Private Members' Bills have also been introduced in the House of Lords to make provision for the regulation of artificial intelligence.
Firstly, the Public Authority Algorithmic and Automated Decision-Making Systems Bill (the PAA Bill), which was introduced in September 2024, aims to regulate the use of algorithmic and automated decision-making systems across the public sector. It seeks to make the use of such systems more transparent and fair, and to mitigate the risks arising from such technologies, including bias and discrimination.
The PAA Bill would require public authorities to complete an 'Algorithmic Impact Assessment' prior to the deployment of an algorithmic or automated decision-making system.
Such an impact assessment, as required by the Bill, must include:
- A detailed description of the algorithmic or automated decision-making system
- An assessment of the relative benefits and risks of the system including the risks to the privacy and security of personal information, risks to the safety of a service user or group of service users, and risks and likely impacts on employees of public authorities
- An explanation of the steps taken to minimise those risks
- Independent external scrutiny of the efficacy and accuracy of the system
More detail can be found in the House of Lords Library briefing on the Bill.
In addition, the Artificial Intelligence (Regulation) Bill (the AI Regulation Bill) was introduced into Parliament on 4 March 2025. A similar bill was put forward during the previous Parliamentary session but was not concluded due to the 2024 general election.
Per the parliamentary briefing notes on the previous version of this proposal, the AI Regulation Bill would establish a new body, the AI Authority, which would have various functions to help address artificial intelligence (AI) regulation in the UK.
This would include a requirement for the AI Authority to ensure relevant existing regulators were taking account of AI; to ensure alignment in approach between these regulators; and to undertake a gap analysis of regulatory responsibilities with respect to AI. The AI Authority would also have various other functions including monitoring economic risks arising from AI, conducting horizon-scanning of developing technologies, facilitating sandbox initiatives to allow the testing of new AI models, and accrediting AI auditors. In addition, the bill would introduce a set of regulatory principles governing the development and use of AI.
A second reading of the AI Regulation Bill has yet to be scheduled, and the PAA Bill remains under Parliamentary consideration. Neither Bill has passed into law.
It should be noted that Private Members' Bills are introduced by individual MPs or members of the Lords rather than by the Government. Very few Private Members' Bills become law but, by creating publicity around an issue, they may affect legislation indirectly. It is therefore important to fully consider the proposals set out in these Bills to understand issues of concern around AI.
As noted, the introduction of these Bills reflects certain concerns around the use of AI systems and the way in which they can be used in the public sector. Although they are unlikely to be passed, they create further discussion around the introduction of government legislation in this space.
Data (Use and Access) Act
On 23 October 2024, the UK Government published the Data (Use and Access) Bill, as anticipated in the King’s Speech. The Bill subsequently received Royal Assent on 19 June 2025 and is now in force as the Data (Use and Access) Act 2025 (DUAA).
While DUAA is not AI-specific legislation, it makes a number of important changes to the UK data protection framework that are directly relevant to the development and deployment of AI systems. In particular, it reshapes the UK approach to automated decision-making and clarifies the conditions under which data (including personal data) may be used for research purposes.
A key change introduced by DUAA is the reform of the UK GDPR’s approach to automated decision-making (ADM). DUAA repeals the original Article 22 UK GDPR and replaces it with a new set of provisions (Articles 22A–22D). These provisions move away from the blanket prohibition on certain forms of solely automated decision-making and introduce a more flexible, risk-based framework. In broad terms, the new regime permits ADM involving non-special category data, including decisions with legal or similarly significant effects, provided that appropriate safeguards are in place. These safeguards include measures such as meaningful human oversight, transparency, and the right for individuals to contest decisions. This change is particularly relevant to AI-driven decision-making in commercial and operational contexts, where automated tools are increasingly used to support or make significant decisions.
DUAA also makes targeted amendments to existing data protection law to clarify when personal data may be processed for scientific research or statistical purposes. In doing so, it broadens the definition of “scientific research” to explicitly include commercial scientific research, such as the use of personal data by businesses to develop new products, services, or insights. This clarification is likely to be significant for AI development, where large datasets are often used to train, test, and refine models in a commercial context.
In addition, DUAA confirms that consent may be obtained for an “area of scientific research”, even where it is not possible at the outset to fully specify all future processing purposes, provided certain conditions are met. These include that the processing falls within a recognised area of research, that the use of broad consent is consistent with generally recognised ethical standards, and that individuals are given a genuine opportunity to consent to only part of the research rather than being required to agree to all processing activities.
This approach is particularly relevant for many AI projects, especially in academic, health, social science and real-world data contexts, where the precise use cases or analytical outputs may evolve over time. By permitting broader forms of consent in these circumstances, DUAA offers organisations greater flexibility to collect and reuse personal data for AI-driven research and development without needing to seek fresh consent each time the purpose is refined, while still maintaining appropriate ethical and governance safeguards.
Large language models and generative AI: House of Lords Communications and Digital Committee report
On 21 November 2024, the House of Lords debated the House of Lords Communications and Digital Committee report Large language models and generative AI, which was published in February 2024. The previous Conservative Government responded to the report in May 2024.
The purpose of the inquiry was to "examine likely trajectories for Large Language Models (LLMs) over the next three years and the actions required to ensure the UK can respond to opportunities and risks in time". The Committee in this report set out 61 recommendations and directed particular attention to several key recommendations namely:
- Making market competition an explicit AI policy objective
- Avoid narrowly focussing on risks and safety and support innovation and development. In essence, avoid regulatory capture but regulate proportionately
- Introduce protections concerning discrimination, bias and data protection
- Develop accredited standards and common auditing methods, with a view to establishing proportionate regulation
The outcome from the debate indicated a general consensus leaning towards proactive and collaborative governance to ensure ethical, safe and equitable AI deployment.
The debate made clear that LLMs and AI can, and will, be hugely positive contributors to our lives and economy. However, there is still a need to balance the desire to grow the economy by harnessing the potential of AI with the risks of harming humanity. In essence, regulate proportionately.
More recently, the House of Lords Communications and Digital Committee has launched an inquiry into AI and copyright. It will explore:
- The practical steps that would enable creative rightsholders to reserve and enforce their rights meaningfully in relation to AI systems
- What levels of transparency and accountability can reasonably be expected from AI developers
- How licensing, attribution and labelling tools might support a viable marketplace for creative content. This ties in with the government's consultation on AI and copyright, the outcome of which has been long awaited
On 4 November 2025, the committee held its first oral evidence session with creative sector representatives. Describing the UK's copyright regime as a "gold standard", some argued that the UK needs regulation and enforcement, not changes to the copyright system.
Key themes included:
- UK copyright law is fit for purpose; the core problem is transparency and enforcement
- Strong opposition to the introduction of a text and data mining exception for commercial AI training
- AI developers should be subject to mandatory, auditable, detailed transparency obligations, including the establishment of a regulator-backed bot register
- Collective management organisations can scale to deal with both retrospective compensation and forward-looking licences, provided that there is appropriate transparency and access control
- Overseas scraping of UK content without consent should be copyright infringement in the UK
How have the UK courts dealt with liability for AI?
Given the lack of specific legislation in respect of AI, how have the UK courts approached the issue of liability for AI?
Given the increasing prevalence of AI in information technology solutions it is surprising that there are only a few published Court cases in England & Wales which directly deal with or raise AI issues, i.e. cases where AI is central to the issue the Court is asked to decide rather than merely an incidental feature of the object of the dispute.
To date the overall approach of the Courts in England & Wales has been to apply established legal concepts to AI solutions in the same way in which the issues would be resolved in the absence of AI, rather than to develop new approaches.
Court decisions
This is, for example, demonstrated by the widely reported decision of the Court of Appeal in Comptroller-General of Patents, Designs and Trade Marks v Emotional Perception AI Limited over the patentability of an AI system which deployed an artificial neural network to recommend data files. Ultimately the Court of Appeal concluded that AI technologies (and artificial neural networks in particular) fall under the definition of a "computer program" and that their patentability therefore needs to be assessed in the same way as for any other computer program. This in essence means that AI technologies must make a technical contribution in order to be patentable. However, the Supreme Court granted Emotional Perception AI Limited permission to appeal this decision, and we await the Supreme Court's judgment.
Indeed this traditional approach goes back some way. The case of Tyndaris v MMWWVWM Limited arose from the fact that in 2017 the investment fund Tyndaris started to use AI-based software to make investment decisions without human intervention, based on real-time analysis of content (including news and social media activity). The deployment of the software very quickly resulted in losses in excess of US$20m and a demand by Tyndaris' customer to suspend trading. While the resulting litigation settled, so that there was never any Court decision adjudicating on the issues in dispute, the way in which the case was put by the parties' lawyers largely followed traditional legal concepts: it was primarily framed as a claim for misrepresentation by Tyndaris as to its pre-contractual promises over the capabilities of the AI-based software solution, the nature of the testing carried out, the appropriateness of human intervention and so on.
That is not to say that the involvement of AI in the subject matter of a Court case cannot require Courts and lawyers to pay special attention to issues which arise from the inherently different way in which AI systems operate compared with traditional computer software, in particular the fact that AI-influenced outputs are not deterministic in the same way.
An example is the recent case of Getty Images (US) Inc & Others v Stability AI Limited in which Getty alleged infringement of their intellectual property rights as a result of Stability AI's alleged use of Getty's images for the purposes of training and development of an AI system (marketed as "Stable Diffusion") which automatically generates images based on text or image prompts.
In an interim decision by the High Court, the Court applied established legal principles over, for example, the territorial scope of the English Courts' copyright infringement jurisdiction and what constitutes an "infringing copy". However, it expressly recognised that their application to AI-based software raises issues which required determination at trial with the benefit of the full extent of the available evidence following disclosure, witness statements, cross-examination and so on, for example regarding the factually potentially complex question as to where the training of an AI system has taken place and therefore whether it engages the jurisdiction of the English courts.
The judgment was handed down on 4 November 2025 and concluded the following:
- The Court did not consider whether training an AI model on copyrighted images constitutes infringement, which was the golden ticket that we were holding out for. However, a key takeaway is that if training of AI systems occurs outside the United Kingdom, a UK court may not consider primary copyright infringement claims relating to that training.
- On Getty's secondary copyright claim, the Court held that Stable Diffusion is not an 'infringing copy' for the purposes of section 27 CDPA. It considered that an AI model contains statistically trained parameters, not stored copies or reconstructions of photographs. The Court also held that importation, possession or distribution of the model in the UK did not constitute secondary copyright infringement.
- On the trade mark infringement claim, the Court concluded that there were some 'limited' and 'historic' instances of trade mark infringement from earlier iterations of Stable Diffusion. However, the Court found that the infringement did not amount to detriment to distinctive character or reputation, or unfair advantage.
- The Court declined to address the passing off claims in light of the trade mark infringement decision.
Risks of the black box
In addition to legal liability arising from breaches of AI specific legislative or regulatory standards or principles as set out above, AI-based software also has the potential to provide fruitful ground for litigation in areas other than intellectual property rights (cf. the Emotional Perception AI or Getty Images cases cited above) and (pre)contractual or tortious liability (cf. the case of Tyndaris).
For example, the fact that most AI-based software solutions operate as "black box" systems (i.e. not even their developers or operators are able to explain how the AI system has come to a particular decision) does not only create evidential issues (e.g. when seeking to prove that an AI system meets contractually agreed or otherwise promised standards or includes appropriate safeguards); it also sits uneasily with many general regulatory standards (e.g. transparency requirements or accountability standards under the UK GDPR in relation to personal data). Indeed, inherent issues with the way in which AI-based software operates are capable of compounding themselves: for example, where the use of an AI-based system leads to discriminatory outputs, it will be difficult to disprove discrimination where the software operates as a "black box" system and it is not possible to trace individual decisions back to non-discriminatory training data.
The fact that AI systems provide non-static outputs and are usually designed to constantly adapt and change also raises issues over the level of testing required prior to deployment: while traditional deterministic software can be tested to a particular level of operation (say x%), this is not possible in the same way with AI-based software, as each further test has the potential to (and most likely will) influence the operation of the software itself.
Therefore the increased risk potential of AI software solutions arises not only from the way the systems are trained (e.g. the use of personal or discriminatory data) but also from how they operate (e.g. as "black box" systems) and the output they generate (e.g. discriminatory decisions or defamatory statements).
What next for AI in the UK courts?
In conclusion, new AI-specific legal principles will most likely originate from legislative acts by Parliament or regulators, while the Courts will continue to apply established legal principles to such disputes, despite the fact that in a case law system, such as that of England and Wales, Courts could theoretically make new law themselves. In many cases this will mean that the AI solution will be treated as an extension of its developer or operator, and its operation or output judged as if it were the output of the developer or operator itself. In other words, those deploying an AI solution will be held responsible for their choice to use an AI-based solution when performing a particular task, in the same way as if they had performed the same task themselves.
Up to a certain point of "intelligence" of AI this approach will work well, and ultimately reflects the fact that AI systems do not have separate legal personality from their developers or operators. Real questions over separate legal accountability are thereby (rightly) left to legislators. Until then, lawyers will have to grapple with potentially complex questions as to how traditional legal principles apply to AI-based solutions. However, such questions are unlikely to be answered by new legal principles specific to AI. Instead, the answers will primarily concern methods of evidencing compliance with contractual, regulatory, legislative and other standards in light of systems whose operation is complex and opaque, often involving the use of vast amounts of data without any direct way of demonstrating the relationship between the input into the system and its output. Since our last update, the position has not changed.
