
An update on the UK's approach to AI regulation


By Jade Kowalski & Amanda Mackenzie


Published 08 March 2024

Overview

February has seen several further developments in the UK's AI regulatory landscape. On 6 February the Department for Science, Innovation and Technology (DSIT) published the Government's consultation response to its March 2023 White Paper, "A pro-innovation approach to AI regulation" (the Response). The Response, together with DSIT's "Initial Guidance for Regulators", also published in February, provides further detail and clarity on the UK Government's approach to AI regulation.


Key takeaways

  1. The UK Government remains committed to its pro-innovation, context-based framework, with the five core Principles[1] interpreted by existing regulators.
  2. Significant Government investment of £100 million to support AI innovation and regulation, including £10 million to help bolster the AI skillsets and capabilities of regulators.
  3. A central function is being established to drive coherence across Government and the regulators, and will produce an AI risk register.
  4. The AI and Digital Hub has been launched by the Digital Regulation Cooperation Forum (DRCF). This is a pilot scheme for a new advisory service, run by expert regulators, to support innovation.
  5. Nine new research hubs are planned across the UK to harness the power of AI in everything from healthcare to chemical discovery and mathematics.
  6. Key regulators are required to publish plans by the end of April setting out how they are responding to the risks and opportunities presented by AI.
  7. New guidance to support regulators to implement the Principles effectively has been published by the Government.
  8. Future targeted, binding requirements for the most advanced general-purpose AI systems are being considered to ensure that powerful, sophisticated AI develops safely.
  9. International cooperation remains critical. The work started at the AI Safety Summit (including the establishment of the AI Safety Institute) demonstrates the UK's key role in driving a coherent and collaborative approach to international governance.

The following analysis dives further into a few of the more interesting takeaways.


AI-specific regulation in the future? But not yet.

In the last article in our AI Explainer Series we highlighted a difference of opinion between the House of Commons and the House of Lords on UK AI regulation, with the latter putting forward arguments (including via a Private Member's Bill) for the five Principles contained in the White Paper to be codified in law. Further, a recent report from the House of Lords Communications and Digital Committee [2] urged caution against getting caught up in the more improbable risks and highlighted issues including disinformation, cyberattacks and copyright infringement by large language model developers as areas where immediate action should be taken, in particular through mandatory safety tests for high-risk models.

Despite the House of Lords' position, the Response confirms that the Government is committed to its pro-innovation approach. It suggests that the "overall approach" of combining cross-sectoral principles and a context-specific framework, international leadership and collaboration, and voluntary measures on developers is right for today and allows the Government to keep pace with rapid and uncertain advances in AI.

However, it is acknowledged that "the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured". The Government is clear that acting before risks are properly understood could harm the ability to benefit from technological progress and could leave the framework unable to adapt quickly to emerging risks. The Response concludes: "We will legislate when we are confident that is the right thing to do".


Potential for binding requirements for only the most capable AI systems.

While continuing with its pro-innovation approach, the Government is considering specific binding requirements for ‘highly capable general-purpose AI systems’. Although voluntary agreements on AI safety were concluded with key countries and companies at the AI Safety Summit, the Government recognises that these voluntary measures may be "deemed incommensurate to the risk and countries will want some binding measures to keep the public safe". The Government will continue to discuss these issues at an international level.

The Response indicates that these systems currently include only the most cutting-edge foundation models that underpin consumer-facing applications. It considers that developers of these highly capable systems currently face the least clear legal responsibilities, meaning that some of the risks arising from such systems might not be effectively addressed by existing regulation: these systems have ‘the least coverage by existing regulation while presenting some of the greatest potential risk’.

The Government considers this 'gap' exists mainly because existing regulation is better suited to addressing risks at the point of deployment (i.e. when a system is made available to users), and may fail to adequately address risks at the foundation model development stage, where potentially the greater risk resides.

It therefore appears that any binding requirements would be designed to ‘fill the gap’ in existing regulation. The Response is clear, however, that such requirements would follow the Principles and maintain the pro-innovation approach. The Government is expected to publish an update on highly capable general-purpose AI systems by the end of 2024.


The new central function and the AI risk register

The establishment of a new Government central function is already underway. Its remit is to monitor and assess risks presented by AI and to support regulatory coordination. The central function's multidisciplinary team will launch a targeted consultation on a cross-sector AI risk register to ensure it comprehensively captures the emerging risks of AI. The AI risk register is intended to provide a "single source of truth" for use by regulators, government departments and other external groups. It is also intended to help the Government identify risks that fall across or between regulators’ remits, so that gaps can be identified and further action prioritised as required.


Regulatory guidance

Although a number of regulators have already followed directions contained in the White Paper to take action on the impact of AI within their remit (for example, the Competition and Markets Authority's (CMA) review of foundation models and the Information Commissioner's Office's (ICO) updated guidance on data protection and AI), a number have yet to do so. The Government has written to several regulators, including the Office of Communications (Ofcom), the Financial Conduct Authority (FCA) and the Medicines and Healthcare products Regulatory Agency (MHRA), requesting that they publish updates outlining their strategic approach to AI by 30 April 2024.

The Government has also published its first phase of guidance for regulators [3]. It sets out what regulators should consider when developing tools and guidance, though it is not intended to be prescriptive: the Principles are voluntary, and how they are implemented is within the regulators' discretion. DSIT intends to take a three-stage approach to issuing its guidance going forward:

  • Phase 1 (this initial guidance): is designed to support regulators in starting to develop the tools, guidance and regulatory activities that will support AI developers and deployers within their regulatory remit.
  • Phase 2: will iterate and expand the initial guidance following feedback from the regulators. This will be issued by summer 2024. It will also include details of resources offered by the central function.
  • Phase 3: will involve collaborative working with regulators to identify what additional resources are required and to encourage the production of multi-regulator guidance.


What will happen next?

The Response demonstrates the Government's commitment to its original approach, seeking to balance innovation with safety. This clarity should be welcomed by industry, as it allows businesses to adopt new AI-powered products and services with greater confidence.

However, potential political change on the horizon could introduce further uncertainty into the UK's proposed AI regulatory framework. The Labour Party has already indicated that it will expect to see greater transparency from AI tech companies, and Peter Kyle, the shadow technology secretary, has confirmed his party's intention to move from a voluntary code to a statutory code if elected. Watch this space: the UK may yet enact a UK AI Act.


[1] (i) Safety, security and robustness; (ii) Appropriate transparency and explainability; (iii) Fairness; (iv) Accountability and governance; (v) Contestability and redress.

[2] House of Lords Communications and Digital Committee Report, 2 February 2024

[3] Implementing the UK's AI Regulatory Principles: Initial Guidance for Regulators
