ICO Sets Out Strategic Approach To AI

By Peter Given

Published 10 May 2024

Overview

The Information Commissioner's Office has published details of its strategic approach to artificial intelligence.

'Regulating AI: The ICO's strategic approach' is one of a number of strategic approaches recently published by key regulators in response to a request from the Secretary of State for Science, Innovation and Technology earlier this year. Our Data, Privacy and Cyber bulletin this month also covers the response from the Bank of England and Prudential Regulation Authority. The Government has collated the responses of the regulators contacted, which can be found here.

Risks and opportunities

As personal data is processed in the development and deployment of many AI systems, understanding the ICO's approach to AI is important to ensuring AI is adopted successfully in the UK.

The report emphasises that AI has the potential to transform society for the better, but there are a number of inherent risks and concerns in ongoing AI development and deployment. These risks and concerns, both novel and existing, relate to fairness and bias, transparency and explainability, safety and security, and accountability and redress, among other things.

Some risks derive from the specific context of AI deployment (such as healthcare, law enforcement and education), others from the development process, and some from the use of personal data in AI systems. The ICO highlights that it has the ability and the tools to intervene across the AI supply chain where data protection law applies. It acknowledges that many AI risks, such as those relating to national security and election integrity, sit outside data protection law, but it is working with the AI Safety Institute to address risks falling within its remit.

More widely, the ICO welcomes the approach of the government in building upon the strengths of existing regulators to tackle AI risks across their various sectors. The ICO is of the view that the risks relating to AI do not require "new, extensive, cross-cutting legislation, but appropriate resourcing of existing UK regulators and their empowerment to hold organisations to account."

The ICO and AI currently

The ICO suggests that the principles-based approach of data protection law, which provides a flexible framework enabling organisations to adapt to developments in AI, can be considered analogous to the principles proposed in the AI Regulation White Paper. The ICO therefore "has active experience of implementing the aims and objectives" of the White Paper principles.

The report reflects upon each of the five AI Principles and highlights how these map to the existing data protection principles. The report notes the ongoing progress of the Data Protection and Digital Information Bill, highlighting that any future approach to AI regulation will be affected by this new legislation, but that the ICO's role in regulating AI will not.

The ICO has been regulating AI for several years, with a landmark report on Big Data, Artificial Intelligence, Machine Learning and Data Protection first published in 2014. The ICO has since developed a suite of guidance and products to help organisations navigate the use of AI with reference to data protection law. General guidance is regularly updated, along with supplementary guidance on Automated Decision-Making and Profiling, and Explaining Decisions Made. Specific applications of AI, such as biometric recognition technologies, are also covered.

The report also refers to the ICO's award-winning AI and Data Protection Risk Toolkit, the Regulatory Sandbox and the Innovation Advice service. In addition, the ICO undertakes consensual audits to help organisations assess their processing of personal data using AI, and provides advice to improve their methods.

Regulatory action to promote compliance and safeguard individuals is a significant part of the ICO's remit. The report highlights the small number of regulatory decisions already taken in relation to the use of AI: the £7.5 million fine issued to Clearview AI (subject to ongoing legal proceedings), the enforcement notices issued to Serco Leisure and others relating to the use of facial recognition and fingerprint scanning for employee monitoring, and ongoing action relating to the use of generative AI chatbots on the social media platform Snapchat.

Developments and partnerships

The ICO has already undertaken a number of consultations on generative AI and data protection, the third of which is commented on in this month's bulletin [insert link].

AI is a focus of the ICO in 2024/25, and the consultation series is identified as one of a number of upcoming AI-related projects, including a consultation on biometric classification and updated guidance on AI and data protection. The Regulatory Sandbox will be used to support a number of AI-related projects, and assistance will also be provided via the ICO's Innovation Hub projects in partnership with other organisations.

The report makes clear that the ICO will continue to actively scrutinise AI to safeguard users and the public from harm. The communication of regulatory outcomes is highlighted as a necessary tool to drive improvements.

Collaboration with other regulators is also identified by the ICO as a key element of its strategic approach, whether via direct bilateral contact or groups such as the Regulators and AI Working Group. This month, we have also written on the Digital Regulation Cooperation Forum (DRCF) Workplan for 2024/25 [insert link]. As a founding member of the DRCF, the ICO highlights many of the key initiatives being undertaken by this cross-sector group of regulators.

Work will also continue with government, standards bodies and international partners. Continued engagement on the AI Regulation White Paper is identified as a key priority for the ICO.

Other points to note

Thematically, and perhaps unsurprisingly, the ICO's strategic approach to AI regulation indicates a particular focus on children (recognising that children are a vulnerable group and more exposed to the risks of AI) and biometric technology (including biometric recognition and behaviour classification). The three enforcement actions noted in the ICO's report align with these focus areas.

The ICO also notes in the report the importance of AI to its own operations. Currently, the ICO uses AI to support a customer service chatbot and an email triage tool, and it is developing an AI tool to identify websites using non-compliant cookie banners. The ICO anticipates that in future nearly all data protection roles within the organisation will involve AI to some extent.
