The ICO launches an AI and data protection risk toolkit

By Charlotte Halford | Published 12 May 2022

Overview

Last week, the Information Commissioner’s Office (“ICO”) launched v1.0 of its AI and data protection risk toolkit (the “Toolkit”). The Toolkit forms part of the ICO’s wider Guidance on AI and data protection. V1.0 of the Toolkit follows the launches of its alpha and beta versions in March and July 2021 respectively.

What is the Toolkit?

The ICO describes the Toolkit as a risk assessment tool containing risk statements to help organisations using AI (artificial intelligence) to process personal data understand the risks that such processing poses to individuals’ information rights.

Speaking at the “AI and Data Protection Risk Toolkit Launch” hosted by TechUK on 3 May 2022, Stephen Bonner, the ICO’s Executive Director for Regulatory Futures and Innovation, explained that the Toolkit is intended to be a practical product for organisations, designed to give them confidence that, by following the Toolkit, they can have a high level of assurance that their use of AI complies with data protection laws. Stephen Bonner made it clear that the Toolkit is not intended to slow organisations down, but to speed them up. In creating the Toolkit, he said in his speech, the ICO considered three “core pillars” – principles, practicalities and prognostication.

With regard to principles, he referred to the seven data protection principles contained in the UK GDPR (the UK’s retained version of the EU’s General Data Protection Regulation): lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality (security); and accountability. These principles go to the very core of UK data protection legislation, and the ICO suggests that organisations should also follow them in the good development and use of AI.

The ICO then considered how to turn these principles into practicalities for the Toolkit. A key point highlighted by Stephen Bonner in his speech was the accountability principle – organisations need to understand who is accountable when using AI to process personal data, and key stakeholders need to understand enough about the software and applications to be able to explain them and the decisions made using AI. The Toolkit is designed to aid organisations with this.

In order to develop the Toolkit further (as part of the prognostication pillar), the ICO is working with other UK regulators (the Competition and Markets Authority, the Financial Conduct Authority and Ofcom) to gain views and share priorities on practicalities in regulated sectors. The ICO is also seeking case studies of how organisations use the Toolkit.

Using the Toolkit

Also speaking at the TechUK webinar, the ICO’s Senior Policy Officer Alister Pearson and Principal Technology Advisor – AI Ahmed Razek highlighted that the Toolkit focusses on the risks to individuals’ rights and freedoms, connecting them to the relevant provisions of UK data protection laws. This makes it easier for organisations to comply with their data protection obligations.

The Toolkit also provides best practice guidance on technical and organisational measures that can be used to mitigate and manage identified risks, making organisations’ compliance with data protection laws more likely. Using the Toolkit is not a legal requirement; however, the ICO points out that it can be a very useful part of organisations’ data protection impact assessment processes. Ahmed Razek also emphasised that, due to the nature of AI, its lifecycle is not linear – so the Toolkit should be used within the different phases of the AI software’s or application’s lifecycle.

In terms of practical guidance, the Toolkit is divided into 10 risk areas that align with the UK GDPR principles. An example drawn out by Alister Pearson was ensuring that a “meaningful human review” of the AI’s decision-making takes place, so that no prohibited processing occurs and no unfair decisions are made about individuals.

The Toolkit lists three categories of practical steps for organisations to take in order to reduce, mitigate or manage the risks to individuals’ rights and freedoms arising from processing personal data using AI:

  1. “Must”: it is a legal requirement to comply with these steps;
  2. “Should”: it is best practice to follow these steps; and
  3. “Could”: it is optional good practice to follow these steps.

The Toolkit also links to relevant ICO guidance associated with each data protection risk statement, clarifying particularly the “musts” and “shoulds” set out in the Toolkit.

The ICO envisages the Toolkit being used mainly by organisations’ risk and governance teams, model development teams and senior leadership. It is particularly important for the last category of users, given that senior leadership will need to be aware of the risks their organisations must navigate and be comfortable with them.

On 24 May, the ICO is hosting a free webinar to provide guidance on how organisations can use the Toolkit in their day-to-day work and to offer advice on managing AI systems. You can sign up to the webinar through this link.

Future of the Toolkit

The main benefits of the Toolkit, in the ICO’s view, are threefold: it enables organisations to have greater confidence that they are complying with data protection law; it helps organisations to understand the risks that AI poses to individuals’ rights; and it provides clarity for organisations about the laws that apply to them, allowing them to innovate responsibly.

It remains to be seen how organisations will use the Toolkit, and whether the ICO’s envisaged benefits apply in practice. In order to test this, the ICO is actively looking for organisations to provide case studies of how they use the Toolkit and what they have gained or learned from using it. With regard to future developments and iterations of the Toolkit, the ICO plans to focus on these case studies in order to test the Toolkit and to identify areas for future development. However, Alister Pearson and Ahmed Razek mentioned that the ICO would look to add more functionality to the Toolkit as and when a need for it is identified, for example through the case studies.

Further, the Toolkit focusses mainly on AI based on machine learning (the study of computer algorithms that can improve automatically through experience and by the use of data); at present, the ICO does not have specific guidance on AI that does not use machine learning. During the TechUK webinar this was also identified as a point for future development.

From our review of the Toolkit, it can be a useful, practical tool, empowering organisations to self-assess and ensure compliance with data protection laws throughout the AI lifecycle. The main challenge for the Toolkit will be keeping up with developments in the field of AI, and ensuring it remains broad enough to cover a wide range of sectors, technologies and types of organisations.

We eagerly await seeing how organisations adopt the Toolkit into their processes, and we will follow up with updates on the Toolkit in due course.