9 min read

Data, privacy and cyber in March 2026: In case you missed it


By Hans Allnutt, Jade Kowalski, Peter Given & Justin Tivey | Published 20 April 2026

Overview

Our 'In Case You Missed It' section of the Data, Privacy and Cyber Bulletin provides readers with a high-level digest of important regulatory and legal developments from March 2026.

 

Contents

  1. Case law updates
  2. Regulatory developments
  3. Data & privacy developments
  4. Cyber developments

 

Case law updates

Brillen Rottler GmbH & Co. KG v TC Case C-526/24

Our article, linked here, considers the implications of this decision. In summary, the individual involved had subscribed to the online newsletter of a family-owned business, providing his personal data to do so. Less than two weeks later, the individual submitted an access request pursuant to the GDPR. The business refused the request, stating that the individual's history of similar requests suggested it was abusive. The individual sought damages.

On referral, the Court of Justice of the European Union ruled that an individual's subject access request for their own personal data may be considered abusive, and refused, if its sole purpose is to ground a subsequent claim for compensation under the GDPR.

The CJEU decision can be found here in full, as well as the Court's press summary.

 

Regulatory developments

Digital Omnibus on AI: European Council and Parliament agree respective negotiating positions

The European Council and Parliament have both confirmed they have agreed their respective negotiating positions in respect of those elements of the Digital Omnibus proposals advanced by the Commission which address artificial intelligence. Our December 2025 briefing sets out the details of Digital Omnibus on AI.

As noted within one of the briefings produced by the European Parliament, there is significant time-related pressure for the EU co-legislators, given that the Digital Omnibus on AI seeks to amend the date of application of certain obligations under the AI Act, such as the date of application of high-risk AI rules (currently to take place on 2 August 2026).

The European Council confirmed it had agreed its negotiating position including the following amendments:

  • Prohibition of AI practices regarding the generation of non-consensual intimate content or abuse material involving children
  • Fixed timelines for the delayed application of high-risk rules: 2 December 2027 for standalone high-risk AI systems, and 2 August 2028 for high-risk AI systems embedded in products
  • Reinstatement of the obligation for providers to register AI systems in the EU database for high-risk systems, where they consider their systems to be exempted from classification as high-risk
  • Reinstatement of the standard of strict necessity for the processing of special categories of personal data for the purpose of ensuring bias detection and correction

The European Parliament also confirmed its agreed position, harmonising with the Council's proposals on the fixed timelines for the delayed application of high-risk rules, and also proposing to ban apps or systems that create or generate non-consensual intimate content. Among other proposals, MEPs also seek to allow service providers to process personal data to detect and correct biases in AI systems, but with the introduction of safeguards to ensure this is done only when strictly necessary.

Following the agreements within both the Council and the Parliament, trilogue negotiations with the European Commission can now commence.

 

ICO fines Police Scotland for mishandling of data

The ICO issued a £66,000 fine and reprimand to Police Scotland for serious failures when handling sensitive personal information. Police Scotland extracted the entire contents of a person's mobile phone following the report of a crime. This extraction resulted in the collection of a substantial amount of highly sensitive information unrelated to the investigation, which was then shared with a third party who should not have received it.

The ICO concluded that Police Scotland failed, among other things, to implement appropriate organisational and technical measures to ensure data security and limit personal information to what was strictly necessary. Linked are the full ICO press release, and reprimand and penalty notice.

 

Ofcom fines 4chan over £500,000 for breaches of the Online Safety Act

Ofcom, the UK online safety watchdog, fined the website 4chan in excess of £500,000 for failing to prevent children from accessing adult material on its site, failing to assess the risks of illegal material appearing on the platform, and failing to set out in its terms of service how users would be protected from criminal content.

Ofcom's announcement can be seen here.

 

Luxembourg: Amazon EUR746 million GDPR fine quashed

The Administrative Court in Luxembourg overturned a EUR746 million fine issued to Amazon in 2021 by the National Data Protection Commission (CNPD) following breaches of the GDPR.

The findings of the CNPD relating to Amazon's non-compliance and infringements around GDPR and targeted advertising were upheld by the Administrative Court. However, it held that the regulator did not assess whether Amazon's infringements were negligent or intentional, and thus whether a fine was the appropriate response.

The CNPD issued a statement in response to the decision, noting the annulment of the fine, but highlighting that its action "has led to Amazon’s practices being brought into full compliance with the relevant provisions of the case regarding online behavioural advertising."

 

European Commission publishes second draft 'Code of Practice on Marking and Labelling of AI-generated content'

The European Commission published the second draft of a code of practice to assist providers and deployers in meeting the marking and labelling requirements for AI-generated content under Article 50 of the AI Act.

Split into two sections for providers and deployers respectively, the second draft provided greater flexibility and reduced the compliance burden. Following the closure of the window for feedback on 30 March 2026, it is expected that the code will be finalised by the start of June 2026, in anticipation of 2 August 2026, when the rules covering the transparency of AI-generated content will become applicable.

The second draft of the code of practice can be accessed via the above link.

 

Legislative proposals for regulation of AI in the United States

Proposals for a national AI framework in the United States have been introduced into the Senate by Senator Marsha Blackburn. The draft framework, The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry (TRUMP AMERICA AI Act), aims to protect the “4 Cs” (children, creators, conservatives, and communities).

This proposal was followed shortly after by the unveiling of a legislative framework by the White House to help inform Congress of the administration's objectives around AI. The framework largely aligns with the key elements of the TRUMP AMERICA AI Act, focusing on six key objectives including the removal of barriers to innovation, the protection of children and the development of an AI-ready workforce.

 

Data & privacy developments

ICO launches consultation on draft automated decision-making guidance

The ICO has commenced a consultation to update existing guidance relating to automated decision-making (ADM) including profiling during businesses' recruitment processes. The update is required following the introduction of the Data (Use and Access) Act.

Launching the consultation, the ICO noted that the DUAA makes the use of ADM more straightforward, albeit requiring proper safeguards to protect the data protection rights of any jobseekers. The ICO also published its report 'Recruitment rewired' setting out findings following ICO scrutiny of the use of ADM by both major employers and recruitment platforms. Initial findings suggest that employers need to improve transparency measures to inform candidates of the use of ADM, and also ensure the application of meaningful human involvement to all candidates to ensure fair treatment and compliance. The ICO also set out a brief summary of jobseekers' rights in this area.

 

ICO part of joint taskforce considering handling of motor finance claims

The ICO has confirmed it will be part of a joint taskforce with the Financial Conduct Authority (FCA) and other regulators to tackle poor handling of motor finance claims. The FCA announcement, which can be found here, comes in anticipation of the final compensation scheme for motor finance customers.

A statement from the Head of Investigations at the ICO highlighted the existing risks that the final compensation scheme would likely exacerbate, namely unlawful practices relating to unsolicited direct marketing without consent. Emphasising the financial implications of such practices, the ICO also announced the imposition of a £100,000 fine on an alarm company for making over 260,000 unsolicited marketing calls over a seven-month period.

 

ICO publishes materials on age assurance measures

The ICO published an open letter to a number of social media and video-sharing platforms calling for their age assurance measures to be strengthened. The ICO noted that self-declaration on a number of social media platforms can be easily circumvented, allowing underage children to access services not intended or designed for them.

The letter encourages the use of viable age assurance technologies such as facial age estimation, digital ID, or one-time photo matching. The ICO will monitor developments, has started to engage with the highest risk services, and will consider whether further regulatory action is necessary.

The ICO also issued a joint statement with Ofcom in March regarding key areas of interaction between online safety and data protection relating to age assurance. The joint statement, accessible in full here, summarised the key aspects of both ICO and Ofcom policy. The document provides practical examples for services likely to be accessed by children, and in scope of both data protection legislation and the Online Safety Act, such as social media services.

 

European Commission announces proceedings against Snapchat under Digital Services Act

The European Commission has announced it has opened formal proceedings against Snapchat pursuant to the Digital Services Act. The investigation will focus on five specific areas: age assurance measures; the alleged recruitment of minors for criminal activities; inadequate default account settings; the dissemination of information relating to prohibited products; and the reporting of illegal content.

The Commission will now gather further information, including by issuing requests for information and conducting interviews and inspections.

 

EU Commissioner provides update on data protection and digital governance plans

Michael McGrath, the EU Commissioner for Democracy, Justice, the Rule of Law and Consumer Protection, provided an update on EU data protection and digital governance plans at the Forum Europe 15th Annual Data Protection and Privacy Conference.

The Commissioner noted the current position of the Digital Omnibus proposals, emphasising that the proposed amendments are targeted to "address fragmentation, enhance legal clarity, and simplify obligations for data controllers." The Commissioner also commented on the upcoming Digital Fairness Act, which will complement existing EU legislation and focus on a variety of issues including "dark patterns, addictive design features, unfair personalisation practices, problematic influencer marketing, and problems with digital contracts and subscriptions."

The full text of the speech can be found here.

 

EDPB publishes case digest on 'legitimate interest'

The European Data Protection Board (EDPB) published a one-stop-shop case digest on the legal basis of legitimate interest. The case digest presents the most important decisions within that theme, including examples of how the Data Protection Authorities (DPAs) of Member States analyse controllers’ reliance on the legal basis of legitimate interest in specific contexts, providing positive and negative compliance examples.

The case digest also summarises how DPAs apply the three-step test to assess reliance on legitimate interest, and how EDPB Guidelines are applied in practice. The case digest can be found here.

 

Cyber developments

EDPB and EDPS issue joint opinion on cybersecurity proposals

The EDPB and the European Data Protection Supervisor (EDPS) published Joint Opinion 4/2026 on the Proposal for a Cybersecurity Act 2 and the Proposal on amendments to the NIS 2 Directive.

The European Commission published proposals in January 2026 to replace the current Cybersecurity Act, and also introduce amendments to the NIS 2 Directive to make compliance easier for regulated entities. Our article, located here, discusses these proposals and the EDPB and EDPS Joint Opinion in detail.

 

NCSC issues warning in response to Middle East conflict

The National Cyber Security Centre (NCSC) issued an advisory to UK organisations, indicating a heightened risk of indirect cyber threats, resulting from the ongoing conflict in the Middle East, for those organisations and entities with a presence or supply chains in the region.

Although the level of cyber threat from Iran specific to the UK has not changed, the NCSC directed organisations to previously issued advisories on DDoS attacks, phishing activity and ICS targeting, all of which may be collateral impacts from Iran-linked hacktivists. The NCSC advisory can be found here.

 

NCSC warns of messaging app risks

The NCSC has warned that high-risk individuals with potential access to sensitive information and high-profile individuals are being targeted by malicious actors using messaging apps such as WhatsApp and Signal. These attackers are using a variety of methods including impersonation, phishing using malicious links or QR codes, and the attempted sharing of login or recovery codes.

The NCSC alert can be found here, setting out a number of steps that individuals can take to reduce risks.

 

FCA sets out new incident and third party rules to bolster resilience

The FCA has set out new rules to enable it to respond to disruptive incidents such as cyber attacks. PS26/2 sets out the requirements for reporting operational incidents and third party arrangements, and will come into force on 18 March 2027.

The rules and associated Finalised Guidance provide examples of what firms should report, help with applying the thresholds, and guidance on completing forms and registers. The FCA press release accompanying the new rules noted that "over 40% of cyber incidents reported to us [in 2025] involved a third party and [the FCA has] seen several recent high-profile incidents impacting the financial services sector including the Cloudflare and AWS outage."
