On 4 February 2025, two days after the prohibitions on certain AI practices under EU Regulation 2024/1689 (the "EU AI Act") took effect, the European Commission published guidelines to assist in the interpretation of the relevant provisions (the "Guidelines"). The Guidelines have provided much-needed clarity for individuals and organisations on the scope of what is prohibited under the EU AI Act.
This article reflects on the Guidelines and provides examples of how the prohibitions are likely to work in practice.
Prohibited AI practices
Article 5 of the EU AI Act prohibits the placing on the market, putting into service or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental EU rights and values.
The prohibitions contained within Article 5 cover almost every organisation involved in the AI supply chain, including "providers", "importers", "distributors", "deployers" and "product manufacturers" of AI systems. This article will focus on "providers" and "deployers" only.
Article 3 defines "providers" as organisations "that develop AI systems or have them developed and place them on the Union market, or put them into service under their own name or trademark", whilst "deployers" are organisations "using AI systems under their authority, unless the use is for a personal non-professional activity." Even providers and deployers based outside the EU can be caught by the EU AI Act, where the output of the AI system is used within the EU.
Article 5 provides that the following AI practices are prohibited:
- Harmful manipulation or deception
- Harmful exploitation of vulnerabilities
- Social scoring
- Predictive policing
- Facial recognition databases
- Emotion recognition in workplaces or educational settings
- Biometric categorisation, and
- Real-time remote biometric identification for law enforcement
An AI audit ought to be carried out if any AI practices being used by an organisation could potentially fall within one of the above headings, with a particular emphasis on the system or practice's design, functionality and its interaction with the end user. Given the way in which the EU AI Act is drafted, a scenario could arise where a business is using two similar systems, one of which is prohibited for the purposes of Article 5, and one of which constitutes a "high risk" practice under Article 6. Therefore, all organisations should carefully review and assess their AI practices on a case-by-case basis, having regard to the Guidelines.
In this article we will consider in more detail the practices that constitute harmful manipulation, deception or exploitation of vulnerabilities, and social scoring.
Harmful manipulation, deception or exploitation of vulnerabilities and social scoring
Harmful manipulation and deception
Article 5(1)(a) of the EU AI Act prohibits the production or use of AI systems that deploy subliminal techniques (i.e. techniques operating beyond a person's consciousness) or purposefully manipulative or deceptive techniques with the objective, or the effect, of materially distorting the behaviour of a person by impairing their ability to make an informed decision, resulting in, or being reasonably likely to result in, significant harm.
The Act does not define "subliminal", "purposefully manipulative" or "deceptive" techniques; however, the Guidelines do provide helpful, practical examples of the types of AI systems that may fall within scope, including the following:
- An AI system that briefly flashes images or text during video playback which are technically visible but displayed too quickly for the conscious mind to register, while still being capable of influencing attitudes or behaviours (this could include TV or digital adverts), and
- An AI system which deploys sounds or verbal messages at low volumes or masked by other sounds, influencing the listener without conscious awareness. These sounds are still technically within the range of hearing but are not consciously noticed by the listener due to their subtlety or masking by other audio (again, this could include personalised digital advertising)
It is apparent from Article 5(1)(a) that it is not necessary for the provider or deployer to intend to distort the behaviour of an individual, provided that use of the AI system ultimately causes this result. The Guidelines confirm that the main types of "harm" relevant for the purposes of Article 5(1)(a) include physical, psychological, financial and economic harms, which may be compounded by broader societal harms in certain cases.
Harmful exploitation of vulnerabilities
Article 5(1)(b) of the Act prohibits the production or use of AI systems that exploit any of the vulnerabilities of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes, or is reasonably likely to cause, that person or another person significant harm.
Examples of prohibited AI systems from the Guidelines that would fall within the above definition include:
- An AI system that exploits the reduced cognitive capacity of older people by targeting them with expensive medical treatments, unnecessary insurance policies or deceptive investment schemes, which may lead to significant loss of savings, increased debt and emotional distress for older people
- AI-enabled differential pricing practices in key services such as insurance that exploit the specific socio-economic situation of lower-income consumers by charging them higher prices, creating a significant financial burden as they pay more for the same coverage and leaving them vulnerable to shocks, and
- An AI-based predictive algorithm that serves targeted advertisements for predatory financial products to people who live in low-income postcodes, thus exploiting their susceptibility to such advertisements because of possible despair and causing them significant financial harm
As with the "manipulative/deceptive" practices definition, intent is not strictly necessary, as long as usage of the system ultimately results in the distortion of behaviour referred to.
The Guidelines confirm that the definition of "harm" mirrors that of the Article 5(1)(a) prohibition (discussed above). However, for vulnerable groups the harm may be particularly severe and multifaceted due to their heightened susceptibility to exploitation, and what may be an acceptable risk of harm for adults will often represent an unacceptable harm for children and other vulnerable groups. In those circumstances, it is strongly recommended that organisations take a precautionary approach where there is potential for significant harm to vulnerable groups.
Social scoring
Finally, Article 5(1)(c) of the Act prohibits the production or use of AI systems that assess or classify individuals or groups over a certain period of time based on their social behaviour, or known, inferred or predicted personal characteristics which lead to, or are capable of leading to, detrimental or unfavourable treatment, in particular where the data comes from multiple unrelated social contexts or the treatment is disproportionate to the gravity of the social behaviour.
The Guidelines provide a number of examples of prohibited social scoring AI systems or practices, including the following:
- An insurance company collects spending and other financial information from a bank which is unrelated to the determination of eligibility of candidates for life insurance and which is used to determine the price of the premium to be paid for such insurance. An AI system analyses this information and recommends, on that basis, whether to refuse a contract or set higher life insurance premiums for a particular individual or a group of customers; and
- A private credit agency uses an AI system to determine the creditworthiness of people and decide whether an individual should obtain a loan for housing based on unrelated personal characteristics.
The Guidelines provide that the "detrimental or unfavourable treatment" may be caused by the AI system itself, or in combination with other human assessments. In addition, the treatment does not need to be caused by the entity or individual using the scoring system and can be produced by a third party. By way of example, the Guidelines advise that Article 5(1)(c) could capture a scenario where a public authority obtains a score for a person’s creditworthiness from a third party company specialised in creditworthiness, based on information about the individual from a variety of sources, which ultimately results in an unfavourable decision by the public body against that individual.
Ultimately, the Guidelines are clear that the prohibition under Article 5(1)(c) should not affect lawful evaluation practices carried out for a specific purpose, for example credit scoring, risk scoring and underwriting, which they acknowledge are essential aspects of the services of financial and insurance businesses. The Guidelines go on to state that these practices are not per se prohibited, provided they are carried out in accordance with the Act and any other relevant legislation (for example, the rules on profiling under the GDPR), and provided that any detrimental or unfavourable treatment resulting from the use of scoring systems is justified and proportionate to the gravity of the social behaviour.
Steps to compliance
Failure to comply with the provisions of Article 5 of the EU AI Act could result in serious consequences, including substantial fines of up to €35 million or 7% of global annual turnover, whichever is greater (for more detail on enforcement and penalties, click here). Given the scale of these penalties, it is imperative that organisations take immediate steps to put robust policies and procedures in place to ensure compliance with the legislation, and in particular Article 5, on an ongoing basis.
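By way of illustration, the short Python sketch below shows how the "whichever is greater" cap operates in practice. The figures for the fixed cap and the turnover percentage are taken from the Act as described above; the function name and the example turnover figure are purely hypothetical and for illustration only.

```python
# Illustrative sketch only: the EUR 35 million cap and 7% rate reflect the
# Article 5 penalty regime described above; the turnover figure is hypothetical.

FIXED_CAP_EUR = 35_000_000   # EUR 35 million
TURNOVER_RATE = 0.07         # 7% of total worldwide annual turnover


def max_article_5_fine(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for an Article 5 breach:
    the greater of the fixed cap and 7% of worldwide annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)


# Example: a business with EUR 2 billion in worldwide annual turnover
# faces a maximum fine of EUR 140 million, since 7% of turnover
# exceeds the EUR 35 million fixed cap.
print(f"EUR {max_article_5_fine(2_000_000_000):,.0f}")
```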
If they have not done so already, organisations should immediately take the following steps:
- Conduct an AI audit to identify which AI systems are, or could be, used in the business
- Carefully assess, in line with the Guidelines, whether any of the systems identified could potentially fall within the prohibitions set out in Article 5. In particular, organisations should scrutinise any AI-assisted advertising practices, any practices targeting vulnerable groups, and/or any profiling or risk scoring processes in light of the commentary above
- Immediately halt the use of any AI systems or practices which do fall within the prohibitions under Article 5. Organisations will also need to consider their rights to terminate contracts with third-party suppliers for AI solutions, or the prohibited elements of them
- If organisations determine that certain AI systems used by the business are "high risk", rather than "prohibited", or would otherwise fall within the remit of the Act, they should ensure they are in compliance with all of their obligations in respect of these systems also. By way of example, if an organisation is a provider of a high risk AI system, it would be required to establish, implement, document and maintain a risk management system throughout the lifecycle of that system to effectively identify and mitigate risks, both internally and externally. It is worth noting that the EU Commission is currently considering "stopping the clock" on the implementation of the provisions on "high risk" AI practices whilst it considers the simplification of the regime. This assessment is currently ongoing and, undoubtedly, commentators will be watching this space with interest. For more on the EU AI Act, and AI generally, see our "AI Explainer" available here and our AI microsite, available here.
- Implement robust AI governance and compliance programmes to ensure the business remains compliant with the provisions of the Act going forward and to avoid any potential future use of prohibited AI systems
For a complete copy of the Guidelines on prohibited AI practices click here.