Earlier this month, the European Commission published its Guidelines on prohibited Artificial Intelligence (AI) practices, in compliance with article 96 (1), paragraph b), of the AI Act and in response to the need to “increase legal clarity and to provide insights into the Commission’s interpretation of the prohibitions in Article 5 AI Act with a view to ensuring their consistent, effective and uniform application”. These guidelines, however, arrive with a slight delay, since the prohibitions in that article have been directly applicable since February 2nd (article 113, paragraph a) of the AI Act). Once approved, they may be updated whenever necessary (article 96 (2) of the Act).
As a quick reminder, the prohibitions of article 5 cover: i) harmful manipulation and deception; ii) harmful exploitation of vulnerabilities; iii) social scoring; iv) individual criminal offence risk assessment and prediction; v) untargeted scraping to develop facial recognition databases; vi) emotion recognition; vii) biometric categorisation and viii) real-time remote biometric identification in publicly accessible spaces (paragraphs a) to h), respectively).
Overall, the Guidelines help clarify our understanding of this robust article and of all the concepts it contains – some of which are not included in article 3’s definitions – although, as the Commission itself recognises, the main role in the interpretation of this norm will fall to the Court of Justice of the European Union.
Let’s take a dive into some of (what I believe are) the most important clarifications made by the Commission, starting with (1) and (2), which share some of the most relevant aspects in need of explanation.
On the prohibition of harmful manipulation and deception – (1) –, the concept of subliminal techniques includes: i) visual subliminal messages; ii) auditory subliminal messages; iii) subvisual and subaudible cueing; iv) embedded images; v) misdirection; and vi) temporal manipulation.
The Commission defines “purposefully manipulative techniques” as those that “are designed or objectively aim to influence, alter, or control an individual’s behavior in a manner that undermines their individual autonomy and free choices”.
In this first paragraph, the Commission also points out the need for a “plausible causal link between the techniques deployed, the material distortion of the behavior of the person, and the significant harm” resulting from it. An assessment of “significant harm” should consider: i) the severity of the harm; ii) the context and cumulative effects; iii) the scale and intensity; iv) the affected persons’ vulnerability; and v) the duration and reversibility.
Regarding paragraph c) (social scoring), the most important clarification lies in the difference between evaluation and classification – in the sense that evaluation involves some sort of “assessment or judgment about a person or group of persons”, relating to the concept of “profiling”.
Among the Guidelines’ biggest merits are the numerous examples given for the situations covered in paragraphs d), e), f) and g) – the latter deeply connected to the principle of equality and non-discrimination (article 13 of the Portuguese Constitution and article 21 (1) of the Charter of Fundamental Rights of the European Union) – and in the extensively discussed paragraph h), where the principle of proportionality is highlighted.
At several points, the Commission reminds us that article 5 is lex specialis vis-à-vis EU legislation on data and consumer protection.
Although these guidelines are not binding, their 135 pages will surely contribute to the interpretation of the norm and to the preparation of providers and deployers of AI systems. All in all, this is a tremendously complex norm, not only because of the sensitivity of the matter, but also due to all the practical implementation challenges it poses.
Hashtags: #AI #AIAct #prohibitedAIpractices #EuropeanCommission #GuidelinesonAI
Image source: Unsplash