Blog

The AI Act political agreement: a cautionary tale

LPL Author(s):
Domingos Farinho

Since the announcement, on the night of December 8th, of the political agreement on the AI Act, the Internet has been flooded with tweets, posts and texts marking the event. Most of them are celebratory in nature, advertising the AI Act as the first law that aims to regulate artificial intelligence. But legal scholars and practitioners shouldn’t be too quick to celebrate. The political agreement still leaves many technical issues to be written into the law, and legal principles and rules will very much depend on the way those provisions are drafted. When the final text is published in the Official Journal, much attention should be devoted to reading the AI Act in search of the actual rules that come out of it, with all the interpretation and application problems that will be involved. In such a long and fractured process as the one that will lead to the AI Act in full force, intentionalism will be difficult to defend, as legislative intention will more often than not be hard to ascertain, and textualism will matter when applying the AI Act’s rules, especially when public supervisory authorities and courts start adjudicating. The role of legal scholars in extracting the rules from the final text and uncovering application problems is therefore especially relevant in this case. Much of what has already been written on different versions of the AI Act proposal will have to be revised and updated in light of the final text.

Also, I would argue that one of the most important dimensions of the AI Act lies not in the political agreement, or even in the enactment of the regulation, but in the administrative apparatus that will have to be put in place, both at EU and Member State level, to uphold the new framework. The AI Act allows for a great deal of discretion, both in self-regulation and in the exercise of supervisory powers. At the self-regulation level, there is a high degree of discretion when assessing the compliance requirements set out in the Act, when putting in place a quality management system, and when drawing up the technical documentation. These areas of discretion are in turn controlled by another layer of discretion granted to the market surveillance authorities in Member States when exercising their regulatory and supervisory powers. One of the most important tasks entrusted to Member States is to certify the private entities that will perform the conformity assessment procedure for AI systems deployed in the market. The exercise of these and other powers will have to be adequately framed within the EU legal system, including the case law of the CJEU. The most important example of such an approach is the mandatory fundamental rights impact assessment, also present in the DSA, which will entail a very difficult exercise in the interpretation, application and balancing of legal provisions and rules.

In turn, Member State authorities will have to work with the new AI Office at EU level, which follows similar approaches found in both the GDPR and the DSA. Again, a new layer of discretion is present, given that the AI Act uses terms such as "cooperation" to describe the relation between these different authorities within their power-exercising framework.

In order for the AI Act to become fully applicable as a comprehensible and foreseeable set of rules, much has yet to happen, and the role of lawyers, both practitioners and theorists, will now be more important than ever.

Lisbon Public Law Research Centre
