Author: Mariana de Lemos Campos
Regulation (EU) 2024/1689, which establishes harmonized rules on Artificial Intelligence (the AI Act), represents a landmark in the global regulation of Artificial Intelligence (AI). Its overall structure is based on a risk-based approach and categorizes AI systems into four levels, ranging from "minimal risk" (not subject to the Regulation) to "unacceptable risk" (which is prohibited).
At first glance, the AI Act appears effective in terms of compliance: it requires risk assessments, documentation, testing, and human oversight.
However, when it comes to human oversight (defined in various parts of the AI Act, but especially in Article 14), a serious challenge arises: it is a requirement that presupposes humans can effectively monitor and intervene in autonomous operations at all times.
The article states that “High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use” (Article 14(1)).
In practice, the expression “effectively overseen” implies “constantly supervised,” and expecting full human oversight ignores decades of research on automation complacency (or, as the AI Act itself puts it, “automation bias”).
Although the AI Act explicitly acknowledges the need “to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems […]” (Article 14(4)(b)), this seems more a theoretical acknowledgment than a substantive treatment of the problem.
This is even more critical when considering autonomous vehicles (AVs).
In scenarios where machines handle the majority of tasks (and do so correctly and efficiently), human operators gradually disengage. They come to trust the automation and may be unable to react promptly when the system fails. This is not a new phenomenon. Researchers in applied psychology, such as Raja Parasuraman and Dietrich Manzey, have discussed for over a decade how mental fatigue and cognitive load directly degrade human performance in attention-demanding tasks. The inability to maintain the required level of attention is not laziness — it is a cognitive consequence of long-term reliance on systems that rarely require intervention.
The phenomenon in which human operators lose skills when interacting with highly automated systems — known as the "out-of-the-loop performance problem" — has been studied since the 1980s (especially in the context of air traffic control). It shows that when humans are not continuously engaged in an activity, they lose situational awareness and the ability to respond when something goes wrong.
Therefore, although the AI Act demands human oversight, it does not address the psychological and operational limitations of such oversight. It merely assumes that assigning responsibility to a human and mentioning the risk of automation complacency are sufficient. In practice, an AV operator is not supervising — they are set up to fail.
Although the AI Act went through extensive institutional negotiation and incorporates input from stakeholders across industry and civil society, its structure remains primarily technocratic. With respect to AVs in particular, oversight is treated mostly as a technical safety and compliance challenge, rather than as a social and political issue involving deeper questions of public safety.
Without being overly pessimistic, it is necessary to recognize that the AI Act holds real promise: it introduces obligations of transparency, documentation, and oversight that can help curb harmful uses of AI.
But the reality is that in contexts like AVs, where human control is more symbolic than real, these mechanisms may be insufficient.
If we rely on human oversight that does not, and cannot, reliably occur, we risk turning the deployment of autonomous vehicles into a public experiment with potentially harmful consequences.
© photo: https://blog.sintef.com/digital-en/automated-vehicles-how-to-keep-humans-in-control/