European Regulation on Artificial Intelligence and Industrial Relations
Regulation (EU) 2024/1689 of the European Parliament and of the Council also regulates the implications of the use of artificial intelligence in the context of labor relations, both individual and collective.
At first sight, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (which entered into force on 2 August 2024 and becomes applicable, in general, on 2 August 2026, although some of its provisions already apply) may seem to pay hardly any attention to the use of artificial intelligence (AI) in the field of labor relations, treating it as a secondary matter. That is not at all the case.
Firstly, it should be borne in mind that practically all AI systems that can be used in the workplace are classified as high-risk AI systems. Point 4 of Annex III to the regulation includes among the high-risk AI systems referred to in article 6(2) those related to the core aspects of industrial relations: AI systems used for the recruitment and selection of individuals, including the assessment of candidates, and those used to make decisions affecting working conditions, the establishment or termination of contractual relationships, the assignment of tasks, and the evaluation and supervision of the performance and behavior of employees. In practically every aspect of labor relations (selection and hiring, working conditions, assignment of tasks, supervision and control of work activity, promotion and dismissal), the use of an AI system therefore means that it is classified as high risk and subject to the detailed and stringent rules provided for such systems. In data governance, possible biases must be taken into account that may affect the health and safety of individuals or their fundamental rights, or that may lead to some form of discrimination (article 10.2.f), which is especially significant in the workplace. In addition, high-risk AI systems must be designed and developed in such a way that they can be effectively overseen by natural persons (article 14.1). This human oversight aims to prevent or minimize risks to health, safety or fundamental rights, which, again, is particularly relevant in the workplace.
Another point to consider is that the express prohibition of certain AI practices could have a particular impact in the field of labor relations. Practices aimed at inferring the emotions of a natural person in the workplace (do legal persons or machines have emotions?) are prohibited, unless the AI system is used for medical or safety reasons (article 5.1.f). Similarly, the use of biometric categorization systems that classify individual natural persons on the basis of their biometric data to deduce, among other things, their trade union membership is prohibited (article 5.1.g). This protection of trade union membership data (placed alongside race, political opinions, religious or philosophical convictions, and sex life and sexual orientation) is undoubtedly important because of the relevance given to it, although I really fail to understand how a person's trade union membership could be deduced from biometric data (barring Lombrosian exaggerations). In any case, these rules highlight concern about the use of AI systems in the workplace and the intention to protect certain aspects of employees' private sphere.
The regulation also covers the information rights of employees, and of their representatives, where AI systems are used in the workplace. Article 26.7 provides that, before putting into service or using a high-risk AI system in the workplace, employers must inform the employees' representatives and the employees concerned that they will be exposed to the use of the high-risk AI system. In the absence of representatives, the information must in any case be provided to the employees concerned. However, the information covers only the fact that employees will be exposed to an AI system; it does not extend to its characteristics or details. This makes it possible, on the one hand, to safeguard trade secrets and, on the other, to alert the employees' representatives, and the employees individually, to the use of the AI system so that they can carry out their surveillance and control tasks, ensuring respect for fundamental rights and reacting to discriminatory bias. The obligation of those responsible for the deployment of high-risk AI systems to provide "a summary of the findings of the fundamental rights impact assessment" (Annex VIII, Section C, point 4) should also be taken into account. Very importantly, article 86.1 of the regulation enshrines the right of any person affected by a decision that the person responsible for deployment adopts on the basis of the output of a high-risk AI system listed in Annex III, where that decision produces legal effects or similarly significantly affects them in a way that they consider detrimental to their health, safety or fundamental rights, to obtain from the person responsible for the deployment clear and meaningful explanations of the role the AI system played in the decision-making process and of the main elements of the decision taken.
On the other hand, paragraph 11 of the same article 26 of the regulation provides that those responsible for the deployment of high-risk AI systems that make decisions, or help to make decisions, related to natural persons must inform them that they are exposed to the use of a high-risk AI system. As far as labor matters are concerned, this implies a duty to provide such information to candidates in a personnel selection process in which an AI system is to be used.
We are therefore in the presence of complete and directly applicable rules, which do not require any implementation in domestic law. Having said that, article 2.11 of the regulation states that its provisions "shall not prevent the Union or the Member States from maintaining or introducing laws, regulations or administrative provisions that are more favourable to employees as regards the protection of their rights with regard to the use of AI systems by employers or that encourage or allow the application of collective agreements that are more favourable to employees".
This is established only as regards "the protection of their (employees') rights with respect to the use of AI systems by employers", which leaves out issues related to the AI systems themselves, as well as the procedural aspects of their application (the information rights of article 26.7) and possible collective or trade union rights.
The extension or improvement of employees' rights allowed by the regulation should therefore focus on the possibilities of monitoring and reacting to the consequences of the application of AI systems. The implementation of such systems falls within the scope of managerial powers, and the systems are also protected by trade secrets, so the reinforcement of labor rights must come from the possibilities of reacting to irregular uses or discriminatory results of their application. This is better suited to collective bargaining than to legal intervention. It is collective agreements that, in these situations, must establish the necessary corrective measures; any preventive controls that legal provisions might try to introduce would be inadequate. This is a challenge that our collective bargaining should certainly have started to face by now.