Garrigues Digital_

Legal innovation in Industry 4.0

 


The legal risks of [actual] deployment of artificial intelligence, from all angles of business law

Artificial intelligence (AI), in its various forms, has been used in our society and in the digital economy for years. The appearance of generative artificial intelligence (GenAI) in every sphere is, without a doubt, one of the defining topics of the moment.

Increasingly, companies are reviewing and rethinking their AI adoption strategies, coinciding with the “explosion” that came as GenAI was opened to the public through tools built on this technology. That strategy entails a review of, among other things, risk management models, particularly those related to legal implications and compliance obligations. We take a look at this from every angle of business law.

Legal implications

Data protection

From a data protection standpoint, AI tools often make decisions that have an impact on individuals, which means a few precautions need to be considered. Katiana Otero, principal associate in the data economy, privacy and cybersecurity practice at Garrigues, explained that privacy legislation already covers issues such as citizens’ right not to be subject to automated individual decisions. In these cases, the key lies in determining whether replies will be given without human intervention and with legal effects on the data subject, because, if so, this automated processing has to comply with restrictions under data protection legislation: an impact assessment will be required to mitigate potential risks, and a number of precautions will be needed along with enhanced disclosure obligations.

Alejandro Padín, partner in the data economy, privacy and cybersecurity practice at Garrigues, recalled, in relation to GenAI, that one of the key topics on the table has to do with the information feeding these artificial intelligence systems, which could include personal data. The same issue arises with the prompts given to the system to obtain a reply, as well as with the output supplied by the tool. Padín highlighted four key issues that companies must keep closely in mind: legal authority, transparency, international data transfers and security. He also recommends governance systems and oversight and internal control bodies at organizations.

Intellectual property

A disruptive technology like GenAI, with creative capabilities, always causes tensions, especially in a legal context. However, as Carolina Pina, intellectual property partner at Garrigues, explained, we should be optimistic: it will not replace human creativity, as some are foretelling. A few questions need to be asked: what happens with intellectual property rights in the output of a GenAI tool? In the words of this expert, something that has not been created with human intervention cannot be protected. A recommendation for companies, therefore, is to use GenAI tools in processes with human intervention, and to include elements in the output that set it apart from output created using GenAI alone. This would, at least, allow all the elements determining the final output to be protected, and rights to be created in it.

And what happens with the millions of data items in the dataset (the information feeding the GenAI system)? In the general purpose public models, all types of public sources (on the internet) have been used to train them, and, although their contents might be protected by intellectual property rights, what GenAI systems do is similar to “reading to learn”, something the European legislature had already addressed. The Digital Single Market Directive regulates text and data mining in articles 3 and 4: those data may be captured to create the dataset, but website owners can, if they so wish, close the door to a web crawler capturing training information for GenAI.
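In practice, one widely used machine-readable way for a website owner to exercise this kind of opt-out is the robots.txt file. The sketch below is illustrative only: GPTBot and Google-Extended are real, publicly documented AI-training user agents, but which crawlers to block is a policy choice for each site, and whether robots.txt alone satisfies the directive’s reservation requirement is a matter of legal debate.

```text
# robots.txt — illustrative reservation against GenAI training crawlers,
# while leaving ordinary search indexing unaffected.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers (e.g. regular search bots) remain allowed.
User-agent: *
Allow: /
```

Some site owners complement this with an express rights-reservation notice in their terms of use, since robots.txt is a convention that compliant crawlers honor voluntarily.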

Another important risk for companies is the subsequent use made of outputs or replies supplied by the GenAI tool. This is the case, for example, with images created with a GenAI model which could give an exact replica or a partial reproduction of works used in its training. The consequences from the standpoint of intellectual property rights can vary from one case to another, with the risk of a breach of the legislation protecting intellectual property rights. In view of this risk, it is recommendable to assess the model carefully, including how it operates and its output, so as to avoid future liability.

In relation to the datasets that are used to train a GenAI model and form its “brain”, Cristina Mesa, intellectual property partner at Garrigues, noted that they cannot be protected by the intellectual property legislation, and in principle they would not be patentable either, although they could perhaps be protected as trade secrets.

Labor and employment law

From a labor and employment law standpoint, the question to be asked is to what extent AI as a general concept is able to replace the human resources function. In the opinion of Clara Herreros, labor and employment partner at Garrigues, it will foreseeably be used in connection with coordination tasks, as well as for an increasingly necessary supervision which will always require human intervention. The most repetitive and straightforward tasks related to administrative activities (such as payroll management, for example) have been undergoing an automation process for years, which enables the human resources function to focus more on people management. Here is where the human factor cannot be replaced by AI: “notifying a pay reduction in an automated letter is not the same as doing so in person”.

In relation to the algorithm-based decisions that could be adopted in a human resources department, Clara Herreros pointed to a few risks, such as a possible breach of the right to equality and non-discrimination or of occupational health and safety rules, among others, and noted that someone will have to oversee that those decisions comply with the labor legislation in force: an “algorithm watcher” is going to be needed.

This Garrigues partner also recalled that the law known as the Rider Law introduced a general provision on AI: an amendment was made to article 64.4 of the Workers’ Statute, which now sets out the works council’s right to be informed of the parameters, rules and prompts on which AI algorithms or AI systems are based where they affect companies’ decisions with an impact on working conditions or on accessing or retaining jobs, including the creation of profiles.

A guide drawn up by the Ministry of Labor in 2022 also mentions the rights of workers’ representatives and of the workers themselves in this field. This document needs to be kept closely in mind, because the courts and tribunals could use it when interpreting the law and delivering judgments.

Tax law

In the tax field, the amount of taxpayer information accessible to the tax authorities keeps growing and becoming more complete, and the algorithms applied in the rules, together with the principles used to interpret tax law, have always been a central and controversial factor. The question, then: who is watching the watchers?

Gonzalo Rincón, tax partner at Garrigues, noted that the Spanish Tax Agency’s 2020-2023 Strategic Plan recalls that the use of AI tools may contribute to, and be very useful for, monitoring compliance with tax law and combating evasion, and recognizes that these systems have been in use for a long time now. This expert added, however, that, for this to be so, it needs to be ensured that the tax authorities’ tools and taxpayers’ rights are evenly matched.

One of the great risks is the lack of transparency or explainability of the adopted AI system, which, as Gonzalo Rincón discussed, means that in tax audits, besides substantiating the facts of the case, the Public Treasury will also have to substantiate the way in which its conclusions on the taxpayer concerned were drawn. AI must also avoid bias in any drawing up of risk profiles of citizens based on belonging to a certain race or social background, and so on.

Litigation and arbitration

The justice system is immersed in an intense digital transformation process. The 2030 plan of the Ministry of Justice includes various projects related to digital immediacy or the automation of processes, and the term algorithmic justice can now be used. María Marelza Cózar, dispute resolution senior associate in the litigation and arbitration practice at Garrigues, noted that the development and use of AI systems in the Spanish judicial system is mostly being seen in procedural formalities. The current wording of the EU’s draft Artificial Intelligence Regulation does not consider its use for purely ancillary and bureaucratic activities to be high risk, although not so for researching and interpreting the law or making court decisions, because in these cases there could be an adverse impact on fundamental rights, democratic values and especially the right to an effective remedy and to a fair trial. It is a question (especially if GenAI is introduced in the sphere of justice) of avoiding the risk of potential bias, errors and opaqueness. She recalled on this subject a comment by Manuel Marchena, presiding judge of Chamber Two of the Supreme Court, regarding the possible proliferation of robot judges: “From them we can expect exact decisions, not fair ones”.

Where there does appear to be fertile ground for AI, María Marelza Cózar underlined, is in alternative dispute resolution, where it could bring greater productivity and efficiency to the resolution of this type of case, characterized by its predictability and low level of complexity of assessment.

On the subject of what is happening in our neighboring countries, Cózar spoke about the United Kingdom, where AI is already being used for online divorces or for small claims proceedings. More concerning news is coming from other countries, such as China, where judges and public prosecutors are using AI in criminal cases.

Administrative law

Javier Fernández Rivaya, administrative and constitutional law partner at Garrigues, highlighted three adopted measures that are going to have an effect on the course AI will take in Spain or indeed in Europe as a whole.

One is the creation of the Spanish Artificial Intelligence Oversight Agency, which will work towards the creation of regulated testing environments to facilitate the responsible deployment of artificial intelligence systems. This agency will carry out oversight, advisory, awareness-raising and training activities targeted at public and private entities, for the proper implementation of domestic and European legislation on the appropriate use and development of artificial intelligence systems and their algorithms. It will also be responsible for auditing, examination and penalty functions, along with any others conferred on it by its specific legislation.

An AI regulatory sandbox is also scheduled to be set up, with the aim of promoting reliable, ethical and robust technology. This controlled testing environment will be created shortly, with the basic aim of serving as a tool to study how the requirements that the proposed EU regulation seeks to impose would operate in practice, and it is expected to give rise to reports on good practices and to technical guidelines for implementation and oversight based on the evidence obtained.

Alongside this, the European Centre for Algorithmic Transparency (ECAT), based in Seville, will bring together top-level experts to support the European Commission’s work as regulator. In addition to becoming an international knowledge hub on this subject, contributing to achieving the greatest possible transparency, it is intended to help prevent systemic risks associated with the misuse of algorithms.

Companies’ adoption of GenAI

In view of all the risks described, when deciding whether to procure or purchase tools that include GenAI, it is crucial for companies to be extremely thorough in their legal analysis of those tools and to prepare internally for their use as a work tool at the organization. Two important practical questions need to be considered: the terms and conditions of the product to be acquired and the preparation of internal policies on its use.

Procurement

When procuring GenAI tools, attention needs to be paid to the terms and conditions and the small print of agreements. It is crucial to understand how the risks related to this technology are distributed between suppliers and customers.

On this subject, Cristina Mesa, intellectual property partner at Garrigues, distinguished between the contractual rules a company encounters when using GenAI solutions supplied by a specialized supplier, and those that apply when it decides to “build” or integrate its own solution, using an open source model or starting directly from scratch. She highlighted a few important contractual issues needing to be reviewed in these cases: her checklist includes matters such as the service terms and conditions (access); ownership of outputs; information and transparency on the source of the training data; indemnity against breaches of intellectual property rights; a disclaimer in relation to errors in the system’s replies; use restrictions (policies); confidentiality; data protection; warnings regarding as-is services (no obligation of result); user responsibility; and jurisdiction and governing law.

She also discussed issues relating to due diligence processes in M&A transactions where this type of system exists at the company under review. As this expert noted, it is essential to know where the value of these assets lies, which is not always in the software.

Policy on use of GenAI systems

Once integrated, GenAI systems will be used across the board by everyone at the organization and, depending on the type of system to be used, it is crucial to have internal regulations setting out how the employees and members of that organization are to use these tools in their work. These regulations, which may take the form of an internal policy, will have to be known by every member of the organization and complied with as a mandatory obligation, to avoid liability for the company. It is important for the specific content of this policy to be adapted to both the type of organization and the type of tools that are going to be used.