Garrigues Digital_

Legal innovation in Industry 4.0

 


The European Union's role guarding fundamental rights in the artificial intelligence framework: ‘nulla IA sine ethica et sine lege’

Carolina Pina and Marta Valero, Garrigues IP Department.

 

The European Union has taken the lead with a proposed regulation on AI that seeks to create better conditions for the development and use of this technology. The key will be to achieve a regulatory framework that strikes a balance between fostering innovation and guaranteeing a human-centric, ethical and responsible AI.

AI is a fast-evolving technology with the potential to transform our lives and bring progress to our society. At the same time, however, the unpredictability of AI poses tremendous risks to humanity, as some of the most reputable engineers, philosophers, sociologists and legal scholars have warned. In the apocalyptic words of Professor Yuval Noah Harari and Stephen Hawking, published in The Economist and by the BBC respectively, “AI has hacked the operating system of our civilization” and “the development of full artificial intelligence could spell the end of the human race”.

In response to the AI challenge, the European Union has taken the lead with a proposal which lays down a legal framework that fosters the development of AI, while ensuring a high level of protection for health, safety, fundamental rights, the environment, and EU values. However, in certain areas the wording of the proposal is somewhat confusing, making it difficult to understand and giving rise to legal uncertainty.

Firstly, the debate surrounding the issue of “definitions” has been one of the principal challenges of the European agenda in the last few years. As we explained in this post, in the amendments approved in June, the European Parliament chose a technologically neutral definition of “AI system”, which is in line with the OECD’s definition and differs substantially from the Commission’s initial proposal. In addition, the European Parliament has included new concepts such as “foundation models”, “general purpose AI systems” or “generative AI systems”, which may themselves be subject to further changes, adding complexity to the regulatory landscape.

Secondly, given the pace of technological innovation, EU policymakers have been compelled to incorporate new transparency and risk-management obligations for foundation models regardless of their use and level of risk, which effectively means that they would be treated as high-risk applications. Whether imposing such extensive requirements could jeopardize Europe's competitiveness and technological sovereignty remains uncertain.

Thirdly, digging into the details of the text means entering a labyrinth of annexes, interdependencies and cross-references that can make the process of classifying an AI system highly tortuous. While a risk-based approach seems appropriate, the boundaries between prohibited and high-risk systems – for instance, biometric systems – are blurred and should be clarified. Moreover, properly distinguishing between each type of high-risk system, or identifying the fundamental pillars of the conformity assessment, involves sorting through a multi-layered, horizontal regulatory framework that is not easy to unravel.

Finally, the use of concepts such as “significant risk of harm” raises legal challenges and could undermine the rollout of the regulation, especially when such concepts are decisive for crucial issues such as correctly classifying an AI system. A more precise interpretation than the one provided in the recitals is needed, but we will have to wait for the courts to address these concepts and cast light on the matter. This entails a risk of inconsistent national interpretations and fragmentation within the EU.

It is worth noting that compliance with this convoluted regulation could prove particularly challenging and costly for small and medium-sized enterprises (SMEs). To facilitate testing and demonstrate the technical feasibility of AI systems, Member States must support the setting up of regulatory sandboxes, for which public funding could be essential.

The European institutions will still have a chance to fine-tune the draft text during its legislative journey, in the trilogues. To prevent regulatory fragmentation and ensure legal certainty, it is crucial that the above-mentioned challenges are tackled from an international perspective and with an eye on the future technological breakthroughs that will undoubtedly arise. Only in this way will we obtain an AI regulation that can withstand the passage of time and accommodate the unstoppable technological progress we are witnessing. This is particularly true given that the EU proposal is likely to have the so-called "Brussels effect" and become the legal standard in other jurisdictions, as occurred with the GDPR.

It is essential that the European Union remains the guardian of fundamental rights, which must always prevail over short-term economic interests. Indeed, we must ultimately be able to rely on a regulatory framework that strikes a balance between fostering innovation and guaranteeing a human-centric, ethical and responsible AI. In the face of apocalyptic predictions, we must remain confident that EU lawmakers will be up to the task of meeting this unprecedented challenge to society.