Garrigues Digital_

Legal innovation in Industry 4.0

Should artificial intelligence be ethical?

Ethics is the set of moral principles that guide human behavior in all areas of life. In general, we turn to ethics to identify the best way to act. But if ethics governs human behavior, should machines also be required to follow ethical principles? The 52 experts appointed to the European Commission's High-Level Expert Group on Artificial Intelligence ("AI") believe they should, and that minimizing the risks of this technology calls for a human-centric approach to AI. The group considers that AI must be trustworthy, meaning it should respect fundamental rights, be technically robust and have an ethical purpose.

The expert group recently published its draft AI Ethics Guidelines for public consultation; the final version is expected in March 2019. The aim is not so much to provide a list of core values and principles for AI as to offer guidance on how to implement and operationalize those values in AI systems. The Guidelines are not intended to replace or modify any policymaking or regulation, but rather to complement them. The final version will also put forward a mechanism allowing stakeholders to voluntarily endorse the Guidelines.

The document consists of three chapters, moving from the most abstract to the most concrete. The first chapter offers guidance on ensuring an ethical purpose for AI. The second and third chapters focus on realizing and assessing trustworthy AI. The underlying premise is that ethical principles should be rooted in the fundamental rights enshrined in the EU Treaties, particularly respect for human dignity, democracy, individual rights and freedoms, equality, solidarity and non-discrimination.

The expert group considers that any AI-based technical development should observe the following five principles: a) beneficence, improving individual and collective wellbeing; b) non-maleficence, or "doing no harm," particularly to vulnerable demographics but also to the environment; c) preservation of human autonomy, whereby humans remain free from subordination to AI systems; d) justice, avoiding discrimination and an uneven distribution of AI's benefits and burdens; and e) explicability and transparency in AI decision-making.

Applying the above principles, the draft Guidelines go on to list 10 requirements for AI to be trustworthy:

  • Accountability measures, including compensation for AI failures
  • Use of complete, appropriate and unbiased data in AI systems, ensuring that such data cannot be used against the individuals who provided it
  • Design for all, whereby services can be used by all citizens, including people with disabilities
  • Human oversight in algorithm-based decisions
  • Non-discrimination
  • Respect for and enhancement of human autonomy, which includes ensuring a fair distribution of the benefits created by AI systems and guaranteeing that systems tasked with helping users treat the protection of their overall wellbeing as central to their functionality. In my opinion, this will be one of the most difficult requirements to put into practice, since current developments are heading in precisely the opposite direction, as the Israeli historian and professor Yuval Noah Harari illustrated in a recent article.
  • Respect for privacy
  • Robustness
  • Safety
  • Transparency

To address these requirements, the experts recommend using both technical and non-technical methods. Among the latter, the group mentions education. I believe this must be one of the key points in the discussion, since, in today's world, only a small minority have the knowledge needed to understand the global implications and impact of AI-based services.

I find the Guidelines’ last chapter to be the most interesting, given that it lists specific questions that can be asked to assess whether the 10 requirements for trustworthy AI are truly in place. Although these questions can be applied to any sector, the expert group has undertaken to prepare specific lists for healthcare, insurance and autonomous vehicles.

The Guidelines are a good starting point, and one with which the European Commission aims to lead the discussion on the ethical impact of a technology that is currently in the hands of only a few tech companies. The expert group intends to publish a second deliverable on policy and investment recommendations in May 2019.

As we discussed in an earlier article, the issue of investment is key: if in the coming years Europe fails to invest more resources than the US and China, we will not only miss the AI boat for good, but also risk becoming mere consumers of services designed beyond our borders.