Garrigues Digital_

Legal innovation in Industry 4.0

 


This is how the US wants to regulate artificial intelligence

The US, seeking to maintain its leadership in artificial intelligence, has published a request for comments along with the principles that should guide AI regulation.

According to Google’s current CEO, artificial intelligence (AI) is probably one of the most important things that humanity is working on. In fact, the United States and China, or, to be more specific, a group of cutting-edge companies in both countries, have been investing billions of dollars with two very clear aims in mind: to lead, now and in the future, the technologies grouped under the AI banner, and to capitalize on the huge benefits they will bring.

The US government has just published a request for comments on a draft memorandum that provides guidance to federal agencies on the regulation of artificial intelligence[1].

The document starts by underscoring that maintaining America’s status as a global leader in AI is vital to preserving its economic and national security. Innovation and growth in AI is one of the US government’s top priorities, and as a result agencies should avoid both any action that needlessly hampers such growth and a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits. In addition, they should consider not just the risks but also the benefits and costs of using AI solutions, compared with the systems AI is designed to complement or replace. Federal regulatory agencies may in some circumstances use their authority to address inconsistent and duplicative state or local laws that prevent the emergence of a national market.

In line with the theories of Professor Adam Thierer and other authors, the memorandum states that agencies should consider new regulation only after concluding that federal regulation is necessary. Any such regulation should be based on the following principles:

- Promote robust and trustworthy AI applications, which will contribute to public trust in AI.

- Public participation is essential. Agencies should make every effort to provide information on any draft legislation in this field and give the public the opportunity to get more involved.

- Take into account and leverage the scientific and technical knowledge available when approving the regulation.

- Carry out, at appropriate intervals, a complete and transparent risk assessment that details which developments in AI are acceptable and which may cause excessive harm.

- Evaluate and quantify the benefits and costs, whether direct, indirect or potential, fostering an approach that maximizes net benefits.

- Given the current highly volatile climate, any regulation that is approved should pursue a performance-based, flexible approach. Rigid approaches, or those that attempt to impose specific technical solutions or standards, should be avoided since they may become obsolete. Moreover, federal agencies should make every effort to keep international uses of AI in mind, to ensure that local regulations do not place American companies at a disadvantage compared with their global competitors. This approach is very interesting because it reflects the pragmatism of the American mentality and helps us understand its leadership in the technological world.

- Consider the important consequences of bias in AI and the discrimination and injustice it may cause.

- Promote, from the design stage, the development of reliable and trustworthy AI systems that guarantee access to the information they process or store.

- All the federal agencies involved must coordinate with each other.

- Facilitate access to the non-confidential data, metadata and information held by the various administrative bodies in order to foster research and the development of AI solutions.

The memorandum also envisages the possibility of non-regulatory actions where these are the most advisable solution. The possible actions mentioned include sector-specific guidance or frameworks, pilot programs that exempt participants from the application of certain rules, and industry codes of conduct.

Comments may be submitted until March 13, 2020.

 


[1] This refers to so-called narrow or weak artificial intelligence, which was legally defined for the first time in the John S. McCain National Defense Authorization Act for Fiscal Year 2019.