Spain plans to set up a controlled testing facility to ensure that this technology develops in line with European legislation.
The need to adapt regulations to technological advances has led to the development of regulatory sandboxes: isolated test environments, promoted and supervised by the authorities, in which innovative business models and systems can be developed securely so that their evolution and potential impacts can be analyzed and, as a result, a measured approach can be taken to the appropriate regulatory coverage.
So far, Spain has set up two sandboxes: one for the financial system, pursuant to Law 7/2020 of 13 November on the digital transformation of the financial system, and another for the electricity sector, under Royal Decree 568/2022 of 11 July establishing the regulatory framework for research and innovation in the electricity sector. Further frameworks are in the pipeline for other sectors, such as industry and transport.
Given the rapid rise of artificial intelligence, a sandbox will also be set up in Spain for this technology, designed to ensure that it is reliable, ethical and robust, in line with the proposed Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence, also known as the future European Artificial Intelligence Act.
In its initial version of April 2021, the proposal addressed the appropriateness of adopting new forms of regulatory oversight for this technology, urging national authorities to set up controlled facilities that would support the development of artificial intelligence systems under strict regulatory supervision before they are placed on the market or put into service. Following the amendments to the proposal approved last June, these recommendations became requirements: all Member States will be required to establish at least one experimentation facility to support the research and development of these systems.
In anticipation of this requirement, in May the Ministry of Economic Affairs and Digital Transformation submitted for public consultation the draft Royal Decree establishing a controlled testing environment for compliance with the proposed Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence.
The sandbox will therefore serve primarily as a vehicle for studying how the requirements likely to be enacted under the proposed European Regulation will operate in practice, and it is expected to produce reports on best practices and technical guidelines for implementation and supervision, based on the evidence obtained.
As such, the goal of the Spanish artificial intelligence sandbox will not be to collaborate with the authorities in defining and developing an adequate regulatory framework, which is normally the aim of this type of testing ground, since the pertinent rules will instead be determined by the European Regulation, which will apply directly in all Member States once approved.
Rather, the aim will be to anticipate the requirements that the Regulation is expected to include, so that suppliers, users and authorities can begin to analyze and put in place the measures expected to become regulatory requirements without having to wait for the Regulation's approval. For this reason, the sandbox will be launched with a predetermined expiration date: operations are planned to cease 36 months after launch, or earlier if the European Regulation is approved before then. For the moment, it is not known what will happen to the experimental projects underway at that time, or whether this scheduled expiration could be cancelled, now that it has been confirmed that setting up a sandbox will not be optional for Member States.
Regarding the types of artificial intelligence systems that could be included in the sandbox, the draft Royal Decree considers two categories:
- High-risk systems, including the following:
- Those that concern products regulated by European Union harmonization legislation in several fields (machinery, toys, recreational craft and personal watercraft, lifts, protective equipment and systems for use in potentially explosive atmospheres, radio equipment, pressure equipment, cableway installations, personal protective equipment, appliances burning gaseous fuels, and medical devices), whenever these are subject to a third-party conformity assessment before being placed on the market or put into service.
- Those intended for use as safety components of a product regulated by European Union harmonization legislation, in the same cases as above.
- Specific artificial intelligence systems relating to biometrics, critical infrastructure, education and vocational training, employment, access to public and private services, law enforcement, management of migration, asylum and border controls, and the administration of justice, where they influence actions or decision-making and could pose a significant risk to health, safety or fundamental rights.
- General-purpose systems, defined by their supplier as those designed to carry out functions of general application (recognition of text and images, generation of audio and video, question answering, translation, etc.).
This group includes foundation models (which the draft Royal Decree does not define, but which are systems trained on large-scale data sets and adaptable to a wide range of tasks) and generative artificial intelligence systems (foundation models designed specifically to generate content such as complex text, images, audio or video with varying levels of autonomy).
Irrespective of their category, certain artificial intelligence systems will be excluded from the sandbox, such as those designed exclusively for research and scientific purposes, for military or defense activities, or concerning national security; those that use subliminal techniques to alter people's behavior in ways that could harm them; those that exploit vulnerabilities or use social scoring of people in ways that could lead to harmful treatment; or, without prejudice to a few exceptions, real-time biometric identification systems.
The sandbox will admit artificial intelligence systems already on the market, systems currently undergoing substantial changes (which will facilitate the introduction of the measures needed to comply with the future European Regulation on Artificial Intelligence), and systems that, while not yet on the market, have reached a sufficiently advanced level of development to be marketed or put into service within the time frame of the controlled testing environment.
Participation in the sandbox will be open to the following:
- Suppliers of artificial intelligence systems, understood as any private legal person, Public Authority, public-sector body or entity of any other kind that has developed (or had a third party develop) an artificial intelligence system and places it on the market or puts it into service under its own name or trademark as a supplier.
Suppliers may submit one or several different systems; however, they will only be admitted to participate with one of them.
- Users resident or established in Spain, defined as legal persons (natural persons are therefore excluded), Public Authorities or public-sector bodies under whose authority an artificial intelligence system is used.
Users may only access the sandbox if the corresponding supplier also does so.
Suppliers and users wishing to participate in the controlled testing environment, which shall at all times be voluntary (with free withdrawal), must apply formally once the competent authority decides to issue the corresponding call for applications and specifies its terms and conditions. The draft Royal Decree initially assigns this task to the State Secretariat for Digitalization and Artificial Intelligence, although the recent approval of the Statute of the Spanish Agency for the Supervision of Artificial Intelligence by Royal Decree 729/2023 of 22 August may well lead to a different arrangement.
Applications will be assessed against a number of criteria, such as the degree of innovation or technical complexity of the product or service, its social or business impact or public interest, the degree of replicability and transparency of the algorithm included in the system, its potential to become a high-risk artificial intelligence system, its level of maturity, etc.
Participants admitted to the controlled testing environment shall test and evaluate their respective artificial intelligence systems in accordance with the preliminary guidelines and technical specifications provided by the State Secretariat for Digitalization and Artificial Intelligence, which may be updated and which, logically, will be based on the requirements the European Regulation is expected to impose.
Once a participant has carried out the activities needed to ensure that its system meets these requirements, it must undergo a self-assessment of compliance. This self-assessment will only be effective within the framework of the controlled testing environment, and it will therefore not replace any conformity assessment required by sector-specific legislation applicable to artificial intelligence systems, although its purpose is precisely to help participants deal with those requirements.
Once completed, the report will be submitted to the State Secretariat for Digitalization and Artificial Intelligence (with the aforementioned exception) for evaluation, particularly regarding the quality management system, the technical documentation and the post-market monitoring plan.
In this way, the planned sandbox will facilitate compliance with the requirements and obligations that the proposed European Regulation is expected to establish, and it will enable any difficulties or problems arising in their practical application to be identified.
In addition, it will encourage the exchange of information and experiences between participants and the Administration on possible improvements to the preliminary guidelines and technical specifications, thus contributing to their updates prior to publication of the final implementation guidelines, which will be made available to all interested parties.
Furthermore, the results of the controlled testing environment will be compiled by the Administration and published in a report of conclusions, which will contribute to the ongoing standardization work and other preparatory actions at the European Union level for the application of the future Artificial Intelligence Regulation.