The challenges of European AI regulation for the financial sector
José Ramón Morales, partner in Garrigues' Commercial Department, and co-head of the Technology, Communications and Digital area, Garrigues Digital and the Garrigues FinTech Hub.
Artificial intelligence has enormous potential for financial institutions. Therefore, in addressing its regulation, the aim is to provide a framework of legal certainty to facilitate its adoption and also to address the challenges and risks for the sector, customers and supervisors.
The adoption of artificial intelligence (AI) tools in the financial sector is not a new phenomenon, and has long been identified as a key strategy for innovation and competitiveness in this industry. However, we are at an essential moment for its regulation, due to the concurrence of two factors:
(1) the fact that the proposed European Regulation on AI is in its final stages before definitive approval, and
(2) the emergence of a significant number of foundation models and solutions in the field of so-called generative AI, based on deep learning techniques and neural networks and trained on huge amounts of structured or unstructured data, which are capable of generating content (text, images, audio or video) with varying levels of autonomy. This, in addition to opening the door to new and promising use cases for the financial world, poses new challenges due to the type of associated risks and possible legal implications.
AI in general, but especially generative AI, has enormous potential for financial institutions, allowing them to improve the efficiency, quality and personalisation of services and products offered to customers (financial coaching; investment strategies, recommendations and personalised offers; investment assistants), but also to optimise their internal processes (loan processing, credit scoring, portfolio management, scenario simulation, software generation, development of personalised marketing content), risk management (anomaly or fraud detection, risk assessments) and compliance (prevention of money laundering).
However, in approaching its regulation, the aim should be to create a legal framework that provides legal certainty for the entities that adopt it, while at the same time addressing the challenges and risks for the industry, customers and supervisors: in particular, the levels of transparency, explainability and reliability of AI solutions, the ability to identify who is accountable for their adoption and use, and the potential threats to security and privacy, not to mention the ethical consequences and the social and environmental impact that AI systems may have.
These challenges have been addressed in the European Union since the first 2021 version of the proposed AI Regulation through a risk-based regulatory model, which differentiates between prohibited, high-risk and limited-risk AI systems and imposes a series of requirements and obligations on the different operators in the value chain (providers, deployers, importers, distributors and authorised representatives of AI systems). Among the changes made by the European Parliament in its position of 14 June 2023, however, is an attempt to respond to the heightening of certain risks that may accompany the development and adoption of generative AI systems, including the introduction of specific rules for foundation models within the text of the regulation.
As an example, AI systems that are used to assess the credit rating or creditworthiness of natural persons are considered by the proposed AI Regulation as high risk and must therefore be subject to a conformity assessment prior to being placed on the market or put into service, and comply with specific requirements on data quality, technical documentation, transparency, human supervision, robustness, accuracy and cybersecurity.
In addition to the horizontal rules envisaged for AI systems across all sectors, the proposed AI Regulation contains specific provisions for financial institutions, and for the supervision of their conduct, when they act as users, providers, importers or distributors of AI systems, or when they place such systems on the market or put them into service.
But we must not forget that the future AI Regulation will be inserted into the complex regulatory framework, at European and national level, that governs the financial sector, especially with regard to internal governance, risk management, conduct and customer protection, solvency, cybersecurity and digital operational resilience, prevention of money laundering, data protection and sustainability. Financial institutions adopting AI systems in general, and generative AI in particular, will therefore need to assess and implement risk management and compliance mechanisms in response both to the final text of the AI Regulation and to the other sectoral rules mentioned above that may be affected by the adoption of such systems.
The proposed AI Regulation is therefore a very important step towards creating a common and coherent legal framework for AI in the EU, but it also poses a significant challenge, especially for the financial sector, in terms of how the regime it establishes interacts with those arising from the other regulatory frameworks under which financial institutions carry out their activities.