
Artificial Intelligence and RegTech in the Field of Anti-Money Laundering and Counter-Terrorist Financing: Opportunities, Risks, and the Role of Humans

  • Writer: Austėja Dimaitytė
  • 13 hours ago
  • 5 min read

In recent years, an increasing number of companies and organizations, including financial institutions, have been implementing artificial intelligence (AI) solutions, particularly in the field of anti-money laundering and counter-terrorist financing (AML/CFT).

 

RegTech (regulatory technology) solutions support financial institutions in complying with regulatory requirements, while rapid advances in AI create opportunities to implement these processes more efficiently and effectively. However, the growing use of AI-based solutions naturally raises questions about the role and responsibility of humans in this area.

 

An analysis of the views of the Financial Stability Institute (FSI), the European Banking Authority (EBA), and the European Central Bank (ECB), together with market trends observed not only in Lithuania but also in other European Union countries such as the Netherlands, reveals the scale of the technological transformation under way. For example, banks in the Netherlands have announced plans to eliminate approximately 2,600 AML/CFT-related positions over the next two years by using AI technologies for day-to-day monitoring operations.

 

Although technology and artificial intelligence are an integral part of the future, it is essential to understand how these solutions should be used responsibly, what risks they pose, and how processes should be properly managed in order to prevent potential violations.

 

Key Risks of Using Artificial Intelligence in KYC

 

The implementation of the KYC (“Know Your Customer”) principle is mandatory for financial institutions in order to identify and verify customers’ identities and assess the risks associated with them.

 

Both financial institutions and supervisory authorities recognize the potential of artificial intelligence in this area, as it can reduce manual workloads and allow organizations to allocate resources to the most critical processes. However, although AI can significantly increase process efficiency by rapidly analyzing large volumes of data and producing comprehensive investigative conclusions within a short time, its use also entails substantial risks and therefore calls for a high level of caution. In my view, the three most significant risks are as follows:

 

Model risk is the probability that an AI model will make incorrect decisions due to inaccurate parameters, parameters that are not adapted or are insufficiently adapted to the organization’s activities, errors, biased or insufficient data, unexplainable logic, or other factors.

 

The performance of AI models directly depends on the quality of the data used to train them: if the data is incomplete, inaccurate, or historically biased, the model may replicate existing social biases, leading to systemic discrimination against certain client groups or to incorrect decisions.

 

A significant risk also arises from the opacity of AI models - they can operate as “black boxes,” making it difficult to understand how a specific decision was made. This complicates compliance and internal control processes and highlights the need to integrate explainable AI models capable of justifying their conclusions, thereby ensuring the implementation of the principle of operational transparency.
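
As an illustration only, the sketch below (in Python) shows one basic form of explainability for a simple linear alert-scoring model: each feature’s contribution to the score is reported together with the decision, so an analyst can see which factors drove an alert. The feature names, weights, and scoring logic are hypothetical and are not taken from any particular system.

```python
import math

# Hypothetical, hand-set weights for a simple linear alert-scoring model.
# In a linear model, each feature's contribution (weight * value) can be
# reported alongside the decision - one basic form of explainability.
WEIGHTS = {
    "cash_intensity": 1.8,          # share of cash transactions
    "high_risk_jurisdiction": 2.5,  # exposure to high-risk countries
    "structuring_pattern": 3.1,     # transactions just below reporting thresholds
    "tenure_years": -0.4,           # a longer relationship lowers the score
}
BIAS = -3.0

def score_with_explanation(features: dict) -> dict:
    # Per-feature contributions to the raw score.
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS}
    raw = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-raw))
    # Sort by absolute contribution so the analyst sees the main drivers first.
    drivers = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {"alert_probability": round(probability, 3), "drivers": drivers}

if __name__ == "__main__":
    customer = {"cash_intensity": 0.7, "high_risk_jurisdiction": 1.0,
                "structuring_pattern": 0.0, "tenure_years": 8.0}
    print(score_with_explanation(customer))
```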

 

For a model to be reliable, it must also be able to use the most up-to-date available information; models that do not take such information into account will produce inaccurate results. For example, a model may fail to identify the most recent negative information in the “Know Your Customer” process. Finally, insufficient model adaptation - when a model is unable to generalize to a broader context - or the generation of convincing-sounding but factually incorrect information, known as “hallucinations,” can significantly increase the risk of operational breaches and violations of the legal requirements applicable to the organization.

 

Cyber risk refers to the set of threats that arise when financial institutions integrate AI solutions and increase the complexity of their technological ecosystems. With the use of AI, organizations become more vulnerable due to the growing number of touchpoints with external service providers and the increased interconnection of IT systems, which opens broader opportunities for cyberattacks.

 

Inadequate access control measures may allow unauthorized access to training data or the AI models themselves, posing risks to both data confidentiality and model integrity. AI models may also be affected by data poisoning attacks, in which training datasets are maliciously modified to alter model behavior.
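
As a small, purely illustrative control against unauthorized modification of training data, the sketch below verifies the training files against a previously stored hash manifest before any (re)training run. The manifest format (a JSON file mapping file names to SHA-256 hashes) and the file names are assumptions made for the example; this is a minimal sketch, not a complete defence against data poisoning.

```python
import hashlib
import json
from pathlib import Path

# Minimal sketch: verify training files against a stored hash manifest before
# a model is (re)trained, so unauthorized modification of the data (e.g. a
# poisoning attempt) is detected early. The manifest format is an assumption:
# a JSON object such as {"transactions_2024.csv": "<sha256 hex digest>", ...}.

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: Path, manifest_path: Path) -> list:
    expected = json.loads(manifest_path.read_text())
    problems = []
    for name, expected_hash in expected.items():
        file_path = data_dir / name
        if not file_path.exists():
            problems.append(f"missing: {name}")
        elif sha256_of(file_path) != expected_hash:
            problems.append(f"modified: {name}")
    return problems

# Illustrative usage: abort retraining if anything changed without authorization.
# issues = verify_training_data(Path("training_data"), Path("manifest.json"))
# if issues:
#     raise RuntimeError(f"Training data integrity check failed: {issues}")
```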

 

Operational risk arises when the implementation of AI solutions complicates an organization’s technological environment and increases the likelihood of operational disruptions. For companies using legacy or fragmented IT systems, AI integration can significantly increase architectural complexity by creating additional dependency points and higher risks related to system interoperability. Such technological “layering” increases the probability of IT failures, operational disruptions, and incidents, while their management becomes considerably more complex. In addition, some processes may become overly dependent on automated solutions, meaning that even short-term disruptions of AI models or systems can have wide-ranging operational consequences.

 

The use of artificial intelligence in financial institutions may give rise to various data-related challenges. Although many key aspects of data governance and management - such as model risk, customer privacy, and information security - are already addressed within the existing regulatory framework, organizations should assess not only the AI model itself but also the increased risks related to data loss, confidentiality breaches, data integrity violations, and other associated risks.

 

These are only some of the AI-related risks identified by the author. The purpose of this article is to emphasize that regulated sectors (and beyond) cannot leave systems in which AI “maintains order” without appropriate human oversight. Moreover, when implementing solutions and developing AI model–based IT architectures, both their design and their ongoing supervision should involve specialists from the relevant fields, such as AML, risk management, compliance, and other related disciplines.

 

The excessive replacement of specialists with AI solutions can also lead to the erosion of human expertise, which would complicate not only system maintenance but also compliance with regulatory and operational transparency standards. If AI models do not function properly or if non-standard situations arise, organizations may lack the experience needed to identify problems in a timely manner and respond appropriately.

 

Responsible AI implementation must be grounded in sustainable governance, continuous training, and real-time monitoring. In high-risk areas, it is essential to ensure human oversight, with the authority to monitor, intervene in a timely manner, and suspend or adjust AI-driven decisions. Achieving this requires consistently strengthening organizations’ internal expertise and knowledge.
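
A minimal sketch of such human oversight is shown below: automated decisions are executed directly only when the model is confident and the case is low-risk, while everything else is routed to a human analyst who can confirm, override, or suspend the automated outcome. The thresholds and field names are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    AUTO_CLEARED = "auto_cleared"            # executed without human involvement
    PENDING_HUMAN_REVIEW = "pending_review"  # queued for an analyst's decision
    ESCALATED = "escalated"                  # sent straight to senior compliance staff

@dataclass
class ModelDecision:
    case_id: str
    alert_probability: float  # output of the AML model
    customer_risk: str        # e.g. "low", "medium", "high"

def route(decision: ModelDecision,
          auto_clear_threshold: float = 0.05,
          escalation_threshold: float = 0.90) -> Outcome:
    # High-risk customers are never auto-cleared, regardless of model confidence.
    if decision.customer_risk == "high" or decision.alert_probability >= escalation_threshold:
        return Outcome.ESCALATED
    if decision.alert_probability <= auto_clear_threshold:
        return Outcome.AUTO_CLEARED
    return Outcome.PENDING_HUMAN_REVIEW

if __name__ == "__main__":
    print(route(ModelDecision("C-1024", 0.02, "low")))     # auto_cleared
    print(route(ModelDecision("C-2048", 0.40, "medium")))  # pending_review
    print(route(ModelDecision("C-4096", 0.30, "high")))    # escalated
```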

 

Artificial intelligence can become a powerful ally in the fight against money laundering and terrorist financing, but it cannot replace the human role in the AML/CFT field.

 

Human judgment, critical thinking, ethics, and accountability remain indispensable when working with these technologies. This is also illustrated by the legal framework for AML/CFT, which obliges organizations to test the systems and automated decisions they use in order to assess whether the selected system parameters truly reflect the organizations’ business models and regulatory obligations in practice.
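
In practice, such testing often takes the form of replaying labeled historical cases through the current alert logic and checking whether the chosen parameters would have caught confirmed suspicious activity at an acceptable alert volume. The sketch below illustrates the idea with hypothetical data and a hypothetical score threshold; it is not a prescribed testing methodology.

```python
from dataclasses import dataclass

@dataclass
class HistoricalCase:
    case_id: str
    score: float                # score the current parameters would assign
    confirmed_suspicious: bool  # outcome of the historical investigation

def evaluate_threshold(cases: list, threshold: float) -> dict:
    # Replay history: which cases would alert at this threshold?
    alerted = [c for c in cases if c.score >= threshold]
    caught = sum(c.confirmed_suspicious for c in alerted)
    missed = sum(c.confirmed_suspicious for c in cases if c.score < threshold)
    return {
        "threshold": threshold,
        "alerts_raised": len(alerted),  # proxy for analyst workload
        "confirmed_caught": caught,
        "confirmed_missed": missed,     # should be driven toward zero
    }

if __name__ == "__main__":
    history = [
        HistoricalCase("H-1", 0.92, True),
        HistoricalCase("H-2", 0.35, False),
        HistoricalCase("H-3", 0.61, True),
        HistoricalCase("H-4", 0.10, False),
    ]
    for threshold in (0.5, 0.7, 0.9):
        print(evaluate_threshold(history, threshold))
```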

 

Technologies enable tasks to be performed more quickly and allow for the processing of larger volumes of data; however, they cannot replace human judgment. Therefore, it can be concluded that a sustainable organization’s compliance assurance system should be based on synergy between humans and applied technologies, where oversight is exercised by humans with the support of artificial intelligence, rather than the other way around. Only such an approach can ensure that innovation in the financial sector remains safe, reliable, and ethical, while enabling organizations to avoid regulatory breaches and potential supervisory sanctions.

 

AI and AML

The ExpertLab team will help you ensure that the models you apply support your business in identifying, managing, detecting, and monitoring new risks, while ensuring human oversight, compliance with applicable legal requirements, and the sustainable and responsible development of innovation.
