Navigating Gen AI’s Ethical Landscape When Combatting Financial Fraud and Money Laundering

First printed in the International Legal Technology Association Journal

Spring 2024 Vol. 40 | No. 1

Artificial intelligence (AI) technologies have ignited significant interest in the financial payments industry, with their potential to rapidly analyze extensive datasets and mitigate human biases, thereby enhancing the effectiveness of financial crime prevention. However, recent advancements in AI, particularly in large language models (LLMs) and natural language processing, have introduced a novel iteration of technological intelligence: Generative AI, or Gen AI. Unlike traditional AI, which analyzes historical data to produce classifications and numeric predictions, Gen AI can create entirely new content, often indistinguishable from human-generated material, by learning patterns from its training data rather than merely extrapolating from it.


Most financial payment solutions are pre-packaged products, using existing Gen AI models or APIs within layers of code tailored for specific applications in payment processing. Still, many organizations are integrating Gen AI into their operations and expressing intentions to invest in the space further. According to a McKinsey poll, across industries, a third of the organizations surveyed already use Gen AI, 40% are planning further investment, and 28% are including it on their board's agenda. It is essential to consider the ethical governance and regulatory frameworks the financial and legal industries should implement as these advanced AI-powered tools become more commonplace.


Outputs are Only as Good as the Inputs 


The legal and financial industries' increasing reliance on Gen AI introduces the risk of discriminatory outcomes and undermines the fairness of decision-making processes. The complexity intensifies when tech developers treat their Gen AI models as black boxes, rendering the decision-making process opaque and mysterious. Like an exam student submitting an answer without showing their calculations, the inability to understand how Gen AI arrives at its decisions poses a fundamental threat to accountability. In fields like healthcare, finance, and law, where Gen AI plays a critical role, the consequences of errors made by these black boxes can be dire. If Gen AI systems are trained on data containing biases, those biases may be perpetuated and amplified within the decision-making process, leading to unfair and unjust outcomes.


Take predictive policing tools, for instance, constructed upon notoriously flawed and prejudiced crime data, which perpetuate cycles of discrimination. By prioritizing already over-policed neighborhoods, predictive policing algorithms amplify existing biases and exacerbate inequalities. The consequences are dire, as individuals find themselves trapped in a web of suspicion and surveillance simply due to their geographical location or appearance. The danger lies in Gen AI's ability to cloak biased decisions under the guise of impartial mathematical algorithms. This phenomenon, known as tech-washing, obscures the reality of systemic injustices. As researchers delve deeper into the murky waters of predictive policing, a disturbing trend emerges: predictive policing software disproportionately targets working-class communities and people of color, particularly Black individuals, in relentless cycles of unnecessary surveillance and undue suspicion.


The evolution of Gen AI imaging software has ushered in a new era of visual creation, but with it comes the shadow of bias and stereotype perpetuation. The latest iterations of image generators, such as Stable Diffusion XL, trained on web-scale datasets like LAION-400M, boast advancements in bias reduction. However, they remain enmeshed in outdated and harmful clichés. Despite efforts to refine their algorithms, these tools continue producing images that reinforce Western-centric stereotypes and distortions. From caricatured portrayals of ethnicity to gendered assumptions about household roles, the consequences of biased image generation are profound and far-reaching. The very fabric of these Gen AI-powered tools is woven from the depths of the internet, where xenophobia, racism, misogyny, violence, bigotry, and abusive tendencies fester unchecked.


Despite efforts to sterilize these datasets, filtering out problematic content proves to be a Sisyphean task. Remnants of cultural bias linger, distorting representations of race, gender, and wealth. From disproportionately representing individuals who appear White, female, and youthful to perpetuating tropes about race, class, and intelligence, the impact of biased Gen AI extends far beyond mere pixels on a screen. This underscores the critical need for responsible Gen AI deployment: selecting accurate, audited data sources and establishing fully auditable tools and processes to track the effectiveness and impact of Gen AI optimizations.


Moreover, privacy protection and rigorous auditing processes are imperative to prevent biases from seeping into Gen AI systems. In New York City, an ordinance addressing bias in automated systems used for hiring took effect in the summer of 2023. Rather than wait out the typical procedural gridlock in Congress, the city took matters into its own hands: it prohibited employers and employment agencies from using an automated employment decision tool in New York City unless they ensure a bias audit is done and provide required notices. Under the new law, employers that want to use Gen AI systems in their hiring procedures must publish an annual bias audit report demonstrating that their use of Gen AI in employment practices withstands scrutiny for bias. New York businesses must also inform applicants and employees whenever Gen AI-driven tools contribute to employment-related decisions.
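As a rough illustration of what such a bias audit measures, the central metric in NYC-style audits is an impact ratio: each group's selection rate compared against the most-favored group's. The sketch below uses invented group labels and numbers; real audits are considerably more involved.

```python
# Core metric of a NYC-style bias audit: the "impact ratio" compares
# each group's selection rate to the most-favored group's rate.
# All group names and figures below are invented for illustration.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns group -> rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening results from an automated hiring tool.
audit = {
    "group_a": (40, 100),   # 40% selected
    "group_b": (24, 100),   # 24% selected
    "group_c": (36, 100),   # 36% selected
}

for group, ratio in impact_ratios(audit).items():
    # The 4/5ths rule is a common (non-statutory) rule of thumb.
    flag = "review" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_b's ratio (0.24 / 0.40 = 0.60) would prompt a closer look, which is exactly the kind of finding an annual audit report is meant to surface.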


Transformative Potential in Combating Financial Fraud and Money Laundering 


While the ethical implications surrounding Gen AI are profound and warrant careful consideration, a compelling case exists for embracing this technology within the financial and legal sectors, from streamlining internal operations to bolstering security measures. In the payments industry, the initial adoption of Gen AI focused on internal functions, ranging from streamlining IT requests to managing internal expense reporting, informing lending processes, processing payments, and automating corporate employee expense payments for activities such as travel. Perhaps the most groundbreaking application of Gen AI in the payments industry, however, lies in its capacity to detect and prevent service violations, particularly in combating fraud and money laundering.


As officials from regulatory bodies such as the Financial Conduct Authority and the U.S. Securities and Exchange Commission have publicly acknowledged, Gen AI technologies can drive operational efficiency, particularly in customer due diligence, screening, and transaction monitoring controls. Industry leaders like Stripe and Mastercard have already begun leveraging Gen AI to enhance their operations. Stripe utilizes ML algorithms to bolster fraud prevention measures while optimizing product offerings across its business spectrum. Similarly, Mastercard harnesses Gen AI models to ensure the security of over 125 billion transactions annually, extending its applications to customer experience enhancement, treasury management, and product testing.


Gen AI's Impact on Customer Due Diligence and Identity Verification 


Gen AI is increasingly applied to customer due diligence and screening controls, leveraging natural language processing and text mining techniques to enhance risk assessment processes. Integrating Gen AI with human activity drives innovation in AML practices, potentially leading to a more holistic approach to know-your-customer (KYC) procedures. The integration of Gen AI has significant implications for KYC processes, especially in the non-traditional space of fintech.  AI-driven KYC solutions can enhance regulatory compliance, improve risk management practices, and elevate the overall customer experience by harnessing advanced biometrics and, potentially, social profiles.
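To make "screening controls" concrete, the sketch below fuzzy-matches customer names against a watchlist using Python's standard-library difflib. Production screening engines rely on much richer NLP (aliases, transliteration, entity resolution); the watchlist entries and similarity threshold here are invented.

```python
# Minimal sketch of a name-screening control: compare an onboarding
# customer's name against a sanctions/PEP watchlist and return any
# entries above a similarity threshold. Entries are fictional.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Shell Holdings Ltd", "Jane Q Launderer"]

def screen(name, threshold=0.85):
    """Return (entry, score) pairs whose similarity to name meets threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits
```

A near-miss spelling such as "Ivan Petrof" would still score highly against "Ivan Petrov," which is why fuzzy matching, rather than exact lookup, underpins real screening systems.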


A popular approach to KYC verification incorporates challenge questions during user login to strengthen the verification process. This method uses user data to monitor and compare login activities with new login attempts. Parameters such as failed login attempts, new user registrations, clients with limited identity details, or alterations in transaction patterns are analyzed within the challenge question technique, which can be implemented across many login flows. Risk factors are then calculated from these parameters, enabling the system to flag, challenge, or log out users exhibiting suspicious behavior.
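The parameter-based risk calculation described above can be sketched as a simple weighted score. The signal names, weights, and thresholds below are invented for illustration; a production system would tune or learn them from historical fraud data.

```python
# Hedged sketch of login risk scoring: each observed signal contributes
# a weighted amount, and the combined score drives the login decision.
# Weights and thresholds are illustrative assumptions only.

RISK_WEIGHTS = {
    "failed_logins": 0.15,    # per recent failed attempt
    "new_account": 0.30,      # account registered very recently
    "sparse_identity": 0.25,  # few verified identity details on file
    "pattern_shift": 0.40,    # transaction behavior deviates from history
}

def risk_score(signals):
    """Combine observed signals into a single score capped at 1.0."""
    score = sum(RISK_WEIGHTS[name] * value for name, value in signals.items())
    return min(score, 1.0)

def login_decision(signals, challenge_at=0.3, lockout_at=0.7):
    """Map a session's risk score to allow / challenge / lockout."""
    score = risk_score(signals)
    if score >= lockout_at:
        return "log out and flag for review"
    if score >= challenge_at:
        return "present challenge questions"
    return "allow"

# A returning user with one recent failed attempt and a shifted
# spending pattern lands in the challenge-question band.
session = {"failed_logins": 1, "new_account": 0,
           "sparse_identity": 0, "pattern_shift": 1}
```

In this toy model the session scores 0.55, so the user is asked challenge questions rather than being locked out outright, mirroring the tiered response the technique is meant to enable.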


A step up from the traditional challenge questions, Hewlett Packard Enterprise (HPE) unveiled, in March 2024, the enhancement of its AIOps network management features by integrating multiple Gen AI large language models into a cloud-native network management solution, diverging from conventional Gen AI methodologies that rely on API calls to public LLMs. This network incorporates a self-contained set of LLMs with pre-processing techniques and safeguards to enhance user experience and operational efficiency, prioritizing search response times, accuracy, and data privacy. HPE amassed telemetry from nearly four million network-managed devices and over one billion distinct customer endpoints, fueling the ML models for predictive analytics and recommendations. The result? Complementing existing ML-based Gen AI across the platform to deliver deeper insights, enhanced analytics, and proactive capabilities. The more data it consumes, the better trained the Gen AI systems become. Integrating the risk assessment, monitoring, investigative, and due diligence processes, as HPE has, shows Gen AI serving as an agent of change to break down silos and provide a more contextual basis for determining risk and detecting suspicious activity.


Global Applications of Gen AI in Payment and AML 


Efforts to combat money laundering globally have led to the development of numerous international, regional, and local regulations and directives. These efforts aim to protect the integrity of the global financial system and address the pervasive threat of money laundering and terrorist financing. The European Union, for example, has implemented AML and countering the financing of terrorism (CFT) laws that outline the responsibilities of financial institutions, including KYC verification, transaction monitoring, identification of politically exposed persons (PEPs), and reporting of suspicious activities to authorities. Establishing specialized Financial Intelligence Units (FIUs) worldwide demonstrates governments' collaborative approach to combating money laundering. FIUs are crucial in analyzing suspicious activity reports from financial institutions and coordinating with relevant authorities to investigate and prosecute money laundering and financial fraud cases. At the international level, the Financial Action Task Force (FATF) is an independent body responsible for developing and promoting policies to safeguard the global financial system against money laundering and terrorist financing. FATF's recommendations set global standards for AML and CFT practices, with each member country required to identify risks, implement preventive measures, and enhance transparency and international cooperation.


Regional and national initiatives also complement global efforts in combating money laundering, like the Japanese Society for Artificial Intelligence's Ethical Guidelines, which produced a proposal for federal algorithmic auditing (later adopted by the US Senate Intelligence Committee). The European Union's General Data Protection Regulation (GDPR) and the EU's subsequent AI Act set ethical standards and data governance principles for Gen AI, ensuring unbiased decision-making and protecting individual privacy rights. EU data protection authorities began studying the impacts of Gen AI under the GDPR in 2020, and the EU Council and Parliament struck a deal on the first rules for Gen AI in December 2023. In the US, the National Institute of Standards and Technology, one of the first bodies to set solid guidelines for blockchain during its boom, released its AI Risk Management Framework in January 2023.


Recent joint statements from government agencies highlight the urgency for businesses to proactively evaluate Gen AI's impact on civil liberties, consumer protection, and equal opportunities. This call to action mirrors the challenges blockchain technology faced at its boom three years ago, with both industries grappling with the information and understanding gaps between tech companies and legislators. Gen AI-wrapped or Gen AI-focused tech companies, like the blockchain/DeFi start-ups that sprang up around open-source releases, are slow to put privacy or regulatory protocols in place at inception; regulatory counsels and advisory boards come later. In blockchain's first round, those bodies looked like the Global Blockchain Business Council and bespoke advocacy groups such as Stand With Crypto. Now, they look like the Center for AI Safety and the AI Alliance, which boasts membership from leaders at AMD, Meta, and IBM. Similarly, US regulations are developing from the inside out and the outside in, with Congress the last to catch up: individual states are leading in creating legislative policy and attempting to regulate Gen AI technology as the EU, Asia, and Latin America develop their own protocols.



Conclusion 


The information shared here underscores the necessity of balancing technological advancement with ethical imperatives and regulatory compliance. While Gen AI holds the potential to revolutionize cultural institutions like the legal and financial sectors within our society, responsible and ethical implementation requires the integration of human oversight, prioritization of data privacy, and compliance with industry-wide regulatory standards. Only through such conscientious practices can the full potential of Gen AI be realized. Both legal and financial institutions must prioritize training Gen AI models on diverse datasets to mitigate biases and ensure equitable outcomes. Accountability is also necessary, and not just at the regulatory level: companies must embrace accountability for Gen AI's mistakes, recognizing that these technologies are their responsibility. Committees and positions focused on Gen AI and accountability can aid in this endeavor, ensuring a proactive approach to addressing errors and mitigating risks in Gen AI-powered AML systems.


As Gen AI relies heavily on data analytics and processing, protecting customers' information becomes paramount. Legal professionals, firms, and regulatory entities must prioritize robust data privacy policies, developed with the guidance of legal counsel, to ensure compliance with regulatory frameworks and protect customers' sensitive data from unauthorized access or misuse. In essence, the ethical implications of leveraging Gen AI in combating financial fraud and money laundering necessitate a multidimensional approach that prioritizes ethical considerations, regulatory compliance, and the protection of consumer interests. By fostering a symbiotic relationship between technological innovation and moral imperatives, we can harness the transformative potential of Gen AI while upholding integrity, transparency, and trust in the evolutionary advancements of the financial and legal ecosystem. 


First published in the International Legal Technology Association Quarterly Journal Spring 2024; see https://epubs.iltanet.org/i/1521210-spring24/33?.

Citation: Nicoletta Kolpakov, “Navigating Gen AI’s Ethical Landscape When Combatting Financial Fraud and Money Laundering,” 40 Int’l Legal Tech Assn. Q. 34 (2024).
