Don’t Botch It—AML Compliance Opportunities, Risks With Chat Bots

By Andrew Bigart

Andrew E. Bigart is a partner at Venable in Washington, D.C., where he focuses his practice on antitrust and consumer protection law, payments and financial services, and business counseling.

The next time you attempt a transaction with your bank, you may find yourself talking to a chat bot or similar virtual assistant instead of a live person. While the rise of chat bot technology has received significant press, little attention has been paid to the anti-money laundering (AML) compliance implications of replacing front-line employees, whether tellers, loan officers, or customer service representatives, with bots and similar virtual assistants (collectively, “bots”).

To be sure, artificial intelligence and machine learning, as a whole, have the potential to improve anti-money laundering efforts. But the use of bot technology in place of traditional human interaction is a development that banks and other financial institutions (FIs) should address within their existing AML, fraud, and consumer protection policies and procedures. Otherwise, FIs may leave a gap in their compliance programs and expose themselves to regulatory risk.

The Rise of Chat Bots and Artificial Intelligence

Bots are small pieces of code designed to perform a single task formerly done by a human, such as opening an account application or handling customer inquiries online or by phone. According to media reports, many FIs are supplementing traditional consumer-facing positions with chat bots and similar technologies, deploying them in account opening, operations, customer service, and elsewhere. A number of banks are experimenting with bots, and industry observers expect these technologies to revolutionize the way banks gather information and interact with clients.

To date, however, there has been little discussion of the potential positive or negative AML and consumer fraud implications of using bots in place of humans. On the one hand, using bots for certain functions may reduce insider risk by removing opportunities for employees to engage in fraud or coordinate with criminals outside the bank; indeed, bots and similar technologies may produce a net benefit in managing compliance risk. On the other hand, customer-facing employees play an important role in monitoring for fraud or suspicious behavior. And while bots may be programmed to identify patterns of behavior that are clearly “suspicious,” they lack an important tool for identifying and, more importantly, preventing behavior that is merely out of the ordinary: human instinct.

Federal regulators, whether the Financial Crimes Enforcement Network (FinCEN), the Department of Justice, or the prudential banking agencies, expect financial institutions to monitor customer interactions for signs of money laundering or fraud. The failure to do so can lead to enforcement actions and stiff penalties. For example, in January 2017, Western Union entered into a consent agreement with FinCEN, and a deferred prosecution agreement with the Department of Justice, for alleged anti-money laundering and consumer fraud deficiencies in its supervision of its agents.

Western Union was fined for failing to monitor its agents and, in turn, to ensure they were properly trained and diligently identifying persons receiving money abroad, especially in Mexico. Western Union was also found to lack adequate policies and discipline procedures, including training, compliance inspections, and termination, for taking corrective action against agent locations that facilitated a high volume of fraud or money laundering transactions.

While the Western Union case didn’t involve bots, it highlights the potential for increased compliance risk in an increasingly automated, remote banking system. Using the case as a springboard, it’s not hard to imagine regulators scrutinizing the use of bots or other technology for AML or consumer fraud gaps. The case demonstrates that regulators expect humans at branches and agent locations to be trained to spot red flags and to serve as the first line of defense against money laundering, especially in high-risk regions. Regulators will expect no less from FIs that deploy bots or other AI technology to interact with customers. In this regard, the use of bots, while efficient, can create compliance risk if their algorithms are not designed to spot red flags or unusual behavior.

Integrating Bots With AML Compliance

FinCEN, the Federal Financial Institutions Examination Council (FFIEC), and other regulators have yet to publish guidelines addressing the use of bots or AI in financial transactions. To remain compliant with Bank Secrecy Act (BSA)/AML laws, FIs should take internal measures, such as updating their AML and client relationship policies and procedures, when introducing chat bots and other AI technology to automate customer-facing interactions. These updated policies and procedures should account for the fact that customers will no longer be interacting with staff trained to look for suspicious behavior and red flags, and should ensure that the technology is sophisticated enough to spot potential red flags and alert supervisory teams.

Some recommendations for managing the AML risks that may arise include:

  •  Continue to train employees in customer-facing positions to monitor for and identify suspicious activity.
  •  Incorporate effective methods to authenticate the identity of customers when interacting with them through automated technology.
  •  Institute comprehensive customer identification requirements, including requesting the purpose of transactions and ensuring there are algorithms in place to check consistency.
  •  Incorporate key words, scripts, or patterns of speech that signal suspicious activity, and configure systems to generate alerts requiring additional review by staff (a minimal sketch of this approach follows this list).
  •  Institute adequate reporting mechanisms to promptly inform security administrators when a bot detects suspicious behavior, so that accounts can be temporarily flagged or disabled.
  •  Incorporate geolocation technology to identify when users are in high-risk regions that may require specialized policies and procedures and heightened scrutiny.
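The recommendations above stay at the policy level. As a concrete illustration, the following sketch shows one way the keyword-screening and geolocation items might be wired into a bot’s message handler. This is a minimal Python sketch under stated assumptions: the pattern list, region codes, and names such as ChatEvent and screen_chat_event are hypothetical and are not drawn from any regulator’s guidance or vendor API.

    import re
    from dataclasses import dataclass, field
    from typing import List, Optional

    # Hypothetical phrase list for illustration; a real deployment would
    # tune these patterns against the FI's own suspicious-activity typologies.
    SUSPICIOUS_PATTERNS = [
        re.compile(r"\bstructur(e|ing)\b", re.IGNORECASE),
        re.compile(r"\bavoid (the )?report(ing)?\b", re.IGNORECASE),
        re.compile(r"\bjust under \$?10,?000\b", re.IGNORECASE),
    ]

    # Hypothetical high-risk jurisdiction codes, also for illustration only.
    HIGH_RISK_REGIONS = {"XX", "YY"}

    @dataclass
    class ChatEvent:
        account_id: str
        message: str
        country_code: str  # e.g., derived from geolocation, per the list above

    @dataclass
    class Alert:
        account_id: str
        reasons: List[str] = field(default_factory=list)

    def screen_chat_event(event: ChatEvent) -> Optional[Alert]:
        """Return an Alert for human review if the event trips any rule."""
        reasons = []
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(event.message):
                reasons.append("matched pattern: " + pattern.pattern)
        if event.country_code in HIGH_RISK_REGIONS:
            reasons.append("high-risk region: " + event.country_code)
        if reasons:
            # In practice this would notify security administrators and could
            # temporarily flag or disable the account pending review.
            return Alert(account_id=event.account_id, reasons=reasons)
        return None

    # Example: a message that should be escalated rather than auto-processed.
    alert = screen_chat_event(ChatEvent(
        account_id="A-123",
        message="Can I split this into deposits just under $10,000?",
        country_code="US",
    ))
    if alert:
        print("Escalate", alert.account_id, alert.reasons)

In practice, simple rules like these would be paired with the FI’s broader transaction monitoring, with alerts feeding into its existing case management and suspicious activity reporting workflows rather than standing alone.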

In closing, as more and more financial institutions experiment with bots and other automated programs in customer-facing roles, it is important that these programs be implemented in ways that take into account AML and consumer fraud risks. A core aspect of AML compliance has always included vigilant front-line employees trained to look for suspicious customer activity, and regulators will expect similar vigilance as new technologies are rolled out.

Copyright © 2018 The Bureau of National Affairs, Inc. All Rights Reserved.