Revealing the Blind Spots: A Critical Review of the CFPB’s Issue Spotlight on Chatbots in Consumer Finance

June 21, 2023

Clark Hill Summer Associate Pawan Jain contributed to this article.

As the evolution of Artificial Intelligence (“AI”) dominates headlines around the globe, financial institutions have been paying attention. Not surprisingly, so has the Consumer Financial Protection Bureau (“CFPB” or “Bureau”). Just as financial sector companies feverishly ramp up their use of AI to complete, augment, and personalize customer service interactions, the CFPB is pumping the proverbial brakes.

On June 6, the Bureau published an Issue Spotlight (“Spotlight”) titled “Chatbots in Consumer Finance,” cautioning that the industry’s reliance on advanced chatbots can lead to violations of consumer financial laws, harm consumers by providing inaccurate information, and diminish customer service. The Spotlight indicates that automated chatbots, especially those fueled by AI and related technology, will be a new area of focus for the CFPB, in terms of both supervision and enforcement.

The underlying technologies used in these customer-facing chatbots include large language models, AI, generative machine learning, neural networks, and natural language processing (NLP). These technologies enable chatbots to simulate human-like responses and automatically generate chat responses using text and voice.

The consumer finance industry has widely adopted chatbots as a cost-effective alternative to human customer service and continues to do so at a breakneck pace. According to research cited by the CFPB, approximately 37% of the U.S. population engaged with a bank’s chatbot last year, and the CFPB indicates this number is projected to grow to 110.9 million users by 2026. The adoption of chatbots has not only resulted in significant cost savings for financial sector firms; studies indicate it has also improved consumer experiences. Chatbot adoption in the financial industry is therefore expected to keep growing at a rapid pace.

The Spotlight, however, presents the agency’s pessimistic view of the use of chatbots for customer service in the consumer finance industry. It highlights several risks associated with chatbots, including: (1) risk of noncompliance with federal consumer financial laws; (2) risk of harm to consumers; (3) erosion of trust and deterrence from seeking help; and (4) customer frustration.

A critical question is how the Bureau came to this conclusion. Was it based on a scientific study, a focus group, or mere conjecture? The Spotlight claims to be based on a significant number of complaints filed by consumers on the CFPB’s complaint portal. However, a simple search of the complaint database reveals that, as of the date of this alert’s publication, there are only 64 total complaints that mention chatbots or artificial intelligence in searchable complaint narratives over the past five years. During the same period, 50,341 complaints regarding poor customer service experiences (unrelated to chatbots) were filed with the Bureau. Hence, only 0.13% of the poor customer service complaints filed related to bad experiences with a chatbot. Measured against the total number of complaints filed with the CFPB, this figure drops to a mere 0.0024%.
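For readers who wish to verify the math, those percentages follow directly from the counts above. Note that the total complaint volume implied by the 0.0024% figure (roughly 2.7 million) is our back-calculation, not a number stated in the Spotlight. A minimal sketch in Python:

```python
# Reproducing the complaint percentages cited above.
chatbot_complaints = 64        # narratives mentioning chatbots or AI, past five years
service_complaints = 50_341    # poor-customer-service complaints, same period

share = chatbot_complaints / service_complaints * 100
print(f"{share:.2f}% of customer service complaints")   # -> 0.13%

# Backing out the total complaint volume implied by the 0.0024% figure
# (an inference from the percentage, not a number stated in the Spotlight):
implied_total = chatbot_complaints / 0.0024 * 100
print(f"~{implied_total:,.0f} total complaints")        # -> ~2,666,667
```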

The CFPB’s report points to compliance issues with the use of chatbots in the consumer finance industry. The Spotlight suggests that chatbots have difficulty recognizing and resolving consumers’ disputes. Surprisingly, however, some of the Spotlight’s supporting evidence is actually based on a consumer’s experiences and frustrations while interacting with a human customer service agent. For example, on page 10 of the Spotlight, the CFPB presents a consumer complaint as evidence of “parroting” by chatbots. A review of the full narrative of that complaint, however, indicates that a “human” agent, not a chatbot, was regurgitating the same information without resolving the consumer’s actual issues.

The Spotlight further argues that the use of chatbots can be problematic for consumers with limited English proficiency. Human customer service agents, however, can be even more prone to language barriers, while newer generations of chatbots are known to support more than 50 languages, a range that goes well beyond the capacity of a human brain.

Next, the Spotlight suggests that AI-powered chatbots pose significant security risks through impersonation and phishing scams. While the mass adoption of generative AI technologies can certainly increase the frequency of phishing scams, these security risks are still mostly initiated by humans, and AI can also be deployed as a countervailing measure to defend against many types of data breaches. Further, a recent large-scale study from Hoxhunt documents that phishing scams initiated using chatbots were 69% less effective than those initiated by humans. Earlier this year, the CFPB itself experienced a significant security breach triggered by human error. It is worth considering that a properly trained AI system might have helped the CFPB avert this breach by blocking the transfer of unauthorized data to an external email account. As highlighted by Hoxhunt’s study, the CFPB could and should focus on offering security training programs to consumers going forward.

Lastly, the Spotlight suggests that chatbots lead consumers in “continuous loops of repetitive, unhelpful jargon or legalese without an offramp to a human customer service representative.” This “doom loop,” according to the Spotlight, might lead to customer frustration and dissatisfaction. The conclusions drawn by the CFPB, however, are seemingly unsubstantiated. By its own figures discussed earlier, the CFPB estimates that 37% of the U.S. population interacted with a bank’s chatbot in 2022, meaning over 98 million people used chatbots in the past year. If the CFPB’s account of user frustration were accurate, one would expect far more than 64 complaints to the Bureau. Further, on page 7 of the Spotlight, the CFPB highlights that Bank of America’s chatbot, Erica, had been used by nearly 32 million customers, with over one billion interactions in the four years since its launch. Even with such a high volume, a simple search of the CFPB’s complaint portal reveals only four bad customer service experiences related to artificial intelligence with Bank of America during that same four-year period.
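As a back-of-the-envelope check, the “over 98 million” figure is consistent with applying the 37% estimate to the U.S. adult population; the roughly 266 million adult-population base is our assumption, as neither the Spotlight excerpt nor this alert states it:

```python
# Back-of-the-envelope check of the chatbot usage figure cited above.
# Assumption: the 37% estimate applies to the U.S. adult population,
# roughly 266 million in 2022 (the base is not stated in the source).
adult_population = 266_000_000
chatbot_users = 0.37 * adult_population
print(f"~{chatbot_users / 1e6:.0f} million bank-chatbot users in 2022")  # -> ~98 million
```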

Finally, it is important to note that the CFPB itself has been using rule-based AI in its online complaint portal. Upon providing their personal information on the portal, consumers are asked to select categories and subcategories before they are given a text editor in which to file their complaints. The sorting of complaints into those categories is itself a form of rule-based automation. The CFPB also uses rule-based chatbots to support its telephone-based customer service. The Bureau, therefore, is not immune from enjoying the benefits of automated communication. Indeed, the CFPB might benefit from more sophisticated models that further leverage AI, avoiding the consumer frustration that can escalate when only rule-based models, an aging technology, are used.
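To illustrate the distinction, here is a minimal, hypothetical sketch of the kind of rule-based routing described above; the categories and keywords are our own illustrative examples, not the CFPB’s actual portal logic:

```python
# Minimal sketch of rule-based complaint routing, as contrasted with an
# AI-driven chatbot. Categories and keywords are hypothetical examples.

ROUTING_RULES = {
    "credit report": "Credit Reporting",
    "mortgage": "Mortgages",
    "debt collector": "Debt Collection",
    "card": "Credit Cards",
}

def route_complaint(text: str) -> str:
    """Assign a complaint to a category using fixed keyword rules.

    A rule-based system can only recognize the patterns it was explicitly
    given; anything else falls through to a default bucket, which is where
    the "doom loop" frustration tends to arise.
    """
    lowered = text.lower()
    for keyword, category in ROUTING_RULES.items():
        if keyword in lowered:
            return category
    return "Other / needs human review"

print(route_complaint("A debt collector keeps calling about a paid account"))
# -> Debt Collection
```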

The CFPB’s all-or-nothing approach is counterproductive at this early stage of the AI debate. Most consumer-facing companies have been instituting a hybrid approach to AI, in which simple or routine consumer issues are handled by chatbots, freeing up human agents’ time to deal with more complex issues. The CFPB makes no mention of the fact that many financial services entities and institutions are using AI to enhance compliance, what is sometimes called “AI for good.” It is essential to balance the need for innovation and progress with the need to protect consumers and ensure fair practices. Undoubtedly, AI is not immune from mistakes, and it can instill a tremendous fear of the unknown when it is not used in the right way. The task of protecting consumers requires an honest conversation about the good and bad that surround AI, along with a measured approach to current outcomes and opportunities. Unrealistic hypotheticals may result in consumers losing out in the end. Striking the right balance is a complex challenge that regulators, companies, and consumer advocates must work together to solve, and it will be worth the effort.

Clark Hill’s Financial Services Regulatory and Compliance Practice Group provides effective representation during enforcement and supervision, technical guidance, policy advice, and strategic planning and outreach to relevant stakeholders in the financial services industry. Our exceptional team of lawyers and government and regulatory advisors has extensive experience in – and an in-depth understanding of – the laws and regulations governing financial products and services. For more information, please contact Joann Needleman (jneedleman@clarkhill.com), and Aryeh “Ari” Derman (aderman@clarkhill.com).
