Regulators vs. Robots: AI Under Scrutiny
Remember The Jetsons? That cartoon imagined an optimistic future in which humans zipped off to work in flying cars and had sassy housecleaning robots like Rosey to make their lives more convenient. The show originally aired in the early 1960s, making it about 60 years old (wow!). More than six decades later, we still don’t have the flying cars, but in many ways technology has moved us closer to the future imagined in the show: the paper newspapers of the 1960s have been replaced by websites and social media, maps have been replaced by GPS apps, and we all carry handheld computers (smartphones) with us wherever we go. Technology has transformed financial services dramatically as well, with many credit union members (especially the younger generations) relying on technology such as remote deposit capture, peer-to-peer payment transfers, and online banking to conduct their transactions without ever setting foot inside a branch.
For those looking ahead, many have identified artificial intelligence (AI) as a technology that will have growing importance for the financial services industry in years to come. One of the value propositions of AI is that it can be similar to Rosey on The Jetsons: much like Rosey took care of the cooking and cleaning so the Jetson family could focus on leisure and time together, AI can take care of some of the minor tasks or more data-driven processes of a credit union, freeing up credit union employees to focus on other work (though, sadly, AI will probably have less personality than Rosey did). Financial institutions have begun using AI for tasks such as underwriting and transaction analysis. In recent years, AI has made tremendous progress toward mimicking human behavior; you might have read about how ChatGPT has been used to create art and music and to mimic human writing and conversation styles. Thus, it is no surprise that some institutions have begun to use AI “chatbots.” While their complexity varies, chatbots are generally systems in which a consumer asks for assistance and the system responds, typically by pointing the consumer toward information, often in the form of a simulated conversation. In recent months, the federal financial regulators have increasingly taken notice of AI and its possible implications for regulatory compliance. Let’s review:
First, the National Credit Union Administration (NCUA), Consumer Financial Protection Bureau (CFPB or bureau) and other financial regulators issued a Request for Information (RFI) in March 2021, which sought to gather information on how financial institutions – including credit unions – were using AI and machine learning. The RFI asked questions relating to several topics, including alternative data in underwriting, cybersecurity implications and fair lending.
Second, in May 2022, the CFPB published a consumer financial protection circular on the topic of “credit decisions based on complex algorithms.” The circular notes that when a credit union takes adverse action on a credit application, an adverse action notice is required under Regulation B, and that notice must provide the “specific reasons” for the adverse action. In the circular, the CFPB clarified that if a credit union takes adverse action based on the results of an algorithm, simply saying “our algorithm denied your application” will not suffice; instead, the credit union must provide the specific reasons why the algorithm resulted in an adverse action decision. The bureau warns that a technology’s complexity or opacity is not an excuse for violating Regulation B’s requirements.
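For readers curious how “specific reasons” might be surfaced from a scoring model in practice, here is a minimal, hypothetical sketch of one common approach: ranking the inputs that pushed an applicant’s score down the most. The feature names, weights, and reason language below are invented for illustration only and are not a compliance-reviewed method.

```python
# A hedged sketch of deriving "specific reasons" from a simple scoring
# model: rank each input's contribution to the decline and report the
# most negative contributors. All weights, features, and reason text
# here are hypothetical.

# Hypothetical linear score: each feature's weight times its value.
WEIGHTS = {
    "payment_history": 0.5,
    "debt_to_income": -0.4,
    "credit_utilization": -0.3,
    "account_age_years": 0.2,
}
REASON_TEXT = {
    "payment_history": "Insufficient payment history",
    "debt_to_income": "Debt-to-income ratio too high",
    "credit_utilization": "Credit utilization too high",
    "account_age_years": "Length of credit history too short",
}

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Return the top_n features that pushed the score down the most --
    the kind of 'specific reasons' an adverse action notice calls for,
    rather than 'our algorithm denied your application.'"""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [REASON_TEXT[f] for f in worst]

applicant = {"payment_history": 0.2, "debt_to_income": 0.6,
             "credit_utilization": 0.9, "account_age_years": 1.0}
print(adverse_action_reasons(applicant))
# ['Credit utilization too high', 'Debt-to-income ratio too high']
```

Real underwriting models are far more complex, but the underlying point of the circular is the same: whoever builds or buys the model needs some way to translate its output into the specific reasons Regulation B requires.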
Just a few months ago, in April 2023, the CFPB joined the Department of Justice, the Equal Employment Opportunity Commission and the Federal Trade Commission in a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. In this statement, the agencies discuss the use of AI and automated systems in U.S. society, in financial services but also in other sectors, and warn that “[a]lthough many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” The joint statement also warns that flawed data sets (such as historical bias in the data), lack of transparency regarding how models work, and mistaken assumptions about how processes will be used could result in unlawful discrimination against consumers.
Most recently, last week the CFPB published an “issue spotlight” report on the topic of “chatbots in consumer finance.” The bureau’s report concludes that financial institutions are increasingly using AI chatbots to interact with their customers. The report describes the spectrum of complexity among chatbots: some are rudimentary systems that can only accept preset inputs from consumers (e.g., “please select from the following options…”), whereas others are more complex and can essentially mimic human conversation. The report details complaints the CFPB has received from consumers, including complaints that some chatbots merely send consumers in circles (aka “doom loops”) without ever resolving their issue, and without any option to speak to an actual human being. Additionally, the bureau asserts that some chatbots have generated fictional outputs, meaning the information provided might be factually inaccurate or misleading. The report also warns that chatbots may create cybersecurity risks by being targets for phishing attempts, and may pose data privacy challenges because the information entered by the consumer will need to be protected.
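To make the “rudimentary” end of that spectrum concrete, here is a minimal, hypothetical sketch of a preset-input chatbot. The menu options and canned responses are invented for illustration; the point is simply that this kind of system cannot understand anything outside its menu.

```python
# A minimal sketch of the "rudimentary" end of the chatbot spectrum the
# report describes: the system only accepts preset inputs and returns
# canned responses. All menu options and replies are hypothetical.

PRESET_OPTIONS = {
    "1": "Our branch hours are 9 a.m. to 5 p.m., Monday through Friday.",
    "2": "To report a lost or stolen card, call the number on our website.",
    "3": "Current loan rates are posted on our rates page.",
}

def rudimentary_chatbot() -> None:
    """Loop that only understands the preset menu choices."""
    while True:
        print("Please select from the following options:")
        print("  1) Branch hours   2) Lost/stolen card   3) Loan rates   q) Quit")
        choice = input("> ").strip().lower()
        if choice == "q":
            break
        # Anything outside the preset inputs is rejected -- this rigidity
        # is what separates rudimentary chatbots from conversational ones,
        # and what can trap members in "doom loops."
        print(PRESET_OPTIONS.get(choice, "Sorry, I can only respond to the options above."))

if __name__ == "__main__":
    rudimentary_chatbot()
```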
Importantly, the CFPB also made it a point to mention that regulatory requirements still apply to the credit union even when chatbots are used. For example, the bureau notes that when a member contacts a credit union through a chatbot to dispute a transaction, error resolution provisions in the regulations could be triggered based on the conversation. Thus, if a member were to tell the chatbot that an electronic fund transfer from his or her checking account was unauthorized, the information provided by the member could amount to a “notice of error” under section 1005.11(b), triggering Regulation E’s error resolution provisions. Credit unions using chatbots may want to review their practices to ensure they can identify when a dispute is received via chatbot message, so they can begin complying with the relevant regulatory requirements.
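As a hedged sketch of what that identification step *might* look like, the example below flags chatbot messages containing language that could amount to a notice of error and routes them for human review. The keyword list, class names, and routing logic are all hypothetical illustrations, not a compliance-approved implementation; an actual phrase list would be developed with compliance counsel.

```python
# A hypothetical sketch of flagging chatbot messages that may constitute
# a "notice of error" under Regulation E, section 1005.11(b).

from dataclasses import dataclass
from datetime import datetime

# Phrases that may indicate a member is disputing an electronic fund
# transfer (illustrative only).
DISPUTE_INDICATORS = (
    "unauthorized", "didn't authorize", "did not authorize",
    "fraud", "dispute", "wrong amount", "never made this transfer",
)

@dataclass
class ChatMessage:
    member_id: str
    text: str
    received_at: datetime

def may_be_notice_of_error(msg: ChatMessage) -> bool:
    """Return True if the message contains language suggesting a possible
    Regulation E error, so a human can review it and, if appropriate,
    start the error resolution process."""
    lowered = msg.text.lower()
    return any(phrase in lowered for phrase in DISPUTE_INDICATORS)

def route_message(msg: ChatMessage) -> str:
    if may_be_notice_of_error(msg):
        # Hypothetical escalation: record the timestamp (the resolution
        # timeline runs from receipt) and hand off to a human specialist.
        return f"ESCALATE to error-resolution queue (received {msg.received_at:%Y-%m-%d %H:%M})"
    return "Handle with normal chatbot flow"

# Example: a message alleging an unauthorized transfer gets escalated.
msg = ChatMessage("member-123", "I did not authorize this $200 transfer!", datetime.now())
print(route_message(msg))
```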
Finally, credit unions may want to build in processes so that members can reach a live human being if the chatbot is unable to resolve their issue. The bureau’s report returns several times to consumers’ frustration with the inability to talk to a live person, and notes that harm can result: “Deficient chatbots that prevent access to live, human support can lead to law violations, diminished service, and other harms.” Providing members with the ability to reach a human being could reduce possible reputation risk and could also help a credit union uphold the industry mantra of “people helping people.”
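One simple design for such a process is a fallback rule: after a set number of unresolved turns, the chatbot offers a handoff to a person instead of repeating itself. The sketch below is a minimal illustration under assumed names; the threshold and handoff behavior are hypothetical choices, not a prescribed design.

```python
# A minimal sketch of a "human fallback" rule, assuming a hypothetical
# chat session object that tracks how many turns have failed to resolve
# the member's issue. The threshold and handoff are illustrative.

MAX_FAILED_TURNS = 2  # after this many unresolved turns, offer a human

class ChatSession:
    def __init__(self, member_id: str):
        self.member_id = member_id
        self.failed_turns = 0

    def record_unresolved_turn(self) -> None:
        self.failed_turns += 1

    def should_offer_live_agent(self) -> bool:
        """Escalate instead of repeating answers -- avoiding 'doom loops.'"""
        return self.failed_turns >= MAX_FAILED_TURNS

def respond(session: ChatSession, bot_answered: bool) -> str:
    if not bot_answered:
        session.record_unresolved_turn()
    if session.should_offer_live_agent():
        # Hypothetical handoff -- in practice this would transfer the chat
        # transcript and context to a member service representative.
        return "I'm connecting you with a member service representative now."
    return "Here is the information I found..." if bot_answered else "Could you rephrase that?"

# Example: two unresolved turns trigger the live-agent handoff.
session = ChatSession("member-456")
print(respond(session, bot_answered=False))  # "Could you rephrase that?"
print(respond(session, bot_answered=False))  # handoff message
```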
As this area continues to develop, the Compliance Blog will keep you updated on any new guidance or regulatory requirements relating to the use of AI.
****************************************
On your schedule! New On-Demand and Virtual Conference Options!
BSA School On-Demand: Earn your NCBSO
Risk Management Seminar On-Demand: Earn or Renew your NCRM
Regulatory Compliance School On-Demand: Earn your NCCO
Virtual Regulatory Compliance & BSA Seminar: Recertify your NCCO and/or NCBSO
About the Author
Nick St. John, NCCO, NCBSO, Director of Regulatory Compliance, NAFCU
Nick St. John was named Director of Regulatory Compliance in August 2022. In this role, Nick helps credit unions with a variety of compliance issues.