Understanding the Dangers
Inherent in Artificial Intelligence
- In November 2021, errors in Zillow’s AI home-pricing algorithm led to a $304 million operating loss and the shutdown of the company’s largest revenue generator, Zillow Offers.
- In November 2023, the U.S. Senate subpoenaed Ticketmaster for using AI that artificially inflated demand, driving ticket prices to exorbitant levels and angering consumers.
- In January 2023, the AI-powered cruise control on a 71-year-old motorist’s BMW misread a speed limit sign and accelerated to 110 mph in a 30-mph zone.
The global market for artificial intelligence is expected to reach $341.4 billion by 2027. You may be wondering if it’s time for your company to jump on the AI bandwagon.
But the potential for AI’s misuse in the credit and collections industry is immense. Collectors must be aware of these inherent pitfalls when dealing with consumers’ sensitive information and financial well-being.
Let’s look at some of the risks AI poses for creditors and other collectors.
The Mysterious Black Box
One big problem with AI is that its inner workings are neither readily accessible nor understandable to humans. These systems rely on intricate algorithms, and tracing an AI’s step-by-step reasoning can be difficult or impossible. The complexity of the models and the non-linear functions they perform make it exceedingly hard to identify the rationale behind their outputs.
The Consumer Financial Protection Bureau (CFPB) recently highlighted this problem concerning AI chatbots: “Financial institutions risk violating legal obligations, eroding customer trust, and causing consumer harm when deploying chatbot technology.”
Security and Privacy Concerns
AI-driven collection systems require access to sensitive financial data and personal information to make decisions. Without proper security measures, these programs are vulnerable to cyberattacks, data breaches, and compromised consumer data.
Absence of Human Judgment and Empathy
The Robodebt Debacle
In 2015, the Australian government introduced an AI debt recovery program called “Robodebt,” designed to claw back supposed overpayments to welfare recipients.
Relying on a flawed algorithm, the system issued more than half a million letters to Australians claiming they owed thousands of dollars in debt. By the time the program was halted in 2019, some of the country’s poorest residents had been forced to pay off debts they never owed. At least three known suicides were attributed to Robodebt.
The government was ordered to pay over $700 million to victims.
Source: BBC
Fostering strong relationships with consumers is vital in the credit and collections industry. AI-powered collection processes lack the human touch and emotional intelligence people need in a time of financial trouble. Consumers can find it difficult to connect with an automated system, leading to misunderstandings that damage their overall experience.
Compliance Issues
Financial institutions can be held accountable for faulty algorithms that result in compliance violations. (See sidebar, “The Robodebt Debacle.”) Last year, the CFPB and other regulators issued a statement regarding strict enforcement of compliance laws whenever AI is used in credit and collections.
Bias and Discrimination
AI systems can inadvertently absorb biases from their initial training data, the algorithm itself, or the algorithm’s predictions, producing discriminatory or unfair collection decisions.
AI can also violate fair lending laws by treating credit applicants differently based on a protected characteristic. For example, digital redlining can deny minority applicants equal access to credit and banking services.
How to Respond?
As the AI market expands, these tools will become more heavily regulated to promote transparency, accountability, and trust.
Until then, financial institutions and credit/collections professionals must lean on industry experts, consistently placing the consumer at the center of their AI decisions. Collections software that still requires some level of human interaction is likely to provide the best outcomes.
In the following video, bestselling author and technology expert Bernard Marr offers advice on how companies can best prepare for the risks associated with AI: