You want to know how racist artificial intelligence can be? In 2021, a facial recognition system included a photograph of Michael B. Jordan, the internationally famous Black American film star, among a lineup of suspected gunmen in a Brazilian mass shooting.
Back home in the states, since 2020, facial recognition systems have led to wrongful arrests of four people—and you’re right if you guessed they all had two things in common with Michael B. Jordan. Those are just a couple of the examples illustrating A.I.’s nasty tendency to perpetuate anti-Black racism, which is learned from its sentient programmers.
Now that the debt collections industry—well-known for its racism and predatory practices—has wholeheartedly embraced A.I. and automation, it’s a safe bet that algorithmic racism will have particularly negative outcomes for Black folks already living on the financial margins.
It’s not merely that, as Vice recently reported, debt collection is likely to become more relentless and aggressive, though that’s definitely part of it. Black folks, studies find, “are much more likely than whites to be called by debt collectors” even though Black and white debt holders have comparable “levels of debt and repayment rates.”
In 2017, the Consumer Financial Protection Bureau found more than a quarter of all “consumers contacted by debt collectors felt threatened.” The CFPB has since issued rules banning that kind of harassment, but in 2021 the agency still had to distribute $4.86 million “in refunds to consumers harmed by unlawful debt collection practices.” Based on history and statistics, Black folks are likely to be disproportionately targeted by creditors whose capacity for ruthlessness is expanded by automation.
In fact, that’s kinda one of the selling points for so many companies currently courting the debt collections industry.
A firm with the subtle name of “Arrears” promises its suite of A.I. offerings will allow debt agents “seamless communication” with targets “across multiple channels, including SMS, email, chat apps, and soon-to-be-integrated Meta DMs.” (Yes, debt collectors are already legally allowed to DM you via social media.) SKIT.ai boasts about rolling out A.I.- and text-to-speech-enabled voicebots, meaning bots that can speak and respond in a way that approximates human conversation, that “can dial millions of calls within a few days,” so that “human agents are no longer required to do those calls.”
Another company raves about its chatbots, which “can be used around the clock at any time” to “reach customers when they’re most likely to respond—even in the middle of the night or on weekends.” And for the record, those CFPB anti-harassment rules apply only to calls and don’t cover “text messages, emails, and other types of media,” many of which are new and emerging!
But the most sinister use of A.I. in debt collection, a recent Wired article revealed, is that now, many more financially strapped consumers can be sued—over even the most paltry sums.
Debt collection agencies were previously loath to spend the time or money to pursue “low-quality, small-dollar cases” in court. But A.I.-based robo-lawyers can now cheaply and quickly generate and file huge volumes of cases at once.
Defendants often don’t appear in court to respond because they don’t have the funds to hire a lawyer or simply have no idea how to proceed—or, not uncommonly, because many debt collectors “deliberately avoid notifying defendants of a legal case (for example, by sending a case to an old address).” The case then ends in a default judgment, with wage garnishment as a final outcome.
And, as Wired notes, that’s a particularly galling result considering debt collectors often produce bad filings filled with erroneous information, including incorrect interest rate calculations, “false affidavits, bad notarizations, backdated paperwork, inadequate documentation, and so on.” This was true during the subprime mortgage crisis, a 2008 New York Times report confirmed, when “some of the largest firms in the industry”—after overwhelmingly targeting Black folks with predatory loans—“repeatedly submitted erroneous affidavits when moving to seize homes and levied improper fees.” If human lawyers regularly filed crappy suits, imagine what A.I. chatbots will generate—not that courts will look closely enough to notice before ruling in their favor.
Pew Trusts notes that multiple studies show debt lawsuits already “disproportionately affect African American and Hispanic communities,” where people lack resources due to historical inequalities. A 2015 ProPublica examination of five years’ worth of court rulings in debt cases from Chicago, St. Louis, and Newark, New Jersey, found that “even accounting for income, the rate of judgments was twice as high in mostly Black neighborhoods as it was in mostly white ones.”
Moreover, a 2019 investigation by tech-news outlet The Markup discovered that A.I. used by the country’s biggest mortgage lenders was 80 percent more likely to reject Black applicants than white applicants with the same financials. (It was also 70 percent more likely to deny Native Americans, 50 percent more likely to reject Asian/Pacific Islanders, and 40 percent more likely to reject Latinos.)
In addition to wrongly accusing Black folks of murder and denying them mortgages (just like in the analog world!), the ubiquity of A.I., bearing the bigotry of its makers, continues to perpetuate systemic disparities.
Multiple studies have found that A.I. discriminates against Black folks and women in hiring. Risk-assessment algorithms, used by some courts to predict the likelihood a person will commit future crimes—and therefore impact sentencing decisions—are “particularly likely to falsely flag Black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.” (Those racial disparities couldn’t be explained away by “defendants’ prior crimes or the type of crimes they were arrested for.”)
An algorithmic tool used to determine health-care needs for some 70 million Americans required that Black patients be far sicker than white patients to receive the same recommended level of care. A.I. is even problematic when it comes to proctoring tests, doing a bad job at recognizing nonwhite faces in general, but being particularly piss-poor at the task where Black students are concerned.
There’s plenty of reason to be concerned about how A.I., already in use by debt collectors but likely only to grow more common, will harm Black folks, who are more likely to have both medical and student loan debt.
The “racism in the machine” may very well bring more stressors from harassment, more wages lost to garnishment, more fees and fines from court cases, and more credit damage that sends folks further down the debt abyss. (Across the board, American household debt has skyrocketed $2.9 trillion since the end of 2019.)
This is the canary in the coal mine, and while A.I. debt solutions might be especially bad for Black folks, financial suffocation has a way of engulfing us all.