
The 5 Insidious Ways AI Has Already Impacted Your Life for Years


You probably didn’t even notice—and that’s the whole point.

Photo Illustration by Luis G. Rendon/The Daily Beast/Getty

You’re probably getting the wrong idea about artificial intelligence—and it’s not your fault. The past few months have been filled with tales about the technology’s supposed powers and abilities, ranging from the sensational to the outright ludicrous. It certainly hasn’t helped that AI experts and pioneers have fed into this by signing open letters calling for a pause in AI research and warning of a looming extinction-level event.

AI isn’t all chatbots and image generators. Nor is it Skynet threatening to go live and destroy all of humanity. In fact, artificial intelligence isn’t even all that intelligent—and glossing over its history of hallucinating facts and making wrong decisions has allowed it to cause real harm to humans.

While there are a lot of factors at play when it comes to these harms, the vast majority of them boil down to the perennial problem of bias. AI bots like ChatGPT, and even the algorithms that recommend YouTube videos, are trained on massive amounts of data. This data comes from humans, many of whom unfortunately happen to be biased, racist, and sexist.


For example, if you’re trying to build a bot that decides who should be admitted to a university, you might train it on demographic data about the people who have historically obtained degrees. If you did that, though, the bot would favor mostly white men while rejecting large swaths of people of color, because minorities have historically been disproportionately rejected from universities.
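To make that concrete, here’s a minimal sketch of how a model inherits bias from its training labels. Everything here is synthetic and invented for illustration; no real admissions system works this simply.

```python
# Toy illustration: a model trained on historically biased decisions
# reproduces them. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

scores = rng.normal(0.0, 1.0, n)   # test scores, same distribution for both groups
group = rng.integers(0, 2, n)      # 0 = majority, 1 = minority

# Historical decisions: at the same score, the minority group was
# rejected more often. That bias is baked into the labels.
admitted = (scores - 1.5 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([scores, group]), admitted)

# Two applicants with identical scores, different groups:
probs = model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1]
print(probs)  # the minority applicant gets a much lower admission probability
```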

This isn’t an exaggeration. We’ve seen this play out again and again in different ways. Though the public discourse around AI has exploded in recent months, the technology has been shaping many facets of our lives for years. Long before ChatGPT, AI programs were already being used to determine the livelihoods of the unemployed, whether or not you secure housing, and even the type of health care you receive.

This context provides a realistic picture of what the technology can and cannot do. Without it, you’re liable to fall for AI hype too—and that can actually be incredibly dangerous in itself. With hype come misinformation and misleading claims about these bots. While there are many ways this technology has embedded itself into our lives, here are five of the most consequential examples we’ve seen play out.

Home Mortgages

If you want to purchase a home, you’re likely going to have to go through an algorithm. Your FICO credit score, for example, is the product of an algorithmic process that goes a long way toward determining whether you can secure a loan of any shape or size.

However, you’re also likely to go through an AI approval process. In 1995, Fannie Mae and Freddie Mac introduced automated underwriting software that promised to make home loan approvals and rejections faster and more efficient by using AI to assess whether a potential borrower is likely to default on their loan.

Though these systems were billed as color-blind, the results were damning. A 2021 report by The Markup found that mortgage lending algorithms in the U.S. were 80 percent more likely to reject Black applicants, 50 percent more likely to reject Asian and Pacific Islander applicants, 40 percent more likely to reject Latino applicants, and 70 percent more likely to reject Native American applicants compared to similar white applicants.

These numbers spiked even higher in cities like Chicago, where Black applicants were 150 percent more likely to be rejected than their white counterparts, and in Waco, Texas, where Latino applicants were 200 percent more likely to be rejected.
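For context, here’s the back-of-the-envelope arithmetic behind a figure like “80 percent more likely to be denied.” The denial rates below are invented for illustration; The Markup’s actual analysis controlled for a long list of financial factors.

```python
# Hypothetical denial rates, for illustration only.
white_denial_rate = 0.10
black_denial_rate = 0.18

relative_increase = (black_denial_rate / white_denial_rate - 1) * 100
print(f"{relative_increase:.0f}% more likely to be denied")  # -> 80% more likely
```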

Jail and Prison Sentencing

We think of judges and lawyers as the ones doling out punishment or showing leniency in a court of law. In reality, a lot of that work is informed by algorithms that score a defendant’s potential for recidivism—the likelihood that they will re-offend.

In 2016, ProPublica found that one commonly used risk-assessment algorithm falsely flagged Black defendants as future re-offenders at nearly double the rate of white defendants (45 percent vs. 23 percent). White defendants, meanwhile, were scored as lower-risk than they actually turned out to be, skewing outcomes in their favor.
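The disparity ProPublica measured is essentially a gap in false positive rates: how often people who never re-offended were flagged as high risk, broken down by group. Here’s a minimal sketch of that metric with entirely synthetic data; it shows the shape of the measurement, not the real study.

```python
# Synthetic data only: measuring a false-positive-rate gap by group.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)        # two synthetic groups
reoffended = rng.random(n) < 0.3     # ground truth, equal base rates

# A hypothetically biased score that runs higher for group 1:
score = rng.normal(0.0, 1.0, n) + 1.0 * reoffended + 0.8 * group
flagged = score > 1.0                # "high risk" label shown to a judge

for g in (0, 1):
    innocent = (group == g) & ~reoffended   # people who did NOT re-offend
    fpr = flagged[innocent].mean()
    print(f"group {g}: falsely flagged high-risk {fpr:.0%} of the time")
```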

That same algorithm is still used for criminal risk assessment today in states including New York, California, Florida, and Wisconsin.

Job Hiring

As if the job hunt weren’t infuriating enough, you might also have to deal with a racist HR bot reading your résumé.

Hiring bots come in a variety of forms. HireVue, a hiring-software company used by employers across the country, including Hilton and Unilever, offers software that analyzes applicants’ facial expressions and voices. The AI then scores them, providing companies with an assessment of how applicants stack up against their current employees.

There are also AI programs that screen your résumé for the right keywords, which means you might get rejected before a human in HR so much as glances at your cover letter. The result, as with so many other AI applications, is the disproportionate rejection of applicants of color compared to similar white candidates.
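As a rough illustration of how crude this screening can be, here’s a toy keyword filter. The keywords and threshold are invented, not any vendor’s real logic.

```python
# Toy keyword filter: the resume never reaches a human if it
# doesn't contain enough of the "right" words.
REQUIRED_KEYWORDS = {"python", "sql", "agile", "stakeholder"}  # hypothetical
MIN_MATCHES = 3                                                # hypothetical

def passes_screen(resume_text: str) -> bool:
    words = set(resume_text.lower().replace(",", " ").split())
    return len(REQUIRED_KEYWORDS & words) >= MIN_MATCHES

print(passes_screen("Python, SQL, agile delivery, stakeholder management"))  # True
print(passes_screen("Ten years leading data teams at a Fortune 500 firm"))   # False
```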

Medical Diagnosis and Treatment

Hospital systems and doctors’ offices are no strangers to using automated systems to assist with diagnosis. In fact, places like the Mayo Clinic have used AI to help identify and diagnose conditions like heart problems for years.

However, bias inevitably rears its ugly head, and AI in medicine is no exception. A 2019 study published in Science found that an algorithm used to manage care for large patient populations routinely gave Black patients worse care than similar white patients. Because the algorithm used past health spending as a stand-in for medical need, and less money has historically been spent on Black patients and communities, it underestimated how much care they required.
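Here’s a minimal sketch of that proxy problem with synthetic numbers: ranking patients by predicted cost instead of actual need quietly shortchanges the group that historically had less spent on it.

```python
# Synthetic illustration of cost-as-a-proxy-for-need; not the study's data.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, n)      # 1 = historically under-served group
need = rng.gamma(2.0, 1.0, n)      # true medical need, equal across groups

# Historically, less is spent on group 1 at the same level of need:
cost = need * np.where(group == 1, 0.7, 1.0)

# Program enrolls the top 10 percent by *cost*, intending to target *need*:
cutoff = np.quantile(cost, 0.90)
for g in (0, 1):
    enrolled = (cost > cutoff) & (group == g)
    print(f"group {g}: {enrolled.sum()} patients flagged for extra care")
```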

With the rise of ChatGPT and the various health tech startups trying to build diagnostic chatbots (to varying degrees of cringiness), many experts are now concerned that these bias issues will only be exacerbated, compounding the harms we’ve already seen from chatbots. The medical community’s sordid history with scientific racism doesn’t help either.

Recommendation Algorithms

Perhaps the most visible example of how AI impacts your everyday life is the very same reason you probably stumbled upon this article in the first place: social media algorithms. While these AIs do things like show you your friend’s latest Instagram photo from their recent vacation in Italy, or your mom’s embarrassing Facebook status, they can also do things like elevate extremist content on YouTube or push a far-right agenda on Twitter.
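Under the hood, most feeds boil down to some version of the following: score every post by predicted engagement and sort. The weights and posts below are invented; the point is that nothing in the objective asks whether the content is true.

```python
# Toy engagement-maximizing ranker; weights and data are invented.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float      # model's click-through estimate
    predicted_watch_time: float  # model's watch-time estimate (minutes)

def engagement(p: Post) -> float:
    # Nothing here checks accuracy or extremity, only engagement.
    return 0.6 * p.predicted_clicks + 0.4 * p.predicted_watch_time

feed = sorted(
    [
        Post("Friend's vacation photos", 0.20, 1.0),
        Post("Outrage-bait conspiracy video", 0.90, 3.0),
    ],
    key=engagement,
    reverse=True,
)
print([p.title for p in feed])  # the outrage bait ranks first
```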

These algorithms have repeatedly been gamed by bad actors to push political narratives. We see this play out time and again on Facebook, where massive troll farms based in places like Albania and Nigeria push disinformation in an effort to sway elections.

At their best, these algorithms can show you a fun new video to watch on YouTube or Netflix. At their worst, that video is trying to convince you that vaccines are dangerous and that the 2020 election was stolen.

That’s the nature of AI, though. These are technologies with great potential to make decisions easier and more efficient. But when they’re weaponized by bad actors, leveraged by greedy corporations, and lazily applied to historically racist and biased social systems like incarceration, they ultimately do far more harm than good—and you don’t need an AI to tell you that.
