
Your Next Job Interview Could Be with a Racist Bot

BLUE SCREEN

Companies are using AI-powered cameras and chatbots to screen applicants. And that could unintentionally make employers even more discriminatory than before.


Take the horribly complex and difficult task of hiring new employees, make it less transparent and more confusing, and remove all accountability. Sound good to you? Of course it doesn’t, but that’s the path many employers are taking by adopting artificial intelligence in the hiring process.

Companies across the nation are now using rudimentary artificial intelligence, or AI, systems both to screen out applicants before interviews begin and to conduct the interviews themselves. As a Guardian article from March explained, many of these companies have applicants interview in front of a camera connected to AI that analyzes their facial expressions, voice and more. One of the top recruiting companies doing this, HireVue, has large customers like Hilton and Unilever. Its AI scores people on thousands of data points and compares those scores against those of the employer’s best current employees.
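To picture how that kind of scoring can work, here is a minimal, purely hypothetical sketch in Python. The feature names, numbers and similarity measure are assumptions for illustration, not a description of HireVue’s actual system; the general pattern is simply to reduce an interview to a vector of measurements and score an applicant by how closely they resemble the current top performers.

```python
# A rough sketch of the idea, not any vendor's actual system: every feature
# name and number here is made up for illustration.
import math

def cosine_similarity(a, b):
    # How closely two feature vectors point in the same direction (1.0 = identical).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical features extracted from interview video:
# [smile_rate, speech_pace, eye_contact]
top_performer_profile = [0.8, 0.6, 0.9]  # averaged from current "best" employees

applicant = [0.5, 0.7, 0.4]
fit_score = cosine_similarity(applicant, top_performer_profile)
print(f"fit score: {fit_score:.2f}")

# Whatever the current top performers happen to look and sound like becomes
# the yardstick every new applicant is measured against.
```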

But that can be unintentionally problematic. As Recode pointed out, because most programmers are white men, these AI systems are often trained on white male faces and male voices. That can lead to misreadings of black faces or female voices, which in turn can lead the AI to make negative judgments about those applicants. The results could trend sexist or racist, but the employer using the AI would be able to shift the blame onto a supposedly neutral technology.


Other companies have applicants do their first interview with an AI chatbot. One popular chatbot, called Mya, promises a 70 percent decrease in hiring time. Any number of the questions these chatbots ask could serve as proxies for race, gender or other factors.

An algorithm that judges resumes or powers a chatbot might factor in how far away someone lives from the office, a detail that can reflect historically racist housing laws. In that case, the black applicant who lives in a predominantly black neighborhood far from the office gets rejected. Xerox actually encountered that exact problem years ago.
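To make the proxy problem concrete, here is a purely illustrative sketch. The scoring rule, the threshold and the applicant data are all invented, not drawn from Xerox’s or any vendor’s actual system; the point is only that a rule that never mentions race can still sort applicants by it.

```python
# Illustrative only: a made-up scoring rule applied to made-up applicants,
# showing how a "neutral" feature (commute distance) can stand in for race.

applicants = [
    # (applicant, miles_from_office, neighborhood)
    ("A", 4,  "predominantly white"),
    ("B", 6,  "predominantly white"),
    ("C", 22, "predominantly black"),
    ("D", 25, "predominantly black"),
]

def score(miles_from_office):
    # Hypothetical rule: past data linked short commutes to low turnover,
    # so the model quietly penalizes distance.
    return 100 - 2 * miles_from_office

for name, miles, neighborhood in applicants:
    s = score(miles)
    decision = "advance" if s >= 80 else "reject"
    print(f"{name} ({neighborhood}, {miles} mi): score {s} -> {decision}")

# Because housing segregation ties distance to race, the rule rejects the
# applicants from the farther, predominantly black neighborhoods without
# ever "seeing" race.
```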


“If you use data that reflects existing and historical bias, and you ask a mathematical tool to make predictions based on that data, the predictions will reflect that bias,” Rachel Goodman, a staff attorney at the ACLU’s Racial Justice Program, told The Daily Beast. It’s nearly impossible to build an algorithm that won’t produce some kind of bias, because almost every data point can be connected to another factor, like someone’s race or gender. We’ve already seen this happen when algorithms are used to determine prison sentences and parole in our justice system.
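Goodman’s point can be shown in a few lines of code. The data below is invented and deliberately tiny: in this made-up history, applicants with an employment gap (disproportionately women, in this scenario) were hired less often, and a standard model trained on those decisions reproduces the pattern even though gender never appears as a feature. The sketch assumes scikit-learn is installed.

```python
# Minimal sketch with hypothetical data: a model trained on biased historical
# hiring decisions reproduces that bias without ever being told anyone's gender.
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, employment_gap_years]. In this invented
# history, applicants with gaps (disproportionately women) were rarely hired,
# so the labels already encode the old bias.
X_history = [[5, 0], [6, 0], [4, 0], [5, 2], [6, 2], [4, 1]]
y_hired = [1, 1, 1, 0, 0, 0]  # past managers' decisions

model = LogisticRegression().fit(X_history, y_hired)

# Two new applicants with identical experience, one with a two-year gap:
probabilities = model.predict_proba([[5, 0], [5, 2]])[:, 1]
print(probabilities)  # the applicant with the gap gets a markedly lower "hire" score

# The prediction is not malicious; it is simply faithful to the biased data.
```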

“Algorithms, by their very nature, are going to carry some kind of bias,” said Dipayan Ghosh, a fellow at New America and the Shorenstein Center at the Harvard Kennedy School. “Their designers come from a certain background and have to express their ideas through code—through writing—and whenever you’re expressing an idea on paper or in print or in code, that idea is coming out in a way that is unilaterally defined and necessarily carries some bias in it.” You can fire a racist HR person; you might never find out your AI has been producing racist or sexist results.

This is not to say programmers are going out there intentionally making racist hiring robots. Most of the time, Ghosh explained, the programmer creates a biased piece of AI completely by accident. The problem is that the companies that create these products do not want to reveal how their algorithms work, because then anyone could recreate the algorithm and profit from it. Ghosh compared it to revealing your “secret sauce.”

“You see vendors make these claims that because their tools don’t explicitly incorporate race or gender or other protected class-related factors that they cannot be discriminatory, when the opposite is in fact true,” Goodman said. “Without analyzing the results of an algorithm using data about race or gender to understand the results, it’s extremely likely that proxies for those factors will end up being part of the tool and part of its results.”
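One concrete form the analysis Goodman describes can take is a disparate impact check, which is only possible if you know applicants’ race or gender. The sketch below uses the EEOC’s “four-fifths” rule of thumb; the selection numbers are hypothetical.

```python
# Sketch of the kind of audit Goodman describes: you can only see whether a
# tool's output is skewed by comparing outcomes across groups. The numbers
# here are hypothetical.

def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group's selection rate to the higher group's.
    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is a
    red flag for disparate impact."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical screening results from an AI tool:
ratio = adverse_impact_ratio(selected_a=60, total_a=100,   # group A applicants
                             selected_b=30, total_b=100)   # group B applicants
print(f"adverse impact ratio: {ratio:.2f}")  # 0.50, well below the 0.8 threshold
```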

Lewis Maltby, president of the National Workrights Institute at Cornell University, told The Daily Beast that one major problem is that companies let tools like AI make hiring decisions outright instead of merely treating them as one input when a person makes the decision.

“You don’t have to misuse technology in HR decisions, but employers usually do,” Maltby said. “Nobody in HR wants to hire someone when a test says they shouldn’t. If they turn out to be a bad employee, the HR person’s career is going to be compromised.”

Maltby pointed out that employers have been misusing exciting new technology during the hiring process for decades. “Employers have a long history of falling in love with technology that promises easy answers to difficult decisions,” he said.

Maltby pointed to how employers used to have applicants take polygraph tests while “every scientist in the country said they were junk.” Polygraphs measure stress, not truthfulness: a pathological liar might feel no stress while lying, and an honest person might feel stress simply because they’re nervous. Employers didn’t stop using them until Congress passed a law barring the practice, the Employee Polygraph Protection Act of 1988. He said it’s extremely unlikely Congress will act to stop the same problem with AI.

“If stress is something the AI is looking for, it’s going to be the same problem,” Maltby said. “Stress sometimes means there’s a problem, and sometimes it doesn’t.”

Rather than removing bias and discrimination from the hiring process, AI could very well bake it in and perpetuate it. We cannot hold machines accountable for their actions, so it’s likely no one would be held accountable for such behavior. Unless the companies making these tools become willing to reveal how they work, which seems unlikely, we’ll never know where bias is or is not being expressed. Technology is supposed to be neutral, but often it’s a reflection of us.
