The Greek philosopher Heraclitus famously argued that the only constant in life is change. And in this life nothing changes faster than technology.
Artificial intelligence (AI) is the latest wave. And though it may bring efficiency, that could come at the expense of other necessary objectives.
AI software is largely based on algorithms. These systems are already woven into everything from bank lending to criminal sentencing to hiring. The thing is, these algorithms are not free from bias; in most sectors where they are used, there is a clear through line from AI adoption to the advancement of discrimination.
But the train has already left the station, and with no real way to slow it down, it falls to companies and, moreover, government regulation to rein in the potential harm. New York City is doing just that.
A new law that took effect in New York City on Wednesday is the first of its kind: legislation that regulates AI hiring practices for equity. All NYC businesses using AI in their hiring processes are required to prove their selections were free from sexism and racism, a feat many human-run human resources departments have yet to accomplish.
Under the Automated Employment Decision Tools (AEDT) law in NYC, a third-party auditor evaluates each company's hiring tools to ensure they remain bias-free.
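At their core, audits like these typically compare how often an automated tool selects candidates from each demographic group. A minimal sketch of that calculation, with entirely hypothetical group names and numbers, might look like this:

```python
# Sketch of an "impact ratio" check of the kind a bias audit performs:
# each group's selection rate divided by the rate of the most-selected
# group. All group names and figures below are hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical applicant pools screened by an automated tool.
audit_data = {
    "group_a": (48, 120),  # 48 of 120 applicants selected
    "group_b": (30, 120),  # 30 of 120 applicants selected
}

for group, ratio in sorted(impact_ratios(audit_data).items()):
    # A ratio well below 1.0 flags a group the tool selects less often.
    print(f"{group}: impact ratio {ratio:.2f}")
```

In this made-up example, the less-selected group's ratio falls well below 1.0, which is exactly the kind of disparity an audit is meant to surface before the tool keeps screening real applicants.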
It sounds great in theory, but does it go far enough? Like most policies, the devil lies in the implementation.
Enforcement under the law appears to be complaint-driven, and that is a red flag. If you don't know why you were turned down for a job because you never actually spoke to a hiring manager, it's hard to raise a caution flag. Additionally, the scope of discrimination in hiring far exceeds what the law describes, and there appears to be no direct punishment or recourse for businesses that opt out.
Lastly, New York’s Department of Consumer and Worker Protection—the agency charged with enforcing the new law—is already in a race against the clock to keep up with its existing equity commitments, like its post-pandemic protections for essential workers.
Charging an overworked agency with enforcing anti-bias rules across hiring algorithms is unheard of, partly because of the difficulty, but also because the algorithms driving AI are inherently biased. To fully remove the biases, you'd have to ditch the technology.
Activists and civil rights leaders have been calling for reforms to AI and its algorithms for quite some time, most notably in the use of a risk assessment tool across the criminal justice system. This tool is algorithm-based and is used as a determining factor for who gets released from jail and who languishes behind bars. Unsurprisingly, the tool most often fed the growth of mass incarceration among Black and brown people, while white people who committed similar crimes went home to their families.
Put simply, a tool designed to eliminate racial bias only served to extend it.
“There is increasing evidence that AI systems and algorithms not only fail to eliminate existing inequalities as if by magic, they also reproduce and even magnify these inequalities,” warns Anna Ginès i Fabrellas, associate professor of labor law at Spain’s Esade university and director of the Institute for Labor Studies and the research project LABORAlgorithm.
This magnification of inequalities has long-range implications.
With the racial wealth gap continuing to widen, culture war amplification taking center stage in our politics, and equity-influenced remedies like affirmative action being struck down, now is the time we should be seeking more opportunities for the underserved.
With its AEDT law, New York City is making strides to ensure that equity isn’t a dirty word, and that technological advancement isn’t being used to further disadvantage historically underrepresented groups. But the Big Apple has to do more.
Implementing the law will require significant funding and an oversight council interested not only in whether AI is advancing equity or stomping it out, but also in eradicating the loopholes that would let companies opt out of oversight. It also necessitates training for HR leads and hiring and search firms on applying the new law and its reporting requirements. Moreover, the City of New York has to lead an awareness campaign for job seekers, specifically outlining what the new law does and how to report any AI hiring violations they experience.
But the onus isn’t just on the city; community partners have to be engaged. This includes organizations like the Urban League, local LGBT advocacy organizations, women’s groups, those supporting workers with disabilities, and more.
Because systemic racism is baked into the multi-layered cake we call America, concerted effort and vigilance are required to confront and reform systems that inherently make our country and our workforce less equitable.
For AI hiring tools to prove effective and unbiased, we have to take a deep, searching look at their developers. AI only operates on the algorithms and data its development team provides. If those developers rely on biased or quasi-biased assessments, the end result won't decrease inequities; the gaps will become gaping holes.
New York City is on the right track, but the real work doesn't lie in the new legislation; it lies in the outcomes produced and in how businesses meet the moment.