Tech

Harvard AI Killed Within Hours of Release Over Allegations of Racist Stereotyping

‘PROBLEMATIC’

A student group claimed that the model, which was modeled on Harvard’s new president, a Black woman, was built on instructions directing it to provide “angry and sassy” responses.

Brian Snyder/Reuters

An artificial intelligence bot that a Harvard student group modeled after the university’s president was taken down within hours of its release after allegations surfaced that it was built on a racist stereotype, according to The Harvard Crimson. The student newspaper reported Monday that the Harvard Computer Society AI Group released ClaudineGPT and another version, “Slightly Sassier ClaudineGPT,” on Sept. 29, the day of President Claudine Gay’s inauguration. Both language models were quietly dismantled by their creators that night, according to the Crimson, after the Harvard AI Student Safety Team emailed them about the models’ “problematic” builds. The team had found that ClaudineGPT’s instructions directed it to respond in an “extremely angry and sassy” tone, a member wrote in the email. “Releasing these models seems to only contribute to the trend of AI products that denigrate or harm women and people of color,” the member wrote. The AI group, which declined to comment to the Crimson, emailed the safety team the next day to say that ClaudineGPT was “always signaled to be a satire and joke,” but confirmed that it had taken the models offline.

Read it at The Harvard Crimson