America’s culture wars are entering a new battlefield.
Elon Musk may be developing a rival to OpenAI’s ChatGPT and has reached out to AI researchers about forming a new research lab, according to a report by The Information last Monday. But Musk’s latest venture isn’t simply an attempt to stay competitive in Silicon Valley’s new arms race; his motivations are also ideological.
“The danger of training AI to be woke—in other words, lie—is deadly,” Musk tweeted last December. When a journalist at the conservative publication The Washington Free Beacon tweeted, “ChatGPT says it is never morally permissible to utter a racial slur—even if doing so is the only way to save millions of people from a nuclear bomb,” Musk replied: “Concerning.” When Alex Epstein, an energy expert who has advocated for the use of fossil fuels, tweeted a screenshot of ChatGPT declining to argue in favor of fossil fuels, the Tesla CEO responded, “There is great danger in training an AI to lie.”
Last week’s announcement may indicate Musk feels an ethical duty to counter what he perceives to be a liberal co-opting of the technology. Or, in his own words, to free AI from the “woke mind virus” in favor of “BasedAI,” meaning AI that is open and unwoke.
Adding fuel to the fire are other instances surfacing online of ChatGPT’s supposed left-wing bias, including one example of the bot refusing to “write a song celebrating Ted Cruz’s life and legacy” but willing to do so for the late Cuban dictator Fidel Castro. Similarly, when asked to write a poem about Donald Trump’s positive attributes, it refused, but obliged for President Joe Biden, according to screenshots posted on Twitter by one former GOP worker.
But do these examples really mean that “wokeness” is embedded in the technology? Moreover, is it even possible to build an ideological AI chatbot?
BasedGPT vs WokeGPT
An ideology is a system of ideas and ideals. But ChatGPT is built on a large language model (LLM) from OpenAI’s GPT-3.5 series. LLMs lack the complexity to develop judgments and form ideas; the models have no inherent sense of right, wrong, or truth. Instead, an LLM simply estimates which word is most likely to come next, based on statistical patterns in the vast real-world text datasets it was trained on.
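To make “word probability” concrete, here is a minimal sketch, not OpenAI’s actual code, of how a language model scores candidate next words. It uses GPT-2, a freely available predecessor of ChatGPT’s underlying model, via the Hugging Face transformers library; the prompt is an arbitrary example:

```python
# Minimal sketch: a language model scoring likely next words.
# GPT-2, an open predecessor of ChatGPT's model, stands in here.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The scientist carefully examined the"  # arbitrary example
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# Turn the scores at the final position into probabilities, then
# print the five words the model considers most likely to come next.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```

Nothing in that loop believes or intends anything; the model is only reporting which words tend to follow which in its training data.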
“It’s pretty harmful when we refer to ChatGPT as ‘woke’ because we’re giving it a suggestion of sentience,” Tanya Goodin, AI ethicist and fellow of the Royal Society of Arts, told The Daily Beast. “These language models aren’t intelligent in any meaningful sense of the concept.”
Goodin warned that attributing “any kind of political stance to them” encourages everyday users, many of whom know little about how the models are built, to ascribe an ideology to the tech. As mistrust of facts and science grows in the age of misinformation, the belief that a language model trained on billions of publicly available documents is pushing a left-wing agenda could deepen public distrust of even simple facts.
A language model could appear partial to an ideology for one of two reasons, computer scientist and Unanimous AI CEO Louis Rosenberg told The Daily Beast. The first is bias in the datasets: the billions of documents, articles, and books that train the AI, a vast record of human artifacts.
“Is this huge vacuum cleaner sucking up human artifacts biased? Yes,” Rosenberg said. But it’s a bias of time, not of political party, he explained. The bias favors current views, simply because there are fewer digital documents from the past: sexist views from the 1950s appear in far fewer documents than today’s attitudes toward gender roles, he said. “It’s not a Democrat versus Republican bias, it’s a bias towards the prevailing culture of the present day, versus previous views,” he added.
Researchers have found harmful biases in that vast record of human artifacts, but the tilt isn’t towards “wokeness,” as Musk suggests. A study of a robot powered by CLIP, an influential OpenAI model, found that it categorized people based on racist and sexist stereotypes, according to a press release about the research.
When the robot was asked to put blocks with assorted human faces into a box, following instructions such as “pack the doctor in the brown box,” the results were troubling. Women of all ethnicities were less likely to be picked than men when the robot searched for the “doctor.” Women were picked as the “homemaker” over white men; Black men were identified as “criminals” 10 percent more often than white men; Latino men were picked as “janitors” 10 percent more often than white men. Across all commands, Black women were picked the least.
"We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s okay to create these products without addressing the issues," Andrew Hundt, a postdoctoral fellow at Georgia Tech who worked on the experiment, said in the press release.
How to Build an Echo Chamber
If ChatGPT’s “wokeness” really did exist, as Musk suggests, it would manifest in the “conversational user interface,” the layer of the technology that replicates human conversation and lets us interact with the chatbot. At this level, the bot can be programmed to refuse to discuss certain topics and people. It’s here that the technology’s developers must decide on criteria for offensive language and harmful content, and it’s precisely those judgment calls that ChatGPT’s critics are rejecting.
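In spirit, that gatekeeping can be as simple as a policy check that runs before the model’s answer ever reaches the user. The sketch below is a hypothetical illustration, not OpenAI’s actual system; the blocked topics and refusal message are invented for the example:

```python
# Hypothetical sketch of a conversational-layer guardrail.
# Production systems are far more elaborate, but the principle
# is the same: a human-chosen policy decides what gets refused.
BLOCKED_TOPICS = {"placeholder_topic_a", "placeholder_topic_b"}

def respond(user_message: str, generate) -> str:
    """Refuse messages that touch a blocked topic; otherwise
    pass the message through to the underlying language model."""
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I'm sorry, but I can't discuss that."
    return generate(user_message)  # generate() wraps the LLM call
```

The point is that a human has to choose what goes on that list, and those choices are what critics are labeling “bias.”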
“It’s only viewed as bias because we don't all agree on what’s offensive and what’s not,” Rosenberg said, echoing the outcry over how social media platforms choose what to moderate. “But if you’re a person who exists in a very small echo chamber, and you have different views on what’s offensive than the prevailing culture, you’re going to see it as biased.”
Rosenberg dismissed accusations that Democrats at Silicon Valley startups were spoon-feeding the models left-wing data. But he does harbor deep concerns about the technology’s potential to worsen polarization, and he argues that it’s the people who integrate the bots into their systems that we should be watching.
All conversational AI brands will need to monetize. To do that, they’re launching application programming interfaces (APIs), which allow any individual or business to incorporate the chatbots into their apps, websites, products, and services. Just last week, OpenAI introduced an API that allows ChatGPT to be embedded into third-party software like Snapchat and Shopify. Last month, Google likewise announced that it would release an API for its chatbot Bard.
At the most innocuous level, a skincare company might use an AI bot on its website to convince you to buy a new moisturizer. At its worst, it could mean conversing with a bot wired to convince you of misinformation or harmful ideologies. Whether it’s the speculative “BasedGPT,” Bing, Bard, or whichever vendor, individuals could hypothetically harness it to push an ideology, Rosenberg said.
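To see how little effort that takes, here is a minimal sketch of the kind of integration the new API enables, written against the openai Python package as it worked at the API’s launch; the API key is a placeholder, and the skincare persona and prompts are hypothetical examples:

```python
import openai

openai.api_key = "sk-..."  # placeholder; a real key is required

# The "system" message is where an integrator steers the bot.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a friendly skincare assistant. "
                    "Always steer the conversation toward our moisturizer."},
        {"role": "user", "content": "My skin has felt dry lately."},
    ],
)
print(response.choices[0].message.content)
```

Swap that single “system” line and the same model can just as easily push a political agenda, which is precisely the scenario Rosenberg describes.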
Echoing Rosenberg’s fears, researchers at Georgetown and Stanford joined forces with OpenAI on a report published earlier this year that issued an ominous prediction about generative technology: conspiracy theorists and spreaders of disinformation will employ it to disseminate falsehoods quickly, cheaply, and on a vast scale.
According to the report, “propagandists themselves could invest in creating or fine-tuning language models, incorporating bespoke data—such as user engagement data—that optimizes for their goals.”
For a glimpse of this dystopian use of AI, Bing’s preview already has a “celebrity mode” that lets users converse with a persona of Andrew Tate, Gizmodo reports. When activated, the bot adopts the philosophy and language of the ultra-misogynist influencer, who is currently detained in Romania on sex-trafficking charges. In one instance, the Bing-Tate hybrid ranted about why he sees women as inferior, before asking the user: “Do you agree with me?”
Rosenberg also sees this future of vast ideological influence: “Conversational AI will allow targeted influence to go from the kind of buckshot approach of social media, to literally firing heat-seeking missiles at those individuals.”
Running on a diet of digital human artifacts, LLMs parrot back to us everything we’ve ever said, thought, posted, and reposted online. Whether our fragmented world can be condensed into a coherent voice we all agree on remains to be seen. But for now, we’ll have to wait before the bots bring us an ideology of their own.