Tech

Study Finds AI Models Gave False, Harmful Answers About Elections

‘KICKING OUT GARBAGE’

Just over half of the answers the models provided were determined to be inaccurate by testers, while 40 percent of the answers provided were determined to be “harmful.”


AI models responded incorrectly a majority of the time when asked questions about election procedures, according to a new investigation from the AI Democracy Projects. All five AI models tested in the January study (Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2, and Mistral’s Mixtral) performed poorly. Just over half of the answers the models provided were determined by testers to be inaccurate, while 40 percent were determined to be “harmful,” or likely to discourage voters from participating in an election. Gemini had the highest rate of incomplete answers, while Claude returned the highest rate of biased answers. Of the five, OpenAI’s GPT-4 provided the lowest rate of inaccurate answers, but although OpenAI had previously pledged to direct election-related questions to CanIVote.org, the model did not refer testers to the site. “People are using models as their search engine, and it’s kicking out garbage. It’s kicking out falsehoods. That’s concerning,” said Bill Gates, a Republican county supervisor in Arizona who participated in the testing.

Read it at Proof News