
‘60 Minutes’ Made a Shockingly Wrong Claim About a Google AI

FAKE NEWS

With the nascent AI boom, misinformation about the emerging tech is running rampant—and the media is partly to blame.

60 Minutes via YouTube

Since OpenAI unleashed ChatGPT on the world, we’ve seen takes you people wouldn’t believe.

Some folks have claimed that chatbots have a woke agenda. Sen. Chris Murphy (D-CT) tweeted that ChatGPT “taught” itself advanced chemistry. Even seasoned tech journalists have written stories about how the chatbot fell in love with them. It seems as though the world is reacting to AI the same way cavemen probably reacted when they saw fire for the first time: with utter confusion and incoherent babbling.

One of the latest examples comes from 60 Minutes, which threw its hat in the ring with a new episode focused on innovations in AI that aired Sunday on CBS. The episode featured interviews with the likes of Google CEO Sundar Pichai—and included questionable claims about one of the company’s large language models (LLMs).


The clip in question focuses on emergent behavior, which describes an unexpected side effect of an AI system that wasn’t necessarily intended by the model’s developers. We’ve already seen emergent behavior spring up in other recent AI projects. For example, in a study posted online last week, researchers used ChatGPT to create generative digital characters with goals and backgrounds. They observed the system performing multiple emergent behaviors, such as characters sharing new information with one another and even forming relationships—something the authors didn’t initially plan for the system.

Emergent behavior is definitely a worthwhile topic for a news show to discuss. Where the 60 Minutes clip takes a turn, though, is when we’re introduced to claims that Google’s chatbot was actually able to teach itself a language it previously didn’t know after it was prompted in that language. “For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know,” CBS News correspondent Scott Pelley said in the clip.

Turns out it was complete BS. Not only could the bot not learn a foreign language “it was never trained to know,” but it didn’t teach itself a new skill. The entire clip spurred AI researchers and experts to excoriate the news program’s misleading framing on Twitter.

“I sure hope some journalist does a review of the whole @60Minutes segment on Google Bard as a case study in how *not* to cover AI,” Melanie Mitchell, an AI researcher and professor at the Santa Fe Institute, wrote in a tweet.

“Stop Magical Thinking in Tech! It is not possible for an #AI to respond in Bengali, unless the training data was contaminated with Bengali or is trained on a language that overlaps with Bengali, such as Assamese, Oriya, or Hindi,” M. Alex O. Vasilescu, a researcher at MIT, added in another post.

It’s worth mentioning that the 60 Minutes segment didn’t say exactly which AI it was referring to. However, a spokesperson for CBS told The Daily Beast that the clip was not a discussion of Bard but of a separate AI program called PaLM—the underlying technology of which was later incorporated into Bard.

The reason the segment was so frustrating to these experts is that it ignores and distorts the reality of what a generative AI can do. It can’t “teach” itself a language if it never had access to that language in the first place. That would be like claiming you taught yourself Mandarin when the only Mandarin you’d ever encountered was a single question someone once asked you in it.

After all, language is incredibly complex—with subtle nuance and rules that require an incredible degree of context to understand and communicate with. There’s no way for even the most advanced LLM to grapple with and learn all of that through a few prompts.

PaLM was already trained with Bengali, the predominant language of Bangladesh. Margaret Mitchell (no relation), a researcher at the AI startup Hugging Face and formerly of Google, explained this in a tweet thread laying out why 60 Minutes got it wrong.

Mitchell pointed out that, in a 2022 demo, Google showed that PaLM could communicate and respond to prompts in Bengali. The paper behind PaLM revealed in a datasheet that the model was indeed trained on the language, with roughly 194 million tokens of text in the Bengali alphabet.

So it didn't magically learn anything via a single prompt. It already knew the language.

It’s unclear why Pichai, the CEO of Google, sat down for the interview and allowed these claims to be made without any pushback. (Google did not respond to requests for comment.) Since the episode aired, he’s stayed silent despite experts pointing out the misleading and false claims made in the segment.

On Twitter, Margaret Mitchell suggested the reason could be a combination of Google leadership not understanding how their own products work and letting shoddy messaging spread in order to piggyback on the current hype around generative AI.

“I suspect [Google executives] literally don't understand how it works,” Mitchell tweeted. “What I wrote above is likely news to them. And they're incentivised not to understand (close your eyes to that Datasheet!!).”

The second half of the video can also be seen as problematic, as Pichai and Pelley discuss a short story Bard created that “seemed so disarmingly human,” it left both men looking somewhat shaken.

The fact is these products aren’t magic. They’re not capable of being “human” because they’re not human. They’re text predictors, like the ones on your phone, trained to come up with the likeliest words and phrases to follow a given string of text. To suggest they are human could give them a level of authority that could be incredibly dangerous.
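To make the “text predictor” point concrete, here’s a minimal sketch, assuming the open-source Hugging Face transformers library and the small, publicly available GPT-2 model (an illustration only, not Bard or PaLM, whose weights aren’t public): given a prompt, the model simply ranks candidate next tokens by probability, and it can only rank patterns it absorbed from its training data.

```python
# Minimal sketch of next-token prediction using the open-source Hugging Face
# transformers library and the public GPT-2 model (illustration only; this is
# not Bard or PaLM, whose weights are not publicly available).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Bangladesh is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model outputs a score for every token in its vocabulary,
    # at every position in the prompt.
    logits = model(**inputs).logits

# Keep only the scores for the position after the last prompt token,
# and turn them into probabilities.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# Print the five likeliest continuations. This ranking is all the model
# "knows"; it reflects the statistics of the text it was trained on.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  {prob.item():.3f}")
```

That same mechanism, scaled up enormously, is what sits underneath chatbots like Bard.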

After all, people can use these generative AIs to do things like spread misinformation. We’ve already seen this play out with deepfakes of people’s likenesses and even their voices.

A chatbot on its own can cause harm if it winds up producing biased results—something we’ve already seen with the likes of ChatGPT and Bard. And given these chatbots’ propensity to hallucinate and make up results, they could wind up spreading misinformation to unsuspecting users.

Research bears this out. A recent study published in Scientific Reports found that human responses to moral questions can be easily swayed by arguments made by ChatGPT—and users even grossly underestimated how much they were being influenced by the bots.

The misleading claims on 60 Minutes are really just a symptom of a broader lack of digital literacy at a time when we desperately need it. Many AI experts say that now, more than ever, people need to become aware of what AI can and cannot do. These basic facts about bots need to be effectively communicated to the broader public.

The people with the biggest platforms and the loudest voices (i.e., the media, politicians, and Big Tech executives) have the most responsibility to ensure a safer, more educated future with regard to AI. If we don’t, then we might just wind up like those aforementioned cavemen, playing with the magic of fire—and getting burned in the process.
