
Google Wants to Play Both Sides of the AI Arms Race

‘BOLD AND RESPONSIBLE’

CEO Sundar Pichai stressed “responsibility” with the company’s bots—but its actions say the opposite.

Photo Illustration by Kelly Caminero / The Daily Beast / Getty

Google’s annual I/O developer conference kicked off on Wednesday with a very clear message: The search engine giant is going all in on artificial intelligence.

CEO Sundar Pichai and several top executives at Google’s parent company Alphabet took the stage to announce the infusion of AI into its suite of tools, as well as the launch of a new-and-improved AI model to power it all. The conference represents a fairly seismic shift in the way that users will interact with their search engine and, therefore, the internet as we know it.

The company also revealed that, as much as it wants to talk about its focus on responsibly approaching AI, it’s much more interested in launching its products to the world—despite the inherent dangers.


“Seven years into our journey as an AI first company, we’re at an exciting inflection point,” Pichai said. He later added, “We have the opportunity to make AI even more helpful for people, businesses, and everyone. With generative AI, we’re taking the next step. With a bold and responsible approach, we are reimagining all our core products including search.”

The most radical and apparent transformation will occur with Google’s search engine, which will now include its Bard chatbot as part of certain search results. In one demo, Google’s VP of Engineering Cathy Edwards searched “What’s better for a family with kids under 3 and a dog: Bryce Canyon or Arches?” The engine then provided an AI-generated response at the top of the page followed by several links and sources associated with the topic.

Screenshot via Google/Alphabet

The AI-powered search function won’t be available to everyone—at least, not yet. Google stressed that this is still very much experimental. In order to gain access, you’ll need to opt in to the Search Generative Experience via Google’s new feature dubbed Search Labs, which allows users to sign up to test new AI functions. But experimental or not, there’s little doubt that this is a test run for a total rethink of how users will experience search engines in the future.

Google’s other products such as Gmail, Docs, Sheets, and Slides will also be receiving additional AI tools such as “help me write,” which will enable users to prompt the AI to generate text for them. For example, you might ask Gmail to “write me an email announcing the birth of my baby boy,” and it will create a draft for you that you can then edit and send off.

Pichai also announced PaLM 2, the latest update to its underlying large language model (LLM) that now powers Bard and will be the foundation for all of the AI-infused products going forward. PaLM 2 gives Bard a host of new capabilities and tools such as multimodal functionality. This allows it to use image inputs as well as text. It can also generate images using text prompts a la DALL-E or Midjourney.

For example, you might upload a photo you just took of your two dogs and ask Bard to “write a funny caption for this photo,” and it’ll offer up knee-slappers like, “When you’re trying to figure out which one of you is a good boy.”

Screenshot via Google/Alphabet

Perhaps most surprisingly, Google also announced that there would no longer be a waitlist for its Bard chatbot and that anyone will be able to use it (provided they live in one of the more than 180 countries where it’s available). This means that Bard will now be in the hands of anyone who wants it—allowing them to chat, generate jokes, prompt-engineer stories, and create AI-generated images to their heart’s content.

But this is where things start to get tricky. While the speakers constantly stressed approaching AI responsibly and ethically, the fact that Google’s leaders are rolling out an even more powerful version of Bard that includes the ability to generate images indicates that—for all their talk of responsibility—their actions are doing quite the opposite.

The dangers inherent to these AI tools have been known for years, and have only become starker in the months since the release of OpenAI’s ChatGPT. These LLMs are capable of bias and harm. Generative AI tools like image generators and voice replicators have already been wreaking havoc on social media. We’ve already seen people get tricked by deepfaked images of Trump’s arrest. The music industry is even changing due to AI versions of artists like Drake singing songs they’ve never performed in their lives.

The ability of these AIs to deepfake reality, while also hallucinating facts, turns them from simple tech tools into dangerous weapons when put into the hands of bad actors. There is a growing body of studies and evidence showing that people are susceptible to the suggestions of chatbots even when they know they’re talking to a chatbot. Now, Google wants to put these tools into the hands of millions, if not billions, of its users.

To its credit, the company did mention some steps it plans to take in an effort to fight misinformation. Most notably, Google is going to include watermarks, by way of metadata, on images generated by Bard. The company will also flag AI-generated images in its Google Images results. This should ostensibly allow users to see whether a picture was created by a bot.

Outside of that, though, the company’s messaging seems to be more talk than anything else—especially in light of the fact that it will soon unleash a souped-up, multimodal Bard to the masses. With the release of PaLM 2, Google published a 91-page technical report. While the paper stressed multiple times the importance of its training data, it is opaque as to what exactly the model was trained on.

This lack of transparency is a break from previous technical reports for its AI models, which usually include the dataset upon which these bots are trained. As LLMs become commodified into products, though, companies like Google and OpenAI are becoming increasingly secretive about how they developed their models—much to the frustration of tech ethicists and those concerned with the dangers of AI.

There’s an inherent friction here. Google recognizes that it has to do this responsibly. It says it wants to put responsibility front and center. But then it announces it’s going to hand something akin to a loaded gun to the world.

For now, the tools have yet to be formally launched. While people can access Bard, it’s still not capable of multimodal functionality at the time of reporting. However, Google’s I/O conference this year represents a sea change in the way we interact with the internet. No longer are tools like ChatGPT, Midjourney, and DALL-E going to be limited to those with some level of technical proficiency and digital literacy. Soon, they’ll be available to anyone with a decent internet connection and a Google account—ushering in the new AI Age of the internet.
