Innovation

Meta’s Smart Glasses Ignore Why Google Glass Failed

GLASSHOLE 2.0

The mixed-reality wearables are stoking massive privacy, security, and ethical concerns.

Photo Illustration by Kelly Caminero / The Daily Beast / Getty

Ten years after the initial launch of Google Glass, one of the first commercially available pairs of camera-equipped smart glasses, a second wearable technology revolution is underway—and it’s mostly ignoring the lessons of the first go-round.

Last month, Meta CEO Mark Zuckerberg took the stage at Meta Connect—the company’s annual conference for virtual reality developers—and unveiled something significantly more compelling than virtual legs: the Meta Smart Glasses.

The Meta Smart Glasses, which are the product of a partnership between the social media giant and glasses maker Ray-Ban, are actually the second generation of internet-connected eyewear to come out of this collaboration. The first, the Ray-Ban Stories, dropped in 2021 and were largely forgotten. The frames were equipped with dual cameras that could capture pictures or 30-second video clips.


The Meta Smart Glasses, which hit retail shelves October 17, take things much further. First, the glasses have been infused with Meta AI, a conversational assistant in the mold of ChatGPT, birthed from Meta’s large language models (LLMs), which have previously heaped praise on Genghis Khan and pushed election conspiracy theories. Second, the glasses expand the camera capabilities from short-burst image and video capture to full-on livestreaming.


Meta Smart Glasses include a conversational assistant in the mold of ChatGPT, birthed from Meta’s large language models.

Carlos Barria

Meta isn’t the only company embracing AI-enabled wearable technology. In fact, its announcement seemed to launch a slew of other devices that rushed to capitalize on the AI-wearable hype. Humane, a tech startup made up of ex-Apple employees, slipped its mysterious Ai Pin, positioned as a sort of portable Siri or Alexa-style assistant with a laser-projected display and on-board camera and microphone, onto the jacket lapel of Naomi Campbell as she walked the runway at Paris Fashion Week 2023. Days later, Rewind AI announced it would make a pendant that could record and transcribe everything the wearer says and hears.

These devices are almost surely more impressive pieces of technology than what Google cooked up in 2013. But these companies might do well to remember that Google Glass didn’t fail because the technology wasn’t good enough. It failed because it creeped people out. A lot.

Wearable Red Flags

The rush to get these wearables in the wild presents a two-fold privacy problem: First, just how much of our lives will be captured without our knowledge and consent by these devices? And second, will our conversations, activities, and likeness be used to train AI models?

For the most part, these questions don’t have clear answers. Meta, which did not return multiple requests from The Daily Beast for comment, has at least somewhat acknowledged that discreetly recording in public via glasses is likely to creep people out. To address this, the company has included an indicator light on the frames that, in theory, informs others around the wearer that they are recording.

Setting aside that early reviews suggest the light is not always noticeable—and that Meta drew warnings from the EU’s privacy regulators for making the indicator light on the Ray-Ban Stories too small—this method of informing others of recording is still insufficient, according to Calli Schroeder, senior counsel and global privacy counsel at consumer privacy watchdog the Electronic Privacy Information Center.

“Indicator lights, beeps, or similar indicators are generally so small and undetectable that it would be shocking for a bystander to even notice it, much less know what the signal is indicating,” she told The Daily Beast. “We hear pings and see little lights on people’s devices all the time when we move about the world. It would never occur to me that those things are meant to be meaningful disclosure that I am being recorded.”

According to Schroeder, the best practice to alert others that you’re wearing a camera or microphone that could be turned on at any time is to wear “a large sign or T-shirt that says ‘I’m recording you’ every time they use it”—and she added she was only kind of joking.

Meta hasn’t exactly been synonymous with privacy, and a white LED is unlikely to change that. Given the massive amount of user data the company possesses, there is plenty of additional concern as to how it will use the content captured through the in-frame cameras. After all, the company openly considered including facial recognition features in its first pair of smart glasses, though it does not currently include (or at least market) this technology in its wearables. Google Glass also did not include Google’s own facial recognition tech, but users were able to hack together their own version of the software to run on the devices.

It’s clear that, at least to some extent, the company’s goal with smart glasses is to feed its AI models tons of information. “If you think about it, smart glasses are the ideal form factor for you to let an AI assistant see what you’re seeing and hear what you’re hearing,” Zuckerberg said when he announced the Meta Smart Glasses.

Meta has admitted to using public posts from Facebook and Instagram, including text and photos, to train its AI model. The company insists that photos and videos are encrypted and that it only collects “essential data,” but there is a lack of clarity around whether it can or will use any information from its Smart Glasses to bolster its AI.

A woman wears Google Glass after a media presentation of a Google apartment in Prague, May 15, 2014.

AI wearables companies might do well to remember that Google Glass didn’t fail because the technology wasn’t good enough. It failed because it creeped people out.

Reuters

It isn’t uncommon for companies to use “anonymized data”—information that has been stripped of any potentially identifying details—to train their systems. However, it’s also not uncommon for the anonymization process to be woefully insufficient. Researchers once bought the web browsing history data of German citizens and paired it with publicly available information to reveal the online habits of individuals, including the porn viewing habits of a judge.

Anonymization becomes much more complicated when factoring in the data captured by cameras and microphones. “Facial recognition systems may be able to identify bystanders, functionally providing a real-time map of where they are, depending on how many wearable devices are around them,” Schroeder warned. “Systems tracking gait or audio may be able to identify the individual even if their face is covered.”

Beyond that, there’s simply no feasible way for people to opt in to having their likeness recorded and potentially used for training AI. “A company benefiting from personal data that a person did not consent to give and took no action to allow to be collected seems unethical,” Schroeder said. “Existing in the world should not make us targets of data collection.”

Return of the Glassholes

It’s clear that other companies jumping into the wearable AI space have at least given these issues consideration—though their answers are often murky. Ella Geraerdts, a communications manager representing Humane, told The Daily Beast that the company’s Ai Pin would be a “privacy-first” device with “no wake word and therefore no ‘always on’ listening.” This, she said, reflects Humane’s “vision of building products which place trust at the center.”

Dan Siroker, the co-founder and CEO of Rewind, told The Daily Beast that his company rushed to announce its recording and transcribing Pendant while it was still in the early stages of development because “I wanted to put the importance of privacy out in the ether.”

Of all of the AI wearables announced in recent weeks, though, Pendant got perhaps the most vitriol online. While the company emphasizes that it is taking a “privacy-first approach” and plans to “offer features for you to ensure no one is recorded without their consent,” it has only offered up two ideas of how it might achieve that: either by storing recordings only of people who have verbally consented to being recorded, or by producing text summaries generated from the AI’s best judgment of important details rather than verbatim transcripts.

The device is still more theoretical than practical, so neither idea has actually been implemented. “If we have better ideas that come between now and [launch], we will absolutely incorporate them,” Siroker said.

To its credit, Rewind is much clearer on how it handles data: Siroker said explicitly that it does not use recordings to train AI models, and that all recordings and transcriptions are performed and stored locally on a user’s device.

Siroker acknowledged that smart wearables that came before—and some making their way to market now—played fast and loose with the idea of recording in public. When Google Glass first made its way into the wild, businesses banned people from wearing them. Wearers were heckled and labeled Glassholes. Little about the new devices on the way does much to address the causes of that public pushback from a decade ago; instead, their makers just assume things will be different this time. “Google Glass was too cavalier about privacy and I think they did the whole industry a disservice,” Siroker said, “and we're kind of still digging ourselves out of that hole that Google Glass created.”
