
In California, It’s Now Illegal For Some Bots to Pretend to Be Human

BOT PROBLEMS

The transparency law is the first of its kind in the U.S. and could serve as a template for future efforts to make it clear who’s real and who’s not online.

Bernhard Lang/Getty

An experimental California law making it illegal for some bots to masquerade as human went into effect this week.

Known as the B.O.T. (“Bolstering Online Transparency”) Act, or California Senate Bill 1001, the legislation “make[s] it unlawful for any person to use a bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity.”

The legislation specifically applies to bots that intend to influence voters, as well as intentionally deceptive bots used to sell goods and services. Under the law, a person isn’t liable if they make it clear their bot isn’t human through a “clear, conspicuous, and reasonably designed” disclosure.


The law, which kicked in on July 1, is exploratory in nature for a few reasons. For one, it’s the first of its kind—but it only applies within the state of California. What that means for online activity that crosses both state and national lines remains to be seen.

Bots are usually regarded as a modern scourge, but they aren’t all bad. Sometimes bots make existing information easier to access, as in the case of automated accounts that track and report earthquakes. Others are well-loved streams of robot-generated humor or even independent art projects.

Twitter is widely regarded as the bot-friendliest platform. Many automated accounts flourish there, but they also exist elsewhere, haunting YouTube comments and drumming up followers for aspiring Instagram influencers.

To avoid putting undue burden on small businesses and online communities, the law only applies to platforms with more than 10 million monthly users.

“Bots continue to misrepresent public sentiments and perceptions about topics, or to mute dissenting opinions and distract from current events,” said California Sen. Bob Hertzberg, who introduced the bill last year.

While many bot accounts have quietly hummed away for years, the 2016 U.S. election drew national attention to coordinated campaigns to suppress votes, disseminate political disinformation, and deepen ideological rifts in national politics. The true impact this kind of artificial online activity has on political outcomes is difficult to measure and might never be fully known.

In a Medium post, Hertzberg cleared up one major misconception about his legislation, explaining that “the BOT Act does not prohibit the existence of bots, but rather simply requires them to identify themselves.”

Still, it’s not clear how the law will be enforced, particularly when the offender lives beyond the reach of California’s attorney general or is difficult to identify. Anyone who violates the law could face the same consequences that apply to traditional fraud, including fines and jail time.

Hertzberg cited the bot networks that spread attacks against Democratic candidate Kamala Harris as one kind of activity his law targets. “Just last week, hundreds of incendiary tweets poured into the public conversation after the second presidential debate, disputing Senator Kamala Harris’ ethnic heritage,” Hertzberg wrote.

Earlier versions of the bill placed the responsibility on platforms to investigate, label, and enforce the law’s prohibitions. After revisions, the version signed by the governor and now in effect dropped those requirements, freeing companies like Facebook, Twitter, and YouTube from additional duties beyond their existing efforts to research and purge activity that violates their terms of service.

The Electronic Frontier Foundation opposed the bill in its original form, arguing that some of the bill’s “dangerous elements” would suppress real speech and burden tech companies with unreasonable reporting requirements. The early version of the bill “would have predictably caused innocent human users to have their accounts labeled as bots or deleted altogether,” the organization wrote in a summary of the bill’s evolution toward its present form.

The legislation was drafted by media watchdog group Common Sense Media and the Center for Humane Technology, a coalition of former big tech employees pushing reforms to address the harms wrought by the software they helped create.

“Misinformation of our public and meddling in our elections is where policy makers must draw a line,” Hertzberg said of his law going into effect. “Our democracy depends on it.”
