
Army of Fake Accounts Is Spreading False ‘Breaking News Alert’ Terror Alarms on Twitter

DARK MOTIVES?

Researchers found a network of phony users that’s spreading panic with misleading ‘breaking news alert’ posts—and keeping track of everyone who clicks.

Photo Illustration by Kelly Caminero/The Daily Beast

He’s a Montana man who loves to hunt. She’s a Los Angeles woman who wants to share her Muslim faith. Another man in Glenwood, Iowa, is all about health and nutrition, while a Washington, D.C. resident is fascinated by police work and sometimes shares links to white-supremacist sites.

These seemingly diverse Twitter users and hundreds more like them have two things in common: They all joined the platform within the last 13 months, and by all evidence none of them actually exist.

On Wednesday, researchers at the threat-intelligence firm Recorded Future detailed their discovery of a previously unknown network of fake Twitter accounts using what they believe is a novel disinformation tactic: raising false alarms about terrorist attacks by resurfacing years-old articles as breaking-news alerts. The purpose of the campaign is unclear, but whoever is pulling the strings is carefully monitoring the results, channeling every link through a network of fake URL-shortening services programmed to secretly collect information on everyone who clicks.


“We’re not seeing any monetization, so it’s not someone doing this for profit potential,” said Staffan Truvé, Recorded Future’s Sweden-based chief tech officer, in an interview with The Daily Beast. “One theory is that they’re doing this purely to spread fear, uncertainty, and doubt… It’s most likely either a state actor or some European group.”

Recorded Future discovered the network when it fired up a redesigned behavioral analytics engine the company built to hunt for covert influence operations by foreign governments and others. One of the indicators the software looks for is reports of terrorist attacks circulating on social media without also appearing on news outlets. The algorithm quickly keyed in on a March 23 tweet from an account called “The Football Babe” that announced “Bombings, Shootings Rock Paris.”

There was no such attack in March. Instead the link in the terse tweet led to a genuine 2015 article about the terror attack at a Paris soccer stadium in November of that year. “It is easy to see how the post could cause concern for those reading it and prompt them to follow the link to validate the news, missing the difference in publication date,” notes the Recorded Future report.
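The core tell in that tweet is the gap between the framing and the timestamp: breaking-news language attached to a years-old article. A minimal sketch of that kind of check might look like the following. The function name, the phrase list, and the 30-day threshold are all illustrative assumptions, not Recorded Future's actual detection logic.

```python
from datetime import date

# Hypothetical check: flag a post that uses breaking-news framing
# but links to an article published long before the post itself.
# Threshold and phrases are illustrative, not Recorded Future's real rules.
def looks_recycled(post_date, article_pub_date, breaking_phrases, post_text,
                   max_age_days=30):
    age_days = (post_date - article_pub_date).days
    uses_breaking_framing = any(p in post_text.lower() for p in breaking_phrases)
    return uses_breaking_framing and age_days > max_age_days

# Modeled loosely on the March 23 tweet described above, assuming a 2019 post
# pointing at the November 2015 Paris coverage:
flagged = looks_recycled(date(2019, 3, 23), date(2015, 11, 14),
                         ["breaking", "alert", "rock"],
                         "Breaking: Bombings, Shootings Rock Paris")
# flagged is True: breaking-news framing on an article years out of date.
```

In practice a real system would also have to resolve the shortened link and extract the article's publication date before a comparison like this is possible.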

In another example from July 2018, a different account in the network tweeted images from a two-year-old story about a protest in Chile, Recorded Future found. The 2016 photos showed student protesters in bandanas and head scarves angrily destroying a six-foot-tall crucifix looted from a Catholic church in Santiago. But the tweet presented the images as taken in Sweden in 2019, during a wholly fabricated Muslim protest against the exhibition of Christian crosses.

“All the posts they’re pointing to concern three- or four-year-old events,” said Truvé, for whom switching on the new algorithm was something akin to turning over a rock. “The scary side is this is our first attempt to use this methodology, and this thing popped up almost immediately.”

Because of the operation’s apparent strategy of recycling old news articles, Truvé dubbed the operation “Fishwrap.” The researchers began hunting for more examples, building a large list of accounts that were tweeting the same subject matter on the same schedule. Hoping to winnow down the list, they examined other indicators as well, including the link-shortening services used by the accounts.

That’s when they noticed that a subset of 215 accounts, most purporting to belong to Americans, were relying exclusively on 10 off-brand link-shortening sites that literally nobody else was using.

The sites were all running the same code, but operating from a variety of URLs. Some are short, like n1o[.]io, while others are decidedly long for a link-shortening service. On the surface, the sites roughly mimic the functionality of established services like Bit.ly and Tinyurl, but they don’t actually allow outside users to create short links.

The sites have another distinguishing feature. In the split-second before forwarding incoming clicks to the final destination, they capture and log the dimensions of the visitor’s computer monitor. The same technique is commonly used as one component of “browser fingerprinting” systems to uniquely identify visitors without using cookies.
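Capturing screen dimensions before a redirect is typically done with a small interstitial page: a snippet of JavaScript reads `screen.width` and `screen.height`, reports them to the operator, then forwards the visitor. The sketch below generates such a page in Python; the `/log` endpoint and the page structure are assumptions for illustration, not the actual code running on the Fishwrap shorteners.

```python
# Illustrative sketch of an interstitial redirect page that logs the
# visitor's monitor dimensions before forwarding the click. The "/log"
# endpoint name is hypothetical, not taken from the sites in the report.
def interstitial_page(destination_url: str) -> str:
    return f"""<!doctype html>
<html><body><script>
  // Report screen dimensions -- one common input to browser fingerprinting --
  // by requesting a tracking image, then forward the visitor either way.
  var beacon = new Image();
  beacon.onload = beacon.onerror = function () {{
    window.location.replace("{destination_url}");
  }};
  beacon.src = "/log?w=" + screen.width + "&h=" + screen.height;
</script></body></html>"""
```

The redirect fires whether or not the logging request succeeds, so from the visitor's perspective the page behaves like any ordinary link shortener.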

It’s possible the shorteners were set up to provide analytics to whoever is running Fishwrap, allowing them to isolate words or themes that draw in the same user more than once. “It gives you a way to target your message,” said former FBI agent Clint Watts, a research fellow at the Foreign Policy Research Institute. “You could use the URL shortener to see who’s touching your disinformation… Whoever clicks on that, these people are susceptible.”

Other theories are still on the table though. A network of fake Twitter accounts and bespoke link-shorteners could be useful for any number of scams and schemes. It’s possible an affiliate-marketing spammer is behind the network, using the shorteners to divert a portion of the click-throughs to dodgy revenue-producing ads.

Building off Recorded Future’s list of 10 URL shorteners, The Daily Beast found an additional 59 of the sites used exclusively by hundreds of additional Twitter accounts similar to the ones isolated by the researchers. A random walk through these accounts reveals an eclectic mix, heavy in automation.

A bot account purporting to belong to a California man who “longs for the days of handwritten letters and wet ink” tweets nothing but links related to the post office, each passing through the same faux link-shortener site. Another account supposedly run by a New York man who loves national monuments (bio: “My goal is to visit every national monument and park by the time I die”) disgorges random tweets that look like the results of a poorly constructed search-engine query for government offices and building projects.

Not only are the bots haphazard in their link selection; it’s also easy to find tweets with mismatched headlines and links to grossly outdated news stories. The Football Babe account that triggered Recorded Future’s hunt appears entirely devoted to terrorism tweets, but viewed together those tweets look less like a cunning news-recycling strategy and more like sloppy programming. Tuesday’s lineup included a tweet linking to an article on the 2016 stadium attack in Istanbul, and another linking to a 2017 article pegged to the 16th anniversary of the September 11 terror attacks. Sandwiched between them: a link to the “terrorist attack” section on a stock photo site.

Whatever motive is driving the operation, there’s an unmistakable dark streak in the network’s tweets, even on accounts crafted to appear upbeat and apolitical. A word-frequency count on a sample of the tweets linking through the fake shorteners shows the second most-used word, excluding common prepositions, is “Drone,” with more than 6,000 occurrences. After that, in order: Security, Attack, Plant, Police, Power, Trump, Knife, Terror.

The most-used word is “News.”
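A word-frequency count like the one described above is straightforward to reproduce. The sketch below uses a tiny stopword list and two made-up sample tweets purely for illustration; the real analysis ran over the network's full tweet corpus.

```python
from collections import Counter

# Illustrative stand-in for "common prepositions" excluded from the count.
STOPWORDS = {"the", "a", "an", "in", "on", "at", "of", "to", "for", "and"}

def word_frequencies(tweets):
    """Count words across tweets, ignoring punctuation, case, and stopwords."""
    counts = Counter()
    for text in tweets:
        for word in text.split():
            w = word.strip(".,!?:;\"'()").lower()
            if w and w not in STOPWORDS:
                counts[w] += 1
    return counts

# Hypothetical sample tweets echoing the vocabulary the researchers found:
sample = [
    "Breaking News: Drone attack on power plant",
    "News alert: police respond to knife attack",
]
top = word_frequencies(sample).most_common(3)
```

Even in this toy sample, generic framing words like “news” dominate, which is consistent with a network built around breaking-news-alert templates.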