
Facebook’s Still Spying on You. The Only Way to Stop It Is to Destroy Its Data.

THE BIG IDEA

Only the destruction of the vast majority of Facebook’s data stores of consumer information will prevent further abuses.

opinion
Photo Illustration by Sarah Rogers/The Daily Beast / Photos Getty

Mark Zuckerberg has announced Facebook’s new vision: it’s going to draw the curtains. In other words, Facebook is trying to turn its public forums into private ones. The new Facebook plans to integrate its three networks—WhatsApp, Instagram, and Facebook as we know it—into a single social network, and to encrypt all message traffic end-to-end, so that not even Facebook can read it.

These changes come in response to the onslaught of negative publicity in the last two years, which has seen everything from the Cambridge Analytica scandal to promiscuous sharing of personal data to unscrupulous opposition research. But don’t expect closing the curtains to change what the network does behind them.

Facebook’s new network should look like a cross between Gmail, Slack, and Instagram: places for sharing relatively personal and sensitive information rather than for broadcasting opinions and attracting followers. In Zuckerberg’s words, the focus will be on “private messaging, ephemeral stories, and small groups.” There will still be space for celebrity channels and professional, corporate public spaces, but they will not be the norm and they will be carefully policed.


By focusing on small communities of known groups of people, Facebook hopes to tone down—or at least hide from view—the ugliness that has come to pervade every open internet forum (Twitter, Reddit, you name it). Not only are such spaces a nightmare of content moderation, they aren’t especially lucrative. With the exception of YouTube, Google makes its money without the hassle of moderating public online spaces, using other signals to infer which ads a user might want to see.

In other words, Facebook’s recent changes are less of a paradigm shift than they sound.

Reports have suggested that Facebook doesn’t yet know how to make money off its new model, but this obscures the fact that Facebook’s announced changes don’t fundamentally threaten its revenue streams. Alongside Google, Facebook dominates online advertising: Google takes 37 percent of all online advertising revenue, while Facebook takes 22. While Facebook’s user experience will likely change drastically in the next few years, Facebook will still monetize its users the way it does today: consumer profiling and advertiser microtargeting. And unfortunately, that means that Facebook’s privacy problems will persist. They just won’t be as noticeable.

Consider Facebook’s settlement this week related to discriminatory advertiser targeting. In settling lawsuits with the National Fair Housing Alliance, the ACLU, and others, Sheryl Sandberg announced the following changes:

—Anyone who wants to run housing, employment or credit ads will no longer be allowed to target by age, gender or zip code.

—Advertisers offering housing, employment and credit opportunities will have a much smaller set of targeting categories to use in their campaigns overall. Multicultural affinity targeting will continue to be unavailable for these ads. Additionally, any detailed targeting option describing or appearing to relate to protected classes will also be unavailable.

When it comes to privacy, the devil is in the details—and in the omissions. These changes say absolutely nothing about the data that Facebook collects about its users. They are restricted to a particular, exclusive set of advertisers. And even there, they are weak. Facebook has so much data on its users that it’s trivial to swap one of these protected categories for an unprotected proxy. The “far-reaching” settlement trumpeted by Sandberg and the ACLU says:

[Housing, employment, and credit] ads must have a minimum geographic radius of 15 miles from a specific address or from the center of a city. Targeting by zip code will not be permitted.

Targeting by zip code is no longer allowed, but targeting within a 15-mile radius is. Since the median zip code covers 35 square miles, and over 40 percent of Americans live in zip codes larger than 30 square miles, an advertiser can simply center its circle over the neighborhoods it once reached by zip code, and the restriction is easily circumvented for a huge chunk of Facebook users. Companies will still easily be able to target areas with high concentrations of young or old people, or with a particular racial focus. And while your texts and posts may be ephemeral under Facebook’s new vision, your demographic metadata will remain all too permanent.
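To put the settlement’s geometry in rough numbers, here is a back-of-envelope sketch. It is plain arithmetic on the figures above, idealizing the targeting region as a circle; none of it comes from the settlement itself.

```python
import math

# Figures from the settlement and from census-style zip-code estimates.
MIN_RADIUS_MILES = 15      # settlement's minimum targeting radius
MEDIAN_ZIP_SQ_MILES = 35   # approximate median zip-code area

# Area of the smallest targeting circle the settlement allows.
min_target_area = math.pi * MIN_RADIUS_MILES ** 2  # ~707 square miles

print(f"Smallest allowed targeting circle: {min_target_area:.0f} sq mi")
print(f"Median zip code:                   {MEDIAN_ZIP_SQ_MILES} sq mi")
print(f"Ratio: {min_target_area / MEDIAN_ZIP_SQ_MILES:.0f}x")
```

The minimum circle is large, but since the advertiser chooses where to center it, it can be laid over exactly the zip codes that zip-based targeting once selected—the circumvention described above.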

And that’s to say nothing of other sorts of ads. Alcohol, pharmaceutical, and bail bonds advertisers—as well as the likes of a Cambridge Analytica—can still target consumers with the same degree of racial and gender specificity as ever. If you’re pregnant, or alcoholic, or in debt, Facebook’s relevant advertisers will still find a way to target you.

It’s not surprising that Facebook wouldn’t restrict its advertising beyond the narrow protected categories under which the lawsuit was filed, though it doesn’t reassure one about the company’s supposed new focus on privacy. It is surprising, however, that the press is treating the settlement as anything more than a footnote. Anti-discrimination law was never going to be enough to address the massive consumer tracking and targeting of which computers are capable today.

The weak settlement makes this starkly clear in tepid bullet points like these:

  • Facebook will provide educational materials and features to inform advertisers about Facebook’s policies prohibiting discrimination and anti-discrimination laws.
  • Facebook will engage academics, researchers, experts, and civil rights and liberties advocates, including the Plaintiffs, to study the potential for unintended biases in the algorithmic modeling used by social media platforms.

The second point is particularly disingenuous, since I and many others have already chronicled the actual unintended bias in algorithmic modeling. Frank Pasquale’s The Black Box Society, Joseph Turow’s The Daily You, and my own book Bitwise all detail how bias is an inevitable feature in algorithmic modeling of human behavior, to be mitigated but never eliminated.

Facebook now thinks I am African-American (until 2016, it had thought I was Asian). I don’t know how this determination was made: I never specified my race. But all that’s necessary is for advertisers to target based on race-correlated factors in order to reestablish the same kinds of targeting that this settlement supposedly prevents. Maybe it was where I live, maybe it was what music I listened to, maybe it was which websites I linked to. Targeting based on these factors may not legally equate to race-based targeting, but the bias is still there.
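The mechanism is easy to see with a toy sketch. The data below is entirely synthetic and the attribute names are invented for illustration: a protected class (“group”) correlates with an unprotected one (“neighborhood”), so targeting only by neighborhood still reproduces the racial skew.

```python
import random

random.seed(0)

# Synthetic population: in this made-up data, 80% of group X lives
# uptown and 80% of group Y lives downtown.
population = []
for _ in range(10_000):
    group = random.choice(["X", "Y"])
    if group == "X":
        hood = "uptown" if random.random() < 0.8 else "downtown"
    else:
        hood = "downtown" if random.random() < 0.8 else "uptown"
    population.append({"group": group, "neighborhood": hood})

# The "advertiser" targets by neighborhood only -- a permitted category
# that never mentions group membership.
audience = [p for p in population if p["neighborhood"] == "uptown"]
share_x = sum(p["group"] == "X" for p in audience) / len(audience)
print(f"Share of group X in the uptown audience: {share_x:.0%}")  # ~80%
```

No protected category was ever named, yet the selected audience is about four-fifths group X—which is why banning explicit race targeting, without touching the underlying data, changes so little.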

CNBC reports that Facebook maintains a BOLO (“be on lookout”) list of individuals who’ve made “improper” or “threatening” communications against the company, and Facebook tracks them via its networks. There’s no indication that Facebook will stop tracking them. We have enough difficulties ensuring that our actual police forces are fair and unbiased; Facebook should not be getting into the policing business.

Facebook’s critics have not always been coherent. They ask that Facebook do a better job of surveilling its users and moderating content, even as they demand greater privacy for those users. Facebook can’t be a mother and a marketer at the same time. The Electronic Frontier Foundation leapfrogs from privacy to security to competition concerns in criticizing Facebook’s shift, obscuring the real root of the problems: Facebook’s enormous database of detailed personal information on hundreds of millions of people, including people who don’t have Facebook accounts.

When I wrote about the difficulties of consumer tracking seven years ago, there was still the possibility of an ethical system of consumer tracking and opt-in permissions. In the years since, that possibility has ceased to exist, because the amassed data has simply gotten too large and centralized.

Only the actual destruction of the vast majority of Facebook and other companies’ data stores of consumer information will prevent further abuses, no matter what commitments to privacy they may make. Such data is only useful to the extent that it violates one’s privacy.

If Facebook has a more ethical future, it lies in separating its consumer profiling from its advertising business, sharply limiting its application (and retention) of personal data to non-commercial usages. This is not a likely outcome, but it should remind us that Facebook’s supposed new leaf is really plastic surgery.
