
Could Google Rig the 2016 Election? Don’t Believe the Hype.

Artificial Simulation

A claim in a recent Politico article that undecided voters can ‘easily’ be shifted by ‘20 percent or more’ seems ridiculous on its face. We dug a little deeper.

Illustration by Emil Lendof/The Daily Beast

One way journalism has changed for the better is that we can feel free to be open about our uncertainty. Perhaps it is the evanescent nature of the Internet that allows us to recognize that our conclusions are always provisional, only as good as our data and our model that connects data to the larger reality. While lumbering dinosaurs, in their fear of uncertainty, have to pretend omniscience in the daily dead-tree edition, we in our online journalism can reproduce the two-steps-forward, one-step-back (or, sometimes, three-steps-back) dance that is the fractal nature of human understanding.

We’ll demonstrate that in today’s column, where instead of presenting our conclusions cleanly and at once, we will go through our steps of discovery.

The story started when a political science colleague pointed us to an article in Politico headlined, “How Google Could Rig the 2016 Election,” which made an astounding claim:

Google’s search algorithm can easily shift the voting preferences of undecided voters by 20 percent or more—up to 80 percent in some demographic groups—with virtually no one knowing they are being manipulated...Given that many elections are won by small margins, this gives Google the power, right now, to flip upwards of 25 percent of the national elections worldwide...the 2012 election was won by a margin of only 3.9 percent—well within Google’s control.

This is not, as you might think, a reprise of the widespread concern a few years ago about Diebold Corp. and alleged corruption of voting machines. Instead, the story is about the unconscious manipulation of voters via search engine results:

In laboratory and online experiments conducted in the United States, we were able to boost the proportion of people who favored any candidate by between 37 and 63 percent after just one search session. The impact of viewing biased rankings repeatedly over a period of weeks or months would undoubtedly be larger.

The author, Robert Epstein—a psychologist who was once a student of B.F. Skinner—goes on to describe a series of psychology experiments he performed on potential voters in the U.S. and India, in which he and his colleague Ronald Robertson recruited several samples of potential voters and questioned them on their attitudes about several candidates in recent or upcoming elections. In each experiment, the questionnaire was administered twice: first when the potential voters entered the lab, and then again after they had the opportunity to look up the candidates on a search engine. The gimmick was that the search engines were rigged, manipulated so that higher-ranking webpages were more likely to favor one candidate over the others. And what Epstein and Robertson found was that, when a candidate was favored by a search engine (that is, with webpages that favored him or her appearing higher on the page), participants were more likely to click on those pages, ended up giving the candidate more favorable ratings, and indeed reported being more likely to vote for the candidate.

What was our reaction to all this?

To start with, the claim that undecided voters can “easily” be shifted by “20 percent or more” struck us as ridiculous. In psychology research jargon, these numbers lack “face validity”—that is, they are implausible on their face. A 20 percent shift among any group of voters is huge. How could this possibly occur from something as passive as a search engine?

So we followed the link to the research article. Our first guess was that the experimental data were really noisy and the researchers just picked out a couple of statistically significant comparisons without seeing the big picture. That’s how earlier researchers made similarly implausible (but, unfortunately, publishable) claims that beautiful parents were more likely to have girls or that single or partnered women at certain times of the month were more likely to support Barack Obama or Mitt Romney: isolated noise that we have no reason to believe will generalize to the larger population.

But Epstein and Robertson’s studies were different: They found consistent results—stunningly consistent. In five experiments, each with two or three candidates and three or four measures of voter attitude, they found a positive effect of the search engine manipulation in every case: a total of 60 comparisons, all of which went in the right direction. This implies a strong and consistent, indeed overwhelming, effect.
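To get a sense of just how overwhelming, consider a quick back-of-the-envelope sign test. This is our own illustration, not a calculation from the paper: if the search-ranking manipulation had no effect at all, each comparison would be roughly a coin flip, and the chance of all 60 landing on the same side would be vanishingly small.

```python
# Back-of-the-envelope sign test (our illustration, not the paper's analysis).
# Under a null hypothesis of no effect, each of the 60 comparisons is
# equally likely to fall in either direction, so all 60 agreeing is like
# a fair coin landing heads 60 times in a row.
p_all_same_direction = 0.5 ** 60
print(f"P(all 60 comparisons favor the manipulation | no effect) ~ {p_all_same_direction:.1e}")
# ~ 8.7e-19: far beyond any conventional significance threshold, which is
# why the lab effect, whatever its real-world relevance, is not mere noise.
```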

So how do we think about this study? How do we reconcile large effects in the lab with our real-world understanding that elections are hard to swing? There are a few pieces to the puzzle. First and most obviously, the experiment is an artificial simulation. Participants were asked their attitudes, were instructed to search, then were asked again. It seems reasonable for the people in the experiment to assume that they were expected to update their views based on what they had just read. Epstein and Robertson assure us that most participants did not detect the manipulation, but that’s kind of irrelevant. The point is that they were given information and responded to it as requested. This has little to do with voting.

One might argue that the study in question is no different from any survey research, in that what is being measured is opinion, not behavior—but we feel this study is different, in that the responses are so closely linked to the searching that it almost seems as if the goal of the study is to have people report on what they’ve read.

It is, of course, quite reasonable to believe that actual voting behavior will be influenced by what people turn up in their Google searches, along with what they read in the newspaper, watch on TV, and so forth. That much is clear. The question, though, is not whether there is some effect but rather how large it is. And it is not at all clear how to generalize to the real world from a study in which voters are immediately responding to a single stimulus in isolation, a stimulus that, for that matter, was constructed to be far more biased than any real web search. (The researchers put extremely biased articles favoring one candidate on page 1, moderately biased articles on page 2, and so on, so that participants had to go to pages 4 and 5 of a five-page search to find anything strongly favoring the other candidate.)
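To make that design concrete, here is a toy sketch of the page-by-page slant as we read the description; the labels are our own invention, and the actual study used real webpages rather than tags like these.

```python
# Toy sketch of the rigged ranking described above (our reconstruction;
# the labels are invented and the real experiment used actual webpages).
# Page 1 carries results strongly favoring the target candidate, and the
# slant fades page by page, so strongly opposing results appear only at
# the end of a five-page listing.
BIAS_LABELS = {
    2: "strongly pro-target",
    1: "mildly pro-target",
    0: "roughly neutral",
    -1: "mildly pro-opponent",
    -2: "strongly pro-opponent",
}

def rigged_ranking(n_pages=5):
    """Map each results page (1-indexed) to the slant of the articles on it."""
    return {page: BIAS_LABELS[2 - (page - 1)] for page in range(1, n_pages + 1)}

for page, slant in rigged_ranking().items():
    print(f"page {page}: {slant}")
```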

To take an artificially huge manipulation in isolated laboratory conditions and then claim that real results “would undoubtedly be larger”—well, this may be good marketing, but it’s poor science. Actually, we hope it’s poor marketing, too, in that it motivated us to write this article explaining why we don’t believe these claims.

What were the editors of Politico doing, uncritically running this story? Are they gullible or just cynical? Our guess: a combination of the two. A lack of numeracy or understanding of the quantitative aspects of politics leads the editors to credulously accept a claim of huge effects. And as for the cynicism: Is it really so bad to run a speculative story? It gets a lot of attention, and if it really is a bit of hype, other journalists can clean up the mess. Everybody wins, right? Just remember to think twice when you hear someone claim that some magic X factor can shift the preferences of up to 80 percent of some group of voters.

That all said, we find the study by Epstein and Robertson to be genuinely interesting. If you’d told us this experimental design ahead of time and asked us what would come out in the data, we’d have guessed that nothing much would happen but a bunch of noise. But they found consistent effects, and that’s interesting. Too bad they had to spoil it with a bunch of hype—but maybe that’s what it takes to get published in Politico and the Proceedings of the National Academy of Sciences, a journal that recently was in the news after publishing a later-debunked claim that people react differently to hurricanes with boys’ and girls’ names. The unconscious is indeed mysterious, and people often seem willing to believe just about anything about subliminal effects on behavior. We just have to do our best to try to unpack these claims, to think more carefully about their relevance to the real world. Luckily for us as citizens of a democracy, voters are not as gullible as Epstein and Robertson think. Unluckily for us as consumers of science, the editors of the Proceedings of the National Academy of Sciences are not so skeptical.

Andrew Gelman and Kaiser Fung are statisticians who deal with uncertainty every working day. In Statbusters they critically evaluate data-based claims in the news, and they usually find that the real story is more interesting than the hype.
