“My gut says Donald Trump. And my guess is that it is true for many anxious Democrats.”
So writes statistician Nate Silver in the New York Times, adding that his gut—and yours—is not to be trusted.
The Silver Bulletin writer, FiveThirtyEight founder and onetime baseball analyst rose to fame analyzing quantitative political data and making election forecasts based on probabilistic models built around weighted averages of public opinion polls.
But this year, at least, he’s thrown in the towel on making a firm call, saying former president Donald Trump and Vice President Kamala Harris are locked in a genuine toss-up of a race, with polls placing them neck and neck.
“You should resign yourself to the fact that a 50-50 forecast really does mean 50-50,” he wrote. “And you should be open to the possibility that those forecasts are wrong, and that could be the case equally in the direction of Mr. Trump or Ms. Harris.”
With both candidates within a point or two of each other in the seven battleground states that will likely determine the election, Silver said, a toss-up is “the only responsible forecast.”
Silver’s modeling of the 2024 race currently reflects that, noting “we honestly don’t know” who is going to win.
Silver, in his Times op-ed, did introduce a seemingly counterintuitive point: “It’s surprisingly likely that the election won’t be a photo finish.”
To this end he noted that either Trump—whose supporters often have low civic engagement and can be harder to reach in public opinion surveys—or Harris—who could benefit if pollsters are unconsciously weighting their samples in Trump’s favor—could win the election comfortably, at the edge of the margin of error or beyond.
In fact, he said his model shows a 60 percent chance that one of them will win the Electoral College votes of six out of the seven battleground states.
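The logic behind that number comes down to correlated polling errors: if the polls miss in one swing state, they tend to miss in the same direction in the others. The minimal Monte Carlo sketch below, which uses made-up error sizes rather than anything from Silver’s actual model, illustrates how a race that is 50-50 in every battleground can still end in a near-sweep.

```python
import numpy as np

# Illustrative sketch only -- not Silver's model. It shows why a race that is
# 50-50 in every battleground state can still end in a near-sweep: polling
# errors are largely shared across states, so a miss in one direction tends
# to repeat everywhere.
rng = np.random.default_rng(0)

n_sims = 100_000
n_states = 7                        # the seven battleground states
polled_margin = np.zeros(n_states)  # assume each state polls at a dead heat

SHARED_ERROR_SD = 2.0  # hypothetical: polling miss common to all states, in points
LOCAL_ERROR_SD = 1.5   # hypothetical: extra state-specific noise, in points

shared = rng.normal(0.0, SHARED_ERROR_SD, size=(n_sims, 1))
local = rng.normal(0.0, LOCAL_ERROR_SD, size=(n_sims, n_states))
margins = polled_margin + shared + local  # realized margin, candidate A minus B

a_wins = (margins > 0).sum(axis=1)          # battlegrounds won by candidate A
near_sweep = (a_wins >= 6) | (a_wins <= 1)  # one side takes at least 6 of 7

print(f"P(A wins a majority of battlegrounds): {(a_wins >= 4).mean():.2f}")
print(f"P(one candidate wins at least 6 of 7):  {near_sweep.mean():.2f}")
```

With these assumed numbers, the near-sweep probability lands roughly in the neighborhood of the 60 percent figure Silver cites, though that agreement depends entirely on the hypothetical error sizes chosen here.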
Justin Grimmer, a Stanford political scientist, expressed concerns about election forecasters who used probabilistic models in an interview with the Harvard Graduate School of Arts and Sciences' Colloquy podcast earlier this month.
“The clearest failure of these predictions is 2016, where the predictions that came out of these models were very confident to medium confident that Clinton was going to win,” he said, noting Silver’s model gave Trump, at 28.6 percent, a significantly higher chance of winning than other forecasters. “Certainly, the consensus view across the models was a Clinton victory. And that didn’t come to pass. And so, they fundamentally got that election wrong. When we think about harm, it’s interesting to think about how these models are being consumed by the public.”
Grimmer said many news organizations, realizing that forecasts could feed their content needs, turned to them after 2008 and 2012, when Silver correctly forecast Barack Obama’s election victories in most states.
“It turns out it’s a nice summary of what might be happening,” he said. “And it naturally creates a lot of stories. If the probability of a candidate winning an election moves from 55 to 50 percent, well, maybe that could be a whole cycle of stories about what happened, why that probability changed, and what could be the origin.”