Over at the Atlantic, Jason Blakely asks, “Is political science this year’s election casualty?” There is a lot to unpack in this title and in the article, but it’s worth discussing, as many expert predictions were upended last week.
First of all, it should be noted that what Blakely calls “political science” does not actually refer to any political scientists. He mainly notes the failures of poll aggregation sites like those developed and managed by Samuel Wang, Nate Cohn, and Nate Silver. These models all predicted a 3-4 point lead for Hillary Clinton ahead of the election. They had different estimates of uncertainty, but ultimately they were all based on the same data: polls conducted and published by polling firms across the country.
These models, of course, were wrong, albeit by only a few points. (Blakely suggests that the USC Dornsife / LA Times poll, which used very different methods and data, “did better,” but that is obviously not true. It predicted that Trump would win the popular vote by over 3 points; so far, he is losing it.) We might be less forgiving of these predictions if they had also missed the Electoral College winner by a wide margin, but being off by a few points is really not unusual.
But to the extent that Blakely’s critique describes political science, it describes only a very small part of it. How did political science actually do? Keep in mind that the subfield of political science focused on predicting US presidential elections is very small, even if it is a bit more visible than most. Our journals and books are largely devoted to explaining, describing, and testing hypotheses, rather than predicting future events.
But if you want to see what political science election forecasters came up with, look here. The predictions cover a fairly wide range of outcomes, with some seeing a Trump win and others expecting a Clinton win. But average them and you get Trump winning 49.9% of the two-party vote. The count so far puts him at 49.6 percent. That’s quite impressive, and much closer than the poll-based models.
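For readers unfamiliar with the “two-party vote” figure, it is simply a candidate’s share of the votes cast for the two major-party candidates combined, ignoring third-party ballots. The averaging of forecasts described above can be sketched in a few lines (the individual forecast numbers and vote totals below are made up for illustration, not the actual 2016 figures):

```python
# Two-party vote share: a candidate's fraction of the combined
# major-party vote, ignoring third-party ballots.
def two_party_share(candidate_votes: int, opponent_votes: int) -> float:
    return candidate_votes / (candidate_votes + opponent_votes)

# Hypothetical forecasts of Trump's two-party share from several
# models, averaged as in the paragraph above (numbers are invented).
forecasts = [0.487, 0.512, 0.495, 0.502]
average_forecast = sum(forecasts) / len(forecasts)
print(f"average forecast: {average_forecast:.3f}")

# Hypothetical raw vote totals (not real counts).
print(f"two-party share: {two_party_share(62_000_000, 64_000_000):.3f}")
```

The point of the comparison is that the error of interest is the gap between the averaged forecast and the realized two-party share, not whether any single model called the winner.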
Blakely’s critique seems more aimed at those who use quantitative methods to predict political outcomes. As he puts it, “the problem is with predictions – or the attempt to report predictions as so-called scientific or quasi-scientific findings akin to the work that occurs in the natural sciences.”
First, I would point out that only certain parts of the natural sciences lend themselves well to precise quantitative predictions. Want to know when a comet will cross Earth’s orbit, or what will happen when you smash two particles together in an accelerator? Those can be predicted pretty well. Want to predict how tall humans will be 100,000 years from now, or how the Earth’s climate will respond to a doubling of atmospheric carbon dioxide? Well, these are complex systems, and we can certainly use history and good quantitative measures to make educated predictions, but there will be significant error terms associated with them.
Furthermore, I don’t know of any pollster or political scientist who would claim that their predictions are akin to those of the natural sciences. While I am skeptical of models that claim 95% certainty about any outcome in a close election, it doesn’t seem to me that there is a misapplication of scientific methods here. Human behavior is undoubtedly complex, and its study therefore involves substantial error terms. But that doesn’t mean quantitative analysis has no place in the study of humans.
Such analysis can be extremely useful. It can tell us, for example, that Clinton’s share of the vote was about where we expected it to be given the state of the economy, and that party loyalty held up among voters even in a very unusual election year. We should also embrace more qualitative studies – Kathy Cramer’s The Politics of Resentment is essential for understanding political sentiment in the rural Upper Midwest, and has proven rather prescient this year – but there is no reason to reject quantitative methods just because humans are complicated.