OPINION: Election polls aren’t as terminally broken as we think
2016 was an outlier; polls are usually able to account for externalities
November 3, 2020
The announcement of the 2016 general election results left both citizens and pollsters in utter shock.
After polls had predicted a victory for Hillary Clinton, Donald Trump’s win left pollsters stunned by how he had managed to defy the statistics.
Experts had tracked the data throughout the election, and it consistently favored Clinton. Many pollsters now attribute the error to a few possible scenarios.
Many have come to question the validity of polls since the 2016 election, despite experts assuring voters that most polls were conducted soundly.
During an Oct. 14 Foley Talk, Charles Franklin, professor of law and public policy at Marquette University and a nationally recognized pollster, discussed the role of polls, whether they are broken and how voters should feel about them.
As a pollster in Wisconsin, Franklin said his own poll was among those found to be inaccurate: it had Hillary Clinton leading by six points at the end of the campaign, while Trump won the state by seven-tenths of a percentage point. Franklin used his own poll as the basis for his argument, since he could provide its raw data to support his position.
Since the 1930s, national polls drawing reasonably good samples of the public have estimated who was ahead and by how much, Franklin said. In theory, polls are meant to be frame-by-frame “snapshots” that capture each candidate’s current support, as opposed to “forecasts” that predict what the election’s result will be.
Polls also allow voters to observe changes in a presidential campaign over time. Franklin, like many others, said the number of voters participating in polls has grown since the 1990s, which has allowed polls to portray voter preference more accurately.
Franklin said state polls let citizens see where the electoral votes are being decided state by state. Final polls, however, tend to be treated as a “de facto” forecast of the election, even though they are not based on a statistical model of what the outcome will be.
Even so, polling was systematically wrong on two occasions — the 1948 and 2016 general elections.
While polls have usually been more or less exact, in both cases the polls said one thing while voters said another, Franklin said, which came as a shock to political analysts and pollsters alike.
In the more recent example, Franklin said national polling was actually quite accurate: Clinton’s final poll numbers differed by about one percentage point from her popular vote victory. The problem lay in the Electoral College, which produced the opposite result.
Franklin pointed to the pivotal Electoral College states of Wisconsin, Michigan and Pennsylvania, where only a few of 134 polls showed Trump ahead, leading pollsters to perceive a Clinton victory in those states as inevitable.
Franklin said he believes a “tightening” occurred in Michigan and Pennsylvania polling late in the presidential race, eventually resulting in a strong preference for Trump in those two states.
“Pollsters rely on a lot of (normally relatively safe) assumptions to project the vote and usually, one of the safest is that once a trend in votes is established, it is a safe bet it will continue,” wrote Joshua Hiler, political science major and vice president of WSU’s Political Science Club, in an email. “In the very rare case where that is incorrect, suddenly a lot of accuracy goes out the window.”
In a “postmortem” analysis of the election by the American Association for Public Opinion Research, analysts found that a prime explanation for the polling error may have been how samples were adjusted for voters’ education levels, or “weighting to education,” as Franklin referred to it. The concept concerns the demographic breakdown of voters by education: college-educated individuals are more likely to answer polls, while less-educated individuals are less likely to respond.
Franklin said there is a very strong correlation between education level and willingness to respond to polls, as well as between education level and voter preference. Together, those relationships posed a significant challenge for pollsters who, in 2016, had an overrepresentation of college-educated respondents and an underrepresentation of less-than-college-educated respondents, which skewed their polls.
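To make the weighting idea concrete, here is a minimal sketch of adjusting a poll to education. The sample shares, support figures and electorate shares below are hypothetical, chosen only to illustrate the mechanism Franklin described, not drawn from any real poll.

```python
# A minimal sketch of "weighting to education" (post-stratification).
# All numbers are hypothetical, for illustration only.

# Raw poll sample: college-educated respondents are overrepresented.
# group: (share of poll sample, candidate A's support within that group)
respondents = {
    "college":    (0.60, 0.55),
    "no_college": (0.40, 0.40),
}

# Assumed true share of each group in the electorate (e.g., from census data).
electorate = {"college": 0.40, "no_college": 0.60}

# Unweighted estimate: average support by the sample's own composition.
unweighted = sum(share * support for share, support in respondents.values())

# Weighted estimate: re-scale each group to its share of the electorate.
weighted = sum(electorate[group] * support
               for group, (_, support) in respondents.items())

print(f"Unweighted estimate for candidate A: {unweighted:.1%}")  # 49.0%
print(f"Weighted estimate for candidate A:   {weighted:.1%}")    # 46.0%
```

In this toy example, skipping the weighting step overstates the candidate favored by college graduates by three points, the same direction of error the AAPOR postmortem identified in 2016 state polls.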
“The relationship between education and support for Trump is much stronger now than it was in 2016, and so there are a lot more well-educated suburban voters, who in the past may have voted Republican, [are now] voting Democrat,” said Travis Ridout, distinguished professor at the Thomas S. Foley Institute. “I think we have actually seen some real changes in the party that people support, and I think that’s backed by the results of a lot of house races in 2018 as well.”
This data allows us to observe how preferences have evolved over the course of the last four years under the Trump administration.
Age is another example of using demographic data in polls: young voters are often underrepresented and older voters overrepresented in polling data, which is another reason to adjust polls by factors like age and education level. In practice, weighting to education had consistently improved the accuracy of polls, but in this case it did not help predict a Trump victory, Franklin said.
A second issue that may have steered polling toward error was what Franklin labeled “reluctant Trump respondents.” This group was composed of individuals who preferred Trump but, because society did not expect them to, were reluctant to say so.
Franklin said many studies have attempted to explain this phenomenon. In one study he mentioned, researchers found that highly educated or professional individuals appeared reluctant to report their vote for Trump. Other Trump voters were simply difficult to reach no matter how the poll was conducted, whether by automated system or human caller, Franklin said.
The unwillingness to participate in polling does not correlate with a win or loss for any candidate. Rather, it leaves a significant part of the voter population out of otherwise impartial statistical data, which concerns pollsters, whose objective is to gather voting data that helps voters see how each candidate is doing throughout the race.
In Franklin’s 2016 Wisconsin poll, all five regions of the state, from the rural north to the suburbs, were polled, and the data showed little evidence that “representation by area had shown any systematic underrepresentation of a Republican area or an overrepresentation of a more Democratic area.”
Late decisions by conflicted voters were eventually found to have played a big role in determining the result of the 2016 election, Franklin said. In a sample of three Midwestern states, exit polls showed an overwhelming break for Trump: about 60 percent of conflicted voters went his way, while 20 percent ended up choosing Clinton and the remaining 20 percent opted for third parties.
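A little arithmetic shows why such a break matters. The sketch below uses the 60/20/20 split from the exit polls described above, but the poll numbers themselves are invented for illustration.

```python
# Hypothetical illustration of how a late break among undecided voters
# can flip a polled lead. The 60/20/20 allocation mirrors the exit-poll
# break described above; the poll numbers are made up.

clinton, trump, undecided = 46.0, 42.0, 12.0  # final poll, in percent

# Allocate undecided voters: 60% to Trump, 20% to Clinton, 20% to third parties.
trump_final = trump + 0.60 * undecided      # 49.2
clinton_final = clinton + 0.20 * undecided  # 48.4

print(f"Polled margin: Clinton +{clinton - trump:.1f}")
print(f"Final margin:  Trump +{trump_final - clinton_final:.1f}")
```

Under these made-up numbers, a four-point polled lead becomes a narrow Trump win once the undecided voters break his way, without any of the earlier respondents changing their answers.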
As for the public’s confidence in the polls, Franklin said the data points toward no overwhelming flaws in the polls, despite the 2016 experience. Polls conducted since 2016 have produced highly accurate results, which leads to two conclusions.
The first is that national polls were accurate in 2016, even as state polls were less so. The second is that if the polling system were fundamentally broken, whether because people no longer answer the phone or because samples are skewed, polls would underperform consistently, and they have not.
“The absolute thrashing that polling got by the media after 2016 will take a while to burn off. However after a few cycles of being relatively accurate, I believe that people will be less skeptical,” Hiler wrote. “I also think that people generally still do trust the polls; the perception that faith is low emanates from the media which I think was more than a little unfair to the polling industry after 2016. The media was made a bit of a fool of with their headlines about 90% chances of Clinton winning and they still feel the sting of that.”
Comparing 2016 to other election years, the 2018 Senate elections yielded highly accurate estimates, just as the 2012 and 2014 elections had, Franklin said. That shows just how much of an anomaly the 2016 general election was in polling terms, because there is nothing profoundly inconsistent in the way polls are conducted.
In today’s media environment, “broken poll rhetoric” is a common line of argument meant to undermine the polling system: by citing the 2016 polling errors, it lets a candidate dismiss any unfavorable poll result, Franklin said.
“I think it’s in the interest of candidates that appear to be losing in the polls to blame the polls, and suggest they are not accurate,” Ridout said. “We’ve seen that for a very long time, just as we’ve seen the president attack the polls. It is pretty common rhetoric that we’re going to continue seeing that until we finally have those results.”
All in all, participation in polling is essential to producing unbiased data that is meant simply to track the progress of an election. Rather than blaming the polls for undesirable results and labeling them “fake news,” it may help more to take part in them and improve their accuracy.
Time and time again, polling has proven to be as accurate as participating voters allow it to be, but 2016 was a complete surprise to pollsters who had been watching and analyzing the data since the beginning of the election season.