Learning from poll autopsies

As polls pour out of early primary states, it’s worth examining just how complex and fragile those instruments are.

Recently we’ve been treated to three polling autopsies that illustrate potential pitfalls: two about Britain’s elections and one regarding our own 2014 midterms.

Aficionados may recall that polls for the United Kingdom predicted a tie, when in fact the Tories won by 6.5 percentage points. Here at home, 2014 polls consistently underestimated support for Republican candidates.
In the immediate aftermath of these elections, many (including me) offered thoughts about what could have gone wrong. Some of those hypotheses are backed by the analytic autopsies, others not.

As the authors of one report (Anthony Wells of YouGov and Stanford professor Doug Rivers, arguably one of the smartest people in the world) noted, while their polls’ error seems quite large, “3 or 4% effects are inherently difficult to analyze, even with quite large sample sizes.”

With that important caveat, let’s look first at what did not seem to make much difference.

Many claimed low response rates were the culprit. While everyone would prefer higher response rates, the forensic examiners found no evidence that response rates mattered in and of themselves.

Question wording, question order and mode of interview (online vs. telephone) were also dismissed as causes of the British failure.

That is not to say those factors can’t be important — we have lots of evidence they can make big differences — but they were apparently not responsible for this particular poll failure.

I argued late swing could be part of the problem; the British reports conclude it had a small but measurable impact of about 1 point.

Pew’s report on the U.S. midterms suggests a slightly greater effect. In their pre-election poll, those who actually voted favored Republicans by 3 points. In the end, those same voters gave the GOP a 6-point edge. And since the vast majority of House races are not competitive at all, it’s possible shifts were even greater in targeted districts.

A prime culprit in all of these reports (and in my earlier suggestions) is a failure to accurately model the likely electorate. In the U.K., the problem showed up in the age makeup of the samples: young people were oversampled, and the oldest cohorts were undersampled.

The young favored Labour and the old backed the Conservatives, so too many young people and too few of the elderly skewed the results in Labour's favor.

Moreover, getting broad age bands right was not sufficient. The Conservative advantage increased as one climbed the age ladder, so having the right number of 75-year-olds would have made polls more accurate than merely having the right number of voters age 60 and older.
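For readers who want to see the mechanics, here is a minimal sketch of that kind of age weighting. Every number in it is hypothetical, invented purely to illustrate the mechanism; none comes from the British reports.

```python
# Illustrative post-stratification by age. All numbers are hypothetical,
# chosen to show the mechanism, not taken from the polling reports.
# band: (raw sample share, true electorate share, Conservative margin in points)
bands = {
    "18-34": (0.30, 0.22, -15),  # young respondents overrepresented, favor Labour
    "35-59": (0.42, 0.40,  +2),
    "60-74": (0.24, 0.25,  +8),
    "75+":   (0.04, 0.13, +25),  # oldest cohort underrepresented, most Tory
}

def margin(shares):
    """Overall Conservative margin given a dict of band -> electorate share."""
    return sum(shares[b] * bands[b][2] for b in bands)

raw_shares  = {b: v[0] for b, v in bands.items()}
true_shares = {b: v[1] for b, v in bands.items()}

# Coarse fix: weight "60 and older" to its true combined share (0.38),
# but keep the raw sample's internal 60-74 vs. 75+ mix.
older = true_shares["60-74"] + true_shares["75+"]
mix = raw_shares["60-74"] / (raw_shares["60-74"] + raw_shares["75+"])
coarse_shares = dict(true_shares,
                     **{"60-74": older * mix, "75+": older * (1 - mix)})

print(f"unweighted sample:       {margin(raw_shares):+.1f}")    # ~ -0.7, a 'tie'
print(f"coarse 60+ weighting:    {margin(coarse_shares):+.1f}") # ~ +1.5
print(f"fine age-band weighting: {margin(true_shares):+.1f}")   # ~ +2.8
```

Note that in this toy example the coarse fix recovers only part of the correction, because the sample's mix within the 60-and-older band still skews toward the younger, less Tory end.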

How one composed the electorate mattered in the U.S. as well. Pew's report found that, with the same underlying data, the ways pollsters chose to construct their likely electorate could yield anything from a 2-point Democratic win to a 7-point GOP advantage.
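To see how the same interviews can produce such different bottom lines, consider a toy sketch with two electorate models: one built from a self-reported enthusiasm screen, one from voter-file turnout scores. All of the figures are invented assumptions for illustration; this is not Pew's data or any pollster's actual model.

```python
# Two "likely electorates" built from the same hypothetical respondents.
# Groups: (share of registered voters, GOP margin in points,
#          enthusiasm-screen pass rate, voter-file turnout probability).
# All numbers are invented for illustration.
groups = [
    (0.35, +12, 0.85, 0.90),  # older habitual voters, lean GOP
    (0.40,  -2, 0.65, 0.60),  # middle-aged, mixed preferences
    (0.25, -14, 0.75, 0.30),  # young, enthusiastic this cycle, rarely vote
]

def electorate_margin(col):
    """GOP margin when each group is weighted by share * inclusion weight."""
    weights = [g[0] * g[col] for g in groups]
    return sum(w * g[1] for w, g in zip(weights, groups)) / sum(weights)

print(f"enthusiasm-screen electorate: {electorate_margin(2):+.1f}")  # ~ +0.6
print(f"voter-file electorate:        {electorate_margin(3):+.1f}")  # ~ +3.6
```

Even in this tiny example, the turnout model alone swings the bottom line by 3 points; with more groups and more divergent models, spreads like the 9-point range Pew documented are easy to generate.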

The British have an excuse for getting the likely electorate wrong — they don’t maintain detailed voter files with demographic and turnout data on each voter.

We have no excuse, because we have a treasure trove of such data, though many public pollsters seem reluctant to use it.

And it’s that kind of data that counts. Though they don’t say it, Pew finds what I have long been telling disbelieving clients and what other studies have shown: likely voter screening questions aren’t very useful.

It turns out that the guesses I’ve employed in previous pieces, illustrating the difference between focusing on likely voters and the likely electorate, are close to accurate.

The widely used Gallup method employs seven separate questions to determine who is a likely voter. By that standard, 22 percent of "unlikely" voters actually cast ballots, while some 20 percent of "likely voters" don't.
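A bit of arithmetic shows why those error rates matter. In the sketch below, the 20 and 22 percent turnout figures come from the column; the preference margins and the share of respondents passing the screen are assumptions added for illustration.

```python
# What the screen's error rates imply. The 80% / 22% turnout rates are
# from the column; the margins and the share of respondents passing the
# screen are assumptions for illustration only.
share_likely     = 0.60   # assumed fraction of respondents deemed "likely"
turnout_likely   = 0.80   # 20 percent of "likely voters" stay home
turnout_unlikely = 0.22   # 22 percent of "unlikely" voters turn out

margin_likely   = +5.0    # assumed GOP margin among "likely voters"
margin_unlikely = -8.0    # assumed: screened-out voters lean Democratic

# Who actually votes: the survivors of each screen category.
from_likely   = share_likely * turnout_likely          # 0.48 of respondents
from_unlikely = (1 - share_likely) * turnout_unlikely  # 0.088 of respondents
total = from_likely + from_unlikely

actual = (from_likely * margin_likely + from_unlikely * margin_unlikely) / total
print(f"margin in the 'likely voter' pool: {margin_likely:+.1f}")  # +5.0
print(f"margin among actual voters:        {actual:+.1f}")         # ~ +3.0
```

In this sketch the screen overstates the GOP margin by about 2 points, not because any single question is wrong but because the people it drops still show up at the polls.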

Much remains to be done to improve election polling, but alas, all the analysts seem to agree: there are no silver bullets.

Mellman is president of The Mellman Group and has worked for Democratic candidates and causes since 1982. Current clients include the minority leader of the Senate and the Democratic whip in the House.
