The failure of the polls in Britain

Not having done British politics since the Concorde was flying, and fully cognizant that others have spent those decades immersed in the subject, I had resolved not to delve into the failure of British polling in the recent election there.

However, when my Republican colleague Nathan Wurtzel tweeted, “As usual, I’m going to wait to see what @MarkMellman has to say about UK polling,” I felt challenged to at least take a look, though I’m really just dabbling.

Technical explanations — cellphones vs. landlines, online vs. telephone, and the like — don't hold much water. If they did, polls using one methodology would have been more accurate than those employing a different approach. They weren't.

A second category of explanations blames voters for pollsters' problems. As YouGov's president argued, "people have said one thing and they did something else in the ballot box." But why would voters lie to pollsters on Tuesday, yet be happy to tell them the truth in Thursday's exit poll, which was right on target?

So where did our cousins go wrong?

First, I believe they were operating on the wrong level of analysis. Their data were on one level; what they were trying to predict was on another. The polls measured each party's share of the national vote, while analysis and reporting emphasized the number of seats each would win in Parliament.

The whole U.K. polling enterprise is akin to predicting the number of House seats each U.S. party will get using only the generic vote.

Much of Britain's shock after the votes were tallied derived from the fact that a relatively even horse race in the polls produced a large Conservative advantage. Single-member, first-past-the-post districts are designed to magnify slender pluralities of the national vote.

The Tories got less than 37 percent of the vote but 51 percent of the seats in Parliament, whereas Labour picked up more than 30 percent of the vote but less than 36 percent of the seats.

Multiple parties complicate the picture, underlining the problem of extrapolating from votes to seats. The Scottish National Party (SNP) garnered 1.45 million votes, while the UK Independence Party (UKIP) got a much larger 3.88 million. Yet the SNP snapped up 56 seats and UKIP just one.

A second problem is one I've addressed before: undecideds. Britain's national polls showed that not a single voter was undecided.

It wasn’t so. British poll reports simply eliminate undecideds.

Dig through Lord Ashcroft’s final national poll, which gave Labour a 1-point edge, and you’ll find 9 percent undecided, 9 percent refusing to say how they would vote and 9 percent saying they would not vote at all. In response to a different question, 21 percent said they might well “end up voting differently” on Election Day — more than enough voters to transform what looked like a tie into a 7-point margin.

At least British pollsters would be wiping less egg from their faces had they been reporting a 27-27 tie, with 18 percent undecided or refusing to say how they would vote.

Third is the leadership question. Apparently confident that Britons follow their country's socially acceptable path and vote the party, not the person, British pollsters pay relatively little attention to attitudes toward those atop the ticket. Perhaps those attitudes deserve more focus.

Just two days out, one poll that did inquire found voters evenly divided on Prime Minister David Cameron’s performance, while they were 12 points more negative than positive about Labour Leader Ed Miliband.

Among undecideds, the results were even more lopsided: Cameron at net -3 and Miliband at net -26.

This strong tilt toward Cameron should have entered into pollsters' projections, and it should certainly have tipped them off that undecideds did have opinions, opinions quite hostile to Miliband.

Finally, there is herding. The likelihood that so many polls, conducted independently, would produce identical and identically incorrect results is close to zero, which at least raises the possibility that pollsters were following each other as well as the data.

The British Polling Council should be commended for undertaking a complete investigation.

Mellman is president of The Mellman Group and has worked for Democratic candidates and causes since 1982. Current clients include the minority leader of the Senate and the Democratic whip in the House.