Question order and poll results

Forty years ago, Connecticut featured a marquee Senate race.

Incumbent Lowell Weicker cemented his reputation as an independent Republican leading the charge against Richard Nixon’s Watergate crimes on the Senate investigating committee.

Weeks before Election Day, the two newspapers covering the race most closely released statewide polls. The New York Times poll put Democratic challenger Toby Moffett 5 points ahead.

That same morning, the Hartford Courant poll showed Weicker “had surged ahead of his Democratic challenger,” posting a 16-point lead. In the net, accounting for rounding, a difference of 22 points.

Two reputable polls, one with a sample of 400 and the other sampling over 1,100, produced diametrically opposed results that could not be accounted for by the margin of error.
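To see why sampling error alone cannot explain the gap, here is a minimal sketch of the standard margin-of-error calculation for those two sample sizes. It assumes a 95 percent confidence level (z = 1.96) and the worst-case proportion of 50 percent; the column itself does not specify the polls' confidence levels, so these are illustrative assumptions.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a single proportion: z * sqrt(p(1-p)/n).
    Defaults: 95% confidence (z = 1.96), worst-case p = 0.5."""
    return z * sqrt(p * (1 - p) / n)

for n in (400, 1100):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} points")
# n = 400:  roughly +/- 4.9 points
# n = 1100: roughly +/- 3.0 points
```

Even stacking both margins end to end yields about 8 points, far short of the 22-point disagreement, and that is before noting that the margin of error on a candidate's lead (the difference between two proportions) is larger still.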

Deepening the mystery, the two polls had quite similar results in the governor’s race, which was also on the ballot that November.

Pollsters noted that the Times’s survey overrepresented college-educated voters and underrepresented those without a college degree. (Sound familiar?)

Other fixes and checks failed to close the gap.

One potential culprit remained unexplored: question order.

To assess how important this difference could be, The Times embedded a split sample experiment in a poll conducted a couple of weeks later. Half the sample was asked about the governor’s race first and Senate second; the other half, the reverse.

Of course, the results were not comparable to the earlier poll. Weeks of campaigning filled the time between the two polls and the race had moved on.

When the gubernatorial race was asked first, Weicker led by 2 points in the Senate race. When the Senate race was first, Weicker led by 17.

(On Election Day, Weicker won by 5, so neither of the early October polls proved particularly prescient.)

While everything is obvious once you know the answer, no one would have predicted an impact of this magnitude when designing the original polls.

Does the placement of the horse race always matter that much? Certainly not. Even in this case it mattered a great deal for the Senate race but not much at all for the gubernatorial contest.

But such question order effects can matter a great deal — and no one can predict when they will manifest themselves and when they won’t.

Pollsters today vary in the questions they put before the horse race of interest. Some put the vote question right up front, before any information is provided. Others put the match-up deep in the survey, after questions that evoke feelings about the country’s direction, key issues and more. That’s of course in addition to which other horse race questions are posed, and in what order.

Yet question order effects are rarely the focus in discussions of polling error or differences in poll results.

The simple truth is, pollsters make myriad decisions in designing and conducting surveys and, like the Wizard of Oz, ask consumers to “pay no attention to that man behind the curtain.” Decisions about question wording, question order, sampling, weighting and more impact poll results.

Sometimes there is a scientifically right way to proceed, but often there isn’t. It’s a matter of judgment.

As we’re treated to a raft of polls in the closing weeks of campaign 2022, remember from campaign 1982 that seemingly small choices pollsters make can have big impacts on survey results.

Mellman is president of The Mellman Group and has helped elect 30 U.S. senators, 12 governors and dozens of House members. Mellman served as pollster to Senate Democratic leaders for over 20 years, as president of the American Association of Political Consultants, and is president of Democratic Majority for Israel.
