

Humphrey Taylor is the Chairman of Louis Harris & Associates, Inc.
      
This article appeared in the May 4, 1998, edition of The Polling Report.



Myth and Reality in Reporting Sampling Error

How the Media Confuse and Mislead Readers and Viewers

by Humphrey Taylor
On almost every occasion when we release a new survey, someone in the media will ask, "What is the margin of error for this survey?" There is only one honest and accurate answer to this question -- which I sometimes use to the great confusion of my audience -- and that is, "The possible margin of error is infinite."

The follow-up from the journalist is often, "I can’t report that. My editor won’t let me run a story about surveys unless I can report the margin of error."

When the media print sentences such as "the margin of error is plus or minus three percentage points," they strongly suggest that the results are accurate to within the percentage stated. That is completely untrue and grossly misleading. The media’s intentions are honorable: they want to warn people about sampling error. But they might be better off assuming -- as most readers surely do -- that all surveys, all opinion polls (and, indeed, all censuses) are estimates, which may be wrong.

This is surely a classic case of a little knowledge being a dangerous thing. In the real world, "random sampling error" -- the likelihood that a pure probability sample would produce replies within a certain band of percentages purely as a consequence of sample size -- is one of the least of our measurement problems (the simulation sketched after the list below illustrates this idealized band). The main problems of survey measurement -- or, more accurately, mismeasurement -- include:

• the sample design (for telephone surveys, how the telephone numbers are selected and how individuals are selected within each household);

• the non-availability problem (are people who are available different, on the variables we are measuring, from people who are not available?);

• the refusal problem (do people who refuse to be interviewed differ on the particular variable we are measuring?);

• question wording;

• question order;

• deliberate or unconscious lying or false reporting by respondents;

• inappropriate or inadequate weighting of the data.
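
To make the distinction concrete, here is a minimal simulation of the idealized band that "random sampling error" describes -- a sketch only, in which the 50 percent population value and 1,000-person sample are illustrative assumptions, and every problem listed above has been assumed away:

    import random

    TRUE_SHARE = 0.50   # assumed population value (illustrative)
    SAMPLE_SIZE = 1000  # a typical national poll
    TRIALS = 10_000

    within_3_points = 0
    for _ in range(TRIALS):
        # Each respondent is an independent random draw from the
        # population -- the "pure probability sample" of theory.
        hits = sum(random.random() < TRUE_SHARE for _ in range(SAMPLE_SIZE))
        if abs(hits / SAMPLE_SIZE - TRUE_SHARE) <= 0.03:
            within_3_points += 1

    print(f"{100 * within_3_points / TRIALS:.1f}% of samples fell within "
          "+/- 3 points of the true value")

Roughly 95 percent of the simulated samples land within three points of the truth -- but only because sample size is, by construction, the sole source of error here.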

In addition, when we are trying to predict election results, we have several other problems to worry about, including:

• differential turnout (a huge factor in U.S. elections, where predicting who will vote is often harder than measuring how they are likely to vote);

• late-swing (i.e., people changing their minds after we’ve surveyed them).

All of these variables have been shown in various studies to be sources of error -- sometimes quite substantial error. Unfortunately, we cannot quantify the effects of these sources of error on our results, or validate our results, within any kind of reasonable budget.

For this reason, we (Harris) include a strong warning in all of the surveys that we publish. Typically, it goes as follows:

In theory, with a sample of this size, one can say with 95 percent certainty that the results have a statistical precision of plus or minus __ percentage points of what they would be if the entire adult population had been polled with complete accuracy. Unfortunately, there are several other possible sources of error in all polls or surveys that are probably more serious than theoretical calculations of sampling error. They include refusals to be interviewed (non-response), question wording and question order, interviewer bias, weighting by demographic control data, and screening (e.g., for likely voters). It is difficult or impossible to quantify the errors that may result from these factors.
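
The figure that fills the blank in such a warning is conventionally derived from the textbook formula for the 95 percent confidence interval of a proportion; whether Harris uses exactly this procedure is an assumption on my part. A minimal sketch:

    from math import sqrt

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """95% margin of error in percentage points. p = 0.5 is the
        worst case, which published figures conventionally assume."""
        return z * sqrt(p * (1 - p) / n) * 100

    for n in (500, 1000, 1500, 2000):
        print(f"n = {n:>4}: +/- {margin_of_error(n):.1f} points")
    # n =  500: +/- 4.4 points
    # n = 1000: +/- 3.1 points
    # n = 1500: +/- 2.5 points
    # n = 2000: +/- 2.2 points

For a 1,000-person sample this yields the familiar "plus or minus three percentage points" -- a statement about sampling theory alone, not about the other sources of error the warning goes on to list.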

If journalists are the least bit interested in all of this -- and alas, most of them are not -- they may well ask, "If there are so many sources of error in surveys, why should we bother to read or report any poll results?" To which I normally give two replies:

1. Well-designed, well-conducted surveys work. Their overall record is pretty good. Most social and marketing researchers would be very happy with the polls’ average forecasting errors (less than 2% on the two main candidates in presidential races since 1952). However, there have been enough disasters in the history of election prediction that readers should be cautious in interpreting the results.

2. (And this is more effective.) I re-word Winston Churchill’s famous remarks about democracy and say, "Polls are the worst way of measuring public opinion and public behavior, or of predicting elections -- except for all of the others."



