Analysis
CA120: In political polling, art and science join hands
Friday night, my wife Jodi got home after a long week. Trying to decide what we should do, she flipped through some channels, looking at the networks, a couple sports channels, a few news channels, HBO and Showtime, and then finally announced “I don’t think there’s anything good on TV, let’s go see a movie.”
How could she really know that there wasn’t anything good on TV? She didn’t watch all 300 channels for the three hours that we would have been home that evening. Instead, she checked about 15 channels — a “sample” of Comcast offerings — and made a choice based on her knowledge of past, similar Friday nights. Armed with a blend of research and common sense, she had what she needed to make plans for that night.
The following Monday, I got a call from Jeff in our office with the amazing news that “everyone is crazy!”
Naturally, I was concerned, so I called right away to find out what had happened.
His finding of worldwide mania was based on one voicemail from a confused vendor, and from that “sample” of one person he had decided on the spot that absolutely everyone was crazy.
Both Jeff and Jodi understand they don’t have to watch every television channel or talk to everyone to make a decision or reach a conclusion about the state of the world.
But the differences in how they came to these conclusions, and the confidence with which they held their beliefs, are examples of some of the strengths and pitfalls of political polling.
Political polling is the art and science of determining voter attitudes, and it gets the most attention in the weeks or months leading up to a critical election.
It is not magical, and it is not always predictive, but a good poll will tell you approximately where voters are at a given point in time, and in-depth polling can help us understand what issues are driving the electorate and the underlying rationale for who or what they are supporting.
What makes a poll good? How should we perceive the polling sent out in a press release by a campaign versus the nonpartisan polling by the Field Poll or the Public Policy Institute of California? How do national polls, like those we see for president, differ from the polling in a local city council race?
Here are the basics of how to be a better and more discerning consumer of polling.
Polling is just a snapshot in time
There is a difference between viewing a poll as correct and viewing it as being predictive. A poll can correctly peg the electorate’s attitudes on an issue or a candidate, but since it is just a moment in time, that does not mean that it is telling you the actual outcome three weeks or three months down the road. In polling, questions are often framed “If the election were held today…” as a way of trying to capture the voters’ current attitudes. And those might change.
If a pollster asked you this morning “If you had to order your dinner right now, what would you have?” your answer should reflect your current feelings on the topic. But, after a long lunch and a couple drinks outside the office, your appetite may have changed.
The answer to your polled question might reflect your interest in healthy food, but your final decision might be based on the convenience of take-out. The poll would probably turn out to be wrong, but that isn’t necessarily a flaw in the poll itself; the flaw is in over-relying on your early answer to predict your final choice.
In polling, size matters
Polling is based on the notion that you can determine the attitudes of the whole by knowing the attitudes of a small, representative group. Professionals call this the “N” size of the poll, and you often see surveys with an N ranging from 300 in a city council race to 500, 700 or more in a congressional or statewide race. The larger the N, the smaller the survey’s margin of error.
The margin of error is a function of the N and the size of the population being surveyed.
For example, a statewide poll for governor with 700 respondents out of 17 million total voters would have a 3.7% margin of error, but at 400 respondents that error margin grows to 4.9%. If a poll shows a candidate having a lead of 3% in either of these surveys, then we would say that lead is “within the margin of error” and effectively a tie.
But just because a poll has a large N doesn’t mean that all of its findings are equally strong.
A poll can have a small, 3.7% margin of error, but then come with an analysis that Latinos support or oppose a candidate, without revealing that only 25 of the respondents were Latino. That N=25 finding would have a nearly 20% margin of error, making the result no better than a guess.
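For readers who want to see where these figures come from, here is a minimal sketch of the standard worst-case margin-of-error calculation at 95% confidence, using p = 0.5 to maximize variance; the function name and output formatting are illustrative, not part of any pollster’s actual toolkit.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error at 95% confidence for a simple random sample.

    Uses p = 0.5, which maximizes p * (1 - p); with 17 million voters, the
    finite-population correction is negligible and is omitted.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (700, 400, 25):
    print(f"N = {n:>3}: +/- {margin_of_error(n):.1%}")

# N = 700: +/- 3.7%
# N = 400: +/- 4.9%
# N =  25: +/- 19.6%
```

The roughly 20% figure for a 25-person subgroup is exactly what makes that kind of crosstab no better than a guess.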
Polling methods are changing
There are a number of different methods of political polling.
The most common are phone polls, in which a live caller takes a respondent through a 10-to-20 minute survey. But not all phone polls are created equal.
The best polls begin with a voter file, ensuring that only actual registered voters are called. These voter files include what past elections a voter has participated in, their ethnicity, language preferences, and both home and cell phone numbers.
There has been a lot of discussion about how increasingly hard it is to reach voters now that many only have cell phones. But this problem is mostly about the cost of calling cell phones, which under current law must be dialed manually, making surveys more expensive.
Some pollsters — particularly the national firms — use a method called “random digit dialing,” where they simply call phone numbers within a certain area code and set of prefixes and ask voters if they are registered. These can be effective in larger surveys, particularly national polls that might be looking to gauge attitudes of all residents, not just registered voters.
Many pollsters are expanding beyond these traditional methods and conducting email and online surveys, touchtone polls, or in-person interviews.
These can be extremely effective, particularly when a survey needs a very large sample for modeling, or a large N for a subgroup of particular interest, like Latinos or Asians, that would be poorly represented, or too expensive to reach, in a traditional phone survey.
One of the fastest growing polling methods is the web-based survey, conducted with people who have agreed in advance to take consumer surveys in exchange for payment or gift cards.
These so-called “panel” surveys are used regularly by companies attempting to gauge the appeal of a product or marketing strategy, but they are generally not matched back to the voter file, so they are less common, and less effective, as political survey methods.
Representative samples
If you’re trying to “poll” a bowl of soup to find out how tasty it is, you can take a sip from any part of the bowl, as most soups taste the same no matter where you start.
But, if you were polling a salad, you would need to try some of the lettuce, some croutons, chicken, and tomatoes. Trying too much of one, or not enough of another, could completely distort your analysis because the bites you constructed were not representative of the whole salad.
San Diego is a good example of a diverse area where voters are more like a salad.
A polling firm called Survey USA did a round of random digit dial polls in the 2013 mayor’s race that completely flopped because its samples were not representative of who was likely to vote in that election. Its polls were based on a set of voters who were 25% Latino, 38% Republican, and 25% under 30 years old. On Election Day, turnout was 12% Latino, 25% Republican and 11% under 30.
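To see why that mismatch matters, here is a toy illustration of how the same group-level preferences produce different toplines under the polled mix versus the actual turnout. The candidate support numbers are invented for the example; only the 25% versus 12% Latino shares come from the San Diego race above.

```python
# Hypothetical group-level support for a candidate (assumed values, not real data)
support = {"latino": 0.65, "non_latino": 0.45}

# Group shares in the polled sample vs. actual Election Day turnout
polled_sample = {"latino": 0.25, "non_latino": 0.75}
actual_turnout = {"latino": 0.12, "non_latino": 0.88}

def topline(shares, support):
    """Weighted average of group-level support, weighted by group share."""
    return sum(shares[group] * support[group] for group in shares)

print(f"Poll topline:    {topline(polled_sample, support):.1%}")   # 50.0%
print(f"Election result: {topline(actual_turnout, support):.1%}")  # 47.4%
```

Nothing about the respondents’ honesty changed; the error comes entirely from weighting the ingredients of the salad in the wrong proportions.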
Creating a sample that is representative of voter turnout is more complicated in primaries or special elections, where participation can fluctuate, and in caucuses, like those in Nevada and Iowa, where there might be little data on who will vote and turnout can double because of late interest in the race or shrink because of bad weather or fading candidate appeal.
Much of my day-to-day work includes cutting polling samples: We take the state’s 17 million voters, then select or delete voters from a potential file in such a way that randomness is assured within a pollster’s preferred likely-voter universe.
The methodology ensures that if you’re targeting a universe of voters who are 12% Latino, 11% under 30 and 25% Republican, you don’t have to wade through 17 million voter records to find them. Instead, the pollster begins with a list of voters that is exactly representative of the universe and can even be segmented into hundreds of buckets based on gender, age, ethnicity, geography, or other factors.
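A rough sketch of that sample-cutting step might look like the stratified draw below. The field names and single-dimension strata are assumptions for illustration; matching several marginals at once (ethnicity, age and party) would take cell-based quotas or raking, and a real voter file has far more columns.

```python
import random

def stratified_sample(voter_file, stratum_of, targets, sample_size, seed=2016):
    """Draw a sample whose strata shares match a target likely-voter universe.

    voter_file:  list of dicts, one per registered voter
    stratum_of:  function mapping a voter record to a stratum label
    targets:     dict of stratum label -> desired share of the sample
    """
    rng = random.Random(seed)
    by_stratum = {}
    for voter in voter_file:
        by_stratum.setdefault(stratum_of(voter), []).append(voter)

    sample = []
    for stratum, share in targets.items():
        k = round(share * sample_size)
        sample.extend(rng.sample(by_stratum[stratum], k))
    return sample

# e.g. a 700-person universe cut that is 12% Latino (the "ethnicity" field is hypothetical):
# cut = stratified_sample(
#     voters,
#     lambda v: "latino" if v["ethnicity"] == "latino" else "other",
#     {"latino": 0.12, "other": 0.88},
#     700,
# )
```

The point, as the author notes, is that the pollster starts from a list that is already representative of the target universe rather than hoping that random dialing lands there.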
As a consumer of polling, these breakdowns of the voters polled are the first thing you should look at. Before you actually look at the answers to the questions, you want to find out who was polled and how representative they are of the voters that you expect to see casting ballots in the election.
Poll respondents are only human
If you believe polls are infallible, then you also believe people are infallible. But, experience in polling reminds us that the people on the other end of the line are themselves experiencing and reacting to the survey. A bacterium in a petri dish doesn’t change colors depending on what it thinks of a researcher, but a poll respondent will.
In the weeks leading up to the 1982 governor’s race, LA Mayor Tom Bradley was leading in the polls. But on Election Day he narrowly lost to George Deukmejian, an upset that led to a significant amount of introspection by pollsters who had gotten the race wrong.
The analysis of this race and others led to a better understanding of a bias in polling, later called the “Bradley Effect.” This bias stems from respondents’ need to feel they are giving the socially acceptable answer, even when sharing their views with an unknown person on the other end of the phone.
In the 1982 election, if you were a Democratic voter planning to vote for the Republican, it might be presumed that you were not voting for Bradley because he was African American. So, rather than sound like a racist, you would simply answer that yes, sure, you were voting for the Democratic candidate.
It is very possible that this kind of effect was seen in California with Proposition 8, the 2008 ban on gay marriage, which was failing in polling but ended up passing.
Some poll respondents who knew they were going to vote for the measure might have felt that was the less socially desirable answer, so they would say they were undecided or voting no.
This desire of respondents to give the “right” answers also contributes to one of the failures of surveys that don’t begin with voter data.
If a poll begins with a question like, “Are you a registered voter?”, then most know the socially correct answer is “yes,” littering a poll with unregistered voters who were just unwilling to admit it. And when asked “Did you vote in the last election?”, the respondents again will inflate their voting record, possibly stating that they are a very regular voter when, in fact, they rarely or never actually vote.
One fun game in polling is to include a past election in the survey, like “In 2014, did you vote for Governor Brown?” or “In 2012, did you vote for President Obama?” You will usually find that the results track not the actual outcome but the current popularity of the incumbent.
You can imagine that a poll taken in 1974 asking “Did you vote for Richard Nixon in 1972?” would have suggested he lost his re-election in a landslide to George McGovern.
Self-identification of registration or likelihood to vote can also have the opposite effect: as one study showed, a small portion of the most likely registered voters will sometimes say “no, I’m not registered” or “nope, I’m not gonna vote” as a way of getting pollsters to stop calling them.
Polling can have a self-reinforcing or self-cancelling impact
When a poll shows a candidate winning, their fundraising spikes, volunteers begin to crowd the campaign office and the media begins to confer “frontrunner” status and additional coverage. This is the self-reinforcing impact of polling, and exactly why a candidate’s internal poll showing they’re winning is rarely kept a secret.
Alternatively, polling can sometimes have a self-cancelling impact.
This might have been seen this year in Iowa, when the Des Moines Register came out with polling showing Donald Trump surging and, for the first time, leading in the Republican caucuses.
But, instead of fueling a Trump win, this poll exposed two underlying factors: a high negative rating for Trump and a willingness among Republican primary voters to vote for whichever alternative candidate had the best chance of beating him on that day.
Voter turnout soared, and many late deciders coalesced around Ted Cruz, handing him a small margin of victory. This win for Cruz could be, in part, due to a self-cancelling impact of Trump leading in that and other late polls.
Pushing and Trolling
Sometimes a poll isn’t a poll at all, or a serious poll becomes trivialized by silly trolling questions designed to capture attention, but shed little light on a campaign.
In the 2000 Republican presidential primary, someone did a push poll against John McCain that went to hundreds of thousands of voters with one simple question: “Would you be more likely or less likely to vote for John McCain for president if you knew he had fathered an illegitimate black child?” This was not an actual polling call with the intent of collecting data; it was a persuasion technique. The campaign paying for it was trying to mask the question as a poll so that it sounded more credible and could be spread without being associated with the opposing campaign.
In the 2016 election we are seeing a new kind of poll – the Troll Poll.
This refers to polling questions that are sometimes included within an otherwise legitimate poll but are simply outrageous. Questions like “Do you support a ban on homosexuals entering the United States?” or “Were Japanese internment camps a good idea?” are asked of voters who have already identified support for a candidate, with the results released in a manner that lights up social media.
Yes, it is fun to see things like Donald Trump voters, by a 16-point margin, wishing that the Confederate Army had won the Civil War, but it isn’t exactly good polling.
The Exit
Finally, we all see Election Day coverage dominated by exit polls, which ask voters on the spot who they voted for, when they decided, and why. In a state like California, these exit polls will also call voters who have already cast their ballots to get their responses.
These polls could be thought of as the most predictive, since they are the equivalent of asking you what you want for dinner when you’re actually sitting at the dinner table. But they are also fraught with errors and inconsistencies because of their trouble effectively determining voter outcomes from somewhat non-random samples in just a few precincts.
The larger exit polls can give good topline results for something like the national or statewide vote, but generally have trouble shedding much light on smaller geographic areas or ethnic subgroups.
To be a better consumer of polling, one should consider all of these issues. How big was the poll? Is the story it’s trying to tell consistent with polling’s predictive ability? Does it use good voter data as its starting point, and does it reveal its questions and full methodology?
And when Jeff calls you with a claim that “everyone is crazy!” ask him what his N is.
—
Ed’s Note: Paul Mitchell, a regular contributor to Capitol Weekly, is vice president of Political Data Inc., and owner of Redistricting Partners, a political strategy and research company. Both firms provide information and strategy to Democratic, as well as Republican and independent candidates.