As the 2024 presidential election looms, many Americans are anxiously following the latest political polls — and then reminding themselves how polling went wrong in recent elections.
There are good reasons to be skeptical of opinion polls, says social psychologist Jon Krosnick of Stanford University, who has studied such surveying methods for decades. Cheaply built online surveys have proliferated since the early 2000s. The result is a flood of unreliable data, making our information ecosystem even murkier.
Yet surveying — when done correctly — is critical for understanding what people think, what prompts their future behavior, and more. Polling a random sample of the public provides a snapshot of data that can inform policy, whether by capturing the unemployment rate or by learning from former smokers what helped them quit. And, of course, political polling — the surveying of public opinion on candidates and related issues — can gauge how voters are leaning in competitive elections and why.
In our politically polarized times, accurate polling is more important than ever, says Krosnick. He talked to Knowable Magazine about how a proper poll is conducted and why understanding public opinion matters for democracy. This interview has been edited for length and clarity.
Why does surveying the public matter?
Americans realize that government is not there simply to represent them as individuals; government is there to represent all of us. How are you going to know what the majority of the country thinks about the president’s job performance or about issues like gun control or abortion? You need surveys to tell you that.
How are they useful particularly in this polarized climate?
First of all, how do we know that the country is polarized? Surveys! We need sophisticated survey analysis by folks who go into detail on what people think, on how we are polarized, and on how we are not.
Secondly, polarization can be defined in various ways. Has the number of moderates dropped and the number of people holding extreme views grown, so that we are now two camps, each with rifles pointed at the other? The answer from surveys is no, actually. Most Americans are still moderates. And surveys show there are lots of issues on which large majorities of Americans agree. Climate change is one example: almost 80 percent of Americans believe that climate change is real and is a threat.
In Congress, it’s a different story. The Republicans line up in their voting very much on one side, the Democrats line up on the other side. And if you’ll forgive me, news professionals don’t find people agreeing with each other quite as newsworthy as the division that we see in Congress. News emphasis on disagreements has contributed to a widespread misunderstanding that the country is hopelessly divided, when actually there’s a lot of common ground.
What about for elections specifically — why do we need accurate polls?
The polls can be a valuable check.
One of the ways that we can keep peace in America, or at least enhance the acceptance of election results, is polling. If we do high-quality, scientific, pre-election polls before Election Day and they say “Harris” over and over, and then Election Day comes, and the government says “Harris,” and then researchers do exit polls that say “Harris” as well, that’s a lot of data points reinforcing the government vote count. If you measure the same thing multiple different ways and you get the same result, you can have confidence in the conclusion.
If the polls say, “Trump, Trump, Trump” beforehand, and the exit poll says “Trump” and the government says “Harris,” then there is going to be a basis to say: Did something go wrong here? There’s a public interest in investing in really high-quality polls.
What’s the most accurate way to survey the public?
The gold standard is face-to-face interviewing of a random sample of people living in households on the US Postal Service’s master address file. That method yields the highest response rates, and people are remarkably thoughtful and honest when talking to interviewers face-to-face. The government still does face-to-face interviewing for its most important surveys that produce numbers used widely by businesses, scholars studying the economy, economists and investors, and government agencies planning their actions.
That sounds expensive. Can we replicate this gold standard over the phone?
Face-to-face interviews are colossally expensive. Over the phone, you can generate a random sample of telephone numbers: start from the country’s area codes and the next three digits, known as the central office code, then append four random digits to those prefixes over and over to make lots of random numbers for landlines and cell phones. This is called random digit dialing, and it works very nicely to produce highly accurate results, even today.
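For the curious, here is a minimal sketch of that number-generation step in Python. The prefixes are hypothetical placeholders; a real random-digit-dialing operation draws from the full list of assigned area codes and central office codes and screens out nonworking exchanges:

```python
import random

# Hypothetical area code + central office code prefixes, for illustration only.
# A real RDD frame would use the full list of assigned prefixes.
PREFIXES = ["650-723", "312-555", "212-340"]

def random_phone_number():
    """Append four random digits to a randomly chosen prefix."""
    prefix = random.choice(PREFIXES)
    suffix = "".join(str(random.randint(0, 9)) for _ in range(4))
    return f"{prefix}-{suffix}"

# Generate a batch of candidate numbers to dial.
sample = [random_phone_number() for _ in range(1000)]
print(sample[:5])
```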
Face-to-face interviewing typically yields higher response rates and allows for longer interviews, asking more questions, and showing visuals to respondents. Although telephone surveys typically yield lower response rates, those lower response rates don’t translate into notably lower accuracy.
Can internet-based surveys also be accurate?
Random sample internet surveys can produce highly accurate measurements while eliminating some of the work involved. The company Ipsos, for example, mailed invitations to a random sample of addresses and invited those people to join its KnowledgePanel. People who join get emails every so often inviting them to complete a questionnaire for modest financial incentives, like entry into raffles or sweepstakes for cash or other rewards. Similar methods are used by the National Opinion Research Center at the University of Chicago, the University of Southern California Dornsife, Gallup, the Pew Research Center and others.
There have been lots of evaluations of the accuracy of all of these random-sample surveys. For example, you can ask the US government: How many people have a passport? And you can ask your survey respondents: Do you have a passport? If the proportion of those surveyed saying they hold a passport matches the true rate known by the government, that’s evidence of survey accuracy.
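As an illustration of that kind of benchmark check, here is a minimal sketch in Python that compares a survey’s estimated passport rate with a figure known from government records, using the standard 95 percent margin of error for a simple random sample. All of the numbers are hypothetical:

```python
import math

# Hypothetical values for illustration only.
n = 1000                 # survey sample size
survey_holders = 470     # respondents reporting they hold a passport
benchmark_rate = 0.48    # rate known from government records

p_hat = survey_holders / n

# 95% margin of error for a proportion from a simple random sample.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"Survey estimate: {p_hat:.3f} +/- {margin:.3f}")
if abs(p_hat - benchmark_rate) <= margin:
    print("Consistent with the benchmark: evidence of accuracy")
else:
    print("Outside the margin: possible bias in the survey")
```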
If researchers can still accurately survey public opinion — in-person, on the phone and online — why is survey quality a problem?
Unfortunately, the world is filled with cheap non-scientific surveys.
The most common method is to put banner ads on webpages saying, “Hey, do you like to do surveys? Click here.” That’s not random sampling. Those opt-in panels, where people sign up simply to make money through surveys, are thriving. There are actually online sites advising people which companies to work for in order to make the most money the most quickly without being caught answering thoughtlessly. And not all respondents answer honestly and accurately; some just answer without even reading the questions so they can earn as much money as possible.
A company can collect and disseminate low-quality data and still make lots of profit, because there’s very little human involvement in the work once the software is programmed. Research has shown that you get what you pay for with these cheapo methods.
Have scientifically conducted polls become less accurate? The answer is no. Have polls proliferated in recent years that are not scientifically conducted and that are horribly inaccurate? The answer is yes.
In addition to poor sampling methods, how else can surveys produce inaccurate results?
Question wording makes a big difference. For example, I could ask you, “Do you agree or disagree with this statement: Joe Biden is a good president?” Or I could ask you, “Do you think Joe Biden is a good president or that he’s not a good president?”
The first question is known to be biased because it creates a nudge in the direction of agreeing. People interpret the question as asking, “Do you agree or disagree with me that Joe Biden is a good president?” About 15 percent of any national sample will agree with both “Do you agree or disagree: Joe Biden’s a good president?” and its opposite, “Do you agree or disagree: Joe Biden is not a good president?”
The second version, “Do you think Joe Biden is a good president or not a good president?” does not have a nudge in it. It’s balanced and explicit and yields more accurate results.
What risks do these sketchy surveys pose to the public?
They may kill the entire field of survey research.
Nonscientific surveys can do a lot of damage. In 2016, many outfits told us that Hillary Clinton had more than an 80 percent chance of winning. Yet my team found that during the week before the 2016 election, of the polls in the battleground states, only one used random sampling. We’re preparing those findings for publication now. You shouldn’t be surprised by the bad predictions that were made, because it’s garbage in, garbage out.
Lots of important decisions are made based on surveys. Public health officials during Covid were making decisions about where to send resources based in part on insights from community surveys in which people provided nasal swabs. Bad survey research can kill people. In the absence of good-quality data, the government may do things that make a lot of people unhappy and hurt the best interests of the country.
How can someone determine whether a political survey was done in an accurate and scientific way?
When I’m deciding whether I will trust a poll, I read the methodology description. Was it done with a random sample face to face or by random digit dialing by telephone? If so, great. Was it done on one of these random sample internet panels? If so, great. If not, I’m not interested.
But most people don’t have the motivation and time to evaluate those details, and the organizations disseminating poll results don’t always describe the methods honestly. Unfortunately, there isn’t a good answer.
Are news organizations doing anything to combat the influence of crappy polls?
Not enough. Twenty years ago, major news organizations had full-time survey research experts. I know people who used to play that role. They would evaluate surveys before journalists could cover them. The vetters would say: “Nope, you can’t write about this poll, it’s crappy.”
For some of the most visible pre-election surveys, potential respondents were selected from lists of people registered to vote and were called on the phone. The problem is that researchers can’t get phone numbers for a large proportion of the people on those lists. That causes what researchers call “non-coverage,” and it’s not unbiased non-coverage. It undermines the randomness of the sample.
As we get closer to Election Day, I hope organizations are going to be spending money to do high-quality polls in battleground states with truly random samples via random digit dialing. But I doubt they will.
Do other countries have the same polling problems?
Some countries, especially in Western Europe, have developed random-sample internet panels doing excellent data collection. But mostly, the success of the non-scientific methods in the US has translated into a proliferation of crappy methods around the world. You can sell the data and claim whatever you want to claim and make plenty of profit. And remarkably, the organizations buying the data seem indifferent to the poor quality.
Are you doing anything to fix these polling problems?
My university is launching a new center for excellence in survey research. This new organization is going to help the world do better survey research and understand how to differentiate good-quality surveys from lower-quality surveys. Maybe in a couple years, your readers can look at our website for guidance on whether to trust a particular survey. I’m hopeful!
I’ve never been called for a political poll. Is that normal?
There are hundreds of millions of American adults, and each survey samples as few as 1,000 people. The probability that you will be called is tiny.
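A rough back-of-the-envelope calculation makes the point. The adult population figure is an approximation, and the number of polls per election year is an arbitrary assumption:

```python
# Approximately 260 million US adults (a rough 2020s figure).
adults = 260_000_000
sample_size = 1_000

p_one_poll = sample_size / adults
print(f"Chance of being sampled in one poll: about 1 in {adults // sample_size:,}")

# Even across, say, 500 random-sample polls in an election year,
# the chance of ever being contacted stays small.
polls = 500
p_ever = 1 - (1 - p_one_poll) ** polls
print(f"Chance across {polls} polls: about {p_ever:.2%}")
```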