Journalists should report selectively and critically on polls, expert advises

The 2022 midterm election is around the corner, and pre-election polls are filling voters’ inboxes. At ScienceWriters2022, political psychologist Jon Krosnick recommended that journalists know the signs of a good poll and consult a professional when needed. (Illustration created by Yezi Yang. Element credits: Pixabay, Sketchify Education via Canva.com)

Pre-election surveys fooled Americans in 2016. Presidential polls consistently put Hillary Clinton ahead of Donald Trump. Clinton did win the popular vote, but Trump won the Electoral College vote, which determines who moves into the White House. A sense of déjà vu struck many voters in 2020 when polling overstated support for Joe Biden or forecast wins in states he ended up losing.

What went wrong with the polls?

“Failure to scrutinize methodology,” said Stanford political psychologist Jon Krosnick, who laid out his reasoning Oct. 22 during the Council for the Advancement of Science Writing’s New Horizons in Science briefings at the ScienceWriters2022 conference in Memphis, Tenn.

Done properly, polling can be very accurate and provide invaluable insights into public opinion, said Krosnick, who studies survey methodologies. A well-designed poll should involve appropriate wording of questions and sampling that represents the population being examined.

How can a journalist tell a good poll from a bad one when reporting on elections? When in doubt, consult a professional. But having a general idea of what to look for when evaluating a poll can also go a long way.

Asking a survey question is not as straightforward as ordering a cheeseburger at a fast-food restaurant, Krosnick said. At McDonald’s, no matter how you phrase your order, you should end up with the same burger. In contrast, changing the wording of a question in a poll can generate vastly differing results.

In a 2014 study, for example, Krosnick found that changing an open-ended question (such as “Where was Barack Obama born, as far as you know?”) to a closed-ended one (such as “Do you think Barack Obama was definitely born in the United States, probably born in the United States, probably born in another country, definitely born in another country, or don’t you know enough to say?”) caused the percentage of respondents who said they believe Obama was born in the U.S. to drop from 84% to 56%.

In evaluating a poll, journalists should also check whether it selected participants randomly and adjusted samples proportionally—or weighted them—to reflect the demographic makeup of the population.
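The weighting idea can be sketched in a few lines. This is a hypothetical illustration of post-stratification weighting, not any pollster’s actual procedure; the age groups and shares are invented for the example.

```python
# Hypothetical post-stratification example: adjust a skewed sample so each
# demographic group's influence matches its share of the population.
# All group names and shares below are made up for illustration.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Shares actually observed in the (skewed) sample
sample_share = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# A respondent's weight is population share / sample share: it inflates
# underrepresented groups and deflates overrepresented ones.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
```

Here respondents aged 18-34, badly underrepresented in the sample, each count double, while the overrepresented 55+ group is scaled down.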

“Online river sampling” and other 2016 missteps

Many pollsters in 2016 failed to use random sampling. In an unpublished study, Krosnick found that about 47% of polls conducted in the last week before the presidential election canvassed users of news sites and social media via a cheap, convenient, but inaccurate method called online river sampling. Anyone online could fill out the surveys, and many people did so more than once. The method was therefore more likely to capture the views of a website’s frequent users than a representative subset of the target population.

Some polls during that election cycle were not weighted appropriately. In one such case, the USC Dornsife Daybreak Poll incorrectly concluded that Trump led in the popular vote. That poll’s main shortcoming was that it divided respondents into very small demographic groups and put too much weight on each group, wrote New York Times chief political analyst Nate Cohn.

Krosnick disagrees with Cohn’s assessment. The real cause, he said, was a failure to estimate how many people each person in the sample represented. USC Dornsife chose its panelists by randomly sampling a set of zip codes and then randomly sampling 40 addresses from each. This sampling method did not account for population differences between zip codes—think Manhattan vs. rural Nebraska. The Daybreak Poll thus overrepresented rural voters and, correspondingly, overestimated the popular vote for Trump.
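The flaw Krosnick describes can be made concrete with a toy calculation. The zip codes and household counts below are invented; the point is only that drawing a fixed 40 addresses per zip code gives every address the same raw count, even though an address in a populous zip code stands in for far more households.

```python
# Toy illustration of the fixed-addresses-per-zip flaw. Numbers are made up.
ADDRESSES_PER_ZIP = 40

# Hypothetical household counts for a dense and a sparse zip code
households = {"dense urban zip": 80_000, "sparse rural zip": 4_000}

# Unweighted, both zips contribute 40 respondents apiece despite a 20x
# population gap. The corrective weight is the number of households each
# sampled address actually represents:
weights = {z: n / ADDRESSES_PER_ZIP for z, n in households.items()}

print(weights)
```

Without that correction, the sparse zip code’s respondents count 20 times more than they should relative to the dense one, which is the kind of rural overrepresentation Krosnick says skewed the Daybreak Poll.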

It takes a trained eye to identify a robust poll. One resource to help journalists scrutinize survey methods is the American Association for Public Opinion Research. Journalists serve as a critical conduit between pollsters and the public, and thus have a responsibility to report only on rigorously conducted polls, Krosnick said.

“I believe strongly that scientifically done survey research can really help people have a voice in the way decisions are made that affect them.”

Yezi Yang (@yezitried) (she/her) is a PhD candidate who studies low-temperature geochemistry at Virginia Tech. Reach her at [email protected]. Yang wrote this story as a participant in the ComSciCon-SciWri workshop at ScienceWriters2022.