Cornell-led election survey seeks to improve science of polls

In Florida recently, a registered Republican answered a pollster’s questions over his cell phone, while a Spanish-speaking Democrat responded to a survey invitation received by mail. In Michigan, a white male voter participated after a link was texted to his randomly selected number.

Those are just a few of the thousands of potential voters being reached in diverse ways by a Cornell-led survey that aims to provide the most comprehensive understanding of this year’s midterm elections on Nov. 8 – and to advance the science of survey research in the process.

Boasting a sample size 20 times larger than most nationally representative surveys, the federally funded 2022 Collaborative Midterm Survey will collect extensive information on voters’ attitudes toward candidates and key issues including the economy, abortion, race relations, political polarization and authoritarianism.

Importantly, the sample of roughly 20,000 also means that – for the first time – it will be possible to evaluate how different survey methods can be combined to offer the most representative data, not just across the U.S. but in key states such as California, Florida and Wisconsin, said Peter K. Enns, professor of government and public policy in the College of Arts and Sciences and the Cornell Jeb E. Brooks School of Public Policy.

“There’s been a massive proliferation of polls that are increasingly using different methods to collect data, but no way to systematically analyze what the best approach is,” said Enns, the Robert S. Harrison Director of the Cornell Center for Social Sciences. “Our ultimate goal is to provide a roadmap for improving the science of surveys.”

Enns is principal investigator of the midterm survey, which was awarded $2 million by the National Science Foundation. Co-principal investigators are Jonathon P. Schuldt, associate professor in the Department of Communication in the College of Agriculture and Life Sciences, and executive director of the Roper Center for Public Opinion Research; and Colleen L. Barry, inaugural dean of the Cornell Brooks School.

The team plans to unveil its findings at a Jan. 20 data launch and hackathon event at Cornell Tech in New York City.

The challenge they hope to address is highlighted in an analysis Enns co-authored of more than 350 polls conducted in the two months preceding the 2020 presidential election. Randomized, or probability-based, surveys – long considered the “gold standard” by survey firms and researchers – were the least accurate, on average. Nonprobability-based surveys, including so-called “convenience samples” of pre-selected respondents, fared slightly better. Mixed-method surveys performed best overall but showed wide variation.

“This goes against the science,” Schuldt said. “Random, probability-based samples aren’t behaving as if random anymore.”

Fast-changing technologies and declining response rates have made it increasingly challenging for surveys to collect representative samples, according to the researchers.

After an open call for proposals, the Cornell team in September selected three teams to conduct the monthlong midterm survey, collecting data from Oct. 26 to Nov. 22: SSRS; the Iowa Social Science Research Center and researchers at the University of Iowa; and partners Gradient Metrics and Survey 160. Each team will collect probability- and nonprobability-based samples and utilize at least two recruitment methodologies – reaching voters in both English and Spanish via mail, text, online panels or calls to land lines and mobile phones.

For example, Gradient plans to send more than 1 million survey invitations by text message, and more than 18,000 by mail. The Iowa researchers plan to contact more than 31,000 cell phones and 8,500 land lines using random digit dialing. SSRS will issue approximately 45,000 invitations using probability and non-probability methods, in addition to mailing invitations to 3,000 randomly selected addresses in Wisconsin.

Each team is asking the same survey questions at the same time about U.S. House, U.S. Senate and gubernatorial races, and will collect large samples through each methodology. The researchers say that will enable direct comparisons of the different approaches and an assessment of the most cost-effective combinations currently available to survey scientists. That hasn’t been possible to date, they said, because polls vary so much in their budgets, questions, methods, sample sizes and transparency.

David Wilson, dean of the Goldman School of Public Policy at the University of California, Berkeley, and a senior adviser to the Cornell team, said the 2022 Collaborative Midterm Survey could transform the study of political attitudes and behavior during election season.

“The principal investigators have assembled some of the most innovative minds in survey methodology and public opinion, and partnered them with diverse practitioners in academia and the profession,” Wilson said. “The result is a new framework for investigating our democracy, advancing the science of surveys and politics.”

Learning about the electorate and the state of American democracy is a top priority for the midterm survey project. But Barry said its insights into best practices – ideally to be updated every year or two – would benefit influential and costly government surveys across topics such as employment, consumer spending, health and the environment.

“Understanding these survey methods,” Barry said, “holds implications well beyond political surveys.”

Read the story in the Cornell Chronicle.
