Study finds repetitive questions in surveys yield unreliable data

By ANI | Published: January 30, 2022 12:32 PM

A new study has found that surveys that ask repetitive questions tire the respondents and yield unreliable data.


The study has been published in the 'Journal of Marketing Research'.

The study found that people tire from questions that vary only slightly and tend to give similar answers to all questions as the survey progresses. Marketers, policymakers, and researchers who rely on long surveys to predict consumer or voter behaviour will have more accurate data if they craft surveys designed to elicit reliable, original answers, the researchers suggested.

"We wanted to know, is gathering more data in surveys always better, or could asking too many questions lead to respondents providing less useful responses as they adapt to the survey," said first author Ye Li, a UC Riverside assistant professor of management.

"Could this paradoxically lead to asking more questions but getting worse results?" Li added.

While it may be tempting to assume more data is always better, the authors wondered if the decision processes respondents use to answer a series of questions might change, especially when those questions use a similar, repetitive format.

The research addressed quantitative surveys of the sort typically used in market research, economics, or public policy to understand people's values. Such surveys often ask a large number of structurally similar questions.

The researchers analyzed four experiments in which respondents answered questions involving choice and preference.

Respondents in the surveys adapted their decision-making as they answered more repetitive, similarly structured choice questions, a process the authors called "adaptation." This meant they processed less information, learned to weigh certain attributes more heavily, or adopted mental shortcuts for combining attributes.

In one of the studies, respondents were asked about their preferences for varying configurations of laptops. They were the sort of questions marketers use to determine if customers are willing to sacrifice a bit of screen size in return for increased storage capacity, for example.

"When you're asked questions over and over about laptop configurations that vary only slightly, the first two or three times you look at them carefully but after that maybe you just look at one attribute, such as how long the battery lasts. We use shortcuts. Using shortcuts gives you less information if you ask for too much information," said Li.

While humans are known to adapt to their environment, most behavioural research methods used to measure preferences fail to account for this fact.

"In as few as six or eight questions people are already answering in such a way that you're already worse off if you're trying to predict real-world behaviour," said Li.

"In these surveys, if you keep giving people the same types of questions over and over, they start to give the same kinds of answers," Li added.

The findings suggested some tactics that can increase the validity of data while also saving time and money. Process-tracing, a research methodology that tracks not just the quantity of observations but also their quality, can be used to diagnose adaptation, helping to identify when it is a threat to validity.
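
In the same spirit, one simple screen for adaptation (an illustrative assumption, not the study's own procedure) is to check whether a respondent's answers grow more repetitive as the survey progresses, for example by comparing how often consecutive answers repeat in the first and second halves of the questionnaire:

    # Hypothetical sketch: flag respondents whose answers become more
    # repetitive over the course of a survey (one crude sign of adaptation,
    # sometimes called straight-lining). The threshold and data are invented.

    def repeat_rate(answers):
        """Fraction of answers identical to the immediately preceding one."""
        if len(answers) < 2:
            return 0.0
        repeats = sum(a == b for a, b in zip(answers, answers[1:]))
        return repeats / (len(answers) - 1)

    def adaptation_flag(answers, threshold=0.25):
        """True if the second half of the survey is markedly more repetitive."""
        half = len(answers) // 2
        drift = repeat_rate(answers[half:]) - repeat_rate(answers[:half])
        return drift > threshold

    responses = [3, 5, 2, 4, 1, 4, 4, 4, 4, 4]  # made-up Likert-style answers
    print(repeat_rate(responses))      # ~0.44: overall repetitiveness
    print(adaptation_flag(responses))  # True: answers got more repetitive late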

Adaptation could also be reduced or delayed by varying the format of the task or by adding filler questions or breaks. Finally, the research suggested that to maximize the validity of preference-measurement surveys, researchers could use an ensemble of measurement methods: for example, questions that involve choosing between options available at different times, matching questions, and a variety of contexts.

"The tradeoff isn't always obvious. More data isn't always better. Be cognizant of the tradeoffs," said Li.

"When your goal is to predict the real world, that's when it matters," Li added

Li was joined in the research by Antonia Krefeld-Schwalb, Eric J. Johnson, and Olivier Toubia at Columbia University; Daniel Wall at the University of Pennsylvania; and Daniel M. Bartels at the University of Chicago.

(With inputs from ANI)

