Gathering high-quality responses from online surveys is a fundamental prerequisite for building customer insights and making sound business decisions. Meeting that prerequisite, however, can be a major challenge. This is especially true when respondents are sourced from broad general-audience panels and their identities can’t be verified for privacy reasons.
While collecting data from a reputable sample provider and using good survey design techniques go a long way towards ensuring data quality, they are often not enough. Inevitably, some survey respondents will set out to complete survey questions as fast as they can, particularly when a monetary incentive is involved. Failing to filter out unqualified, inattentive, or fraudulent respondents will, at a minimum, add unnecessary noise to collected data and, at worst, may invalidate your survey’s findings.
In fact, based on our data, implementing trap questions and attention screeners has resulted in catching and removing an average of 15 percent of respondents, with some surveys seeing unqualified-respondent rates well over 30 percent. In this article, we share some of our insights to help researchers minimize the impact of such respondents.
Implementing trap questions in a survey design
Fortunately, it is easy to establish relatively straightforward study design techniques that can help manage the quality of survey respondents. One of them is inserting trap questions, also known as attention checks, at strategic points in the survey. These are questions designed to filter out respondents who are not answering honestly or carefully. They should be easy and obvious to answer, as they are not meant to test or trick the respondent’s knowledge.
We use trap questions in every survey, regardless of whether respondents come from a very specialized and highly managed panel or a quick general-population sample. While we tend to find more bad respondents in general-population panels, in our experience even high-cost specialized panels suffer from inattentive respondents. Therefore, we prefer to have our own independent validation of respondent quality inside every survey.
Attention checks can be performed using both open-ended and multiple-choice questions. Both types can work to great effect, but multiple-choice questions, our focus for today, are typically a little easier to implement. Building quality checks into a survey with open-ended questions typically requires natural language processing (NLP), which is available in platforms such as ours but may not be readily available in other survey tools. Nevertheless, there are various styles of trap questions that can be used in any survey, each tailored to test a different type of problematic respondent behavior.
Trap questions help you catch low-quality respondents
When we design studies at GroupSolver, we typically use more than one trap question in our surveys, and we encourage subscribers to our platform to do the same. The reason for that strategy is simple: even respondents picking answers at random get them right sometimes. Implementing more than one attention check, however, sharply diminishes the chances of an inattentive respondent completing the full survey. Reviewing data from a sample of our recent studies, we find that almost 12 percent of all survey takers are caught by the first trap question in a study; when we ask subsequent traps, a further 7 percent are caught by the second and just under 5 percent by the third.
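The effect of stacking traps can be sketched with a quick back-of-the-envelope calculation. The snippet below is purely illustrative: it assumes a guesser picking answers uniformly at random, with exactly one correct option per trap, and trap sizes (six and five options) similar to the examples that follow.

```python
# Illustrative only: models a purely random guesser facing a series of
# multiple-choice trap questions, each with exactly one correct option.

def survival_probability(option_counts):
    """Chance a random guesser answers every trap correctly,
    assuming independent guesses across traps."""
    p = 1.0
    for n in option_counts:
        p *= 1.0 / n
    return p

one_trap = survival_probability([6])          # one 6-option trap
three_traps = survival_probability([6, 5, 6]) # three traps of mixed sizes

print(f"Survives one trap:    {one_trap:.1%}")
print(f"Survives three traps: {three_traps:.2%}")
```

A random guesser passes a single six-option trap about one time in six, but survives three such traps only about half a percent of the time, which is why we layer multiple checks rather than rely on one.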
For example, we deployed three trap questions in our recent Election 2020 study (N = 332, general population sample from one of the most prominent panel providers). One trap question asked: “With which species do you identify?” and presented the choices as Rock, Bunny Rabbit, Human, Fish, Magic Carpet, or Vampire. Any respondent who is paying attention would choose the obviously correct answer: Human. 68 respondents answered the question incorrectly and were terminated from the study.
In the same study, we also asked: “Starting counting with Monday, what is the third day of the week?” The choices were: Seven, Saturday, Wednesday, Excellent, or Not Sure. The correct choice, Wednesday, is easy only if one takes the time to read the question and the options, especially since the answer choices are a deliberately mixed set. 18 respondents answered the question incorrectly and were therefore terminated from the study.
These trap questions, along with one other, terminated a total of 183 respondents, or 35 percent of study participants. While this was an unusually high rate of bad respondents, it demonstrates the importance of quality-control measures inside the survey.
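As a sanity check on those figures, the 35 percent rate is consistent with the reported N = 332 if that N counts respondents who completed the study (an assumption on our part), so that total entrants equal completes plus terminations:

```python
# Hypothetical sanity check: assumes the reported N = 332 counts only
# completed respondents, so total entrants = completed + terminated.
terminated = 183
completed = 332
termination_rate = terminated / (terminated + completed)
print(f"Termination rate: {termination_rate:.1%}")  # prints "Termination rate: 35.5%"
```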
Trap questions are the safety net for survey data quality
Market research takes time, money, and effort, and low-quality or dishonest survey answers can quickly ruin the integrity of insights. Ensuring that survey responses come from qualified and attentive respondents is essential when collected data informs critical business decisions. While no technique is 100 percent effective, even the quick and simple strategy of deploying trap questions will help improve data quality and give decision makers confidence that the insights they rely on are valid.
This article previously appeared on the GroupSolver site; reprinted with permission.