Survey bias means that a question is phrased or formatted in a way that leads respondents toward one answer over another. The same applies when questions are hard to understand, making it difficult for customers to answer honestly. Either way, a poorly prepared questionnaire produces unreliable feedback. In this guide we will look at five common types of survey bias.
This guide will teach you about:
- Researcher bias
- Poor match of the sample to the population
- Lack of randomness/response bias
- Failure to quota sample or weight data
- Social desirability bias
1. Researcher Bias
The viewpoint of the researcher has a way of creeping into the survey. All research designers are human and have their own views and opinions. Even the most practiced and professional researchers can introduce subtle biases in the way they word questions or interpret results.
2. Poor match of the sample to the population
It is almost never the case that the sampling frame you use is a perfect match to the population you are trying to understand, so this error is present in most studies. You can sometimes recover from asking the wrong questions, but you can never recover from asking the wrong questions to the wrong people.
Most people like to focus on questionnaire development when a new project is awarded. In reality, the sampling and weighting plan is every bit as consequential to the success of the project, yet it rarely gets the attention it deserves. We can tell a client really knows what they are doing when they begin a project by focusing on sampling issues rather than jumping straight to questionnaire design.
3. Lack of randomness/response bias
Many surveys proceed without random samples. In fact, it is rare that a survey being done today can accurately claim to use a random sample. Remember those statistics courses you took in college and graduate school? Pretty much everything they taught you statistically is only valid if you have a random sample. And odds are that you don't.
A big source of "non-randomness" in a sample is non-response. A 10% response rate is considered good for an online panel. When we report the results of these studies, we are assuming that the vast majority of people who didn't respond would have answered the same way as those who did. Often, this is a reasonable assumption. But sometimes it is not.
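A small simulation makes the danger concrete. In this sketch the population figures and response rates are purely illustrative assumptions: satisfied customers are three times as likely to answer as unsatisfied ones, and the survey estimate drifts well above the truth.

```python
import random

random.seed(42)

# Hypothetical population: exactly 50% satisfied (1) and 50% not (0).
population = [1] * 5000 + [0] * 5000

def responds(satisfied):
    # Illustrative assumption: satisfied customers respond at 15%,
    # unsatisfied ones at only 5%.
    return random.random() < (0.15 if satisfied else 0.05)

responses = [p for p in population if responds(p)]

true_rate = sum(population) / len(population)
observed_rate = sum(responses) / len(responses)

print(f"True satisfaction: {true_rate:.0%}")
print(f"Survey estimate:   {observed_rate:.0%}")
```

Because three quarters of the people who answer are satisfied, the survey reports roughly 75% satisfaction even though the true figure is 50%. No amount of extra sample size fixes this; only reducing or correcting for non-response does.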
4. Failure to quota sample or weight data
Even if we sample randomly, it is typical for some subgroups to be more willing to cooperate than others. For example, females are typically less likely to refuse a survey invitation than males, and minorities are less likely to participate. So a good researcher will quota sample and weight the data to compensate. In short, if you know something about your population before you survey them, you should use that knowledge to your advantage.
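The weighting step above can be sketched in a few lines. This is a minimal post-stratification example with made-up numbers: we assume the population is known to be 50/50 female/male, while the raw sample came back 60/40, so each respondent is weighted by population share divided by sample share.

```python
from collections import Counter

# Population shares known in advance (illustrative assumption).
population_share = {"female": 0.50, "male": 0.50}

# Hypothetical raw sample: 600 females, 400 males -> females
# over-represented relative to the 50/50 population.
sample = (
    [("female", "yes")] * 360 + [("female", "no")] * 240
    + [("male", "yes")] * 150 + [("male", "no")] * 250
)

n = len(sample)
group_counts = Counter(group for group, _ in sample)

# Weight per subgroup = population share / sample share.
weights = {g: population_share[g] / (group_counts[g] / n)
           for g in population_share}

unweighted = sum(1 for _, answer in sample if answer == "yes") / n
weighted = sum(weights[g] for g, answer in sample if answer == "yes") / n

print(f"Unweighted % yes: {unweighted:.1%}")
print(f"Weighted % yes:   {weighted:.1%}")
```

With these numbers the unweighted result is 51% "yes", but after down-weighting the over-represented females and up-weighting the males, the estimate drops to about 48.8%. The same idea extends to age, region, or any variable whose population distribution you know.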
5. Social desirability bias
This is a form of survey bias in responses caused by respondents' desire, conscious or unconscious, to gain prestige or to present themselves in a more socially acceptable light.
- Survey incentives are not much different from any other kind of incentive. They are reasons, monetary or non-monetary, physical or emotional, that drive or motivate people to fill in your survey. In other words, they boost your survey response rate.
- Survey completion rate: When calculating the survey completion rate, the calculation only takes into account people who had some interaction with the survey, meaning they actually started it. We don't count the number of people who were invited but ignored the invitation.
- Survey accuracy is the extent to which a survey result represents the attribute being measured in the group of interest or population. Determining how accurately the data captured by a survey reflects the entire population requires computing a confidence interval and a confidence level.
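As a quick sketch of that last point, here is the standard normal-approximation confidence interval for a survey proportion. The sample size and observed percentage below are illustrative, not taken from any real study.

```python
import math

n = 400     # completed responses (illustrative)
p = 0.62    # 62% answered "yes" (illustrative)
z = 1.96    # z-score for a 95% confidence level

# Margin of error for a proportion under the normal approximation.
margin = z * math.sqrt(p * (1 - p) / n)
low, high = p - margin, p + margin

print(f"62% +/- {margin:.1%}  ->  [{low:.1%}, {high:.1%}]")
```

With 400 completes, the margin of error is just under 5 percentage points, so the honest claim is "62%, plus or minus about 5%, at 95% confidence," not simply "62%."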