Survey Accuracy Definition:
Survey accuracy is the extent to which a survey result represents the attribute being measured in the group of interest or population. Determining how accurately the data captured by a survey reflect the entire population requires computing the confidence interval and the confidence level.
The confidence interval (also known as the "margin of error" or simply "error") is usually expressed as a plus-or-minus percentage, e.g., "+/- 5%", which indicates that the survey mean score likely deviates from the population mean for that attribute by less than 5%. For example, if 30% of the respondents pick a certain choice on the survey and you have a margin of error of 5%, you can be "sure" that between 25% (30% minus 5%) and 35% (30% plus 5%) of the entire population would pick the same answer.
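The 30% +/- 5% example above amounts to simple interval arithmetic. A minimal sketch (the function name and values are illustrative, not from any survey library):

```python
def confidence_interval(sample_proportion: float, margin_of_error: float) -> tuple:
    """Return the (lower, upper) population bounds implied by a survey
    result and its margin of error."""
    lower = sample_proportion - margin_of_error
    upper = sample_proportion + margin_of_error
    return lower, upper

# 30% of respondents picked the answer, with a +/- 5% margin of error:
low, high = confidence_interval(0.30, 0.05)
print(f"Between {low:.0%} and {high:.0%} of the population would pick the same answer.")
```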
How sure can you be? This is what the confidence level (also known as "confidence") tells you. It is expressed as a percentage and indicates how certain you can be of your results.
Survey accuracy standards:
For example, a confidence level of 95% means that if you conducted your survey 100 times, you would expect to get the same results 95 of those times. Most survey research uses a 95% confidence level because it strikes a good balance between accuracy and cost, which is why the confidence level is seldom stated explicitly. In some circumstances, you might use a higher confidence level (say, 99%) or a lower one (say, 90%).
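The confidence level and margin of error are linked by the standard formula for a sample proportion, MOE = z * sqrt(p * (1 - p) / n), where z is the critical value for the chosen confidence level. A hedged sketch (the sample sizes and proportions are illustrative):

```python
import math

# Standard normal critical values for the three confidence levels
# mentioned above.
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def margin_of_error(p: float, n: int, confidence: float = 0.95) -> float:
    """Margin of error for a sample proportion p with n respondents."""
    z = Z_SCORES[confidence]
    return z * math.sqrt(p * (1 - p) / n)

# The worst case is p = 0.5; at 95% confidence, roughly 385 respondents
# give a margin of error of about +/- 5%.
print(f"{margin_of_error(0.5, 385):.3f}")
```

Note how a 99% confidence level widens the margin of error for the same sample size, which is the accuracy-versus-cost trade-off the text describes.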
You should include key demographic questions in each survey so you can check whether the survey sample corresponds to your membership. If 75% of your members are from the West, for example, then 75% of the responses to a general membership survey should also come from the West. Some types of response bias can be corrected statistically; others may require a second mailing to ensure that the respondents accurately reflect the membership's composition.
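The statistical correction mentioned above is often done with post-stratification weights: each respondent is weighted by the ratio of their group's share of the membership to its share of the sample. A minimal sketch, assuming the 75% West membership example (the region shares are illustrative):

```python
def region_weights(membership_share: dict, respondent_share: dict) -> dict:
    """Per-region weight = membership share / respondent share, so weighted
    responses match the membership composition."""
    return {region: membership_share[region] / respondent_share[region]
            for region in membership_share}

# 75% of members are from the West, but only 60% of respondents are:
weights = region_weights({"West": 0.75, "East": 0.25},
                         {"West": 0.60, "East": 0.40})
print(weights)  # Western responses are weighted up, Eastern down
```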