“What’s the sample size?”, you might get asked. Or sometimes (wrongly), “What proportion of customers did you speak to?”. Or even “What’s your margin of error?”.
Important questions, to be sure, but often misleading ones unless you also address the elephant in the room: what was the response rate?
Low response rates are the dirty little secret of the vast majority of quantitative customer insight studies.
As we march boldly into the age of “real-time” high-volume customer insight via IVR, SMS or mobile, low response rates are becoming increasingly difficult to sweep under the rug.
Why response rate matters
It’s too simplistic to say that response rates are directly correlated with nonresponse bias1, which is what we’re really interested in, but good practice would be to look for response rates well over 50%. Academics are often encouraged to analyse the potential for response bias when their response rates fall below 80%.
The uncomfortable truth is that we mostly don’t know what impact nonresponse bias has on our survey findings. This contrasts with the margin of error, or confidence interval, which tells us how precise our survey findings are.
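To make that contrast concrete, here is a minimal sketch (the function name and the 400-response example are illustrative, not from the article) of the standard margin-of-error formula for a proportion. Note that it depends only on the number of completed responses, so it is silent on nonresponse bias:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n responses.

    Assumes simple random sampling; it says nothing about whether the
    people who responded differ from those who didn't.
    """
    return z * math.sqrt(p * (1 - p) / n)

# 400 completed responses give roughly a +/-5 point margin of error,
# whether they came from an 80% response rate or an 8% one.
print(round(margin_of_error(400) * 100, 1))  # percentage points
```

The point of the sketch: a survey can report an impressively tight margin of error while a low response rate leaves the bias question entirely unanswered.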
How to assess nonresponse bias
It can be very difficult to assess how much nonresponse bias you’re dealing with. For a start, its impact varies from question to question. Darrell Huff gives the example of a survey asking “How much do you like responding to surveys?”. Nonresponse bias for that question would be huge, but it wouldn’t necessarily be such a problem for the other questions on the same survey. Nonresponse bias is a problem when likelihood of responding is correlated with the substance of the question.
There are established approaches2 to assessing nonresponse bias. A good starting point for a customer survey would be:
- Log and report reasons for non-participation (e.g. incorrect numbers, too busy, etc.)
- Compare the make-up of the sample and the population
- Consider following up some nonresponders using an alternative method (e.g. telephone interviews) to analyse any differences
- Validate against external data (e.g. behavioural data such as sales or complaints)
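The second step above can be sketched with a chi-square goodness-of-fit test. The segment names and counts below are invented for illustration; the idea is simply to check whether responders are distributed like the customer base they were drawn from:

```python
# Compare the make-up of the respondent sample with the known population.
# Segments and figures are hypothetical.
population = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}   # known customer shares
sample     = {"18-34": 120,  "35-54": 290,  "55+": 190}    # completed responses

n = sum(sample.values())
# Chi-square goodness-of-fit: sum of (observed - expected)^2 / expected
chi_sq = sum((sample[g] - population[g] * n) ** 2 / (population[g] * n)
             for g in population)

# The 5% critical value for 2 degrees of freedom is about 5.99; a larger
# statistic suggests responders look different from the customer base.
print(round(chi_sq, 1), "skewed" if chi_sq > 5.99 else "looks representative")
```

A skewed result doesn’t measure the bias in any particular survey question, but it is a cheap early warning that weighting or follow-up of nonresponders is worth considering.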
How to reduce nonresponse bias
Increasing response rate is the first priority. You need to overcome both “active nonresponse” (reluctance to take part) and, more importantly, “passive nonresponse” from customers who simply can’t be bothered. We find the most effective methods are:
- Consider interviews rather than self-completion surveys
- Introduce the survey (and why it matters to you) in advance
- Communicate results and actions from previous surveys
- Send at least one reminder
- Time the arrival of the survey to suit the customer
- Design the survey to be easy and pleasant for the customer
Whatever your response rate is, please don’t brush the issue under the carpet. If you care about the robustness of your survey, report your response rate, and do your best to assess what impact nonresponse bias is having on your results.
1. This article gives a good explanation of why.
2. This article is a good example.