Tag Archives: measurement

Response rate: the elephant in the room

“What’s the sample size?”, you might get asked. Or sometimes (wrongly), “What proportion of customers did you speak to?”. Or even “What’s your margin of error?”.

Important questions, to be sure, but often misleading ones unless you also address the elephant in the room: what was the response rate?

Low response rates are the dirty little secret of the vast majority of quantitative customer insight studies.

As we march boldly into the age of “real-time”, high-volume customer insight via IVR, SMS or mobile, low response rates are a body that’s becoming increasingly difficult to hide under the rug.

Why response rate matters

It’s too simplistic to say that response rates are directly correlated with nonresponse bias¹, which is what we’re really interested in, but good practice is to look for response rates well over 50%. Academics are often encouraged to analyse the potential for nonresponse bias when their response rates fall below 80%.

The uncomfortable truth is that we mostly don’t know what impact nonresponse bias has on our survey findings. This contrasts with the margin of error, or confidence interval, which allows us to know how precise our survey findings are.
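That precision is easy to quantify. As a minimal sketch (the function name and the 95% z-value are my choices, not from the article), the standard margin of error for a sample proportion is:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion p from n responses,
    at roughly 95% confidence (z = 1.96).
    Assumes simple random sampling from a large population."""
    return z * math.sqrt(p * (1 - p) / n)

# 400 responses, 70% satisfied -> roughly +/- 4.5 percentage points
moe = margin_of_error(0.70, 400)
print(round(moe * 100, 1))  # 4.5
```

Note what the formula does and doesn’t capture: it measures sampling precision among the people who responded, and says nothing at all about the people who didn’t, which is exactly the article’s point.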

How to assess nonresponse bias

It can be very difficult to assess how much nonresponse bias you’re dealing with. For a start, its impact varies from question to question. Darrell Huff gives the example of a survey asking “How much do you like responding to surveys?”. Nonresponse bias for that question would be huge, but it wouldn’t necessarily be such a problem for the other questions on the same survey. Nonresponse bias is a problem when likelihood of responding is correlated with the substance of the question.
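To make that mechanism concrete, here is a minimal sketch with invented numbers: if the likelihood of responding rises with satisfaction, the responders’ average overstates the true average, even with a perfectly fair question.

```python
# Hypothetical population: equal numbers of customers at each
# satisfaction score 1-5, so the true mean is 3.0.
population = {score: 20 for score in range(1, 6)}

# Assume response likelihood rises with satisfaction:
# score-1 customers respond 10% of the time, score-5 customers 50%.
response_rate = {score: score / 10 for score in range(1, 6)}

responders = {s: n * response_rate[s] for s, n in population.items()}

true_mean = sum(s * n for s, n in population.items()) / sum(population.values())
observed_mean = sum(s * n for s, n in responders.items()) / sum(responders.values())

print(true_mean)               # 3.0
print(round(observed_mean, 2)) # 3.67 - happier customers are over-represented
```

The survey itself is unbiased; the gap comes entirely from who chose to answer.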

There are established approaches² to assessing nonresponse bias. A good starting point for a customer survey would be:

  • Log and report reasons for non-participation (e.g. incorrect numbers, too busy, etc.)
  • Compare the make-up of the sample and the population
  • Consider following up some nonresponders using an alternative method (e.g. telephone interviews) to analyse any differences
  • Validate against external data (e.g. behavioural data such as sales or complaints)
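Comparing the make-up of the sample and the population can be as simple as comparing proportions segment by segment. A minimal sketch (the segment names and counts are invented for illustration):

```python
# Invented counts: customers by segment in the full customer base
# versus among survey responders.
population = {"North": 5000, "South": 3000, "Online": 2000}
sample = {"North": 300, "South": 150, "Online": 50}

pop_total = sum(population.values())
samp_total = sum(sample.values())

gaps = {}
for segment in population:
    pop_share = population[segment] / pop_total
    samp_share = sample[segment] / samp_total
    gaps[segment] = (samp_share - pop_share) * 100
    print(f"{segment}: population {pop_share:.0%}, sample {samp_share:.0%}, "
          f"gap {gaps[segment]:+.0f} points")
```

Large gaps (here, online customers are badly under-represented) flag segments whose views the survey may be missing. Weighting can partially correct for this, but only for characteristics you can actually observe.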

How to reduce nonresponse bias

Increasing response rate is the first priority. You need to overcome any reluctance to take part (“active nonresponse”), but more importantly “passive nonresponse” from customers who simply can’t be bothered. We find the most effective methods are:

  • Consider interviews rather than self-completion surveys
  • Introduce the survey (and why it matters to you) in advance
  • Communicate results and actions from previous surveys
  • Send at least one reminder
  • Time the arrival of the survey to suit the customer
  • Design the survey to be easy and pleasant for the customer

Whatever your response rate is, please don’t brush the issue under the carpet. If you care about the robustness of your survey, report your response rate, and do your best to assess what impact nonresponse bias is having on your results.

1. This article gives a good explanation of why.

2. This article is a good example.


Fully understanding employee engagement


I’ve been speaking today at a joint event with Avensure on the subject of Employee Engagement.

In my talk I covered why engagement is so important, how to go about measuring it, and a case study of one of our clients who has done a great job of building a culture of engagement.

I also spent quite a bit of time defining engagement, which can mean a lot of different things to different people.

Cause, effect, or something else?

I argue that to use the concept of engagement properly, it’s essential to understand and measure the state itself as distinct from the culture and management practices that cause engagement and the beneficial staff behaviours that result from engagement.
You need to know the causes of engagement (so that you can improve) and you need to know the outcomes (so that you can prove it’s worth improving), but it’s a mistake to mix those three things together in the measurement or analysis. Be clear about what you’re measuring, and why.

Engaged with…what?

The other subtlety of measuring engagement is that you can be engaged with your job but not with your employer, and vice versa. Both types of engagement are important, but they can have very different causes and effects. If you love what you do, but hate your employer, what’s to stop you leaving to do the same job somewhere else? We’d expect role engagement to correlate less well with retention than organisational engagement does.

On the other hand, you might get on great with your manager and colleagues, but not feel inspired by your role. Employees can be satisfied and engaged with the organisation, but reluctant to fully engage with doing the best possible job for customers. Role engagement can correlate better with productivity and customer quality.

Most jobs have some element of drudgery and some opportunities for self-expression and challenge. It’s those challenges that make roles engaging for the right people, so the importance of organisational engagement is greater for businesses with employees who have limited opportunity for self-expression. If you can’t make them love their job, you can at least make them love you.

A complex picture

To understand the full importance of employee engagement, you need to understand it in all its messy glory. That means having clear, separate, measures of:

  • The causes of engagement
  • Role engagement
  • Organisational engagement
  • The effects of engagement

Put all those together and you have the basis for a sophisticated understanding of your people, and a clear way forward if you choose to invest in building a culture of engagement.


Why you want a low score

It’s surprising how often I meet organisations whose leaders want a high score more than they want happy customers.

Some don’t even seem to notice the mental bait-and-switch they’ve played when they pretend it’s the same thing.

In order to improve you need what a client of ours once called a “burning platform for change”.

A score that looks ok, even if we’d rather it was higher, means there is no burning platform. No burning platform means no significant change.

Often what gets in the way is a measurement process which flatters the organisation.

We’ll ignore deliberate gaming of the score, or completely biased questionnaires, and look at two more subtle problems.

Using a weak measure

All customer survey scores show a skew towards the top end of the scale. Most customers are at least reasonably happy with most organisations. After all, how long would you stick with a company that you were scoring in the bottom end of the scale?

At the same time, relatively few organisations have a majority of customers giving them “top box” scores at the extreme end of the scale.

In other words, most companies are quite good at customer satisfaction, but few are consistently excellent. Data from the UKCSI as well as our own client league table backs this up.

When it comes to score, this means that measuring “% Satisfied” (i.e. the proportion of customers in the top end of the scale) is a tremendously weak and flattering measure.

Companies with over 90% “satisfied” customers can be below average performers when a strong measure is used.

But it sounds good, doesn’t it?

Both Customer Satisfaction Index (CSI) and Net Promoter Score (NPS) will give you a much tougher measure, one that’s more likely to push your organisation to change.
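To illustrate why, here’s a sketch with an invented distribution of ratings. NPS here uses the standard 0–10 recommendation scale: promoters score 9–10, detractors 0–6, and “% satisfied” counts everyone at the top end (7 and above).

```python
# Invented distribution of 0-10 ratings for 100 customers.
ratings = [10] * 10 + [9] * 10 + [8] * 30 + [7] * 40 + [5] * 10

# Weak measure: "% satisfied" counts everyone scoring 7 or above.
pct_satisfied = 100 * sum(r >= 7 for r in ratings) / len(ratings)

# Tougher measure: NPS = % promoters (9-10) minus % detractors (0-6).
promoters = 100 * sum(r >= 9 for r in ratings) / len(ratings)
detractors = 100 * sum(r <= 6 for r in ratings) / len(ratings)
nps = promoters - detractors

print(pct_satisfied)  # 90.0 -> sounds excellent
print(nps)            # 10.0 -> much less flattering
```

The same customers, the same answers: the only difference is how much the measure rewards merely-adequate scores.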


Benchmarking for comfort, not for ideas

Benchmarking can be a brilliant tool for improvement, or a distraction that does nothing but get in the way. David Ogilvy once said:

We all have a tendency to use research as a drunkard uses a lamppost—for support, not for illumination.

Benchmarking is much the same.

Internal benchmarking is a very powerful way to improve an organisation’s performance by sharing best practice and taking advantage of people’s natural competitiveness. Enterprise Rent-A-Car used this very effectively in the late 90s, as discussed in this classic HBR case study.

External benchmarking is useful to help you understand the range of performance that’s been achieved by others, and to find ideas for improvement (Southwest Airlines looked at Formula 1 pit crews to improve their turnaround time).

In practice, many organisations indulge in what I call vanity benchmarking – redefining your comparison set until you find a league table you look good in.

Even worse, some organisations (inadvertently or otherwise) cheat. They use different scales, or different methodologies, or change the way NPS is calculated, or exclude customers who made a complaint, or any one of 1,000 other tricks.

Benchmarking should be about finding opportunities to improve, not a PR exercise.
