Category Archives: Research

Response rate: the elephant in the room

“What’s the sample size?”, you might get asked. Or sometimes (wrongly), “What proportion of customers did you speak to?”. Or even “What’s your margin of error?”.

Important questions, to be sure, but often misleading ones unless you also address the elephant in the room: what was the response rate?

Low response rates are the dirty little secret of the vast majority of quantitative customer insight studies.

As we march boldly into the age of “real-time” high-volume customer insight via IVR, SMS or mobile, low response rates are becoming increasingly difficult to sweep under the rug.

Why response rate matters

It’s too simplistic to say that response rates are directly correlated with nonresponse bias [1], which is what we’re really interested in, but good practice would be to look for response rates well over 50%. Academics are often encouraged to analyse the potential for nonresponse bias when their response rates fall below 80%.

The uncomfortable truth is that we mostly don’t know what impact nonresponse bias has on our survey findings. This contrasts with the margin of error, or confidence interval, which allows us to know how precise our survey findings are.
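That contrast is easy to make concrete. Here is a minimal sketch (all figures invented) of the 95% margin of error for a proportion, assuming a simple random sample, which is precisely the assumption that nonresponse undermines:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for an observed proportion p_hat from n responses."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# e.g. 400 responses, 70% of them satisfied
moe = margin_of_error(0.70, 400)
print(f"70% satisfied, +/- {moe:.1%}")  # +/- 4.5%
```

Note what the formula does not contain: the response rate. A ±4.5% margin of error can look reassuringly precise even when 90% of the sample never replied.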

How to assess nonresponse bias

It can be very difficult to assess how much nonresponse bias you’re dealing with. For a start, its impact varies from question to question. Darrell Huff gives the example of a survey asking “How much do you like responding to surveys?”. Nonresponse bias for that question would be huge, but it wouldn’t necessarily be such a problem for the other questions on the same survey. Nonresponse bias is a problem when likelihood of responding is correlated with the substance of the question.

There are established approaches [2] to assessing nonresponse bias. A good starting point for a customer survey would be:

  • Log and report reasons for non-participation (e.g. incorrect numbers, too busy, etc.)
  • Compare the make-up of the sample and the population
  • Consider following up some nonresponders using an alternative method (e.g. telephone interviews) to analyse any differences
  • Validate against external data (e.g. behavioural data such as sales or complaints)
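The second of those checks, comparing the sample and population make-up, can be sketched in a few lines (the age bands and figures below are invented for illustration):

```python
# Hypothetical age profile: known population shares vs achieved sample counts
population = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
sample_counts = {"18-34": 90, "35-54": 210, "55+": 200}

n = sum(sample_counts.values())
for band, pop_share in population.items():
    samp_share = sample_counts[band] / n
    gap = samp_share - pop_share
    print(f"{band}: sample {samp_share:.0%} vs population {pop_share:.0%} ({gap:+.0%})")
```

A sample that sits 12 points away from the population on age is exactly the kind of warning sign that nonresponse bias may be at work; in practice you would follow up, reweight, or at the very least report the skew.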

How to reduce nonresponse bias

Increasing response rate is the first priority. You need to overcome any reluctance to take part (“active nonresponse”), but more importantly “passive nonresponse” from customers who simply can’t be bothered. We find the most effective methods are:

  • Consider interviews rather than self-completion surveys
  • Introduce the survey (and why it matters to you) in advance
  • Communicate results and actions from previous surveys
  • Send at least one reminder
  • Time the arrival of the survey to suit the customer
  • Design the survey to be easy and pleasant for the customer

Whatever your response rate is, please don’t brush the issue under the carpet. If you care about the robustness of your survey, report your response rate, and do your best to assess what impact nonresponse bias is having on your results.

1. This article gives a good explanation of why.

2. This article is a good example.


Is it time for zero-based customer insight?

There’s a debate in marketing about the merits of zero-based budgeting.

It doesn’t necessarily mean spending less. What it does mean is figuring out, from scratch, what you need to spend in order to achieve specific returns.

Which sounds pretty sensible.

Mark Ritson discusses Unilever’s announcement that they are adopting a zero-based budgeting approach to marketing. His summary is useful:

The zero base approach is not a cost cutting method or belt-tightening approach. It’s just a better, more strategic way to plan your marketing. First you forget about the total spend and where that spend was allocated last year – hence the zero. Second, the marketing team do their research, construct their marketing plan and conclude it with a budget in which they ask for a certain amount of investment and promise a specific return for that investment. Senior management review the plan and either grant the amount or push back and ask the team to make changes.

The appeal to the business is obvious—it forces departments to be accountable for their spend, and do the work to justify it. It seems to me that we should think about working towards a zero-based model for customer insight.

Does that sound like a turkey voting for Christmas?

It might be if we all switched overnight, but I think the principle of accountability and being able to demonstrate return is important if we want customer experience to be taken seriously.

It’s important, I think, to make sure that budgeting doesn’t lead to prioritising short term returns. If a marketing team spends its budget on vouchers rather than brand-building then they’re almost guaranteed to see an impact on sales in the short term. But what’s the long term benefit?

Similarly, for customer experience, you need to understand the links between investment in particular transactional journeys and longer term customer attitudes and behaviours. The benefits can take a long time to filter through; but they’re real, and they’re measurable.

It’s up to us to start proving it.


Are you measuring importance right?

One of the universal assumptions about customer experience research is that the topics on your questionnaire are not equally important.

It’s pretty obvious, really.

That means that when we’re planning what to improve, we should prioritise areas which are more important to customers.

Again, pretty obvious.

But how do we know what’s important? That’s where it starts to get tricky, and where we can get derailed into holy wars about which method is best. Stated importance? Key Driver Analysis (or “derived importance”)? Relative importance analysis? MaxDiff?

An interesting article in IJMR pointed out that these decisions are often made, not on the evidence, but according to the preferences of whoever the main decision maker is for a particular project.

Different methods will suggest different priorities, so personal preference doesn’t seem like a good way to choose.

The way out of this dilemma is to stop treating “importance” as a single idea that can be measured in different ways. It isn’t. Stated importance, derived importance and MaxDiff are all measuring subtly different things.

The best decisions come from looking at both stated and derived importance, using the combination to understand how customers see the world, and addressing the customer experience in the appropriate way:


  • High stated, low derived – a given. Minimise dissatisfaction, but don’t try to compete here.
  • Low stated, high derived – a potential differentiator. If your performance is par on the givens, you may get credit for being better than your competitors here.
  • High stated, high derived – a driver. This is where the bulk of your priorities will sit. Vital, but often “big picture” items that are difficult to action.

That’s a much more rounded view than choosing a single “best” measure to prioritise, and more accurately reflects how customers think about their experience.
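As a sketch of how the two measures combine (all attribute names, ratings and thresholds below are invented, and a simple Pearson correlation with the overall score stands in for a full key driver analysis):

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation: a crude proxy for derived importance."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))

# Invented data: per-respondent attribute ratings plus overall satisfaction (0-10)
overall = [9, 7, 8, 4, 6, 9, 5, 8, 7, 3]
ratings = {
    "billing accuracy":  [8, 9, 9, 8, 9, 8, 9, 8, 9, 9],  # everyone rates it high
    "staff helpfulness": [9, 6, 8, 3, 5, 9, 4, 8, 6, 2],  # tracks overall closely
}
stated = {"billing accuracy": 0.9, "staff helpfulness": 0.5}  # e.g. from a direct question

for attr, scores in ratings.items():
    derived = pearson(scores, overall)
    if stated[attr] >= 0.7:
        quadrant = "driver" if derived >= 0.5 else "given"
    else:
        quadrant = "potential differentiator" if derived >= 0.5 else "low priority"
    print(f"{attr}: stated={stated[attr]:.1f}, derived={derived:.2f} -> {quadrant}")
```

Here “billing accuracy” scores high on stated importance but barely moves with overall satisfaction (a given), while “staff helpfulness” is mentioned less but tracks overall satisfaction closely (a potential differentiator).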


Why you want a low score

It’s surprising how often I meet organisations whose leaders want a high score more than they want happy customers.

Some don’t even seem to notice the mental bait-and-switch they’ve played when they pretend it’s the same thing.

In order to improve you need what a client of ours once called a “burning platform for change”.

A score that looks ok, even if we’d rather it was higher, means there is no burning platform. No burning platform means no significant change.

Often what gets in the way is a measurement process which flatters the organisation.

We’ll ignore deliberate gaming of the score, or completely biased questionnaires, and look at two more subtle problems.

Using a weak measure

All customer survey scores show a skew towards the top end of the scale. Most customers are at least reasonably happy with most organisations. After all, how long would you stick with a company that you were scoring in the bottom end of the scale?

At the same time, relatively few organisations have a majority of customers giving them “top box” scores at the extreme end of the scale.

In other words, most companies are quite good at customer satisfaction, but few are consistently excellent. Data from the UKCSI as well as our own client league table backs this up.

When it comes to score, this means that measuring “% Satisfied” (i.e. the proportion of customers in the top end of the scale) is a tremendously weak and flattering measure.

Companies with over 90% “satisfied” customers can be below average performers when a strong measure is used.

But it sounds good, doesn’t it?

Both Customer Satisfaction Index (CSI) and Net Promoter Score (NPS) will give you a much tougher measure, one that’s more likely to push your organisation to change.
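The gap between a weak measure and a strong one is easy to demonstrate. A sketch on an invented distribution of 0-10 scores, where NPS counts 9-10 as promoters and 0-6 as detractors:

```python
# Invented 0-10 score distribution for 200 customers
scores = [10]*30 + [9]*30 + [8]*60 + [7]*50 + [6]*20 + [5]*10
n = len(scores)

pct_satisfied = sum(s >= 7 for s in scores) / n   # "% satisfied": top half of scale
promoters  = sum(s >= 9 for s in scores) / n      # NPS promoters: 9-10
detractors = sum(s <= 6 for s in scores) / n      # NPS detractors: 0-6
nps = (promoters - detractors) * 100

print(f"% satisfied (7+): {pct_satisfied:.0%}")   # 85% - sounds great
print(f"NPS: {nps:+.0f}")                         # +15 - a much tougher read
```

The same customers yield an impressive-sounding 85% satisfied and a middling NPS of +15; the weak measure hides the room for improvement.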


Benchmarking for comfort, not for ideas

Benchmarking can be a brilliant tool for improvement, or a distraction that does nothing but get in the way. David Ogilvy once said:

We all have a tendency to use research as a drunkard uses a lamppost—for support, not for illumination.

Benchmarking is much the same.

Internal benchmarking is a very powerful way to improve an organisation’s performance by sharing best practice and taking advantage of people’s natural competitiveness. Enterprise Rent-A-Car used this very effectively in the late 90s, as discussed in this classic HBR case study.

External benchmarking is useful to help you understand the range of performance that’s been achieved by others, and to find ideas for improvement (Southwest Airlines looked at Formula 1 pit crews to improve their turnaround time).

In practice, many organisations indulge in what I call vanity benchmarking – redefining your comparison set until you find a league table you look good in.

Even worse, some organisations (inadvertently or otherwise) cheat. They use different scales, or different methodologies, or change the way NPS is calculated, or exclude customers who made a complaint, or any one of 1,000 other tricks.

Benchmarking should be about finding opportunities to improve, not a PR exercise.


Understanding customers


If people ask what I do, my one-sentence answer tends to be “I help organisations understand their customers”.

What does that actually mean?

The tools we use are well-established quantitative and qualitative research techniques; all of which fundamentally boil down to one thing: talking to people.

Easy. Sort of.

No doubt you’ve seen the ever-growing hype around Behavioural Economics? It’s a field that has an enormous amount to teach those of us whose job is to understand other people, and particularly the way they make decisions.

We know, for example, that people are really bad at predicting their future behaviour (“Yes, I’ll definitely eat more salad and fewer doughnuts this year”), and nearly as bad at explaining why they did things.

Does that mean that research based on asking people questions is a waste of time?

I don’t believe so. But it does mean that it’s a good idea to focus your questions on the right things.

If you want to know about past/current behaviour it’s best to use observation or other sources of data if you can. If that’s not an option then people are fairly reliable about specific events, especially soon after them, and pretty unreliable on general patterns of behaviour (“How often do you go to the gym?”).

Future behaviour is tricky, because asking people is pretty much the only option. But consider the way you ask it, and see if you can set yourself up for more accuracy. If you want to know whether people will buy your product, don’t ask a focus group (they’ll all say yes to be polite); see if you can get them to part with cash on Kickstarter. If that’s not possible, frame it as a choice—would they buy your product instead of their current supplier?

Understanding how people will behave if you make a change (to a website, store layout, etc.) is best done by experiment. The more concrete you can make the future state for customers, through actual change or prototyping, the more accurate your findings.

Motivations are notoriously difficult for people to know, let alone explain. There’s no harm asking the question, but there’s often more insight from a good understanding of psychology than from people themselves. Rather than asking people why they did something, ask them how they feel about the choice they made or the product they chose, and then do the hard work in the analysis.

Attitudes form the mainstay of most research work, whether it’s brand associations, customer satisfaction, or employee engagement. We’re talking about thoughts and feelings, and again there are well-established limitations in what people are capable of telling you. The halo effect is a big one—if you want a meaningful attitude survey you have to work hard to ensure you get deeper than a single overall impression. Adding more questions won’t help, in fact it’ll make it worse.

Behavioural Economics teaches us that research based on asking people questions is limited, but it also gives us a framework to understand what those limitations are and how they work. It is not a threat to market research, in my view, but a coming of age.


p-values are bad for your health

A few months ago you may have seen a flurry of stories about the slimming benefits of chocolate.

It turned out to be a hoax, well documented here.

The key point is that, although it was a deliberate hoax, the methodology and statistics used were not unrepresentative of those used in real nutrition “studies”.

They used a randomised controlled trial, and the chocolate-eating group did lose weight significantly faster (as measured by the all-important p-value) than the control group.

So what’s the problem? To understand that, we need to understand what a p-value tells us.

Statistical significance means a small chance of being wrong

In simple terms, we set a p-value threshold to control how sure we want to be about a difference we have found. By convention we set it to 0.05, or 5%, and call a result “significant” when its p-value falls below that.

In other words, there is less than a 5% chance that we would have seen the scores we have if there was no real difference between the control group and the treatment group.

So far, so good.

The chance of being wrong is additive

The problem is that 5% chance adds up for every measure we look at. In this instance, the “researchers” measured a total of 18 things (weight, cholesterol, sleep quality,…).

That means the chance of making a mistake is no longer 5%. A quick additive bound puts it at up to 5% x 18 = 90%; assuming the tests are independent, the exact figure is 1 − (0.95)^18 ≈ 60%.

In other words, there is roughly a 60% chance of seeing a “significant” difference on at least one of these 18 measures, even if there was no real difference between the control group and the treatment group.

Robust research corrects for this problem using multiple-comparison procedures that control the familywise error rate (e.g. Bonferroni) or the false discovery rate (e.g. Benjamini–Hochberg).
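The arithmetic is worth checking for yourself. Assuming the 18 tests are independent, the exact chance of at least one false positive, and the effect of a simple Bonferroni correction, look like this:

```python
alpha, k = 0.05, 18

# Chance of at least one spurious "significant" result across k independent tests
p_any = 1 - (1 - alpha) ** k
print(f"family-wise error at alpha={alpha}: {p_any:.0%}")  # ~60%

# Bonferroni: test each measure at alpha/k to keep the family-wise error near alpha
alpha_corrected = alpha / k
p_any_corrected = 1 - (1 - alpha_corrected) ** k
print(f"with Bonferroni (alpha/{k} per test): {p_any_corrected:.1%}")
```

The corrected per-test threshold brings the family-wise error back under the 5% we originally intended, at the price of making each individual test harder to pass.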

Are you fooling yourself?

Statistical significance testing is an immensely powerful tool, but it is very dangerous when used for “fishing expeditions”: dredging through hundreds of comparisons to turn up ones that are significant.

The answer is to be clear about whether your analysis is testing or generating an idea. If it’s the latter, then you need to test that theory with fresh data before having much confidence in it.


Text analytics – difficult but exciting

Text analytics, alongside its cousin speech analytics, is one of the most hyped applications of machine learning in the world of research and insight.

It’s easy to see why.

The dream is that all that laborious time spent manually coding comments to extract themes and root causes can be replaced with the click of a button. Huge savings in terms of cost and resource, and suddenly we have a quasi-qualitative tool that can be applied at scale.

What’s the catch? It’s hard.

Perhaps you’ll be lucky, but it’s rare to find a solution that works well out of the box. Models must be trained and/or tuned in order to be effective.
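To see why out-of-the-box solutions struggle, consider a deliberately naive keyword-matching “coder” (the themes, keyword lists and comments are all invented):

```python
# A naive keyword-matching coder: real text analytics needs trained models
# precisely because this approach breaks on synonyms, negation and misspellings.
themes = {
    "waiting": ["wait", "queue", "slow", "delay"],
    "staff":   ["staff", "advisor", "agent", "rude", "helpful"],
    "price":   ["price", "expensive", "cost", "cheap"],
}

def code_comment(text):
    """Tag a free-text comment with every theme whose keywords it contains."""
    text = text.lower()
    return [theme for theme, words in themes.items()
            if any(w in text for w in words)] or ["uncoded"]

print(code_comment("Helpful staff but the queue was slow"))  # ['waiting', 'staff']
print(code_comment("Took ages to get through"))              # ['uncoded']
```

The second comment clearly describes waiting, but shares no keywords with the “waiting” list, so it slips through uncoded. Handling synonyms, paraphrase and negation is exactly what the trained models are for.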

Don’t be put off. These techniques work, and are being actively developed by some of the smartest people in the world. Expect them to rapidly become more powerful and easier to use.

Most of the research, and often code, is publicly available. It’s an exciting and open field. Commercial offerings, inevitably, tend to lag a bit behind the cutting edge, but avoid the need for you to get your hands dirty with code.

If you have a lot of text responses you don’t know what to do with, why not give it a try?


Qualitative research: conversation not judgement

Focus groups get a bad press.

They’re overused by politicians, ignored by the business leaders we admire (Steve Jobs), and often made up of “tame” participants.

Critics usually trot out the Henry Ford quote:

If I had asked people what they wanted, they would have said faster horses.

Which is actually a really good example of what focus groups do badly (innovation) and what they do well (identifying fundamental needs).

It’s true that customers won’t invent your new product, or even your next ad, for you.

They’re not even very good at picking between options, because customers get too focused on details at an early stage of development.

Which is why I wasn’t impressed by the car manufacturer who proudly announced that its new model had been “rejected by focus groups”.

When used properly, focus groups can provide the spark of insight that allows a brand to connect with customer emotions and radically differentiate itself.

At a recent MRS event, Peter from Voodoo gave a good example of this. Lurpak used focus groups to identify a globally shared emotional moment of truth in cooking – the “moment of alchemy” at which a dish comes together.

Based on that they developed an immersive ad called “weave your magic”.

No doubt this ad would have been rejected by focus groups. Probably in favour of one showing a green field full of cows with a dull voice-over about quality and provenance.

We all like to think we’re rational.

By engaging customers in a conversation early, using research to inform creative rather than to judge it, Lurpak were able to create something truly memorable.

Qualitative research is about opening doors, not closing them.


Seeing with fresh eyes

I mentioned last time that the secret to effective customer journey mapping is to talk to customers.

This may not seem like a stunning insight.

The truth is that many people in most organisations act as if they are afraid of customers, which makes talking to them almost inconceivable.

When we do talk to customers, it’s all too easy to ask them closed questions which reflect our agenda.

That will never get you a clear view of the customer journey.

Start with a clean sheet of paper (ignore your process map for now), and use qualitative research to understand how customers see the journey.

We call this the “lens of the customer” versus the “lens of the organisation”.

You’ll find that the moments of truth are different. Some things which are very significant for you will not be on customers’ radar. More importantly, you’ll find points of the journey where customers have a memorable emotional experience that is invisible to you.

Guess where most journeys show the lowest levels of satisfaction?

These missing touchpoints often reflect an unmet emotional need customers have to understand what is going on. Once you know about these missing moments, you can address them by setting expectations and improving communication.

Summary: to map the customer journey, start by using qualitative research to explore how customers see it.
