Response rate: the elephant in the room

“What’s the sample size?”, you might get asked. Or sometimes (wrongly), “What proportion of customers did you speak to?”. Or even “What’s your margin of error?”.

Important questions, to be sure, but often misleading ones unless you also address the elephant in the room: what was the response rate?

Low response rates are the dirty little secret of the vast majority of quantitative customer insight studies.

As we march boldly into the age of “real-time”, high-volume customer insight via IVR, SMS or mobile, the issue of low response rates is a body that’s becoming increasingly difficult to hide under the rug.

Why response rate matters

It’s too simplistic to say that response rates are directly correlated with nonresponse bias [1], which is what we’re really interested in, but good practice would be to look for response rates well over 50%. Academics are often encouraged to analyse the potential for nonresponse bias when their response rates fall below 80%.

The uncomfortable truth is that we mostly don’t know what impact nonresponse bias has on our survey findings. This contrasts with the margin of error, or confidence interval, which allows us to know how precise our survey findings are.
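
To make that contrast concrete, here’s a minimal Python sketch of the margin-of-error calculation for a proportion, using purely illustrative figures rather than real survey data. Notice that the response rate never enters the formula: the margin of error only describes sampling noise among the people who actually answered, which is exactly why it tells you nothing about nonresponse bias.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a proportion.

    p: observed proportion among respondents (e.g. 0.42 satisfied)
    n: number of completed responses
    z: z-score for the confidence level (1.96 for ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative figures, not real survey data
completes, invited = 400, 2000
print(f"Response rate: {completes / invited:.0%}")                    # 20%
print(f"Margin of error: +/-{margin_of_error(0.42, completes):.1%}")  # +/-4.8%
```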

How to assess nonresponse bias

It can be very difficult to assess how much nonresponse bias you’re dealing with. For a start, its impact varies from question to question. Darrell Huff gives the example of a survey asking “How much do you like responding to surveys?”. Nonresponse bias for that question would be huge, but it wouldn’t necessarily be such a problem for the other questions on the same survey. Nonresponse bias is a problem when likelihood of responding is correlated with the substance of the question.

There are established approaches [2] to assessing nonresponse bias. A good starting point for a customer survey would be:

  • Log and report reasons for non-participation (e.g. incorrect numbers, too busy, etc.)
  • Compare the make-up of the sample and the population (a sketch of this check follows the list)
  • Consider following up some nonresponders using an alternative method (e.g. telephone interviews) to analyse any differences
  • Validate against external data (e.g. behavioural data such as sales or complaints)
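
As a minimal sketch of the second check, here’s one way to compare the achieved sample against the known make-up of the customer base, using a chi-square goodness-of-fit test. The age bands and all the figures are hypothetical, purely for illustration:

```python
from scipy.stats import chisquare

# Hypothetical age bands: known population shares vs achieved sample counts
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
sample_counts = {"18-34": 80, "35-54": 170, "55+": 150}

n = sum(sample_counts.values())
observed = [sample_counts[band] for band in population_share]
expected = [population_share[band] * n for band in population_share]

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
# A small p-value flags that responders look different from the customer
# base on this variable, which is a warning sign for nonresponse bias.
```

A skew on a demographic variable doesn’t prove your substantive findings are biased, but it’s cheap evidence worth reporting alongside the response rate.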

How to reduce nonresponse bias

Increasing response rate is the first priority. You need to overcome any reluctance to take part (“active nonresponse”), but more importantly the “passive nonresponse” of customers who simply can’t be bothered. We find the most effective methods are:

  • Consider interviews rather than self-completion surveys
  • Introduce the survey (and why it matters to you) in advance
  • Communicate results and actions from previous surveys
  • Send at least one reminder
  • Time the arrival of the survey to suit the customer
  • Design the survey to be easy and pleasant for the customer

Whatever your response rate is, please don’t brush the issue under the carpet. If you care about the robustness of your survey, report your response rate, and do your best to assess what impact nonresponse bias is having on your results.

[1] This article gives a good explanation of why.

[2] This article is a good example.


Empathy in Customer Experience

I often talk about how important empathy is, but I realised the other day that I was using it in two different ways:

1) Empathy as a tool to inform the design of customer experiences

2) Building empathy at the front line as an essential output of insight

Let’s look at both of those in a bit more detail.

Empathy for design

To design good experiences you need to blend a deep understanding of customers with the skills, informed by psychology, to shape the way they feel. Getting that understanding requires in-depth qualitative research to get inside the heads of individual customers, helping you to see the world the way they see it.

When you understand why people behave the way they do, think the way they think, and (most importantly) feel the way they feel, you can design experiences that deliver the feelings you want to create in customers.

Design, to quote from Jon Kolko’s excellent book Well Designed is…

“…a creative process built on a platform of empathy.”

Empathy is a tool you can use to design better experiences.

Empathy at the front line

Improving the customer experience sometimes means making systematic changes to products or processes, but more often it’s a question of changing (or improving the consistency of) decision making at the front line.

Those decisions are driven by two things: your culture (or “service climate”), and the extent to which your people understand customers. If you can help your people empathise with customers, to understand why they’re acting, thinking, and feeling the way they are, then they’re much more likely to make good decisions for customers.

I’m sure we can all think of a topical example of what it looks like when front line staff are totally lacking in empathy.

The best way to build empathy is to bring customers to life with storytelling research communication. Using real customer stories, hearing their voices, seeing their faces, is much more powerful than abstract communication about mean scores and percentages.

Empathy at the front line is necessary to support good decisions.

Two kinds of empathy?

Are these two types of empathy fundamentally different? Not really. The truth is we are all experience designers. The decisions we make, whether grounded in empathy for the customer or in making life easy for ourselves, collectively create the customer experience.

You can draw up a vision for the customer journey of the future, grounded in a deep understanding of customers, but if you fail to engage your colleagues at the front line it will never make a difference to customers.

To design effective experiences you need to start by gaining empathy for customers, but you also need to build empathy throughout your organisation.


Personas should be portraits, not caricatures

Personas are an essential tool when applying qualitative research to the customer experience, particularly for journey mapping.

It’s easy to forget the customer as we move from using insight to understand their feelings to the more internal view we take when planning improvements.

Personas help us keep customer needs and motivations front of mind, and preserve the nuances and variety we found in the research.

Can you feel a “but” coming?

You’re right, there’s a big danger with personas that we slide from representing diversity to drawing crude stereotypes. Think in terms of archetypes and range rather than clusters or types.

Good personas are:

  • Grounded in research
  • Archetypes, not stereotypes or clichés
  • Defined by motivations and needs more than demographics
  • Used to challenge process, not put people in boxes

I think there’s a simple test that captures all of these: personas should increase your flexibility in dealing with individual customers, not reduce it.


From drivers to design thinking

Driver analysis is great, isn’t it? It reduces the long list of items on your questionnaire to a few key drivers of satisfaction or NPS. A nice simple conclusion—”these are the things we need to invest in if we want to improve”.

But what if it’s not clear how to improve?

Often the key drivers turn out to be big picture, broad-brush, items. Things like “value” or “being treated as a valued customer” which are more or less proxies for overall satisfaction. Difficult to action.

Looking beyond key drivers, there’s a lot of insight to be gained by looking at how all your items relate to each other, as well as to overall satisfaction and NPS. Those correlations, best studied as either a correlogram or a network diagram, can tell you a lot without requiring much in the way of assumptions about the data.
In particular, examining the links between specific items can support a design thinking approach to improving the customer experience based on a more detailed understanding of how your customers see the experiences you create.
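
As a rough sketch of both views in Python: the file name, item names and the 0.5 threshold below are illustrative assumptions, not a prescription, but the shape of the analysis carries over to any set of rating items.

```python
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd

# Hypothetical data: one row per respondent, one column per questionnaire item
survey = pd.read_csv("survey_items.csv")  # e.g. ease, speed, staff, value, overall_sat
corr = survey.corr()

# Correlogram: the full matrix of pairwise correlations as a heatmap
plt.matshow(corr, cmap="RdBu_r", vmin=-1, vmax=1)
plt.xticks(range(len(corr)), corr.columns, rotation=90)
plt.yticks(range(len(corr)), corr.columns)
plt.colorbar()

# Network diagram: link items whose pairwise correlation clears a threshold
G = nx.Graph()
threshold = 0.5
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if abs(corr.loc[a, b]) >= threshold:
            G.add_edge(a, b, weight=corr.loc[a, b])

plt.figure()
nx.draw_networkx(G)
plt.show()
```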

Your experiences have a lot of moving parts—don’t you think you ought to know how they mesh together?


Attention: getting it, keeping it, using it

One of the excellent speakers at the MRS “Best of Impact” event yesterday was a Creative Director specialising in data visualisation and infographics.

Naturally my ears pricked up—I’m always open to stealing ideas.

As well as being a very engaging speaker, Tobias Sturt was really clear on a number of important principles for infographic design based on how our brains work:

  • Symbolic processing (e.g. icons) is quicker than verbal processing, but sometimes it’s less clear.
  • Recall is influenced by colour, faces, novel chart types, quirky images, etc.

But information design is not just about effective communication. It’s also about getting, and keeping, attention. This is a crucial role for what some characterise as graphic “decoration”. “Beauty” might be a better word. It’s something that David McCandless excels at, and Stephen Few objects to.

Those of us with important customer stories to tell have learned (the hard way) that getting attention is just as important as communicating facts.


Are you measuring importance right?

One of the universal assumptions about customer experience research is that the topics on your questionnaire are not equally important.

It’s pretty obvious, really.

That means that when we’re planning what to improve, we should prioritise areas which are more important to customers.

Again, pretty obvious.

But how do we know what’s important? That’s where it starts to get tricky, and where we can get derailed into holy wars about which method is best. Stated importance? Key Driver Analysis (or “derived importance”)? Relative importance analysis? MaxDiff?

An interesting article in IJMR pointed out that these decisions are often made, not on the evidence, but according to the preferences of whoever the main decision maker is for a particular project.

Different methods will suggest different priorities, so personal preference doesn’t seem like a good way to choose.

The way out of this dilemma is to stop treating “importance” as a single idea that can be measured in different ways. It isn’t. Stated importance, derived importance and MaxDiff are all measuring subtly different things.

The best decisions come from looking at both stated and derived importance, using the combination to understand how customers see the world, and addressing the customer experience in the appropriate way (see the sketch after this list):

  • High stated, low derived – a given. Minimise dissatisfaction, but don’t try to compete here.
  • Low stated, high derived – a potential differentiator. If your performance is par on the givens, you may get credit for being better than your competitors here.
  • High stated, high derived – a driver. This is where the bulk of your priorities will sit. Vital, but often “big picture” items that are difficult to action.
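
For illustration only, here’s a hedged sketch of that classification in Python. It takes stated importance straight from the questionnaire and uses item-to-satisfaction correlation as one simple form of derived importance (relative importance analysis or regression would be alternatives); the item names, file name and cut-offs are all hypothetical:

```python
import pandas as pd

# Hypothetical inputs: mean stated importance per item (from the questionnaire)
# and respondent-level item scores alongside overall satisfaction
stated = pd.Series({"value": 4.6, "speed": 3.1, "staff": 4.4})
scores = pd.read_csv("item_scores.csv")  # columns: value, speed, staff, overall_sat

# One simple form of derived importance: correlation with overall satisfaction
derived = scores.drop(columns="overall_sat").corrwith(scores["overall_sat"])

def classify(item, stated_cut=4.0, derived_cut=0.5):
    hi_stated = stated[item] >= stated_cut
    hi_derived = derived[item] >= derived_cut
    if hi_stated and hi_derived:
        return "driver"
    if hi_stated:
        return "given"
    if hi_derived:
        return "potential differentiator"
    return "low priority"

for item in stated.index:
    print(item, "->", classify(item))
```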

That’s a much more rounded view than choosing a single “best” measure to prioritise, and more accurately reflects how customers think about their experience.


Understanding customers


If people ask what I do, my one-sentence answer tends to be “I help organisations understand their customers”.

What does that actually mean?

The tools we use are well-established quantitative and qualitative research techniques; all of which fundamentally boil down to one thing: talking to people.

Easy. Sort of.

No doubt you’ve seen the ever-growing hype around Behavioural Economics? It’s a field that has an enormous amount to teach those of us whose job is to understand other people, and particularly the way they make decisions.

We know, for example, that people are really bad at predicting their future behaviour (“Yes, I’ll definitely eat more salad and fewer doughnuts this year”), and nearly as bad at explaining why they did things.

Does that mean that research based on asking people questions is a waste of time?

I don’t believe so. But it does mean that it’s a good idea to focus your questions on the right things.

If you want to know about past/current behaviour it’s best to use observation or other sources of data if you can. If that’s not an option then people are fairly reliable about specific events, especially soon after them, and pretty unreliable on general patterns of behaviour (“How often do you go to the gym?”).

Future behaviour is tricky, because asking people is pretty much the only option. But consider the way you ask it, and see if you can set yourself up for more accuracy. If you want to know whether people will buy your product, don’t ask a focus group (they’ll all say yes to be polite); see if you can get them to part with cash on Kickstarter instead. If that’s not possible, frame it as a choice—would they buy your product instead of their current supplier?

Understanding how people will behave if you make a change (to a website, store layout, etc.) is best done by experiment. The more concrete you can make the future state for customers, through actual change or prototyping, the more accurate your findings.

Motivations are notoriously difficult for people to know, let alone explain. There’s no harm asking the question, but there’s often more insight from a good understanding of psychology than from people themselves. Rather than asking people why they did something, ask them how they feel about the choice they made or the product they chose, and then do the hard work in the analysis.

Attitudes form the mainstay of most research work, whether it’s brand associations, customer satisfaction, or employee engagement. We’re talking about thoughts and feelings, and again there are well-established limitations in what people are capable of telling you. The halo effect is a big one—if you want a meaningful attitude survey you have to work hard to ensure you get deeper than a single overall impression. Adding more questions won’t help; in fact, it’ll make it worse.
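
The post above doesn’t prescribe a way to detect the halo effect, but one common diagnostic (an assumption on my part, not something prescribed here) is to check how much of the variance across your items is soaked up by a single principal component. A quick sketch with scikit-learn, using a hypothetical data file:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical attitude survey: one column per rating item, one row per respondent
items = pd.read_csv("attitude_survey.csv")

pca = PCA(n_components=3)
pca.fit(StandardScaler().fit_transform(items))

# If the first component dwarfs the rest, respondents may be expressing one
# overall impression rather than distinct item-level judgements (a halo).
print("Variance explained:", pca.explained_variance_ratio_.round(2))
```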

Behavioural Economics teaches us that research based on asking people questions is limited, but it also gives us a framework to understand what those limitations are and how they work. It is not a threat to market research, in my view, but a coming of age.


Seeing with fresh eyes

I mentioned last time that the secret to effective customer journey mapping is to talk to customers.

This may not seem like a stunning insight.

The truth is that many people in most organisations act as if they are afraid of customers, which makes talking to them almost inconceivable.

When we do talk to customers, it’s all too easy to ask them closed questions which reflect our agenda.

That will never get you a clear view of the customer journey.

Start with a clean sheet of paper (ignore your process map for now), and use qualitative research to understand how customers see the journey.

We call this the “lens of the customer” versus the “lens of the organisation”.

You’ll find that the moments of truth are different. Some things which are very significant for you will not be on customers’ radar. More importantly, you’ll find points of the journey where customers have a memorable emotional experience that is invisible to you.

Guess where most journeys show the lowest levels of satisfaction?

These missing touchpoints often reflect an unmet emotional need customers have to understand what is going on. Once you know about these missing moments, you can address them by setting expectations and improving communication.

Summary: to map the customer journey, start by using qualitative research to explore how customers see it.


Customer journey mapping

We run a regular half-day briefing on customer journey mapping in practice, taking people through the process and the decisions they need to make along the way.

Almost everyone who comes along has the same question – “I keep hearing about mapping the customer journey, but I can’t find any good examples on Google. What’s the secret?”

The truth is, there is no secret. A customer journey map is whatever you want it to be, and off the shelf templates or software are unlikely to be much help.

The word “journey” is just a metaphor, albeit a powerful and useful one. So is “map”. Use them however you want.

In an ongoing sequence of posts I’ll lay out some of the approaches that take advantage of those metaphors to improve the customer experience.

The most important thing is to get stuck in and make a start, by talking to customers. Go and do it now.
