Category Archives: Research

Is it time for zero-based customer insight?

There’s a debate in marketing about the merits of zero-based budgeting.

It doesn’t necessarily mean spending less. What it does mean is figuring out, from scratch, what you need to spend in order to achieve specific returns.

Which sounds pretty sensible.

Mark Ritson discusses Unilever’s announcement that they are adopting a zero-based budgeting approach to marketing. His summary is useful:

The zero base approach is not a cost cutting method or belt-tightening approach. It’s just a better, more strategic way to plan your marketing. First you forget about the total spend and where that spend was allocated last year – hence the zero. Second, the marketing team do their research, construct their marketing plan and conclude it with a budget in which they ask for a certain amount of investment and promise a specific return for that investment. Senior management review the plan and either grant the amount or push back and ask the team to make changes.

The appeal to the business is obvious—it forces departments to be accountable for their spend, and do the work to justify it. It seems to me that we should think about working towards a zero-based model for customer insight.

Does that sound like a turkey voting for Christmas?

It might be if we all switched overnight, but I think the principle of accountability and being able to demonstrate return is important if we want customer experience to be taken seriously.

It’s important, I think, to make sure that budgeting doesn’t lead to prioritising short-term returns. If a marketing team spends its budget on vouchers rather than brand-building then they’re almost guaranteed to see an impact on sales in the short term. But what’s the long-term benefit?

Similarly, for customer experience, you need to understand the links between investment in particular transactional journeys and longer-term customer attitudes and behaviours. The benefits can take a long time to filter through; but they’re real, and they’re measurable.

It’s up to us to start proving it.


Are you measuring importance right?

One of the universal assumptions about customer experience research is that the topics on your questionnaire are not equally important.

It’s pretty obvious, really.

That means that when we’re planning what to improve, we should prioritise areas which are more important to customers.

Again, pretty obvious.

But how do we know what’s important? That’s where it starts to get tricky, and where we can get derailed into holy wars about which method is best. Stated importance? Key Driver Analysis (or “derived importance”)? Relative importance analysis? MaxDiff?

An interesting article in IJMR pointed out that these decisions are often made not on the evidence, but according to the preferences of whoever happens to be the main decision maker for a particular project.

Different methods will suggest different priorities, so personal preference doesn’t seem like a good way to choose.

The way out of this dilemma is to stop treating “importance” as a single idea that can be measured in different ways. It isn’t. Stated importance, derived importance and MaxDiff are all measuring subtly different things.

The best decisions come from looking at both stated and derived importance, using the combination to understand how customers see the world, and addressing the customer experience in the appropriate way:

 
[Diagram: stated vs derived importance matrix]

  • High stated, low derived – a given. Minimise dissatisfaction, but don’t try to compete here.
  • Low stated, high derived – a potential differentiator. If your performance is par on the givens, you may get credit for being better than your competitors here.
  • High stated, high derived – a driver. This is where the bulk of your priorities will sit. Vital, but often “big picture” items that are difficult to action.

That’s a much more rounded view than choosing a single “best” measure to prioritise, and more accurately reflects how customers think about their experience.
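For those who like to see the mechanics, here is a minimal sketch of how stated and derived importance sit side by side. The survey data, attribute names and stated averages are invented, and derived importance is approximated with a simple correlation; in practice you would use a proper key driver technique such as regression or relative weights analysis.

# A minimal sketch (invented data) contrasting stated and derived importance.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500

# Hypothetical attribute satisfaction scores (1-10 scale).
df = pd.DataFrame({
    "value_for_money": rng.integers(5, 11, size=n),
    "staff_helpfulness": rng.integers(5, 11, size=n),
    "ease_of_contact": rng.integers(5, 11, size=n),
})
# In this made-up data, overall satisfaction is driven mostly by staff
# helpfulness and ease of contact, and hardly at all by value for money.
df["overall"] = (0.5 * df["staff_helpfulness"]
                 + 0.4 * df["ease_of_contact"]
                 + 0.1 * df["value_for_money"]
                 + rng.normal(0, 1, size=n))

# Stated importance: average of a direct "how important is..." rating (invented).
stated = pd.Series({"value_for_money": 9.1,
                    "staff_helpfulness": 8.9,
                    "ease_of_contact": 6.4})

# Derived importance: how strongly each attribute relates to overall satisfaction.
derived = df.drop(columns="overall").corrwith(df["overall"])

print(pd.DataFrame({"stated": stated, "derived": derived.round(2)}))

Plotted against each other, these land in the quadrants above: value_for_money is a given (high stated, low derived), ease_of_contact a potential differentiator (low stated, high derived), and staff_helpfulness a classic driver.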


Why you want a low score

It’s surprising how often I meet organisations whose leaders want a high score more than they want happy customers.

Some don’t even seem to notice the mental bait-and-switch they’ve played when they pretend it’s the same thing.

In order to improve you need what a client of ours once called a “burning platform for change”.

A score that looks ok, even if we’d rather it was higher, means there is no burning platform. No burning platform means no significant change.

Often what gets in the way is a measurement process which flatters the organisation.

We’ll ignore deliberate gaming of the score, or completely biased questionnaires, and look at two more subtle problems.


Using a weak measure

All customer survey scores show a skew towards the top end of the scale. Most customers are at least reasonably happy with most organisations. After all, how long would you stick with a company that you were scoring in the bottom end of the scale?

At the same time, relatively few organisations have a majority of customers giving them “top box” scores at the extreme end of the scale.

In other words, most companies are quite good at customer satisfaction, but few are consistently excellent. Data from the UKCSI as well as our own client league table backs this up.

When it comes to score, this means that measuring “% Satisfied” (i.e. the proportion of customers in the top end of the scale) is a tremendously weak and flattering measure.

Companies with over 90% “satisfied” customers can be below average performers when a strong measure is used.

But it sounds good, doesn’t it?

Both Customer Satisfaction Index (CSI) and Net Promoter Score (NPS) will give you a much tougher measure, one that’s more likely to push your organisation to change.
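To make the gap concrete, here is a small sketch on an invented distribution of 0–10 scores; the same data looks glowing as “% satisfied” and distinctly ordinary as an NPS.

# A small sketch (invented 0-10 scores) showing how "% satisfied" flatters
# compared with Net Promoter Score on exactly the same data.
import numpy as np

# Hypothetical distribution of 1,000 scores on a 0-10 scale.
scores = np.array([7]*300 + [8]*350 + [9]*200 + [10]*100 + [5]*30 + [3]*20)

pct_satisfied = np.mean(scores >= 7) * 100   # "top end of the scale" (one common definition)
promoters = np.mean(scores >= 9) * 100       # 9-10
detractors = np.mean(scores <= 6) * 100      # 0-6
nps = promoters - detractors

print(f"% satisfied: {pct_satisfied:.0f}%")  # 95% - sounds great
print(f"NPS:         {nps:.0f}")             # 25 - much less flattering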

 

Benchmarking for comfort, not for ideas

Benchmarking can be a brilliant tool for improvement, or a distraction that does nothing but get in the way. David Ogilvy once said:

We all have a tendency to use research as a drunkard uses a lamppost—for support, not for illumination.

Benchmarking is much the same.

Internal benchmarking is a very powerful way to improve an organisation’s performance by sharing best practice and taking advantage of people’s natural competitiveness. Enterprise Rent-A-Car used this very effectively in the late 90s, as discussed in this classic HBR case study.

External benchmarking is useful to help you understand the range of performance that’s been achieved by others, and to find ideas for improvement (Southwest Airlines looked at Formula 1 pit crews to improve their turnaround time).

In practice, many organisations indulge in what I call vanity benchmarking – redefining your comparison set until you find a league table you look good in.

Even worse, some organisations (inadvertently or otherwise) cheat. They use different scales, or different methodologies, or change the way NPS is calculated, or exclude customers who made a complaint, or any one of 1,000 other tricks.

Benchmarking should be about finding opportunities to improve, not a PR exercise.


Understanding customers


If people ask what I do, my one-sentence answer tends to be “I help organisations understand their customers”.

What does that actually mean?

The tools we use are well-established quantitative and qualitative research techniques, all of which fundamentally boil down to one thing: talking to people.

Easy. Sort of.

No doubt you’ve seen the ever-growing hype around Behavioural Economics? It’s a field that has an enormous amount to teach those of us whose job is to understand other people, and particularly the way they make decisions.

We know, for example, that people are really bad at predicting their future behaviour (“Yes, I’ll definitely eat more salad and fewer doughnuts this year”), and nearly as bad at explaining why they did things.

Does that mean that research based on asking people questions is a waste of time?

I don’t believe so. But it does mean that it’s a good idea to focus your questions on the right things.

If you want to know about past/current behaviour it’s best to use observation or other sources of data if you can. If that’s not an option then people are fairly reliable about specific events, especially soon after them, and pretty unreliable on general patterns of behaviour (“How often do you go to the gym?”).

Future behaviour is tricky, because asking people is pretty much the only option. But consider the way you ask it, and see if you can set yourself up for more accuracy. If you want to know whether people will buy your product, don’t ask a focus group (they’ll all say yes to be polite); see if you can get them to part with cash on Kickstarter instead. If that’s not possible, frame it as a choice—would they buy your product instead of their current supplier?

Understanding how people will behave if you make a change (to a website, store layout, etc.) is best done by experiment. The more concrete you can make the future state for customers, through actual change or prototyping, the more accurate your findings.

Motivations are notoriously difficult for people to know, let alone explain. There’s no harm asking the question, but there’s often more insight from a good understanding of psychology than from people themselves. Rather than asking people why they did something, ask them how they feel about the choice they made or the product they chose, and then do the hard work in the analysis.

Attitudes form the mainstay of most research work, whether it’s brand associations, customer satisfaction, or employee engagement. We’re talking about thoughts and feelings, and again there are well-established limitations in what people are capable of telling you. The halo effect is a big one—if you want a meaningful attitude survey you have to work hard to ensure you get deeper than a single overall impression. Adding more questions won’t help, in fact it’ll make it worse.

Behavioural Economics teaches us that research based on asking people questions is limited, but it also gives us a framework to understand what those limitations are and how they work. It is not a threat to market research, in my view, but a coming of age.


p-values are bad for your health

A few months ago you may have seen a flurry of stories about the slimming benefits of chocolate.

It turned out to be a hoax, well documented here.

The key point is that, although it was a deliberate hoax, the methodology and statistics used were entirely representative of those used in real nutrition “studies”.

They used a randomised controlled trial, and the chocolate-eating group did lose weight significantly faster (as measured by the all-important p-value) than the control group.

So what’s the problem? To understand that, we need to understand what a p-value tells us.

Statistical significance means a small chance of being wrong

In simple terms, we set a significance threshold to control how sure we want to be about a difference we have found. By convention we set it to 0.05, or 5%, and call a result significant when its p-value falls below that threshold.

In other words, there is less than a 5% chance that we would have seen the scores we have if there was no real difference between the control group and the treatment group.

So far, so good.

The chances of being wrong add up

The problem is that 5% chance adds up for every measure we look at. In this instance, the “researchers” measured a total of 18 things (weight, cholesterol, sleep quality,…).

On a simple additive (Bonferroni) reckoning, that puts the chance of at least one false positive at up to 5% x 18 = 90%; assuming the 18 measures are independent, the actual figure is 1 − 0.95^18, or roughly 60%.

In other words, there is better than an even chance of seeing a “significant” difference on at least one of these 18 measures, even if there is no real difference between the control group and the treatment group.

Robust research corrects for this problem by controlling the familywise error rate (for example with a Bonferroni correction) or the false discovery rate (for example with the Benjamini–Hochberg procedure).
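If you want to convince yourself, here is a minimal simulation sketch; the group sizes, the 18 uncorrelated measures and the use of t-tests are my assumptions rather than details of the hoax study.

# A minimal simulation: 18 measures, no real effect, tested at p < 0.05,
# with and without a simple Bonferroni correction.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_measures, n_per_group, n_trials, alpha = 18, 20, 2000, 0.05

naive_hits = corrected_hits = 0
for _ in range(n_trials):
    # Both groups come from the same distribution: there is no real difference.
    control = rng.normal(size=(n_measures, n_per_group))
    treatment = rng.normal(size=(n_measures, n_per_group))
    pvals = np.array([ttest_ind(control[i], treatment[i]).pvalue
                      for i in range(n_measures)])
    naive_hits += (pvals < alpha).any()                    # any uncorrected "finding"
    corrected_hits += (pvals < alpha / n_measures).any()   # Bonferroni-corrected

print(f"Spurious finding rate, uncorrected: {naive_hits / n_trials:.0%}")   # roughly 60%
print(f"Spurious finding rate, Bonferroni:  {corrected_hits / n_trials:.0%}")  # roughly 5%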

Are you fooling yourself?

Statistical significance testing is an immensely powerful tool, but it is very dangerous when used for “fishing expeditions”: dredging through hundreds of comparisons to turn up the ones that are significant.

The answer is to be clear about whether your analysis is testing or generating an idea. If it’s the latter, then you need to test that theory with fresh data before having much confidence in it.


Text analytics – difficult but exciting

Text analytics, alongside its cousin speech analytics, is one of the most hyped applications of machine learning in the world of research and insight.

It’s easy to see why.

The dream is that all that laborious time spent manually coding comments to extract themes and root causes can be replaced with the click of a button. Huge savings in terms of cost and resource, and suddenly we have a quasi-qualitative tool that can be applied at scale.

What’s the catch? It’s hard.

Perhaps you’ll be lucky, but it’s rare to find a solution that works well out of the box. Models must be trained and/or tuned in order to be effective.
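To give a flavour of what that training involves, here is a minimal sketch using scikit-learn; the comments and theme labels are invented, and a real project would need far more coded examples and careful evaluation.

# A minimal text-classification sketch: a few manually coded comments
# (invented for illustration) train a model that codes new verbatims.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-coded training examples: comment text -> theme label.
comments = [
    "The agent was rude and unhelpful on the phone",
    "Waited 40 minutes before anyone answered",
    "Really friendly staff, sorted my problem quickly",
    "Kept on hold forever, then cut off",
    "Lovely service, the adviser went out of her way to help",
    "Nobody called me back as promised",
]
themes = ["staff attitude", "waiting time", "staff attitude",
          "waiting time", "staff attitude", "waiting time"]

# TF-IDF features feeding a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, themes)

# Code a new, unseen verbatim.
print(model.predict(["I was left hanging on the line for ages"]))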

Don’t be put off. These techniques work, and are being actively developed by some of the smartest people in the world. Expect them to rapidly become more powerful and easier to use.

Most of the research, and often code, is publicly available. It’s an exciting and open field. Commercial offerings, inevitably, tend to lag a bit behind the cutting edge, but avoid the need for you to get your hands dirty with code.

If you have a lot of text responses you don’t know what to do with, why not give it a try?


Qualitative research: conversation not judgement

Focus groups get a bad press.

They’re overused by politicians, ignored by the business leaders we admire (Steve Jobs), and often made up of “tame” participants.

Critics usually trot out the Henry Ford quote:

If I had asked people what they wanted, they would have said faster horses.

Which is actually a really good example of what focus groups do badly (innovation) and what they do well (identifying fundamental needs).

It’s true that customers won’t invent your new product, or even your next ad, for you.

They’re not even very good at picking between options, because customers get too focused on details at an early stage of development.

Which is why I wasn’t impressed by the car manufacturer who proudly announced that its new model had been “rejected by focus groups”.

When used properly, focus groups can provide the spark of insight that allows a brand to connect with customer emotions and radically differentiate itself.

At a recent MRS event, Peter from Voodoo gave a good example of this. Lurpak used focus groups to identify a globally shared emotional moment of truth in cooking – the “moment of alchemy” at which a dish comes together.

Based on that they developed an immersive ad called “weave your magic”.

No doubt this ad would have been rejected by focus groups. Probably in favour of one showing a green field full of cows with a dull voiceover about quality and provenance.

We all like to think we’re rational.

By engaging customers in a conversation early, using research to inform creative rather than to judge it, Lurpak were able to create something truly memorable.

Qualitative research is about opening doors, not closing them.


Seeing with fresh eyes

I mentioned last time that the secret to effective customer journey mapping is to talk to customers.

This may not seem like a stunning insight.

The truth is that many people in most organisations act as if they are afraid of customers, which makes talking to them almost inconceivable.

When we do talk to customers, it’s all too easy to ask them closed questions which reflect our agenda.

That will never get you a clear view of the customer journey.

Start with a clean sheet of paper (ignore your process map for now), and use qualitative research to understand how customers see the journey.

We call this the “lens of the customer” versus the “lens of the organisation”.

You’ll find that the moments of truth are different. Some things which are very significant for you will not be on customers’ radar. More importantly, you’ll find points of the journey where customers have a memorable emotional experience that is invisible to you.

Guess where most journeys show the lowest levels of satisfaction?

These missing touchpoints often reflect an unmet emotional need: customers want to understand what is going on. Once you know about these missing moments, you can address them by setting expectations and improving communication.

Summary: to map the customer journey, start by using qualitative research to explore how customers see it.
