
The black box of creativity

Like most researchers, I’m constantly looking to give my clients “actionable insight”.

The truth is there’s no such thing.

Insights don’t work in a vacuum; they need the oxygen of imagination to spark ideas.

Great ideas, whether in marketing or customer experience, come about when you mix a profound insight about a customer need with skilled people’s interpretation of what it means for the business, and their intuition about how to address that need.

Insight + Interpretation + Intuition

Mark Ritson, in the first of a new series on Marketing Week, breaks down the much-lauded Tide Super Bowl ad. He makes a number of interesting points about marketing strategy, particularly about the tendency for marketers to obsess over media at the expense of creativity. What I want to pick up on is the story of how the concept came about.

Tide started with the insight that, in a commoditising category, everyone was talking about the same thing—removing dirt. No one was addressing the idea of perfectly clean clothes, and that meant there was a potential opportunity.

P&G briefed their agency (Saatchi & Saatchi), who came up with the idea that if Tide stands for clean clothes, and everyone in TV ads has preternaturally clean clothes, then every ad must be a Tide ad. A great concept, and the quality of the execution puts the icing on the cake.

As Ritson points out, that final creative leap can be frustratingly hard to explain or understand, but it’s not random. It happens when you take good insights, interpret them to fit a clear strategy, and then brief good creative people.

Insight: people want clean clothes, but the category only talks about removing dirt.

Interpretation: we can lift ourselves above the category by talking about cleanliness.

Intuition: if Tide = clean clothes, then every ad’s a Tide ad.

How it works for Customer Experience

I think exactly the same process is at work in good customer experience research. The researcher’s job is to uncover insights about customer needs, to build empathy into organisations so that they better understand what shapes customer feelings, and to explore how meaning is created for customers.

None of that can deliver improved experiences on its own.

To have value, insights require interpretation, so that they are understood in the context of a clear customer strategy (I find “emotional value proposition” useful for this). Then comes the “black box” of creative inspiration.

Just as in advertising, there’s no way to explain how it happens, but if you get the right insight, briefed in the right way, to the right people, then good things will follow. In “Well Designed”, Jon Kolko puts it like this:

“Designers learn to purposefully embrace intuitive or inferential leaps of logic…”

That’s what fuels ideas as simple as my favourite example: removing the clock from a waiting-room wall to improve customer satisfaction with waiting times.

 


Measurement madness

How valuable is measurement? You often hear variations on the phrase:

“What gets measured gets done.”

I’ve probably used it once or twice myself. My organisation makes a living from measurement, so you’d expect us to be in favour of it.

We are, but like anything important, measurement needs to be used with care.

Measurement drives behaviour

There’s no doubt that measurement can have a big impact on behaviour. As Edward Tufte points out, as soon as we begin to measure something we start to influence it:

“Measurement itself (and the apparent review of the numbers) can govern a process.”

That can be very positive. By measuring something, such as customer satisfaction, we signal that it matters, and that helps to focus everyone on improving it.

Hidden truths, busting myths

It’s not just about focusing attention. The growing use of analytics in sport has shown that data can reveal flaws in traditional management approaches based on experience and instinct.

Starting with baseball Sabermetrics, popularised in “Moneyball”, data-led management has spread throughout sport, and even threatens to enter the sceptical world of football. Not everyone is a fan of the new wave of statistics—this rant from Craig Burley about the “nerd nonsense” of expected goals is fairly typical—but bit by bit they’re becoming mainstream.

In business, as much as in sport, management by data can reveal opportunities to improve, and flaws in current practice, which are impossible to pick up on in any other way.

“Metric fixation”

Measurement is taking over, but unfortunately it can have a dark side.

Some things are easier to measure than others. When we can’t measure what matters, we’re prone to treat what we can measure as important. Metrics should serve to paint a clearer picture of the world as it really is, but sometimes they get in the way of seeing what really matters.

Obsessing over the number can drive short-term decision-making, something which has long been an issue in judging the performance of publicly listed companies. The emphasis on this quarter’s profit is understandable, but doesn’t always serve the long-term interests of shareholders, let alone anybody else.

Most worryingly, prioritising the metric over the underlying truth often leads to gaming and unintended consequences. I’m reminded of my nephew, who lies in bed swinging his arm to boost his Fitbit step count.

“The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.”

—Jerry Z. Muller, “Against Metrics”

Getting the balance right

Is measurement ultimately a negative force, then? I don’t think it has to be. A few simple principles help to make measurement a positive:

  • Data should be used to inform judgement, not replace it
  • Acknowledge that you can’t measure all the things that matter
  • Always question what the data really tell you (e.g. survivorship bias; see the sketch after this list)
  • Challenge the cost & time spent gathering the data (is it worth it?)
  • Focus communication and management on behaviours rather than results
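To make the survivorship-bias point concrete, here’s a tiny simulation (all numbers invented purely for illustration). If unhappy customers churn before the survey reaches them, the satisfaction you measure looks healthier than the satisfaction that actually exists:

```python
import random

random.seed(42)

# Invented population: satisfaction scores (roughly 0-10) for 1,000 customers.
population = [random.gauss(6, 2) for _ in range(1000)]

# Unhappy customers are more likely to leave before the survey goes out,
# so they never appear in the results.
survivors = [s for s in population if random.random() < min(1.0, s / 10)]

print(f"True mean satisfaction:    {sum(population) / len(population):.2f}")
print(f"Mean among those surveyed: {sum(survivors) / len(survivors):.2f}")
# The surveyed mean comes out higher: the metric looks fine
# while churn quietly does the damage.
```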

Using data requires a layer of interpretation to make it meaningful. If we see metrics as an indicator to be incorporated into judgements, rather than as absolute truth, then measurement deserves its place in your organisation.


Why ending well is so important

Our memories and perceptions of experience are much less concrete and rational than we like to imagine.

If you don’t accept that, go away and read Kahneman or Ariely, and then we’ll talk.

One particular mental bias that should be at the forefront of our minds when planning customer experiences is the “peak-end rule”.

Simply put, this rule states that we judge an experience* based on how we feel at its most extreme point and at the end. There are subtleties around the length of the experience, and how long-lasting the effects are, which you can read about in the Wikipedia article.
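As a toy sketch of the idea (the equal weighting of peak and end below is a simplification I’ve assumed, not the full model from the literature), compare a peak-end summary of a journey with its simple average:

```python
# Toy illustration of the peak-end rule. The equal weighting of peak
# and end is an assumed simplification, not the full model.
def peak_end_score(moments):
    """Summarise an experience the way memory (roughly) does."""
    peak = max(moments, key=abs)  # the most extreme moment, good or bad
    end = moments[-1]
    return (peak + end) / 2

def average_score(moments):
    """Summarise an experience the way a spreadsheet does."""
    return sum(moments) / len(moments)

# Moment-by-moment ratings of a journey, -5 (awful) to +5 (delightful)
weak_finish = [2, 3, 1, -4, 2, 1]
strong_finish = [2, 3, 1, -4, 2, 4]  # identical except for the ending

print(peak_end_score(weak_finish))    # -1.5: remembered as a bad experience
print(peak_end_score(strong_finish))  #  0.0: the ending rescues the memory
print(average_score(weak_finish))     # ~0.83: the "objective" view barely moves
print(average_score(strong_finish))   # ~1.33
```

The averages barely move, but the remembered experience shifts dramatically with the ending.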

What I want to focus on is the importance of ending customer journeys well (and the end may well be more important than the peak). Whether it’s the big picture of a customer lifetime, the detail of individual interactions, or the many journeys of all sizes in between, we often let the customer experience peter out instead of ending with a bang.

This is madness.

In many cases the single most powerful change you could make to seize control of the customer experience would be to ensure that you finish strongly. Schedule a call to make sure the customer got what they needed. Send a thank-you note. Go out of your way to make their transition to a new supplier seamless.

The peak-end rule means that you’ll leave the customer with a powerful positive memory, and that’s bound to pay for itself.

 


* Trying to get psychologists to define what exactly we mean by “an experience” is fun, but the word does seem to correlate with a consistent mental concept shared by all of us.


Don’t you owe customers a reply?


I was chatting to a taxi driver on the way to see a client the other day, and he asked what I do.

I explained that I help companies understand their customers.

“You mean you send out those surveys that I never answer?”

It’s a depressingly common reaction. If people are to be believed, it’s a miracle that we manage to persuade anyone to take part in our research.

My taxi driver went on to explain why:

“There’s no point because they never reply to you, however much time you take explaining how you feel, or how the service could have been better.”

I think that’s a really interesting perspective. For our business-to-business clients, it’s normal to respond to customers individually based on their answers. You need to learn general lessons, sure, but you also need to address individual concerns and show customers that you value their feedback.

What about business-to-consumer clients? We usually recommend a “hot alert” system, passing on any customer with a burning issue for the client to resolve, but that’s not what the taxi driver was talking about.
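The logic behind a hot alert is simple enough to sketch. This is purely illustrative (the field names and the threshold are my inventions, not a description of any real system):

```python
# Hypothetical hot-alert filter: flag respondents who need a personal
# follow-up rather than disappearing into the aggregate analysis.
ALERT_THRESHOLD = 4  # out of 10; below this, someone should pick up the phone

def hot_alerts(responses):
    """Yield respondents with a burning issue for the client to resolve."""
    for r in responses:
        low_score = r["satisfaction"] <= ALERT_THRESHOLD
        wants_reply = r.get("wants_response", False)
        if low_score or wants_reply:
            yield r

responses = [
    {"name": "A. Customer", "satisfaction": 2,
     "comment": "Still waiting for a refund."},
    {"name": "B. Customer", "satisfaction": 9, "comment": "All good."},
    {"name": "C. Customer", "satisfaction": 7, "wants_response": True,
     "comment": "Could you explain the new pricing?"},
]

for r in hot_alerts(responses):
    print(f"Follow up with {r['name']}: {r['comment']}")
```

A `wants_response` flag along these lines would also cover the promise suggested later in this post: anyone who asks for a personal reply gets one.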

He was talking about the lack of respect customer satisfaction surveys often show for customers, asking them to spend 10 minutes to submit carefully considered responses…which are then aggregated into a mass for impersonal analysis.

I think he’s right: we owe customers more than that.

If we’re worried about falling response rates (as we should be) then we need to do something about it. I suggest starting with a simple promise…

If you complete a satisfaction survey for us, and you want a personal response, you’ll get one.

For anyone who really cares about what their customers think, I can’t see any reason you wouldn’t want to do it, and I’m willing to bet it would improve your response rate.

 


From drivers to design thinking

Driver analysis is great, isn’t it? It reduces the long list of items on your questionnaire to a few key drivers of satisfaction or NPS. A nice simple conclusion—”these are the things we need to invest in if we want to improve”.

But what if it’s not clear how to improve?

Often the key drivers turn out to be big-picture, broad-brush items. Things like “value” or “being treated as a valued customer”, which are more or less proxies for overall satisfaction. Difficult to action.

Looking beyond key drivers, there’s a lot of insight to be gained by looking at how all your items relate to each other, as well as to overall satisfaction and NPS. Those correlations, best studied as either a correlogram (one option shown below) or a network diagram, can tell you a lot without requiring much in the way of assumptions about the data.

[Correlogram of item-to-item correlations]
In particular, examining the links between specific items can support a design thinking approach to improving the customer experience based on a more detailed understanding of how your customers see the experiences you create.
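If you want to try this yourself, the correlations take only a few lines with the standard Python data stack (the file and column names here are hypothetical):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical survey export: one row per respondent, one column per
# questionnaire item, plus overall satisfaction and NPS.
df = pd.read_csv("survey.csv")

items = ["value", "staff_helpfulness", "ease_of_contact",
         "problem_resolution", "overall_satisfaction", "nps"]

# Spearman is a reasonable default for ordinal rating scales.
corr = df[items].corr(method="spearman")

sns.heatmap(corr, annot=True, fmt=".2f", cmap="vlag", vmin=-1, vmax=1)
plt.title("How questionnaire items relate to each other")
plt.tight_layout()
plt.show()
```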

Your experiences have a lot of moving parts—don’t you think you ought to know how they mesh together?


Trust: is honesty more important than competence?

Most theories of trust see it as multi-dimensional.

The details vary (some links below), but they mostly boil down to two things:

  • Competence
  • Integrity

Understanding how they relate to each other is really important.

For instance, Stephen M.R. Covey points out that the way banks set about repairing their reputations after the financial crisis was exactly wrong, from a trust perspective.

Their response was to employ lots of people to ensure they were “compliant”.

That’s all very well, and perhaps even necessary, but it won’t do anything to promote trust. Compliance, and rules more generally, are what we create when we can’t or don’t trust people.

Competence is a situational judgement. Each of us is competent in certain areas, and not competent in others. Moreover, competence does not require infallibility—customers are quite forgiving of mistakes (as long as you admit you’re wrong and make an effort to put things right).

Integrity is about who you are, and it’s much more long-term. If I lose trust in your integrity then it’s very hard for you to win it back.

The implications for customer service are clear—don’t be afraid of admitting a mistake, and never ever lie to a customer.

Strange how often we do the opposite, isn’t it?

 


We run a half-day briefing on trust as it relates to Employee Engagement and Customer Experience. You can find more details on our website.

Three of the best models of trust are:


Are you measuring importance right?

One of the universal assumptions about customer experience research is that the topics on your questionnaire are not equally important.

It’s pretty obvious, really.

That means that when we’re planning what to improve, we should prioritise areas which are more important to customers.

Again, pretty obvious.

But how do we know what’s important? That’s where it starts to get tricky, and where we can get derailed into holy wars about which method is best. Stated importance? Key Driver Analysis (or “derived importance”)? Relative importance analysis? MaxDiff?

An interesting article in IJMR pointed out that these decisions are often made not on the evidence, but according to the personal preferences of the main decision maker on a particular project.

Different methods will suggest different priorities, so personal preference doesn’t seem like a good way to choose.

The way out of this dilemma is to stop treating “importance” as a single idea that can be measured in different ways. It isn’t. Stated importance, derived importance and MaxDiff are all measuring subtly different things.

The best decisions come from looking at both stated and derived importance, using the combination to understand how customers see the world, and addressing the customer experience in the appropriate way (a rough illustrative sketch follows the list below):

 
[Diagram: stated vs derived importance quadrants]

  • High stated, low derived – a given. Minimise dissatisfaction, but don’t try to compete here.
  • Low stated, high derived – a potential differentiator. If your performance is par on the givens, you may get credit for being better than your competitors here.
  • High stated, high derived – a driver. This is where the bulk of your priorities will sit. Vital, but often “big picture” items that are difficult to action.
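Here’s the rough sketch promised above. The median split, the stated-importance ratings, and a simple correlation as the derived measure are all simplifying assumptions on my part; proper relative-importance analysis is considerably more sophisticated:

```python
import pandas as pd

# Hypothetical survey export, as before: performance ratings per item,
# "how important is X?" ratings, and an overall satisfaction score.
df = pd.read_csv("survey.csv")
items = ["value", "staff_helpfulness", "ease_of_contact", "problem_resolution"]

# Stated importance: what customers say matters to them.
stated = df[[f"{item}_importance" for item in items]].mean()
stated.index = items

# Derived importance: how strongly each item tracks overall satisfaction.
derived = df[items].corrwith(df["overall_satisfaction"], method="spearman")

def classify(item):
    hi_stated = stated[item] >= stated.median()
    hi_derived = derived[item] >= derived.median()
    if hi_stated and hi_derived:
        return "driver"
    if hi_stated:
        return "given"
    if hi_derived:
        return "potential differentiator"
    return "low priority"

for item in items:
    print(f"{item}: {classify(item)}")
```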

That’s a much more rounded view than choosing a single “best” measure to prioritise, and more accurately reflects how customers think about their experience.


Insight & internal comms: a match made in heaven

Every internal communications team I know is crying out for content.

Every customer insight team I know is crying out for airtime and tools to get their messages to staff.

I think you can see where I’m going with this.

So why do we not see more use of customer (and employee) insight in internal comms? I think the main problem is that we, as insight people, have tended to be boring.

We know there’s loads of brilliant stuff in our 60 slides of bar charts, so we send the slide pack off to internal comms. Then we’re a bit hurt when they don’t do anything with it.

Bar charts are boring.

Stories are interesting.

But stories are not something that simply emerges from talking to customers. What distinguishes a story is not that it is human (although that’s important), but that it has a point.

To turn insight into effective comms you need to become a storyteller. That means having the courage to craft a story for internal comms to tell, or, better still, working with them to craft one together.

Figure out who your audience is, what interests them, and how your insight can change that for the better.

Let customers tell their stories, and flag up the turning points that sent their narratives in different directions.

Stories are told, not found.


Understanding customers


If people ask what I do, my one-sentence answer tends to be “I help organisations understand their customers”.

What does that actually mean?

The tools we use are well-established quantitative and qualitative research techniques, all of which fundamentally boil down to one thing: talking to people.

Easy. Sort of.

No doubt you’ve seen the ever-growing hype around Behavioural Economics? It’s a field that has an enormous amount to teach those of us whose job is to understand other people, and particularly the way they make decisions.

We know, for example, that people are really bad at predicting their future behaviour (“Yes, I’ll definitely eat more salad and fewer doughnuts this year”), and nearly as bad at explaining why they did things.

Does that mean that research based on asking people questions is a waste of time?

I don’t believe so. But it does mean that it’s a good idea to focus your questions on the right things.

If you want to know about past or current behaviour, it’s best to use observation or other sources of data if you can. If that’s not an option, people are fairly reliable about specific events, especially soon after them, and pretty unreliable on general patterns of behaviour (“How often do you go to the gym?”).

Future behaviour is tricky, because asking people is pretty much the only option. But consider the way you ask, and see if you can set yourself up for more accuracy. If you want to know whether people will buy your product, don’t ask a focus group (they’ll all say yes to be polite); see if you can get them to part with cash on Kickstarter instead. If that’s not possible, frame it as a choice—would they buy your product instead of their current supplier?

Understanding how people will behave if you make a change (to a website, store layout, etc.) is best done by experiment. The more concrete you can make the future state for customers, through actual change or prototyping, the more accurate your findings.
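As a minimal sketch of what “best done by experiment” means in practice (the visitor and conversion numbers are invented), here’s a two-proportion z-test comparing, say, the current layout against a prototype:

```python
from math import sqrt

# Invented A/B test results: visitors and purchases under each variant.
n_a, conv_a = 5000, 400   # current layout
n_b, conv_b = 5000, 460   # prototype layout

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(f"Conversion: {p_a:.1%} vs {p_b:.1%}, z = {z:.2f}")
# |z| > 1.96 is the conventional threshold for significance at the 5% level.
```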

Motivations are notoriously difficult for people to know, let alone explain. There’s no harm asking the question, but there’s often more insight from a good understanding of psychology than from people themselves. Rather than asking people why they did something, ask them how they feel about the choice they made or the product they chose, and then do the hard work in the analysis.

Attitudes form the mainstay of most research work, whether it’s brand associations, customer satisfaction, or employee engagement. We’re talking about thoughts and feelings, and again there are well-established limitations in what people are capable of telling you. The halo effect is a big one—if you want a meaningful attitude survey you have to work hard to ensure you get deeper than a single overall impression. Adding more questions won’t help; in fact, it’ll make it worse.
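A quick simulation (numbers entirely invented) makes the halo problem concrete: give each respondent a single overall impression, let it leak into every item rating, and the items end up measuring mostly the same thing, no matter how many you add:

```python
import random

random.seed(1)

def respondent(n_items):
    """Item ratings dominated by a single 'halo' impression."""
    halo = random.gauss(7, 1.5)
    # Each rating is mostly halo plus a little item-specific signal.
    return [0.8 * halo + 0.2 * random.gauss(7, 1.5) for _ in range(n_items)]

ratings = [respondent(10) for _ in range(500)]

def corr(xs, ys):
    """Pearson correlation, written out for transparency."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

item1 = [r[0] for r in ratings]
item2 = [r[1] for r in ratings]
print(f"Correlation between two 'different' items: {corr(item1, item2):.2f}")
# ~0.9: the survey is measuring one impression ten times over.
```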

Behavioural Economics teaches us that research based on asking people questions is limited, but it also gives us a framework to understand what those limitations are and how they work. It is not a threat to market research, in my view, but a coming of age.
