
From drivers to design thinking

Driver analysis is great, isn’t it? It reduces the long list of items on your questionnaire to a few key drivers of satisfaction or NPS. A nice simple conclusion: “these are the things we need to invest in if we want to improve”.
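For concreteness, here’s a rough sketch of what’s going on under the hood, using one common variant: a standardised regression of overall satisfaction on the item scores. The file and column names are hypothetical.

```python
# Sketch: key driver analysis as a standardised linear regression.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("survey_responses.csv")  # hypothetical: one row per respondent
items = ["value", "staff_friendliness", "ease_of_contact", "speed"]  # hypothetical items

# Standardise so the coefficients are comparable across items.
X = (df[items] - df[items].mean()) / df[items].std()
y = (df["overall_sat"] - df["overall_sat"].mean()) / df["overall_sat"].std()

model = LinearRegression().fit(X, y)
drivers = pd.Series(model.coef_, index=items).sort_values(ascending=False)
print(drivers)  # the largest coefficients are your candidate "key drivers"
```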

But what if it’s not clear how to improve?

Often the key drivers turn out to be big-picture, broad-brush items. Things like “value” or “being treated as a valued customer”, which are more or less proxies for overall satisfaction. Difficult to action.

Looking beyond key drivers, there’s a lot of insight to be gained by looking at how all your items relate to each other, as well as to overall satisfaction and NPS. Those correlations, best studied as either a correlogram (one option below) or a network diagram, can tell you a lot without requiring much in the way of assumptions about the data.
[Image: correlogram]
In particular, examining the links between specific items can support a design thinking approach to improving the customer experience based on a more detailed understanding of how your customers see the experiences you create.
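A minimal sketch of both views, assuming the same kind of respondent-by-item data as above (seaborn for the correlogram, networkx for the item network; the 0.4 threshold is purely illustrative):

```python
# Sketch: correlogram and network diagram of item correlations.
import pandas as pd
import seaborn as sns
import networkx as nx
import matplotlib.pyplot as plt

df = pd.read_csv("survey_responses.csv")  # hypothetical file
items = ["overall_sat", "value", "staff_friendliness", "ease_of_contact", "speed"]
corr = df[items].corr(method="spearman")  # rank correlation suits ordinal scales

# Correlogram: the full correlation matrix as a heatmap.
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm", vmin=-1, vmax=1)
plt.show()

# Network diagram: items linked where the correlation is reasonably strong.
G = nx.Graph()
for i in corr.index:
    for j in corr.columns:
        if i < j and abs(corr.loc[i, j]) > 0.4:  # arbitrary threshold
            G.add_edge(i, j)
nx.draw_networkx(G, with_labels=True)
plt.show()
```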

Your experiences have a lot of moving parts—don’t you think you ought to know how they mesh together?


Trust: is honesty more important than competence?

Most theories of trust see it as multi-dimensional.

The details vary (some links below), but mostly boil down loosely to two things:

  • Competence
  • Integrity

Understanding how they relate to each other is really important.

For instance, Stephen M.R. Covey points out that the way banks set about repairing their reputations after the financial crisis was exactly wrong, from a trust perspective.

Their response was to employ lots of people to ensure they were “compliant”.

That’s all very well, and perhaps even necessary, but it won’t do anything to promote trust. Compliance, and rules more generally, are what we create when we can’t or don’t trust people.

Competence is a situational judgement. Each of us is competent in certain areas, and not competent in others. Moreover, competence does not require infallibility—customers are quite forgiving of mistakes (as long as you admit you’re wrong and make an effort to put things right).

Integrity is about who you are, and it’s much more long-term. If I lose trust in your integrity then it’s very hard for you to win it back.

The implications for customer service are clear—don’t be afraid of admitting a mistake, and never ever lie to a customer.

Strange how often we do the opposite, isn’t it?

We run a half-day briefing on trust as it relates to Employee Engagement and Customer Experience. You can find more details on our website.

Three of the best models of trust are:


Are you measuring importance right?

One of the universal assumptions in customer experience research is that the topics on your questionnaire are not equally important.

It’s pretty obvious, really.

That means that when we’re planning what to improve, we should prioritise areas which are more important to customers.

Again, pretty obvious.

But how do we know what’s important? That’s where it starts to get tricky, and where we can get derailed into holy wars about which method is best. Stated importance? Key Driver Analysis (or “derived importance”)? Relative importance analysis? MaxDiff?

An interesting article in IJMR pointed out that these decisions are often made not on the evidence, but according to the preferences of whoever happens to be the main decision maker on a particular project.

Different methods will suggest different priorities, so personal preference doesn’t seem like a good way to choose.

The way out of this dilemma is to stop treating “importance” as a single idea that can be measured in different ways. It isn’t. Stated importance, derived importance and MaxDiff are all measuring subtly different things.

The best decisions come from looking at both stated and derived importance, using the combination to understand how customers see the world, and addressing the customer experience in the appropriate way:

 
[Diagram: stated vs. derived importance]

  • High stated, low derived – a given. Minimise dissatisfaction, but don’t try to compete here.
  • Low stated, high derived – a potential differentiator. If your performance is par on the givens, you may get credit for being better than your competitors here.
  • High stated, high derived – a driver. This is where the bulk of your priorities will sit. Vital, but often “big picture” items that are difficult to action.

That’s a much more rounded view than choosing a single “best” measure to prioritise, and more accurately reflects how customers think about their experience.
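To make that concrete, here’s an illustrative sketch. The data files are hypothetical, and derived importance is approximated by each item’s rank correlation with overall satisfaction, which is only one of the methods mentioned above.

```python
# Sketch: classify items as givens, potential differentiators, or drivers.
import pandas as pd

df = pd.read_csv("survey_responses.csv")        # hypothetical file
stated = pd.read_csv("stated_importance.csv",   # hypothetical: columns item, importance
                     index_col="item")["importance"]

items = [c for c in df.columns if c != "overall_sat"]
derived = df[items].corrwith(df["overall_sat"], method="spearman")

summary = pd.DataFrame({"stated": stated, "derived": derived}).dropna()

# Split at the medians; real cut-points deserve more thought than this.
hi_s = summary["stated"] >= summary["stated"].median()
hi_d = summary["derived"] >= summary["derived"].median()

summary["role"] = "low priority"
summary.loc[hi_s & ~hi_d, "role"] = "given"
summary.loc[~hi_s & hi_d, "role"] = "potential differentiator"
summary.loc[hi_s & hi_d, "role"] = "driver"

print(summary.sort_values("derived", ascending=False))
```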


Insight & internal comms: a match made in heaven

Every internal communications team I know is crying out for content.

Every customer insight team I know is crying out for airtime and tools to get their messages to staff.

I think you can see where I’m going with this.

So why do we not see more use of customer (and employee) insight in internal comms? I think the main problem is that we, as insight people, have tended to be boring.

We know there’s loads of brilliant stuff in our 60 slides of bar charts, so we send the slide pack off to internal comms. Then we’re a bit hurt they don’t do anything with it.

Bar charts are boring.

Stories are interesting.

But stories are not something that simply emerge from talking to customers. What distinguishes a story is not that it is human (although that’s important), but that it has a point.

To turn insight into effective comms you need to become a storyteller. That means having the courage to craft a story for internal comms to tell, or, better still, working with them to craft one together.

Figure out who your audience is, what interests them, and how your insight can change that for the better.

Let customers tell their stories, and flag up the turning points that sent their narratives in different directions.

Stories are told, not found.


Understanding customers


If people ask what I do, my one-sentence answer tends to be “I help organisations understand their customers”.

What does that actually mean?

The tools we use are well-established quantitative and qualitative research techniques, all of which fundamentally boil down to one thing: talking to people.

Easy. Sort of.

No doubt you’ve seen the ever-growing hype around Behavioural Economics? It’s a field that has an enormous amount to teach those of us whose job is to understand other people, and particularly the way they make decisions.

We know, for example, that people are really bad at predicting their future behaviour (“Yes, I’ll definitely eat more salad and fewer doughnuts this year”), and nearly as bad at explaining why they did things.

Does that mean that research based on asking people questions is a waste of time?

I don’t believe so. But it does mean that it’s a good idea to focus your questions on the right things.

If you want to know about past or current behaviour, it’s best to use observation or other sources of data where you can. If that’s not an option, then people are fairly reliable about specific events, especially soon after them, and pretty unreliable on general patterns of behaviour (“How often do you go to the gym?”).

Future behaviour is tricky, because asking people is pretty much the only option. But consider the way you ask, and see if you can set yourself up for more accuracy. If you want to know whether people will buy your product, don’t ask a focus group (they’ll all say yes to be polite); see if you can get them to part with cash on Kickstarter. If that’s not possible, frame it as a choice: would they buy your product instead of their current supplier’s?

Understanding how people will behave if you make a change (to a website, store layout, etc.) is best done by experiment. The more concrete you can make the future state for customers, through actual change or prototyping, the more accurate your findings.
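As a minimal sketch of reading off the results of such an experiment, here’s a simple two-proportion z-test on a hypothetical A/B split:

```python
# Sketch: did the new layout change behaviour? Two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

conversions = [132, 171]   # hypothetical: purchases in control and variant
visitors = [2400, 2380]    # hypothetical: visitors in each arm

stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the change really shifted behaviour;
# observed actions, not stated intentions, do the talking here.
```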

Motivations are notoriously difficult for people to know, let alone explain. There’s no harm asking the question, but there’s often more insight from a good understanding of psychology than from people themselves. Rather than asking people why they did something, ask them how they feel about the choice they made or the product they chose, and then do the hard work in the analysis.

Attitudes form the mainstay of most research work, whether it’s brand associations, customer satisfaction, or employee engagement. We’re talking about thoughts and feelings, and again there are well-established limitations in what people are capable of telling you. The halo effect is a big one: if you want a meaningful attitude survey, you have to work hard to ensure you get deeper than a single overall impression. Adding more questions won’t help; in fact, it’ll make it worse.

Behavioural Economics teaches us that research based on asking people questions is limited, but it also gives us a framework to understand what those limitations are and how they work. It is not a threat to market research, in my view, but a coming of age.
