Tag Archives: Research

Insight means Cause & Effect

What is insight?

For me, it’s always about cause and effect, although sometimes we don’t do a good enough job of making that explicit.

Let’s have a look at the process of going from data to insight.

Finding patterns

When we analyse data, we start simply by describing it. We look at averages, percentages, etc. to understand what’s typical, and how much it varies. But no one could call that insight.

The next step is to look for patterns, exceptions, and differences. Analysis can get extremely complex, involving all sorts of fancy techniques, but fundamentally it comes down to three things:

  • Comparisons (“let’s split by region”)
  • Trends (“what’s happened to our Ease score over the past year?”)
  • Association (“what correlates with NPS?”)

Hidden within all three of these approaches is a key causal question, “why?” Why does this region have higher retention than that region? Why is our Ease score trending down? Why are some people Promoters and others are Passives?

Asking “why?”

We should take care to be clear when we move to this kind of causal reasoning, because it is a bit of a leap of faith, and in practice organisations often use data and statistical tools which are not really up to the job.

Correlation is not causation, as the saying goes, but frankly if we’re reporting correlation then it’s usually because we’re at least speculating about causation. That’s not necessarily a problem, as long as we’re clear on what we’re doing:

“…speculation is fine, provided it leads to testable predictions and so long as the author makes it clear when he’s ‘merely’ speculating.”

V.S. Ramachandran

Making up our own stories to explain correlation is not the answer (although as I’ve said elsewhere, telling stories is often the best way to communicate cause and effect arguments to other people).

What we’re really interested in is not asking “why”, but the related question “what if?” What if we take the account plans that the top-scoring regional manager has developed and use them in the other regions? What if we invest in a better customer portal? What if our score for reliability goes up by 1 point?

Asking “what if…?”

One very “big data” approach to insight is to focus heavily on building machine learning models which predict some outcome of interest (perhaps customer defection) with a great deal of accuracy. These techniques can be extremely powerful—they are more robust than statistical approaches when it comes to data that is very sparse, more open in terms of the types of data they can deal with, and more flexible about the shape of the relationship between variables.

But prediction is still not causation.

That might seem a bit odd if you’re not used to thinking about this stuff, so let’s prove it with a classic thought experiment. We can look outside at the lawn to see if the grass is wet, and if it is then we can (accurately) predict that it’s been raining. But if we go outside with a watering can and make the grass wet, that doesn’t make it rain.
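
If you prefer to see that in data terms, here is a minimal simulated sketch (invented probabilities, plain Python with numpy assumed): wet grass is an excellent predictor of rain, but intervening to make the grass wet leaves the chance of rain exactly where it was.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Observational world: rain causes wet grass (plus the occasional sprinkler).
rain = rng.random(n) < 0.3
wet_grass = rain | (rng.random(n) < 0.05)

# Prediction: given that the grass is wet, rain is very likely.
print("P(rain | wet grass):", rain[wet_grass].mean())        # ~0.90

# Intervention: we go out with the watering can and *set* the grass to wet.
# Rain is unaffected, because we changed the effect, not the cause.
wet_grass_do = np.ones(n, dtype=bool)
print("P(rain | do(wet grass)):", rain[wet_grass_do].mean())  # ~0.30, the base rate
```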

Focusing on prediction without a theory about the underlying causal mechanism can lead us to make equally stupid mistakes in the real world.

With statistical modelling techniques we can build models that capture a theory of cause and effect, and test it. But what we’re really, really interested in is not even asking “what if…”, it’s understanding what will happen if we take some action. What if we cut our delivery times in half?

Asking “what if we do…?”

How do we make this additional leap from prediction to causation?

The key is that we have a theory, because once you make your theory explicit you can test it. Techniques such as Structural Equation Modelling allow us to test how well our theory matches up to the data we’ve got, and that’s extremely useful.
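
As a rough flavour of what that looks like in practice, here is a minimal sketch assuming the third-party semopy package and invented column names (Ease, Reliability, Satisfaction, Recommend); real SEM work involves far more care over measurement, fit statistics, and sample size than this.

```python
import pandas as pd
from semopy import Model  # third-party SEM package, assumed installed

# Our explicit theory: Ease and Reliability drive Satisfaction,
# and Satisfaction in turn drives willingness to Recommend.
theory = """
Satisfaction ~ Ease + Reliability
Recommend ~ Satisfaction
"""

# Hypothetical survey file with one column per variable named above.
survey_data = pd.read_csv("survey_scores.csv")

model = Model(theory)
model.fit(survey_data)

# Path estimates, standard errors, and p-values: does the data look
# consistent with the theory we wrote down?
print(model.inspect())
```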

But not all data suits this kind of model. More generally, there’s a lot we can learn from the Bradford Hill criteria, which originated in epidemiology. Put simply, these are a set of nine considerations for drawing causal conclusions from observational data, and they include important ideas such as a dose-response relationship and a plausible mechanism of action.

Judea Pearl is one of the leading thinkers on causal inference, so if you’re interested in this kind of stuff I’d highly recommend his work. The Book of Why is the most accessible starting point.

From theory to data

Even better, theory can guide the data we collect, whereas naive machine learning approaches assume we have all the relevant data. In practice that is very rarely the case.

Often we’re not measuring what really matters, or even what we think we’re measuring, like the online dating service that developed an algorithm to assess the quality of your profile picture. Except that you can’t measure “quality”, so they ended up measuring popularity instead. The result? The “best” photos, according to the algorithm, all belonged to attractive young women.

Not much help for me if I want to improve my profile (although fortunately I’ve been off the market for 20 years!)

That’s why, although they’re extremely powerful when it comes to prediction, I think machine learning approaches are not yet the final word. By focusing on prediction rather than explanation they miss the role of theory in guiding what data to collect, the importance of understanding intervention rather than just association, and the subtle errors that come from assuming you’ve measured what you’re really interested in.

Insight is about cause and effect, and that means developing and testing new theories about the world.

Remember who the expert is

When European sailors discovered Easter Island, or Rapa Nui as it is known to its inhabitants, the first thing they noticed was the enormous moai statues.

Understandably, they were amazed that a small population of people with access to only simple tools had been able to carve and move these vast stone figures, weighing up to 82 tonnes.

They asked the islanders how the statues had been put in place.

“They walked.”

Since this was obviously nonsense, Europeans over the years developed theories to explain the apparently impossible. Rollers seemed the most likely explanation, and this tied in nicely with the evidence that Rapa Nui had once been covered in palm trees.

An idea soon emerged of a people who, prioritising their statues above all else, cut down all the trees on the island for rollers to move them. Ultimately this led to the collapse of civilisation on the island, as there was simply not enough food to maintain the population.

This was the received wisdom for over a hundred years, and it forms a neat parable of the risks of not looking after our environment, but it’s probably not what happened on Rapa Nui.

As Paul Cooper explains in an episode of the superb Fall of Civilizations podcast, the inhabitants of Easter Island were wiped out by contact with Europeans. New diseases and slave-raiding were the main drivers of depopulation on an island that seems to have been remarkably peaceful and well organised for hundreds of years, until the Europeans arrived.

What has all this got to do with research?

I’ve written a few things recently about the importance of interpretation in qualitative research. That’s certainly true, but there is a catch. It’s all too easy to hear what we expect to hear. The listener’s assumptions can make them deaf to what the speaker is actually saying, particularly when there’s a perceived imbalance of power (like the assumption that the inhabitants of Easter Island were “primitive”).

If you want to understand customers, start by assuming that what they say is true. Forget your preconceptions, and treat your customer as an expert—after all, who knows more about how they feel, and why, than they do? Start by believing what they tell you, and if it seems strange do the work to figure out why there’s a perception gap.

Oh, and if you’re wondering how the moai moved into position? They walked.

Language is more than words

I’m fascinated by language.

It’s one of the relatively small number of things which sets human beings apart from any other animal, and it’s the foundation on which pretty much all our knowledge, cooperation, and even civilisations are built.

It’s also the stock-in-trade of a researcher—language remains the main tool we have to understand other people and how they see the world.

But there’s a danger that when we think about “language” we reduce it to simply the words we use. We send and understand meaning far more richly than that, and in many ways our ability to communicate goes beyond what we’re consciously aware of.

There’s a fascinating article in New Scientist about the way in which we use filler words such as “um”, “uh”, and “huh?”. As the article says:

“Far from being an inarticulate waste of breath, filler words like um, uh, mmm and huh are essential for efficient communication, sending important signals about the words we are about to say so that two speakers can better understand each other.”

The research shows that filler words are not mistakes, and they’re not empty or interchangeable, but form a kind of metalanguage. “Um”, for example, signals a longer pause than “uh”.

These words help the listener to understand what to expect, and they also prepare us to be ready for a change, or something unexpected, and therefore help us to notice and remember significant things.

That’s why it worries me so much when qualitative research is reduced to customer quotes. Shorn of body language, tone, context, and the metalanguage we may process without conscious awareness, it hardly seems right to call these shallow collections of text “verbatims”.

For qualitative researchers, our interpretation of what customers mean is at the heart of the work we do. It’s by taking advantage of our ability to interpret these clues correctly that real customer insight is possible, not by piling up quotes.

Using real-time reporting for good, not evil

Real-time customer satisfaction measurement sounds great, doesn’t it? To be honest, I tend to roll my eyes when the subject comes up.

Quick turnaround on surveys and reporting is a good thing, of course. But “real-time” is a beguiling phrase that conceals a host of challenges, compromises, and decisions which need to be made in order to build an effective customer experience programme.

What’s lurking beneath the surface?

Surveys aren’t real-time…

Real-time reporting rests on the idea that we should be able to look at a dashboard and see the current state of the business. Great. So what is the current state of customer satisfaction?

Is it the satisfaction of customers who are interacting right now? We don’t know that yet. It makes sense to get the survey out as close as possible to the event, but it can never be instant.

Is it the satisfaction of customers we’ve just finished surveying? What do we mean by “just”? You need to decide how far back you should go in your definition of “real-time”. An hour? A day? A week? You’ll have to strike a balance between the robustness (i.e. sample size) and the freshness of the data you’re reporting. A real-time number does more harm than good if it fluctuates up and down with no real underlying change.

Surveys can’t be real-time. They can be close to the event, quickly reported, and aggregated over a relatively short historical range (all of which are a good idea), but they can’t be real-time.
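
To make that robustness-versus-freshness balance concrete, here is a minimal sketch (pandas assumed, invented file and column names) that reports a trailing seven-day score, and only publishes it once there are enough responses behind it.

```python
import pandas as pd

# responses.csv: one row per completed survey, with a timestamp and a 0-10 score.
df = pd.read_csv("responses.csv", parse_dates=["responded_at"])
scores = df.set_index("responded_at").sort_index()["score"]

# Trailing 7-day window: fresher than a monthly figure, more robust than a single day.
window = scores.rolling("7D")
rolling_mean = window.mean()
rolling_n = window.count()

# Only publish the "current" score when the window holds enough responses;
# otherwise the number just bounces around with no real underlying change.
MIN_RESPONSES = 100
if rolling_n.iloc[-1] >= MIN_RESPONSES:
    print(f"7-day score: {rolling_mean.iloc[-1]:.2f} (n={int(rolling_n.iloc[-1])})")
else:
    print(f"Not enough responses yet (n={int(rolling_n.iloc[-1])})")
```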

…attitudes aren’t fixed at events…

Does the idea of “real-time” satisfaction even make sense? Yes and no. Ironically, one of the problems with surveys which aim to get a score immediately after the event is that the lasting memory for customers may not have formed yet.

Imagine you phone your bank to request a replacement card, and then you’re asked to complete an IVR survey. Your interaction was great, so you score it 9 out of 10. Five days later you realise that there’s still no sign of your card. What score would you give now?

This approach to surveys reveals a deeply internal, process-driven focus. The interaction often isn’t finished from a customer’s point of view at the point of the survey. That means that their attitudes may change, and we may miss opportunities to improve.

…action isn’t real-time…

The point of the survey is to make improvements, unless it’s just a tick-box exercise.

“Real-time” satisfaction measurement is a good way to pick up actively dissatisfied customers, allowing you to intervene and do something about it. But that’s only part of the picture.

Lasting improvements to the customer experience require changes to culture, processes, and behaviours. Those take a longer-term view to analyse, plan, and implement properly, and they can often be lost in the rush to focus on the latest score.

…but effective feedback is real-time

Where real-time satisfaction studies work best is when the data is fed immediately to the people who need it, rather than being owned by managers. To quote John Seddon:

“Performance information should be used by the people who do the work.”

John Seddon, I Want You To Cheat

Real-time satisfaction programmes which work best are those that get customer feedback to staff as quickly and transparently as possible, empowering them to improve the customer experience and reflect on their own behaviour. Management’s role is to provide the right information, and then get out of the way.

In his book Bounce, Matthew Syed explains that the key to extraordinary performance is thousands of hours of high-quality practice. One of the keys to high-quality practice is immediate feedback, and I think that good “real-time” customer feedback, even if the reality is a bit of a messy compromise, is the closest we can get to this.

Measuring emotion

Is it possible to measure emotion?

I don’t think so, at least not with a survey. Emotions are largely unconscious and experienced in the moment; asking customers to accurately remember and score them after the event misrepresents the nature of emotions.

That doesn’t mean we should give up on the idea of trying to understand emotions. Here are some tactics we can try…

Qualitative research

Qualitative research is all about trying to build up a picture of how customers think and feel, and the context that shapes that. Emotions, as we saw in a previous post, are a vital part of the picture.

The mistake people often make is thinking that qualitative research is about what customers say. It’s not, it’s about why they said it. Good qualitative research digs beneath the surface to understand the deep psychological needs and reasons for customers’ behaviour, thoughts, and feelings. That’s where the emotions sit.

How do we do that? It starts with asking probing questions, but ultimately it means adding a layer of interpretation, so qualitative research is never entirely objective. To counteract the subjectivity of interpretation, we can turn to established models.

Models to interpret

In her excellent book “MindFrames”, Wendy Gordon outlines six distinct lenses we can use when trying to make sense of what customers say, based on decades of practice. This is a good example of the tendency all qualitative researchers have to build up mental models that help them translate what customers say and do into an understanding of why.

Good researchers keep up to date with what the cognitive sciences have to tell us about how the human mind works, looking for ways to translate that into the messy real world of customer experience.

Measurement (not questions)

What about measurement? Why can’t we take something like Plutchik’s list of basic emotions and ask customers to score them on a scale? There’s nothing stopping you from trying, but I don’t believe it often works. Introspection is a terrible tool for understanding our unconscious mind, and you’ll find that a few easy-to-articulate emotions such as “anger” dominate.

So what can we do?

  • Focus on causes and outcomes. Qualitative work can highlight which events and behaviours cause emotions, and which outcomes derive from them. Those are often easier to measure quantitatively.
  • Interpret. Know that what customers say is not always the true cause. If they give a low score for waiting times, understand that anxiety may well be the real problem.
  • Use non-survey methods. Sometimes it’s possible to measure emotion in the moment, without asking customers directly. IDEO’s laugh detector is a good example of this.
  • Quantify verbatims. Customers usually reveal more about their emotions in their verbatim comments. It’s relatively easy to find (or build) dictionaries that will score comments for emotion (a minimal scoring sketch follows this list). This works to a point, but be aware that you are only working at the surface level of what customers say, not at the deeper level of why.
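
To illustrate that last point, here is a minimal sketch of dictionary-based scoring with a tiny invented lexicon (real studies would use an established dictionary such as NRC or LIWC); note that it works purely at the surface level of the words customers choose.

```python
import re
from collections import Counter

# A tiny, invented emotion lexicon purely for illustration.
LEXICON = {
    "angry": "anger", "furious": "anger", "annoyed": "anger",
    "worried": "anxiety", "anxious": "anxiety", "unsure": "anxiety",
    "delighted": "joy", "love": "joy", "relieved": "relief",
}

def score_comment(comment: str) -> Counter:
    """Count emotion-dictionary hits in a verbatim comment (surface level only)."""
    words = re.findall(r"[a-z']+", comment.lower())
    return Counter(LEXICON[w] for w in words if w in LEXICON)

print(score_comment("I was worried the card wouldn't arrive, and I'm still annoyed."))
# Counter({'anxiety': 1, 'anger': 1})
```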

The future for emotions

The latest scientific evidence suggests that emotions may be less innate, less universal, and less monolithic than they feel to us. Lisa Feldman Barrett in “How Emotions are Made” says:

“…your emotions are not built-in but made from more basic parts. They are not universal but vary from culture to culture. They are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment.”

That points us even more firmly away from trying to measure them in any straightforward way.

Don’t you owe customers a reply?

I was chatting to a taxi driver on the way to see a client the other day, and he asked what I do.

I explained that I help companies understand their customers.

“You mean you send out those surveys that I never answer?”

It’s a depressingly common reaction. If people are to be believed, it’s a miracle that we manage to persuade anyone to take part in our research.

My taxi driver went on to explain why:

“There’s no point because they never reply to you, however much time you take explaining how you feel, or how the service could have been better.”

I think that’s a really interesting perspective. For our business-to-business clients, it’s normal to respond to customers individually based on their answers. You need to learn general lessons, sure, but you also need to address individual concerns and show that you value their feedback.

What about business-to-consumer clients? We usually recommend a “hot alert” system, passing on any customers with a burning issue for the client to resolve, but that’s not what the taxi driver was talking about.

He was talking about the lack of respect customer satisfaction surveys often show for customers, asking them to spend 10 minutes to submit carefully considered responses…which are then aggregated into a mass for impersonal analysis.

I think he’s right: we owe customers more than that.

If we’re worried about falling response rates (as we should be) then we need to do something about it. I suggest starting with a simple promise…

If you complete a satisfaction survey for us, and you want a personal response, you’ll get one.

For anyone who really cares about what their customers think I can’t see any reason you wouldn’t want to do it, and I’m willing to bet it would improve your response rate.

Experiments to learn about action

I usually describe my job as helping clients to understand their customers and staff.

In particular, I help clients to understand how people think and feel (their attitudes), how those relate to their experiences, who they are (segmentation), and what they do (behaviour). Usually the ultimate reason is to answer the question…

“If we do X what will happen to Y?”

Learning about people

There are basically two tools in the researcher’s armoury: asking questions and observation. Which works best? Broadly speaking we know that observation works better for behaviour, because people aren’t very good at remembering or (in particular) predicting what they do. We ask questions because it’s the only way to try to understand what’s happening inside people’s heads. It’s not perfect, but it’s often the best tool we have. Where possible, combining both techniques can give insights that neither on its own is capable of.

In either case, however, we’re simply bystanders observing what happens to customers. That means that it’s very difficult to prove the links we identify, especially if we want to predict what will happen if we make a change of some sort.

It’s the knotty old problem of correlation versus causation. The classic example here is the early 20th century study that found a significant link between US households which owned a vacuum cleaner and those that sent their kids to college. The link is real: it held for the population at the time, but it isn’t a direct causal relationship.

The point here is that the correlation holds for prediction (if I know whether or not you have a vacuum cleaner I can make a better-than-chance guess about whether your kids are at college), but fails for intervention (buying a vacuum cleaner doesn’t make it more likely that my child will get into Harvard). That’s why observational studies are flawed if we want to draw conclusions about what actions to take.

Learning about action

To prove a case for intervention, in other words to answer the question “If we do X what will happen to Y?”, we almost always* need to use an experiment. Experiments can be very difficult to design well, so read up on the details, but the important principles are:

  • You need a control condition to serve as a baseline
  • Participants are randomly allocated to receive control or treatment
  • Participants shouldn’t know which group they’re in
  • People interacting with the participants shouldn’t know what group they’re in

It’s usually difficult, and often impossible, to meet all these conditions in practice for the kinds of customer experience change we’re looking at, but that doesn’t mean we shouldn’t try to do the best we can.
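
Here is a minimal sketch of the first two principles in code (invented numbers, numpy and scipy assumed): customers are randomly allocated to control or treatment, and the difference in scores is then tested against what chance alone would explain.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Invented example: 2,000 customers due a delivery. Half receive a new proactive
# "your order is on its way" message (treatment); the other half do not (control).
customer_ids = np.arange(2000)
shuffled = rng.permutation(customer_ids)
treatment_ids, control_ids = shuffled[:1000], shuffled[1000:]

# ...run the change for the treatment group only, then survey both groups...
# Simulated 0-10 satisfaction scores so the sketch runs end to end.
control_scores = rng.normal(7.2, 1.5, size=1000).clip(0, 10)
treatment_scores = rng.normal(7.5, 1.5, size=1000).clip(0, 10)

# Did the change move the score by more than chance would explain?
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"Treatment {treatment_scores.mean():.2f} vs control {control_scores.mean():.2f} (p={p_value:.3f})")
```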

One place where the experimental approach has taken hold is in digital A/B testing. Web designers (A/B testing is almost an illness at Google) and communications teams (email subject lines, etc.) understand the value of making data-based decisions about which choices will deliver the best results.

Another is in the public sector, where the popularity of “Nudge” theory has seen behavioural economics tactics teamed with experiments to see which messages have the most impact on behaviour. This discussion of Kirklees Council’s GDPR mailings is an interesting example.

It’s high time we spread that enthusiasm for experiments throughout the rest of the customer experience.

Experiments are the only way for businesses to know the impact of planned changes on customer attitudes and business success.


* There are times when it is possible to prove causation from correlation, but it’s tricky. Judea Pearl’s Book of Why is probably worth a read if you’re interested in this stuff.

User stories & customer journey mapping

A big mistake that many organisations make when they try to map the customer journey is that they stick too close to their own perspective.

The result may be a customer view of their process map, but it’s not a true customer journey map.

Why not? The tell-tale problems are:

  • Too much detail
  • Ignoring the context of the customer’s life
  • Focused on products, processes & touchpoints
  • Starting too late in the journey
  • Finishing too early in the journey

How can we overcome this tendency to let the inside-out view dominate? The best way is to use qualitative research and allow customers to lead the creation of the journey map.

User stories are a really useful tool to make sure you approach the journey with the right mindset. They’re normally written in the form

As a __________ I want to __________ in order to __________.
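
For example (an invented illustration): “As a first-time parent, I want to sort out life insurance in order to know my family would be financially secure if anything happened to me.”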

Doing this will allow you to stretch your view of the journey, so that you start when the customer became aware of their need, not when they first got in touch with you. This more accurately reflects the customer experience, and opens up opportunities for innovation.

It also puts the customer’s goal (not your product) front and centre. This helps you to make sure that the experience you design is addressing the right problem, and opens you up to the possibility of solving it in new ways.

“People don’t want to buy a quarter-inch drill, they want a quarter-inch hole.”

—Theodore Levitt

Response rate: the elephant in the room

“What’s the sample size?”, you might get asked. Or sometimes (wrongly), “What proportion of customers did you speak to?”. Or even “What’s your margin of error?”.

Important questions, to be sure, but often misleading ones unless you also address the elephant in the room: what was the response rate?

Low response rates are the dirty little secret of the vast majority of quantitative customer insight studies.

As we march boldly into the age of “real-time”, high-volume customer insight via IVR, SMS, or mobile, the issue of low response rates is becoming increasingly difficult to sweep under the rug.

Why response rate matters

It’s too simplistic to say that response rates are directly correlated with nonresponse bias¹, which is what we’re really interested in, but good practice would be to look for response rates well over 50%. Academics are often encouraged to analyse the potential for nonresponse bias when their response rates fall below 80%.

The uncomfortable truth is that we mostly don’t know what impact nonresponse bias has on our survey findings. This contrasts with the margin of error, or confidence interval, which allows us to know how precise our survey findings are.

How to assess nonresponse bias

It can be very difficult to assess how much nonresponse bias you’re dealing with. For a start, its impact varies from question to question. Darrell Huff gives the example of a survey asking “How much do you like responding to surveys?”. Nonresponse bias for that question would be huge, but it wouldn’t necessarily be such a problem for the other questions on the same survey. Nonresponse bias is a problem when the likelihood of responding is correlated with the substance of the question.

There are established approaches² to assessing nonresponse bias. A good starting point for a customer survey would be:

  • Log and report reasons for non-participation (e.g. incorrect numbers, too busy, etc.)
  • Compare the make-up of the sample and the population (a minimal sketch of this check follows the list)
  • Consider following up some nonresponders using an alternative method (e.g. telephone interviews) to analyse any differences
  • Validate against external data (e.g. behavioural data such as sales or complaints)
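
Here is a minimal sketch of that sample-versus-population comparison (pandas assumed, invented file and column names): calculate the response rate, then compare the profile of responders against everyone who was invited.

```python
import pandas as pd

# invited.csv: everyone who was sent the survey. "responded" is True/False and
# "segment" is any profiling variable held for the whole population (e.g. region).
invited = pd.read_csv("invited.csv")

response_rate = invited["responded"].mean()
print(f"Response rate: {response_rate:.1%}")

# Compare the profile of responders with the profile of everyone invited.
profile = pd.DataFrame({
    "population": invited["segment"].value_counts(normalize=True),
    "responders": invited.loc[invited["responded"], "segment"].value_counts(normalize=True),
})
print(profile.round(3))  # large gaps are a warning sign of nonresponse bias
```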

How to reduce nonresponse bias

Increasing response rate is the first priority. You need to overcome any reluctance to take part (“active nonresponse”), but more importantly the “passive nonresponse” of customers who simply can’t be bothered. We find the most effective methods are:

  • Consider interviews rather than self-completion surveys
  • Introduce the survey (and why it matters to you) in advance
  • Communicate results and actions from previous surveys
  • Send at least one reminder
  • Time the arrival of the survey to suit the customer
  • Design the survey to be easy and pleasant for the customer

Whatever your response rate is, please don’t brush the issue under the carpet. If you care about the robustness of your survey, report your response rate and do your best to assess what impact nonresponse bias is having on your results.


1. This article gives a good explanation of why.

2. This article is a good example.

Empathy in Customer Experience

I often talk about how important empathy is, but I realised the other day that I was using it in two different ways:

1) Empathy as a tool to inform the design of customer experiences

2) Building empathy at the front line as an essential output of insight

Let’s look at both of those in a bit more detail.

Empathy for design

To design good experiences you need to blend a deep understanding of customers with the skills, informed by psychology, to shape the way they feel. Getting that understanding requires in-depth qualitative research to get inside the heads of individual customers, helping you to see the world the way they see it.

When you understand why people behave the way they do, think the way they think, and (most importantly) feel the way they feel, you can design experiences that deliver the feelings you want to create in customers.

Design, to quote from Jon Kolko’s excellent book Well Designed, is…

“…a creative process built on a platform of empathy.”

Empathy is a tool you can use to design better experiences.

Empathy at the front line

Improving the customer experience sometimes means making systematic changes to products or processes, but more often it’s a question of changing (or improving the consistency of) decision making at the front line.

Those decisions are driven by two things: your culture (or “service climate”), and the extent to which your people understand customers. If you can help your people empathise with customers, to understand why they’re acting, thinking, and feeling the way they are, then they’re much more likely to make good decisions for customers.

I’m sure we can all think of a topical example of what it looks like when front line staff are totally lacking in empathy.

The best way to build empathy is to bring customers to life with storytelling research communication. Using real customer stories, hearing their voices, seeing their faces, is much more powerful than abstract communication about mean scores and percentages.

Empathy at the front line is necessary to support good decisions.

Two kinds of empathy?

Are these two types of empathy fundamentally different? Not really. The truth is we are all experience designers. The decisions we make, whether grounded in empathy for the customer or making life easy for ourselves, collectively create the customer experience.

You can draw up a vision for the customer journey of the future, grounded in a deep understanding of customers, but if you fail to engage your colleagues at the front line it will never make a difference to customers.

To design effective experiences you need to start by gaining empathy for customers, but you also need to build empathy throughout your organisation.
