Insight means Cause & Effect

What is insight?

For me, it’s always about cause and effect, although sometimes we don’t do a good enough job of making that explicit.

Let’s have a look at the process of going from data to insight.

Finding patterns

When we analyse data, we start simply by describing it. We look at averages, percentages, etc. to understand what’s typical, and how much it varies. But no one could call that insight.

The next step is to look for patterns, exceptions, and differences. Analysis can get extremely complex, involving all sorts of fancy techniques, but fundamentally it comes down to three things:

  • Comparisons (“let’s split by region”)
  • Trends (“what’s happened to our Ease score over the past year?”)
  • Association (“what correlates with NPS?”)

Hidden within all three of these approaches is a key causal question: “why?” Why does this region have higher retention than that region? Why is our Ease score trending down? Why are some people Promoters and others Passives?

Asking “why?”

We should take care to be clear when we move to this kind of causal reasoning, because it is a bit of a leap of faith, and in practice organisations often use data and statistical tools which are not really up to the job.

Correlation is not causation, as the saying goes, but frankly if we’re reporting correlation then it’s usually because we’re at least speculating about causation. That’s not necessarily a problem, as long as we’re clear on what we’re doing:

“…speculation is fine, provided it leads to testable predictions and so long as the author makes it clear when he’s ‘merely’ speculating.”

V.S. Ramachandran

Making up our own stories to explain correlation is not the answer (although as I’ve said elsewhere, telling stories is often the best way to communicate cause and effect arguments to other people).

What we’re really interested in is not asking “why”, but the related question “what if?” What if we take the account plans that the top-scoring regional manager has developed and use them in the other regions? What if we invest in a better customer portal? What if our score for reliability goes up by 1 point?

Asking “what if…?”

One very “big data” approach to insight is to focus heavily on building machine learning models which predict some outcome of interest (perhaps customer defection) with a great deal of accuracy. These techniques can be extremely powerful—they are more robust than statistical approaches when it comes to data that is very sparse, more open in terms of the types of data they can deal with, and more flexible about the shape of the relationship between variables.

But prediction is still not causation.

That might seem a bit odd if you’re not used to thinking about this stuff, so let’s prove it with a classic thought experiment. We can look outside at the lawn to see if the grass is wet, and if it is then we can (accurately) predict that it’s been raining. But if we go outside with a watering can and make the grass wet, that doesn’t make it rain.
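
If you like, you can run the thought experiment as a toy simulation. This is purely an illustrative sketch with made-up probabilities: observing wet grass makes rain very likely, but forcing the grass to be wet leaves the chance of rain exactly where it was.

```python
import random

random.seed(42)

# Generative (causal) model: rain causes wet grass, not the other way round.
def simulate_day():
    rain = random.random() < 0.3           # it rains 30% of the time
    wet = rain or random.random() < 0.05   # sprinklers occasionally wet the grass too
    return rain, wet

days = [simulate_day() for _ in range(10_000)]

# Observation: wet grass is an excellent *predictor* of rain.
p_rain_given_wet = sum(r for r, w in days if w) / sum(w for _, w in days)

# Intervention: we go outside with a watering can and *set* wet = True.
# The causal arrow runs rain -> wet, so forcing wet leaves rain untouched.
intervened = [(r, True) for r, _ in days]
p_rain_after_intervention = sum(r for r, _ in intervened) / len(intervened)

print(f"P(rain | observed wet grass) ~ {p_rain_given_wet:.2f}")        # high
print(f"P(rain | we made grass wet)  ~ {p_rain_after_intervention:.2f}")  # baseline rate
```

The prediction is excellent, but the intervention does nothing, because the causal mechanism only runs in one direction.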

Focusing on prediction without a theory about the underlying causal mechanism can lead us to make equally stupid mistakes in the real world.

Statistical modelling techniques let us capture a theory of cause and effect, and test it. But what we’re really, really interested in is not even asking “what if…”, it’s understanding what will happen if we take some action. What if we cut our delivery times in half?

Asking “what if we do…?”

How do we make this additional leap from prediction to causation?

The key is that we have a theory, because once you make your theory explicit you can test it. Techniques such as Structural Equation Modelling allow us to test how well our theory matches up to the data we’ve got, and that’s extremely useful.

But not all data suits these kinds of models. More generally, there’s a lot we can learn from the Bradford Hill criteria, which originated in epidemiology. Put simply, these are a set of nine conditions for drawing causal conclusions from observational data, and include important ideas such as a dose-response relationship and a plausible mechanism of action.

Judea Pearl is one of the leading thinkers on causal inference, so if you’re interested in this kind of stuff I’d highly recommend his work. The Book of Why is the most accessible starting point.

From theory to data

Even better, theory can guide the data we collect, whereas naive machine learning approaches assume we have all the relevant data. In practice that is very rarely the case.

Often we’re not measuring what really matters, or even what we think we’re measuring, like the online dating service that developed an algorithm to assess the quality of your profile picture. Except that you can’t measure “quality”, so they ended up measuring popularity instead. The result? The “best” photos, according to the algorithm, all belonged to attractive young women.

Not much help for me if I want to improve my profile (although fortunately I’ve been off the market for 20 years!)

That’s why, although they’re extremely powerful when it comes to prediction, I think machine learning approaches are not yet the final word. By focusing on prediction rather than explanation they overlook the role of theory in guiding what data to collect, the importance of understanding intervention rather than association, and the subtle errors that come from assuming you’ve measured what you’re really interested in.

Insight is about cause and effect, and that means developing and testing new theories about the world.


The “tenth man”

I recently watched World War Z, the Brad Pitt zombie film.

I couldn’t say I was looking for business inspiration, but sometimes it finds you in the strangest places!

There’s a scene set in Jerusalem where Pitt’s character asks a Mossad officer why Israel was able to respond so quickly to the threat posed by the zombies. He explains that it’s because they have learned the hard way about the danger of complacency, of adopting the easy consensus view of things. As a result they now use a principle the film calls the “tenth man”:

“If nine of us with the same information arrived at the exact same conclusion, it’s the duty of the tenth man to disagree. No matter how improbable it may seem, the tenth man has to start thinking with the assumption that the other nine were wrong.”

It seems that this idea is actually based on real Israeli policies designed to encourage contrary points of view, particularly the “devil’s advocate office.” It’s a really compelling idea, and it’s something that I think all organisations would benefit from, but it’s worth unpacking how it works a little bit.

Who wants to be the tenth person?

First of all, devil’s advocates tend to be pretty annoying, and pretty unpopular. Everyone else agrees that there is a simple and obvious explanation for something, and you’re the one stopping them from moving on to something more interesting.

Built into this approach is the idea that the job of the tenth person in the room is to be wrong, more often than not. The point of the tenth person is not to predict outcomes accurately, but to mitigate the consequences on the rare occasions that they are right.

Who wants to be unpopular and wrong? No one; it goes against all our social instincts, which is exactly why this needs to be baked into a rule. The tenth person needs to have no choice but to play their role.

Then what?

This is all very well, but so far all we’ve got is an annoying person in the room refusing to agree with the rest of us. What do we do about it?

There’s a delicate balance to strike here between doing nothing, and wasting resources to counteract a risk that probably doesn’t exist.

In a nutshell, I’d say our tenth person needs to do two things:

  • Map out the consequences of the alternative explanation, and develop a plan to address them
  • Figure out a way to assess whether the danger (or opportunity) is real or not

Killing complacency

Complacency, as we’ve seen over and over again with both public policy and business strategy, is a killer.

It’s what led the “unsinkable” Titanic to set sail without enough lifeboats for all its passengers. It’s what led NASA to conclude that the Challenger was safe to launch even though it was unprecedentedly cold. It’s what led Kodak to ignore the potential of digital cameras for 25 years.

If you want to be able to resist disruption, anticipate the impact of trends, and navigate shocks, then you need a way to anticipate the seismic events which no one sees coming.

If you want to fend off zombie thinking in your organisation, maybe you need someone to be the tenth person in the room?


Why “actionable insight” is (nearly) a myth


When I first started in research we used to produce findings.

Then everyone decided that “insights” sounded more exciting.

Now it’s “actionable insights”.

Devaluation is an interesting process to watch, isn’t it? When there’s no concrete way to judge what separates an “insight” from a “research finding”, we can simply rebadge everything we were doing before and sell it as insight so that we all feel better about ourselves.

The pity of it is that, instead of simply pretending research findings are actionable insight, we could be doing the work to turn our findings into actionable insight. The point is that it does take work. It’s a process, and that’s why there’s (almost) no such thing as insight which is actionable on its own.

The exception

Sometimes we turn up a research finding with such an obvious conclusion that it can’t help but be actionable. If you discover a fire you don’t need to draw up an action plan, you just grab a fire extinguisher. In the same way, in the world of customer research, when the survey shows you that a lot of customers are really dissatisfied with some core aspect of your business, it’s often very easy to know what you need to do about it.

The challenge comes when, as is usually true for organisations in a position to conduct customer research, most of your customers are mostly satisfied most of the time. What then?

My view is that survey results are only ever directly actionable when they reveal a major problem that needs fixing. More often they reveal something important about customer needs and expectations, which can form the basis of action with a bit of effort.

Ways to turn findings into insight

How do we take our research findings and turn them into insights, and how do we make those insights actionable? I like to think of it like this:

  • Research finding + Memorability = Insight
  • Insight + Conversation = Actionability
  • Actionability + Storytelling = Action

To turn research findings into insights we need to package them in a way that is compelling and memorable. The best way to do that is by making them visual (either literally, or in a phrase that embeds a visual metaphor), or by using a simple, memorable, phrase that encapsulates your message.

It’s by collaborating with colleagues at the front line that you can make that insight actionable. You, as the researcher, bring your understanding of customer needs; your colleagues bring their expertise and knowledge of what they do; actionable insight is what happens when the two come together.

What turns that potential into real world action is the success with which you make the case for it, and I think storytelling is the way to do that. Storytelling is about change, and it’s about cause and effect. “Because of X, we need to do Y, in order to achieve Z.”

Those arguments are most compelling if you can back them with evidence, and that’s where linking your survey data about customer attitudes to other sources of data is vital. If you can prove that certain events drive how customers feel, and that how they feel drives how they behave, and that that in turn drives profitability, then your story is an easy one to tell.
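
Here is a minimal sketch of what that evidence chain might look like. All the numbers and variable names are hypothetical, and real driver analysis needs far more care, but the idea is that each link is fitted separately and the fitted slopes multiply through to an end-to-end estimate.

```python
import random
import statistics

random.seed(1)

# Toy data (hypothetical): each stage of the story
# "events drive feelings, feelings drive behaviour, behaviour drives profit"
# is modelled as a noisy linear link.
n = 500
on_time      = [random.random() for _ in range(n)]                     # delivery performance
satisfaction = [0.7 * x + random.gauss(0, 0.1) for x in on_time]       # events -> feelings
repurchase   = [0.6 * s + random.gauss(0, 0.1) for s in satisfaction]  # feelings -> behaviour
profit       = [100 * r + random.gauss(0, 5) for r in repurchase]      # behaviour -> profit

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

# Chain the fitted links to estimate the end-to-end effect of improving delivery.
effect = (slope(on_time, satisfaction)
          * slope(satisfaction, repurchase)
          * slope(repurchase, profit))
print(f"Estimated profit impact per unit of delivery improvement: {effect:.1f}")
```

With the links fitted, the end-to-end estimate recovers roughly the product of the true slopes (0.7 × 0.6 × 100), which is the shape of argument that makes the story easy to tell.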

That’s actionable insight.


Goals, plans, actions, habits

I’ve been thinking about the ingredients for success in business, and I think I’ve boiled it down to six words:

Long-term outcome, short-term action.

When we talk about sustainable success, or about building a loyalty strategy, you have to think about the long term. Investing in your people and your customers is a strategy that, through patience and commitment, will pay off in the long run.

But you can’t just sit back and wait.

The payoff may be long term, but to get there you need to take action today. I sometimes criticise organisations for their knee-jerk reactions to changes in survey scores, but at least that shows a willingness to act. That’s preferable to the “wait and see” approach that some other organisations adopt.

What I’m going to argue is that the real art in using customer insight to improve is to balance a long-term view of goals and outcomes with a short-term focus on actions that will move us in the right direction, and that the secret to doing that is understanding how the two are linked together.


I’ve argued before that, when it comes to goals around customer experience, organisations tend to be both too vague and too precise. Is that a contradiction? I don’t think so.

The first problem is that although the principle of setting SMART goals is a good idea, it can lead to woolly thinking. Imagine we want to set a goal for increasing the headline score in our customer survey (perhaps NPS or a Satisfaction Index). The general goal is clear: we want happier customers. How do we make it SMART? By setting a specific and realistic target to achieve by a certain point, something like “in three years our NPS will be 50.”

What’s wrong with that? Nothing at all, it’s a good start. The problem is that it’s also often where it ends, and there are two things missing. The first is that we actually haven’t defined that target very well. Do we mean that we’ll hit that NPS of 50 at a point in time, or that our three-month rolling average will be over 50, or that we’ll maintain a score over 50 for a year, or what?
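
A few hypothetical monthly figures show why the definition matters: exactly the same scores pass or fail an “NPS of 50” target depending on which reading you choose.

```python
# Hypothetical monthly NPS figures, to show why "our NPS will be 50"
# needs a precise definition before it can be a target.
monthly_nps = [44, 47, 52, 49, 55, 46]

def rolling_average(values, window=3):
    """Simple trailing moving average over `window` periods."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

hit_in_any_month = any(score >= 50 for score in monthly_nps)    # True: months 3 and 5
three_month = rolling_average(monthly_nps)                      # [47.7, 49.3, 52.0, 50.0]
rolling_over_50 = any(avg >= 50 for avg in three_month)         # True: twice
sustained_for_year = all(score >= 50 for score in monthly_nps)  # False

print(hit_in_any_month, rolling_over_50, sustained_for_year)
```

One set of numbers, three different verdicts, which is exactly the ambiguity a well-defined target needs to remove.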

More seriously, we haven’t addressed how we’re going to do it. A goal, however specific it may be, is not a plan. We need not just to set a goal, but to know precisely how we’re going to get from A to B.


A little while ago our TLF book club choice was Will it make the boat go faster?, in which GB rower Ben Hunt-Davis recounts his experience of preparing for the Sydney Olympics and draws out, with his co-author, lessons for business. One thing I really liked about the book was the idea of “layered goals”, and that’s exactly what I’m talking about when I say you need to break down how to get from where you are to where you want to get to.

In Ben’s case that meant starting with the “Crazy goal” to win the Olympics, then making that into a “Concrete goal” by specifying that winning the gold would require them to row the distance in under 5:18. Notice that what makes this concrete is that it is defined by actions, not outcomes (unlike the NPS example).

Rather than specifying an outcome you want to achieve in precise terms, you need to specify what you will do to achieve it in precise terms. For example, you might aim to hit a certain percentage of deliveries on time and error free.


The next step is to create “Control goals” that are inside your control. In the rowers’ case that might be lifting certain weights in the gym, but for you it could be something like gathering mobile phone numbers for a given proportion of customers so that you can text them information in the future.

Defining the right control goals means that you need to have a solid understanding of the system within which you’re working, and the network of causes and effects that exist. That’s why it’s so important that your research is able to identify and quantify causal links between internal actions and how customers feel and behave.

It’s dangerously easy to think that you’ve made a plan simply by listing bullet points of specific outcomes. The classic HBR article Strategic Stories explains why narratives are much more powerful than bullets, because they force you to address not just what you want to happen, but who’s going to do it and how.


Finally, the part that relates to short-term action, you need to focus on “Everyday goals” which are the methods or processes you use to achieve the Control layer. That’s what allows you to work towards something which is achievable, and measurable, but currently out of reach. You don’t lift your target weight by turning up on one day and trying harder, you do it by building a daily habit of progressive overload.

Too many organisations are like serial dieters, forever jumping on the latest fad rather than focusing on basic habits of eating and exercise. Remember: most diets fail, or rather, even when they work, the results don’t last. Crash diets are not a sustainable route to weight loss, because they don’t address the root causes, and they don’t help build new habits.

Seeing the links

When it comes to change, we should think about the cumulative benefit of many small actions rather than big silver bullet solutions. Whether it’s marginal gains, the compound effect, or 1% every day, this idea is profoundly important to achieving those crazy goals.

Without a plan, we won’t have a clear view of the way in which those small actions will eventually compound up to long-term outcomes. Lots of good ideas and intentions don’t add up to much unless they’re taken systematically.

What makes for a good strategic Customer Experience goal is not specific numbers, and not just a long-term goal defined by outcomes, but a clear plan of the actions we can take today that will take us a little bit closer to delivering the experiences that will lead to that outcome in the long run.

Long-term outcome, short-term action.

Excellence is possible – perfection is not

I’m a big fan of design thinking.

The big benefit of this approach to the design of products, services, or experiences is that it starts by trying to understand customers.

We cannot design a good product unless we know what it is going to be good for. The success of a product, the meaning it creates for customers, is revealed only when it’s in their hands.

The customer experience doesn’t happen in your store or on your website; it happens inside customers’ heads.

I recently came across an interview with Steve Dunnington of Moog Music, the designer of numerous legendary synthesizers, who reflects this idea perfectly.

He talks about the importance of seeing your product as something that is co-created with customers…

“Musical instruments are made as a collaboration between musicians and the instrument designers.”

…of understanding how your product is used, because making sure it is a pleasure to use is as important as its functional performance…

“Musical instruments should sound great and be inspiring to play. They should not impede the flow of music from the musician’s mind to the sound that is heard, once the instrument is reasonably understood.”

…which means that it is often the easily-forgotten details which make or break a product…

“Pay attention to the details as you are designing the instrument. They can be the difference between a positive and negative experience.”

…and, perhaps most crucial of all, he reminds us to focus on the things which really matter to customers, not getting so caught up in trying to be perfect that you forget to deliver really well against those priorities…

“Understand what is important and know the difference between perfection and excellence. Excellence is possible — perfection is not.”

I can’t think of four better pieces of advice to capture the difference between designing products, and designing the experiences that those products will create for customers.

Substitute in your own products for “musical instruments” and you have a brilliant synopsis of design thinking:

  • Work in collaboration with customers
  • Focus on experiences, not product features
  • Pay attention to details
  • Understand what is important

Do that consistently and you’ll create excellent experiences, even if they’re never perfect.

These quotes all come from the excellent book “Patch & Tweak with Moog”.


Words Go Viral

One of the things I discuss in my webinar on neurodesign for infographics is that we can’t evaluate whether a design is “good” or “effective” or “successful” until we define what exactly we mean by those things.

If we don’t know what we’re trying to achieve, we can’t know whether or not a design is delivering against the real brief, so we often end up chasing the wrong things.

When people disagree about whether a design works, it’s often because they’re using different (often unspecified) criteria, and those criteria are in conflict.

It’s been shown, for example, that the book covers which people like most are not the ones which drive the most sales. It’s pretty obvious which of those two things is more important to you if you’re a publisher.

The same is true when it comes to judging the effectiveness of survey communications. We need to be clear about what it is we’re trying to achieve with every piece of communication, before we can judge whether it was effective.

We often advise clients to find ways to tell their customer story in the most visual way possible. Good visuals engage people’s attention, and thanks to the picture superiority effect help people to notice and remember key information.

But, as Dave Trott points out in a blog about advertising, visuals have one key drawback:

“…visuals don’t go viral because people can’t repeat visuals like language.

Words are what gets passed on, so words are what goes viral.”

Dave Trott

This is why it’s so important that the key insight that you’re trying to communicate can be captured in a pithy, memorable, easily-repeated phrase.

If you want your insight, and the action it requires, to remain front of mind, people need to be able to talk about it. So make sure that your customer story is not just engaging and memorable, but easy to talk about as well.


Remember who the expert is

When European sailors discovered Easter Island, or Rapa Nui as it is known to its inhabitants, the first thing they noticed was the enormous moai statues.

Understandably, they were amazed that a small population of people with access to only simple tools had been able to carve and move these vast stone figures, weighing up to 82 tonnes.

They asked the islanders how the statues had been put in place.

“They walked.”

Since this was obviously nonsense, Europeans over the years developed theories to explain the apparently impossible. Rollers seemed the most likely explanation, and this tied in nicely with the evidence that Rapa Nui had once been covered in palm trees.

An idea soon emerged of a people who, prioritising their statues above all else, cut down all the trees on the island for rollers to move their statues. Ultimately this led to the collapse of civilisation on the island, as there was simply not enough food to maintain the population.

This was the received wisdom for over a hundred years, and it forms a neat parable of the risks of not looking after our environment, but it’s probably not what happened on Rapa Nui.

As Paul Cooper explains in an episode of the superb Fall of Civilizations podcast, the inhabitants of Easter Island were wiped out by contact with Europeans. New diseases and slave-raiding were the main drivers of depopulation on an island that seems to have been remarkably peaceful and well organised for hundreds of years, until the Europeans arrived.

What has all this got to do with research?

I’ve written a few things recently about the importance of interpretation in qualitative research. That’s certainly true, but there is a catch. It’s all too easy to hear what we expect to hear. The listener’s assumptions can make them deaf to what the speaker is actually saying, particularly when there’s a perceived imbalance of power (like the assumption that the inhabitants of Easter Island were “primitive”).

If you want to understand customers, start by assuming that what they say is true. Forget your preconceptions, and treat your customer as an expert—after all, who knows more about how they feel, and why, than they do? Start by believing what they tell you, and if it seems strange do the work to figure out why there’s a perception gap.

Oh, and if you’re wondering how the moai moved into position? They walked.


Language is more than words

I’m fascinated by language.

It’s one of the relatively small number of things which sets human beings apart from any other animal, and it’s the foundation on which pretty much all our knowledge, cooperation, and even civilisations are built.

It’s also the stock-in-trade of a researcher—language remains the main tool we have to understand other people and how they see the world.

But there’s a danger that when we think about “language” we reduce it to simply the words we use. We send and understand meaning far more richly than that, and in many ways our ability to communicate goes beyond what we’re consciously aware of.

There’s a fascinating article in New Scientist about the way in which we use filler words such as “um”, “uh”, and “huh?”. As the article says:

“Far from being an inarticulate waste of breath, filler words like um, uh, mmm and huh are essential for efficient communication, sending important signals about the words we are about to say so that two speakers can better understand each other.”

The research shows that filler words are not mistakes, and they’re not empty or interchangeable, but form a kind of metalanguage. “Um”, for example, signals a longer pause than “uh”.

These words help the listener to understand what to expect, and they also prepare us to be ready for a change, or something unexpected, and therefore help us to notice and remember significant things.

That’s why it worries me so much when qualitative research is reduced to customer quotes. Shorn of body language, tone, context, and the metalanguage details we may process without conscious awareness, it hardly seems right to call these shallow collections of text “verbatims”.

For qualitative researchers, our interpretation of what customers mean is at the heart of the work we do. It’s by taking advantage of our ability to interpret these clues correctly that real customer insight is possible, not by piling up quotes.


Where are you dropping the baton?

Do you remember the men’s 4x100m relay at the Athens Olympics? Great Britain were hopeful of a medal, but the USA were strong favourites with a team that included the individual 100m and 200m Gold medallists, as well as the 100m Gold medallist from the previous games.

As Steve Cram said in the commentary,

“…if they don’t drop the baton they should win it, easily.”

Well, they didn’t quite drop the baton, but they certainly muffed one of the handovers pretty badly. The GB team, by contrast, had three almost perfect handovers, and just held on to win a thrilling finish.

And that’s why I want to talk about process maps.

What’s the link? I must admit that process maps can play a useful role in organisations, but in my view they often cause as many problems as they solve. To quote the process consultant Ian James:

“The irony is that the biggest opportunities for process improvement never show up on a process map.”

I couldn’t agree more, and in particular what they are prone to do is to focus people on performing their own job well, rather than on delivering the best possible overall performance.

“I’ve done my bit”

You could look at a sprint relay race as four individual races glued end to end. If each of your runners is faster than the people they’re up against, then the team is bound to win, right? All they each need to do is concentrate on winning their leg.

As the USA team found out, that doesn’t always work.

If everyone concentrates only on “doing their job”, as defined by their little section of the process map, that tends to mean that they get very good at on-paper efficiency and meeting all their SLAs. Is that a bad thing? It can be if their efficiency comes at the expense of the customer experience.

The customer view

One of the many reasons that customer journey mapping is such a powerful technique is that it forces you to get away from an internal view of the experience. That has many benefits, but when it comes to improving processes the most immediate gain is that it forces us to see where the process stalls or derails completely from a customer’s perspective.

Very often these pain points turn out to coincide with handovers, whether implicit or explicit. Team A has finished their job, Team B hasn’t started theirs yet, and meanwhile the customer, who has no idea that Team A and B even exist, is left in limbo.

Handovers as failure points

Why is it that handovers are such a common source of failure? One reason is that it’s a point at which key contextual information can be difficult to transfer. It’s interesting, for instance, how heavily used the “Notes” field of many CRM systems is, but that’s hardly a robust process.

Another is that the team making the handover may have a poor or incomplete understanding of what their colleagues need to know. In one service blueprint workshop I facilitated for a client it emerged that Team A was spending hours reformatting data to transfer to Team B, who spent hours reformatting it into the format they needed (quite similar to the original). The looks on their faces as the scale of wasted effort dawned on them were quite memorable.

Then there are the times when customers are simply forgotten about. Team A has done its job, but somehow the baton is dropped completely and Team B never knows that it has a job to start. This can happen with shared mailboxes, when emails mysteriously fail to arrive (email really isn’t robust enough to do the job we expect it to do), or when someone is unexpectedly off sick.

Diagnosing problems

These are just a few of the more common problems, so how do we set about rooting them out of our processes? The starting point, as we’ve seen, is to use customer journey mapping to understand where things are going wrong from a customer point of view.

Then we need to join that view with an internal understanding of what actually happens. The service blueprint, as I’ve discussed before, is my favourite tool for this.

But the magic isn’t in the tool. If you simply document what is supposed to happen, then a service blueprint presents the same danger as a process map—the handoffs will be hidden.

That’s why it’s so important for everyone to be involved at the same time. It’s by talking the journey through, starting from the customer’s point of view, that you reveal these tricky handovers.

So the lesson is: never run your service design workshops one department at a time; always try to get everyone in the room together. That gives you the best chance of smoothing out the handovers, creating the best possible customer journey.

A smooth handover may be more important than running the fastest possible leg.


You keep using that word, I do not think it means what you think it means

Words can be tricky.

In particular, there are some words that we use and understand pretty well in everyday life, but which have a specialist meaning we need to understand in certain contexts. Unfortunately, those contexts are not always clear.

“Empathy” is a good example.

Before we go on, I’d like you to stop for a moment and try to come up with the best definition you can. What does empathy mean? What, if anything, distinguishes it from sympathy, or emotional contagion, or imagining ourselves in someone else’s shoes?

How did you do? If you struggled, I don’t blame you. Search through the academic literature and you’ll find plenty of disagreements about what empathy is.

Despite that, empathy is such an important concept in research, customer experience, and service design that I think it’s valuable to pin it down. Empathy features in each of those three disciplines, but I think we mean subtly different things by it, and by defining our terms more clearly we can better understand why it’s important, and how to go about using our understanding of customers to make the world a better place.

Perceiving? Feeling? What is empathy?

In a recent article[1], the authors argue from a phenomenological[2] perspective that empathy should refer to “basic empathy”, which they describe as our ability to understand what’s going on in the minds of others based on our perceptions. For example, if you look sad, then I understand that you feel sad. Unusually, they argue that it is not necessarily the case that I feel sad, nor that I put myself in your shoes and imagine what it’s like.

“Empathy is not about me having the same mental state, feeling, sensation, or emotional response as another, but rather about me being experientially acquainted with an experience that is not my own.”

Throop & Zahavi

This is a slightly unusual definition of empathy, as we’ll see in a moment, but what’s really useful about it is the emphasis they place on being aware of someone else’s inner life.

That’s why qualitative research can be so powerful—it uses our ability to empathise to broaden our understanding of the world:

“The meaning the world has for the other affects the meaning it has for me. In general, my own perspective on the world will consequently be enriched through my empathetic understanding of the other.”

Throop & Zahavi

The authors argue that perceiving someone else’s feelings is different from sharing them, and give the example of cruelty. Torturing someone implies that you understand their feelings (otherwise what’s the point?), but don’t share them (or you wouldn’t do it). Built on top of that basic empathy are other types of mental processes, such as imagining what it’s like to be someone else, trying to understand why people feel the way they do, and so on.

Others disagree. In a comment[3] on the article, Kevin Groark argues for a form of “psychoanalytic empathy”, which he distinguishes from the spontaneous perception of another’s feelings as:

“…a conscious process of often-plodding epistemological work directed toward cultivation of a cognitively accurate and emotionally attuned ‘other understanding’ that can serve as a bridge for interpersonal communication and recognition.”


Once you get past the tricky academic language, this seems closer to what it feels like I’m doing when I use research to build an understanding of customers. It may be grounded in perception, but it requires effort to empathise with people; otherwise we tend to lapse back into assuming that they think and feel as we would in the same circumstances. One of the fatal traps of customer experience is to treat others the way you’d like to be treated.

Can we resolve this argument? Perhaps not, but we can throw another definition into the mix. Neuroscientists, too, have done their best to define empathy in a way that clarifies what makes it different from related concepts. An influential article[4] outlines four defining features:

  1. We feel an emotion
  2. That emotion mirrors someone else’s emotion
  3. Our emotion comes from seeing or imagining the other person’s emotion
  4. We know that the other person’s emotion is the source of our emotion

This presents a totally different, in fact contradictory, picture of empathy from the phenomenological account, but one that is helpfully clear about what does and doesn’t count as empathy.

It starts with the assumption that empathy requires us to feel. Understanding emotion without feeling it, as psychopaths do, is what the neuroscientists label mentalising rather than empathy. It even seems that these are two distinct circuits in the brain[5].

What distinguishes empathy from sympathy is that our feelings mirror the other person’s (i.e. if you’re sad and I feel pity rather than sadness, then that’s sympathy, not empathy).

Thirdly, we can empathise with others either by responding to them in the moment, or by imagining their feelings (for instance, if you read a letter from a friend describing something sad). This directly contradicts the phenomenological account, which emphasises perceiving embodied emotion.

Finally, what distinguishes empathy from emotional contagion is that we are aware of the distinction between us and the person we are empathising with, whereas with emotional contagion we may not be aware of where our emotion has come from.

Where have we got to?

Put all of that together (and there is a lot more if you go digging), and we can see that “empathy” is used to mean a number of different, albeit similar, things. Not only that, there are a load of closely-related concepts which all the academics agree are definitely not empathy, but which the average person on the street probably would call empathy.

What does it all mean for our work in research, service design, and customer experience?


First of all, there’s no doubting the importance of empathy in doing research with customers. Qualitative research, with its emphasis on person-to-person interaction, both requires and enables empathy.

You cannot understand customer experience unless you understand emotion, and emotion happens in the moment.

“Interviewers have to be present in the moment the user is having these strong feelings to uncover them.”

Holtzblatt & Beyer

At the root of this is the phenomenological version of empathy: a basic ability to perceive the emotions of others, on which we layer more complicated aspects of social reasoning that help us understand how our customers see the world, what their needs are, and why they’re feeling what they’re feeling.


What about in customer service? How should you react to an angry customer? You’ll see advice to “empathise” with their situation, so that you understand their anger, but remain calm while trying to resolve the issue. In other words, you’re not sharing their emotion, which, according to the neuroscientists’ definition, means that you are not using empathy but rather some other form of social cognition (probably a combination of sympathy and mentalising). Ironically, when we forget this, what often happens is that we react by sharing their emotion and getting angry.

In person-to-person interaction with customers, what we really need to do is to be able to imagine the world from their perspective. Research can help with that, mainly by reminding us that customers do not necessarily see, feel, and act the way we would.


Designers (whether service or product designers) often talk about empathy as a key element of their work. They use tools such as empathy maps to help the organisation understand what shapes a customer’s feelings, and personas to think about how design concepts will work for different types of customers.

Empathy, in this context, doesn’t mean sharing a customer’s emotion in the moment. It means taking the understanding gained from in-the-moment sharing of emotion (through research) and using it.

Again, this is really a more complex form of social cognition built on top of a foundation of empathy. We use personas, or user stories, or the veil of ignorance, to remind ourselves of the needs of customers. It’s an act of imagination to put ourselves in their shoes and see the world from their perspective.

We can’t do that effectively unless we’ve experienced empathy, which is why everyone should spend time talking to customers, but there are a host of other mental processes at play.


Next time you hear someone casually talking about “empathy”, stop and think about what they mean. Which definition are they using? Are they really talking about empathy, or about something a bit different?

Unpacking the word helps us to see what exactly we’re talking about, and therefore understand how to get the most out of our work with customers.

Finally, I can’t resist including my favourite quote from the literature:

“…even if stones and rivers were sentient, we would not be able to empathize with them.”

Throop & Zahavi

Notes & References

  1. Throop, C.J. & Zahavi, D. (2019). Dark and bright empathy: phenomenological and anthropological reflections. Current Anthropology, 61, 283-303.
  2. No, I don’t really know what that means either
  3. Groark, K. (2020). Comment on C. Jason Throop and Dan Zahavi, “Dark and Bright Empathy: Phenomenological and Anthropological Reflections”. Current Anthropology, 61, 294-295.
  4. de Vignemont, F. & Singer, T. (2006). The empathic brain: how, when and why? Trends in Cognitive Sciences, 10, 435-441.
  5. Singer, T. & Leiberg, S. (2009). Sharing the emotions of others: the neural bases of empathy.
  6. Holtzblatt, K. & Beyer, H. (2017). Contextual Design.