The Rights and Wrongs of Cheating Customers

“Whoa there, mule!” as Yosemite Sam used to say: am I suggesting that it can be ‘right’ to cheat customers?

Yes, I am (kind of).  Let me explain.

I recently had a complaint about a product.  A few years ago I purchased a heart rate monitor watch and, the other day, when I went to use it the strap simply disintegrated.

Now I know that the product gets used in a tough environment; there’s no easy way to say this other than things get pretty sweaty when you’re working out.  But I’d always rinsed the watch off and, anyway, coping with sweat is a prerequisite for something worn next to the skin for exercise.  Whilst we’re at it, the band that goes around my chest looks like new, so it’s not as if we’re beyond the capability of modern plastics.  If they can put a man on the moon… and all that.

I rang the manufacturer to explain my discontent.  A new strap was going to cost me £30 and, if the product wasn’t up to the job, I would sooner have put that money towards a different brand’s product that was.

At the start of the call a pre-recorded announcement informed me that, at the end of the call, I would be asked a few quick questions to ascertain my satisfaction with the experience I’d had.

As someone who has written a book about why such market research is largely futile (and can even be damaging) I wasn’t planning on answering any questions.  However, after my conversation with the customer service representative, during which she trotted out the company line about warranty periods and proof-of-purchase requirements, and I expressed my views about realistic product lifespans, I was ready to vent my spleen in the post-call questionnaire.

But that survey never materialised.  The phone call was terminated by the customer service agent as I dutifully stayed on the line, ready to low-ball the ratings and justify my indignation.  So, I was cheated out of my opportunity to give them my (vitriolic) feedback.

Here’s the thing… I don’t blame the customer service agent for dumping my call.  I can understand absolutely why she would do it.  In fact, I think she was right to do it.

Subsequent investigation revealed the questions I would have been asked:

  • Net Promoter Score (NPS) – how likely would I be to recommend the company to a friend? (See the sketch after this list for how this is tallied.)
  • Why did I give that score?
  • Was my request solved on the first contact?
  • How much effort did I have to “put forth”? (They actually used those words!)
  • Did I have any other feedback?
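
For readers unfamiliar with how NPS is tallied, here is a minimal sketch using the standard convention (ratings of 9–10 are promoters, 7–8 passives, 0–6 detractors); the sample responses are invented for illustration, but they show how an agent dropping unhappy callers before the survey inflates the score:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Standard convention: 9-10 = promoter, 7-8 = passive, 0-6 = detractor.
    NPS = % promoters - % detractors, giving a score from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Invented example: if agents drop the calls of unhappy customers,
# the detractors never reach the survey and the score inflates.
all_callers   = [10, 9, 3, 2, 8, 10, 1, 9]  # what callers actually felt
surveyed_only = [10, 9, 8, 10, 9]           # calls that reached the survey
print(net_promoter_score(all_callers))    # 12.5
print(net_promoter_score(surveyed_only))  # 80.0
```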

From a customer experience measurement perspective it’s my opinion that the company concerned made a number of mistakes:

  1. They used market research in an attempt to find out what customers think.  This is a waste of time and money and frustrating for customers.
  2. They tied feedback to individual customer service agents and used that feedback to evaluate the performance of customer service agents.
  3. They had a system that enabled customer service agents to end calls before they reached the survey.
  4. They didn’t empower customer service agents to make balanced judgments; instead they gave them rigid guidelines that left the agent powerless to resolve a problem and, what’s worse, caused them to default to ‘procedure-speak’ in a bid to get the customer to back down.

The person I spoke to was powerless to resolve my complaint since, technically, I was out of the warranty period.  (The fact that I was subsequently given a free replacement, after challenging the decision and the termination of the call with a supervisor while researching this article, shows that the company was perfectly capable of providing one.)

So, by doing what the company required of her, the agent had left me unhappy, and she knew it.  She correctly identified that I would misattribute my dissatisfaction to her rather than to the policies she was required to uphold.  Rather than risk a low NPS score, she stopped me from answering the questions.

As I said, I don’t blame the customer service agent at all.  The company, however, was deluding itself with this approach to gauging customer experience.  I strongly suspect that other customer service agents used the same tactic; as a consequence, the company’s NPS ratings would be consistently high and they could kid themselves that their service was great.

There are several reasons that market research doesn’t work when it comes to gauging the customer experience a company actually delivers:

  1. Our capacity for misattribution (whereby one contextual element is inappropriately connected to the focus of our attention) means that you can’t believe the comments customers give are actually driven by the part of the customer experience the agent can control, rather than by the policies they are required to enforce, or by the time it takes to get through to them because of the phone system or staffing levels (which individual agents don’t control).
  2. Operational procedures adapt to deliver the ‘good’ outcomes defined by what the survey measures (my experience was an extreme example of this).  Because it’s what you measure, it becomes important and people get better at delivering it: that isn’t the same thing as delivering a good and improving customer service experience!
  3. There needs to be a correlation between the measures and something real (i.e. behavioural): unless you can correlate the two, it is a leap of faith to believe the measurement has any meaning (a simple check is sketched below).  If you can measure a behavioural outcome, measure that instead!
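
To make point 3 concrete, one way to sanity-check a survey metric is to correlate it against an observed behaviour, such as whether the customer actually bought again.  The per-customer scores and the 12-month repurchase flags below are invented for illustration:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented data: each customer's survey score vs. whether they actually
# purchased from the company again within 12 months (1 = yes, 0 = no).
survey_scores = [9, 10, 8, 9, 10, 7, 9, 8]
repurchased   = [1, 0, 1, 0, 1, 1, 0, 0]

print(f"r = {pearson(survey_scores, repurchased):.2f}")  # -0.26 with this toy data
```

A weak (or negative) correlation like this one would mean the glowing survey scores tell you nothing about what customers subsequently do, which is exactly the leap of faith described above.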

Companies who really care about delivering a good customer experience need to think differently:

  • Customer experience is an evolving and creative field, not something that lends itself to processes and procedures: it’s about a culture not a handbook.
  • Good customer experiences require knowledgeable and empowered customer service agents.
  • Someone responsible for creating a good service culture should also see the value in covertly monitoring the customer experience being delivered by customer service agents: you can tell whether a person on a call is happy and/or satisfied from the quality of the exchange and the way they react.  Monitoring a random sample of calls will reveal the overall experience being delivered and will simultaneously show the custodian of the service culture what he or she could or should change.
  • Live tests are the best way to make changes: don’t ask people what they want; create an experience and see how they react, at the time and subsequently in their purchase behaviour (a toy version is sketched below).
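
Here is a minimal sketch of that kind of live test, assuming you can randomly assign callers to one of two service policies and later observe whether each customer purchased again.  The policy names and the repurchase probabilities are hypothetical, stand-ins for, say, a rigid warranty script versus agents empowered to replace an out-of-warranty strap:

```python
import random
from statistics import mean

random.seed(42)  # reproducible toy run

def live_test(n_customers, policies):
    """Randomly assign customers to a service policy, then compare the
    behavioural outcome (repurchase), not what a survey says."""
    outcomes = {name: [] for name in policies}
    for _ in range(n_customers):
        name = random.choice(list(policies))
        outcomes[name].append(policies[name]())  # 1 = repurchased, 0 = didn't
    return {name: mean(vals) for name, vals in outcomes.items()}

# Hypothetical policies, modelled here as repurchase probabilities.
policies = {
    "rigid_script":    lambda: 1 if random.random() < 0.30 else 0,
    "empowered_agent": lambda: 1 if random.random() < 0.55 else 0,
}

print(live_test(1000, policies))  # repurchase rate per policy, e.g. ~0.30 vs ~0.55
```

The point of the design is that the verdict comes from what customers did, not from what they said when asked.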

Ultimately, customer experience is both an art and a science; but when the pseudoscience of market research starts to inform the delivery of customer service, companies are almost certainly on a slippery slope of decline (whatever their research data tells them).
