Measuring quality and using what you learn to better meet customer expectations is what will propel your efforts to truly serve your customers.
As customer service professionals, we’re in the business of making sure our customers get the highest quality support. We strive to help them succeed with the highest caliber guidance we can provide and solve their problems with excellent solutions and service. When we do that, it feels good.
But it’s not just warm and fuzzies. The fact of the matter is that it’s good for business too. Consider this: 3 out of 5 Americans would try a new brand or company for a better service experience. 7 out of 10 said they were willing to spend more with companies they believe provide excellent service.
So what does it really mean to give high quality support? It varies for different industries and audiences, and we all have our own unique definition. Yet there is one common thread: excellent support means exceeding expectations. But as it turns out, most companies aren’t doing that. 80% of companies believe they provide “superior” support, while only 8% of customers agree. That’s a pretty big gap. The good news is that it’s easy to close that gap.
All you need to do is listen and act by tracking quality metrics.
Define What Quality Means to Your Team
You’re probably already tracking a variety of metrics like response time and backlog. Quality is a little harder to measure. There’s no single quality metric that can rule them all, and each one has its limitations. The best approach is to track a suite of metrics to get the clearest picture of expectations, judge the real caliber of your support, and raise your quality to truly superior levels.
Most companies agree on certain dimensions of quality, often citing speed, professionalism, and empathy. Consider the following two responses to the same question.
Question: Is it possible to reorder my images on this app?
Response #1: There’s not currently a way to do that, but it’s a great idea. We track all feature requests from our customers, so I’ll pass this along to our product team. If we get enough requests for this particular feature, then there’s a chance we’ll build it. I’ll loop back and let you know if that happens.
Response #2: Nope… Sorry.
Most of us would agree that response #1 is a higher-quality answer. But other dimensions of quality vary from business to business. You’ll need to come up with your own unique definition to adhere to. Do this by gathering insights from two groups: your customers and your team. Surveying your customers gives you their external perspective on quality and helps you understand what they want. Involving your team means that you’re improving quality from the inside out, bringing in the perspectives of your employees and ensuring that all interactions, not just the ones with customers who volunteer feedback, are learning opportunities.
External Measures of Quality
Asking a customer how they feel about the quality of your support is a great place to start. Your customers know what their expectations are and how your interactions make them feel. There are three surveys that work really well for this: Customer Satisfaction, Net Promoter Score, and Customer Effort Score.
Customer Satisfaction (CSAT) most often refers to a customer’s feelings about a specific interaction with your support team. It’s commonly measured in a post-service survey sent after a ticket has been resolved, and it’s a great way to understand sentiment regarding that particular ticket experience.
Make sure to include a comment field so you can understand the reason behind the rating.
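To put a number on those post-service ratings, one common convention (an assumption here, not the only way to do it) is to report CSAT as the percentage of positive responses, counting 4s and 5s on a 5-point scale as positive:

```python
# Sketch: computing a CSAT score from post-service survey ratings.
# Assumes a 1-5 scale where 4 ("satisfied") and 5 ("very satisfied")
# count as positive responses -- a common convention, not the only one.

def csat_score(ratings):
    """Return CSAT as the percentage of positive (4-5) ratings."""
    if not ratings:
        raise ValueError("need at least one rating")
    positive = sum(1 for r in ratings if r >= 4)
    return 100 * positive / len(ratings)

ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]  # illustrative survey data
print(f"CSAT: {csat_score(ratings):.0f}%")  # 7 of 10 positive -> 70%
```

If your survey uses a different scale (thumbs up/down, 1–3 faces), only the threshold changes; the positive-over-total shape stays the same.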
Pro tip: Measure your CSAT in every interaction via email signature to skyrocket your response rate. This way you’ll get more feedback and in the long run, happier customers.
Armed with this data, you’ll have the power to make sure your customers are happy with the way you resolve their issues. When they rate their experiences highly, then you’ll know what to keep up. When they rate their experiences poorly, then you’ll know what to fix. Maybe they love your refund policy and really don’t love your response time. CSAT helps you find out what customers think about the quality of your support.
While CSAT is an invaluable tool for improving the quality of your support, it doesn’t tell the whole story. CSAT surveys reveal thoughts about a single interaction. By aggregating a lot of ratings, you can certainly improve your overall support experience. But they don’t tell you how someone feels across all their dealings with you. This could be why CSAT isn’t a very good indicator of loyalty. In fact, 60-80% of customers who churn said they were satisfied or very satisfied in their last CSAT survey.
Net Promoter Score
Unlike CSAT, the Net Promoter Score (NPS) was specifically developed to measure loyalty, judging by a customer’s likelihood to recommend a product to others. It refers to their overall experience instead of one moment in time.
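The standard NPS calculation buckets 0–10 “likelihood to recommend” responses into promoters (9–10), passives (7–8), and detractors (0–6), then subtracts the detractor percentage from the promoter percentage. A minimal sketch:

```python
# Sketch: computing NPS from 0-10 "likelihood to recommend" responses.
# Standard buckets: 9-10 promoters, 7-8 passives, 0-6 detractors.
# NPS = % promoters - % detractors, yielding a score from -100 to 100.

def nps(responses):
    if not responses:
        raise ValueError("need at least one response")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

responses = [10, 9, 8, 6, 9, 3, 10, 7, 9, 5]  # illustrative survey data
print(f"NPS: {nps(responses):.0f}")  # 5 promoters, 3 detractors -> 20
```

Note that passives drop out of the numerator but still count in the total, which is why moving a customer from a 7 to a 9 raises the score while moving them from a 7 to an 8 doesn’t.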
Since NPS measures a longer-term trend, you should send it at a regular interval. Limit the frequency at the individual level so the same people aren’t getting the survey too often. You’ll also want to make sure that anyone you survey has used your product long enough to form an opinion.
To dig into the customer support contribution to the NPS score, you can ask a follow up question such as: “How likely are you to recommend our company based on your interactions with our support team?” Include a comment field so you can uncover the precise cause for their score.
When you give customers a chance to share their feedback through NPS, then you’ll reveal rich insights about what they expect from their overall experience. You’ll learn what they value, and whether or not you’re delivering on those values. These values are critical to setting your definition of quality.
The NPS metric will help you understand perceptions over a longer period of time. However, since it’s asking people to recall their experiences sometimes well after they’ve occurred, it can be less specific and actionable.
Customer Effort Score
Customer effort score (CES), on the other hand, is a highly specific measure of how much work your customer felt they had to do to solve an issue. It’s a great way to zero in on this distinct element of your support, and as it turns out, it’s massively important when it comes to loyalty. It’s 1.8x better at predicting loyalty than customer satisfaction (CSAT).
CES is often sent as part of a post-service survey. You can decide how to phrase your question and how many response options to give. It’s best to make your question straightforward, like: “How easy was it for you to get your issue resolved today?”
CES will show you where people run into difficulty getting help, and can point to areas where you can make it easier for them. Perhaps there are bugs on your help site or a broken loop in your phone system. You might discover that with a CSAT survey. But, if your customer was generally satisfied with the final resolution, then they might not mention it.
The major benefit of CES is also the downside. Since it focuses purely on effort, it doesn’t address why someone ran into an issue in the first place, nor does it reveal other potential issues with your service.
Internal Quality Assurance
All the metrics discussed above are subject to survey bias. That is, you’ll get the most responses from customers on the extreme ends of the spectrum. Those who are very happy will rate you highly, and those who are extremely angry will rate you poorly. This leaves out many customers in the middle. But their experiences are important too, and looking into them can uncover valuable insights.
External measures like CSAT, NPS, and CES can give you a good starting point for what customers consider to be quality support. However, because of this bias, they might not reveal experiences that aren’t outright awful but are nevertheless of subpar quality. Capturing an internal quality metric can bring light to those experiences. And while CSAT, NPS, and CES inform your definition of quality through direct customer commentary, an internal quality metric allows your team to influence the definition too.
Have a clear manifesto
The first step in gathering an internal quality metric is to make sure you have a clear manifesto for how tickets should be answered. Base this manifesto on what your customers and your team believe characterizes quality support. For example, if you consider personalization a crucial dimension of quality support, then put it in the manifesto. And make sure it’s clear how a dimension like personalization can be exhibited, for instance, by using the customer’s name.
A good way to drill into ways to demonstrate somewhat nebulous dimensions like “personalization” is to use an exercise called 5 Hows. Just start with your quality dimension and keep asking how until you have behaviors that you can see and measure.
Set up a review plan
Next, set up a plan to review random samples of resolved tickets on a regular cadence, such as once a week. Depending on your team size and structure, you could do peer review, managerial review, or even dedicate an entire person or team to quality assurance. An added benefit of review is that your team will learn from each other as they see how others have handled cases.
As you review the tickets, score them against a rubric built from your manifesto. Then calculate your quality metric by taking an average of all scored tickets.
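The averaging step can be sketched as follows. The dimension names mirror the sample rubric below, and the 1–5 scale is an assumption; substitute your own dimensions and scale:

```python
# Sketch: turning reviewed-ticket rubric scores into one quality metric.
# Dimension names and the 1-5 scale are illustrative assumptions.

def ticket_score(rubric):
    """Average the dimension scores for one reviewed ticket."""
    return sum(rubric.values()) / len(rubric)

def quality_metric(tickets):
    """Average ticket scores across the whole review sample."""
    if not tickets:
        raise ValueError("need at least one reviewed ticket")
    return sum(ticket_score(t) for t in tickets) / len(tickets)

reviewed = [
    {"personality": 4, "next_issue_avoidance": 3, "grammar": 5},
    {"personality": 5, "next_issue_avoidance": 4, "grammar": 4},
]
print(f"Quality: {quality_metric(reviewed):.2f} / 5")
```

Tracking this number week over week, alongside per-dimension averages, shows you not just whether quality is improving but which dimension is dragging it down.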
Sample Internal QA Rubric
| Dimension | Score (out of 5) | Notes |
| --- | --- | --- |
| Personality: Did the agent create a connection with the customer by using their individuality and voice? | 4 | Nice response to the cat story! |
| Next issue avoidance: Did the agent provide additional information to prevent the customer from needing to follow up? | 3 | Agent should have included a link to a knowledge base article to provide additional resources. |
| Grammar, Spelling, Accuracy: Are there any mistakes in the language used? | 5 | |
Make It Count
The best way to raise your quality is to monitor your scores regularly and take action toward improvement. You can take all the measurements in the world, but if you aren’t doing anything to align your service to your customer expectations, then it doesn’t make any difference.
It’s important to respond to each piece of feedback so individual customers know you’re committed to getting better. In turn, they’ll keep sharing their thoughts in the future. It’s especially crucial to follow up with those who have had negative experiences: anyone who gives a low CSAT score, NPS detractors, and customers who report high effort on CES. This also gives you a chance to turn a low-quality experience around.
Equally necessary is analyzing all your data for trends. Read commentary and make the requisite changes. Compare your scores before and after your change efforts to judge the impact, and keep making adjustments as needed.
Measuring quality and using what you learn to better meet customer expectations is what will propel your efforts to truly serve your customers, drive your business forward, and be a part of the small class of companies that customers feel are genuinely providing superior service.
About the author:
Nykki Yeager is a customer-centric consultant who writes about customer support and success, management, and more.