Defining and Measuring Service Quality and Customer Satisfaction

Every customer has an ideal expectation of the service they want to receive when they go to a restaurant or store. Service quality measures how well a service is delivered compared to customer expectations. Businesses that meet or exceed expectations are considered to have high service quality. Let’s say you go to a fast food restaurant for dinner, where you can reasonably expect to receive your food within five minutes of ordering. After you get your drink and find a table, your order is called, minutes earlier than you had expected! You would probably consider this to be high service quality.

Here are 9 practical techniques and metrics for measuring your service quality.

1. SERVQUAL

This is the most common method for measuring the subjective elements of service quality. Through a survey, you ask your customers to rate the delivered service compared to their expectations.

Its questions cover the 5 elements of service quality that SERVQUAL identifies, summarized by the acronym RATER:

  • Reliability – the ability to deliver the promised service in a consistent and accurate manner.
  • Assurance – the knowledge level and politeness of the employees, and to what extent they create trust and confidence.
  • Tangibles – the appearance of, e.g., the building, website, equipment, and employees.
  • Empathy – to what extent the employees care and give individual attention.
  • Responsiveness – how willing the employees are to offer a speedy service.
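
To make the scoring concrete, here is a minimal sketch of how SERVQUAL-style gap scores are commonly tallied: each question is answered twice, once for expectation and once for perception, and the gap (perception minus expectation) is averaged per RATER dimension. The 1–7 scale and the sample answers below are illustrative assumptions, not the official questionnaire.

```python
# Minimal sketch of SERVQUAL-style gap scoring (illustrative; not the official instrument).
# Each response pairs an expectation score with a perception score on an assumed 1-7 scale.
from collections import defaultdict

responses = [
    # (RATER dimension, expectation, perception) -- hypothetical survey answers
    ("reliability",    7, 5),
    ("assurance",      6, 6),
    ("tangibles",      5, 6),
    ("empathy",        7, 4),
    ("responsiveness", 6, 5),
    ("reliability",    6, 6),
]

gaps = defaultdict(list)
for dimension, expectation, perception in responses:
    # A negative gap means the delivered service fell short of expectations.
    gaps[dimension].append(perception - expectation)

for dimension, values in gaps.items():
    print(f"{dimension:15} average gap: {sum(values) / len(values):+.2f}")
```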

2. Mystery Shopping

This is a popular technique used for retail stores, hotels, and restaurants, but it works for any other service as well. It consists of hiring an ‘undercover customer’ to test your service quality – or putting on a fake moustache and going yourself, of course.

The undercover agent then assesses the service based on a number of criteria, for example those provided by SERVQUAL. This offers more insights than simply observing how your employees work. Which will probably be outstanding — as long as their boss is around.

3. Post Service Rating

This is the practice of asking customers to rate the service right after it’s been delivered.

With Userlike’s live chat, for example, you can set the chat window to change into a service rating view once it closes. The customers make their rating, perhaps share some explanatory feedback, and close the chat.

Something similar is done with ticket systems like Help Scout, where you can rate the service response from your email inbox.

It’s also done in phone support. The service rep asks whether you’re satisfied with her service delivery, or you’re asked to stay on the line to complete an automatic survey. The latter version is so annoying, though, that it kind of destroys the entire service experience.

Different scales can be used for the post service rating. Many make use of a numerical rating from 1 to 10. There’s possible ambiguity here, though, because cultures differ in how they rate their experiences.

People from individualistic cultures, for example, tend to choose the extreme sides of the scale much more often than those from collectivistic cultures. In line with stereotypes, Americans are more likely to rate a service as “amazing” or “terrible”, while the Japanese will hardly ever go beyond “fine” or “not so good”. This is important to be aware of when you have an international audience.

Simpler scales are more robust to cultural differences and more suited for capturing service quality. Customers don’t generally make a sophisticated estimation of service quality.

“Was it a 7 or an 8…? Well… I did get my answer quickly… On the other hand, the service agent did sound a bit hurried…” No. They think the service was “Fine”, “Great!”, or “Crap!”.

That’s why we at Userlike use a 5-star system in our live chat rating, why Help Scout uses 3 options (great – okay – not good), and why the US government uses 4 smileys (angry – disappointed – fine – great). Easy does it.

4. Follow-Up Survey

With this method you ask your customers to rate your service quality through an email survey – for example via Google Forms. It has a couple of advantages over the post-service rating.

For one, it gives your customer the time and space for more detailed responses. You can send a SERVQUAL type of survey, with multiple questions instead of one. That’d be terribly annoying in a post-service rating.

It also provides a more holistic overview of your service. Instead of a case-by-case assessment, the follow-up survey measures your customers’ overall opinion of your service.

It’s also a useful technique if you don’t have a post-service rating in place yet and want a quick overview of the state of your service quality.

But there are plenty of downsides as well. Such as the fact that the average inbox already looks more like a jungle than a French garden. Nobody’s waiting for more emails – especially those that demand your time.

With a follow-up survey, the service experience will also be less fresh. Your customers might have forgotten about it entirely, or they could confuse it with another experience.

And last but not least: to send an email survey, you first need to know your customers’ email addresses.

5. In-App Survey

With an in-app survey, the questions are asked while the visitor is on the website or in the app, instead of after the service or via email. It can be one simple question – e.g. ‘how would you rate our service’ – or it could be a couple of questions.

6. Customer Effort Score (CES)

This metric was proposed in an influential Harvard Business Review article. In it, the authors argue that while many companies aim to ‘delight’ the customer – to exceed service expectations – customers are more likely to punish companies for bad service than to reward them for good service.

While the costs of exceeding service expectations are high, they show that the payoffs are marginal. Instead of delighting our customers, so the authors argue, we should make it as easy as possible for them to have their problems solved. That’s what they found had the biggest positive impact on the customer experience, and what they propose measuring.

Don’t ask: “How satisfied are you with this service?” – the answer could be distorted by many factors, such as politeness. Instead ask: “How much effort did it take you to have your question answered?”

The lower the score, the better. CEB found that 96% of customers with high effort scores became less loyal in the future, compared to only 9% of those with low effort scores.
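
To give an idea of how such answers are typically aggregated, here is a small sketch assuming a 1–5 effort scale where 1 means “very little effort”; the scale, the threshold for “high effort”, and the answers are assumptions, as the exact scale varies between CES versions.

```python
# Small sketch of aggregating Customer Effort Score answers.
# Assumes a 1-5 scale where 1 = "very little effort" and 5 = "very high effort",
# so a lower average is better. The answers below are made up.
answers = [1, 2, 1, 4, 2, 1, 3, 2]

average_ces = sum(answers) / len(answers)
high_effort_share = sum(1 for a in answers if a >= 4) / len(answers)

print(f"Average effort score: {average_ces:.2f} (lower is better)")
print(f"Share of high-effort experiences: {high_effort_share:.0%}")
```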

7. Social Media Monitoring

This method has been gaining momentum with the rise of social media. For many people, social media serve as an outlet. A place where they can unleash their frustrations and be heard.

And because of that, they are the perfect place to hear the unfiltered opinions of your customers – if you have the right tools. Facebook and Twitter are obvious choices, but review platforms like TripAdvisor or Yelp can also be very relevant. Buffer suggests asking your social media followers for feedback on your service quality.

Two great tools to track who’s talking about you are Mention and Google Alerts.

8. Documentation Analysis

With this qualitative approach you read through your written service records or listen to your recorded ones. You’ll definitely want to go through the documentation of low-rated service deliveries, but it can also be interesting to read through the documentation of service agents who consistently rank high. What are they doing better than the rest?

The hurdle with the method isn’t in the analysis, but in the documentation. For live chat and email support it’s rather easy, but for phone support it requires an annoying voice at the start of the call: “This call could be recorded for quality measurement”.

9. Objective Service Metrics

These stats offer an objective, quantitative view of your service. On their own they aren’t enough to judge the quality of your service, but they play a crucial role in showing you the areas you should improve in.

  • Volume per channel. This tracks the number of inquiries per channel. When combined with other metrics, like those covering efficiency or customer satisfaction, it allows you to decide which channels to promote or cut down.
  • First response time. This metric tracks how quickly a customer receives a response to her inquiry. This doesn’t mean her issue is solved, but it’s the first sign of life – notifying her that she’s been heard.
  • Response time. This is the average time between responses. So let’s say your email ticket was resolved with 4 responses, with respective response times of 10, 20, 5, and 7 minutes. Your response time is 10.5 minutes (a small calculation sketch follows this list). Concerning reply times, most people reaching out via email expect a response within 24 hours; for social channels it’s 60 minutes. Phone and live chat require an immediate response, under 2 minutes.
  • First contact resolution ratio. Divide the number of issues that are resolved through a single response by the number that required more responses. Forrester research showed that first contact resolution is an important customer satisfaction factor for 73% of customers.
  • Replies per ticket. This shows how many replies your service team needs on average to close a ticket. It’s a measure of efficiency and customer effort.
  • Backlog Inflow/Outflow. This is the number of cases submitted compared to the number of cases closed. A growing backlog indicates that you’ll have to expand your service team.
  • Customer Success Ratio. A good service doesn’t mean your customers always find what they want. But keeping track of the number who found what they were looking for versus those who didn’t can show whether your customers have the right ideas about your offerings.
  • ‘Handovers’ per issue. This tracks how many different service reps are involved per issue. Customers hate handovers, especially in phone support where they have to repeat the issue with every new rep; HBR identified it as one of the four most common service complaints.
  • Things Gone Wrong. The number of complaints/failures per customer inquiry. It helps you identify products, departments, or service agents that need some ‘fixing’.
  • Instant Service / Queueing Ratio. Nobody likes to wait. Instant service is the best service. This metric keeps track of the ratio of customers that were served instantly versus those that had to wait. The higher the ratio, the better your service.
  • Average Queueing Waiting Time. The average time that queued customers have to wait to be served.
  • Queueing Hang-ups. How many customers quit the queueing process. Each one counts as a lost service opportunity.
  • Problem Resolution Time. The average time before an issue is resolved.
  • Minutes Spent Per Call. This can give you insight into who your most efficient operators are.
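
As a rough illustration, here is how a few of these metrics could be computed from raw ticket data. The ticket structure and the figures are made-up assumptions, not any particular tool’s export format; the first contact resolution ratio follows the definition given above.

```python
# Rough sketch of computing a few objective service metrics from ticket data.
# The ticket fields and figures below are assumptions for illustration only.
tickets = [
    # response_times: minutes between each customer message and the agent's reply
    {"id": 1, "response_times": [10, 20, 5, 7], "resolved_on_first_reply": False},
    {"id": 2, "response_times": [3],            "resolved_on_first_reply": True},
    {"id": 3, "response_times": [45, 12],       "resolved_on_first_reply": False},
]

first_response_time = sum(t["response_times"][0] for t in tickets) / len(tickets)
all_times = [rt for t in tickets for rt in t["response_times"]]
average_response_time = sum(all_times) / len(all_times)
replies_per_ticket = len(all_times) / len(tickets)

resolved_first = sum(1 for t in tickets if t["resolved_on_first_reply"])
needed_more = len(tickets) - resolved_first
first_contact_resolution = resolved_first / needed_more  # per the definition above

print(f"First response time:      {first_response_time:.1f} min")
print(f"Average response time:    {average_response_time:.1f} min")
print(f"Replies per ticket:       {replies_per_ticket:.1f}")
print(f"First contact resolution: {first_contact_resolution:.2f}")
```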

Some of these measures are also financial metrics, such as the minutes spent per call and number of handovers. You can use them to calculate your service costs per service contact. Winning the award for the world’s best service won’t get you anywhere if the costs eat up your profits.
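
A back-of-the-envelope version of that calculation might look as follows. The hourly cost and metric values are assumptions, and it simplifies by assuming every rep involved in a handover spends roughly the same amount of time on the contact.

```python
# Very rough sketch of estimating service cost per contact.
# All figures below are assumptions for illustration only.
agent_cost_per_hour = 30.0   # fully loaded hourly cost of a service rep (assumed)
minutes_per_call = 8.5       # from the "Minutes Spent Per Call" metric (assumed value)
reps_per_issue = 1.4         # 1 + average handovers per issue (assumed value)

# Assumes each rep involved spends roughly the same time on the contact.
cost_per_contact = (agent_cost_per_hour / 60) * minutes_per_call * reps_per_issue
print(f"Estimated cost per service contact: {cost_per_contact:.2f}")
```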

Some service tools keep track of these sorts of metrics automatically, like Talkdesk for phone and Userlike for live chat support. If you make use of communication tools that aren’t dedicated to service, tracking them will be a bit more work.

One word of caution for all of the above-mentioned methods and metrics: beware of averages; they will deceive you. If your dentist delivers a great service 90% of the time, but has a habit of binge drinking and pulling out the wrong teeth the rest of the time, you won’t stick around long.

A more realistic picture emerges if you keep track of the outliers and the standard deviation as well. Measure your service, aim for a high average, and improve by diminishing the outliers.
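
As a toy illustration (with made-up ratings), two agents can have nearly the same average while delivering very different experiences; the standard deviation and the count of outright failures tell them apart.

```python
# Sketch of why averages alone can deceive: similar means, very different service.
from statistics import mean, stdev

ratings_a = [4, 4, 5, 4, 4, 5, 4, 4, 5, 4]       # consistently good (hypothetical 1-5 ratings)
ratings_b = [5, 5, 5, 5, 1, 5, 5, 5, 1, 5]       # mostly great, with occasional disasters

for name, ratings in [("Agent A", ratings_a), ("Agent B", ratings_b)]:
    failures = [r for r in ratings if r <= 2]    # treat 1s and 2s as service failures
    print(f"{name}: mean {mean(ratings):.2f}, std dev {stdev(ratings):.2f}, "
          f"failures {len(failures)}/{len(ratings)}")
```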
