How The Chatbot Industry Uses The Wrong Metrics To Mislead Decision Makers And Inflate Effectiveness


This article is part of Simplr’s CX Data Science Lab, where we leverage analytics and data science from within the Simplr platform to uncover best practices and trends in customer service interactions.

Chatbots are everywhere, but what’s their true impact on businesses?

In a recent Simplr study, we found that while chatbot usage has doubled since 2020, consumer willingness to use chatbots has remained low. Clearly, the chatbot industry has missed the mark somewhere along the way. Yes, chatbot adoption is high, but where’s the consumer enthusiasm for (and trust in) chatbots? 

This is why it’s so important to use meaningful data to measure the actual impact of chatbots. Over the past year of Simplr’s rigorous AI chatbot development, we’ve seen firsthand how chatbot vendors overpromise results based on hollow vanity metrics.

In this article, I’ll talk about the two biggest “red flag” metrics used in the chatbot industry and reveal what you should actually be measuring instead.

Red Flag Metrics: Containment Rate and Deflection Rate

Containment rate is the percentage of users who interact with a chatbot and leave without ever speaking to a live human agent. Similarly, deflection rate is the percentage of customer support inquiries that are handled by automation that would otherwise be serviced by agents. In short, it’s the number of tickets your internal team doesn’t have to deal with thanks to automation.  
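The two definitions above can be expressed as simple ratios. Here’s a minimal sketch in Python, using a hypothetical session log schema (the `escalated_to_agent` field and the function names are illustrative assumptions, not any vendor’s actual API):

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One chatbot session (hypothetical schema for illustration)."""
    escalated_to_agent: bool  # True if the user was handed off to a live agent

def containment_rate(sessions):
    """Share of chatbot sessions that ended without a live-agent handoff."""
    if not sessions:
        return 0.0
    contained = sum(1 for s in sessions if not s.escalated_to_agent)
    return contained / len(sessions)

def deflection_rate(bot_handled, total_inquiries):
    """Share of all support inquiries handled by automation instead of agents."""
    return bot_handled / total_inquiries if total_inquiries else 0.0

# Example: 8 of 10 sessions never reach an agent -> 80% containment
sessions = [Session(escalated_to_agent=(i < 2)) for i in range(10)]
print(containment_rate(sessions))   # 0.8
print(deflection_rate(40, 100))     # 0.4
```

Note that neither ratio distinguishes *why* a session ended without an agent, which is exactly the problem discussed below.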

Many chatbot vendors tout high containment and deflection rates as markers of success – proof, in their view, that the bot is “good enough” to keep customers within the channel.

However, in the absence of other critical metrics, containment and deflection rates can be downright deceptive. Based on Simplr data, up to 20% of “contained tickets” are instances where the customer dropped off because the chatbot was not helpful! The metric inflates bot performance and, worse, masks the extent of customer frustration on a site.

Here’s what else containment and deflection rates don’t tell us:

  • Did the user hop to a more expensive channel like voice to get to a resolution?
  • Did the user even get a resolution at all?
  • Was the user satisfied?  
  • Will the user come back to our business again?

Unfortunately, these questions remain unanswered when you only look at these vague metrics. 

Another thing to keep in mind: if a chatbot always recommends something to users, it becomes even harder to distinguish people who dropped off frustrated from people who left because they got their answer. High containment rates amplify this ambiguity. We believe that businesses risk neglecting customers and losing revenue by not measuring multiple aspects of the customer chatbot journey. 
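To make the inflation concrete, here is a sketch that splits “contained” sessions into resolved sessions and frustrated drop-offs. The numbers and the dict schema are illustrative only; the ~20% drop-off share mirrors the Simplr figure cited above:

```python
def containment_vs_resolution(sessions):
    """Compare naive containment with a resolution-adjusted view.

    Each session is a dict with 'escalated' and 'resolved' flags
    (a hypothetical schema for illustration).
    """
    contained = [s for s in sessions if not s['escalated']]
    resolved = [s for s in contained if s['resolved']]
    naive = len(contained) / len(sessions)      # what vendors report
    adjusted = len(resolved) / len(sessions)    # contained AND resolved
    return naive, adjusted

# 100 sessions: 20 escalate to an agent; of the 80 "contained" sessions,
# 16 (20% of them) are frustrated drop-offs that never got an answer.
sessions = ([{'escalated': True,  'resolved': False}] * 20
            + [{'escalated': False, 'resolved': True}]  * 64
            + [{'escalated': False, 'resolved': False}] * 16)
naive, adjusted = containment_vs_resolution(sessions)
print(naive, adjusted)  # 0.8 0.64
```

The headline number says 80% containment; counting only sessions where the user actually got an answer drops it to 64%.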

How to Measure the True Impact of Chatbots

At Simplr, we pride ourselves on using multiple metrics to create a realistic picture of how our automation performs. Instead of relying on overinflated values like containment and deflection rates, we look at the following three metrics together:

  1. Resolution rate. This is an incredibly important metric that answers a very simple question: did the user get what they came for? At the end of the day, this is what matters in CX (particularly for automation), and is a major driver of customer satisfaction.
  2. Customer Satisfaction Score (CSAT). CSAT is our metric of choice when it comes to customer sentiment. It lets us know how the customer is feeling after the interaction with the chatbot.
  3. Customer Effort Score (CES). CES provides insight into the “why” behind low customer satisfaction ratings, plus the barriers that need to be removed in order to drive loyalty and happiness. According to Gartner, customer effort is the strongest driver of consumer loyalty – or disloyalty. Their research found that reducing customer effort has a proven relationship to higher-level goals in an organization, such as maintaining loyalty and minimizing service costs.

At Simplr, we deem a ticket resolved if the user’s inquiry is answered. We also look at the feedback provided directly by the user through CSAT and CES. By answering the what, how, and why of the chatbot experience, we’re able to identify successes and opportunities for improvement. 
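The three metrics above are straightforward to compute side by side. The sketch below uses common industry conventions (CSAT as the share of 4–5 ratings on a 1–5 scale, CES as a mean on a 1–7 scale where lower means less effort); these conventions and the data shapes are assumptions for illustration, not Simplr’s exact internal formulas:

```python
def resolution_rate(tickets):
    """Share of tickets where the user's inquiry was answered."""
    return sum(t['resolved'] for t in tickets) / len(tickets)

def csat(scores, threshold=4):
    """Share of responses rated `threshold` or above on a 1-5 scale."""
    return sum(s >= threshold for s in scores) / len(scores)

def ces(scores):
    """Mean effort score on a 1-7 scale (lower = less effort)."""
    return sum(scores) / len(scores)

tickets = [{'resolved': True}] * 7 + [{'resolved': False}] * 3
print(resolution_rate(tickets))   # 0.7
print(csat([5, 4, 3, 5, 2]))      # 0.6
print(ces([2, 3, 1, 4]))          # 2.5
```

Read together, the three numbers answer the what (resolution), the how the customer feels (CSAT), and the why behind that feeling (CES).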

The Simplr Difference 

Unlike other chatbot providers, Simplr is able to route inquiries to a human as soon as it seems that quality might be sacrificed. The bot will learn from that human resolution to improve over time. Click here for more information about Simplr’s automation solution!