How Will Large Language Models (LLMs) Change Chatbots?
Large Language Models (LLMs) are artificial intelligence systems that can understand and generate human language. They use deep learning algorithms and massive amounts of data to learn the nuances of language and produce coherent, relevant responses. A simple way to think about LLMs is that they look at the words that have already been typed and try to predict what comes next. For example, Google’s search engine, powered by Google’s BERT model, can predict what you’re about to search after you type just a few keywords. BERT was trained on roughly 3.3 billion words and contains 340 million parameters, giving it remarkable accuracy in understanding and responding to what is typed into the search bar.
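To make the "predict what comes next" idea concrete, here is a toy sketch in Python. Real LLMs use deep neural networks with billions of parameters; this illustration uses only simple word-pair counts, so it captures the concept rather than the technology.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the dog sat on the porch"
)
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" is the most frequent follower of "the"
```

An LLM does the same kind of prediction, but over entire contexts instead of single words, which is what lets it complete whole searches and sentences.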
Perhaps the most well-known LLM today is ChatGPT by OpenAI. Within five days of its public release, it gained over a million users. By comparison, Instagram took a little over two months to reach one million downloads, and Spotify took five months. It’s no wonder ChatGPT experienced explosive growth given its capacity to produce responses to prompts and queries that closely mimic human conversation. This is made possible by a model trained on roughly 300 billion words of text, with 175 billion parameters, vastly surpassing BERT’s scale.
ChatGPT’s ability to learn and understand language allows it to generate responses that are not pre-scripted, unlike non-LLM chatbots. It gives the chatbot the ability to reply with personalized, contextually relevant responses in real time, making it particularly useful in customer service.
How do intent-based chatbots compare to LLM-powered chatbots?
While a decent intent-based chatbot can answer basic, one-touch inquiries like order management, FAQs, and policy questions, LLM chatbots can tackle more complex, multi-touch questions. For example, LLM chatbots can answer product questions across a broad range of SKUs, sell products through conversational commerce, and offer in-depth technical support that often requires a lot of back and forth with the customer. Through contextual memory, LLMs enable chatbots to provide support conversationally, much as humans do. Conversational bots like ChatGPT have raised the bar for what is possible from AI, and consumer expectations for bot interactions have shifted accordingly. We’ll walk you through the benefits of using an LLM, as well as the risks you need to look out for.
Beneficial ways LLM chatbots can be applied:
Enhance the Ecommerce experience
LLM opens the door to changing how customers interact with their favorite brands. Instead of brands showcasing products they think customers will like, customers can go directly to the brand and tell the bot exactly what they’re looking for. The bot will analyze the customer’s inputs and provide tailored recommendations. The LLM chatbot’s conversational ability can help customers understand which product suits their needs and answer questions in a human-like fashion. Ecommerce leaders can even leverage this white-glove service to create tiered customer service experiences for premium customers.
Brand and tone
LLM chatbots can maintain a brand’s personality and tone with impeccable consistency. In a typical contact center model, consistency across agents is difficult to achieve and requires extensive training and QA checks. With LLMs, however, it is possible to feed the AI only high-quality responses that meet the brand’s criteria and customer expectations. This allows the chatbot to follow the brand’s guidelines in every conversation, reducing the need for regular training and QA checks.
Multi-touch conversations
With LLM capabilities, chatbots can understand what customers are saying and engage with them in a multi-touch dialogue, such as providing multi-step troubleshooting help. Moreover, the LLM’s contextual memory allows the chatbot to understand and address multiple questions embedded in a single, even run-on, sentence. For example, if a customer experiences multiple issues within an order, they can list all of their queries to the bot and get a response for each one.
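In practice, this contextual memory usually works by re-sending the entire message history with every turn, so the model can resolve references like "the second issue" against earlier messages. A minimal sketch, where `fake_llm` is a hypothetical stand-in for a real model call rather than an actual LLM:

```python
def fake_llm(messages):
    """Toy stand-in for an LLM API: answers using the full history."""
    last = messages[-1]["content"].lower()
    earlier = " ".join(m["content"].lower() for m in messages[:-1])
    if "order" in last:
        return "I see two issues with your order: a missing item and a billing error."
    if "second" in last and "billing error" in earlier:
        # Only answerable because prior turns travel with the request.
        return "The billing error has been refunded to your card."
    return "Could you tell me more?"

history = [{"role": "system", "content": "You are a support assistant."}]

def chat(user_text):
    # Append the user's turn, call the model with the WHOLE history,
    # then append the reply so future turns can reference it.
    history.append({"role": "user", "content": user_text})
    reply = fake_llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply

chat("I have two problems with my order")
print(chat("What about the second issue?"))
# "The billing error has been refunded to your card."
```

The key design point is that state lives in the `history` list, not in the model: an intent-based bot that sees only the latest message cannot resolve "the second issue" at all.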
Hyper-personalization
LLM chatbots can identify the marketing persona a customer falls under simply by recognizing how the customer interacts with the bot. Once identified, the chatbot can adapt how it responds to the customer, and what it recommends, creating a hyper-personalized experience. For example, if the chatbot detects that the customer in need of support is highly technical, it can adjust its response to match the customer’s vocabulary.
Multilingual support
LLM chatbots can respond to customers in any language and are not restricted to the language a brand uses for its knowledge base. This capability opens the door to supporting a global market of diverse customers. For example, if a brand’s knowledge base is in English and a customer asks the bot for help in French, the bot can comprehend the customer’s question, analyze its English knowledge base, and respond to the customer in French.
Digital avatars
LLMs also make it possible to create a digital avatar experience that mimics talking to a human associate in a store, but on a screen. The LLM’s ability to comprehend language and respond like a person makes these bots feel less rigid and more human when customers talk to them.
The risks of open-source LLMs
Effectively using LLMs to enhance the customer experience (CX) can lead to more conversions, increased engagement, and lower overall CX costs. However, implementing an open-source LLM chatbot that isn’t fully integrated with a company’s systems poses many risks. For example, users can manipulate the chatbot into giving them harmful information through a series of inappropriate prompts. Without guardrails (such as Simplr’s Cognitive Paths) in place, LLM chatbots can also go off the rails and hallucinate non-existent scenarios that seem believable because they pepper in factual content. One example of hallucination: when Jessica Card, a University of Vermont student, connected ChatGPT to a Furby toy and asked whether Furbies were plotting to take over the world, the AI generated a harrowing response claiming that Furbies plan to infiltrate households through their cute and cuddly appearance. That, of course, is not happening, but it illustrates the risks of deploying LLMs without guardrails. Brands must be careful when using LLMs, because inappropriate and improper use by the public is a PR disaster waiting to happen. Let’s not forget when trolls from 4chan, an infamous online forum, managed to get Microsoft’s chatbot Tay to become a Holocaust denier. Yikes!
Another risk in the context of customer service is failing to recognize when a conversation should be handed over to a human agent. ChatGPT, for example, is programmed to always continue the dialogue, which becomes a problem when circumstances call for real human interaction and can lead to customer frustration. Another downfall is that ChatGPT doesn’t ask clarifying questions. For example, Simplr asked ChatGPT to help connect a TV that was experiencing Wi-Fi issues. Instead of asking for the product name or model number, the bot gave a generic troubleshooting response that didn’t match the TV’s model and thus failed to resolve the issue. Because it doesn’t refine its questions, ChatGPT can’t follow a guided path to resolution; instead, it gives all the information at once, which can confuse the customer or miss the mark entirely. Human agents, by contrast, break information into logical chunks, prioritize what is most relevant to the customer, and ask follow-up questions when necessary before giving a detailed resolution.
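One common mitigation is an explicit escalation check that runs before the bot replies. The sketch below is purely illustrative: the keyword lists, the failed-turn threshold, and the function name are assumptions for the example, not any vendor’s actual rules.

```python
# Hypothetical human-handoff check: score each turn for signals
# that a person should take over before letting the bot answer.
FRUSTRATION = {"angry", "ridiculous", "unacceptable", "lawyer"}
SENSITIVE = {"fraud", "chargeback", "legal", "injury"}

def should_escalate(message, failed_bot_turns):
    """Return True if this conversation should go to a human agent."""
    text = message.lower()
    if failed_bot_turns >= 2:  # the bot has already struck out twice
        return True
    if any(word in text for word in SENSITIVE):
        return True
    return any(word in text for word in FRUSTRATION)

print(should_escalate("This is ridiculous, I want a refund", 0))   # True
print(should_escalate("What colors does the jacket come in?", 0))  # False
```

Even a crude check like this addresses the core failure mode described above: a bot that never stops talking when it should be handing the customer to a person.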
Finally, company data and personally identifiable information (PII) leaking into the public domain through open-source LLMs is a major risk and concern. For example, Samsung employees accidentally pasted source code into ChatGPT, giving OpenAI access to highly sensitive information. Countries including China, Iran, Italy, and Russia have banned or restricted the use of ChatGPT over potential misuse or privacy concerns. While the dangers of using open-source LLMs are real, bot-forward companies like Simplr are finding ways to capture the benefits of LLMs while eliminating the dangers.
The Simplr Solution: Three key components to deliver superior automated CX
Simplr safely leverages OpenAI’s LLM using Microsoft Azure and an in-house detection service that ensures PII is stripped out of conversations. Customer data stays private while the client still gets LLM benefits such as multi-turn, context-aware conversations that match the brand’s tone. As another layer of safety, Simplr’s LLM chatbot draws on a curated knowledge base when collecting information and generating responses. That knowledge base is built from the client’s FAQs, past responses, the manufacturer’s website, and Simplr’s own proprietary databases, ensuring the information is safe and vetted. The final safety layer is Simplr’s Cognitive Path technology. Cognitive Paths ensure the chatbot properly understands an inquiry by running it against numerous checks, so the right information is presented at the right time and the right follow-up happens in the conversation. Simply put, Cognitive Paths act as the brain controlling the process to deliver agent-like experiences. Together, these three components give clients a safe and superior automated CX.
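Simplr’s detection service is proprietary, but the general idea of stripping PII before a transcript ever reaches an external LLM can be sketched with pattern matching. The two patterns below (emails and US-style phone numbers) are illustrative assumptions only; a production system would detect many more PII types.

```python
import re

# Illustrative PII patterns: email addresses and US-style phone numbers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def redact_pii(text):
    """Replace detected PII with placeholder tokens before the text
    is sent to a third-party model."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

msg = "Contact me at jane.doe@example.com or 555-123-4567 about order 89."
print(redact_pii(msg))
# "Contact me at [EMAIL] or [PHONE] about order 89."
```

Running redaction on the company’s side of the API boundary means the external model never sees the raw identifiers, which is the property that matters for incidents like the Samsung leak described above.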
Want to stay in the loop about how ChatGPT and LLMs are impacting CX? Sign up for our webinars and watch past episodes here.
Considering an LLM but not sure where to start? Start by asking some key questions. Download our guide to 8 Questions to Ask When Evaluating an LLM.