Why virtual assistants fail and how to fix them

SOURCE: VENTUREBEAT.COM
MAR 27, 2022

Most of us have had frustrating experiences with a virtual assistant. This isn’t always the virtual assistant’s fault. After all, most of the time when we call in to a company, we’re calling to solve some sort of problem — an insurance claim we disagree with, a service outage — and we’re often frustrated to begin with. The virtual assistant doesn’t quite understand the problem, and the person we eventually speak to isn’t the right person to solve it, sending us through what people in the industry sometimes call the “spiral of misery.”

And yet, as the last two years have shown us, there will inevitably be times when human-staffed call centers come up short, whether businesses are transitioning tens of thousands of employees to work-from-home or responding to a crisis and providing public health information during a pandemic. In these situations, human assistance needs to be directed toward solving more critical and complex problems, not answering the same four questions over and over again.

In my years of experience building virtual assistants for large companies and governments, I’ve learned that one of the main reasons many virtual assistants fall short is that they were built either by developers who lacked business or user context, or by business professionals who lacked the coding skills or system insight to build a sophisticated experience.

To provide a helpful experience to users and customers, a conversational AI needs to reflect the entire breadth of the business’s knowledge, drawing on insights from customer service agents, marketers, salespeople, experience-design leaders, and more. If you are looking to virtual assistants to help you solve customer-care challenges, here are a few steps you can take to ensure you build a frictionless experience for your end users.

Get everyone in the room

The COVID-19 pandemic was a perfect storm for virtual assistants, with hundreds of thousands of people needing chat help. It was also a time when organizations had to build new services essentially overnight, as was the case with one hospital we worked with that needed a virtual assistant that could help triage COVID-19 care. Fueled by a powerful combination of need and desire, the organization brought all the important stakeholders together — doctors, attorneys, administrators, public health officials, developers — and worked through each element of the solution together. Because all the relevant parties had buy-in from the beginning, when an issue or point of confusion was raised, it got addressed. By incorporating the right perspectives, they were able to rapidly build a working solution.

Gather research data close to the source

A lot of virtual assistants also fail because the creators don’t do enough user research or take the time to gather the right data. Even if you have a clear idea of what the assistant is trying to accomplish — say, something as simple as showing a customer movie times — that doesn’t mean you have the data about how people ask for movie times. Some customer care areas, like healthcare or financial services, can also be particularly confusing. Users might not know which documents they need, or what the document they need is even called. Without reliable data about how these users ask questions and explain their problems, a virtual assistant is only going to frustrate them.

The best way to find this data is to get close to the source: inputs that a human typed into some sort of virtual assistant. Perhaps you have an old or defunct chatbot with user data you can mine, or you can create a simple one to gather the data needed to build a refined product. If you can’t do that, perhaps you have an on-site search engine, or call logs from your existing customer-care lines. Using expressions people have actually used helps you train a more responsive and context-aware natural language processing model. Humans talk differently to humans, chatbots, and search engines, so you’ll need to adapt your initial training data over time.
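As a minimal sketch of what this mining step can look like, the Python snippet below reads raw user inputs from a log export and surfaces the most common phrasings, which you could then review and label as training utterances. The file name and column name are assumptions, not a real schema; adapt them to whatever chat, search, or call-log exports you actually have.

```python
# A minimal sketch of mining existing logs for training utterances.
# The file name ("chat_logs.csv") and column name ("user_input") are
# hypothetical; substitute whatever your own log exports contain.
import csv
from collections import Counter

def load_utterances(path: str) -> list[str]:
    """Read raw user inputs from a log export, one per row."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row["user_input"].strip().lower() for row in csv.DictReader(f)]

def top_phrasings(utterances: list[str], n: int = 20) -> list[tuple[str, int]]:
    """Surface the most common real phrasings to seed intent training data."""
    return Counter(utterances).most_common(n)

if __name__ == "__main__":
    logs = load_utterances("chat_logs.csv")  # hypothetical export
    for phrase, count in top_phrasings(logs):
        print(f"{count:5d}  {phrase}")
```

Even a crude frequency count like this tends to reveal phrasings no one on the design team would have guessed, which is exactly the point of getting close to the source.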

Visualize your end user

Another reason it’s important to build lots of perspectives into your virtual assistant is that you need a clear picture of your end user. Most businesses design a virtual assistant with a focus on what they want to get out of it, rather than centering on the customer who is going to be using the product and the situation they may be in. A simple example is documentation. Most people don’t have their bills and log-in information in front of them. They probably don’t remember the exact day they paid their last bill or when the transaction in question took place. For a virtual assistant to walk that customer through a transaction, its designers need to have a clear picture of that customer’s situation, down to where and how they are calling or typing in, and the likely reasons why.

Automate your 80%

A good industry rule of thumb is the 80/20 rule: roughly 80% of customer requests relate to just 20% of the topics you need to cover. In other words, 80% of the requests your virtual assistant receives will likely relate to your top four or five most common question types. The remaining 20% of your chats, on the other hand, could be any one of a thousand different questions, far too many to ever code them all. Automating the most common customer interactions is the only way to free up call centers to handle the interactions that are too complicated to predict or code.
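To make the arithmetic concrete, here is a toy Python sketch that ranks request topics by volume and reports how few of them cover 80% of traffic. The topic names and counts are invented for illustration; in practice they would come from your own call or chat logs.

```python
# A toy illustration of the 80/20 rule with made-up request counts.
# In practice, these counts would come from your own logs.
topic_counts = {
    "billing question": 4200,
    "password reset": 3800,
    "service outage": 2900,
    "plan change": 2100,
    "update address": 800,
    "cancel account": 700,
    "loyalty points": 600,
    "store hours": 500,
    "accessibility help": 400,
    "privacy request": 200,
}

total = sum(topic_counts.values())
covered = 0
# Walk topics from most to least common, accumulating coverage.
for rank, (topic, count) in enumerate(
    sorted(topic_counts.items(), key=lambda kv: kv[1], reverse=True), start=1
):
    covered += count
    print(f"{rank}. {topic}: cumulative coverage {covered / total:.0%}")
    if covered / total >= 0.80:
        print(f"-> automating the top {rank} topics handles ~80% of requests")
        break
```

With these sample numbers, the top four topics alone cross the 80% line, which is why automating a handful of high-volume intents pays off so quickly.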

All text influences the system

As a technologist, I feel like it’s okay for me to say this: we tend to write awful dialogue. When technologists build decision trees, we think way too hard about details like classifiers and branching while neglecting the big-picture user experience. I’ve found that our dialogue is also, frankly, kind of rude or overly direct.

On the flip side, non-experts aren’t very good at training models. You might have one intent with far more training data than another, which can make your virtual assistant inaccurate. It’s critical to remember that every piece of text you add to a virtual assistant’s logic influences the system. Indiscriminately adding data can inadvertently teach your model things you didn’t mean to: garbage in, garbage out. Dialogue needs to be empathetic and human, but it also needs to be written with an eye toward a balanced system that addresses sampling bias. In other words, your model should be trained on representative data: if half of your users say ‘password’ and half say ‘PIN,’ your training data should be roughly 50% ‘PIN’ and 50% ‘password,’ not 100% of either.
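As a minimal sketch of checking for this kind of imbalance before you train, the snippet below counts examples per intent, flags intents with far more or far less data than average, and compares the frequency of two key terms. The intent names, example texts, and the 2x threshold are all hypothetical choices, not a prescribed standard.

```python
# A minimal sketch of auditing training data for imbalance.
# Intent names, example texts, and thresholds are hypothetical.
from collections import Counter

training_examples = [
    ("reset_credentials", "I forgot my password"),
    ("reset_credentials", "how do I change my PIN"),
    ("reset_credentials", "password reset please"),
    ("reset_credentials", "reset my password"),
    ("reset_credentials", "I need a new PIN"),
    ("billing", "why is my bill so high"),
]

# 1. Per-intent balance: an intent with far more data than another
#    can skew the classifier toward that intent.
per_intent = Counter(intent for intent, _ in training_examples)
avg = sum(per_intent.values()) / len(per_intent)
for intent, n in per_intent.items():
    if n > 2 * avg or n < avg / 2:
        print(f"warning: '{intent}' has {n} examples (average is {avg:.0f})")

# 2. Vocabulary balance: if half your users say 'password' and half
#    say 'PIN', the training data should reflect roughly that split.
words = Counter(
    word for _, text in training_examples for word in text.lower().split()
)
print("password:", words["password"], "| pin:", words["pin"])
```

Run on this sample, the audit flags the underfed ‘billing’ intent, which is precisely the kind of silent skew that indiscriminate data entry introduces.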

Looking ahead, there’s reason to believe we’ll engage a great deal more with virtual assistants, even when we’re not experiencing a crisis. Newer kinds of natural language processing can learn to interpret context, as opposed to relying on sprawling word-trees of queries that need to be rigorously maintained. We can now build virtual assistants that infer from your browsing behavior what you might be looking for, or what kind of problem you’re likely to be having. As this technology continues to improve, we can work together as an industry to create AI that helps consumers solve once-frustrating everyday problems.

Thanks to a stronger understanding of these best practices surrounding virtual assistants and conversational AI, virtual assistants have become much more powerful and intuitive in just the last few years. As the underlying AI that powers them continues to improve, we will see virtual assistants play a growing role in helping people manage their lives.

Andrew R. Freed is a Senior Technical Staff Member and Master Inventor at IBM. He is the author of Conversational AI, published by Manning Publications, and holds over 100 software patents.
