“I’m sorry. I couldn’t understand that. Could you rephrase your question?”
Anyone who’s interacted with an “intelligent” chatbot has run into that seeming dead end at least once or twice. It’s completely to be expected – even the best-planned conversation corpus and supplemental question set will rarely anticipate EVERY relevant question a user could ask.
But for a conversation architect, those user dead ends can be a goldmine for continually improving conversations and greatly reducing the number of “Please ask your question another way” requests that end users encounter.
Manage Users’ Expectations #
You know there’s no way to anticipate everything, so it’s vital to manage your users’ expectations around that. When a user’s inquiry can’t be confidently matched to an intent in your conversation, your fallback response should ask them to try again with different phrasing – that takes care of the easy-to-catch errors. But in those cases when you simply didn’t anticipate the question, a bit of levity and personality can help manage your users’ expectations of what they can find through your conversational chatbot.
One of our favorite ways of handling this is a response that is apologetic and takes the opportunity to educate users a bit about conversational AI. “I’m sorry, I don’t know how to answer your question. But I’m getting smarter all of the time, so please check back with me in the future. Until then, [insert your preferred escalation message here].”
You let the customer know that you’re continuing to improve the service you provide and open the door just a crack to how this all works.
So how do you make sure your conversational promise of a smarter bot isn’t just an empty promise?
Your Interaction Logs #
Your interaction logs are the single most important piece of data you have for improving your conversations once your chatbots are launched. Why? You can see exactly what people asked, and just as importantly, HOW they asked those questions. With conversational AI, we enter training data for each of our intents, and each intent maps to a response.
Say, for example, you create an intent that is supposed to answer customer questions about upgrading your product from the basic version to the pro version. You know users might simply ask “How do I upgrade to the pro version?” You’ll enter that question, and maybe “How can I add features?” and “Can I bump to the next tier?” Your NLP engine will use machine learning to extrapolate other similar questions that will likely match that intent and deliver your response detailing how a user can bump to another tier.
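How you enter that training data depends on your platform. As a rough sketch using the Dialogflow ES Python client (google-cloud-dialogflow), creating the upgrade intent with those phrasings could look like the following; the display name and response text here are placeholders, not prescriptions:

```python
from google.cloud import dialogflow


def create_upgrade_intent(project_id: str):
    """Create an intent whose training phrases cover the upgrade question."""
    intents_client = dialogflow.IntentsClient()
    parent = dialogflow.AgentsClient.agent_path(project_id)

    # The phrasings you expect users to type; the NLP engine generalizes from these.
    training_phrases = []
    for text in [
        "How do I upgrade to the pro version?",
        "How can I add features?",
        "Can I bump to the next tier?",
    ]:
        part = dialogflow.Intent.TrainingPhrase.Part(text=text)
        training_phrases.append(dialogflow.Intent.TrainingPhrase(parts=[part]))

    # The response delivered whenever this intent is matched (placeholder copy).
    message = dialogflow.Intent.Message(
        text=dialogflow.Intent.Message.Text(
            text=["Here's how to move from Basic to Pro: ..."]
        )
    )

    intent = dialogflow.Intent(
        display_name="product.upgrade",  # placeholder name
        training_phrases=training_phrases,
        messages=[message],
    )
    return intents_client.create_intent(request={"parent": parent, "intent": intent})
```

The same mapping of phrasings to a response exists in every major NLP platform, even if the API looks different.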
But then you look in your interaction logs and notice that a user asked “Can your product do X?” That question either triggered a non-match response or delivered a response with a low confidence level that you know could be better. You now have the opportunity to make your chatbot more helpful.
If your “upgrade” response could adequately answer the user’s question as they asked it, you can simply add the user’s inquiry (and variations of it) to your training data and retrain to make sure any future similar inquiries deliver your response. Is your “upgrade” response not adequate to answer the user’s question? Use the information to create a new intent with a more specific response.
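If you manage your agent through the API, that first path (appending the missed inquiry and retraining) can be scripted. Here is a rough sketch, again assuming the Dialogflow ES Python client; the intent ID and phrasings are hypothetical:

```python
from google.cloud import dialogflow


def add_training_phrases(project_id: str, intent_id: str, new_phrases: list[str]):
    """Append missed user inquiries to an existing intent, then retrain the agent."""
    intents_client = dialogflow.IntentsClient()
    intent_name = intents_client.intent_path(project_id, intent_id)

    # Fetch the full intent so existing training phrases are preserved.
    intent = intents_client.get_intent(
        request={
            "name": intent_name,
            "intent_view": dialogflow.IntentView.INTENT_VIEW_FULL,
        }
    )

    # Add the phrasings we found in the interaction logs.
    for text in new_phrases:
        part = dialogflow.Intent.TrainingPhrase.Part(text=text)
        intent.training_phrases.append(dialogflow.Intent.TrainingPhrase(parts=[part]))

    intents_client.update_intent(request={"intent": intent, "language_code": "en"})

    # Kick off retraining so the new phrases actually take effect.
    agents_client = dialogflow.AgentsClient()
    operation = agents_client.train_agent(request={"parent": f"projects/{project_id}"})
    operation.result()


# Example: feed the inquiry we spotted in the logs back into the "upgrade" intent.
# add_training_phrases("my-project", "intent-id-from-console", ["Can your product do X?"])
```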
How do you find this data? It varies from one NLP provider to the next. We used to love a product called ChatBase for this purpose. Google has integrated many of the ChatBase features into Google DialogFlow, but as of this writing, the intent data isn’t available in the analytics tool.
But worry not. DialogFlow’s analytics can give you some intent-level summaries, but the dashboard doesn’t include the individual interactions (as of November 2021, though it looks like they intend to add them). To capture those interactions, click on your “Agent Settings” and check “Enable Interaction Logging.” To find your interaction logs, go to https://console.cloud.google.com/logs (make sure you select your project at the top). In the Logs Explorer, select your date range at the top and download your logs. The CSVs are messy, but you can do some Excel magic to get the records to match up and analyze your interactions.
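If you’d rather skip the manual download, the same entries can be pulled programmatically. Here is a minimal sketch with the google-cloud-logging Python client; the filter string is an assumption, so match it to whatever log name your agent’s entries show in Logs Explorer:

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # placeholder project ID

# Assumed filter: adjust the logName fragment and date range to your own project.
log_filter = (
    'logName:"dialogflow" '
    'AND timestamp>="2021-11-01T00:00:00Z" '
    'AND timestamp<"2021-12-01T00:00:00Z"'
)

for entry in client.list_entries(filter_=log_filter, order_by=cloud_logging.DESCENDING):
    # entry.payload is a dict for structured entries and a string for text entries.
    print(entry.timestamp, entry.payload)
```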
In our case, we download the JSON version and run it through a custom parser we wrote to get a simple spreadsheet of intent, response, session, and confidence level, so we can see at a glance which parts of our conversation need the most attention.
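The parser itself is nothing exotic. As a sketch in Python, assuming the JSON download is a list of log entries whose jsonPayload mirrors the detectIntent response fields (queryText, intent.displayName, intentDetectionConfidence, fulfillmentText) plus a session identifier, something like this gets you to a spreadsheet; adjust the field lookups to the shape of your own export:

```python
import csv
import json


def logs_to_csv(json_path: str, csv_path: str) -> None:
    """Flatten exported log entries into one spreadsheet row per interaction."""
    with open(json_path, encoding="utf-8") as f:
        entries = json.load(f)  # assumes the export is a JSON array of entries

    rows = []
    for entry in entries:
        payload = entry.get("jsonPayload", {})
        result = payload.get("queryResult", {})
        if not result:
            continue  # skip entries that aren't response records

        rows.append({
            "session": payload.get("session", ""),
            "query": result.get("queryText", ""),
            "intent": result.get("intent", {}).get("displayName", "(no match)"),
            "confidence": result.get("intentDetectionConfidence", ""),
            "response": result.get("fulfillmentText", ""),
        })

    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=["session", "query", "intent", "confidence", "response"]
        )
        writer.writeheader()
        writer.writerows(rows)
```

Sort the result by confidence and the interactions that need the most attention float straight to the top.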
Most importantly, go through these logs regularly to understand what your customers are looking for – and to make sure you are, in fact, making your chatbot smarter as time goes on.