Usability Testing
November 11, 2021 • 5 minute read

The Power of Usability Testing

Conversational AI gives organizations the opportunity to create applications that customers use successfully and willingly. The key to achieving these goals is delivering an excellent user experience–which for conversational apps can be quite a challenge.

Every customer using these apps is already an expert because of their vast experience conversing with other humans. Every customer understands the unwritten rules of conversation and expects your app to listen, pay attention, try to understand what they say, and then give a sensible response that moves the conversation forward. And if a conversational app fails, customers are likely to take it personally and lose trust in it, because it's breaking an implicit promise to hold up its end of the conversational contract.

Organizations need a way to understand the experience their customers have with Conversational AI apps to ensure that they're trustworthy and meeting customers' expectations. Luckily, the best way to do this is straightforward: have customers interact with your app and then talk to them about it. This idea may sound familiar to some readers: I'm calling for a program of usability testing.

Usability testing isn’t a new idea or specific to conversational apps–it’s been proven for decades as the single best way to improve outcomes for any automated system.  Every application, no matter how carefully designed, will have spots where the experience doesn’t quite meet expectations.

Customers are really good at discovering (and publicizing) these moments of friction, so organizations need a plan for identifying and fixing such issues. The choice for organizations is whether they want this discovery to happen with a limited number of representative users in the controlled environment of a usability test or to let their whole customer base find issues in the wild. Think of usability testing as a risk-reduction mechanism that gives organizations a preview of customer behavior and attitudes before the app goes into production, allowing them to control their brand and protect customer satisfaction.

Usability testing also boosts project efficiency by facilitating the discovery of issues early in the project lifecycle, when it is quick and cheap to make changes. By running usability tests in the design phase, project teams can reduce costly rework and regression testing. Equally important, usability testing makes projects more predictable by eliminating unhappy surprises when apps go to production.

The true genius of usability testing is that it gives us access to two kinds of data. We get behavioral data by observing what people say and do when they interact with an application, and attitudinal data by soliciting their reactions and opinions. Usability testing lets us discover what does and doesn't work for users and understand how important the issues are to them. There are certainly other methods for observing how users interact with an app, but when analyzing usage data or listening to recorded conversations, we can't ask users what they were thinking during the interaction or how they felt about it when it was done. Customer satisfaction surveys can be a good way to understand users' opinions, but survey responses don't tie neatly back to the experiences that underlie them. Because users complete satisfaction surveys after the fact (sometimes hours or days later), they quickly lose the details of the issues that led to their rating. Usability test results are unique because they pair users' reactions with the specific interactions that prompted them, allowing organizations to make targeted optimizations.

Usability testing delivers these benefits for any type of application–the best organizations test their websites, mobile apps, and software to ensure that they deliver seamless experiences to customers. But the benefits of usability testing can be even more pronounced for voice apps because of the audio modality. Visual user interfaces (like a website or mobile app) present a lot of information simultaneously, and the interface is persistent and asynchronous, meaning the user can study and scan a visual interface to figure it out and then start the interaction in their own time. Contrast this with a voice application: in a voice user interface, information is presented sequentially–we can only say one word at a time–and once a word is spoken, there's no record of it. Users don't have time to think about how to respond; they just need to say something. And because spoken conversations are synchronous, users are under pressure to respond in a timely way. All of this makes usability testing especially valuable for voice: it gives us direct feedback on whether a voice application is intuitive and comfortable for the people who'll use it.

By soliciting feedback from the people who will use the system, organizations can increase trust, minimize frustration, and boost willingness to use self-service. But of course, organizations also need strategies for acting on usability feedback. The whole point of collecting usability data is to improve outcomes, so project teams need to absorb the findings and use them to make targeted improvements to the application–and ideally, this should be more than a one-time effort. At Interactions, we follow an iterative, user-centric, data-driven design philosophy: we test, improve, then test again, continually optimizing the user experience of an application. Over time we synthesize the learnings from individual usability tests into a body of knowledge that allows designers to avoid past mistakes and design better conversational experiences from the start. By continuing to test the apps we build, we grow our repertoire of design knowledge and begin to deliver on the promises of Conversational AI.

Want to learn more? Let’s talk.