Microsoft launched its ChatGPT-powered search engine chatbot earlier this year. Within three months of its release, the Bing chatbot had engaged in over half a billion chats, attracted more than 100 million daily active users, and driven a fourfold increase in app downloads. Ever since Microsoft's paperclip assistant, Clippy, first hit our screens in the mid-1990s, we've been trying to make computers more human-like, to smooth the often awkward interface between the biological brain and synthetic software.
Adoption rates of conversational AI tools could more than double over the next five years, but the success of the Bing chatbot and others like it hinges on user acceptance and engagement. Many people are still uncomfortable using this technology, find it unfit for purpose, or feel it cannot empathize with them or understand their needs. When computer pioneer Alan Turing conceived the "imitation game" more than 70 years ago, he understood that AI's ability to think in a way indistinguishable from a human would be vital to its adoption.
In a world where 86% of customers still prefer humans to chatbots for online interactions, we wanted to explore how businesses could improve user engagement with chatbots.
Testing out three chatbots for their humanness
We recruited around 200 people with little to no experience with chatbots and presented them with Mitsuku, Bus Uncle, and Woebot.
Mitsuku (now known as Kuki) is designed to befriend humans in the metaverse, taking the form of an 18-year-old woman. Able to chat, play games, and perform magic tricks at the user's request, Mitsuku is said to have inspired the movie Her and is a five-time winner of the Loebner Prize, a Turing Test competition. Woebot is designed to provide mental health support. It describes itself as a charming robot friend and uses AI to offer emotional support through talk therapy, much as a friend would. Bus Uncle is used across Singapore to give bus arrival times and public transport directions, supposedly with the “personality of a real uncle.”
We collected data on the individual characteristics of our respondents and their general perceptions of chatbots. Next, we gave the study respondents time with one of the three chatbots. We then asked them about their experience with it. For a different perspective, we also conducted ten interviews with frequent chatbot users about their interactions and experiences.
What human-like competencies in chatbots foster user engagement?
To foster user engagement, chatbots must create a human-like interpersonal environment that enables natural, engaged communication. We must learn to program AI to handle more intricate tasks, using humanized social skills to query, reason, plan, and solve problems. But which aspects of human behavior should be prioritized when designing conversational AI?
AI designers should be aware that building believable, human-like qualities into conversational AI is as important as improving its efficiency.
We looked at media naturalness theory (MNT), which holds that face-to-face communication is the most natural and preferred form of human communication. The theory proposes three main mechanisms that govern how natural, smooth, and engaging interaction with technology feels:
- A decrease in cognitive effort, i.e., the amount of mental activity a user expends communicating with the technology,
- A reduction in communication ambiguities, i.e., fewer misinterpretations of and less confusion about the messages the user receives,
- An increase in physiological arousal, i.e., the emotions that users derive from interacting with the technology.
Translating these ideas into the context of chatbots, we came up with three competencies we thought such AI would need to engage users:
- Cognitive competency – the chatbot’s ability to apply its problem-solving and decision-making skills to the task at hand in a creative, open-minded, and spontaneous way.
- Relational competency – the AI agent’s interpersonal skills, such as supporting, cooperating, and collaborating with its users, so that it comes across as considerate and helpful.
- Emotional competency – AI’s ability to self-manage and moderate interactions with users, accounting for their moods, feelings, and reactions by displaying human warmth and compassion.
Our research confirmed that cognitive and emotional competencies build user trust and engagement. The picture was less clear for relational competency, because chatbots do not yet have the skills to remember previous interactions, build on those relationships, and readjust to the task at hand. The qualitative interviews in our mixed-methods study did confirm the importance of relational competency, but we learnt that most chatbots offer too few opportunities to demonstrate it. This is a rich area for future research.
How important is users' trust in their engagement with chatbots?
We proposed that user trust in chatbots is the central mechanism that mediates the relationship between human-like interactional competencies in AI and user engagement with conversational AI agents.
Prior research has shown that trust in technology influences its use and uptake across contexts such as m-commerce portals, business information systems, and knowledge management systems. We also know that users are more likely to trust interactive technologies with human-like features such as voice and animation. Such trust should be even more pronounced in AI-driven customer interactions, because users can continually gauge how human-like the AI is as it responds to their queries and follow-ups.
Our findings show that human-like cognitive and emotional competencies serve as innate trust-building mechanisms for fostering user engagement with chatbots.