A Three-Factor Model for Exploring Trust in a Healthcare Bot


Abstract from LSE Dissertation (2020)

The World Health Organisation estimates that by 2030 the global shortfall of healthcare workers will exceed 14 million. This shortfall comes at a time of rising healthcare costs, strained resources, increased demand and rapidly ageing populations (New Health: A vision for sustainability, 2017; Spatharou et al., 2020). AI technologies are seen as a possible solution to growing healthcare problems, both as a way to reduce costs and as a way to deliver personalised healthcare services at scale (Pereira & Diaz, 2019). Healthcare chatbots alone are expected to save the industry an estimated $3.6 billion globally per annum by 2022 (AI-Powered Chatbots to Drive Dramatic Cost Savings in Healthcare, 2017).

This study aimed to identify key factors that influence trust in chatbots, in order to inform the design and marketing of healthcare agents and improve adoption and usage rates. Three categories of factors were used to explore trust in a healthcare bot: robot, human and environmental. Robot-factors included two levels of anthropomorphism. Human-factors included personality and demographic traits. Environmental-factors were explored through three levels of information privacy disclosure, achieved by manipulating the type and complexity of a task as well as the type of data disclosed. A second dependent variable, likelihood to act on advice (LTA), was introduced to explore whether trust and LTA yielded similar results; both outcomes are important if the potential of healthcare bots is to be realised. A survey, informed by the literature review, was distributed via the online platform Prolific. Participants (N=602) were randomly assigned to one of 12 versions of a text-based chat interface in which a bot-doctor discusses bone health with a patient.
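For readers who think in code, the sketch below shows one generic way the three factor categories and the two outcome measures (trust and LTA) could be arranged for a regression-style analysis. It is a minimal illustration on synthetic data only; the variable names, factor coding and model form are assumptions made here for clarity, not the specification used in the dissertation.

```python
# Illustrative sketch only: synthetic data and a generic regression structure,
# not the dissertation's actual dataset, variables or model specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 602  # sample size reported in the abstract

df = pd.DataFrame({
    # robot-factors: e.g. level of anthropomorphism (text bot vs. digital human)
    "anthropomorphism": rng.choice(["text", "digital_human"], size=n),
    # human-factors: e.g. an agreeableness score and decision-making style
    "agreeableness": rng.normal(0, 1, n),
    "spontaneous_style": rng.integers(0, 2, n),
    # environmental-factors: level of information privacy disclosure
    "privacy_level": rng.choice(["low", "medium", "high"], size=n),
})

# Synthetic outcomes purely so the example runs end to end
df["trust"] = rng.normal(0, 1, n)
df["lta"] = rng.normal(0, 1, n)

# One model per dependent variable, mirroring the trust vs. LTA comparison
for outcome in ["trust", "lta"]:
    model = smf.ols(
        f"{outcome} ~ C(anthropomorphism) + agreeableness"
        " + spontaneous_style + C(privacy_level)",
        data=df,
    ).fit()
    print(model.summary().tables[1])
```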

Significant results were found for all three categories of factors, underscoring the need for a multi-dimensional approach. For example, if a bot-doctor was a digital human (a photo-realistic 3D avatar), a participant was less likely to act on its advice; but the more human-like a bot-doctor was perceived to be, the more likely a participant was to trust and act on its advice. The agreeableness personality trait was associated with both trust and LTA. Participants with spontaneous decision-making styles were also more likely to trust a bot-doctor. Finally, at higher levels of information privacy disclosure, compared with lower levels, participants were less likely to act on a bot-doctor's advice. While this study yielded interesting insights, it also raised many unanswered questions and should be viewed as exploratory only.