In recent years, the healthcare industry has witnessed a significant increase in the use of large language model-based chatbots, or generative conversational agents. These AI-powered tools have been employed for various purposes, including patient education, assessment, and management. As the popularity of these chatbots grows, researchers from the University of Illinois Urbana-Champaign’s ACTION Lab have taken a closer look at their potential to promote healthy behavior change.
Michelle Bak, a doctoral student in information sciences, and Professor Jessie Chin recently published their findings in the Journal of the American Medical Informatics Association. Their study aimed to determine whether large language models could effectively identify users’ motivational states and provide appropriate information to support their journey toward healthier habits.
Study Design
To assess the capabilities of large language models in promoting behavior change, Bak and Chin designed a comprehensive study involving three prominent chatbot models: ChatGPT, Google Bard, and Llama 2. The researchers created a series of 25 scenarios, each targeting specific health needs such as low physical activity, diet and nutrition concerns, mental health challenges, cancer screening and diagnosis, sexually transmitted diseases, and substance dependency.
The scenarios were carefully crafted to represent the five distinct motivational stages of behavior change:
- Resistance to change and lack of awareness of the problem behavior
- Increased awareness of the problem behavior but ambivalence about making changes
- Intention to take action with small steps toward change
- Initiation of behavior change with a commitment to maintain it
- Successful maintenance of the behavior change for six months, with a commitment to sustain it
By evaluating the chatbots’ responses to each scenario across the different motivational stages, the researchers aimed to determine the strengths and weaknesses of large language models in supporting users throughout their behavior change journey.
What Did the Study Find?
The study revealed both promise and significant limitations in the ability of large language models to support behavior change. Bak and Chin found that chatbots can effectively identify motivational states and provide relevant information when users have established goals and a strong commitment to take action. This suggests that individuals who are already in the later stages of behavior change, such as those who have initiated changes or have been successfully maintaining them for some time, can benefit from the guidance and support these AI-powered tools provide.
However, the researchers also discovered that large language models struggle to recognize the initial stages of motivation, particularly when users are resistant to change or ambivalent about modifying their behavior. In these cases, the chatbots failed to provide adequate information to help users evaluate their problem behavior and its consequences, or to assess how their environment influenced their actions. For example, when faced with a user resistant to increasing their physical activity, the chatbots often defaulted to information about joining a gym rather than engaging the user emotionally by highlighting the negative consequences of a sedentary lifestyle.
Furthermore, the study revealed that large language models did not offer sufficient guidance on using reward systems to maintain motivation or reducing environmental stimuli that might increase the risk of relapse, even for users who had already taken steps to change their behavior. Bak noted, “The large language model-based chatbots provide resources on getting external help, such as social support. They’re lacking information on how to control the environment to eliminate a stimulus that reinforces problem behavior.”
Implications and Future Research
The findings of this study underscore the current limitations of large language models in understanding motivational states from natural language conversations. Chin explained that these models are trained to represent the relevance of a user’s language but struggle to differentiate between a user who is considering change but still hesitant and one who has a firm intention to take action. Additionally, the semantic similarity in user queries across different motivational stages makes it challenging for the models to accurately identify the user’s readiness for change based solely on their language.
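The semantic-similarity point can be made concrete with a rough sketch: two queries from users at different motivational stages can be lexically almost identical, so surface-level similarity gives a model little signal about readiness for change. The example queries and the simple bag-of-words cosine measure below are illustrative assumptions for this sketch, not material from the study itself.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using bag-of-words counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    # Dot product over the words the two texts share
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Hypothetical queries: one ambivalent user, one committed user
ambivalent = "I know I should exercise more but I am not sure it is worth it"
committed = "I know I should exercise more and I am ready to start this week"

# High lexical overlap despite very different motivational stages
print(f"{cosine_similarity(ambivalent, committed):.2f}")
```

Despite expressing opposite readiness for change, the two sentences score well above 0.6 on this crude measure, which is one way to see why distinguishing motivational stages from language alone is hard.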
Despite these limitations, the researchers believe that large language model chatbots have the potential to provide valuable support when users have strong motivations and are ready to take action. To fully realize this potential, future studies will focus on fine-tuning these models to better understand users’ motivational states by leveraging linguistic cues, information search patterns, and social determinants of health. By equipping the models with more specific knowledge and improving their ability to recognize and respond to different stages of motivation, researchers hope to enhance the effectiveness of these AI-powered tools in promoting healthy behavior change.
AI Chatbots in Behavior Change
The study from the University of Illinois Urbana-Champaign’s ACTION Lab has shed light on the potential and limitations of large language model chatbots in promoting healthy behavior change. While these AI-powered tools have shown promise in supporting users who are committed to making positive changes, they still struggle to effectively recognize and respond to the initial stages of motivation, such as resistance and ambivalence. As researchers continue to refine and improve these models, it is hoped that they will become increasingly effective in guiding users through all stages of the behavior change process, ultimately contributing to better health outcomes for individuals and communities alike.