arXiv:2407.11387
A Framework for Evaluating Appropriateness, Trustworthiness, and Safety in Mental Wellness AI Chatbots

16 July 2024
Lucia Chen
David A. Preece
P. Sikka
James J. Gross
Ben Krause
Community: AI4MH
Abstract

Large language model (LLM) chatbots are susceptible to biases and hallucinations, but current evaluations of mental wellness technologies lack comprehensive case studies assessing their practical applications. Here, we address this gap by introducing the MHealth-EVAL framework, a new role-play-based interactive evaluation method designed specifically for evaluating the appropriateness, trustworthiness, and safety of mental wellness chatbots. We also introduce Psyfy, a new chatbot that leverages LLMs to facilitate transdiagnostic Cognitive Behavioral Therapy (CBT). We demonstrate the MHealth-EVAL framework's utility through a comparative study of two versions of Psyfy against standard baseline chatbots. Our results showed that the Psyfy chatbots outperformed the baseline chatbots in delivering appropriate responses, engaging users, and avoiding untrustworthy responses. However, both Psyfy and the baseline chatbots exhibited limitations, such as providing predominantly US-centric resources. While the Psyfy chatbots identified most unsafe situations and avoided giving unsafe responses, they sometimes struggled to recognize subtle harmful intentions when prompted in role-play scenarios. Our study demonstrates a practical application of the MHealth-EVAL framework and showcases Psyfy's utility in harnessing LLMs to enhance user engagement and provide flexible, appropriate responses aligned with an evidence-based CBT approach.
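
The abstract names the framework's three evaluation dimensions but not the mechanics of the role-play protocol, which are detailed in the paper itself. As a rough illustration of what a role-play-based interactive evaluation loop of this kind could look like, the sketch below runs scripted personas against a chatbot and aggregates per-dimension scores. All names here (RolePlayScenario, TurnRating, evaluate_chatbot) and the 0-2 rubric are illustrative assumptions, not the authors' MHealth-EVAL implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative sketch of a role-play-based chatbot evaluation loop in the
# spirit of what the abstract describes. All names and the 0-2 rubric are
# assumptions for illustration; the paper's actual framework, prompts, and
# scoring criteria may differ.

@dataclass
class RolePlayScenario:
    """A scripted user persona and its conversation turns, e.g. a user
    hinting at an unsafe situation, used to probe the chatbot's replies."""
    persona: str
    turns: List[str]

@dataclass
class TurnRating:
    """Per-turn scores on the three dimensions (hypothetical 0-2 rubric)."""
    appropriateness: int
    trustworthiness: int
    safety: int

def evaluate_chatbot(
    chatbot: Callable[[List[str]], str],      # conversation history -> reply
    rater: Callable[[str, str], TurnRating],  # (user turn, reply) -> scores
    scenarios: List[RolePlayScenario],
) -> Dict[str, float]:
    """Run each role-play scenario against the chatbot and return the mean
    score per dimension across all rated turns."""
    totals = {"appropriateness": 0, "trustworthiness": 0, "safety": 0}
    n = 0
    for scenario in scenarios:
        history: List[str] = []
        for user_turn in scenario.turns:
            history.append(user_turn)
            reply = chatbot(history)
            history.append(reply)
            rating = rater(user_turn, reply)
            totals["appropriateness"] += rating.appropriateness
            totals["trustworthiness"] += rating.trustworthiness
            totals["safety"] += rating.safety
            n += 1
    # Guard against an empty scenario set before averaging.
    return {dim: (total / n if n else 0.0) for dim, total in totals.items()}
```

Under this framing, the comparative study the abstract reports would amount to running the same scenario set through both Psyfy versions and the baseline chatbots and comparing the aggregated per-dimension scores.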
