Responsible Evaluation of AI for Mental Health

Hiba Arnaout
Anmol Goel
H. Andrew Schwartz
Steffen T. Eberhardt
Dana Atzil-Slonim
Gavin Doherty
Brian Schwartz
Wolfgang Lutz
Tim Althoff
Munmun De Choudhury
Hamidreza Jamalabadi
Raj Sanjay Shah
Flor Miriam Plaza-del-Arco
Dirk Hovy
Maria Liakata
Iryna Gurevych
Main: 7 pages · Bibliography: 12 pages · Appendix: 16 pages · 18 tables
Abstract

Although artificial intelligence (AI) shows growing promise for mental health care, current approaches to evaluating AI tools in this domain remain fragmented and poorly aligned with clinical practice, social context, and first-hand user experience. This paper argues for a rethinking of responsible evaluation -- what is measured, by whom, and for what purpose -- by introducing an interdisciplinary framework that integrates clinical soundness, social context, and equity, providing a structured basis for evaluation. Through an analysis of 135 recent *CL publications, we identify recurring limitations: over-reliance on generic metrics that do not capture clinical validity, therapeutic appropriateness, or user experience; limited participation from mental health professionals; and insufficient attention to safety and equity. To address these gaps, we propose a taxonomy of AI mental health support types -- assessment-, intervention-, and information synthesis-oriented -- each with distinct risks and evaluative requirements, and illustrate its use through case studies.
