Evaluation of Large Language Models in Legal Applications: Challenges, Methods, and Future Directions

Yiran Hu
Huanghai Liu
Chong Wang
Kunran Li
Tien-Hsuan Wu
Haitao Li
Xinran Xu
Siqing Huo
Weihang Su
Ning Zheng
Siyuan Zheng
Qingyao Ai
Yun Liu
Renjun Bian
Yiqun Liu
Charles L.A. Clarke
Weixing Shen
Ben Kao
Abstract

Large language models (LLMs) are increasingly being integrated into legal applications, including judicial decision support, legal practice assistance, and public-facing legal services. While LLMs show strong potential in handling legal knowledge and tasks, their deployment in real-world legal settings raises critical concerns that go beyond surface-level accuracy, involving the soundness of legal reasoning processes and trustworthiness issues such as fairness and reliability. Systematic evaluation of LLM performance on legal tasks has therefore become essential for their responsible adoption. This survey identifies key challenges in evaluating LLMs for legal tasks, grounded in real-world legal practice. We analyze the major difficulties in assessing LLM performance in the legal domain, including outcome correctness, reasoning reliability, and trustworthiness. Building on these challenges, we review and categorize existing evaluation methods and benchmarks according to their task design, datasets, and evaluation metrics. We further discuss the extent to which current approaches address these challenges, highlight their limitations, and outline future research directions toward more realistic, reliable, and legally grounded evaluation frameworks for LLMs in the legal domain.
