
PLawBench: A Rubric-Based Benchmark for Evaluating LLMs in Real-World Legal Practice

Yuzhen Shi
Huanghai Liu
Yiran Hu
Gaojie Song
Xinran Xu
Yubo Ma
Tianyi Tang
Li Zhang
Qingjing Chen
Di Feng
Wenbo Lv
Weiheng Wu
Kexin Yang
Sen Yang
Wei Wang
Rongyao Shi
Yuanyang Qiu
Yuemeng Qi
Jingwen Zhang
Xiaoyu Sui
Yifan Chen
Yi Zhang
An Yang
Bowen Yu
Dayiheng Liu
Junyang Lin
Weixing Shen
Bing Zhao
Charles L.A. Clarke
Hu Wei
Main: 10 pages; 6 figures; Bibliography: 3 pages; 21 tables; Appendix: 29 pages
Abstract

As large language models (LLMs) are increasingly applied to legal domain-specific tasks, evaluating their ability to perform legal work in real-world settings has become essential. However, existing legal benchmarks rely on simplified and highly standardized tasks, failing to capture the ambiguity, complexity, and reasoning demands of real legal practice. Moreover, prior evaluations often adopt coarse, single-dimensional metrics and do not explicitly assess fine-grained legal reasoning. To address these limitations, we introduce PLawBench, a Practical Law Benchmark designed to evaluate LLMs in realistic legal practice scenarios. Grounded in real-world legal workflows, PLawBench models the core processes of legal practitioners through three task categories: public legal consultation, practical case analysis, and legal document generation. These tasks assess a model's ability to identify legal issues and key facts, perform structured legal reasoning, and generate legally coherent documents. PLawBench comprises 850 questions across 13 practical legal scenarios, each accompanied by expert-designed evaluation rubrics, yielding approximately 12,500 rubric items for fine-grained assessment. Using an LLM-based evaluator aligned with human expert judgments, we evaluate 10 state-of-the-art LLMs. Experimental results show that none of them achieves strong performance on PLawBench, revealing substantial limitations in the fine-grained legal reasoning capabilities of current LLMs and highlighting important directions for the future evaluation and development of legal LLMs. Data is available at: this https URL.
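The abstract describes a rubric-based, LLM-as-judge evaluation protocol but does not include implementation details here. The following is a minimal Python sketch of how such scoring could work; it is not the authors' code, and the rubric data structure, the `query_llm` helper, and the per-item YES/NO verdict format are all assumptions made for illustration.

```python
"""Minimal sketch of rubric-based LLM-as-judge scoring (illustrative only).

Assumptions (not from the paper): each benchmark question carries a list of
expert-written rubric items, a judge model answers YES/NO for whether the
candidate response satisfies each item, and the score is the fraction satisfied.
"""
from dataclasses import dataclass


@dataclass
class Question:
    prompt: str          # the legal task, e.g. a public consultation query
    rubric: list[str]    # expert-designed criteria the answer should meet


def query_llm(prompt: str) -> str:
    """Placeholder for a call to the judge model (hypothetical helper)."""
    raise NotImplementedError("wire this to your LLM API of choice")


def judge_response(question: Question, response: str) -> float:
    """Score one model response as the fraction of rubric items it satisfies."""
    satisfied = 0
    for item in question.rubric:
        verdict = query_llm(
            "You are a legal evaluation expert.\n"
            f"Task:\n{question.prompt}\n\n"
            f"Candidate answer:\n{response}\n\n"
            f"Rubric item: {item}\n"
            "Does the answer satisfy this rubric item? Reply YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            satisfied += 1
    return satisfied / len(question.rubric) if question.rubric else 0.0


def evaluate_model(questions: list[Question], answers: list[str]) -> float:
    """Average rubric-satisfaction score across the benchmark."""
    scores = [judge_response(q, a) for q, a in zip(questions, answers)]
    return sum(scores) / len(scores)
```

In practice, a judge pipeline like this would also need alignment checks against human expert ratings (as the paper reports) before its scores can be trusted.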
