Evaluating the Correctness of Inference Patterns Used by LLMs for Judgment

This paper presents a method to analyze the inference patterns that Large Language Models (LLMs) use for judgment, in a case study on legal LLMs, so as to identify potentially incorrect representations in the LLM according to human domain knowledge. Unlike traditional evaluations of language generation results, we propose to evaluate the correctness of the detailed inference patterns behind an LLM's seemingly correct outputs. To this end, we quantify the interactions between input phrases used by the LLM as primitive inference patterns, because recent theoretical work has proven several mathematical guarantees for the faithfulness of interaction-based explanations. We design a set of metrics to evaluate the detailed inference patterns of LLMs. Experiments show that even when the language generation results appear correct, a significant portion of the inference patterns that the LLM uses for legal judgment may represent misleading or irrelevant logic.
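As a concrete illustration of the interaction-based explanation the abstract refers to, the sketch below computes Harsanyi interactions I(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T) over subsets of input phrases, where v(T) is the model output when only the phrases in T are kept. The abstract does not specify the exact formulation or masking scheme, so the `model_output` callable and the phrase list here are hypothetical placeholders, not the paper's method.

```python
from itertools import combinations

def harsanyi_interaction(model_output, subset):
    """Harsanyi interaction I(S) = sum over T subseteq S of
    (-1)^(|S|-|T|) * v(T), where v(T) is the model's scalar output
    when only the phrases in T are kept and the rest are masked."""
    s = list(subset)
    total = 0.0
    for k in range(len(s) + 1):
        for t in combinations(s, k):
            total += (-1) ** (len(s) - len(t)) * model_output(set(t))
    return total

# Hypothetical scoring function standing in for a real LLM call that
# masks absent phrases and returns, e.g., the log-odds of a judgment.
def model_output(kept):
    score = 0.0
    if "theft" in kept:
        score += 1.0
    if "theft" in kept and "at night" in kept:
        score += 0.5  # an AND interaction between two phrases
    return score

phrases = ["theft", "at night", "first offense"]
for size in range(1, len(phrases) + 1):
    for subset in combinations(phrases, size):
        print(subset, round(harsanyi_interaction(model_output, subset), 3))
```

In this toy example, the pair ("theft", "at night") recovers an interaction strength of 0.5 while subsets involving "first offense" receive zero, illustrating how nonzero interactions isolate the phrase combinations the model actually relies on.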
@article{chen2025_2410.09083,
  title   = {Evaluating the Correctness of Inference Patterns Used by LLMs for Judgment},
  author  = {Lu Chen and Yuxuan Huang and Yixing Li and Dongrui Liu and Qihan Ren and Shuai Zhao and Kun Kuang and Zilong Zheng and Quanshi Zhang},
  journal = {arXiv preprint arXiv:2410.09083},
  year    = {2025}
}