
arXiv:2411.10137
Legal Evaluations and Challenges of Large Language Models

15 November 2024
Jiaqi Wang
Huan Zhao
Z. Yang
Peng Shu
J. Chen
Haobo Sun
Ruixi Liang
Shixin Li
Pengcheng Shi
Longjun Ma
Zongjia Liu
Z. Liu
T. Zhong
Yutong Zhang
Chong Ma
X. Zhang
T. Zhang
Tianli Ding
Yudan Ren
Tianming Liu
Xi Jiang
S. Zhang
Topics: AILaw, ELM
Abstract

In this paper, we review legal testing methods based on Large Language Models (LLMs), using the OpenAI o1 model as a case study to evaluate how well large models apply legal provisions. We compare current state-of-the-art LLMs, including open-source, closed-source, and models trained specifically for the legal domain. Through systematic testing on English and Chinese legal cases drawn from common-law systems and from China, we explore the strengths and weaknesses of LLMs in understanding and applying legal texts, reasoning through legal issues, and predicting judgments. The experimental results highlight both the potential and the limitations of LLMs in legal applications, particularly the challenges of interpreting legal language and the accuracy of legal reasoning. Finally, the paper provides a comprehensive analysis of the advantages and disadvantages of the various model types, offering insights and references for the future application of AI in the legal field.
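The evaluation described above, scoring a model's predicted judgments against reference outcomes on labeled legal cases, can be sketched in a few lines. This is a minimal illustration only: the `predict` stub, the sample cases, and the accuracy metric are all hypothetical placeholders, not the paper's actual prompts, test sets, or models.

```python
def predict(case_text: str) -> str:
    """Stand-in for an LLM call (e.g. an API request to a model under test).

    A trivial keyword heuristic is used here only so the sketch runs
    end to end; a real harness would query the model being evaluated.
    """
    return "liable" if "breach" in case_text.lower() else "not liable"


def evaluate(cases: list[dict]) -> float:
    """Return judgment-prediction accuracy over a list of labeled cases."""
    correct = sum(predict(c["facts"]) == c["judgment"] for c in cases)
    return correct / len(cases)


# Hypothetical labeled cases: each pairs case facts with a gold judgment.
cases = [
    {"facts": "Defendant admitted breach of contract terms.",
     "judgment": "liable"},
    {"facts": "No duty of care was owed to the plaintiff.",
     "judgment": "not liable"},
]

print(evaluate(cases))  # -> 1.0
```

In practice such a harness would also need per-jurisdiction test splits (e.g. common-law vs. Chinese cases) and finer-grained metrics than raw accuracy, since judgment prediction is only one of the tasks the paper examines.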
