ResearchTrend.AI

LexPam: Legal Procedure Awareness-Guided Mathematical Reasoning

3 April 2025
Kepu Zhang
Guofu Xie
Weijie Yu
Mingyue Xu
Xu Tang
Yaxin Li
Jun Xu
Topics: AILaw, ELM, LRM
Abstract

The legal mathematical reasoning ability of LLMs is crucial when applying them to real-world legal scenarios, as it directly affects their credibility. While existing legal LLMs can handle general judicial question answering, they have not been trained for legal mathematical reasoning. Open-domain reasoning models, though able to generate detailed calculation steps, do not follow the reasoning logic required in legal scenarios. Moreover, there is currently no legal mathematical reasoning dataset with which to validate and improve LLMs' reasoning abilities in legal contexts. To address these issues, we propose LexNum, the first Chinese legal mathematical reasoning dataset, covering three common scenarios: economic compensation, work-injury compensation, and traffic-accident compensation. Using LexNum, we benchmark existing legal LLMs and reasoning LLMs, and introduce LexPam, a reinforcement learning algorithm guided by legal procedural awareness that trains LLMs for mathematical reasoning in legal scenarios. Experiments on the three scenarios show that existing legal LLMs and reasoning models perform unsatisfactorily on legal mathematical reasoning tasks, and that LexPam improves LLMs' performance on them.
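To make the idea of "procedure-aware" reinforcement learning concrete, here is a minimal, hypothetical sketch of a reward function in that spirit. This is not the paper's actual LexPam implementation; the stage names, weights, and tolerance below are illustrative assumptions. The reward blends correctness of the final computed amount with coverage of the procedural stages a legal calculation is expected to walk through.

```python
# Hypothetical sketch (NOT the paper's implementation) of a reward signal
# for procedure-aware RL on legal mathematical reasoning. Stage names,
# weights, and tolerance are illustrative assumptions.

def procedure_aware_reward(response: str,
                           predicted_answer: float,
                           gold_answer: float,
                           required_stages=("identify applicable statute",
                                            "determine base amount",
                                            "apply statutory multiplier"),
                           stage_weight: float = 0.5,
                           tol: float = 1e-2) -> float:
    """Return a scalar reward in [0, 1]."""
    # (a) Correctness of the final number, within a small tolerance.
    answer_score = 1.0 if abs(predicted_answer - gold_answer) <= tol else 0.0

    # (b) Fraction of required procedural stages mentioned in the reasoning.
    text = response.lower()
    covered = sum(stage in text for stage in required_stages)
    stage_score = covered / len(required_stages)

    # Blend the two: the final answer and the procedure each shape training.
    return (1 - stage_weight) * answer_score + stage_weight * stage_score
```

A reward of this shape would give partial credit to a rollout that follows the correct legal procedure but miscalculates, rather than scoring only the final number.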

View on arXiv
@article{zhang2025_2504.02590,
  title={LexPam: Legal Procedure Awareness-Guided Mathematical Reasoning},
  author={Kepu Zhang and Guofu Xie and Weijie Yu and Mingyue Xu and Xu Tang and Yaxin Li and Jun Xu},
  journal={arXiv preprint arXiv:2504.02590},
  year={2025}
}