LegalSearchLM: Rethinking Legal Case Retrieval as Legal Elements Generation

28 May 2025
Chaeeun Kim
Jinu Lee
Wonseok Hwang
Communities: AI · Law · RALM · ELM
Main: 8 pages · 6 figures · Bibliography: 2 pages · 29 tables · Appendix: 13 pages
Abstract

Legal Case Retrieval (LCR), which retrieves relevant cases given a query case, is a fundamental task for legal professionals in research and decision-making. However, existing studies on LCR face two major limitations. First, they are evaluated on relatively small-scale retrieval corpora (e.g., 100-55K cases) and use a narrow range of criminal query types, which cannot sufficiently reflect the complexity of real-world legal retrieval scenarios. Second, their reliance on embedding-based or lexical matching methods often results in limited representations and legally irrelevant matches. To address these issues, we present: (1) LEGAR BENCH, the first large-scale Korean LCR benchmark, covering 411 diverse crime types in queries over 1.2M legal cases; and (2) LegalSearchLM, a retrieval model that performs legal element reasoning over the query case and directly generates content grounded in the target cases through constrained decoding. Experimental results show that LegalSearchLM outperforms baselines by 6-20% on LEGAR BENCH, achieving state-of-the-art performance. It also demonstrates strong generalization to out-of-domain cases, outperforming naive generative models trained on in-domain data by 15%.
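The abstract frames retrieval as constrained generation: the model reasons over the query case and then decodes text that must stay within content indexed from the candidate cases. The paper's own implementation is not shown here; as a rough sketch of what trie-constrained decoding can look like with Hugging Face transformers, the snippet below uses prefix_allowed_tokens_fn to restrict each decoding step to continuations found in a toy corpus. The checkpoint, the toy passages, and the Trie helper are illustrative assumptions, not the authors' released code.

# Illustrative sketch only: trie-constrained decoding for generative retrieval.
# The checkpoint, corpus spans, and Trie helper are assumptions for illustration,
# not LegalSearchLM's released implementation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

class Trie:
    """Prefix tree over token-id sequences drawn from the indexed cases."""
    def __init__(self, sequences):
        self.root = {}
        for seq in sequences:
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})

    def next_tokens(self, prefix):
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []
            node = node[tok]
        return list(node.keys())

tokenizer = AutoTokenizer.from_pretrained("t5-small")      # placeholder checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # placeholder checkpoint

# Index: token-id sequences taken from the target cases (e.g., key passages).
corpus_spans = [
    "the defendant inflicted bodily harm on the victim",
    "the contract was breached by late delivery",
]
trie = Trie([tokenizer(s, add_special_tokens=False).input_ids for s in corpus_spans])

def allowed_tokens(batch_id, generated_ids):
    # Restrict each decoding step to continuations present in the index,
    # so the model can only emit content grounded in some indexed case.
    prefix = generated_ids.tolist()[1:]  # drop the decoder start token
    return trie.next_tokens(prefix) or [tokenizer.eos_token_id]

query = "Query case: the accused struck the victim, causing injury."
inputs = tokenizer(query, return_tensors="pt")
output = model.generate(**inputs,
                        prefix_allowed_tokens_fn=allowed_tokens,
                        max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))

In a full system the generated span would then be mapped back to the case(s) it was indexed from; scoring and ranking details are beyond this sketch.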

@article{kim2025_2505.23832,
  title={LegalSearchLM: Rethinking Legal Case Retrieval as Legal Elements Generation},
  author={Chaeeun Kim and Jinu Lee and Wonseok Hwang},
  journal={arXiv preprint arXiv:2505.23832},
  year={2025}
}