Enhancing Legal LLMs through Metadata-Enriched RAG Pipelines and Direct Preference Optimization

Suyash Maniyar
Deepali Singh
Rohith Reddy
Main: 9 pages
2 figures
6 tables
Bibliography: 2 pages
Appendix: 1 page
Abstract

Large Language Models (LLMs) perform well in short contexts but degrade on long legal documents, often producing hallucinations such as incorrect clauses or precedents. In the legal domain, where precision is critical, such errors undermine reliability and trust. Retrieval-Augmented Generation (RAG) helps ground outputs but remains limited in legal settings, especially with small, locally deployed models required for data privacy. We identify two failure modes: retrieval errors due to lexical redundancy in legal corpora, and decoding errors where models generate answers despite insufficient context. To address these, we propose Metadata-Enriched Hybrid RAG to improve document-level retrieval, and apply Direct Preference Optimization (DPO) to enforce safe refusal when the retrieved context is inadequate. Together, these methods improve grounding, reliability, and safety in legal language models.
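To make the retrieval side concrete, here is a minimal sketch of hybrid retrieval with a metadata filter. All names, the toy corpus, the blending weight `alpha`, and the stand-in scoring functions are illustrative assumptions, not the paper's implementation; a real system would use BM25 for the lexical score and an embedding model for the dense score.

```python
import math

# Hypothetical toy corpus: each chunk carries document-level metadata
# (doc_id, doc_type) attached during indexing.
corpus = [
    {"doc_id": "A", "doc_type": "contract", "text": "the lessee shall pay rent monthly"},
    {"doc_id": "B", "doc_type": "contract", "text": "the lessor may terminate the lease"},
    {"doc_id": "C", "doc_type": "case_law", "text": "the court held the clause unenforceable"},
]

def lexical_score(query, text):
    """Token-overlap score, standing in for BM25."""
    q, t = set(query.split()), set(text.split())
    return len(q & t) / (len(q) or 1)

def dense_score(query, text):
    """Toy 'dense' similarity: character-bigram cosine,
    standing in for an embedding model."""
    def bigrams(s):
        grams = {}
        for i in range(len(s) - 1):
            g = s[i:i + 2]
            grams[g] = grams.get(g, 0) + 1
        return grams
    a, b = bigrams(query), bigrams(text)
    dot = sum(a[g] * b.get(g, 0) for g in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query, corpus, doc_type=None, alpha=0.5, k=2):
    """Filter by metadata first, then rank by a weighted mix of
    lexical and dense scores (alpha blends the two signals)."""
    pool = [c for c in corpus if doc_type is None or c["doc_type"] == doc_type]
    scored = [
        (alpha * lexical_score(query, c["text"])
         + (1 - alpha) * dense_score(query, c["text"]), c)
        for c in pool
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for _, c in scored[:k]]

hits = hybrid_retrieve("pay rent monthly", corpus, doc_type="contract")
print([h["doc_id"] for h in hits])  # → ['A', 'B']; the case-law chunk is filtered out
```

The metadata filter narrows the candidate pool before scoring, which is one way such a pipeline can counter the lexical redundancy of legal corpora: near-duplicate boilerplate from the wrong document type never competes for the top-k slots.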

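The DPO objective for refusal training can be sketched for a single preference pair. The example pair, the log-probability values, and `beta` are hypothetical; the loss formula is the standard DPO objective, which the paper applies with refusals as the preferred response when context is insufficient.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))),
    where the log-probs come from the policy and a frozen reference model."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical preference pair: when retrieval returns inadequate context,
# a safe refusal is "chosen" and a hallucinated answer is "rejected".
pair = {
    "prompt": "Which clause caps liability? [context: irrelevant passages]",
    "chosen": "I cannot answer: the provided context does not contain that clause.",
    "rejected": "Clause 7.2 caps liability at $1M.",  # fabricated specifics
}

# Toy log-probabilities under the policy and the frozen reference model.
loss = dpo_loss(logp_chosen=-5.0, logp_rejected=-6.0,
                ref_logp_chosen=-5.5, ref_logp_rejected=-5.5, beta=0.1)
print(round(loss, 4))  # → 0.6444
```

When the policy assigns relatively more mass to the refusal than the reference does (a positive margin), the loss falls below log 2; minimizing it over many such pairs pushes the model toward refusing rather than fabricating clauses.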