Explainable Rule Application via Structured Prompting: A Neural-Symbolic Approach

19 June 2025
Albert Sadowski, Jarosław A. Chudziak
AI · Law · ELM · LRM
Main: 8 pages · 1 figure · Bibliography: 2 pages · 5 tables
Abstract

Large Language Models (LLMs) excel in complex reasoning tasks but struggle with consistent rule application, exception handling, and explainability, particularly in domains like legal analysis that require both natural language understanding and precise logical inference. This paper introduces a structured prompting framework that decomposes reasoning into three verifiable steps: entity identification, property extraction, and symbolic rule application. By integrating neural and symbolic approaches, our method leverages LLMs' interpretive flexibility while ensuring logical consistency through formal verification. The framework externalizes task definitions, enabling domain experts to refine logical structures without altering the architecture. Evaluated on the LegalBench hearsay determination task, our approach significantly outperformed baselines, with OpenAI o-family models showing substantial improvements: using structured decomposition with complementary predicates, o1 achieved an F1 score of 0.929 and o3-mini reached 0.867, compared to their few-shot baselines of 0.714 and 0.74 respectively. This hybrid neural-symbolic system offers a promising pathway for transparent and consistent rule-based reasoning, suggesting potential for explainable AI applications in structured legal reasoning tasks.
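To make the three-step decomposition concrete, here is a minimal Python sketch of such a pipeline for the hearsay task. It is an illustration of the idea described in the abstract, not the authors' released code: the `call_llm` helper, the prompt wording, and the `hearsay_rule` predicate are all hypothetical assumptions.

```python
# Hypothetical sketch of the three-step structured-prompting pipeline:
# (1) entity identification, (2) property extraction,
# (3) symbolic rule application. Prompts and predicates are
# illustrative assumptions, not the paper's actual artifacts.

import json
from typing import Callable

# The rule lives outside the prompting code as a plain predicate over
# extracted properties, so a domain expert can refine it without
# touching the LLM-facing steps.
def hearsay_rule(props: dict) -> bool:
    # Hearsay: an out-of-court statement offered to prove the truth
    # of the matter asserted.
    return (
        props["is_statement"]
        and props["made_out_of_court"]
        and props["offered_for_truth"]
    )

def classify(fact_pattern: str, call_llm: Callable[[str], str]) -> dict:
    # Step 1: entity identification (declarant and statement).
    entities = json.loads(call_llm(
        "Identify the declarant and the statement in this fact pattern. "
        "Answer as JSON with keys 'declarant' and 'statement'.\n"
        f"{fact_pattern}"
    ))

    # Step 2: property extraction as complementary boolean predicates.
    props = json.loads(call_llm(
        "For the statement below, answer true/false as JSON for: "
        "'is_statement', 'made_out_of_court', 'offered_for_truth'.\n"
        f"Statement: {entities['statement']}\nContext: {fact_pattern}"
    ))

    # Step 3: deterministic symbolic rule application. Every
    # intermediate artifact is returned for inspection.
    return {
        "entities": entities,
        "properties": props,
        "is_hearsay": hearsay_rule(props),
    }
```

Because the verdict is computed by the deterministic predicate rather than the model, a wrong classification can be traced back to a specific extracted property, which is the kind of step-by-step verifiability the abstract argues for.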

@article{sadowski2025_2506.16335,
  title={Explainable Rule Application via Structured Prompting: A Neural-Symbolic Approach},
  author={Albert Sadowski and Jarosław A. Chudziak},
  journal={arXiv preprint arXiv:2506.16335},
  year={2025}
}