Dynamic Generation of Interpretable Inference Rules in a Neuro-Symbolic Expert System
Nathaniel Weir

Abstract
We present an approach for systematic reasoning that produces human-interpretable proof trees grounded in a factbase. Our solution evokes classic Prolog-based inference engines, but replaces handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval. The resulting reasoning engine, NELLIE, dynamically instantiates interpretable inference rules that capture and score entailment (de)compositions over natural language statements. NELLIE achieves competitive performance on scientific QA datasets that require structured explanations over multiple facts, while fully grounding its justification proofs in verified knowledge.
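To make the core idea concrete, the following minimal Python sketch illustrates Prolog-style backward chaining in which the rules are generated on the fly rather than handwritten. All names, the exact-match grounding check, and the stub rule generator are illustrative assumptions; they stand in for NELLIE's actual neural rule generation, entailment scoring, and semiparametric dense retrieval, which are not reproduced here.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ProofNode:
    """One node of a human-interpretable proof tree."""
    statement: str
    children: list["ProofNode"] = field(default_factory=list)
    grounded: bool = False  # True if supported directly by the factbase

def prove(goal: str,
          factbase: set[str],
          generate_rules: Callable[[str], list[tuple[str, str]]],
          depth: int = 0,
          max_depth: int = 3) -> Optional[ProofNode]:
    """Backward-chain on `goal`, Prolog-style. A goal is proven either by
    grounding it in the verified factbase (here: exact match, standing in
    for dense retrieval) or by recursing on a generated decomposition,
    i.e. a dynamically instantiated rule of the form `goal :- p1, p2`."""
    if goal in factbase:                   # base case: verified knowledge
        return ProofNode(goal, grounded=True)
    if depth >= max_depth:                 # bound the proof search
        return None
    for p1, p2 in generate_rules(goal):    # neural (de)composition, stubbed
        left = prove(p1, factbase, generate_rules, depth + 1, max_depth)
        right = prove(p2, factbase, generate_rules, depth + 1, max_depth)
        if left and right:                 # both premises proven => goal proven
            return ProofNode(goal, children=[left, right])
    return None

# Toy usage with a hand-written rule generator standing in for the LM:
factbase = {"the sun is a star", "stars produce light"}
rules = {"the sun produces light": [("the sun is a star", "stars produce light")]}
proof = prove("the sun produces light", factbase, lambda g: rules.get(g, []))
assert proof is not None and all(c.grounded for c in proof.children)
```

The sketch mirrors Horn-clause resolution: each decomposition plays the role of a rule whose head is the goal and whose body is the generated premise pair, so every leaf of the returned proof tree is grounded in verified knowledge.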