Dynamic Generation of Grounded Logical Explanations in a Neuro-Symbolic Expert System
Nathaniel Weir

Abstract
We propose an approach to systematic reasoning that produces human-interpretable proof trees grounded in a factbase. Our approach evokes classic Prolog-based inference engines, but replaces handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval. We demonstrate this approach through a novel system, NELLIE, which dynamically instantiates interpretable inference rules that capture and score entailment (de)compositions over natural language statements. NELLIE achieves strong performance in the scientific reasoning domain while producing reasoning traces that show how answers derive logically from compositions of human-verified facts.
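To illustrate the decompose-or-ground control flow the abstract describes, the sketch below shows how a Prolog-style backward-chaining search might interleave dense retrieval over a factbase with generated entailment decompositions. It is a minimal sketch under assumed interfaces: `factbase.retrieve`, `generator.decompose`, `scorer.entails`, the 0.9 threshold, and the binary decomposition into premise pairs are all hypothetical placeholders, not NELLIE's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ProofNode:
    """One node of the proof tree: a statement plus its supporting facts or sub-proofs."""
    statement: str
    support: list = field(default_factory=list)

def prove(hypothesis, factbase, generator, scorer, depth=0, max_depth=3):
    """Backward-chaining search (hypothetical interfaces): ground the hypothesis in
    verified facts, or decompose it into subgoals that jointly entail it."""
    # Base case: try to ground the statement directly via dense retrieval.
    facts = factbase.retrieve(hypothesis, top_k=3)
    grounded = [f for f in facts
                if scorer.entails(premise=f, hypothesis=hypothesis) > 0.9]
    if grounded:
        return ProofNode(hypothesis, support=grounded)

    if depth >= max_depth:
        return None

    # Recursive case: generate candidate (de)compositions of the hypothesis into
    # premise pairs, keep only those the entailment scorer accepts, and recurse.
    for p1, p2 in generator.decompose(hypothesis, num_candidates=5):
        if scorer.entails(premise=f"{p1} {p2}", hypothesis=hypothesis) < 0.9:
            continue  # discard decompositions that do not entail the goal
        children = [prove(p, factbase, generator, scorer, depth + 1, max_depth)
                    for p in (p1, p2)]
        if all(children):
            return ProofNode(hypothesis, support=children)
    return None  # no grounded proof found within the depth budget
```

The returned `ProofNode` tree is the human-readable artifact: every leaf is a retrieved, human-verified fact, and every internal node records the entailment decomposition that licensed it.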