ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Obtaining Faithful Interpretations from Compositional Neural Networks
arXiv: 2005.00724 · 2 May 2020

Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, Matt Gardner

Papers citing "Obtaining Faithful Interpretations from Compositional Neural Networks"

11 papers shown
Language Models with Rationality
Nora Kassner, Oyvind Tafjord, Ashish Sabharwal, Kyle Richardson, Hinrich Schütze, Peter Clark
ReLM, KELM, LRM · 23 May 2023

ViperGPT: Visual Inference via Python Execution for Reasoning
Dídac Surís, Sachit Menon, Carl Vondrick
MLLM, LRM, ReLM · 14 Mar 2023

MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation
Swarnadeep Saha, Xinyan Velocity Yu, Joey Tianyi Zhou, Ramakanth Pasunuru, Asli Celikyilmaz
ReLM, LRM · 16 Dec 2022

Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees
Swarnadeep Saha, Shiyue Zhang, Peter Hase, Joey Tianyi Zhou
21 Sep 2022

Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova
27 Jan 2022

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
Nora Kassner, Oyvind Tafjord, Hinrich Schütze, Peter Clark
KELM, LRM · 29 Sep 2021

Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards
Marco Valentino, Ian Pratt-Hartmann, André Freitas
XAI, LRM · 05 May 2021

Toward Code Generation: A Survey and Lessons from Semantic Parsing
Celine Lee, Justin Emile Gottschlich, Dan Roth
3DV · 26 Apr 2021

ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language
Oyvind Tafjord, Bhavana Dalvi, Peter Clark
24 Dec 2020

A Survey on Explainability in Machine Reading Comprehension
Mokanarangan Thayaparan, Marco Valentino, André Freitas
FaML · 01 Oct 2020

A Diagnostic Study of Explainability Techniques for Text Classification
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein
XAI, FAtt · 25 Sep 2020