Interpretable Neural Predictions with Differentiable Binary Variables (arXiv:1905.08160)
Jasmijn Bastings, Wilker Aziz, Ivan Titov · 20 May 2019

Papers citing "Interpretable Neural Predictions with Differentiable Binary Variables"

49 / 149 papers shown
Rationalization through Concepts
Diego Antognini, Boi Faltings · 11 May 2021 · FAtt

Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification
G. Chrysostomou, Nikolaos Aletras · 06 May 2021

Do Feature Attribution Methods Correctly Attribute Features?
Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah · 27 Apr 2021 · FAtt, XAI

SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning
Aaron Chan, Lyne Tchapmi, Bo Long, Soumya Sanyal, Tanishq Gupta, Xiang Ren · 18 Apr 2021 · ReLM, LRM

Flexible Instance-Specific Rationalization of NLP Models
G. Chrysostomou, Nikolaos Aletras · 16 Apr 2021

Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks
Hanjie Chen, Song Feng, Jatin Ganhotra, H. Wan, Chulaka Gunasekara, Sachindra Joshi, Yangfeng Ji · 09 Apr 2021

Reconciling the Discrete-Continuous Divide: Towards a Mathematical Theory of Sparse Communication
André F. T. Martins · 01 Apr 2021

Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases
Ilias Chalkidis, Manos Fergadiotis, D. Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos, Prodromos Malakasiotis · 24 Mar 2021 · AILaw

SelfExplain: A Self-Explaining Architecture for Neural Text Classifiers
Dheeraj Rajagopal, Vidhisha Balachandran, Eduard H. Hovy, Yulia Tsvetkov · 23 Mar 2021 · MILM, SSL, FAtt, AI4TS

Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo, Hamish Ivison, S. Han, Josiah Poon · 20 Mar 2021 · MILM

Explain and Predict, and then Predict Again
Zijian Zhang, Koustav Rudra, Avishek Anand · 11 Jan 2021 · FAtt

FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation
Kushal Lakhotia, Bhargavi Paranjape, Asish Ghoshal, Wen-tau Yih, Yashar Mehdad, Srini Iyer · 31 Dec 2020

Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Alexis Ross, Ana Marasović, Matthew E. Peters · 27 Dec 2020

Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration
Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz · 16 Dec 2020

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi · 14 Dec 2020 · LRM

DoLFIn: Distributions over Latent Features for Interpretability
Phong Le, Willem H. Zuidema · 10 Nov 2020 · FAtt

Weakly- and Semi-supervised Evidence Extraction
Danish Pruthi, Bhuwan Dhingra, Graham Neubig, Zachary Chase Lipton · 03 Nov 2020

Measuring Association Between Labels and Free-Text Rationales
Sarah Wiegreffe, Ana Marasović, Noah A. Smith · 24 Oct 2020
Explaining and Improving Model Behavior with k Nearest Neighbor Representations
Nazneen Rajani, Ben Krause, Wenpeng Yin, Tong Niu, R. Socher, Caiming Xiong · 18 Oct 2020 · FAtt
Adaptive Feature Selection for End-to-End Speech Translation
Biao Zhang, Ivan Titov, Barry Haddow, Rico Sennrich · 16 Oct 2020

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Jasmijn Bastings, Katja Filippova · 12 Oct 2020 · XAI, LRM

Weakly Supervised Medication Regimen Extraction from Medical Conversations
Dhruvesh Patel, Sandeep Konam, Sai P. Selvaraj · 11 Oct 2020 · MedIm

Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?
Peter Hase, Shiyue Zhang, Harry Xie, Joey Tianyi Zhou · 08 Oct 2020

Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
Max Glockner, Ivan Habernal, Iryna Gurevych · 07 Oct 2020 · LRM

Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
Hanjie Chen, Yangfeng Ji · 01 Oct 2020 · AAML, VLM

Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
M. Schlichtkrull, Nicola De Cao, Ivan Titov · 01 Oct 2020 · AI4CE

Efficient Marginalization of Discrete and Structured Latent Variables via Sparsity
Gonçalo M. Correia, Vlad Niculae, Wilker Aziz, André F. T. Martins · 03 Jul 2020 · BDL

BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig, Ali Madani, L. Varshney, Caiming Xiong, R. Socher, Nazneen Rajani · 26 Jun 2020

Aligning Faithful Interpretations with their Social Attribution
Alon Jacovi, Yoav Goldberg · 01 Jun 2020

Concept Matching for Low-Resource Classification
Federico Errica, Ludovic Denoyer, Bora Edizel, Fabio Petroni, Vassilis Plachouras, Fabrizio Silvestri, Sebastian Riedel · 01 Jun 2020

Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport
Kyle Swanson, L. Yu, Tao Lei · 27 May 2020 · OT
A multi-component framework for the analysis and design of explainable artificial intelligence
S. Atakishiyev, H. Babiker, Nawshad Farruque, R. Goebel, Myeongjung Kim, M. H. Motallebi, J. Rabelo, T. Syed, O. R. Zaïane · 05 May 2020
05 May 2020
An Information Bottleneck Approach for Controlling Conciseness in
  Rationale Extraction
An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction
Bhargavi Paranjape
Mandar Joshi
John Thickstun
Hannaneh Hajishirzi
Luke Zettlemoyer
20
97
0
01 May 2020
Learning to Faithfully Rationalize by Construction
Learning to Faithfully Rationalize by Construction
Sarthak Jain
Sarah Wiegreffe
Yuval Pinter
Byron C. Wallace
19
158
0
30 Apr 2020
How do Decisions Emerge across Layers in Neural Models? Interpretation
  with Differentiable Masking
How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking
Nicola De Cao
M. Schlichtkrull
Wilker Aziz
Ivan Titov
25
89
0
30 Apr 2020
The Explanation Game: Towards Prediction Explainability through Sparse
  Communication
The Explanation Game: Towards Prediction Explainability through Sparse Communication
Marcos Vinícius Treviso
André F. T. Martins
FAtt
27
3
0
28 Apr 2020
On Sparsifying Encoder Outputs in Sequence-to-Sequence Models
On Sparsifying Encoder Outputs in Sequence-to-Sequence Models
Biao Zhang
Ivan Titov
Rico Sennrich
6
13
0
24 Apr 2020
Invariant Rationalization
Invariant Rationalization
Shiyu Chang
Yang Zhang
Mo Yu
Tommi Jaakkola
182
201
0
22 Mar 2020
ERASER: A Benchmark to Evaluate Rationalized NLP Models
ERASER: A Benchmark to Evaluate Rationalized NLP Models
Jay DeYoung
Sarthak Jain
Nazneen Rajani
Eric P. Lehman
Caiming Xiong
R. Socher
Byron C. Wallace
41
626
0
08 Nov 2019
Making the Best Use of Review Summary for Sentiment Analysis
Making the Best Use of Review Summary for Sentiment Analysis
Sen Yang
Leyang Cui
Jun Xie
Yue Zhang
25
0
0
07 Nov 2019
A Latent Morphology Model for Open-Vocabulary Neural Machine Translation
A Latent Morphology Model for Open-Vocabulary Neural Machine Translation
Duygu Ataman
Wilker Aziz
Alexandra Birch
27
16
0
30 Oct 2019
Rethinking Cooperative Rationalization: Introspective Extraction and
  Complement Control
Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control
Mo Yu
Shiyu Chang
Yang Zhang
Tommi Jaakkola
21
140
0
29 Oct 2019
Structured Pruning of Large Language Models
Structured Pruning of Large Language Models
Ziheng Wang
Jeremy Wohlwend
Tao Lei
24
281
0
10 Oct 2019
What do Deep Networks Like to Read?
What do Deep Networks Like to Read?
Jonas Pfeiffer
Aishwarya Kamath
Iryna Gurevych
Sebastian Ruder
16
3
0
10 Sep 2019
Learning World Graphs to Accelerate Hierarchical Reinforcement Learning
Learning World Graphs to Accelerate Hierarchical Reinforcement Learning
Wenling Shang
Alexander R. Trott
Stephan Zheng
Caiming Xiong
R. Socher
22
18
0
01 Jul 2019
EDUCE: Explaining model Decisions through Unsupervised Concepts
  Extraction
EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction
Diane Bouchacourt
Ludovic Denoyer
FAtt
26
21
0
28 May 2019
A causal framework for explaining the predictions of black-box
  sequence-to-sequence models
A causal framework for explaining the predictions of black-box sequence-to-sequence models
David Alvarez-Melis
Tommi Jaakkola
CML
232
200
0
06 Jul 2017
A Decomposable Attention Model for Natural Language Inference
A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh
Oscar Täckström
Dipanjan Das
Jakob Uszkoreit
213
1,367
0
06 Jun 2016
Learning Attitudes and Attributes from Multi-Aspect Reviews
Learning Attitudes and Attributes from Multi-Aspect Reviews
Julian McAuley
J. Leskovec
Dan Jurafsky
200
296
0
15 Oct 2012