arXiv: 2310.17010
This Reads Like That: Deep Learning for Interpretable Natural Language Processing
Claudio Fanconi, Moritz Vandenhirtz, Severin Husmann, Julia E. Vogt
25 October 2023
Tags: FAtt
Papers citing "This Reads Like That: Deep Learning for Interpretable Natural Language Processing" (21 of 21 papers shown)
| Title | Authors | Tags | Metrics | Date |
|---|---|---|---|---|
| SGPT: GPT Sentence Embeddings for Semantic Search | Niklas Muennighoff | RALM | 158 / 189 / 0 | 17 Feb 2022 |
| This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks | Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler | — | 59 / 63 / 0 | 05 May 2021 |
| Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation | Francisco Vargas, Ryan Cotterell | — | 71 / 29 / 0 | 20 Sep 2020 |
| Aligning Faithful Interpretations with their Social Attribution | Alon Jacovi, Yoav Goldberg | — | 62 / 106 / 0 | 01 Jun 2020 |
| Quantifying Attention Flow in Transformers | Samira Abnar, Willem H. Zuidema | — | 169 / 803 / 0 | 02 May 2020 |
| Learning to Faithfully Rationalize by Construction | Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace | — | 82 / 164 / 0 | 30 Apr 2020 |
| MPNet: Masked and Permuted Pre-training for Language Understanding | Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu | — | 111 / 1,138 / 0 | 20 Apr 2020 |
| DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf | — | 262 / 7,554 / 0 | 02 Oct 2019 |
| Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | Nils Reimers, Iryna Gurevych | — | 1.3K / 12,316 / 0 | 27 Aug 2019 |
| Interpretable and Steerable Sequence Learning via Prototypes | Yao Ming, Panpan Xu, Huamin Qu, Liu Ren | AI4TS | 63 / 141 / 0 | 23 Jul 2019 |
| Interpretable Neural Predictions with Differentiable Binary Variables | Jasmijn Bastings, Wilker Aziz, Ivan Titov | — | 89 / 214 / 0 | 20 May 2019 |
| Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet | Wieland Brendel, Matthias Bethge | SSL, FAtt | 109 / 561 / 0 | 20 Mar 2019 |
| Attention is not Explanation | Sarthak Jain, Byron C. Wallace | FAtt | 148 / 1,330 / 0 | 26 Feb 2019 |
| Unmasking Clever Hans Predictors and Assessing What Machines Really Learn | Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller | — | 104 / 1,021 / 0 | 26 Feb 2019 |
| This Looks Like That: Deep Learning for Interpretable Image Recognition | Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin | — | 255 / 1,187 / 0 | 27 Jun 2018 |
| Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) | Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres | FAtt | 248 / 1,849 / 0 | 30 Nov 2017 |
| Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions | Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin | — | 193 / 593 / 0 | 13 Oct 2017 |
| Network Dissection: Quantifying Interpretability of Deep Visual Representations | David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba | MILM, FAtt | 158 / 1,526 / 1 | 19 Apr 2017 |
| Axiomatic Attribution for Deep Networks | Mukund Sundararajan, Ankur Taly, Qiqi Yan | OOD, FAtt | 193 / 6,027 / 0 | 04 Mar 2017 |
| Understanding Neural Networks through Representation Erasure | Jiwei Li, Will Monroe, Dan Jurafsky | AAML, MILM | 105 / 567 / 0 | 24 Dec 2016 |
| Neural Machine Translation by Jointly Learning to Align and Translate | Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio | AIMat | 580 / 27,338 / 0 | 01 Sep 2014 |