This Reads Like That: Deep Learning for Interpretable Natural Language Processing

25 October 2023
Claudio Fanconi, Moritz Vandenhirtz, Severin Husmann, Julia E. Vogt
FAtt
arXiv: 2310.17010 (abs · PDF · HTML)

Papers citing "This Reads Like That: Deep Learning for Interpretable Natural Language Processing"

21 / 21 papers shown
SGPT: GPT Sentence Embeddings for Semantic Search
Niklas Muennighoff
RALM
158 · 189 · 0
17 Feb 2022
This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler
59 · 63 · 0
05 May 2021
Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation
Francisco Vargas, Ryan Cotterell
69 · 29 · 0
20 Sep 2020
Aligning Faithful Interpretations with their Social Attribution
Alon Jacovi, Yoav Goldberg
62 · 106 · 0
01 Jun 2020
Quantifying Attention Flow in Transformers
Samira Abnar, Willem H. Zuidema
169 · 803 · 0
02 May 2020
Learning to Faithfully Rationalize by Construction
Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace
82 · 164 · 0
30 Apr 2020
MPNet: Masked and Permuted Pre-training for Language Understanding
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
111 · 1,138 · 0
20 Apr 2020
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf
262 · 7,554 · 0
02 Oct 2019
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Nils Reimers, Iryna Gurevych
1.3K · 12,316 · 0
27 Aug 2019
Interpretable and Steerable Sequence Learning via Prototypes
Yao Ming, Panpan Xu, Huamin Qu, Liu Ren
AI4TS
63 · 141 · 0
23 Jul 2019
Interpretable Neural Predictions with Differentiable Binary Variables
Jasmijn Bastings, Wilker Aziz, Ivan Titov
89 · 214 · 0
20 May 2019
Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Wieland Brendel, Matthias Bethge
SSL · FAtt
109 · 561 · 0
20 Mar 2019
Attention is not Explanation
Sarthak Jain, Byron C. Wallace
FAtt
148 · 1,330 · 0
26 Feb 2019
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller
104 · 1,021 · 0
26 Feb 2019
This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin
255 · 1,187 · 0
27 Jun 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
FAtt
246 · 1,849 · 0
30 Nov 2017
Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions
Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin
193 · 593 · 0
13 Oct 2017
Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
MILM · FAtt
158 · 1,526 · 1
19 Apr 2017
Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD · FAtt
193 · 6,027 · 0
04 Mar 2017
Understanding Neural Networks through Representation Erasure
Jiwei Li, Will Monroe, Dan Jurafsky
AAML · MILM
105 · 567 · 0
24 Dec 2016
Neural Machine Translation by Jointly Learning to Align and Translate
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
AIMat
580 · 27,338 · 0
01 Sep 2014