Is Sparse Attention more Interpretable?
Clara Meister, Stefan Lazov, Isabelle Augenstein, Ryan Cotterell
arXiv:2106.01087, 2 June 2021
Papers citing "Is Sparse Attention more Interpretable?" (12 of 12 papers shown)
Hierarchical Attention Network for Interpretable ECG-based Heart Disease Classification
Mario Padilla Rodriguez, Mohamed Nafea. 25 Mar 2025.
DreamStory: Open-Domain Story Visualization by LLM-Guided Multi-Subject Consistent Diffusion
Huiguo He, Huan Yang, Zixi Tuo, Yuan Zhou, Qiuyue Wang, Yuhang Zhang, Zeyu Liu, Wenhao Huang, Hongyang Chao, Jian Yin. 17 Jul 2024.
Sparse Autoencoders Find Highly Interpretable Features in Language Models
Hoagy Cunningham, Aidan Ewart, Logan Riggs, R. Huben, Lee Sharkey. 15 Sep 2023.
Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop Fact Verification
Jiasheng Si, Yingjie Zhu, Deyu Zhou. 16 May 2023.
Exploring Faithful Rationale for Multi-hop Fact Verification via Salience-Aware Graph Learning
Jiasheng Si, Yingjie Zhu, Deyu Zhou. 02 Dec 2022.
Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods
Josip Jukić, Martin Tutek, Jan Šnajder. 15 Nov 2022.
Exploring Self-Attention for Crop-type Classification Explainability
Ivica Obadic, R. Roscher, Dario Augusto Borges Oliveira, Xiao Xiang Zhu. 24 Oct 2022.
Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy. 15 Oct 2021.
How Does Adversarial Fine-Tuning Benefit BERT?
J. Ebrahimi, Hao Yang, Wei Zhang. 31 Aug 2021.
Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training
Shunsuke Kitada, Hitoshi Iyatomi. 18 Apr 2021.
e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom. 04 Dec 2018.
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim. 28 Feb 2017.