Attention is not not Explanation

13 August 2019
Sarah Wiegreffe, Yuval Pinter
Topics: XAI, AAML, FAtt

Papers citing "Attention is not not Explanation"

37 / 37 papers shown
Each entry lists the paper title, its authors, topic tags where assigned, the site's three per-paper counters, and the publication date.

1. LiDDA: Data Driven Attribution at LinkedIn
   John Bencina, Erkut Aykutlug, Yue Chen, Zerui Zhang, Stephanie Sorenson, Shao Tang, Changshuai Wei
   43 · 0 · 0 · 14 May 2025
2. Interpretable High-order Knowledge Graph Neural Network for Predicting Synthetic Lethality in Human Cancers
   Xuexin Chen, Ruichu Cai, Zhengting Huang, Zijian Li, Jie Zheng, Min Wu
   59 · 0 · 0 · 08 Mar 2025
3. Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive Fine-tuning
   Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Duc Nguyen, Toan M. Tran, David Hall, Cheongwoong Kang, Jaesik Choi
   82 · 1 · 0 · 03 Mar 2025
4. RTBAS: Defending LLM Agents Against Prompt Injection and Privacy Leakage
   Peter Yong Zhong, Siyuan Chen, Ruiqi Wang, McKenna McCall, Ben L. Titzer, Heather Miller, Phillip B. Gibbons
   Topics: LLMAG
   109 · 6 · 0 · 17 Feb 2025
5. Exploring Translation Mechanism of Large Language Models
   Hongbin Zhang, Kehai Chen, Xuefeng Bai, Xiucheng Li, Yang Xiang, Min Zhang
   107 · 1 · 0 · 17 Feb 2025
6. Self-Explaining Hypergraph Neural Networks for Diagnosis Prediction
   Leisheng Yu, Yanxiao Cai, Minxing Zhang, Xia Hu
   Topics: FAtt
   339 · 0 · 0 · 15 Feb 2025
7. Making Sense Of Distributed Representations With Activation Spectroscopy
   Kyle Reing, Greg Ver Steeg, Aram Galstyan
   41 · 0 · 0 · 28 Jan 2025
8. A Study of the Plausibility of Attention between RNN Encoders in Natural Language Inference
   Duc Hau Nguyen, Pascale Sébillot
   71 · 5 · 0 · 23 Jan 2025
9. Regularization, Semi-supervision, and Supervision for a Plausible Attention-Based Explanation
   Duc Hau Nguyen, Cyrielle Mallart, Guillaume Gravier, Pascale Sébillot
   85 · 0 · 0 · 22 Jan 2025
10. Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
    Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci
    92 · 5 · 0 · 10 Jan 2025
11. Automated Trustworthiness Oracle Generation for Machine Learning Text Classifiers
    Lam Nguyen Tung, Steven Cho, Xiaoning Du, Neelofar Neelofar, Valerio Terragni, Stefano Ruberto, Aldeida Aleti
    395 · 2 · 0 · 30 Oct 2024
12. Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination
    Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Boxing Chen, Sarath Chandar
    94 · 0 · 0 · 22 Oct 2024
13. Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers
    Shijie Chen, Bernal Jiménez Gutiérrez, Yu Su
    57 · 4 · 0 · 03 Oct 2024
14. Enhancing elusive clues in knowledge learning by contrasting attention of language models
    Jian Gao, Xiao Zhang, Ji Wu, Miao Li
    69 · 0 · 0 · 26 Sep 2024
15. Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
    Sepehr Kamahi, Yadollah Yaghoobzadeh
    64 · 0 · 0 · 21 Aug 2024
16. Machine learning surrogates for efficient hydrologic modeling: Insights from stochastic simulations of managed aquifer recharge
    Timothy Dai, Kate Maher, Z. Perzan
    55 · 1 · 0 · 30 Jul 2024
17. Explanation Regularisation through the Lens of Attributions
    Pedro Ferreira, Wilker Aziz, Ivan Titov
    99 · 1 · 0 · 23 Jul 2024
18. CAVE: Controllable Authorship Verification Explanations
    Sahana Ramnath, Kartik Pandey, Elizabeth Boschee, Xiang Ren
    77 · 2 · 0 · 24 Jun 2024
19. What Do VLMs NOTICE? A Mechanistic Interpretability Pipeline for Gaussian-Noise-free Text-Image Corruption and Evaluation
    Michal Golovanevsky, William Rudman, Vedant Palit, Ritambhara Singh, Carsten Eickhoff
    84 · 1 · 0 · 24 Jun 2024
20. MambaLRP: Explaining Selective State Space Sequence Models
    F. Jafari, G. Montavon, Klaus-Robert Müller, Oliver Eberle
    Topics: Mamba
    145 · 9 · 0 · 11 Jun 2024
21. On the Challenges and Opportunities in Generative AI
    Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Daubener, ..., F. Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
    128 · 18 · 0 · 28 Feb 2024
22. ALMANACS: A Simulatability Benchmark for Language Model Explainability
    Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons
    84 · 7 · 0 · 20 Dec 2023
23. Exploring Self-Attention for Crop-type Classification Explainability
    Ivica Obadic, R. Roscher, Dario Augusto Borges Oliveira, Xiao Xiang Zhu
    61 · 7 · 0 · 24 Oct 2022
24. Is Attention Interpretable?
    Sofia Serrano, Noah A. Smith
    72 · 679 · 0 · 09 Jun 2019
25. Generating Token-Level Explanations for Natural Language Inference
    James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal
    Topics: LRM
    40 · 57 · 0 · 24 Apr 2019
26. Attention is not Explanation
    Sarthak Jain, Byron C. Wallace
    Topics: FAtt
    83 · 1,307 · 0 · 26 Feb 2019
27. Human-Centered Artificial Intelligence and Machine Learning
    Mark O. Riedl
    Topics: SyDa
    102 · 264 · 0 · 31 Jan 2019
28. Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions
    Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl
    76 · 239 · 0 · 11 Jan 2019
29. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead
    Cynthia Rudin
    Topics: ELM, FaML
    43 · 219 · 0 · 26 Nov 2018
30. Explainable Prediction of Medical Codes from Clinical Text
    J. Mullenbach, Sarah Wiegreffe, J. Duke, Jimeng Sun, Jacob Eisenstein
    Topics: FAtt
    49 · 571 · 0 · 15 Feb 2018
31. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
    A. Ross, M. C. Hughes, Finale Doshi-Velez
    Topics: FAtt
    96 · 585 · 0 · 10 Mar 2017
32. Towards A Rigorous Science of Interpretable Machine Learning
    Finale Doshi-Velez, Been Kim
    Topics: XAI, FaML
    341 · 3,742 · 0 · 28 Feb 2017
33. Rationalizing Neural Predictions
    Tao Lei, Regina Barzilay, Tommi Jaakkola
    81 · 807 · 0 · 13 Jun 2016
34. The Mythos of Model Interpretability
    Zachary Chase Lipton
    Topics: FaML
    106 · 3,672 · 0 · 10 Jun 2016
35. Reasoning about Entailment with Neural Attention
    Tim Rocktaschel, Edward Grefenstette, Karl Moritz Hermann, Tomás Kociský, Phil Blunsom
    Topics: NAI
    40 · 760 · 0 · 22 Sep 2015
36. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
    Ke Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, R. Zemel, Yoshua Bengio
    Topics: DiffM
    273 · 10,034 · 0 · 10 Feb 2015
37. Neural Machine Translation by Jointly Learning to Align and Translate
    Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
    Topics: AIMat
    369 · 27,205 · 0 · 01 Sep 2014