Human Interpretation of Saliency-based Explanation Over Text (arXiv 2201.11569)
27 January 2022
Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu
MILM · XAI · FAtt

Papers citing "Human Interpretation of Saliency-based Explanation Over Text"

31 / 31 papers shown

Natural Language Processing RELIES on Linguistics
Juri Opitz, Shira Wein, Nathan Schneider
AI4CE · 75 · 7 · 0 · 09 May 2024

Can Interpretability Layouts Influence Human Perception of Offensive Sentences?
Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer
49 · 0 · 0 · 01 Mar 2024

Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig
FAtt · 55 · 38 · 0 · 17 Dec 2021

What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Julien Colin, Thomas Fel, Rémi Cadène, Thomas Serre
40 · 102 · 0 · 06 Dec 2021

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre
MLAU · FAtt · AAML · 139 · 61 · 0 · 07 Nov 2021

Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy
124 · 36 · 0 · 15 Oct 2021

Post-hoc Interpretability for Neural NLP: A Survey
Andreas Madsen, Siva Reddy, A. Chandar
XAI · 57 · 228 · 0 · 10 Aug 2021

The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Upol Ehsan, Samir Passi, Q. V. Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl
49 · 88 · 0 · 28 Jul 2021

On the Interaction of Belief Bias and Explanations
Ana Valeria González, Anna Rogers, Anders Søgaard
FAtt · 45 · 19 · 0 · 29 Jun 2021

A Survey on Neural Network Interpretability
Yu Zhang, Peter Tiño, A. Leonardis, K. Tang
FaML · XAI · 173 · 671 · 0 · 28 Dec 2020

Challenging common interpretability assumptions in feature attribution explanations
Jonathan Dinu, Jeffrey P. Bigham, J. Z. K. Unaffiliated
43 · 14 · 0 · 04 Dec 2020

A Survey on the Explainability of Supervised Machine Learning
Nadia Burkart, Marco F. Huber
FaML · XAI · 48 · 761 · 0 · 16 Nov 2020

Gradient-based Analysis of NLP Models is Manipulable
Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh
AAML · FAtt · 47 · 58 · 0 · 12 Oct 2020

The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models
Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, ..., Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, Ann Yuan
VLM · 113 · 192 · 0 · 12 Aug 2020

Aligning Faithful Interpretations with their Social Attribution
Alon Jacovi, Yoav Goldberg
48 · 106 · 0 · 01 Jun 2020

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi, Yoav Goldberg
XAI · 84 · 588 · 0 · 07 Apr 2020

Attention is not not Explanation
Sarah Wiegreffe, Yuval Pinter
XAI · AAML · FAtt · 73 · 901 · 0 · 13 Aug 2019

A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
Erico Tjoa, Cuntai Guan
XAI · 89 · 1,427 · 0 · 17 Jul 2019

Saliency Maps Generation for Automatic Text Summarization
David Tuckey, Krysia Broda, A. Russo
FAtt · 25 · 3 · 0 · 12 Jul 2019

WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, Francisco Guzmán
CVBM · 93 · 404 · 0 · 10 Jul 2019

What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play
Shi Feng, Jordan L. Boyd-Graber
HAI · 33 · 128 · 0 · 23 Oct 2018

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt · AAML · XAI · 123 · 1,947 · 0 · 08 Oct 2018

Local Rule-Based Explanations of Black Box Decision Systems
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, D. Pedreschi, Franco Turini, F. Giannotti
118 · 436 · 0 · 28 May 2018

The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
FAtt · XAI · 89 · 683 · 0 · 02 Nov 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
XAI · 232 · 4,229 · 0 · 22 Jun 2017

Explaining Recurrent Neural Network Predictions in Sentiment Analysis
L. Arras, G. Montavon, K. Müller, Wojciech Samek
FAtt · 50 · 354 · 0 · 22 Jun 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD · FAtt · 149 · 5,920 · 0 · 04 Mar 2017

"What is Relevant in a Text Document?": An Interpretable Machine
  Learning Approach
"What is Relevant in a Text Document?": An Interpretable Machine Learning Approach
L. Arras
F. Horn
G. Montavon
K. Müller
Wojciech Samek
61
288
0
23 Dec 2016
Rationalizing Neural Predictions
Tao Lei, Regina Barzilay, Tommi Jaakkola
97 · 809 · 0 · 13 Jun 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
772
16,828
0
16 Feb 2016
SentiWords: Deriving a High Precision and High Coverage Lexicon for Sentiment Analysis
Lorenzo Gatti, Marco Guerini, Marco Turchi
23 · 109 · 0 · 30 Oct 2015