Cited By (arXiv 2105.14944)
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Giang Nguyen, Daeyoung Kim, Anh Totti Nguyen | FAtt | 31 May 2021
Papers citing "The effectiveness of feature attribution methods and its correlation with automatic evaluation scores" (22 papers shown)
Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users
Julian Rosenberger, Philipp Schröppel, Sven Kruschel, Mathias Kraus, Patrick Zschech, Maximilian Förster | FAtt | 11 May 2025

Interactive Medical Image Analysis with Concept-based Similarity Reasoning
Ta Duc Huy, Sen Kim Tran, Phan Nguyen, Nguyen Hoang Tran, Tran Bao Sam, Anton Van Den Hengel, Zhibin Liao, Johan W. Verjans, Minh Nguyen Nhat To, Vu Minh Hieu Phan | 10 Mar 2025

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis | 06 Feb 2025

On the Evaluation Consistency of Attribution-based Explanations
Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Mingli Song | XAI | 28 Jul 2024

Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger | XAI, FAtt | 11 Jun 2024

Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova | 02 May 2024

Interpretability-Aware Vision Transformer
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu | ViT | 14 Sep 2023

FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse, Simone Schaub-Meyer, Stefan Roth | AAML | 11 Aug 2023

Precise Benchmarking of Explainable AI Attribution Methods
Rafael Brandt, Daan Raatjens, G. Gaydadjiev | XAI | 06 Aug 2023

In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
Raymond Fok, Daniel S. Weld | 12 May 2023

Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning
Yu Yang, Besmira Nushi, Hamid Palangi, Baharan Mirzasoleiman | 08 Apr 2023

Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction
Patrick Hemmer, Monika Westphal, Max Schemmer, S. Vetter, Michael Vossing, G. Satzger | 16 Mar 2023

Learning Human-Compatible Representations for Case-Based Decision Support
Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, Chenhao Tan | 06 Mar 2023

Overcoming Catastrophic Forgetting by XAI
Giang Nguyen | 25 Nov 2022

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre | 17 Nov 2022

Visual correspondence-based explanations improve AI robustness and human-AI team accuracy
Giang Nguyen, Mohammad Reza Taesiri, Anh Totti Nguyen | 26 Jul 2022

How explainable are adversarially-robust CNNs?
Mehdi Nourelahi, Lars Kotthoff, Peijie Chen, Anh Totti Nguyen | AAML, FAtt | 25 May 2022

A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making
Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vossing | 10 May 2022

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre | AAML | 15 Feb 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky | 06 Dec 2021

Explaining Latent Representations with a Corpus of Examples
Jonathan Crabbé, Zhaozhi Qian, F. Imrie, M. Schaar | FAtt | 28 Oct 2021

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim | XAI, FaML | 28 Feb 2017