A Consistent and Efficient Evaluation Strategy for Attribution Methods

1 February 2022
Yao Rong, Tobias Leemann, V. Borisov, Gjergji Kasneci, Enkelejda Kasneci
FAtt

Papers citing "A Consistent and Efficient Evaluation Strategy for Attribution Methods"

26 / 26 papers shown

Probabilistic Stability Guarantees for Feature Attributions
Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong
32 · 0 · 0 · 18 Apr 2025

Axiomatic Explainer Globalness via Optimal Transport
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy
107 · 1 · 0 · 13 Mar 2025

Generalizable and Explainable Deep Learning for Medical Image Computing: An Overview
A. Chaddad, Yan Hu, Yihang Wu, Binbin Wen, R. Kateb
61 · 6 · 0 · 11 Mar 2025

Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci
69 · 5 · 0 · 10 Jan 2025

Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger
XAI, ELM
47 · 4 · 0 · 03 Jan 2025

F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI
Xu Zheng, Farhad Shirani, Zhuomin Chen, Chaohao Lin, Wei Cheng, Wenbo Guo, Dongsheng Luo
AAML
38 · 0 · 0 · 03 Oct 2024

Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
XAI
38 · 1 · 0 · 22 Sep 2024

MeLIAD: Interpretable Few-Shot Anomaly Detection with Metric Learning and Entropy-based Scoring
Eirini Cholopoulou, D. Iakovidis
AAML
33 · 0 · 0 · 20 Sep 2024

Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
Sepehr Kamahi, Yadollah Yaghoobzadeh
55 · 0 · 0 · 21 Aug 2024

On the Evaluation Consistency of Attribution-based Explanations
Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Mingli Song
XAI
46 · 1 · 0 · 28 Jul 2024

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
FAtt
39 · 3 · 0 · 16 Jul 2024

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
FAtt, LRM
64 · 5 · 0 · 03 May 2024

RankingSHAP -- Listwise Feature Attribution Explanations for Ranking Models
Maria Heuss, Maarten de Rijke, Avishek Anand
184 · 1 · 0 · 24 Mar 2024

CAManim: Animating end-to-end network activation maps
Emily Kaczmarek, Olivier X. Miguel, Alexa C. Bowie, R. Ducharme, Alysha L. J. Dingwall-Harvey, S. Hawken, Christine M. Armour, Mark C. Walker, Kevin Dick
HAI
37 · 1 · 0 · 19 Dec 2023

CoRTX: Contrastive Framework for Real-time Explanation
Yu-Neng Chuang, Guanchu Wang, Fan Yang, Quan-Gen Zhou, Pushkar Tripathi, Xuanting Cai, Xia Hu
46 · 20 · 0 · 05 Mar 2023

Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation
N. Jethani, A. Saporta, Rajesh Ranganath
FAtt
34 · 11 · 0 · 24 Feb 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
37 · 21 · 0 · 14 Feb 2023

On The Coherence of Quantitative Evaluation of Visual Explanations
Benjamin Vandersmissen, José Oramas
XAI, FAtt
36 · 3 · 0 · 14 Feb 2023

Relational Local Explanations
V. Borisov, Gjergji Kasneci
FAtt
22 · 0 · 0 · 23 Dec 2022

Explainability as statistical inference
Hugo Senetaire, Damien Garreau, J. Frellsen, Pierre-Alexandre Mattei
FAtt
26 · 4 · 0 · 06 Dec 2022

Sensing accident-prone features in urban scenes for proactive driving and accident prevention
Sumit Mishra, Praveenbalaji Rajendran, L. Vecchietti, Dongsoo Har
19 · 13 · 0 · 25 Feb 2022

Deep Neural Networks and Tabular Data: A Survey
V. Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, Gjergji Kasneci
LMTD
49 · 650 · 0 · 05 Oct 2021

Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar, Romain Hennequin, Vincent Guigue
FAtt
33 · 20 · 0 · 26 Apr 2021

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath
FAtt
88 · 69 · 0 · 02 Mar 2021

Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction
Darius Afchar, Romain Hennequin
FAtt, XAI
39 · 16 · 0 · 26 Aug 2020

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
257 · 3,698 · 0 · 28 Feb 2017