arXiv:2202.06861
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
14 February 2022
Anna Hedström
Leander Weber
Dilyara Bareeva
Daniel G. Krakowczyk
Franz Motzkus
Wojciech Samek
Sebastian Lapuschkin
Marina M.-C. Höhne
XAI
ELM
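For orientation, the Quantus toolkit named in the title evaluates the quality of neural network explanations via quantitative metrics. Below is a minimal, hedged sketch of running one such metric (Max-Sensitivity) with the quantus package; the toy model, random placeholder data, and exact keyword arguments are illustrative assumptions and should be checked against the installed Quantus version and its documentation.

```python
import numpy as np
import torch
import torch.nn as nn
import quantus

# Toy classifier standing in for a trained model (hypothetical;
# any torch.nn.Module image classifier would do).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
).eval()

x_batch = np.random.rand(4, 3, 32, 32).astype(np.float32)  # placeholder images
y_batch = np.random.randint(0, 10, size=4)                 # placeholder labels

# Max-Sensitivity: how strongly does the explanation change under
# small random perturbations of the input? Lower scores mean more robust.
metric = quantus.MaxSensitivity(nr_samples=10)

scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=None,                     # let Quantus compute attributions on the fly
    device="cpu",
    explain_func=quantus.explain,     # built-in explanation wrapper (needs e.g. captum)
    explain_func_kwargs={"method": "Saliency"},
)
print(scores)  # one robustness score per input sample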
Papers citing "Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond"
38 / 38 papers shown
Evaluating Explanation Quality in X-IDS Using Feature Alignment Metrics
Mohammed Alquliti
Erisa Karafili
BooJoong Kang
XAI
29
0
0
12 May 2025
ODExAI: A Comprehensive Object Detection Explainable AI Evaluation
Loc Phuc Truong Nguyen
Hung Truong Thanh Nguyen
Hung Cao
68
0
0
27 Apr 2025
Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps
Lukasz Sztukiewicz
Ignacy Stepka
Michał Wiliński
Jerzy Stefanowski
33
0
0
28 Feb 2025
Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad
Thomas Bonnier
Benjamin Bosch
Sonali Parbhoo
Jesse Read
FAtt
XAI
103
0
0
11 Feb 2025
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein
Carsten T. Lüth
U. Schlegel
Till J. Bungert
Mennatallah El-Assady
Paul F. Jäger
XAI
ELM
42
2
0
03 Jan 2025
A Tale of Two Imperatives: Privacy and Explainability
Supriya Manna
Niladri Sett
94
0
0
30 Dec 2024
Study on the Helpfulness of Explainable Artificial Intelligence
Tobias Labarta
Elizaveta Kulicheva
Ronja Froelian
Christian Geißler
Xenia Melman
Julian von Klitzing
ELM
31
0
0
14 Oct 2024
Explainable AI needs formal notions of explanation correctness
Stefan Haufe
Rick Wilming
Benedict Clark
Rustam Zhumagambetov
Danny Panknin
Ahcène Boubekki
XAI
31
1
0
22 Sep 2024
Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha
Khang Lam
Joseph Wood
Ali AlShami
Jugal Kalita
XAI
AI4TS
67
28
0
30 Aug 2024
On the Evaluation Consistency of Attribution-based Explanations
Jiarui Duan
Haoling Li
Haofei Zhang
Hao Jiang
Mengqi Xue
Li Sun
Mingli Song
Jie Song
XAI
46
0
0
28 Jul 2024
Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI
Qi Huang
Emanuele Mezzi
Osman Mutlu
Miltiadis Kofinas
Vidya Prasad
Shadnan Azwad Khan
Elena Ranguelova
N. V. Stein
45
0
0
17 Jul 2024
Benchmarking the Attribution Quality of Vision Models
Robin Hesse
Simone Schaub-Meyer
Stefan Roth
FAtt
34
3
0
16 Jul 2024
Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh
Quentin Bouniot
Pavlo Mozharovskyi
A. Newson
Florence d'Alché-Buc
SSL
61
1
0
01 Jul 2024
Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
Lokesh Badisa
Sumohana S. Channappayya
42
0
0
17 Jun 2024
From Latent to Lucid: Transforming Knowledge Graph Embeddings into Interpretable Structures with KGEPrisma
Christoph Wehner
Chrysa Iliopoulou
Ute Schmid
Tarek R. Besold
58
0
0
03 Jun 2024
Listenable Maps for Zero-Shot Audio Classifiers
Francesco Paissan
Luca Della Libera
Mirco Ravanelli
Cem Subakan
32
4
0
27 May 2024
A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström
Leander Weber
Sebastian Lapuschkin
Marina M.-C. Höhne
FAtt
LRM
37
5
0
03 May 2024
Sparse Explanations of Neural Networks Using Pruned Layer-Wise Relevance Propagation
Paulo Yanez Sarmiento
Simon Witzke
Nadja Klein
Bernhard Y. Renard
FAtt
AAML
40
0
0
22 Apr 2024
Global Counterfactual Directions
Bartlomiej Sobieski
P. Biecek
DiffM
58
5
0
18 Apr 2024
Feature Attribution with Necessity and Sufficiency via Dual-stage Perturbation Test for Causal Explanation
Xuexin Chen
Ruichu Cai
Zhengting Huang
Yuxuan Zhu
Julien Horwood
Zhifeng Hao
Zijian Li
Jose Miguel Hernandez-Lobato
AAML
36
2
0
13 Feb 2024
Respect the model: Fine-grained and Robust Explanation with Sharing Ratio Decomposition
Sangyu Han
Yearim Kim
Nojun Kwak
AAML
26
1
0
25 Jan 2024
Explainable Bayesian Optimization
Tanmay Chakraborty
Christin Seifert
Christian Wirth
55
5
0
24 Jan 2024
A comprehensive study on fidelity metrics for XAI
Miquel Miró-Nicolau
Antoni Jaume-i-Capó
Gabriel Moyà Alcover
33
11
0
19 Jan 2024
Prototypical Self-Explainable Models Without Re-training
Srishti Gautam
Ahcène Boubekki
Marina M.-C. Höhne
Michael C. Kampffmeyer
26
2
0
13 Dec 2023
FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse
Simone Schaub-Meyer
Stefan Roth
AAML
37
32
0
11 Aug 2023
A Vulnerability of Attribution Methods Using Pre-Softmax Scores
Miguel A. Lerma
Mirtha Lucas
FAtt
19
0
0
06 Jul 2023
Quantitative Analysis of Primary Attribution Explainable Artificial Intelligence Methods for Remote Sensing Image Classification
Akshatha Mohan
Joshua Peeples
24
4
0
06 Jun 2023
Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
Xiao-lan Wu
P. Bell
A. Rajan
19
5
0
29 May 2023
Towards Evaluating Explanations of Vision Transformers for Medical Imaging
Piotr Komorowski
Hubert Baniecki
P. Biecek
MedIm
33
27
0
12 Apr 2023
Explainable AI for Time Series via Virtual Inspection Layers
Johanna Vielhaben
Sebastian Lapuschkin
G. Montavon
Wojciech Samek
XAI
AI4TS
12
25
0
11 Mar 2023
Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang
Felipe Torres
R. Sicre
Yannis Avrithis
Stéphane Ayache
30
22
0
17 Jan 2023
A clinically motivated self-supervised approach for content-based image retrieval of CT liver images
Kristoffer Wickstrøm
Eirik Agnalt Ostmo
Keyur Radiya
Karl Øyvind Mikalsen
Michael C. Kampffmeyer
Robert Jenssen
SSL
23
13
0
11 Jul 2022
Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
Liu Zhendong
Wenyu Jiang
Yan Zhang
Chongjun Wang
CML
6
0
0
22 Jun 2022
Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations
Teodor Chiaburu
F. Biessmann
Frank Haußer
32
2
0
15 Jun 2022
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel
Mélanie Ducoffe
David Vigouroux
Rémi Cadène
Mikael Capelle
C. Nicodeme
Thomas Serre
AAML
23
41
0
15 Feb 2022
Framework for Evaluating Faithfulness of Local Explanations
S. Dasgupta
Nave Frost
Michal Moshkovitz
FAtt
111
61
0
01 Feb 2022
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders
David Neumann
Wojciech Samek
K. Müller
Sebastian Lapuschkin
29
64
0
24 Jun 2021
Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon
Wojciech Samek
K. Müller
FaML
234
2,238
0
24 Jun 2017