ResearchTrend.AI

Scrutinizing XAI using linear ground-truth data with suppressor variables
arXiv:2111.07473 (v2, latest)

14 November 2021
Rick Wilming, Céline Budding, K. Müller, Stefan Haufe
FAtt

Papers citing "Scrutinizing XAI using linear ground-truth data with suppressor variables"

12 papers shown
Minimizing False-Positive Attributions in Explanations of Non-Linear Models
Anders Gjølbye, Stefan Haufe, Lars Kai Hansen
16 May 2025 · 251 / 0 / 0
Lost in Context: The Influence of Context on Feature Attribution Methods for Object Recognition
Sayanta Adhikari, Rishav Kumar, Konda Reddy Mopuri, Rajalakshmi Pachamuthu
05 Nov 2024 · 89 / 0 / 0
Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
XAI · 22 Sep 2024 · 77 / 2 / 0
GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations
Rick Wilming, Artur Dox, Hjalmar Schulz, Marta Oliveira, Benedict Clark, Stefan Haufe
17 Jun 2024 · 100 / 2 / 0
EXACT: Towards a platform for empirically benchmarking Machine Learning model explanation methods
Benedict Clark, Rick Wilming, Artur Dox, Paul Eschenbach, Sami Hached, ..., Hjalmar Schulz, Luca Matteo Cornils, Danny Panknin, Ahcène Boubekki, Stefan Haufe
20 May 2024 · 41 / 0 / 0
What's meant by explainable model: A Scoping Review
Mallika Mainali, Rosina O. Weber
XAI · 18 Jul 2023 · 52 / 0 / 0
XAI-TRIS: Non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance
Benedict Clark, Rick Wilming, Stefan Haufe
22 Jun 2023 · 94 / 5 / 0
Benchmark data to study the influence of pre-training on explanation performance in MR image classification
Marta Oliveira, Rick Wilming, Benedict Clark, Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe
21 Jun 2023 · 55 / 1 / 0
A Lightweight Generative Model for Interpretable Subject-level Prediction
C. Mauri, Stefano Cerri, Oula Puonti, Mark Muhlau, Koen van Leemput
MedIm, AI4CE · 19 Jun 2023 · 95 / 0 / 0
Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables
Rick Wilming, Leo Kieslich, Benedict Clark, Stefan Haufe
02 Jun 2023 · 74 / 10 / 0
Black Box Model Explanations and the Human Interpretability Expectations -- An Analysis in the Context of Homicide Prediction
José Ribeiro, Nikolas Carneiro, Ronnie Cley de Oliveira Alves
19 Oct 2022 · 45 / 0 / 0
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.3K
17,178
0
16 Feb 2016