What You See is What You Classify: Black Box Attributions
Steven Stalder, Nathanael Perraudin, R. Achanta, Fernando Perez-Cruz, Michele Volpi
arXiv: 2205.11266, 23 May 2022
Tags: FAtt
Papers citing "What You See is What You Classify: Black Box Attributions" (7 papers):

Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector
Xianren Zhang, Dongwon Lee, Suhang Wang
Tags: VLM, FAtt
27 Jul 2024

Explainable Image Recognition via Enhanced Slot-attention Based Classifier
Bowen Wang, Liangzhi Li, Jiahao Zhang, Yuta Nakashima, Hajime Nagahara
Tags: OCL
08 Jul 2024

Challenges and Opportunities in Text Generation Explainability
Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady
Tags: SILM
14 May 2024

Q-SENN: Quantized Self-Explaining Neural Networks
Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn
Tags: FAtt, AAML, MILM
21 Dec 2023

Counterfactual Image Generation for adversarially robust and interpretable Classifiers
Rafael Bischof, F. Scheidegger, Michael A. Kraus, A. Malossi
Tags: AAML
01 Oct 2023

From Classification to Segmentation with Explainable AI: A Study on Crack Detection and Growth Monitoring
Florent Forest, Hugo Porta, D. Tuia, Olga Fink
20 Sep 2023

A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics
Naveed Akhtar
Tags: XAI, VLM
31 Jan 2023