Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
Paul Novello, Thomas Fel, David Vigouroux
arXiv 2206.06219 · 13 June 2022 · FAtt
Papers citing "Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure" (21 / 21 papers shown)
Towards Robust and Generalizable Gerchberg Saxton based Physics Inspired Neural Networks for Computer Generated Holography: A Sensitivity Analysis Framework
Ankit Amrutkar, Björn Kampa, Volkmar Schulz, Johannes Stegmaier, Markus Rothermel, Dorit Merhof
30 Apr 2025 · 23 / 0 / 0

Interpreting Object-level Foundation Models via Visual Precision Search
Ruoyu Chen, Siyuan Liang, Jingzhi Li, Shiming Liu, Maosen Li, Zheng Huang, Qichuan Geng, Xiaochun Cao
25 Nov 2024 · FAtt · 82 / 4 / 0

Local vs distributed representations: What is the right basis for interpretability?
Julien Colin, L. Goetschalckx, Thomas Fel, Victor Boutin, Jay Gopal, Thomas Serre, Nuria Oliver
06 Nov 2024 · HAI · 42 / 2 / 0

Unlearning-based Neural Interpretations
Ching Lam Choi, Alexandre Duplessis, Serge Belongie
10 Oct 2024 · FAtt · 54 / 0 / 0

One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability
Gabriel Kasmi, Amandine Brunetto, Thomas Fel, Jayneel Parekh
02 Oct 2024 · AAML, FAtt · 35 / 0 / 0

The Overfocusing Bias of Convolutional Neural Networks: A Saliency-Guided Regularization Approach
David Bertoin, Eduardo Hugo Sanchez, Mehdi Zouitine, Emmanuel Rachelson
25 Sep 2024 · 45 / 0 / 0

Understanding Visual Feature Reliance through the Lens of Complexity
Thomas Fel, Louis Bethune, Andrew Kyle Lampinen, Thomas Serre, Katherine Hermann
08 Jul 2024 · FAtt, CoGe · 40 / 6 / 0

Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models
Hengyi Wang, Shiwei Tan, Hao Wang
18 Jun 2024 · BDL · 46 / 6 / 0

Understanding Inhibition Through Maximally Tense Images
Chris Hamblin, Srijani Saha, Talia Konkle, George Alvarez
08 Jun 2024 · FAtt · 40 / 0 / 0

Feature Accentuation: Revealing 'What' Features Respond to in Natural Images
Christopher Hamblin, Thomas Fel, Srijani Saha, Talia Konkle, George A. Alvarez
15 Feb 2024 · FAtt · 31 / 3 / 0

Path Choice Matters for Clear Attribution in Path Methods
Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
19 Jan 2024 · 18 / 1 / 0

SAfER: Layer-Level Sensitivity Assessment for Efficient and Robust Neural Network Inference
Edouard Yvinec, Arnaud Dapogny, Kévin Bailly, Xavier Fischer
09 Aug 2023 · AAML · 16 / 2 / 0

Saliency strikes back: How filtering out high frequencies improves white-box explanations
Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, R. V. Rullen, Thomas Serre
18 Jul 2023 · FAtt · 32 / 0 / 0

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization
Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, ..., Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre
11 Jun 2023 · FAtt · 24 / 19 / 0

A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Bethune, Léo Andéol, Mathieu Chalvidal, Thomas Serre
11 Jun 2023 · FAtt · 16 / 51 / 0
Assessment of the Reliability of a Model's Decision by Generalizing Attribution to the Wavelet Domain
Gabriel Kasmi, L. Dubus, Yves-Marie Saint Drenan, Philippe Blanc
24 May 2023 · FAtt · 18 / 3 / 0
Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?
Victor Boutin, Thomas Fel, Lakshya Singhal, Rishav Mukherji, Akash Nagaraj, Julien Colin, Thomas Serre
27 Jan 2023 · DiffM · 30 / 6 / 0

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022 · 19 / 103 / 0

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre
15 Feb 2022 · AAML · 26 / 41 / 0

What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Julien Colin, Thomas Fel, Rémi Cadène, Thomas Serre
06 Dec 2021 · 33 / 101 / 0

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre
07 Nov 2021 · MLAU, FAtt, AAML · 120 / 60 / 0