Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis

15 February 2022
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre
Tags: AAML

Papers citing "Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis"

32 papers shown

Towards Robust and Generalizable Gerchberg Saxton based Physics Inspired Neural Networks for Computer Generated Holography: A Sensitivity Analysis Framework
Ankit Amrutkar, Björn Kampa, Volkmar Schulz, Johannes Stegmaier, Markus Rothermel, Dorit Merhof
30 Apr 2025

Tokenize Image Patches: Global Context Fusion for Effective Haze Removal in Large Images
Jiuchen Chen, Xinyu Yan, Qizhi Xu, Kaiqi Li
Tags: VLM
13 Apr 2025

Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes
Nhi Pham, Bernt Schiele, Adam Kortylewski, Jonas Fischer
17 Mar 2025

Exploring Channel Distinguishability in Local Neighborhoods of the Model Space in Quantum Neural Networks
Sabrina Herbst, S. S. Cranganore, Vincenzo De Maio, Ivona Brandić
17 Feb 2025

OMENN: One Matrix to Explain Neural Networks
Adam Wróbel, Mikołaj Janusz, Bartosz Zieliński, Dawid Rymarczyk
Tags: FAtt, AAML
03 Dec 2024

SPES: Spectrogram Perturbation for Explainable Speech-to-Text Generation
Dennis Fucci, Marco Gaido, Beatrice Savoldi, Matteo Negri, Mauro Cettolo, L. Bentivogli
03 Nov 2024

Explainable Image Recognition via Enhanced Slot-attention Based Classifier
Bowen Wang, Liangzhi Li, Jiahao Zhang, Yuta Nakashima, Hajime Nagahara
Tags: OCL
08 Jul 2024

Mitigating Low-Frequency Bias: Feature Recalibration and Frequency Attention Regularization for Adversarial Robustness
Kejia Zhang, Juanjuan Weng, Yuanzheng Cai, Zhiming Luo, Shaozi Li
Tags: AAML
04 Jul 2024

A Study of Nationality Bias in Names and Perplexity using Off-the-Shelf Affect-related Tweet Classifiers
Valentin Barriere, Sebastian Cifuentes
01 Jul 2024

Understanding Inhibition Through Maximally Tense Images
Chris Hamblin, Srijani Saha, Talia Konkle, George Alvarez
Tags: FAtt
08 Jun 2024

Local vs. Global Interpretability: A Computational Complexity Perspective
Shahaf Bassan, Guy Amir, Guy Katz
05 Jun 2024

Tensor Polynomial Additive Model
Yang Chen, Ce Zhu, Jiani Liu, Yipeng Liu
Tags: TPM
05 Jun 2024

Feature Accentuation: Revealing 'What' Features Respond to in Natural Images
Christopher Hamblin, Thomas Fel, Srijani Saha, Talia Konkle, George A. Alvarez
Tags: FAtt
15 Feb 2024

Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks
Stefan Blücher, Johanna Vielhaben, Nils Strodthoff
Tags: AAML
12 Jan 2024

Keep the Faith: Faithful Explanations in Convolutional Neural Networks for Case-Based Reasoning
Tom Nuno Wolf, Fabian Bongratz, Anne-Marie Rickmann, Sebastian Pölsterl, Christian Wachinger
Tags: AAML, FAtt
15 Dec 2023

Deep Natural Language Feature Learning for Interpretable Prediction
Felipe Urrutia, Cristian Buc, Valentin Barriere
09 Nov 2023

Natural Example-Based Explainability: a Survey
Antonin Poché, Lucas Hervier, M. Bakkay
Tags: XAI
05 Sep 2023

Formally Explaining Neural Networks within Reactive Systems
Shahaf Bassan, Guy Amir, Davide Corsi, Idan Refaeli, Guy Katz
Tags: AAML
31 Jul 2023

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization
Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, ..., Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre
Tags: FAtt
11 Jun 2023

A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Bethune, Léo Andéol, Mathieu Chalvidal, Thomas Serre
Tags: FAtt
11 Jun 2023

Assessment of the Reliability of a Model's Decision by Generalizing Attribution to the Wavelet Domain
Gabriel Kasmi, L. Dubus, Yves-Marie Saint Drenan, Philippe Blanc
Tags: FAtt
24 May 2023

Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?
Victor Boutin, Thomas Fel, Lakshya Singhal, Rishav Mukherji, Akash Nagaraj, Julien Colin, Thomas Serre
Tags: DiffM
27 Jan 2023

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022

Harmonizing the object recognition strategies of deep neural networks with humans
Thomas Fel, Ivan Felipe, Drew Linsley, Thomas Serre
08 Nov 2022

Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks
Shahaf Bassan, Guy Katz
Tags: FAtt, AAML
25 Oct 2022

On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective
M. Serrurier, Franck Mamalet, Thomas Fel, Louis Bethune, Thibaut Boissin
Tags: AAML, FAtt
14 Jun 2022

Xplique: A Deep Learning Explainability Toolbox
Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poché, Justin Plakoo, ..., Agustin Picard, C. Nicodeme, Laurent Gardes, G. Flandin, Thomas Serre
09 Jun 2022

What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Julien Colin, Thomas Fel, Rémi Cadène, Thomas Serre
06 Dec 2021

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre
Tags: MLAU, FAtt, AAML
07 Nov 2021

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Tags: XAI, FaML
28 Feb 2017

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer
Tags: AAML
03 Feb 2017