Xplique: A Deep Learning Explainability Toolbox
arXiv:2206.04394 · 9 June 2022
Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poché, Justin Plakoo, Rémi Cadène, Mathieu Chalvidal, Julien Colin, Thibaut Boissin, Louis Bethune, Agustin Picard, C. Nicodeme, Laurent Gardes, G. Flandin, Thomas Serre
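
For context on the toolbox itself: Xplique is a Python library built on TensorFlow, and its attribution methods share a common two-step interface. Below is a minimal sketch of that pattern, assuming the public deel-ai/xplique API; the MobileNetV2 model and the dummy batch are placeholder assumptions for illustration, not taken from this page.

    import tensorflow as tf
    from xplique.attributions import Saliency

    # Placeholder model: any Keras classifier works; MobileNetV2 is used
    # here only because it ships with Keras.
    model = tf.keras.applications.MobileNetV2()

    # Dummy batch of images and one-hot targets (assumptions for this sketch).
    images = tf.random.uniform((4, 224, 224, 3))
    labels = tf.one_hot([0, 1, 2, 3], depth=1000)

    # Wrap the model in an explainer, then call it on (inputs, targets)
    # to get one attribution map per input image.
    explainer = Saliency(model)
    explanations = explainer(images, labels)

As far as the library's documentation goes, the same wrap-then-call pattern applies to its other attribution methods (e.g. GradCAM or Occlusion in place of Saliency).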

Papers citing "Xplique: A Deep Learning Explainability Toolbox"

8 / 8 papers shown

Representational Similarity via Interpretable Visual Concepts
Neehar Kondapaneni, Oisin Mac Aodha, Pietro Perona
Topics: DRL
Published: 19 Mar 2025

Unlearning-based Neural Interpretations
Ching Lam Choi, Alexandre Duplessis, Serge Belongie
Topics: FAtt
Published: 10 Oct 2024

Deep Natural Language Feature Learning for Interpretable Prediction
Felipe Urrutia, Cristian Buc, Valentin Barriere
Published: 09 Nov 2023

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
Published: 17 Nov 2022

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre
Topics: AAML
Published: 15 Feb 2022

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre
Topics: MLAU, FAtt, AAML
Published: 07 Nov 2021

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Topics: FAtt
Published: 17 Oct 2019

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Topics: XAI, FaML
Published: 28 Feb 2017