On Completeness-aware Concept-Based Explanations in Deep Neural Networks

17 October 2019
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Communities: FAtt

Papers citing "On Completeness-aware Concept-Based Explanations in Deep Neural Networks"

41 papers shown

1. Autoencoding Random Forests
   Binh Duc Vu, Jan Kapar, Marvin N. Wright, David S. Watson
   27 May 2025 · Citations: 0

2. If Concept Bottlenecks are the Question, are Foundation Models the Answer?
   Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato
   28 Apr 2025 · Citations: 1

3. Leakage and Interpretability in Concept-Based Models
   Enrico Parisini, Tapabrata Chakraborti, Chris Harbron, Ben D. MacArthur, Christopher R. S. Banerji
   18 Apr 2025 · Citations: 1

4. Measuring Leakage in Concept-Based Methods: An Information Theoretic Approach
   Mikael Makonnen, Moritz Vandenhirtz, Sonia Laguna, Julia E. Vogt
   13 Apr 2025 · Citations: 3

5. Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning
   Numair Sani, Daniel Malinsky, I. Shpitser
   10 Jan 2025 · CML · Citations: 16

6. Explaining the Impact of Training on Vision Models via Activation Clustering
   Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair
   29 Nov 2024 · Citations: 0

7. Unlearning-based Neural Interpretations
   Ching Lam Choi, Alexandre Duplessis, Serge Belongie
   10 Oct 2024 · FAtt · Citations: 0

8. 3VL: Using Trees to Improve Vision-Language Models' Interpretability
   Nir Yellinek, Leonid Karlinsky, Raja Giryes
   28 Dec 2023 · CoGe, VLM · Citations: 3

9. Concept Bottleneck Models
   Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
   09 Jul 2020 · Citations: 833

10. FACE: Feasible and Actionable Counterfactual Explanations
    Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, T. D. Bie, Peter A. Flach
    20 Sep 2019 · Citations: 369

11. Benchmarking Attribution Methods with Relative Feature Importance
    Mengjiao Yang, Been Kim
    23 Jul 2019 · FAtt, XAI · Citations: 142

12. Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems
    Shalmali Joshi, Oluwasanmi Koyejo, Warut D. Vijitbenjaronk, Been Kim, Joydeep Ghosh
    22 Jul 2019 · FaML · Citations: 188

13. Explaining Classifiers with Causal Concept Effect (CaCE)
    Yash Goyal, Amir Feder, Uri Shalit, Been Kim
    16 Jul 2019 · CML · Citations: 178

14. Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
    Fan Yang, Mengnan Du, Xia Hu
    16 Jul 2019 · XAI, ELM · Citations: 67

15. EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction
    Diane Bouchacourt, Ludovic Denoyer
    28 May 2019 · FAtt · Citations: 21

16. Counterfactual Visual Explanations
    Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, Stefan Lee
    16 Apr 2019 · CML · Citations: 512

17. ProtoAttend: Attention-Based Prototypical Learning
    Sercan O. Arik, Tomas Pfister
    17 Feb 2019 · Citations: 19

18. On the (In)fidelity and Sensitivity for Explanations
    Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
    27 Jan 2019 · FAtt · Citations: 454

19. Unsupervised speech representation learning using WaveNet autoencoders
    J. Chorowski, Ron J. Weiss, Samy Bengio, Aaron van den Oord
    25 Jan 2019 · SSL · Citations: 319

20. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
    Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
    29 Nov 2018 · OOD · Citations: 1,471

21. Representer Point Selection for Explaining Deep Neural Networks
    Chih-Kuan Yeh, Joon Sik Kim, Ian En-Hsu Yen, Pradeep Ravikumar
    23 Nov 2018 · TDI · Citations: 253

22. Interpreting Black Box Predictions using Fisher Kernels
    Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo
    23 Oct 2018 · FAtt · Citations: 104

23. L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
    Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
    08 Aug 2018 · FAtt, TDI · Citations: 216

24. Grounding Visual Explanations
    Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
    25 Jul 2018 · FAtt · Citations: 229

25. This Looks Like That: Deep Learning for Interpretable Image Recognition
    Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin
    27 Jun 2018 · Citations: 1,188

26. Contrastive Explanations with Local Foil Trees
    J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx
    19 Jun 2018 · FAtt · Citations: 82

27. Explaining Explanations: An Overview of Interpretability of Machine Learning
    Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal
    31 May 2018 · XAI · Citations: 1,862

28. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
    Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das
    21 Feb 2018 · FAtt · Citations: 591

29. A Survey Of Methods For Explaining Black Box Models
    Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
    06 Feb 2018 · XAI · Citations: 3,970

30. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
    Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
    30 Nov 2017 · FAtt · Citations: 1,850

31. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
    Sandra Wachter, Brent Mittelstadt, Chris Russell
    01 Nov 2017 · MLAU · Citations: 2,361

32. SmoothGrad: removing noise by adding noise
    D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
    12 Jun 2017 · FAtt, ODL · Citations: 2,235

33. A Unified Approach to Interpreting Model Predictions
    Scott M. Lundberg, Su-In Lee
    22 May 2017 · FAtt · Citations: 22,018

34. Learning to Generate Reviews and Discovering Sentiment
    Alec Radford, Rafal Jozefowicz, Ilya Sutskever
    05 Apr 2017 · Citations: 510

35. Understanding Black-box Predictions via Influence Functions
    Pang Wei Koh, Percy Liang
    14 Mar 2017 · TDI · Citations: 2,905

36. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
    Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
    16 Feb 2016 · FAtt, FaML · Citations: 17,033

37. Rethinking the Inception Architecture for Computer Vision
    Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna
    02 Dec 2015 · 3DV, BDL · Citations: 27,416

38. Evaluating the visualization of what a Deep Neural Network has learned
    Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller
    21 Sep 2015 · XAI · Citations: 1,199

39. PCANet: A Simple Deep Learning Baseline for Image Classification?
    Tsung-Han Chan, Kui Jia, Shenghua Gao, Jiwen Lu, Zinan Zeng, Yi-An Ma
    14 Apr 2014 · Citations: 1,501

40. Auto-Encoding Variational Bayes
    Diederik P. Kingma, Max Welling
    20 Dec 2013 · BDL · Citations: 16,923

41. Supervised Topic Models
    David M. Blei, Jon D. McAuliffe
    03 Mar 2010 · BDL · Citations: 1,788