Label-Free Explainability for Unsupervised Models
arXiv: 2203.01928 (v3, latest) — 3 March 2022
Jonathan Crabbé, M. Schaar
Tags: FAtt, MILM
Links: ArXiv (abs), PDF, HTML
Papers citing "Label-Free Explainability for Unsupervised Models" (37 / 37 papers shown)

1. Explaining Latent Representations with a Corpus of Examples — Jonathan Crabbé, Zhaozhi Qian, F. Imrie, M. Schaar [FAtt] — 59 · 38 · 0 — 28 Oct 2021
2. Explaining Time Series Predictions with Dynamic Masks — Jonathan Crabbé, M. Schaar [FAtt, AI4TS] — 90 · 81 · 0 — 09 Jun 2021
3. Understanding Instance-based Interpretability of Variational Auto-Encoders — Zhifeng Kong, Kamalika Chaudhuri [TDI] — 63 · 28 · 0 — 29 May 2021
4. Learning outside the Black-Box: The pursuit of interpretable models — Jonathan Crabbé, Yao Zhang, W. Zame, M. Schaar — 35 · 24 · 0 — 17 Nov 2020
5. Captum: A unified and generic model interpretability library for PyTorch — Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, …, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson [FAtt] — 144 · 846 · 0 — 16 Sep 2020
6. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey — Arun Das, P. Rad [XAI] — 158 · 604 · 0 — 16 Jun 2020
7. Supervised Contrastive Learning — Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan [SSL] — 165 · 4,572 · 0 — 23 Apr 2020
8. Building and Interpreting Deep Similarity Models — Oliver Eberle, Jochen Büttner, Florian Kräutli, K. Müller, Matteo Valleriani, G. Montavon — 56 · 58 · 0 — 11 Mar 2020
9. A Distributional Framework for Data Valuation — Amirata Ghorbani, Michael P. Kim, James Zou [TDI] — 52 · 132 · 0 — 27 Feb 2020
10. Estimating Training Data Influence by Tracing Gradient Descent — G. Pruthi, Frederick Liu, Mukund Sundararajan, Satyen Kale [TDI] — 99 · 417 · 0 — 19 Feb 2020
11. Decision-Making with Auto-Encoding Variational Bayes — Romain Lopez, Pierre Boyeau, Nir Yosef, Michael I. Jordan, Jeffrey Regier [BDL] — 507 · 10,591 · 0 — 17 Feb 2020
12. A Simple Framework for Contrastive Learning of Visual Representations — Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey E. Hinton [SSL] — 378 · 18,866 · 0 — 13 Feb 2020
13. Inference with Deep Generative Priors in High Dimensions — Jillian R. Fisher, Mojtaba Sahraee-Ardakan, S. Rangan, Zaid Harchaoui, Yejin Choi [BDL] — 51 · 47 · 0 — 08 Nov 2019
14. Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models — L. Brocki, N. C. Chung [FAtt] — 45 · 21 · 0 — 29 Oct 2019
15. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI — Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, …, S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera [XAI] — 127 · 6,293 · 0 — 22 Oct 2019
16. A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI — Erico Tjoa, Cuntai Guan [XAI] — 110 · 1,451 · 0 — 17 Jul 2019
17. Learning Interpretable Disentangled Representations using Adversarial VAEs — Mhd Hasan Sarhan, Abouzar Eslami, Nassir Navab, Shadi Albarqouni [DRL, OOD] — 128 · 21 · 0 — 17 Apr 2019
18. Data Shapley: Equitable Valuation of Data for Machine Learning — Amirata Ghorbani, James Zou [TDI, FedML] — 78 · 789 · 0 — 05 Apr 2019
19. Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey — Longlong Jing, Yingli Tian [SSL] — 159 · 1,700 · 0 — 16 Feb 2019
20. Representer Point Selection for Explaining Deep Neural Networks — Chih-Kuan Yeh, Joon Sik Kim, Ian En-Hsu Yen, Pradeep Ravikumar [TDI] — 79 · 253 · 0 — 23 Nov 2018
21. A Benchmark for Interpretability Methods in Deep Neural Networks — Sara Hooker, D. Erhan, Pieter-Jan Kindermans, Been Kim [FAtt, UQCV] — 116 · 683 · 0 — 28 Jun 2018
22. Understanding disentangling in β-VAE — Christopher P. Burgess, I. Higgins, Arka Pal, Loic Matthey, Nicholas Watters, Guillaume Desjardins, Alexander Lerchner [CoGe, DRL] — 68 · 831 · 0 — 10 Apr 2018
23. Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning — Nicolas Papernot, Patrick McDaniel [OOD, AAML] — 149 · 508 · 0 — 13 Mar 2018
24. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) — Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres [FAtt] — 227 · 1,850 · 0 — 30 Nov 2017
25. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR — Sandra Wachter, Brent Mittelstadt, Chris Russell [MLAU] — 127 · 2,361 · 0 — 01 Nov 2017
26. Recent Trends in Deep Learning Based Natural Language Processing — Tom Young, Devamanyu Hazarika, Soujanya Poria, Min Zhang — 75 · 2,835 · 0 — 09 Aug 2017
27. Methods for Interpreting and Understanding Deep Neural Networks — G. Montavon, Wojciech Samek, K. Müller [FaML] — 293 · 2,267 · 0 — 24 Jun 2017
28. A Unified Approach to Interpreting Model Predictions — Scott M. Lundberg, Su-In Lee [FAtt] — 1.1K · 22,018 · 0 — 22 May 2017
29. Interpretable Explanations of Black Boxes by Meaningful Perturbation — Ruth C. Fong, Andrea Vedaldi [FAtt, AAML] — 76 · 1,525 · 0 — 11 Apr 2017
30. Learning Important Features Through Propagating Activation Differences — Avanti Shrikumar, Peyton Greenside, A. Kundaje [FAtt] — 203 · 3,881 · 0 — 10 Apr 2017
31. Understanding Black-box Predictions via Influence Functions — Pang Wei Koh, Percy Liang [TDI] — 216 · 2,905 · 0 — 14 Mar 2017
32. Axiomatic Attribution for Deep Networks — Mukund Sundararajan, Ankur Taly, Qiqi Yan [OOD, FAtt] — 193 · 6,018 · 0 — 04 Mar 2017
33. The Mythos of Model Interpretability — Zachary Chase Lipton [FaML] — 183 · 3,706 · 0 — 10 Jun 2016
34. "Why Should I Trust You?": Explaining the Predictions of Any Classifier — Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin [FAtt, FaML] — 1.2K · 17,033 · 0 — 16 Feb 2016
35. Deep Residual Learning for Image Recognition — Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun [MedIm] — 2.2K · 194,426 · 0 — 10 Dec 2015
36. Adam: A Method for Stochastic Optimization — Diederik P. Kingma, Jimmy Ba [ODL] — 2.0K · 150,312 · 0 — 22 Dec 2014
37. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps — Karen Simonyan, Andrea Vedaldi, Andrew Zisserman [FAtt] — 314 · 7,316 · 0 — 20 Dec 2013