Understanding Interpretability by generalized distillation in Supervised Classification
arXiv:2012.03089 · 5 December 2020
Adit Agarwal, K.K. Shukla, Arjan Kuijper, Anirban Mukhopadhyay
Tags: FaML, FAtt
Papers citing "Understanding Interpretability by generalized distillation in Supervised Classification" (10 papers shown)

| Title | Authors | Tags | Metrics | Date |
|---|---|---|---|---|
| Reconciling modern machine learning practice and the bias-variance trade-off | M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal | | 240 / 1,650 / 0 | 28 Dec 2018 |
| A Survey Of Methods For Explaining Black Box Models | Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti | XAI | 129 / 3,961 / 0 | 06 Feb 2018 |
| Visual Interpretability for Deep Learning: a Survey | Quanshi Zhang, Song-Chun Zhu | FaML, HAI | 142 / 820 / 0 | 02 Feb 2018 |
| Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) | Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres | FAtt | 217 / 1,842 / 0 | 30 Nov 2017 |
| Bounding and Counting Linear Regions of Deep Neural Networks | Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam | MLT | 65 / 250 / 0 | 06 Nov 2017 |
| Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms | Han Xiao, Kashif Rasul, Roland Vollgraf | | 283 / 8,904 / 0 | 25 Aug 2017 |
| A Formal Framework to Characterize Interpretability of Procedures | Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam | | 29 / 19 / 0 | 12 Jul 2017 |
| Learning Important Features Through Propagating Activation Differences | Avanti Shrikumar, Peyton Greenside, A. Kundaje | FAtt | 201 / 3,873 / 0 | 10 Apr 2017 |
| The Mythos of Model Interpretability | Zachary Chase Lipton | FaML | 180 / 3,701 / 0 | 10 Jun 2016 |
| On the Number of Linear Regions of Deep Neural Networks | Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio | | 90 / 1,254 / 0 | 08 Feb 2014 |