arXiv:1901.00770 (v2, latest)
Personalized explanation in machine learning: A conceptualization
3 January 2019
J. Schneider, J. Handali
Topics: XAI, FAtt
Links: ArXiv (abs) · PDF · HTML
Papers citing "Personalized explanation in machine learning: A conceptualization" (21 papers):
- Sanity Checks for Saliency Maps · Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · FAtt, AAML, XAI · 08 Oct 2018
- Human-in-the-Loop Interpretability Prior · Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez · 29 May 2018
- Explainable Recommendation: A Survey and New Perspectives · Yongfeng Zhang, Xu Chen · XAI, LRM · 30 Apr 2018
- Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges · Gabrielle Ras, Marcel van Gerven, W. Haselager · XAI · 20 Mar 2018
- Manipulating and Measuring Model Interpretability · Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach · 21 Feb 2018
- Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives · Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das · FAtt · 21 Feb 2018
- A Survey Of Methods For Explaining Black Box Models · Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti · XAI · 06 Feb 2018
- How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation · Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez · FAtt, XAI · 02 Feb 2018
- Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) · Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres · FAtt · 30 Nov 2017
- Beyond Sparsity: Tree Regularization of Deep Models for Interpretability · Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez · AI4CE · 16 Nov 2017
- Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions · Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin · 13 Oct 2017
- Interpretable Convolutional Neural Networks · Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu · FAtt · 02 Oct 2017
- Explanation in Artificial Intelligence: Insights from the Social Sciences · Tim Miller · XAI · 22 Jun 2017
- Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations · A. Ross, M. C. Hughes, Finale Doshi-Velez · FAtt · 10 Mar 2017
- Towards A Rigorous Science of Interpretable Machine Learning · Finale Doshi-Velez, Been Kim · XAI, FaML · 28 Feb 2017
- Wide & Deep Learning for Recommender Systems · Heng-Tze Cheng, L. Koc, Jeremiah Harmsen, T. Shaked, Tushar Chandra, ..., Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah · HAI, VLM · 24 Jun 2016
- Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? · Abhishek Das, Harsh Agrawal, C. L. Zitnick, Devi Parikh, Dhruv Batra · 11 Jun 2016
- The Mythos of Model Interpretability · Zachary Chase Lipton · FaML · 10 Jun 2016
- "Why Should I Trust You?": Explaining the Predictions of Any Classifier · Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 16 Feb 2016
- Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images · Anh Totti Nguyen, J. Yosinski, Jeff Clune · AAML · 05 Dec 2014
- Learning from Sparse Data by Exploiting Monotonicity Constraints · Eric Altendorf, Angelo C. Restificar, Thomas G. Dietterich · 04 Jul 2012