Personalized explanation in machine learning: A conceptualization
arXiv: 1901.00770
3 January 2019
J. Schneider, J. Handali
XAI, FAtt

Papers citing "Personalized explanation in machine learning: A conceptualization"

21 / 21 papers shown
Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI
152 · 1,972 · 0
08 Oct 2018

Human-in-the-Loop Interpretability Prior
Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez
89 · 121 · 0
29 May 2018

Explainable Recommendation: A Survey and New Perspectives
Yongfeng Zhang, Xu Chen
XAI, LRM
124 · 879 · 0
30 Apr 2018

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager
XAI
111 · 219 · 0
20 Mar 2018

Manipulating and Measuring Model Interpretability
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach
106 · 701 · 0
21 Feb 2018

Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das
FAtt
129 · 592 · 0
21 Feb 2018

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
XAI
150 · 3,989 · 0
06 Feb 2018

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
FAtt, XAI
110 · 244 · 0
02 Feb 2018

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
FAtt
252 · 1,849 · 0
30 Nov 2017

Beyond Sparsity: Tree Regularization of Deep Models for Interpretability
Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
AI4CE
132 · 283 · 0
16 Nov 2017

Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions
Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin
193 · 593 · 0
13 Oct 2017

Interpretable Convolutional Neural Networks
Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu
FAtt
77 · 784 · 0
02 Oct 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
XAI
259 · 4,287 · 0
22 Jun 2017

Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
A. Ross, M. C. Hughes, Finale Doshi-Velez
FAtt
136 · 592 · 0
10 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
420 · 3,824 · 0
28 Feb 2017

Wide & Deep Learning for Recommender Systems
Heng-Tze Cheng, L. Koc, Jeremiah Harmsen, T. Shaked, Tushar Chandra, ..., Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah
HAI, VLM
200 · 3,673 · 0
24 Jun 2016

Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?
Abhishek Das, Harsh Agrawal, C. L. Zitnick, Devi Parikh, Dhruv Batra
109 · 467 · 0
11 Jun 2016

The Mythos of Model Interpretability
Zachary Chase Lipton
FaML
183 · 3,716 · 0
10 Jun 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
17,071
0
16 Feb 2016
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
Anh Totti Nguyen, J. Yosinski, Jeff Clune
AAML
176 · 3,275 · 0
05 Dec 2014

Learning from Sparse Data by Exploiting Monotonicity Constraints
Eric Altendorf, Angelo C. Restificar, Thomas G. Dietterich
82 · 118 · 0
04 Jul 2012