ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 · 16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 4,266 papers shown
Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making
Michael Veale, Max Van Kleek, Reuben Binns
25 · 409 · 0 · 03 Feb 2018

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
FAtt, XAI
38 · 241 · 0 · 02 Feb 2018

Visual Interpretability for Deep Learning: a Survey
Quanshi Zhang, Song-Chun Zhu
FaML, HAI
17 · 809 · 0 · 02 Feb 2018

Interpretable Deep Convolutional Neural Networks via Meta-learning
Xuan Liu, Xiaoguang Wang, Stan Matwin
FaML
27 · 6 · 0 · 02 Feb 2018

A Comparative Study of Rule Extraction for Recurrent Neural Networks
Qinglong Wang, Kaixuan Zhang, Alexander Ororbia, Masashi Sugiyama, Xue Liu, C. Lee Giles
18 · 11 · 0 · 16 Jan 2018

What do we need to build explainable AI systems for the medical domain?
Andreas Holzinger, Chris Biemann, C. Pattichis, D. Kell
28 · 680 · 0 · 28 Dec 2017

The Robust Manifold Defense: Adversarial Training using Generative Models
A. Jalal, Andrew Ilyas, C. Daskalakis, A. Dimakis
AAML
31 · 174 · 0 · 26 Dec 2017

Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology
Andreas Holzinger, Bernd Malle, Peter Kieseberg, P. Roth, Heimo Muller, Robert Reihs, K. Zatloukal
19 · 91 · 0 · 18 Dec 2017

A trans-disciplinary review of deep learning research for water resources scientists
Chaopeng Shen
AI4CE
33 · 682 · 0 · 06 Dec 2017

Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences
Tim Miller, Piers Howe, L. Sonenberg
AI4TS, SyDa
11 · 372 · 0 · 02 Dec 2017

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
FAtt
77 · 1,800 · 0 · 30 Nov 2017

Contextual Outlier Interpretation
Ninghao Liu, DongHwa Shin, Xia Hu
17 · 72 · 0 · 28 Nov 2017

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
A. Ross, Finale Doshi-Velez
AAML
37 · 676 · 0 · 26 Nov 2017

Beyond Sparsity: Tree Regularization of Deep Models for Interpretability
Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
AI4CE
28 · 281 · 0 · 16 Nov 2017

MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis
Rushil Anirudh, Jayaraman J. Thiagarajan, R. Sridhar, T. Bremer
FAtt, AAML
23 · 12 · 0 · 15 Nov 2017

Dynamic Analysis of Executables to Detect and Characterize Malware
Michael R. Smith, J. Ingram, Christopher C. Lamb, T. Draelos, J. Doak, J. Aimone, C. James
23 · 13 · 0 · 10 Nov 2017

Visualizing and Understanding Atari Agents
S. Greydanus, Anurag Koul, Jonathan Dodge, Alan Fern
FAtt
37 · 342 · 0 · 31 Oct 2017

Do Convolutional Neural Networks Learn Class Hierarchy?
B. Alsallakh, Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren
42 · 210 · 0 · 17 Oct 2017

Practical Machine Learning for Cloud Intrusion Detection: Challenges and the Way Forward
Ramnath Kumar, Andrew W. Wicker, Matt Swann
AAML
21 · 43 · 0 · 20 Sep 2017

Human Understandable Explanation Extraction for Black-box Classification Models Based on Matrix Factorization
Jaedeok Kim, Ji-Hoon Seo
FAtt
18 · 8 · 0 · 18 Sep 2017

Learning the PE Header, Malware Detection with Minimal Domain Knowledge
Edward Raff, Jared Sylvester, Charles K. Nicholas
28 · 118 · 0 · 05 Sep 2017

Understanding and Comparing Deep Neural Networks for Age and Gender Classification
Sebastian Lapuschkin, Alexander Binder, K. Müller, Wojciech Samek
CVBM
37 · 135 · 0 · 25 Aug 2017

Machine learning for neural decoding
Joshua I. Glaser, Ari S. Benjamin, Raeed H. Chowdhury, M. Perich, L. Miller, Konrad Paul Kording
33 · 242 · 0 · 02 Aug 2017

Using Program Induction to Interpret Transition System Dynamics
Svetlin Penkov, S. Ramamoorthy
AI4CE
35 · 11 · 0 · 26 Jul 2017

Weakly Submodular Maximization Beyond Cardinality Constraints: Does Randomization Help Greedy?
Lin Chen, Moran Feldman, Amin Karbasi
21 · 47 · 0 · 13 Jul 2017

A Formal Framework to Characterize Interpretability of Procedures
Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam
15 · 19 · 0 · 12 Jul 2017

Efficient Data Representation by Selecting Prototypes with Importance Weights
Karthik S. Gurumoorthy, Amit Dhurandhar, Guillermo Cecchi, Charu Aggarwal
26 · 22 · 0 · 05 Jul 2017

Interpretability via Model Extraction
Osbert Bastani, Carolyn Kim, Hamsa Bastani
FAtt
26 · 129 · 0 · 29 Jun 2017

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
FaML
234 · 2,238 · 0 · 24 Jun 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
XAI
21 · 4,203 · 0 · 22 Jun 2017

MAGIX: Model Agnostic Globally Interpretable Explanations
Nikaash Puri, Piyush B. Gupta, Pratiksha Agarwal, Sukriti Verma, Balaji Krishnamurthy
FAtt
29 · 41 · 0 · 22 Jun 2017

Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking
Gabriele Tolomei, Fabrizio Silvestri, Andrew Haines, M. Lalmas
9 · 203 · 0 · 20 Jun 2017

Contextual Explanation Networks
Maruan Al-Shedivat, Kumar Avinava Dubey, Eric Xing
CML
37 · 82 · 0 · 29 May 2017

Interpreting Blackbox Models via Model Extraction
Osbert Bastani, Carolyn Kim, Hamsa Bastani
FAtt
35 · 170 · 0 · 23 May 2017

Induction of Interpretable Possibilistic Logic Theories from Relational Data
Ondrej Kuzelka, Jesse Davis, Steven Schockaert
NAI
33 · 12 · 0 · 19 May 2017

A Workflow for Visual Diagnostics of Binary Classifiers using Instance-Level Explanations
Josua Krause, Aritra Dasgupta, Jordan Swartz, Yindalon Aphinyanagphongs, E. Bertini
FAtt
19 · 95 · 0 · 04 May 2017

Translating Neuralese
Jacob Andreas, Anca Dragan, Dan Klein
31 · 58 · 0 · 23 Apr 2017

ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models
Minsuk Kahng, Pierre Yves Andrews, Aditya Kalro, Duen Horng Chau
HAI
23 · 322 · 0 · 06 Apr 2017

It Takes Two to Tango: Towards Theory of AI's Mind
Arjun Chandrasekaran, Deshraj Yadav, Prithvijit Chattopadhyay, Viraj Prabhu, Devi Parikh
38 · 54 · 0 · 03 Apr 2017

Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
A. Ross, M. C. Hughes, Finale Doshi-Velez
FAtt
43 · 583 · 0 · 10 Mar 2017

Streaming Weak Submodularity: Interpreting Neural Networks on the Fly
Ethan R. Elenberg, A. Dimakis, Moran Feldman, Amin Karbasi
19 · 89 · 0 · 08 Mar 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt
45 · 5,865 · 0 · 04 Mar 2017

Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations
Upol Ehsan, Brent Harrison, Larry Chan, Mark O. Riedl
25 · 217 · 0 · 25 Feb 2017

EVE: Explainable Vector Based Embedding Technique Using Wikipedia
M. A. Qureshi, Derek Greene
25 · 33 · 0 · 22 Feb 2017

Deep Reinforcement Learning: An Overview
Yuxi Li
OffRL, VLM
104 · 1,503 · 0 · 25 Jan 2017

Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning
Rock Stevens, H. Aggarwal, Himani Arora, Sanghyun Hong, M. Hicks, Chetan Arora
SILM, AAML
11 · 18 · 0 · 17 Jan 2017

Identifying Best Interventions through Online Importance Sampling
Rajat Sen, Karthikeyan Shanmugam, A. Dimakis, Sanjay Shakkottai
28 · 72 · 0 · 10 Jan 2017

Interactive Elicitation of Knowledge on Feature Relevance Improves Predictions in Small Data Sets
L. Micallef, Iiris Sundin, Pekka Marttinen, Muhammad Ammad-ud-din, Tomi Peltola, Marta Soare, Giulio Jacucci, Samuel Kaski
11 · 28 · 0 · 07 Dec 2016

Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
Yash Goyal, Tejas Khot, D. Summers-Stay, Dhruv Batra, Devi Parikh
CoGe
143 · 3,130 · 0 · 02 Dec 2016

Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation
Wojciech Samek, G. Montavon, Alexander Binder, Sebastian Lapuschkin, K. Müller
FAtt, AI4CE
52 · 48 · 0 · 24 Nov 2016