"Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 · 16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Topics: FAtt, FaML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

Showing 16 of 4,266 citing papers.

  • Interpretation of Prediction Models Using the Input Gradient. Yotam Hechtlinger. 23 Nov 2016. Topics: FaML, AI4CE, FAtt. Citations: 85.
  • Programs as Black-Box Explanations. Sameer Singh, Marco Tulio Ribeiro, Carlos Guestrin. 22 Nov 2016. Topics: FAtt. Citations: 54.
  • An unexpected unity among methods for interpreting model predictions. Scott M. Lundberg, Su-In Lee. 22 Nov 2016. Topics: FAtt. Citations: 109.
  • "Influence Sketching": Finding Influential Samples In Large-Scale Regressions. M. Wojnowicz, Ben Cruz, Xuan Zhao, Brian Wallace, Matt Wolff, Jay Luan, Caleb Crable. 17 Nov 2016. Topics: TDI. Citations: 29.
  • Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. 17 Nov 2016. Topics: FAtt. Citations: 63.
  • Low-rank Bilinear Pooling for Fine-Grained Classification. Shu Kong, Charless C. Fowlkes. 16 Nov 2016. Citations: 344.
  • Link Prediction using Embedded Knowledge Graphs. Yelong Shen, Po-Sen Huang, Ming-Wei Chang, Jianfeng Gao. 14 Nov 2016. Citations: 26.
  • Safety Verification of Deep Neural Networks. Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu. 21 Oct 2016. Topics: AAML. Citations: 933.
  • Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. 07 Oct 2016. Topics: FAtt. Citations: 19,607.
  • Correct classification for big/smart/fast data machine learning. S. Stepanov. 27 Sep 2016. Citations: 0.
  • A deep learning model for estimating story points. Morakot Choetkiertikul, Hoa Dam, T. Tran, Trang Pham, A. Ghose, Tim Menzies. 02 Sep 2016. Citations: 170.
  • Towards Transparent AI Systems: Interpreting Visual Question Answering Models. Yash Goyal, Akrit Mohapatra, Devi Parikh, Dhruv Batra. 31 Aug 2016. Citations: 74.
  • Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach. Satoshi Hara, K. Hayashi. 29 Jun 2016. Citations: 91.
  • Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models. Viktoriya Krakovna, Finale Doshi-Velez. 16 Jun 2016. Topics: AI4CE. Citations: 69.
  • Rationalizing Neural Predictions. Tao Lei, Regina Barzilay, Tommi Jaakkola. 13 Jun 2016. Citations: 805.
  • Auditing Black-box Models for Indirect Influence. Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, C. Scheidegger, Brandon Smith, Suresh Venkatasubramanian. 23 Feb 2016. Topics: TDI, MLAU. Citations: 287.