ResearchTrend.AI

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
arXiv:1602.04938, 16 February 2016. Tags: FAtt, FaML.
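The paper introduces LIME, which explains a single prediction by fitting an interpretable linear model to the black box's behavior in a small neighborhood of one instance. A minimal numpy sketch of that local-surrogate idea follows; the Gaussian perturbation scale, the exponential proximity kernel, and the toy black-box function are illustrative assumptions, not the paper's exact sampling or feature-selection procedure:

```python
import numpy as np

def explain_locally(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a proximity-weighted linear surrogate around x (simplified LIME)."""
    rng = np.random.default_rng(seed)
    # Probe the black box with perturbed copies of the instance.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(Z)                         # black-box outputs at the samples
    d = np.linalg.norm(Z - x, axis=1)         # distance of each sample from x
    w = np.exp(-(d ** 2) / kernel_width ** 2)  # closer samples weigh more
    # Weighted least squares via sqrt(w)-scaled design matrix and targets.
    A = np.hstack([Z, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[:-1]                          # per-feature local importances

# Hypothetical black box whose score depends only on feature 0.
black_box = lambda X: (X[:, 0] > 0).astype(float)
weights = explain_locally(black_box, np.array([0.1, 2.0, -1.0]))
```

On this toy model the surrogate assigns its dominant weight to feature 0, the only feature the black box actually uses, which is exactly the kind of local attribution LIME produces.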

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

9 of 4,309 citing papers shown:

Safety Verification of Deep Neural Networks
  Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu. Tags: AAML. 21 Oct 2016.

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
  Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. Tags: FAtt. 07 Oct 2016.

Correct classification for big/smart/fast data machine learning
  S. Stepanov. 27 Sep 2016.

A deep learning model for estimating story points
  Morakot Choetkiertikul, Hoa Dam, T. Tran, Trang Pham, A. Ghose, Tim Menzies. 02 Sep 2016.

Towards Transparent AI Systems: Interpreting Visual Question Answering Models
  Yash Goyal, Akrit Mohapatra, Devi Parikh, Dhruv Batra. 31 Aug 2016.

Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach
  Satoshi Hara, K. Hayashi. 29 Jun 2016.

Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models
  Viktoriya Krakovna, Finale Doshi-Velez. Tags: AI4CE. 16 Jun 2016.

Rationalizing Neural Predictions
  Tao Lei, Regina Barzilay, Tommi Jaakkola. 13 Jun 2016.

Auditing Black-box Models for Indirect Influence
  Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, C. Scheidegger, Brandon Smith, Suresh Venkatasubramanian. Tags: TDI, MLAU. 23 Feb 2016.