ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
16 February 2016 · arXiv:1602.04938 (v3, latest)
Topics: FAtt, FaML
Links: arXiv abstract · PDF · HTML

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 of 4,966 citing papers shown.

| Title | Authors | Topics | Views | Citations | Date |
|---|---|---|---|---|---|
| Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach | Satoshi Hara, K. Hayashi | | 110 | 91 | 29 Jun 2016 |
| Model-Agnostic Interpretability of Machine Learning | Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin | FAtt, FaML | 90 | 839 | 16 Jun 2016 |
| Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models | Viktoriya Krakovna, Finale Doshi-Velez | AI4CE | 167 | 69 | 16 Jun 2016 |
| Rationalizing Neural Predictions | Tao Lei, Regina Barzilay, Tommi Jaakkola | | 131 | 812 | 13 Jun 2016 |
| The Mythos of Model Interpretability | Zachary Chase Lipton | FaML | 183 | 3,721 | 10 Jun 2016 |
| Auditing Black-box Models for Indirect Influence | Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, C. Scheidegger, Brandon Smith, Suresh Venkatasubramanian | TDI, MLAU | 176 | 293 | 23 Feb 2016 |
| Towards Universal Paraphrastic Sentence Embeddings | John Wieting, Joey Tianyi Zhou, Kevin Gimpel, Karen Livescu | AI4TS | 192 | 555 | 25 Nov 2015 |
| Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model | Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, D. Madigan | FAtt | 75 | 747 | 05 Nov 2015 |
| Supersparse Linear Integer Models for Optimized Medical Scoring Systems | Berk Ustun, Cynthia Rudin | | 147 | 354 | 15 Feb 2015 |
| Show, Attend and Tell: Neural Image Caption Generation with Visual Attention | Ke Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, R. Zemel, Yoshua Bengio | DiffM | 352 | 10,097 | 10 Feb 2015 |
| Deep Visual-Semantic Alignments for Generating Image Descriptions | A. Karpathy, Li Fei-Fei | | 156 | 5,601 | 07 Dec 2014 |
| Falling Rule Lists | Fulton Wang, Cynthia Rudin | | 77 | 258 | 21 Nov 2014 |
| Going Deeper with Convolutions | Christian Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, Scott E. Reed, Dragomir Anguelov, D. Erhan, Vincent Vanhoucke, Andrew Rabinovich | | 512 | 43,741 | 17 Sep 2014 |
| Distributed Representations of Words and Phrases and their Compositionality | Tomas Mikolov, Ilya Sutskever, Kai Chen, G. Corrado, J. Dean | NAI, OCL | 416 | 33,593 | 16 Oct 2013 |
| A Computational Approach to Politeness with Application to Social Factors | Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, J. Leskovec, Christopher Potts | | 96 | 440 | 25 Jun 2013 |
| How to Explain Individual Classification Decisions | D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller | FAtt | 157 | 1,106 | 06 Dec 2009 |