"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt, FaML

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 4,968 papers shown
A Visual Interaction Framework for Dimensionality Reduction Based Data Exploration
M. Cavallo
Çağatay Demiralp
55
55
0
28 Nov 2018
What is Interpretable? Using Machine Learning to Design Interpretable Decision-Support Systems
O. Lahav
Nicholas Mastronarde
M. Schaar
64
30
0
27 Nov 2018
Abduction-Based Explanations for Machine Learning Models
Alexey Ignatiev
Nina Narodytska
Sasha Rubin
FAtt
65
226
0
26 Nov 2018
Predicting Language Recovery after Stroke with Convolutional Networks on Stitched MRI
Yusuf H. Roohani
Noor Sajid
Pranava Madhyastha
Cathy J. Price
T. Hope
19
5
0
26 Nov 2018
Attention, Please! Adversarial Defense via Activation Rectification and Preservation
Shangxi Wu
Jitao Sang
Kaiyuan Xu
Jiaming Zhang
Jian Yu
AAML
52
7
0
24 Nov 2018
Interpretable Convolutional Filters with SincNet
Mirco Ravanelli
Yoshua Bengio
93
107
0
23 Nov 2018
Representer Point Selection for Explaining Deep Neural Networks
Chih-Kuan Yeh
Joon Sik Kim
Ian En-Hsu Yen
Pradeep Ravikumar
TDI
103
254
0
23 Nov 2018
State of the Art in Fair ML: From Moral Philosophy and Legislation to Fair Classifiers
Elias Baumann
J. L. Rumberger
FaML
38
4
0
20 Nov 2018
Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions
Denis A. Gudovskiy
Alec Hodgkinson
Takuya Yamaguchi
Yasunori Ishii
Sotaro Tsukizawa
FAtt
74
13
0
19 Nov 2018
On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection
Vivian Lai
Chenhao Tan
78
380
0
19 Nov 2018
Explicit Bias Discovery in Visual Question Answering Models
Varun Manjunatha
Nirat Saini
L. Davis
CML, FAtt
67
93
0
19 Nov 2018
Understanding Learned Models by Identifying Important Features at the Right Resolution
Kyubin Lee
Akshay Sood
M. Craven
44
8
0
18 Nov 2018
Interpretable Credit Application Predictions With Counterfactual Explanations
Rory Mc Grath
Luca Costabello
Chan Le Van
Paul Sweeney
F. Kamiab
Zhao Shen
Freddy Lecue
FAtt
81
109
0
13 Nov 2018
TED: Teaching AI to Explain its Decisions
Michael Hind
Dennis L. Wei
Murray Campbell
Noel Codella
Amit Dhurandhar
Aleksandra Mojsilović
Karthikeyan N. Ramamurthy
Kush R. Varshney
80
111
0
12 Nov 2018
Characterizing machine learning process: A maturity framework
Rama Akkiraju
Vibha Sinha
Anbang Xu
J. Mahmud
Pritam Gundecha
Zhe Liu
Xiaotong Liu
John Schumacher
74
60
0
12 Nov 2018
Correction of AI systems by linear discriminants: Probabilistic foundations
A. N. Gorban
A. Golubkov
Bogdan Grechuk
E. M. Mirkes
I. Tyukin
32
63
0
11 Nov 2018
A Survey on Data Collection for Machine Learning: a Big Data -- AI Integration Perspective
Yuji Roh
A. Mishra
Steven Euijong Whang
84
685
0
08 Nov 2018
Contrastive Explanation: A Structural-Model Approach
Tim Miller
CML
75
167
0
07 Nov 2018
Explaining Deep Learning Models - A Bayesian Non-parametric Approach
Wenbo Guo
Sui Huang
Yunzhe Tao
Masashi Sugiyama
Lin Lin
BDL
48
47
0
07 Nov 2018
YASENN: Explaining Neural Networks via Partitioning Activation Sequences
Yaroslav Zharov
Denis Korzhenkov
J. Lyu
Alexander Tuzhilin
FAtt, AAML
23
6
0
07 Nov 2018
Deep Weighted Averaging Classifiers
Dallas Card
Michael J.Q. Zhang
Hao Tang
94
41
0
06 Nov 2018
Progressive Disclosure: Designing for Effective Transparency
Aaron Springer
Ling Huang
65
16
0
06 Nov 2018
"I had a solid theory before but it's falling apart": Polarizing Effects
  of Algorithmic Transparency
"I had a solid theory before but it's falling apart": Polarizing Effects of Algorithmic Transparency
Aaron Springer
S. Whittaker
32
6
0
06 Nov 2018
Explaining Explanations in AI
Brent Mittelstadt
Chris Russell
Sandra Wachter
XAI
122
666
0
04 Nov 2018
What evidence does deep learning model use to classify Skin Lesions?
Xiaoxiao Li
Junyan Wu
Eric Z. Chen
Hongda Jiang
63
9
0
02 Nov 2018
Towards Explainable NLP: A Generative Explanation Framework for Text Classification
Hui Liu
Qingyu Yin
William Yang Wang
118
148
0
01 Nov 2018
SDRL: Interpretable and Data-efficient Deep Reinforcement Learning Leveraging Symbolic Planning
Daoming Lyu
Fangkai Yang
Bo Liu
Steven M. Gustafson
OffRL
96
152
0
31 Oct 2018
Multimodal Machine Learning for Automated ICD Coding
Keyang Xu
Mike Lam
Jingzhi Pang
Xin Gao
Charlotte Band
...
A. Khanna
J. Cywinski
K. Maheshwari
P. Xie
Eric Xing
77
109
0
31 Oct 2018
Compositional Attention Networks for Interpretability in Natural Language Question Answering
Selvakumar Murugan
Suriyadeepan Ramamoorthy
Vaidheeswaran Archana
Malaikannan Sankarasubbu
50
3
0
30 Oct 2018
Do Explanations make VQA Models more Predictable to a Human?
Arjun Chandrasekaran
Viraj Prabhu
Deshraj Yadav
Prithvijit Chattopadhyay
Devi Parikh
FAtt
139
97
0
29 Oct 2018
Learning and Interpreting Multi-Multi-Instance Learning Networks
Alessandro Tibo
M. Jaeger
P. Frasconi
AI4CE
141
23
0
26 Oct 2018
Interpreting Black Box Predictions using Fisher Kernels
Rajiv Khanna
Been Kim
Joydeep Ghosh
Oluwasanmi Koyejo
FAtt
88
104
0
23 Oct 2018
What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play
Shi Feng
Jordan L. Boyd-Graber
HAI
82
130
0
23 Oct 2018
On The Stability of Interpretable Models
Riccardo Guidotti
Salvatore Ruggieri
FAtt
64
10
0
22 Oct 2018
Finding Average Regret Ratio Minimizing Set in Database
Sepanta Zeighami
Raymond Chi-Wing Wong
29
16
0
18 Oct 2018
Explaining Machine Learning Models using Entropic Variable Projection
François Bachoc
Fabrice Gamboa
Max Halford
Jean-Michel Loubes
Laurent Risser
FAtt
63
5
0
18 Oct 2018
Explaining Black Boxes on Sequential Data using Weighted Automata
Stéphane Ayache
Rémi Eyraud
Noé Goudian
69
44
0
12 Oct 2018
Feature Selection using Stochastic Gates
Yutaro Yamada
Ofir Lindenbaum
S. Negahban
Y. Kluger
176
42
0
09 Oct 2018
What made you do this? Understanding black-box decisions with sufficient input subsets
Brandon Carter
Jonas W. Mueller
Siddhartha Jain
David K Gifford
FAtt
90
78
0
09 Oct 2018
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo
Justin Gilmer
Ian Goodfellow
Been Kim
FAtt, AAML
76
129
0
08 Oct 2018
Sanity Checks for Saliency Maps
Julius Adebayo
Justin Gilmer
M. Muelly
Ian Goodfellow
Moritz Hardt
Been Kim
FAtt, AAML, XAI
179
1,973
0
08 Oct 2018
On the Art and Science of Machine Learning Explanations
Patrick Hall
FAtt, XAI
92
30
0
05 Oct 2018
Local Interpretable Model-agnostic Explanations of Bayesian Predictive Models via Kullback-Leibler Projections
Tomi Peltola
FAtt, BDL
70
40
0
05 Oct 2018
Projective Inference in High-dimensional Problems: Prediction and Feature Selection
Juho Piironen
Markus Paasiniemi
Aki Vehtari
74
96
0
04 Oct 2018
Interpreting Layered Neural Networks via Hierarchical Modular Representation
C. Watanabe
84
19
0
03 Oct 2018
Stakeholders in Explainable AI
Alun D. Preece
Daniel Harborne
Dave Braines
Richard J. Tomsett
Supriyo Chakraborty
55
157
0
29 Sep 2018
Explainable Black-Box Attacks Against Model-based Authentication
Washington Garcia
Joseph I. Choi
S. K. Adari
S. Jha
Kevin R. B. Butler
92
10
0
28 Sep 2018
A User-based Visual Analytics Workflow for Exploratory Model Analysis
Dylan Cashman
S. Humayoun
Florian Heimerl
Kendall Park
Subhajit Das
...
Abigail Mosca
J. Stasko
Alex Endert
Michael Gleicher
Remco Chang
65
41
0
27 Sep 2018
Understanding Convolutional Neural Networks for Text Classification
Alon Jacovi
Oren Sar Shalom
Yoav Goldberg
FAtt
75
221
0
21 Sep 2018
Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure
Besmira Nushi
Ece Kamar
Eric Horvitz
58
142
0
19 Sep 2018