"Why Should I Trust You?": Explaining the Predictions of Any Classifier

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
    FAtt
    FaML
ArXivPDFHTML
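For reference, the method this paper introduces (LIME) explains a single prediction by perturbing the input, querying the black-box model on the perturbations, and fitting a sparse local linear model whose weights serve as the explanation. The sketch below is a minimal illustration using the authors' open-source `lime` package (github.com/marcotcr/lime); the scikit-learn random forest and the Iris dataset are illustrative assumptions, not taken from this listing.

```python
# Minimal sketch: explaining one prediction with LIME.
# Assumptions (not from this page): scikit-learn RandomForest on Iris.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data.data, data.target)

# LIME perturbs the instance, queries the black-box model on the
# perturbed samples, and fits a weighted sparse linear model locally.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
exp = explainer.explain_instance(
    data.data[0], clf.predict_proba, num_features=3
)
print(exp.as_list())  # (feature rule, weight) pairs of the local model
```

Because the explainer only needs access to `predict_proba`, the same call works for any classifier, which is the model-agnostic property the title refers to.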

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

Showing 50 of 4,309 citing papers.
• Testing Monotonicity of Machine Learning Models. Arnab Sharma, Heike Wehrheim. 27 Feb 2020.
• The Emerging Landscape of Explainable AI Planning and Decision Making. Tathagata Chakraborti, S. Sreedharan, S. Kambhampati. 26 Feb 2020.
• NeuralSens: Sensitivity Analysis of Neural Networks. J. Pizarroso, J. Portela, A. Muñoz. 26 Feb 2020.
• xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems. Vineel Nagisetty, Laura Graves, Joseph Scott, Vijay Ganesh. Topics: GAN, DRL. 24 Feb 2020.
• The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Andrés Páez. 22 Feb 2020.
• On The Reasons Behind Decisions. Adnan Darwiche, Auguste Hirth. Topics: FaML. 21 Feb 2020.
• An Investigation of Interpretability Techniques for Deep Learning in Predictive Process Analytics. Catarina Moreira, Renuka Sindhgatta, Chun Ouyang, P. Bruza, Andreas Wichert. 21 Feb 2020.
• Learning Global Transparent Models Consistent with Local Contrastive Explanations. Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar. Topics: FAtt. 19 Feb 2020.
• A Visual Analytics System for Multi-model Comparison on Clinical Data Predictions. Yiran Li, Takanori Fujiwara, Y. Choi, Katherine K. Kim, K. Ma. Topics: OOD. 18 Feb 2020.
• A Modified Perturbed Sampling Method for Local Interpretable Model-agnostic Explanation. Sheng Shi, Xinfeng Zhang, Wei Fan. Topics: FAtt. 18 Feb 2020.
• Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning. Arash Kalatian, Bilal Farooq. 18 Feb 2020.
• On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples. P. Douglas, F. Farahani. Topics: AAML. 17 Feb 2020.
• Ensemble Deep Learning on Large, Mixed-Site fMRI Datasets in Autism and Other Tasks. M. Leming, Juan M Gorriz, J. Suckling. 14 Feb 2020.
• Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis. Jung Yeon Park, K. T. Carr, Stephan Zhang, Yisong Yue, Rose Yu. 13 Feb 2020.
• Convex Density Constraints for Computing Plausible Counterfactual Explanations. André Artelt, Barbara Hammer. 12 Feb 2020.
• Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence. S. Raschka, Joshua Patterson, Corey J. Nolet. Topics: AI4CE. 12 Feb 2020.
• Decisions, Counterfactual Explanations and Strategic Behavior. Stratis Tsirtsis, Manuel Gomez Rodriguez. 11 Feb 2020.
• Explaining Explanations: Axiomatic Feature Interactions for Deep Networks. Joseph D. Janizek, Pascal Sturmfels, Su-In Lee. Topics: FAtt. 10 Feb 2020.
• LUNAR: Cellular Automata for Drifting Data Streams. J. Lobo, Javier Del Ser, Francisco Herrera. Topics: AI4TS. 06 Feb 2020.
• Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support. Christian Meske, Enrico Bunde. 04 Feb 2020.
• Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations. S. Sreedharan, Utkarsh Soni, Mudit Verma, Siddharth Srivastava, S. Kambhampati. 04 Feb 2020.
• Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study. Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze. Topics: AAML, FAtt, XAI. 03 Feb 2020.
• Interpretability of Blackbox Machine Learning Models through Dataview Extraction and Shadow Model creation. Rupam Patir, Shubham Singhal, C. Anantaram, Vikram Goyal. 02 Feb 2020.
• Black Box Explanation by Learning Image Exemplars in the Latent Feature Space. Riccardo Guidotti, A. Monreale, Stan Matwin, D. Pedreschi. Topics: FAtt. 27 Jan 2020.
• Visualisation of Medical Image Fusion and Translation for Accurate Diagnosis of High Grade Gliomas. Nishant Kumar, Nico Hoffmann, M. Kirsch, Stefan Gumhold. Topics: MedIm. 26 Jan 2020.
• How to Support Users in Understanding Intelligent Systems? Structuring the Discussion. Malin Eiband, Daniel Buschek, H. Hussmann. 22 Jan 2020.
• Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems. Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, Elena L. Glassman. Topics: ELM. 22 Jan 2020.
• Deceptive AI Explanations: Creation and Detection. Johannes Schneider, Christian Meske, Michalis Vlachos. 21 Jan 2020.
• Evaluating Weakly Supervised Object Localization Methods Right. Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim. Topics: WSOL. 21 Jan 2020.
• Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach. Carlos Fernandez, F. Provost, Xintian Han. Topics: CML. 21 Jan 2020.
• An interpretable neural network model through piecewise linear approximation. Mengzhuo Guo, Qingpeng Zhang, Xiuwu Liao, D. Zeng. Topics: MILM, FAtt. 20 Jan 2020.
• Machine learning and AI-based approaches for bioactive ligand discovery and GPCR-ligand recognition. S. Raschka, Benjamin Kaufman. Topics: AI4CE. 17 Jan 2020.
• GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks. Q. Huang, M. Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi-Ju Chang. Topics: FAtt. 17 Jan 2020.
• Making deep neural networks right for the right scientific reasons by interacting with their explanations. P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting. 15 Jan 2020.
• CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis. Yao Xie, Melody Chen, David Kao, Ge Gao, Xiang 'Anthony' Chen. 15 Jan 2020.
• "Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans. Vivian Lai, Han Liu, Chenhao Tan. 14 Jan 2020.
• On the Resilience of Biometric Authentication Systems against Random Inputs. Benjamin Zi Hao Zhao, Hassan Jameel Asghar, M. Kâafar. Topics: AAML. 13 Jan 2020.
• Explaining the Explainer: A First Theoretical Analysis of LIME. Damien Garreau, U. V. Luxburg. Topics: FAtt. 10 Jan 2020.
• Theory In, Theory Out: The uses of social theory in machine learning for social science. J. Radford, K. Joseph. 09 Jan 2020.
• On Interpretability of Artificial Neural Networks: A Survey. Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang. Topics: AAML, AI4CE. 08 Jan 2020.
• Questioning the AI: Informing Design Practices for Explainable AI User Experiences. Q. V. Liao, D. Gruen, Sarah Miller. 08 Jan 2020.
• Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples. Chelsea M. Myers, Evan Freed, Luis Fernando Laris Pardo, Anushay Furqan, S. Risi, Jichen Zhu. Topics: CML. 07 Jan 2020.
• Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making. Yunfeng Zhang, Q. V. Liao, Rachel K. E. Bellamy. 07 Jan 2020.
• Softmax-based Classification is k-means Clustering: Formal Proof, Consequences for Adversarial Attacks, and Improvement through Centroid Based Tailoring. Sibylle Hess, W. Duivesteijn, Decebal Constantin Mocanu. 07 Jan 2020.
• IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules. Bishwamittra Ghosh, Kuldeep S. Meel. 07 Jan 2020.
• Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability. José Mena, O. Pujol, Jordi Vitrià. 29 Dec 2019.
• On the Morality of Artificial Intelligence. A. Luccioni, Yoshua Bengio. Topics: AI4TS, FaML. 26 Dec 2019.
• Smell Pittsburgh: Engaging Community Citizen Science for Air Quality. Yen-Chia Hsu, Jennifer L. Cross, P. Dille, Michael Tasota, Beatrice Dias, Randy Sargent, Ting-Hao 'Kenneth' Huang, I. Nourbakhsh. 26 Dec 2019.
• Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution. Nikaash Puri, Sukriti Verma, Piyush B. Gupta, Dhruv Kayastha, Shripad Deshmukh, Balaji Krishnamurthy, Sameer Singh. Topics: FAtt, AAML. 23 Dec 2019.
• Exploring Interpretability for Predictive Process Analytics. Renuka Sindhgatta, Chun Ouyang, Catarina Moreira. 22 Dec 2019.