ResearchTrend.AI


"Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 · 16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Topics: FAtt, FaML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier
Showing 50 of 4,309 citing papers.
The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models
Ian Tenney
James Wexler
Jasmijn Bastings
Tolga Bolukbasi
Andy Coenen
...
Ellen Jiang
Mahima Pushkarna
Carey Radebaugh
Emily Reif
Ann Yuan
VLM
46
192
0
12 Aug 2020
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack
Sophie Hilgard
Sameer Singh
Himabindu Lakkaraju
FAtt
29
162
0
11 Aug 2020
Counterfactual Explanation Based on Gradual Construction for Deep Networks
Hong G Jung
Sin-Han Kang
Hee-Dong Kim
Dong-Ok Won
Seong-Whan Lee
OOD
FAtt
25
22
0
05 Aug 2020
Explainable Predictive Process Monitoring
Musabir Musabayli
F. Maggi
Williams Rizzi
Josep Carmona
Chiara Di Francescomarino
19
60
0
04 Aug 2020
Evaluating the performance of the LIME and Grad-CAM explanation methods on a LEGO multi-label image classification task
David Cian
Jan van Gemert
A. Lengyel
FAtt
27
22
0
04 Aug 2020
Safety design concepts for statistical machine learning components toward accordance with functional safety standards
Akihisa Morikawa
Yamato Matsubara
22
3
0
04 Aug 2020
Explainable Face Recognition
Jonathan R. Williford
Brandon B. May
J. Byrne
CVBM
16
71
0
03 Aug 2020
audioLIME: Listenable Explanations Using Source Separation
Verena Haunschmid
Ethan Manilow
Gerhard Widmer
FAtt
14
30
0
02 Aug 2020
An Explainable Machine Learning Model for Early Detection of Parkinson's Disease using LIME on DaTscan Imagery
Pavan Rajkumar Magesh
Richard Delwin Myloth
Rijo Jackson Tom
FAtt
19
188
0
01 Aug 2020
Explainable Prediction of Text Complexity: The Missing Preliminaries for Text Simplification
Cristina Garbacea
Mengtian Guo
Samuel Carton
Qiaozhu Mei
19
28
0
31 Jul 2020
DeepVA: Bridging Cognition and Computation through Semantic Interaction and Deep Learning
Yali Bian
John E. Wenskovitch
Chris North
18
11
0
31 Jul 2020
Computing Optimal Decision Sets with SAT
Jinqiang Yu
Alexey Ignatiev
Peter J. Stuckey
P. L. Bodic
FAtt
22
26
0
29 Jul 2020
Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data
Christopher D. Blakely
Ole-Christoffer Granmo
30
16
0
27 Jul 2020
Machine Learning Explanations to Prevent Overtrust in Fake News Detection
Sina Mohseni
Fan Yang
Shiva K. Pentyala
Mengnan Du
Yi Liu
Nic Lupfer
Xia Hu
Shuiwang Ji
Eric D. Ragan
21
41
0
24 Jul 2020
Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance
Mattia Carletti
M. Terzi
Gian Antonio Susto
36
42
0
21 Jul 2020
Machine Learning approach for Credit Scoring
A. R. Provenzano
D. Trifirò
A. Datteo
L. Giada
N. Jean
A. Riciputi
Giacomo Le Pera
M. Spadaccino
L. Massaron
C. Nordio
15
21
0
20 Jul 2020
Sequential Explanations with Mental Model-Based Policies
A. Yeung
Shalmali Joshi
Joseph Jay Williams
Frank Rudzicz
FAtt
LRM
36
15
0
17 Jul 2020
Explanation-Guided Training for Cross-Domain Few-Shot Classification
Jiamei Sun
Sebastian Lapuschkin
Wojciech Samek
Yunqing Zhao
Ngai-man Cheung
Alexander Binder
28
87
0
17 Jul 2020
Concept Learners for Few-Shot Learning
Kaidi Cao
Maria Brbic
J. Leskovec
VLM
OffRL
30
4
0
14 Jul 2020
A simple defense against adversarial attacks on heatmap explanations
Laura Rieger
Lars Kai Hansen
FAtt
AAML
33
37
0
13 Jul 2020
Monitoring and explainability of models in production
Janis Klaise
A. V. Looveren
Clive Cox
G. Vacanti
Alexandru Coca
43
49
0
13 Jul 2020
Scientific Discovery by Generating Counterfactuals using Image Translation
Arunachalam Narayanaswamy
Subhashini Venugopalan
D. Webster
L. Peng
G. Corrado
...
Abigail E. Huang
Siva Balasubramanian
Michael P. Brenner
Phil Q. Nelson
A. Varadarajan
DiffM
MedIm
30
20
0
10 Jul 2020
Impact of Legal Requirements on Explainability in Machine Learning
Adrien Bibal
Michael Lognoul
A. D. Streel
Benoit Frénay
ELM
AILaw
FaML
26
9
0
10 Jul 2020
General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models
Christoph Molnar
Gunnar Konig
J. Herbinger
Timo Freiesleben
Susanne Dandl
Christian A. Scholbeck
Giuseppe Casalicchio
Moritz Grosse-Wentrup
B. Bischl
FAtt
AI4CE
31
135
0
08 Jul 2020
Generating Adversarial Examples with Controllable Non-transferability
Renzhi Wang
Tianwei Zhang
Xiaofei Xie
Lei Ma
Cong Tian
Felix Juefei Xu
Yang Liu
SILM
AAML
17
3
0
02 Jul 2020
The Impact of Explanations on AI Competency Prediction in VQA
Kamran Alipour
Arijit Ray
Xiaoyu Lin
J. Schulze
Yi Yao
Giedrius Burachas
30
9
0
02 Jul 2020
Drug discovery with explainable artificial intelligence
José Jiménez-Luna
F. Grisoni
G. Schneider
30
627
0
01 Jul 2020
Unifying Model Explainability and Robustness via Machine-Checkable Concepts
Vedant Nanda
Till Speicher
John P. Dickerson
Krishna P. Gummadi
Muhammad Bilal Zafar
AAML
14
4
0
01 Jul 2020
Mobile Link Prediction: Automated Creation and Crowd-sourced Validation of Knowledge Graphs
M. Ballandies
Evangelos Pournaras
HAI
25
8
0
30 Jun 2020
Classification of cancer pathology reports: a large-scale comparative study
S. Martina
L. Ventura
P. Frasconi
27
11
0
29 Jun 2020
Interpreting and Disentangling Feature Components of Various Complexity from DNNs
Jie Ren
Mingjie Li
Zexu Liu
Quanshi Zhang
CoGe
19
18
0
29 Jun 2020
BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig
Ali Madani
Lav Varshney
Caiming Xiong
R. Socher
Nazneen Rajani
34
289
0
26 Jun 2020
Counterfactual explanation of machine learning survival models
M. Kovalev
Lev V. Utkin
CML
OffRL
37
19
0
26 Jun 2020
Evaluation of Text Generation: A Survey
Asli Celikyilmaz
Elizabeth Clark
Jianfeng Gao
ELM
LM&MA
44
378
0
26 Jun 2020
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal
Tongshuang Wu
Joyce Zhou
Raymond Fok
Besmira Nushi
Ece Kamar
Marco Tulio Ribeiro
Daniel S. Weld
42
584
0
26 Jun 2020
SS-CAM: Smoothed Score-CAM for Sharper Visual Feature Localization
Haofan Wang
Rakshit Naidu
J. Michael
Soumya Snigdha Kundu
FAtt
30
79
0
25 Jun 2020
Explainable CNN-attention Networks (C-Attention Network) for Automated Detection of Alzheimer's Disease
Ning Wang
Mingxuan Chen
K. P. Subbalakshmi
25
22
0
25 Jun 2020
Generative causal explanations of black-box classifiers
Matthew R. O’Shaughnessy
Gregory H. Canal
Marissa Connor
Mark A. Davenport
Christopher Rozell
CML
35
73
0
24 Jun 2020
On Counterfactual Explanations under Predictive Multiplicity
Martin Pawelczyk
Klaus Broelemann
Gjergji Kasneci
25
85
0
23 Jun 2020
Fair Performance Metric Elicitation
Gaurush Hiranandani
Harikrishna Narasimhan
Oluwasanmi Koyejo
32
18
0
23 Jun 2020
Improving Workflow Integration with xPath: Design and Evaluation of a Human-AI Diagnosis System in Pathology
H. Gu
Yuan Liang
Yifan Xu
Christopher Kazu Williams
S. Magaki
...
Wenzhong Yan
X. R. Zhang
Yang Li
Mohammad Haeri
Xiang 'Anthony' Chen
40
29
0
23 Jun 2020
Improving LIME Robustness with Smarter Locality Sampling
Sean Saito
Eugene Chua
Nicholas Capel
Rocco Hu
FAtt
AAML
11
22
0
22 Jun 2020
Does Explainable Artificial Intelligence Improve Human Decision-Making?
Y. Alufaisan
L. Marusich
J. Bakdash
Yan Zhou
Murat Kantarcioglu
XAI
22
94
0
19 Jun 2020
Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection
Michael Tsang
Dehua Cheng
Hanpeng Liu
Xuening Feng
Eric Zhou
Yan Liu
FAtt
24
60
0
19 Jun 2020
COVIDLite: A depth-wise separable deep neural network with white balance and CLAHE for detection of COVID-19
Manu Siddhartha
Avik Santra
13
44
0
19 Jun 2020
Image classification in frequency domain with 2SReLU: a second harmonics superposition activation function
Thomio Watanabe
D. Wolf
27
22
0
18 Jun 2020
Are you wearing a mask? Improving mask detection from speech using augmentation by cycle-consistent GANs
Nicolae-Cătălin Ristea
Radu Tudor Ionescu
CVBM
8
41
0
17 Jun 2020
Noise or Signal: The Role of Image Backgrounds in Object Recognition
Kai Y. Xiao
Logan Engstrom
Andrew Ilyas
Aleksander Madry
25
377
0
17 Jun 2020
Explanation-based Weakly-supervised Learning of Visual Relations with Graph Networks
Federico Baldassarre
Kevin Smith
Josephine Sullivan
Hossein Azizpour
34
25
0
16 Jun 2020
A generalizable saliency map-based interpretation of model outcome
Shailja Thakur
S. Fischmeister
AAML
FAtt
MILM
30
2
0
16 Jun 2020