ResearchTrend.AI
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
16 February 2016
Tags: FAtt, FaML
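The paper above introduces LIME: explaining a single prediction of any black-box classifier by fitting an interpretable model that is faithful in a local neighborhood of the instance. Below is a minimal, hypothetical sketch of that core idea, not the authors' reference implementation; the function `lime_explain`, the Gaussian perturbation scheme, the exponential proximity kernel, and the toy `black_box` classifier are all illustrative assumptions.

```python
# Minimal sketch of the LIME idea: perturb an instance, query the black box,
# and fit a locally weighted linear surrogate whose coefficients serve as
# per-feature explanations. Hypothetical simplification for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_proba, x, n_samples=5000, kernel_width=0.75, seed=0):
    """Approximate predict_proba near x with a weighted linear model."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise (assumes standardized features).
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
    # Proximity kernel: perturbations closer to x get larger weight.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Fit the interpretable surrogate to the black box's class-1 probabilities.
    y = predict_proba(Z)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return surrogate.coef_

# Toy black box: class-1 probability depends only on feature 0.
def black_box(Z):
    p = 1.0 / (1.0 + np.exp(-3.0 * Z[:, 0]))
    return np.column_stack([1.0 - p, p])

# The local explanation should attribute the prediction almost
# entirely to feature 0.
coefs = lime_explain(black_box, np.zeros(3))
```

In the full method the surrogate is fit on interpretable representations (e.g. super-pixels or bag-of-words indicators) with sparsity constraints; the dense linear sketch here keeps only the perturb-weight-fit structure.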

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

50 / 4,309 papers shown
Efficient nonparametric statistical inference on population feature importance using Shapley values
B. Williamson, Jean Feng
Tags: FAtt
16 Jun 2020

Model Explanations with Differential Privacy
Neel Patel, Reza Shokri, Yair Zick
Tags: SILM, FedML
16 Jun 2020

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft
Tags: UQCV, FAtt
16 Jun 2020

Scalable Cross Lingual Pivots to Model Pronoun Gender for Translation
Kellie Webster, Emily Pitler
16 Jun 2020

Self-supervised Learning: Generative or Contrastive
Xiao Liu, Fanjin Zhang, Zhenyu Hou, Zhaoyu Wang, Li Mian, Jing Zhang, Jie Tang
Tags: SSL
15 Jun 2020
Hindsight Logging for Model Training
Rolando Garcia, Eric Liu, Vikram Sreekanti, Bobby Yan, Anusha Dandamudi, Joseph E. Gonzalez, J. M. Hellerstein, Koushik Sen
Tags: VLM
12 Jun 2020

Generalized SHAP: Generating multiple types of explanations in machine learning
Dillon Bowen, L. Ungar
Tags: FAtt
12 Jun 2020

SegNBDT: Visual Decision Rules for Segmentation
Alvin Wan, Daniel Ho, You Song, Henk Tillman, Sarah Adel Bargal, Joseph E. Gonzalez
Tags: SSeg
11 Jun 2020

Getting a CLUE: A Method for Explaining Uncertainty Estimates
Javier Antorán, Umang Bhatt, T. Adel, Adrian Weller, José Miguel Hernández-Lobato
Tags: UQCV, BDL
11 Jun 2020

How Interpretable and Trustworthy are GAMs?
C. Chang, S. Tan, Benjamin J. Lengerich, Anna Goldenberg, R. Caruana
Tags: FAtt
11 Jun 2020
Scalable Partial Explainability in Neural Networks via Flexible Activation Functions
S. Sun, Chen Li, Zhuangkun Wei, Antonios Tsourdos, Weisi Guo
Tags: FAtt
10 Jun 2020

OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms
Giorgio Visani, Enrico Bagli, F. Chesani
Tags: FAtt
10 Jun 2020

Why Attentions May Not Be Interpretable?
Bing Bai, Jian Liang, Guanhua Zhang, Hao Li, Kun Bai, Fei Wang
Tags: FAtt
10 Jun 2020

Adversarial Infidelity Learning for Model Interpretation
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei Wang
Tags: AAML
09 Jun 2020

Provable tradeoffs in adversarially robust classification
Yan Sun, Hamed Hassani, David Hong, Alexander Robey
09 Jun 2020
Stealing Deep Reinforcement Learning Models for Fun and Profit
Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu
Tags: MLAU, MIACV, OffRL
09 Jun 2020

Model-agnostic Feature Importance and Effects with Dependent Features -- A Conditional Subgroup Approach
Christoph Molnar, Gunnar König, B. Bischl, Giuseppe Casalicchio
08 Jun 2020

BS-Net: learning COVID-19 pneumonia severity on a large Chest X-Ray dataset
A. Signoroni, Mattia Savardi, Sergio Benini, Nicola Adami, R. Leonardi, ..., F. Vaccher, M. Ravanelli, A. Borghesi, R. Maroldi, D. Farina
08 Jun 2020

Propositionalization and Embeddings: Two Sides of the Same Coin
Nada Lavrač, Blaž Škrlj, Marko Robnik-Šikonja
08 Jun 2020

Higher-Order Explanations of Graph Neural Networks via Relevant Walks
Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, G. Montavon
05 Jun 2020
Location, location, location: Satellite image-based real-estate appraisal
Jan-Peter Kucklick, Oliver Müller
04 Jun 2020

Consistent feature selection for neural networks via Adaptive Group Lasso
L. Ho, Vu C. Dinh
Tags: OOD
30 May 2020

AI Research Considerations for Human Existential Safety (ARCHES)
Andrew Critch, David M. Krueger
30 May 2020

A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers
Kevin Fauvel, Véronique Masson, Elisa Fromont
Tags: AI4TS
29 May 2020

CausaLM: Causal Model Explanation Through Counterfactual Language Models
Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart
Tags: CML, LRM
27 May 2020
Who is this Explanation for? Human Intelligence and Knowledge Graphs for eXplainable AI
I. Celino
27 May 2020

Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport
Kyle Swanson, L. Yu, Tao Lei
Tags: OT
27 May 2020

Review of Mathematical frameworks for Fairness in Machine Learning
E. del Barrio, Paula Gordaliza, Jean-Michel Loubes
Tags: FaML, FedML
26 May 2020

The best way to select features?
Xin Man, Ernest P. Chan
26 May 2020

NILE: Natural Language Inference with Faithful Natural Language Explanations
Sawan Kumar, Partha P. Talukdar
Tags: XAI, LRM
25 May 2020

Towards Analogy-Based Explanations in Machine Learning
Eyke Hüllermeier
Tags: XAI
23 May 2020
An analysis on the use of autoencoders for representation learning: fundamentals, learning task case studies, explainability and challenges
D. Charte, F. Charte, M. J. D. Jesus, Francisco Herrera
Tags: SSL, OOD
21 May 2020

A Robust Interpretable Deep Learning Classifier for Heart Anomaly Detection Without Segmentation
T. Dissanayake, Tharindu Fernando, Simon Denman, Sridha Sridharan, H. Ghaemmaghami, Clinton Fookes
21 May 2020

Interpretable and Accurate Fine-grained Recognition via Region Grouping
Zixuan Huang, Yin Li
21 May 2020

An Adversarial Approach for Explaining the Predictions of Deep Neural Networks
Arash Rahnama, A.-Yu Tseng
Tags: FAtt, AAML, FaML
20 May 2020

The challenges of deploying artificial intelligence models in a rapidly evolving pandemic
Yipeng Hu, J. Jacob, Geoffrey J. M. Parker, D. Hawkes, J. Hurst, Danail Stoyanov
Tags: OOD
19 May 2020
Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir
Tags: FAtt
18 May 2020

Applying Genetic Programming to Improve Interpretability in Machine Learning Models
Leonardo Augusto Ferreira, F. G. Guimarães, Rodrigo C. P. Silva
18 May 2020

Reliable Local Explanations for Machine Listening
Saumitra Mishra, Emmanouil Benetos, Bob L. T. Sturm, S. Dixon
Tags: AAML, FAtt
15 May 2020

Evolved Explainable Classifications for Lymph Node Metastases
Iam Palatnik de Sousa, M. Vellasco, E. C. Silva
14 May 2020

Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov
Tags: MILM, FAtt, AAML, TDI
14 May 2020
Ensembled sparse-input hierarchical networks for high-dimensional datasets
Jean Feng, N. Simon
11 May 2020

Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles
Mário Popolin Neto, F. Paulovich
Tags: FAtt
08 May 2020

Beyond Accuracy: Behavioral Testing of NLP models with CheckList
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh
Tags: ELM
08 May 2020

XEM: An Explainable-by-Design Ensemble Method for Multivariate Time Series Classification
Kevin Fauvel, Elisa Fromont, Véronique Masson, P. Faverdin, Alexandre Termier
Tags: AI4TS
07 May 2020

A Locally Adaptive Interpretable Regression
Lkhagvadorj Munkhdalai, Tsendsuren Munkhdalai, K. Ryu
07 May 2020
Contextualizing Hate Speech Classifiers with Post-hoc Explanation
Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren
05 May 2020

Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate
Tags: AAML
05 May 2020

A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds
M. Kovalev, Lev V. Utkin
Tags: AAML
05 May 2020

Post-hoc explanation of black-box classifiers using confident itemsets
M. Moradi, Matthias Samwald
05 May 2020