ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 · 16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Tags: FAtt, FaML

Papers citing '"Why Should I Trust You?": Explaining the Predictions of Any Classifier'

Showing 50 of 4,325 citing papers.
• Supervising Model Attention with Human Explanations for Robust Natural Language Inference
  Joe Stacey, Yonatan Belinkov, Marek Rei · 16 Apr 2021
• MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks
  Danilo Numeroso, D. Bacciu · 16 Apr 2021
• On the Complexity of SHAP-Score-Based Explanations: Tractability via Knowledge Compilation and Non-Approximability Results
  Marcelo Arenas, Pablo Barceló, Leopoldo Bertossi, Mikaël Monet · FAtt · 16 Apr 2021
• Faithful and Plausible Explanations of Medical Code Predictions
  Zach Wood-Doughty, Isabel Cachola, Mark Dredze · 16 Apr 2021
• NICE: An Algorithm for Nearest Instance Counterfactual Explanations
  Dieter Brughmans, Pieter Leyman, David Martens · 15 Apr 2021
• Do Deep Neural Networks Forget Facial Action Units? -- Exploring the Effects of Transfer Learning in Health Related Facial Expression Recognition
  Pooja Prajod, Dominik Schiller, Tobias Huber, Elisabeth André · CVBM · 15 Apr 2021
• What Makes a Scientific Paper be Accepted for Publication?
  Panagiotis Fytas, Georgios Rizos, Lucia Specia · 14 Apr 2021
• To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions
  K. D. Bie, Ana Lucic, H. Haned · FAtt · 14 Apr 2021
• Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models
  P. Biecek, M. Chlebus, Janusz Gajda, Alicja Gosiewska, A. Kozak, Dominik Ogonowski, Jakub Sztachelski, P. Wojewnik · 14 Apr 2021
• A Novel Approach to Curiosity and Explainable Reinforcement Learning via Interpretable Sub-Goals
  C. V. Rossum, Candice Feinberg, Adam Abu Shumays, Kyle Baxter, Benedek Bartha · GAN, LLMAG, LRM · 14 Apr 2021
• Towards an Interpretable Data-driven Trigger System for High-throughput Physics Facilities
  C. Mahesh, Kristin Dona, David W. Miller, Yuxin Chen · AI4CE · 14 Apr 2021
• Fast Hierarchical Games for Image Explanations
  Jacopo Teneggi, Alexandre Luster, Jeremias Sulam · FAtt · 13 Apr 2021
• LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information
  Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas · 13 Apr 2021
• Conclusive Local Interpretation Rules for Random Forests
  Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas · FaML, FAtt · 13 Apr 2021
• Understanding Prediction Discrepancies in Machine Learning Classifiers
  X. Renard, Thibault Laugel, Marcin Detyniecki · FaML · 12 Apr 2021
• Towards a Collective Agenda on AI for Earth Science Data Analysis
  D. Tuia, R. Roscher, Jan Dirk Wegner, Nathan Jacobs, Xiaoxiang Zhu, Gustau Camps-Valls · AI4CE · 11 Apr 2021
• Connecting Attributions and QA Model Behavior on Realistic Counterfactuals
  Xi Ye, Rohan Nair, Greg Durrett · 09 Apr 2021
• Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks
  Hanjie Chen, Song Feng, Jatin Ganhotra, H. Wan, Chulaka Gunasekara, Sachindra Joshi, Yangfeng Ji · 09 Apr 2021
• Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML
  S. Narkar, Yunfeng Zhang, Q. V. Liao, Dakuo Wang, Justin D. Weisz · 09 Apr 2021
• Individual Explanations in Machine Learning Models: A Survey for Practitioners
  Alfredo Carrillo, Luis F. Cantú, Alejandro Noriega · FaML · 09 Apr 2021
• An Empirical Comparison of Instance Attribution Methods for NLP
  Pouya Pezeshkpour, Sarthak Jain, Byron C. Wallace, Sameer Singh · TDI · 09 Apr 2021
• GrASP: A Library for Extracting and Exploring Human-Interpretable Textual Patterns
  Piyawat Lertvittayakumjorn, Leshem Choshen, Eyal Shnarch, Francesca Toni · 08 Apr 2021
• Explainability-based Backdoor Attacks Against Graph Neural Networks
  Jing Xu, Minhui Xue, S. Picek · 08 Apr 2021
• How Transferable are Reasoning Patterns in VQA?
  Corentin Kervadec, Theo Jaunet, G. Antipov, M. Baccouche, Romain Vuillemot, Christian Wolf · LRM · 08 Apr 2021
• Question-Driven Design Process for Explainable AI User Experiences
  Q. V. Liao, Milena Pribić, Jaesik Han, Sarah Miller, Daby M. Sow · 08 Apr 2021
• Triplot: model agnostic measures and visualisations for variable importance in predictive models that take into account the hierarchical correlation structure
  Katarzyna Pekala, Katarzyna Woźnica, P. Biecek · FAtt · 07 Apr 2021
• Adversarial Robustness Guarantees for Gaussian Processes
  A. Patané, Arno Blaas, Luca Laurenti, L. Cardelli, Stephen J. Roberts, Marta Z. Kwiatkowska · GP, AAML · 07 Apr 2021
• Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering
  Corentin Dancette, Rémi Cadène, Damien Teney, Matthieu Cord · CML · 07 Apr 2021
• Hollow-tree Super: a directional and scalable approach for feature importance in boosted tree models
  S. Doyen, Hugh Taylor, P. Nicholas, L. Crawford, I. Young, M. Sughrue · 07 Apr 2021
• Deep Interpretable Models of Theory of Mind
  Ini Oguntola, Dana Hughes, Katia Sycara · HAI · 07 Apr 2021
• Sparse Oblique Decision Trees: A Tool to Understand and Manipulate Neural Net Features
  Suryabhan Singh Hada, Miguel Á. Carreira-Perpiñán, Arman Zharmagambetov · 07 Apr 2021
• Why? Why not? When? Visual Explanations of Agent Behavior in Reinforcement Learning
  Aditi Mishra, Utkarsh Soni, Jinbin Huang, Chris Bryan · OffRL · 06 Apr 2021
• VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations
  Archit Rathore, Sunipa Dev, J. M. Phillips, Vivek Srikumar, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei Zhang, Bei Wang · 06 Apr 2021
• Towards a Rigorous Evaluation of Explainability for Multivariate Time Series
  Rohit Saluja, A. Malhi, Samanta Knapic, Kary Främling, C. Cavdar · XAI, AI4TS · 06 Apr 2021
• White Box Methods for Explanations of Convolutional Neural Networks in Image Classification Tasks
  Meghna P. Ayyar, J. Benois-Pineau, A. Zemmari · FAtt · 06 Apr 2021
• A Novel Approach for Semiconductor Etching Process with Inductive Biases
  Sanghoon Myung, Hyunjae Jang, Byungseon Choi, Jisu Ryu, Hyuk Kim, Sang Wuk Park, C. Jeong, Daesin Kim · 06 Apr 2021
• Contrastive Explanations for Explaining Model Adaptations
  André Artelt, Fabian Hinder, Valerie Vaquet, Robert Feldhans, Barbara Hammer · 06 Apr 2021
• Shapley Explanation Networks
  Rui Wang, Xiaoqian Wang, David I. Inouye · TDI, FAtt · 06 Apr 2021
• Explainability-aided Domain Generalization for Image Classification
  Robin M. Schmidt · FAtt, OOD · 05 Apr 2021
• Exploring the Role of BERT Token Representations to Explain Sentence Probing Results
  Hosein Mohebbi, Ali Modarressi, Mohammad Taher Pilehvar · MILM · 03 Apr 2021
• Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing
  Ioannis Kakogeorgiou, Konstantinos Karantzalos · XAI · 03 Apr 2021
• STARdom: an architecture for trusted and secure human-centered manufacturing systems
  Jože M. Rožanec, Patrik Zajec, K. Kenda, I. Novalija, B. Fortuna, ..., Diego Reforgiato Recupero, D. Kyriazis, G. Sofianidis, Spyros Theodoropoulos, John Soldatos · 02 Apr 2021
• Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey
  Thomas Rojat, Raphael Puget, David Filliat, Javier Del Ser, R. Gelin, Natalia Díaz Rodríguez · XAI, AI4TS · 02 Apr 2021
• Coalitional strategies for efficient individual prediction explanation
  Gabriel Ferrettini, Elodie Escriva, Julien Aligon, Jean-Baptiste Excoffier, C. Soulé-Dupuy · 01 Apr 2021
• Reconciling the Discrete-Continuous Divide: Towards a Mathematical Theory of Sparse Communication
  André F. T. Martins · 01 Apr 2021
• Anomaly-Based Intrusion Detection by Machine Learning: A Case Study on Probing Attacks to an Institutional Network
  E. Tufan, C. Tezcan, Cengiz Acartürk · 31 Mar 2021
• Contrastive Explanations of Plans Through Model Restrictions
  Benjamin Krarup, Senka Krivic, Daniele Magazzeni, D. Long, Michael Cashmore, David E. Smith · 29 Mar 2021
• Efficient Explanations from Empirical Explainers
  Robert Schwarzenberg, Nils Feldhus, Sebastian Möller · FAtt · 29 Mar 2021
• Adaptive Autonomy in Human-on-the-Loop Vision-Based Robotics Systems
  Sophia J. Abraham, Zachariah Carmichael, Sreya Banerjee, Rosaura G. VidalMata, Ankit Agrawal, M. N. A. Islam, Walter J. Scheirer, J. Cleland-Huang · 28 Mar 2021
• A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms
  Ana Lucic, Madhulika Srikumar, Umang Bhatt, Alice Xiang, Ankur Taly, Q. V. Liao, Maarten de Rijke · 27 Mar 2021