ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
Topics: FAtt, FaML
arXiv: 1602.04938 (PDF / HTML)

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

Showing 50 of 4,376 citing papers.
Best of both worlds: local and global explanations with human-understandable concepts
Jessica Schrouff
Sebastien Baur
Shaobo Hou
Diana Mincu
Eric Loreaux
Ralph Blanes
James Wexler
Alan Karthikesalingam
Been Kim
FAtt
34
28
0
16 Jun 2021
Counterfactual Graphs for Explainable Classification of Brain Networks
Carlo Abrate
Francesco Bonchi
CML
30
55
0
16 Jun 2021
Developing a Fidelity Evaluation Approach for Interpretable Machine Learning
M. Velmurugan
Chun Ouyang
Catarina Moreira
Renuka Sindhgatta
XAI
29
16
0
16 Jun 2021
A Framework for Evaluating Post Hoc Feature-Additive Explainers
Zachariah Carmichael
Walter J. Scheirer
FAtt
51
4
0
15 Jun 2021
Generating Contrastive Explanations for Inductive Logic Programming Based on a Near Miss Approach
Johannes Rabold
M. Siebers
Ute Schmid
31
14
0
15 Jun 2021
S-LIME: Stabilized-LIME for Model Explanation
Zhengze Zhou
Giles Hooker
Fei Wang
FAtt
30
88
0
15 Jun 2021
Keep CALM and Improve Visual Feature Attribution
Jae Myung Kim
Junsuk Choe
Zeynep Akata
Seong Joon Oh
FAtt
350
20
0
15 Jun 2021
Controlling Neural Networks with Rule Representations
Sungyong Seo
Sercan O. Arik
Jinsung Yoon
Xiang Zhang
Kihyuk Sohn
Tomas Pfister
OOD
AI4CE
37
35
0
14 Jun 2021
Tracing Back Music Emotion Predictions to Sound Sources and Intuitive Perceptual Qualities
Shreyan Chowdhury
Verena Praher
Gerhard Widmer
13
14
0
14 Jun 2021
Pitfalls of Explainable ML: An Industry Perspective
Sahil Verma
Aditya Lahiri
John P. Dickerson
Su-In Lee
XAI
21
9
0
14 Jun 2021
Counterfactual Explanations as Interventions in Latent Space
Riccardo Crupi
Alessandro Castelnovo
D. Regoli
Beatriz San Miguel González
CML
16
24
0
14 Jun 2021
Characterizing the risk of fairwashing
Ulrich Aïvodji
Hiromi Arai
Sébastien Gambs
Satoshi Hara
23
27
0
14 Jun 2021
Can Explainable AI Explain Unfairness? A Framework for Evaluating Explainable AI
Kiana Alikhademi
Brianna Richardson
E. Drobina
J. Gilbert
38
33
0
14 Jun 2021
Certification of embedded systems based on Machine Learning: A survey
Guillaume Vidot
Christophe Gabreau
I. Ober
Iulian Ober
16
12
0
14 Jun 2021
FairCanary: Rapid Continuous Explainable Fairness
Avijit Ghosh
Aalok Shanbhag
Christo Wilson
19
20
0
13 Jun 2021
Entropy-based Logic Explanations of Neural Networks
Pietro Barbiero
Gabriele Ciravegna
Francesco Giannini
Pietro Lio
Marco Gori
S. Melacci
FAtt
XAI
30
78
0
12 Jun 2021
Explaining the Deep Natural Language Processing by Mining Textual Interpretable Features
F. Ventura
Salvatore Greco
D. Apiletti
Tania Cerquitelli
14
1
0
12 Jun 2021
Local Explanation of Dialogue Response Generation
Yi-Lin Tuan
Connor Pryor
Wenhu Chen
Lise Getoor
Wenjie Wang
30
11
0
11 Jun 2021
FedNLP: An interpretable NLP System to Decode Federal Reserve Communications
Jean Lee
Hoyoul Luis Youn
Nicholas Stevens
Josiah Poon
S. Han
24
10
0
11 Jun 2021
Interpreting Expert Annotation Differences in Animal Behavior
Megan Tjandrasuwita
Jennifer J. Sun
Ann Kennedy
Swarat Chaudhuri
Yisong Yue
19
8
0
11 Jun 2021
Cross-lingual Emotion Detection
Sabit Hassan
Shaden Shaar
Kareem Darwish
32
12
0
10 Jun 2021
On the overlooked issue of defining explanation objectives for local-surrogate explainers
Rafael Poyiadzi
X. Renard
Thibault Laugel
Raúl Santos-Rodríguez
Marcin Detyniecki
21
6
0
10 Jun 2021
Explainable AI, but explainable to whom?
Julie Gerlings
Millie Søndergaard Jensen
Arisa Shollo
46
43
0
10 Jun 2021
An Interpretable Neural Network for Parameter Inference
Johann Pfitzinger
34
0
0
10 Jun 2021
Explaining Time Series Predictions with Dynamic Masks
Jonathan Crabbé
M. Schaar
FAtt
AI4TS
30
80
0
09 Jun 2021
Exploiting auto-encoders and segmentation methods for middle-level explanations of image classification systems
Andrea Apicella
Salvatore Giugliano
Francesco Isgrò
R. Prevete
14
18
0
09 Jun 2021
Learning Domain Invariant Representations by Joint Wasserstein Distance Minimization
Léo Andéol
Yusei Kawakami
Yuichiro Wada
Takafumi Kanamori
K. Müller
G. Montavon
OOD
44
7
0
09 Jun 2021
Taxonomy of Machine Learning Safety: A Survey and Primer
Sina Mohseni
Haotao Wang
Zhiding Yu
Chaowei Xiao
Zhangyang Wang
J. Yadawa
31
32
0
09 Jun 2021
On Sample Based Explanation Methods for NLP:Efficiency, Faithfulness, and Semantic Evaluation
Wei Zhang
Ziming Huang
Yada Zhu
Guangnan Ye
Xiaodong Cui
Fan Zhang
60
17
0
09 Jun 2021
On the Lack of Robust Interpretability of Neural Text Classifiers
Muhammad Bilal Zafar
Michele Donini
Dylan Slack
Cédric Archambeau
Sanjiv Ranjan Das
K. Kenthapadi
AAML
16
21
0
08 Jun 2021
White Paper Assistance: A Step Forward Beyond the Shortcut Learning
Xuan Cheng
Tianshu Xie
Xiaomin Wang
Jiali Deng
Minghui Liu
Meilin Liu
AAML
26
0
0
08 Jun 2021
Amortized Generation of Sequential Algorithmic Recourses for Black-box Models
Sahil Verma
Keegan E. Hines
John P. Dickerson
22
23
0
07 Jun 2021
Accurate Shapley Values for explaining tree-based models
Salim I. Amoukou
Nicolas Brunel
Tangi Salaun
TDI
FAtt
16
13
0
07 Jun 2021
3DB: A Framework for Debugging Computer Vision Models
Guillaume Leclerc
Hadi Salman
Andrew Ilyas
Sai H. Vemprala
Logan Engstrom
...
Pengchuan Zhang
Shibani Santurkar
Greg Yang
Ashish Kapoor
Aleksander Madry
40
40
0
07 Jun 2021
Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems
Jeff Druce
M. Harradon
J. Tittle
XAI
11
16
0
07 Jun 2021
Causal Abstractions of Neural Networks
Atticus Geiger
Hanson Lu
Thomas Icard
Christopher Potts
NAI
CML
33
228
0
06 Jun 2021
Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning
Yatao Bian
Yu Rong
Tingyang Xu
Jiaxiang Wu
Andreas Krause
Junzhou Huang
51
16
0
05 Jun 2021
Constrained Generalized Additive 2 Model with Consideration of High-Order Interactions
Akihisa Watanabe
Michiya Kuramata
Kaito Majima
Haruka Kiyohara
Kensho Kondo
Kazuhide Nakata
AI4CE
17
2
0
05 Jun 2021
Impact of data-splits on generalization: Identifying COVID-19 from cough and context
Makkunda Sharma
Nikhil Shenoy
Jigar Doshi
Piyush Bagad
Aman Dalmia
Parag Bhamare
A. Mahale
S. Rane
Neeraj Agrawal
R. Panicker
OOD
61
4
0
05 Jun 2021
Counterfactual Explanations Can Be Manipulated
Dylan Slack
Sophie Hilgard
Himabindu Lakkaraju
Sameer Singh
20
136
0
04 Jun 2021
A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations
Chaofan Chen
Kangcheng Lin
Cynthia Rudin
Yaron Shaposhnik
Sijia Wang
Tong Wang
47
41
0
04 Jun 2021
Evaluating Local Explanations using White-box Models
Amir Hossein Akhavan Rahnama
Judith Butepage
Pierre Geurts
Henrik Bostrom
FAtt
30
0
0
04 Jun 2021
Finding and Fixing Spurious Patterns with Explanations
Gregory Plumb
Marco Tulio Ribeiro
Ameet Talwalkar
43
41
0
03 Jun 2021
Exploring Distantly-Labeled Rationales in Neural Network Models
Quzhe Huang
Shengqi Zhu
Yansong Feng
Dongyan Zhao
12
10
0
03 Jun 2021
Causality in Neural Networks -- An Extended Abstract
Abbavaram Gowtham Reddy
CML
OOD
16
1
0
03 Jun 2021
Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution
Jiacheng Xu
Greg Durrett
38
16
0
03 Jun 2021
Towards an Explanation Space to Align Humans and Explainable-AI Teamwork
G. Cabour
A. Morales
É. Ledoux
S. Bassetto
30
5
0
02 Jun 2021
On Efficiently Explaining Graph-Based Classifiers
Xuanxiang Huang
Yacine Izza
Alexey Ignatiev
Sasha Rubin
FAtt
55
37
0
02 Jun 2021
When and Why does a Model Fail? A Human-in-the-loop Error Detection Framework for Sentiment Analysis
Zhe Liu
Yufan Guo
J. Mahmud
17
9
0
02 Jun 2021
The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
Peter Hase
Harry Xie
Joey Tianyi Zhou
OODD
LRM
FAtt
43
91
0
01 Jun 2021