1602.04938
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier
50 / 3,508 papers shown
The Quest for the Right Mediator: A History, Survey, and Theoretical Grounding of Causal Interpretability
Aaron Mueller
Jannik Brinkmann
Millicent Li
Samuel Marks
Koyena Pal
...
Arnab Sen Sharma
Jiuding Sun
Eric Todd
David Bau
Yonatan Belinkov
CML
109
25
0
02 Aug 2024
EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody
Coen Schoof
Hao-Wen Dong
Mauro Conti
S. Picek
AAML
63
2
0
02 Aug 2024
Interpreting Global Perturbation Robustness of Image Models using Axiomatic Spectral Importance Decomposition
Róisín Luo
James McDermott
C. O'Riordan
AAML
45
1
0
02 Aug 2024
META-ANOVA: Screening interactions for interpretable machine learning
Daniel A. Serino
Marc L. Klasky
Chanmoo Park
Dongha Kim
Yongdai Kim
65
0
0
02 Aug 2024
Explaining a probabilistic prediction on the simplex with Shapley compositions
Paul-Gauthier Noé
Miquel Perelló Nieto
J. Bonastre
Peter Flach
TDI
FAtt
68
0
0
02 Aug 2024
Explainable Emotion Decoding for Human and Computer Vision
Alessio Borriero
Martina Milazzo
M. Diano
Davide Orsenigo
Maria Chiara Villa
Chiara Di Fazio
Marco Tamietto
Alan Perotti
50
0
0
01 Aug 2024
Improving Machine Learning Based Sepsis Diagnosis Using Heart Rate Variability
Sai Balaji
Christopher Sun
Anaiy Somalwar
27
0
0
01 Aug 2024
Probabilistic Scoring Lists for Interpretable Machine Learning
Jonas Hanselle
Stefan Heid
Zhigang Zeng
Eyke Hüllermeier
58
0
0
31 Jul 2024
Need of AI in Modern Education: in the Eyes of Explainable AI (xAI)
Supriya Manna
Dionis Barcari
118
3
0
31 Jul 2024
Can LLMs be Fooled? Investigating Vulnerabilities in LLMs
Sara Abdali
Jia He
C. Barberan
Richard Anarfi
80
7
0
30 Jul 2024
Faithful and Plausible Natural Language Explanations for Image Classification: A Pipeline Approach
Adam Wojciechowski
Mateusz Lango
Ondrej Dusek
FAtt
58
1
0
30 Jul 2024
Can I trust my anomaly detection system? A case study based on explainable AI
Muhammad Rashid
E. Amparore
Enrico Ferrari
Damiano Verda
64
0
0
29 Jul 2024
On the Evaluation Consistency of Attribution-based Explanations
Jiarui Duan
Haoling Li
Haofei Zhang
Hao Jiang
Mengqi Xue
Li Sun
Mingli Song
XAI
66
1
0
28 Jul 2024
Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector
Xianren Zhang
Dongwon Lee
Suhang Wang
VLM
FAtt
76
4
0
27 Jul 2024
CoLiDR: Concept Learning using Aggregated Disentangled Representations
Sanchit Sinha
Guangzhi Xiong
Aidong Zhang
91
2
0
27 Jul 2024
Formalization of Dialogue in the Decision Support System of Dr. Watson Type
Saveli Goldberg
Vladimir Sluchak
14
0
0
27 Jul 2024
Understanding XAI Through the Philosopher's Lens: A Historical Perspective
Martina Mattioli
Antonio Emanuele Cinà
Marcello Pelillo
XAI
99
0
0
26 Jul 2024
Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust
Ruoxi Shang
Gary Hsieh
Chirag Shah
114
0
0
25 Jul 2024
Exploring the Plausibility of Hate and Counter Speech Detectors with Explainable AI
Adrian Jaques Böck
D. Slijepcevic
Matthias Zeppelzauer
61
0
0
25 Jul 2024
Explaining the Model, Protecting Your Data: Revealing and Mitigating the Data Privacy Risks of Post-Hoc Model Explanations via Membership Inference
Catherine Huang
Martin Pawelczyk
Himabindu Lakkaraju
AAML
49
1
0
24 Jul 2024
What Matters in Explanations: Towards Explainable Fake Review Detection Focusing on Transformers
Md. Shajalal
Md. Atabuzzaman
Alexander Boden
Gunnar Stevens
Delong Du
76
0
0
24 Jul 2024
The Hybrid Forecast of S&P 500 Volatility ensembled from VIX, GARCH and LSTM models
Natalia Roszyk
R. Ślepaczuk
AIFin
VLM
36
2
0
23 Jul 2024
Aggregated Attributions for Explanatory Analysis of 3D Segmentation Models
Maciej Chrabaszcz
Hubert Baniecki
Piotr Komorowski
Szymon Płotka
Przemysław Biecek
56
2
0
23 Jul 2024
Spurious Correlations in Concept Drift: Can Explanatory Interaction Help?
Cristiana Lalletti
Stefano Teso
63
1
0
23 Jul 2024
A Survey of Explainable Artificial Intelligence (XAI) in Financial Time Series Forecasting
Pierre-Daniel Arsenault
Shengrui Wang
Jean-Marc Patenande
XAI
AI4TS
111
2
0
22 Jul 2024
Explaining Decisions in ML Models: a Parameterized Complexity Analysis
S. Ordyniak
Giacomo Paesani
Mateusz Rychlicki
Stefan Szeider
63
1
0
22 Jul 2024
The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis
Benjamin Frész
Vincent Philipp Goebels
Safa Omri
Danilo Brajovic
Andreas Aichele
Janika Kutz
Jens Neuhüttler
Marco F. Huber
110
0
0
22 Jul 2024
Learning at a Glance: Towards Interpretable Data-limited Continual Semantic Segmentation via Semantic-Invariance Modelling
Bo Yuan
Danpei Zhao
Z. Shi
VLM
CLL
81
3
0
22 Jul 2024
They Look Like Each Other: Case-based Reasoning for Explainable Depression Detection on Twitter using Large Language Models
Mohammad Saeid Mahdavinejad
Peyman Adibi
A. Monadjemi
Pascal Hitzler
98
0
0
21 Jul 2024
Deep multimodal saliency parcellation of cerebellar pathways: linking microstructure and individual function through explainable multitask learning
Ari Tchetchenian
L. Zekelman
Yuqian Chen
J. Rushmore
Fan Zhang
...
N. Makris
Yogesh Rathi
Erik H. W. Meijering
Yang Song
L. O’Donnell
76
1
0
21 Jul 2024
Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives
D. Hagos
Rick Battle
Danda B. Rawat
LM&MA
OffRL
88
25
0
20 Jul 2024
DEPICT: Diffusion-Enabled Permutation Importance for Image Classification Tasks
Sarah Jabbour
Gregory Kondas
Ella Kazerooni
Michael Sjoding
David Fouhey
Jenna Wiens
FAtt
DiffM
58
1
0
19 Jul 2024
Evaluating the Reliability of Self-Explanations in Large Language Models
Korbinian Randl
John Pavlopoulos
Aron Henriksson
Tony Lindgren
LRM
104
1
0
19 Jul 2024
Explainable Post hoc Portfolio Management Financial Policy of a Deep Reinforcement Learning agent
Alejandra de la Rica Escudero
E.C. Garrido-Merchán
Maria Coronado Vaca
AIFin
79
3
0
19 Jul 2024
EmoCAM: Toward Understanding What Drives CNN-based Emotion Recognition
Youssef Doulfoukar
Laurent Mertens
Joost Vennekens
FAtt
70
0
0
19 Jul 2024
Auditing Local Explanations is Hard
Robi Bhattacharjee
U. V. Luxburg
LRM
MLAU
FAtt
88
2
0
18 Jul 2024
Many Perception Tasks are Highly Redundant Functions of their Input Data
Rahul Ramesh
Anthony Bisulco
Ronald W. DiTullio
Linran Wei
Vijay Balasubramanian
Kostas Daniilidis
Pratik Chaudhari
98
2
0
18 Jul 2024
Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI
Qi Huang
Emanuele Mezzi
Osman Mutlu
Miltiadis Kofinas
Vidya Prasad
Shadnan Azwad Khan
Elena Ranguelova
Niki van Stein
92
0
0
17 Jul 2024
A survey and taxonomy of methods interpreting random forest models
Maissae Haddouchi
A. Berrado
93
3
0
17 Jul 2024
End-to-end Stroke imaging analysis, using reservoir computing-based effective connectivity, and interpretable Artificial intelligence
W. Ciezobka
J. Falco-Roget
C. Koba
A. Crimi
64
1
0
17 Jul 2024
Geometric Remove-and-Retrain (GOAR): Coordinate-Invariant eXplainable AI Assessment
Yong-Hyun Park
Junghoon Seo
Bomseok Park
Seongsu Lee
Junghyo Jo
AAML
69
0
0
17 Jul 2024
Evaluating graph-based explanations for AI-based recommender systems
Simon Delarue
Astrid Bertrand
Tiphaine Viard
69
0
0
17 Jul 2024
I2AM: Interpreting Image-to-Image Latent Diffusion Models via Bi-Attribution Maps
Junseo Park
Hyeryung Jang
252
1
0
17 Jul 2024
Are Linear Regression Models White Box and Interpretable?
A. M. Salih
Yuhe Wang
XAI
64
2
0
16 Jul 2024
Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent
Karolis Jucys
George Adamopoulos
Mehrab Hamidi
Stephanie Milani
Mohammad Reza Samsami
Artem Zholus
Sonia Joseph
Blake A. Richards
Irina Rish
Özgür Simsek
67
3
0
16 Jul 2024
Benchmarking the Attribution Quality of Vision Models
Robin Hesse
Simone Schaub-Meyer
Stefan Roth
FAtt
82
3
0
16 Jul 2024
Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions
Harrie Oosterhuis
Lijun Lyu
Avishek Anand
FAtt
99
1
0
16 Jul 2024
XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach
Truong Thanh Hung Nguyen
Phuc Truong Loc Nguyen
Hung Cao
60
4
0
16 Jul 2024
Towards consistency of rule-based explainer and black box model -- fusion of rule induction and XAI-based feature importance
M. Kozielski
Marek Sikora
Lukasz Wawrowski
94
1
0
16 Jul 2024
Feature Inference Attack on Shapley Values
Xinjian Luo
Yangfan Jiang
X. Xiao
AAML
FAtt
87
21
0
16 Jul 2024