"Why Should I Trust You?": Explaining the Predictions of Any Classifier

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Tags: FAtt, FaML
Links: arXiv · PDF · HTML
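The cited paper introduces LIME (Local Interpretable Model-agnostic Explanations), which explains an individual prediction by fitting a sparse linear surrogate to the classifier's behavior on perturbed samples around that instance. As a minimal sketch of how this is typically done with the authors' open-source Python package `lime` (the package and its API are assumed from its public documentation; they are not shown on this page):

    # pip install lime scikit-learn
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Train any black-box classifier; LIME only needs its predict_proba function.
    data = load_iris()
    clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    # The explainer uses training-data statistics to perturb around an instance.
    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
        mode="classification",
    )

    # Explain one prediction: fit a local sparse linear model on perturbed samples.
    exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
    print(exp.as_list())  # [(feature condition, local weight), ...]

Each returned pair is a human-readable feature condition and its weight in the local surrogate, which is the unit of explanation the paper proposes.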

Papers citing '"Why Should I Trust You?": Explaining the Predictions of Any Classifier'

50 / 4,255 papers shown
• A New Approach to Backtracking Counterfactual Explanations: A Unified Causal Framework for Efficient Model Interpretability
  Pouria Fatemi, Ehsan Sharifian, Mohammad Hossein Yassaee · 05 May 2025

• Explainable Face Recognition via Improved Localization
  Rashik Shadman, Daqing Hou, Faraz Hussain, M. G. Sarwar Murshed · CVBM, FAtt · 04 May 2025

• PointExplainer: Towards Transparent Parkinson's Disease Diagnosis
  Xuechao Wang, S. Nõmm, Junqing Huang, Kadri Medijainen, A. Toomela, Michael Ruzhansky · AAML, FAtt · 04 May 2025

• ABE: A Unified Framework for Robust and Faithful Attribution-Based Explainability
  Zhiyu Zhu, Jiayu Zhang, Zhibo Jin, Fang Chen, Jianlong Zhou · FAtt · 03 May 2025

• Exploring the Impact of Explainable AI and Cognitive Capabilities on Users' Decisions
  Federico Maria Cau, Lucio Davide Spano · 02 May 2025

• Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods
  Mahdi Dhaini, Ege Erdogan, Nils Feldhus, Gjergji Kasneci · 02 May 2025

• Thinking Outside the Template with Modular GP-GOMEA
  Joe Harrison, Peter A. N. Bosman, Tanja Alderliesten · 02 May 2025

• Explainable AI Based Diagnosis of Poisoning Attacks in Evolutionary Swarms
  Mehrdad Asadi, Roxana Rădulescu, Ann Nowé · AAML · 02 May 2025

• Explainable Machine Learning for Cyberattack Identification from Traffic Flows
  Yujing Zhou, Marc L. Jacquet, Robel Dawit, Skyler Fabre, Dev Sarawat, ..., Yongxin Liu, Dahai Liu, Hongyun Chen, Jian Wang, Huihui Wang · AAML · 02 May 2025

• Machine Learning for Cyber-Attack Identification from Traffic Flows
  Yujing Zhou, Marc L. Jacquet, Robel Dawit, Skyler Fabre, Dev Sarawat, ..., Yongxin Liu, Dahai Liu, Hongyun Chen, Jian Wang, Huihui Wang · 02 May 2025

• Overview and practical recommendations on using Shapley Values for identifying predictive biomarkers via CATE modeling
  David Svensson, Erik Hermansson, N. Nikolaou, Konstantinos Sechidis, Ilya Lipkovich · CML · 02 May 2025

• Computational Identification of Regulatory Statements in EU Legislation
  Gijs Jan Brandsma, Jens Blom-Hansen, Christiaan Meijer, Kody Moodley · AILaw · 01 May 2025

• Combining LLMs with Logic-Based Framework to Explain MCTS
  Ziyan An, Xia Wang, Hendrik Baier, Zirong Chen, A. Dubey, Taylor T. Johnson, Jonathan Sprinkle, Ayan Mukhopadhyay, Meiyi Ma · 01 May 2025

• Self-Ablating Transformers: More Interpretability, Less Sparsity
  Jeremias Ferrao, Luhan Mikaelson, Keenan Pepper, Natalia Perez-Campanero Antolin · MILM · 01 May 2025

• CognitionNet: A Collaborative Neural Network for Play Style Discovery in Online Skill Gaming Platform
  Rukma Talwadker, Surajit Chakrabarty, Aditya Pareek, Tridib Mukherjee, Deepak Saini · 01 May 2025

• IP-CRR: Information Pursuit for Interpretable Classification of Chest Radiology Reports
  Yuyan Ge, Kwan Ho Ryan Chan, Pablo Messina, René Vidal · 30 Apr 2025

• RuleKit 2: Faster and simpler rule learning
  Adam Gudyś, Cezary Maszczyk, Joanna Badura, Adam Grzelak, Marek Sikora, Łukasz Wróbel · AI4TS · 29 Apr 2025

• Explanations Go Linear: Interpretable and Individual Latent Encoding for Post-hoc Explainability
  Simone Piaggesi, Riccardo Guidotti, F. Giannotti, D. Pedreschi · FAtt, LRM · 29 Apr 2025

• In defence of post-hoc explanations in medical AI
  Joshua Hatherley, Lauritz Munch, Jens Christian Bjerring · 29 Apr 2025

• A Domain-Agnostic Scalable AI Safety Ensuring Framework
  Beomjun Kim, Kangyeon Kim, Sunwoo Kim, Heejin Ahn · 29 Apr 2025

• Unsupervised Surrogate Anomaly Detection
  Simon Klüttermann, Tim Katzke, Emmanuel Müller · 29 Apr 2025

• GiBy: A Giant-Step Baby-Step Classifier For Anomaly Detection In Industrial Control Systems
  Sarad Venugopalan, Sridhar Adepu · 29 Apr 2025

• GMAR: Gradient-Driven Multi-Head Attention Rollout for Vision Transformer Interpretability
  Sehyeong Jo, Gangjae Jang, Haesol Park · 28 Apr 2025

• ODExAI: A Comprehensive Object Detection Explainable AI Evaluation
  Loc Phuc Truong Nguyen, Hung Nguyen, Hung Cao · 27 Apr 2025

• SSA-UNet: Advanced Precipitation Nowcasting via Channel Shuffling
  Marco Turzi, Siamak Mehrkanoon · AI4TS · 25 Apr 2025

• Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts
  M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik · KELM · 24 Apr 2025

• What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
  Felix Kares, Timo Speith, Hanwei Zhang, Markus Langer · FAtt, XAI · 23 Apr 2025

• Learning Explainable Dense Reward Shapes via Bayesian Optimization
  Ryan Koo, Ian Yang, Vipul Raheja, Mingyi Hong, Kwang-Sung Jun, Dongyeop Kang · 22 Apr 2025

• Intrinsic Barriers to Explaining Deep Foundation Models
  Zhen Tan, Huan Liu · AI4CE · 21 Apr 2025

• Surrogate Fitness Metrics for Interpretable Reinforcement Learning
  Philipp Altmann, Céline Davignon, Maximilian Zorn, Fabian Ritz, Claudia Linnhoff-Popien, Thomas Gabor · 20 Apr 2025

• Walk the Talk? Measuring the Faithfulness of Large Language Model Explanations
  Katie Matton, Robert Osazuwa Ness, John Guttag, Emre Kıcıman · 19 Apr 2025

• Mathematical Programming Models for Exact and Interpretable Formulation of Neural Networks
  Masoud Ataei, Edrin Hasaj, Jacob Gipp, Sepideh Forouzi · 19 Apr 2025

• Enhancing Multilingual Sentiment Analysis with Explainability for Sinhala, English, and Code-Mixed Content
  Azmarah Rizvi, Navojith Thamindu, A.M.N.H. Adhikari, W.P.U. Senevirathna, Dharshana Kasthurirathna, Lakmini Abeywardhana · 18 Apr 2025

• Probabilistic Stability Guarantees for Feature Attributions
  Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong · 18 Apr 2025

• Transformation of audio embeddings into interpretable, concept-based representations
  Alice Zhang, Edison Thomaz, Lie Lu · 18 Apr 2025

• Long-context Non-factoid Question Answering in Indic Languages
  Ritwik Mishra, R. Shah, Ponnurangam Kumaraguru · 18 Apr 2025

• Decoding Vision Transformers: the Diffusion Steering Lens
  Ryota Takatsuki, Sonia Joseph, Ippei Fujisawa, Ryota Kanai · DiffM · 18 Apr 2025

• Leakage and Interpretability in Concept-Based Models
  Enrico Parisini, Tapabrata Chakraborti, Chris Harbron, Ben D. MacArthur, Christopher R. S. Banerji · 18 Apr 2025

• Learning to Attribute with Attention
  Benjamin Cohen-Wang, Yung-Sung Chuang, Aleksander Madry · 18 Apr 2025

• PCBEAR: Pose Concept Bottleneck for Explainable Action Recognition
  Jongseo Lee, Wooil Lee, Gyeong-Moon Park, Seong Tae Kim, Jinwoo Choi · 17 Apr 2025

• Readable Twins of Unreadable Models
  Krzysztof Pancerz, Piotr Kulicki, Michał Kalisz, Andrzej Burda, Maciej Stanisławski, Jaromir Sarzyński · SyDa · 17 Apr 2025

• Representation Learning for Tabular Data: A Comprehensive Survey
  Jun-Peng Jiang, Si-Yang Liu, Hao-Run Cai, Qile Zhou, Han-Jia Ye · LMTD · 17 Apr 2025

• Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations
  Yiyou Sun, Y. Gai, Lijie Chen, Abhilasha Ravichander, Yejin Choi, D. Song · HILM · 17 Apr 2025

• Towards Spatially-Aware and Optimally Faithful Concept-Based Explanations
  Shubham Kumar, Dwip Dalal, Narendra Ahuja · 15 Apr 2025

• Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations
  Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth C. Fong, Parastoo Abtahi · FAtt, HAI · 14 Apr 2025

• GlyTwin: Digital Twin for Glucose Control in Type 1 Diabetes Through Optimal Behavioral Modifications Using Patient-Centric Counterfactuals
  Asiful Arefeen, Saman Khamesian, Maria Adela Grando, Bithika Thompson, Hassan Ghasemzadeh · 14 Apr 2025

• Challenges in interpretability of additive models
  Xinyu Zhang, Julien Martinelli, S. T. John · AAML, AI4CE · 14 Apr 2025

• Explainable Artificial Intelligence techniques for interpretation of food datasets: a review
  Leonardo Arrighi, Ingrid Alves de Moraes, Marco Zullich, Michele Simonato, Douglas Fernandes Barbin, Sylvio Barbon Junior · 12 Apr 2025

• Are We Merely Justifying Results ex Post Facto? Quantifying Explanatory Inversion in Post-Hoc Model Explanations
  Zhen Tan, Song Wang, Yifan Li, Yu Kong, Jundong Li, Tianlong Chen, Huan Liu · FAtt · 11 Apr 2025

• Beyond Black-Box Predictions: Identifying Marginal Feature Effects in Tabular Transformer Networks
  Anton Thielmann, Arik Reuter, Benjamin Saefken · LMTD · 11 Apr 2025