"Why Should I Trust You?": Explaining the Predictions of Any Classifier

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Topics: FAtt, FaML
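As context for the citation list below, here is a minimal sketch of how the LIME method introduced in this paper is commonly applied through the open-source `lime` Python package. The dataset, model, and parameter values are illustrative assumptions, not details taken from this page:

```python
# Minimal LIME sketch: explain one prediction of a black-box classifier
# by fitting a sparse local linear surrogate around that instance.
# Dataset, model, and parameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The explainer perturbs samples around an instance and weights them by proximity.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")  # local importance of each feature
```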

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

Showing 50 of 4,307 citing papers.
• Less or More: Towards Glanceable Explanations for LLM Recommendations Using Ultra-Small Devices (26 Feb 2025)
  Xinru Wang, Mengjie Yu, Hannah Nguyen, Michael Iuzzolino, Tianyi Wang, ..., Ting Zhang, Naveen Sendhilnathan, Hrvoje Benko, Haijun Xia, Tanya R. Jonker
• Grad-ECLIP: Gradient-based Visual and Textual Explanations for CLIP (26 Feb 2025)
  Chenyang Zhao, Kun Wang, J. H. Hsiao, Antoni B. Chan
  Topics: CLIP
• Models That Are Interpretable But Not Transparent (26 Feb 2025)
  Chudi Zhong, Panyu Chen, Cynthia Rudin
  Topics: AAML
• A Method for Evaluating the Interpretability of Machine Learning Models in Predicting Bond Default Risk Based on LIME and SHAP (26 Feb 2025)
  Yan Zhang, Lin Chen, Yixiang Tian
  Topics: FAtt
• GNN-XAR: A Graph Neural Network for Explainable Activity Recognition in Smart Homes (25 Feb 2025)
  Michele Fiori, Davide Mor, Gabriele Civitarese, Claudio Bettini
• Can LLMs Explain Themselves Counterfactually? (25 Feb 2025)
  Zahra Dehghanighobadi, Asja Fischer, Muhammad Bilal Zafar
  Topics: LRM
• Machine Learning-Based Prediction of ICU Mortality in Sepsis-Associated Acute Kidney Injury Patients Using MIMIC-IV Database with Validation from eICU Database (25 Feb 2025)
  Shuheng Chen, Junyi Fan, Elham Pishgar, Kamiar Alaei, G. Placencia, Maryam Pishgar
• Model Lakes (24 Feb 2025)
  Koyena Pal, David Bau, Renée J. Miller
• Analyzing Factors Influencing Driver Willingness to Accept Advanced Driver Assistance Systems (23 Feb 2025)
  Hannah Musau, Nana Kankam Gyimah, Judith Mwakalonge, G. Comert, Saidi Siuhi
• Interpretable Retinal Disease Prediction Using Biology-Informed Heterogeneous Graph Representations (23 Feb 2025)
  Laurin Lux, Alexander H. Berger, Maria Romeo Tricas, Alaa E. Fayed, Shri Kiran Srinivasan, Linus Kreitner, Jonas Weidner, M. Menten, Daniel Rueckert, Johannes C. Paetzold
• Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models (21 Feb 2025)
  Ranjan Sapkota, Shaina Raza, Manoj Karkee
• Evaluating The Explainability of State-of-the-Art Deep Learning-based Network Intrusion Detection Systems (21 Feb 2025)
  Ayush Kumar, V. Thing
• Detecting Linguistic Bias in Government Documents Using Large language Models (20 Feb 2025)
  Milena de Swart, Floris den Hengst, Jieying Chen
• SPEX: Scaling Feature Interaction Explanations for LLMs (20 Feb 2025)
  J. S. Kang, Landon Butler, Abhineet Agarwal, Yigit Efe Erginbas, Ramtin Pedarsani, Kannan Ramchandran, Bin Yu
  Topics: VLM, LRM
• G-Refer: Graph Retrieval-Augmented Large Language Model for Explainable Recommendation (18 Feb 2025)
  Yuhan Li, Xinni Zhang, Linhao Luo, Heng Chang, Yuxiang Ren, Irwin King, Jiyang Li
• From Abstract to Actionable: Pairwise Shapley Values for Explainable AI (18 Feb 2025)
  Jiaxin Xu, Hung Chau, Angela Burden
  Topics: TDI
• ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability (17 Feb 2025)
  Ryuto Koike, Masahiro Kaneko, Ayana Niwa, Preslav Nakov, Naoaki Okazaki
  Topics: DeLMO
• ExplainReduce: Summarising local explanations via proxies (17 Feb 2025)
  Lauri Seppäläinen, Mudong Guo, Kai Puolamäki
  Topics: FAtt
• Suboptimal Shapley Value Explanations (17 Feb 2025)
  Xiaolei Lu
  Topics: FAtt
• Time-series attribution maps with regularized contrastive learning (17 Feb 2025)
  Steffen Schneider, Rodrigo González Laiz, Anastasiia Filippova, Markus Frey, Mackenzie W. Mathis
  Topics: BDL, FAtt, CML, AI4TS
• The shape of the brain's connections is predictive of cognitive performance: an explainable machine learning study (17 Feb 2025)
  Yui Lo, Yuqian Chen, Dongnan Liu, Wan Liu, L. Zekelman, ..., Yogesh Rathi, N. Makris, A. Golby, Weidong Cai, L. O’Donnell
• From Text to Trust: Empowering AI-assisted Decision Making with Adaptive LLM-powered Analysis (17 Feb 2025)
  Zhuoyan Li, Hangxiao Zhu, Zhuoran Lu, Ziang Xiao, Ming Yin
• Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution (17 Feb 2025)
  Shichang Zhang, Tessa Han, Usha Bhalla, Hima Lakkaraju
  Topics: FAtt
• Detecting Systematic Weaknesses in Vision Models along Predefined Human-Understandable Dimensions (17 Feb 2025)
  Sujan Sai Gannamaneni, Rohil Prakash Rao, Michael Mock, Maram Akila, Stefan Wrobel
  Topics: AAML
• Demystifying Hateful Content: Leveraging Large Multimodal Models for Hateful Meme Detection with Explainable Decisions (16 Feb 2025)
  Ming Shan Hee, Roy Ka-wei Lee
  Topics: VLM
• Narrowing Information Bottleneck Theory for Multimodal Image-Text Representations Interpretability (16 Feb 2025)
  Zhiyu Zhu, Zhibo Jin, Jiayu Zhang, Nan Yang, Jiahao Huang, Jianlong Zhou, Fang Chen
• Accelerating Anchors via Specialization and Feature Transformation (16 Feb 2025)
  Haonan Yu, Junhao Liu, Xin Zhang
• Recent Advances in Malware Detection: Graph Learning and Explainability (14 Feb 2025)
  Hossein Shokouhinejad, Roozbeh Razavi-Far, Hesamodin Mohammadian, Mahdi Rabbani, Samuel Ansong, Griffin Higgins, Ali Ghorbani
  Topics: AAML
• MorphNLI: A Stepwise Approach to Natural Language Inference Using Text Morphing (13 Feb 2025)
  Vlad-Andrei Negru, Robert Vacareanu, Camelia Lemnaru, Mihai Surdeanu, Rodica Potolea
• Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking (13 Feb 2025)
  Greta Warren, Irina Shklovski, Isabelle Augenstein
  Topics: OffRL
• Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models (11 Feb 2025)
  Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read
  Topics: FAtt, XAI
• SMAB: MAB based word Sensitivity Estimation Framework and its Applications in Adversarial Text Generation (10 Feb 2025)
  Saurabh Kumar Pandey, S. Vashistha, Debrup Das, Somak Aditya, Monojit Choudhury
  Topics: AAML
• Comprehensive Framework for Evaluating Conversational AI Chatbots (10 Feb 2025)
  Shailja Gupta, Rajesh Ranjan, Surya Narayan Singh
• DCENWCNet: A Deep CNN Ensemble Network for White Blood Cell Classification with LIME-Based Explainability (08 Feb 2025)
  Sibasish Dhibar
• Coherent Local Explanations for Mathematical Optimization (07 Feb 2025)
  Daan Otto, Jannis Kurtz, S. Ilker Birbil
• ExpProof: Operationalizing Explanations for Confidential Models with ZKPs (06 Feb 2025)
  Chhavi Yadav, Evan Monroe Laufer, Dan Boneh, Kamalika Chaudhuri
• CoRPA: Adversarial Image Generation for Chest X-rays Using Concept Vector Perturbations and Generative Models (04 Feb 2025)
  Amy Rafferty, Rishi Ramaesh, Ajitha Rajan
  Topics: MedIm, AAML
• Discovering Chunks in Neural Embeddings for Interpretability (03 Feb 2025)
  Shuchen Wu, Stephan Alaniz, Eric Schulz, Zeynep Akata
• Guidance Source Matters: How Guidance from AI, Expert, or a Group of Analysts Impacts Visual Data Preparation and Analysis (02 Feb 2025)
  Arpit Narechania, Alex Endert, Atanu R. Sinha
• INSIGHT: Enhancing Autonomous Driving Safety through Vision-Language Models on Context-Aware Hazard Detection and Edge Case Evaluation (01 Feb 2025)
  Dianwei Chen, Zifan Zhang, Yuchen Liu, Xianfeng Terry Yang
  Topics: VLM
• Sparse Autoencoder Insights on Voice Embeddings (31 Jan 2025)
  Daniel Pluth, Yu Zhou, Vijay K. Gurbani
• Fake News Detection After LLM Laundering: Measurement and Explanation (29 Jan 2025)
  Rupak Kumar Das, Jonathan Dodge
• Is Conversational XAI All You Need? Human-AI Decision Making With a Conversational XAI Assistant (29 Jan 2025)
  Gaole He, Nilay Aishwarya, U. Gadiraju
• Extending Information Bottleneck Attribution to Video Sequences (28 Jan 2025)
  Veronika Solopova, Lucas Schmidt, Dorothea Kolossa
• Explaining Decisions of Agents in Mixed-Motive Games (28 Jan 2025)
  Maayan Orner, Oleg Maksimov, Akiva Kleinerman, Charles Ortiz, Sarit Kraus
• FIT-Print: Towards False-claim-resistant Model Ownership Verification via Targeted Fingerprint (28 Jan 2025)
  Shuo Shao, Haozhe Zhu, Hongwei Yao, Yiming Li, Tianwei Zhang, Zhanyue Qin, Kui Ren
• A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics (28 Jan 2025)
  Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Min Zhang
  Topics: LM&MA, AILaw
• Enhancing Visual Inspection Capability of Multi-Modal Large Language Models on Medical Time Series with Supportive Conformalized and Interpretable Small Specialized Models (28 Jan 2025)
  Huayu Li, Xiwen Chen, C. Zhang, S. Quan, William D.S. Killgore, Shu-Fen Wung, Chen X. Chen, Geng Yuan, Jin Lu, Ao Li
• B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable (28 Jan 2025)
  Shreyash Arya, Sukrut Rao, Moritz Bohle, Bernt Schiele
• Evaluating the Effectiveness of XAI Techniques for Encoder-Based Language Models (26 Jan 2025)
  Melkamu Mersha, Mesay Gemeda Yigezu, Jugal Kalita
  Topics: ELM