"Why Should I Trust You?": Explaining the Predictions of Any Classifier
v1v2v3 (latest)

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
    FAttFaML
ArXiv (abs)PDFHTML
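For orientation before the citation list: the paper above introduces LIME, a model-agnostic method that explains a single prediction of any black-box classifier by fitting a sparse local surrogate model around it. Below is a minimal sketch using the authors' open-source reference implementation (the `lime` package, github.com/marcotcr/lime); the scikit-learn dataset and classifier are illustrative assumptions, not part of the paper itself.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any black-box classifier; LIME only needs its predict_proba function.
data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data.data, data.target)

# Fit perturbation statistics on the training data so LIME can sample
# realistic neighbors around the instance being explained.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction with a sparse, locally weighted linear surrogate.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list(label=exp.available_labels()[0]))  # [(feature condition, weight), ...]
```

The returned pairs are the surrogate's coefficients: how much each (discretized) feature pushed this particular prediction toward the chosen class, which is exactly the local, per-instance notion of explanation the papers below build on.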

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 3,508 papers shown
Title
Linking in Style: Understanding learned features in deep learning models
Maren H. Wehrheim, Pamela Osuna-Vargas, Matthias Kaschube · GAN · 25 Sep 2024

Enhancing Feature Selection and Interpretability in AI Regression Tasks Through Feature Attribution
Alexander Hinterleitner, Thomas Bartz-Beielstein, Richard Schulz, Sebastian Spengler, Thomas Winter, Christoph Leitenmeier · 25 Sep 2024

Entailment-Driven Privacy Policy Classification with LLMs
Bhanuka Silva, Dishanika Denipitiyage, Suranga Seneviratne, Anirban Mahanti, Aruna Seneviratne · AILaw · 25 Sep 2024

Counterfactual Token Generation in Large Language Models
Ivi Chatzi, N. C. Benz, Eleni Straitouri, Stratis Tsirtsis, Manuel Gomez Rodriguez · LRM · 25 Sep 2024

Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach
Ruo Yang, Binghui Wang, M. Bilgic · FAtt · 24 Sep 2024

Machine learning approaches for automatic defect detection in photovoltaic systems
Swayam Rajat Mohanty, Moin Uddin Maruf, Vaibhav Singh, Zeeshan Ahmad · 24 Sep 2024
Creating Healthy Friction: Determining Stakeholder Requirements of Job Recommendation Explanations
Roan Schellingerhout, Francesco Barile, Nava Tintarev · 24 Sep 2024

TSFeatLIME: An Online User Study in Enhancing Explainability in Univariate Time Series Forecasting
Hongnan Ma, Kevin McAreavey, Weiru Liu · AI4TS, FAtt · 24 Sep 2024

Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making
Min Hun Lee, Renee Bao Xuan Ng, Silvana Xin Yi Choo, S. Thilarajah · 24 Sep 2024

Explaining word embeddings with perfect fidelity: Case study in research impact prediction
Lucie Dvorackova, Marcin P. Joachimiak, Michal Cerny, Adriana Kubecova, Vilem Sklenak, Tomas Kliegr · 24 Sep 2024

VLM's Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models
Nam Hyeon-Woo, Moon Ye-Bin, Wonseok Choi, Lee Hyun, Tae-Hyun Oh · CoGe · 23 Sep 2024
Quantifying Context Bias in Domain Adaptation for Object Detection
Hojun Son, Asma Almutairi, Arpan Kusari · AI4CE · 23 Sep 2024

A Comprehensive Survey with Critical Analysis for Deepfake Speech Detection
Lam Pham, Phat Lam, Dat Tran, Hieu Tang, Tin Nguyen, Alexander Schindler, Canh Vu, Alexander Polonsky · 23 Sep 2024

EQ-CBM: A Probabilistic Concept Bottleneck with Energy-based Models and Quantized Vectors
Sangwon Kim, Dasom Ahn, B. Ko, In-su Jang, Kwang-Ju Kim · 22 Sep 2024

Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki · XAI · 22 Sep 2024

To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems
Gaole He, Abri Bharos, U. Gadiraju · 22 Sep 2024

LLMs are One-Shot URL Classifiers and Explainers
Fariza Rashid, Nishavi Ranaweera, Ben Doyle, Suranga Seneviratne · LRM · 22 Sep 2024

ReFine: Boosting Time Series Prediction of Extreme Events by Reweighting and Fine-tuning
Jimeng Shi, Azam Shirali, Giri Narasimhan · AI4TS · 21 Sep 2024
The FIX Benchmark: Extracting Features Interpretable to eXperts
Helen Jin, Shreya Havaldar, Chaehyeon Kim, Anton Xue, Weiqiu You, ..., Bhuvnesh Jain, Amin Madani, M. Sako, Lyle Ungar, Eric Wong · 20 Sep 2024

Dermatologist-like explainable AI enhances melanoma diagnosis accuracy: eye-tracking study
T. Chanda, Sarah Haggenmueller, Tabea-Clara Bucher, Tim Holland-Letz, Harald Kittler, ..., Martin Jansen, Juliane Wacker, Joerg Wacker, Reader Study Consortium, T. Brinker · 20 Sep 2024

An Adaptive End-to-End IoT Security Framework Using Explainable AI and LLMs
S. Baral, Sajal Saha, Anwar Haque · 20 Sep 2024

Interpret the Predictions of Deep Networks via Re-Label Distillation
Yingying Hua, Shiming Ge, Daichi Zhang · FAtt · 20 Sep 2024

Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration
Philipp Spitzer, Joshua Holstein, Katelyn Morrison, Kenneth Holstein, Gerhard Satzger, Niklas Kühl · 19 Sep 2024
Counterfactual Explanations for Clustering Models
Aurora Spagnol, Kacper Sokol, Pietro Barbiero, Marc Langheinrich, Martin Gjoreski · CML · 19 Sep 2024

Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions Using fMRI Data
Suryansh Vidya, Kush Gupta, Amir Aly, Andy Wills, Emmanuel Ifeachor, Rohit Shankar · 19 Sep 2024

Abductive explanations of classifiers under constraints: Complexity and properties
Martin Cooper, Leila Amgoud · 18 Sep 2024

Towards Interpretable End-Stage Renal Disease (ESRD) Prediction: Utilizing Administrative Claims Data with Explainable AI Techniques
Yubo Li, Saba A. Al-Sayouri, R. Padman · 18 Sep 2024

Additive-feature-attribution methods: a review on explainable artificial intelligence for fluid dynamics and heat transfer
Andres Cremades, S. Hoyas, Ricardo Vinuesa · FAtt · 18 Sep 2024

Local Explanations and Self-Explanations for Assessing Faithfulness in black-box LLMs
Christos Fragkathoulas, Odysseas S. Chlapanis · LRM · 18 Sep 2024

Spontaneous Informal Speech Dataset for Punctuation Restoration
Xing Yi Liu, Homayoon Beigi · 17 Sep 2024
Gradient-free Post-hoc Explainability Using Distillation Aided Learnable Approach
Debarpan Bhattacharya, A. H. Poorjam, Deepak Mittal, Sriram Ganapathy · 17 Sep 2024

Trustworthy Conceptual Explanations for Neural Networks in Robot Decision-Making
Som Sagar, Aditya Taparia, Harsh Mankodiya, Pranav M Bidare, Yifan Zhou, Ransalu Senanayake · FAtt · 16 Sep 2024

Aligning Judgment Using Task Context and Explanations to Improve Human-Recommender System Performance
Divya K. Srivastava, Karen Feigh · 16 Sep 2024

LLMs for clinical risk prediction
Mohamed Rezk, Patricia Cabanillas Silva, F. Dahlweid · LM&MA · 16 Sep 2024

Optimal ablation for interpretability
Maximilian Li, Lucas Janson · FAtt · 16 Sep 2024

InfoDisent: Explainability of Image Classification Models by Information Disentanglement
Łukasz Struski, Dawid Rymarczyk, Jacek Tabor · 16 Sep 2024

MusicLIME: Explainable Multimodal Music Understanding
Theodoros Sotirou, Vassilis Lyberatos, Orfeas Menis Mastromichalakis, Giorgos Stamou · 16 Sep 2024
SHIRE: Enhancing Sample Efficiency using Human Intuition in REinforcement Learning
Amogh Joshi, Adarsh Kosta, Kaushik Roy · OffRL · 16 Sep 2024

Prevailing Research Areas for Music AI in the Era of Foundation Models
Megan Wei, M. Modrzejewski, Aswin Sivaraman, Dorien Herremans · MedIm · 14 Sep 2024

XSub: Explanation-Driven Adversarial Attack against Blackbox Classifiers via Feature Substitution
Kiana Vu, Phung Lai, Truc D. T. Nguyen · AAML · 13 Sep 2024

Towards certifiable AI in aviation: landscape, challenges, and opportunities
Hymalai Bello, Daniel Geißler, L. Ray, Stefan Muller-Divéky, Peter Muller, Shannon Kittrell, Mengxi Liu, Bo Zhou, Paul Lukowicz · 13 Sep 2024

LMAC-TD: Producing Time Domain Explanations for Audio Classifiers
Eleonora Mancini, Francesco Paissan, Mirco Ravanelli, Cem Subakan · 13 Sep 2024

XMOL: Explainable Multi-property Optimization of Molecules
Aye Phyu Phyu Aung, Jay Chaudhary, Ji Wei Yoon, Senthilnath Jayavelu · 12 Sep 2024

Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings
Wei Liu, Chris North, Rebecca Faust · 06 Sep 2024
Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?
Rui Wen, Michael Backes, Yang Zhang · TDI, AAML · 05 Sep 2024

Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning
Andrew Smart, Atoosa Kasirzadeh · 05 Sep 2024

Better Verified Explanations with Applications to Incorrectness and Out-of-Distribution Detection
Min Wu, Xiaofu Li, Haoze Wu, Clark Barrett · 04 Sep 2024

Decompose the model: Mechanistic interpretability in image models with Generalized Integrated Gradients (GIG)
Yearim Kim, Sangyu Han, Sangbum Han, Nojun Kwak · 03 Sep 2024

Interpreting Outliers in Time Series Data through Decoding Autoencoder
Patrick Knab, Sascha Marton, Christian Bartelt, Robert Fuder · 03 Sep 2024

Explanation Space: A New Perspective into Time Series Interpretability
Shahbaz Rezaei, Xin Liu · AI4TS · 02 Sep 2024