"Why Should I Trust You?": Explaining the Predictions of Any Classifier
v1v2v3 (latest)

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
    FAttFaML
ArXiv (abs)PDFHTML

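For context on what the citing papers below build on: this paper introduces LIME, which explains a single prediction of any black-box classifier by perturbing the input, querying the model on the perturbed samples, and fitting a sparse, locally weighted linear surrogate whose coefficients serve as the explanation. The following is a minimal sketch using the authors' open-source lime package; the iris dataset and random-forest classifier are illustrative assumptions, not part of the paper.

```python
# Minimal LIME sketch: explain one prediction of a black-box classifier.
# Assumes `pip install lime scikit-learn`; the iris/random-forest setup
# is illustrative only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
black_box = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME samples perturbations around one instance, weights them by proximity,
# and fits a sparse linear surrogate to the black box's output probabilities.
explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], black_box.predict_proba, num_features=4
)
print(explanation.as_list())  # [(feature condition, local weight), ...]
```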
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

Showing 50 of 3,508 citing papers.
Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation
  Davor Vukadin, Petar Afrić, Marin Šilić, Goran Delač
  FAtt · 12 Dec 2024
Make Satire Boring Again: Reducing Stylistic Bias of Satirical Corpus by Utilizing Generative LLMs
  Asli Umay Ozturk, Recep Firat Cekinel
  12 Dec 2024
Strategies and Challenges of Efficient White-Box Training for Human Activity Recognition
  Daniel Geissler, Bo Zhou, P. Lukowicz
  HAI · 11 Dec 2024
FaceX: Understanding Face Attribute Classifiers through Summary Model Explanations
  Ioannis Sarridis, C. Koutlis, Symeon Papadopoulos, Christos Diou
  CVBM · 10 Dec 2024
Neural network interpretability with layer-wise relevance propagation: novel techniques for neuron selection and visualization
  Deepshikha Bhati, Fnu Neha, Md. Amiruzzaman, Angela Guercio, Deepak Kumar Shukla, Ben Ward
  FAtt · 07 Dec 2024
Uniform Discretized Integrated Gradients: An effective attribution based method for explaining large language models
  Swarnava Sinha Roy, Ayan Kundu
  FAtt · 05 Dec 2024
A Unified Framework for Evaluating the Effectiveness and Enhancing the Transparency of Explainable AI Methods in Real-World Applications
  M. Islam, M. F. Mridha, Md Abrar Jahin, Nilanjan Dey
  05 Dec 2024
OMENN: One Matrix to Explain Neural Networks
  Adam Wróbel, Mikołaj Janusz, Bartosz Zieliński, Dawid Rymarczyk
  FAtt, AAML · 03 Dec 2024
Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey
  Yunkai Dang, Kaichen Huang, Jiahao Huo, Yibo Yan, Shijie Huang, ..., Kun Wang, Yong Liu, Jing Shao, Hui Xiong, Xuming Hu
  LRM · 03 Dec 2024
Exploring the Robustness of AI-Driven Tools in Digital Forensics: A Preliminary Study
  Silvia Lucia Sanna, Leonardo Regano, Davide Maiorca, Giorgio Giacinto
  02 Dec 2024
Integrative CAM: Adaptive Layer Fusion for Comprehensive Interpretation of CNNs
  Aniket K. Singh, Debasis Chaudhuri, Manish P. Singh, Samiran Chattopadhyay
  02 Dec 2024
Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability
  Wen-Dong Jiang, Chih-Yung Chang, Show-Jane Yen, Diptendu Sinha Roy
  FAtt, HAI · 02 Dec 2024
Explaining Object Detectors via Collective Contribution of Pixels
  Toshinori Yamauchi, Hiroshi Kera, K. Kawamoto
  ObjD, FAtt · 01 Dec 2024
Explaining the Impact of Training on Vision Models via Activation Clustering
  Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair
  29 Nov 2024
Knowledge-Augmented Explainable and Interpretable Learning for Anomaly Detection and Diagnosis
  Martin Atzmueller, Tim Bohne, Patricia Windler
  28 Nov 2024
Explainable deep learning improves human mental models of self-driving cars
  Eoin M. Kenny, Akshay Dharmavaram, Sang Uk Lee, Tung Phan-Minh, Shreyas Rajesh, Yunqing Hu, Laura Major, Momchil S. Tomov, Julie A. Shah
  27 Nov 2024
NormXLogit: The Head-on-Top Never Lies
  Sina Abbasi, Mohammad Reza Modarres, Mohammad Taher Pilehvar
  25 Nov 2024
Transparent Neighborhood Approximation for Text Classifier Explanation
  Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder
  AAML · 25 Nov 2024
Interpreting Language Reward Models via Contrastive Explanations
  Junqi Jiang, Tom Bewley, Saumitra Mishra, Freddy Lecue, Manuela Veloso
  25 Nov 2024
Transparent but Powerful: Explainability, Accuracy, and Generalizability in ADHD Detection from Social Media Data
  D. Wiechmann, E. Kempa, E. Kerz, Y. Qiao
  23 Nov 2024
GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
  Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
  23 Nov 2024
The Explabox: Model-Agnostic Machine Learning Transparency & Analysis
  Marcel Robeer, Michiel Bron, Elize Herrewijnen, Riwish Hoeseni, Floris Bex
  22 Nov 2024
BERT-Based Approach for Automating Course Articulation Matrix Construction with Explainable AI
  Natenaile Asmamaw Shiferaw, Simpenzwe Honore Leandre, Aman Sinha, Dillip Rout
  21 Nov 2024
Verifying Machine Unlearning with Explainable AI
  Àlex Pujol Vidal, A. S. Johansen, M. N. Jahromi, Sergio Escalera, Kamal Nasrollahi, T. Moeslund
  MU · 20 Nov 2024
ODTE -- An ensemble of multi-class SVM-based oblique decision trees
  Ricardo Montañana, J. A. Gamez, J. M. Puerta
  20 Nov 2024
Can Highlighting Help GitHub Maintainers Track Security Fixes?
  Xueqing Liu, Yuchen Xiong, Qiushi Liu, Jiangrui Zheng
  18 Nov 2024
Explainable Artificial Intelligence for Medical Applications: A Review
  Qiyang Sun, Alican Akman, Björn Schuller
  15 Nov 2024
Towards Utilising a Range of Neural Activations for Comprehending Representational Associations
  Laura O'Mahony, Nikola S. Nikolov, David JP O'Sullivan
  15 Nov 2024
Artificial Intelligence in Pediatric Echocardiography: Exploring Challenges, Opportunities, and Clinical Applications with Explainable AI and Federated Learning
  M. Y. Jabarulla, T. Uden, Thomas Jack, P. Beerbaum, S. Oeltze-Jafra
  15 Nov 2024
AI-Spectra: A Visual Dashboard for Model Multiplicity to Enhance Informed and Transparent Decision-Making
  Gilles Eerlings, Sebe Vanbrabant, Jori Liesenborgs, Gustavo Rovelo Ruiz, Davy Vanacken, Kris Luyten
  14 Nov 2024
Inherently Interpretable and Uncertainty-Aware Models for Online Learning in Cyber-Security Problems
  Benjamin Kolicic, Alberto Caron, Chris Hicks, V. Mavroudis
  AI4CE · 14 Nov 2024
Learning Model Agnostic Explanations via Constraint Programming
  F. Koriche, Jean-Marie Lagniez, Stefan Mengel, Chi Tran
  13 Nov 2024
Causal-discovery-based root-cause analysis and its application in time-series prediction error diagnosis
  Hiroshi Yokoyama, Ryusei Shingaki, Kaneharu Nishino, Shohei Shimizu, Thong Pham
  CML · 11 Nov 2024
CineXDrama: Relevance Detection and Sentiment Analysis of Bangla YouTube Comments on Movie-Drama using Transformers: Insights from Interpretability Tool
  Usafa Akther Rifa, Pronay Debnath, Busra Kamal Rafa, Shamaun Safa Hridi, Md. Aminur Rahman
  10 Nov 2024
DNAMite: Interpretable Calibrated Survival Analysis with Discretized Additive Models
  Mike Van Ness, Billy Block, Madeleine Udell
  08 Nov 2024
Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review
  Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, V. Madai, Tobias Budig, Ali Sunyaev, A. Hilbert
  07 Nov 2024
Orbit: A Framework for Designing and Evaluating Multi-objective Rankers
  Chenyang Yang, Tesi Xiao, Michael Shavlovsky, Christian Kastner, Tongshuang Wu
  07 Nov 2024
Local vs distributed representations: What is the right basis for interpretability?
  Julien Colin, L. Goetschalckx, Thomas Fel, Victor Boutin, Jay Gopal, Thomas Serre, Nuria Oliver
  HAI · 06 Nov 2024
An Open API Architecture to Discover the Trustworthy Explanation of Cloud AI Services
  Zerui Wang, Yan Liu, Jun Huang
  05 Nov 2024
A Bayesian explanation of machine learning models based on modes and functional ANOVA
  Quan Long
  05 Nov 2024
Explanations that reveal all through the definition of encoding
  A. Puli, Nhi Nguyen, Rajesh Ranganath
  FAtt, XAI · 04 Nov 2024
Benchmarking XAI Explanations with Human-Aligned Evaluations
  Rémi Kazmierczak, Steve Azzolin, Eloise Berthier, Anna Hedström, Patricia Delhomme, ..., Goran Frehse, Massimiliano Mancini, Baptiste Caramiaux, Andrea Passerini, Gianni Franchi
  04 Nov 2024
EXAGREE: Towards Explanation Agreement in Explainable Machine Learning
  Sichao Li, Quanling Deng, Amanda S. Barnard
  04 Nov 2024
GraphXAIN: Narratives to Explain Graph Neural Networks
  Mateusz Cedro, David Martens
  04 Nov 2024
ParseCaps: An Interpretable Parsing Capsule Network for Medical Image Diagnosis
  Xinyu Geng, Jiaming Wang, Jun Xu
  MedIm · 03 Nov 2024
Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary
  Zhuoyan Li, Ming Yin
  02 Nov 2024
Designing a Robust Radiology Report Generation System
  Sonit Singh
  MedIm · 02 Nov 2024
A Mechanistic Explanatory Strategy for XAI
  Marcin Rabiza
  02 Nov 2024
The Interaction Layer: An Exploration for Co-Designing User-LLM Interactions in Parental Wellbeing Support Systems
  Sruthi Viswanathan, Seray Ibrahim, Ravi Shankar, Reuben Binns, Max Van Kleek, Petr Slovák
  02 Nov 2024
STAA: Spatio-Temporal Attention Attribution for Real-Time Interpreting Transformer-based Video Models
  Zerui Wang, Yan Liu
  01 Nov 2024