ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Topics: FAtt, FaML

Papers citing '"Why Should I Trust You?": Explaining the Predictions of Any Classifier'

Showing 50 of 4,966 citing papers.
DiffExplainer: Unveiling Black Box Models Via Counterfactual Generation
  Yingying Fang, Shuang Wu, Zihao Jin, Caiwen Xu, Shiyi Wang, Simon Walsh, Guang Yang | MedIm | 93 · 6 · 0 | 21 Jun 2024
Online detection and infographic explanation of spam reviews with data drift adaptation
  Francisco de Arriba-Pérez, Silvia García-Méndez, Fátima Leal, Benedita Malheiro, J. C. Burguillo | 46 · 0 · 0 | 21 Jun 2024
Self-supervised Interpretable Concept-based Models for Text Classification
  Francesco De Santis, Philippe Bich, Gabriele Ciravegna, Pietro Barbiero, Danilo Giordano, Tania Cerquitelli | 49 · 1 · 0 | 20 Jun 2024
Reasoning with trees: interpreting CNNs using hierarchies
  Caroline Mazini Rodrigues, Nicolas Boutry, Laurent Najman | 49 · 0 · 0 | 19 Jun 2024
Investigating the Role of Explainability and AI Literacy in User Compliance
  Niklas Kühl, Christian Meske, Maximilian Nitsche, Jodie Lobana | 37 · 5 · 0 | 18 Jun 2024
Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models
  Hengyi Wang, Shiwei Tan, Hao Wang | BDL | 112 · 7 · 0 | 18 Jun 2024
MiSuRe is all you need to explain your image segmentation
  Syed Nouman Hasany, Fabrice Mériaudeau, Caroline Petitjean | 30 · 2 · 0 | 18 Jun 2024
WellDunn: On the Robustness and Explainability of Language Models and Large Language Models in Identifying Wellness Dimensions
  Seyedali Mohammadi, Edward Raff, Jinendra Malekar, Vedant Palit, Francis Ferraro, Manas Gaur | AI4MH | 81 · 3 · 0 | 17 Jun 2024
On GNN explainability with activation rules
  Luca Veyrin-Forrer, Ataollah Kamal, Stefan Duffner, Marc Plantevit, C. Robardet | AI4CE | 51 · 2 · 0 | 17 Jun 2024
GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations
  Rick Wilming, Artur Dox, Hjalmar Schulz, Marta Oliveira, Benedict Clark, Stefan Haufe | 100 · 2 · 0 | 17 Jun 2024
P-TA: Using Proximal Policy Optimization to Enhance Tabular Data Augmentation via Large Language Models
  Shuo Yang, Chenchen Yuan, Yao Rong, Felix Steinbauer, Gjergji Kasneci | 89 · 1 · 0 | 17 Jun 2024
CELL your Model: Contrastive Explanations for Large Language Models
  Ronny Luss, Erik Miehling, Amit Dhurandhar | 137 · 0 · 0 | 17 Jun 2024
SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction
  Evgenii Kuriabov, Jia Li | 58 · 0 · 0 | 16 Jun 2024
IG2: Integrated Gradient on Iterative Gradient Path for Feature Attribution
  Yue Zhuo, Zhiqiang Ge | 57 · 9 · 0 | 16 Jun 2024
E-SAGE: Explainability-based Defense Against Backdoor Attacks on Graph Neural Networks
  Dingqiang Yuan, Xiaohua Xu, Lei Yu, Tongchang Han, Rongchang Li, Meng Han | AAML | 63 · 1 · 0 | 15 Jun 2024
A Theory of Interpretable Approximations
  Marco Bressan, Nicolò Cesa-Bianchi, Emmanuel Esposito, Yishay Mansour, Shay Moran, Maximilian Thiessen | FAtt | 80 · 5 · 0 | 15 Jun 2024
Phoneme Discretized Saliency Maps for Explainable Detection of AI-Generated Voice
  Shubham Gupta, Mirco Ravanelli, Pascal Germain, Cem Subakan | FAtt | 65 · 4 · 0 | 14 Jun 2024
Selecting Interpretability Techniques for Healthcare Machine Learning models
  Daniel Sierra-Botero, Ana Molina-Taborda, Mario S. Valdés-Tresanco, Alejandro Hernández-Arango, Leonardo Espinosa-Leal, Alexander Karpenko, O. Lopez-Acevedo | 106 · 0 · 0 | 14 Jun 2024
Trustworthy Artificial Intelligence in the Context of Metrology
  Tameem Adel, Sam Bilson, Mark Levene, Andrew Thompson | 68 · 3 · 0 | 14 Jun 2024
Challenges in explaining deep learning models for data with biological variation
  Lenka Tětková, E. Dreier, Robin Malm, Lars Kai Hansen | AAML | 64 · 1 · 0 | 14 Jun 2024
Enhancing Text Corpus Exploration with Post Hoc Explanations and Comparative Design
  Michael Gleicher, Keaton Leppenan, Yunyu Bai | 63 · 0 · 0 | 14 Jun 2024
Explainable AI for Comparative Analysis of Intrusion Detection Models
  Pap M. Corea, Yongxin Liu, Jian Wang, Shuteng Niu, Houbing Song | 76 · 6 · 0 | 14 Jun 2024
On the Robustness of Global Feature Effect Explanations
  Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek | 84 · 2 · 0 | 13 Jun 2024
Conceptual Learning via Embedding Approximations for Reinforcing Interpretability and Transparency
  Maor Dikter, Tsachi Blau, Chaim Baskin | 116 · 0 · 0 | 13 Jun 2024
Applications of Explainable artificial intelligence in Earth system science
  Feini Huang, Shijie Jiang, Lu Li, Yongkun Zhang, Ye Zhang, Ruqing Zhang, Qingliang Li, Danxi Li, Wei Shangguan, Yongjiu Dai | 75 · 2 · 0 | 12 Jun 2024
How Interpretable Are Interpretable Graph Neural Networks?
  Yongqiang Chen, Yatao Bian, Bo Han, James Cheng | 90 · 7 · 0 | 12 Jun 2024
Are Objective Explanatory Evaluation metrics Trustworthy? An Adversarial Analysis
  Prithwijit Chowdhury, Mohit Prabhushankar, Ghassan AlRegib, Mohamed Deriche | 103 · 2 · 0 | 12 Jun 2024
Unifying Interpretability and Explainability for Alzheimer's Disease Progression Prediction
  Raja Farrukh Ali, Stephanie Milani, John Woods, Emmanuel Adenij, Ayesha Farooq, Clayton Mansel, Jeffrey Burns, William Hsu | 69 · 0 · 0 | 11 Jun 2024
Graphical Perception of Saliency-based Model Explanations
  Yayan Zhao, Mingwei Li, Matthew Berger | XAI, FAtt | 82 · 2 · 0 | 11 Jun 2024
Agnostic Sharpness-Aware Minimization
  Van-Anh Nguyen, Quyen Tran, Tuan Truong, Thanh-Toan Do, Dinh Q. Phung, Trung Le | 98 · 0 · 0 | 11 Jun 2024
MaskLID: Code-Switching Language Identification through Iterative Masking
  Amir Hossein Kargaran, François Yvon, Hinrich Schütze | 61 · 2 · 0 | 10 Jun 2024
Beyond Trend Following: Deep Learning for Market Trend Prediction
  Fernando Berzal, Alberto Garcia | 75 · 0 · 0 | 10 Jun 2024
Explainable AI for Mental Disorder Detection via Social Media: A survey and outlook
  Yusif Ibrahimov, Tarique Anwar, Tommy Yuan | 71 · 4 · 0 | 10 Jun 2024
Sequential Binary Classification for Intrusion Detection
  Ishan Chokshi, Shrihari Vasudevan, Nachiappan Sundaram, Raaghul Ranganathan | 135 · 0 · 0 | 10 Jun 2024
Methodology and Real-World Applications of Dynamic Uncertain Causality Graph for Clinical Diagnosis with Explainability and Invariance
  Zhan Zhang, Qin Zhang, Yang Jiao, Lin Lu, Lin Ma, ..., Yiming Wang, Lei Zhang, Fengwei Tian, Jie Hu, Xin Gou | CML, MedIm | 59 · 1 · 0 | 09 Jun 2024
Understanding Inhibition Through Maximally Tense Images
  Chris Hamblin, Srijani Saha, Talia Konkle, George Alvarez | FAtt | 42 · 0 · 0 | 08 Jun 2024
Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals
  Susu Sun, S. Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner | FAtt | 72 · 1 · 0 | 08 Jun 2024
Automated Trustworthiness Testing for Machine Learning Classifiers
  Steven Cho, Seaton Cousins-Baxter, Stefano Ruberto, Valerio Terragni | 97 · 0 · 0 | 07 Jun 2024
DiffusionPID: Interpreting Diffusion via Partial Information Decomposition
  Shaurya Dewan, Rushikesh Zawar, Prakanshul Saxena, Yingshan Chang, Andrew F. Luo, Yonatan Bisk | DiffM | 117 · 4 · 0 | 07 Jun 2024
Provably Better Explanations with Optimized Aggregation of Feature Attributions
  Thomas Decker, Ananta R. Bhattarai, Jindong Gu, Volker Tresp, Florian Buettner | 77 · 3 · 0 | 07 Jun 2024
Classification Metrics for Image Explanations: Towards Building Reliable XAI-Evaluations
  Benjamin Frész, Lena Lörcher, Marco F. Huber | 62 · 5 · 0 | 07 Jun 2024
Leveraging Activations for Superpixel Explanations
  Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair | AAML, FAtt, XAI | 61 · 0 · 0 | 07 Jun 2024
Helpful or Harmful Data? Fine-tuning-free Shapley Attribution for Explaining Language Model Predictions
  Jingtan Wang, Xiaoqiang Lin, Rui Qiao, Chuan-Sheng Foo, Bryan Kian Hsiang Low | TDI | 63 · 5 · 0 | 07 Jun 2024
Shaping History: Advanced Machine Learning Techniques for the Analysis and Dating of Cuneiform Tablets over Three Millennia
  Danielle Kapon, Michael Fire, S. Gordin | 107 · 1 · 0 | 06 Jun 2024
POEM: Interactive Prompt Optimization for Enhancing Multimodal Reasoning of Large Language Models
  Jianben He, Xingbo Wang, Shiyi Liu, Guande Wu, Claudio Silva, Huamin Qu | LRM | 57 · 3 · 0 | 06 Jun 2024
GNNAnatomy: Rethinking Model-Level Explanations for Graph Neural Networks
  Hsiao-Ying Lu, Yiran Li, Ujwal Pratap Krishna Kaluvakolanu Thyagarajan, K. Ma | 62 · 0 · 0 | 06 Jun 2024
Why is "Problems" Predictive of Positive Sentiment? A Case Study of Explaining Unintuitive Features in Sentiment Classification
  Jiaming Qu, Jaime Arguello, Yue Wang | FAtt | 58 · 1 · 0 | 05 Jun 2024
Post-hoc Part-prototype Networks
  Andong Tan, Fengtao Zhou, Hao Chen | 58 · 5 · 0 | 05 Jun 2024
Tensor Polynomial Additive Model
  Yang Chen, Ce Zhu, Jiani Liu, Yipeng Liu | TPM | 60 · 0 · 0 | 05 Jun 2024
Language Model Can Do Knowledge Tracing: Simple but Effective Method to Integrate Language Model and Knowledge Tracing Task
  Unggi Lee, Jiyeong Bae, Dohee Kim, Sookbun Lee, Jaekwon Park, Taekyung Ahn, Gunho Lee, Damji Stratton, Hyeoncheol Kim | AI4Ed, KELM | 63 · 12 · 0 | 05 Jun 2024