ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Papers › 1602.04938 › Cited By
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
16 February 2016 · Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 3,508 papers shown
Enabling MCTS Explainability for Sequential Planning Through Computation Tree Logic
Ziyan An, Hendrik Baier, Abhishek Dubey, Ayan Mukhopadhyay, Meiyi Ma · LRM · 15 Jul 2024
Understanding the Dependence of Perception Model Competency on Regions in an Image
Sara Pohland, Claire Tomlin · 15 Jul 2024
TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation
Roni Goldshmidt, Miriam Horovicz · LLMAG · 14 Jul 2024
Robustness of Explainable Artificial Intelligence in Industrial Process Modelling
Benedikt Kantz, Clemens Staudinger, C. Feilmayr, Johannes Wachlmayr, Alexander Haberl, Stefan Schuster, Franz Pernkopf · 12 Jul 2024
Layer-Wise Relevance Propagation with Conservation Property for ResNet
Seitaro Otsuki, T. Iida, Félix Doublet, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi, Komei Sugiura · FAtt · 12 Jul 2024
Towards More Trustworthy and Interpretable LLMs for Code through Syntax-Grounded Explanations
David Nader-Palacio, Daniel Rodríguez-Cárdenas, Alejandro Velasco, Dipin Khati, Kevin Moran, Denys Poshyvanyk · 12 Jul 2024
Attribution Methods in Asset Pricing: Do They Account for Risk?
Dangxing Chen, Yuan Gao · FAtt · 12 Jul 2024
Operationalizing the Blueprint for an AI Bill of Rights: Recommendations for Practitioners, Researchers, and Policy Makers
Alex Oesterling, Usha Bhalla, Suresh Venkatasubramanian, Himabindu Lakkaraju · 11 Jul 2024
Towards Explainable Evolution Strategies with Large Language Models
Jill Baumann, Oliver Kramer · 11 Jul 2024
Impact Measures for Gradual Argumentation Semantics
Caren Al Anaissy, Jérôme Delobelle, Srdjan Vesic, Bruno Yun · 11 Jul 2024
Explainability of Sub-Field Level Crop Yield Prediction using Remote Sensing
Hiba Najjar, Miro Miranda, Marlon Nuske, R. Roscher, A. Dengel · 11 Jul 2024
fairBERTs: Erasing Sensitive Information Through Semantic and Fairness-aware Perturbations
Jinfeng Li, YueFeng Chen, Xiangyu Liu, Longtao Huang, Rong Zhang, Hui Xue · AAML · 11 Jul 2024
The Misclassification Likelihood Matrix: Some Classes Are More Likely To Be Misclassified Than Others
Daniel Sikar, Artur Garcez, Robin Bloomfield, Tillman Weyde, Kaleem Peeroo, Naman Singh, Maeve Hutchinson, Dany Laksono, Mirela Reljan-Delaney · 10 Jul 2024
Explaining Graph Neural Networks for Node Similarity on Graphs
Daniel Daza, C. Chu, T. Tran, Daria Stepanova, Michael Cochez, Paul T. Groth · 10 Jul 2024
CHILLI: A data context-aware perturbation method for XAI
Saif Anwar, Nathan Griffiths, A. Bhalerao, T. Popham · 10 Jul 2024
Towards A Comprehensive Visual Saliency Explanation Framework for AI-based Face Recognition Systems
Yuhang Lu, Zewei Xu, Touradj Ebrahimi · CVBM, FAtt, XAI · 08 Jul 2024
Experiments with truth using Machine Learning: Spectral analysis and explainable classification of synthetic, false, and genuine information
Vishnu S Pendyala, Madhulika Dutta · 07 Jul 2024
Explainable AI: Comparative Analysis of Normal and Dilated ResNet Models for Fundus Disease Classification
P. N. Karthikayan, Yoga Sri Varshan V, Hitesh Gupta Kattamuri, Umarani Jayaraman · MedIm · 07 Jul 2024
PDiscoFormer: Relaxing Part Discovery Constraints with Vision Transformers
Ananthu Aniraj, C. Dantas, Dino Ienco, Diego Marcos · 05 Jul 2024
Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density
Peiyu Yang, Naveed Akhtar, Mubarak Shah, Ajmal Mian · AAML · 05 Jul 2024
Understanding the Role of Invariance in Transfer Learning
Till Speicher, Vedant Nanda, Krishna P. Gummadi · SSL, OOD · 05 Jul 2024
Learning Interpretable Differentiable Logic Networks
Chang Yue, N. Jha · NAI, AI4CE · 04 Jul 2024
A Critical Assessment of Interpretable and Explainable Machine Learning for Intrusion Detection
Omer Subasi, J. Cree, Joseph Manzano, Elena Peterson · AAML · 04 Jul 2024
A Survey on Natural Language Counterfactual Generation
Yongjie Wang, Xiaoqi Qiu, Yu Yue, Xu Guo, Zhiwei Zeng, Yuhong Feng, Zhiqi Shen · 04 Jul 2024
DocXplain: A Novel Model-Agnostic Explainability Method for Document Image Classification
S. Saifullah, S. Agne, Andreas Dengel, Sheraz Ahmed · 04 Jul 2024
Machine Learning for Economic Forecasting: An Application to China's GDP Growth
Yanqing Yang, Xingcheng Xu, Jinfeng Ge, Yan Xu · 04 Jul 2024
VCHAR: Variance-Driven Complex Human Activity Recognition framework with Generative Representation
Yuan Sun, Navid Salami Pargoo, Taqiya Ehsan, Zhao Zhang, Jorge Ortiz · HAI · 03 Jul 2024
How Reliable and Stable are Explanations of XAI Methods?
José Ribeiro, Lucas F. F. Cardoso, Vitor Santos, Eduardo Carvalho, Nikolas Carneiro, Ronnie Cley de Oliveira Alves · XAI · 03 Jul 2024
Revisiting the Performance of Deep Learning-Based Vulnerability Detection on Realistic Datasets
Partha Chakraborty, Krishna Kanth Arumugam, Mahmoud Alfadel, Meiyappan Nagappan, Shane McIntosh · 03 Jul 2024
Semantically Rich Local Dataset Generation for Explainable AI in Genomics
Pedro Barbosa, Rosina Savisaar, Alcides Fonseca · 03 Jul 2024
Reinforcement Learning and Machine Ethics: A Systematic Review
Ajay Vishwanath, Louise A. Dennis, Marija Slavkovik · 02 Jul 2024
A Practical Review of Mechanistic Interpretability for Transformer-Based Language Models
Daking Rai, Yilun Zhou, Shi Feng, Abulhair Saparov, Ziyu Yao · 02 Jul 2024
NLPGuard: A Framework for Mitigating the Use of Protected Attributes by NLP Classifiers
Salvatore Greco, Ke Zhou, L. Capra, Tania Cerquitelli, Daniele Quercia · 01 Jul 2024
FairLay-ML: Intuitive Debugging of Fairness in Data-Driven Social-Critical Software
Normen Yu, Luciana Carreon, Gang Tan, Saeid Tizpaz-Niari · 01 Jul 2024
Integrated feature analysis for deep learning interpretation and class activation maps
Yanli Li, Tahereh Hassanzadeh, D. Shamonin, Monique Reijnierse, A. H. V. D. H. Mil, B. Stoel · 01 Jul 2024
Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc · SSL · 01 Jul 2024
Interpreting Pretrained Speech Models for Automatic Speech Assessment of Voice Disorders
Hok-Shing Lau, Mark Huntly, Nathon Morgan, Adesua Iyenoma, Biao Zeng, Tim Bashford · 29 Jun 2024
Explainability of Machine Learning Models under Missing Data
Tuan L. Vo, T. Nguyen, Hugo Lewi Hammer, Michael A. Riegler, Pål Halvorsen · 29 Jun 2024
ShapG: new feature importance method based on the Shapley value
Chi Zhao, Jing Liu, Elena Parilina · FAtt · 29 Jun 2024
Efficient and Accurate Explanation Estimation with Distribution Compression
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek · FAtt · 26 Jun 2024
Enabling Regional Explainability by Automatic and Model-agnostic Rule Extraction
Yu Chen, Tianyu Cui, Alexander Capstick, Nan Fletcher-Loyd, Payam Barnaghi · 25 Jun 2024
A Moonshot for AI Oracles in the Sciences
Bryan Kaiser, Tailin Wu, Maike Sonnewald, Colin Thackray, Skylar Callis · AI4CE · 25 Jun 2024
Large Language Models are Interpretable Learners
Ruochen Wang, Si Si, Felix X. Yu, Dorothea Wiesmann, Cho-Jui Hsieh, Inderjit Dhillon · 25 Jun 2024
AND: Audio Network Dissection for Interpreting Deep Acoustic Models
Tung-Yu Wu, Yu-Xiang Lin, Tsui-Wei Weng · 24 Jun 2024
CAVE: Controllable Authorship Verification Explanations
Sahana Ramnath, Kartik Pandey, Elizabeth Boschee, Xiang Ren · 24 Jun 2024
What Do VLMs NOTICE? A Mechanistic Interpretability Pipeline for Gaussian-Noise-free Text-Image Corruption and Evaluation
Michal Golovanevsky, William Rudman, Vedant Palit, Ritambhara Singh, Carsten Eickhoff · 24 Jun 2024
A Review of Global Sensitivity Analysis Methods and a comparative case study on Digit Classification
Zahra Sadeghi, Stan Matwin · 23 Jun 2024
Reading Is Believing: Revisiting Language Bottleneck Models for Image Classification
Honori Udo, Takafumi Koshinaka · VLM · 22 Jun 2024
Privacy Implications of Explainable AI in Data-Driven Systems
Fatima Ezzeddine · 22 Jun 2024
MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local Explanations
Parikshit Solunke, Vitória Guardieiro, Joao Rulff, Peter Xenopoulos, G. Chan, Brian Barr, L. G. Nonato, Claudio Silva · 21 Jun 2024