ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 (v3, latest) · 16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Tags: FAtt, FaML
Links: ArXiv (abs) · PDF · HTML
Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

Showing 50 of 3,508 citing papers.

  • Generating Global and Local Explanations for Tree-Ensemble Learning Methods by Answer Set Programming
    A. Takemura, Katsumi Inoue (14 Oct 2024)
  • Study on the Helpfulness of Explainable Artificial Intelligence [ELM]
    Tobias Labarta, Elizaveta Kulicheva, Ronja Froelian, Christian Geißler, Xenia Melman, Julian von Klitzing (14 Oct 2024)
  • XAI-based Feature Selection for Improved Network Intrusion Detection Systems [AAML]
    Osvaldo Arreche, Tanish Guntur, Mustafa Abdallah (14 Oct 2024)
  • CoTCoNet: An Optimized Coupled Transformer-Convolutional Network with an Adaptive Graph Reconstruction for Leukemia Detection [MedIm]
    Chandravardhan Singh Raghaw, Arnav Sharma, Shubhi Bansal, Mohammad Zia Ur Rehman, Nagendra Kumar (11 Oct 2024)
  • A Theoretical Framework for AI-driven data quality monitoring in high-volume data environments
    Nikhil Bangad, Vivekananda Jayaram, Manjunatha Sughaturu Krishnappa, Amey Ram Banarse, Darshan Mohan Bidkar, Akshay Nagpal, Vidyasagar Parlapalli (11 Oct 2024)
  • Bilinear MLPs enable weight-based mechanistic interpretability
    Michael T. Pearce, Thomas Dooms, Alice Rigg, José Oramas, Lee Sharkey (10 Oct 2024)
  • Neural Reasoning Networks: Efficient Interpretable Neural Networks With Automatic Textual Explanations
    Stephen Carrow, Kyle Harper Erwin, Olga Vilenskaia, Parikshit Ram, Tim Klinger, Naweed Khan, Ndivhuwo Makondo, Alexander Gray (10 Oct 2024)
  • A Comprehensive Survey and Classification of Evaluation Criteria for Trustworthy Artificial Intelligence [XAI]
    Louise McCormack, Malika Bendechache (10 Oct 2024)
  • Learning Low-Level Causal Relations using a Simulated Robotic Arm [CML]
    Miroslav Cibula, Matthias Kerzel, Igor Farkaš (10 Oct 2024)
  • Explainability of Deep Neural Networks for Brain Tumor Detection [MedIm]
    S. Park, J. Kim (10 Oct 2024)
  • Audio Explanation Synthesis with Generative Foundation Models
    Alican Akman, Qiyang Sun, Björn W. Schuller (10 Oct 2024)
  • Time Can Invalidate Algorithmic Recourse
    Giovanni De Toni, Stefano Teso, Bruno Lepri, Andrea Passerini (10 Oct 2024)
  • Unlearning-based Neural Interpretations [FAtt]
    Ching Lam Choi, Alexandre Duplessis, Serge Belongie (10 Oct 2024)
  • Toward Robust Real-World Audio Deepfake Detection: Closing the Explainability Gap
    Georgia Channing, Juil Sock, Ronald Clark, Philip Torr, Christian Schroeder de Witt (09 Oct 2024)
  • Stanceformer: Target-Aware Transformer for Stance Detection
    Krishna Garg, Cornelia Caragea (09 Oct 2024)
  • Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
    Shanshan Han (09 Oct 2024)
  • Unveiling Transformer Perception by Exploring Input Manifolds
    A. Benfenati, Alfio Ferrara, A. Marta, Davide Riva, Elisabetta Rocchetti (08 Oct 2024)
  • Demonstration Based Explainable AI for Learning from Demonstration Methods
    Morris Gu, Elizabeth Croft, Dana Kulic (08 Oct 2024)
  • Understanding with toy surrogate models in machine learning [SyDa]
    Andrés Páez (08 Oct 2024)
  • Neural Networks Decoded: Targeted and Robust Analysis of Neural Network Decisions via Causal Explanations and Reasoning [AAML]
    A. Diallo, Vaishak Belle, P. Patras (07 Oct 2024)
  • Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions
    Helena Lofstrom, Tuwe Löfström, Johan Hallberg Szabadvary (07 Oct 2024)
  • Mechanistic? [AI4CE]
    Naomi Saphra, Sarah Wiegreffe (07 Oct 2024)
  • Explanation sensitivity to the randomness of large language models: the case of journalistic text classification
    Jérémie Bogaert, Marie-Catherine de Marneffe, Antonin Descampe, Louis Escouflaire, Cedrick Fairon, François-Xavier Standaert (07 Oct 2024)
  • From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing
    Sarah H. Cen, Rohan Alur (07 Oct 2024)
  • Comparing Zealous and Restrained AI Recommendations in a Real-World Human-AI Collaboration Task
    Chengyuan Xu, Kuo-Chin Lien, Tobias Höllerer (06 Oct 2024)
  • GAMformer: In-Context Learning for Generalized Additive Models [AI4CE]
    Andreas Mueller, Julien N. Siems, Harsha Nori, David Salinas, Arber Zela, Rich Caruana, Frank Hutter (06 Oct 2024)
  • Understanding the Effect of Algorithm Transparency of Model Explanations in Text-to-SQL Semantic Parsing
    Daking Rai, Rydia R. Weiland, Kayla Margaret Gabriella Herrera, Tyler H. Shaw, Ziyu Yao (05 Oct 2024)
  • Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills
    Zana Buçinca, S. Swaroop, Amanda E. Paluch, Finale Doshi-Velez, Krzysztof Z. Gajos (05 Oct 2024)
  • Variational Language Concepts for Interpreting Foundation Language Models
    Hengyi Wang, Shiwei Tan, Zhiqing Hong, Desheng Zhang, Hao Wang (04 Oct 2024)
  • Distribution Guided Active Feature Acquisition
    Yang Li, Junier Oliva (04 Oct 2024)
  • An Approach To Enhance IoT Security In 6G Networks Through Explainable AI
    Navneet Kaur, Lav Gupta (04 Oct 2024)
  • Explaining the (Not So) Obvious: Simple and Fast Explanation of STAN, a Next Point of Interest Recommendation System [FAtt, XAI, HAI]
    Fajrian Yunus, Talel Abdessalem (04 Oct 2024)
  • In-context Learning in Presence of Spurious Correlations [LRM]
    Hrayr Harutyunyan, R. Darbinyan, Samvel Karapetyan, Hrant Khachatrian (04 Oct 2024)
  • Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks [XAI]
    Junlin Hou, Sicen Liu, Yequan Bie, Hongmei Wang, Andong Tan, Luyang Luo, Hao Chen (03 Oct 2024)
  • F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI [AAML]
    Xu Zheng, Farhad Shirani, Zhuomin Chen, Chaohao Lin, Wei Cheng, Wenbo Guo, Dongsheng Luo (03 Oct 2024)
  • Run-time Observation Interventions Make Vision-Language-Action Models More Visually Robust [VLM]
    Asher Hancock, Allen Z. Ren, Anirudha Majumdar (02 Oct 2024)
  • Explainable Earth Surface Forecasting under Extreme Events
    Oscar J. Pellicer-Valero, Miguel-Ángel Fernández-Torres, Chaonan Ji, Miguel D. Mahecha, Gustau Camps-Valls (02 Oct 2024)
  • Learning-Augmented Robust Algorithmic Recourse
    Kshitij Kayastha, Vasilis Gkatzelis, Shahin Jabbari (02 Oct 2024)
  • SHAP-CAT: A interpretable multi-modal framework enhancing WSI classification via virtual staining and shapley-value-based multimodal fusion
    Jun Wang, Yu Mao, Nan Guan, Chun Jason Xue (02 Oct 2024)
  • Enhancing End Stage Renal Disease Outcome Prediction: A Multi-Sourced Data-Driven Approach
    Yubo Li, R. Padman (02 Oct 2024)
  • One Wave To Explain Them All: A Unifying Perspective On Feature Attribution [AAML, FAtt]
    Gabriel Kasmi, Amandine Brunetto, Thomas Fel, Jayneel Parekh (02 Oct 2024)
  • Best Practices for Responsible Machine Learning in Credit Scoring [FaML]
    Giovani Valdrighi, Athyrson M. Ribeiro, Jansen S. B. Pereira, Vitoria Guardieiro, Arthur Hendricks, ..., Juan David Nieto Garcia, Felipe F. Bocca, Thalita B. Veronese, Lucas Wanner, Marcos Medeiros Raimundo (30 Sep 2024)
  • Sufficient and Necessary Explanations (and What Lies in Between) [XAI, FAtt]
    Beepul Bharti, Paul H. Yi, Jeremias Sulam (30 Sep 2024)
  • Image-guided topic modeling for interpretable privacy classification
    Alina Elena Baia, Andrea Cavallaro (27 Sep 2024)
  • "Oh LLM, I'm Asking Thee, Please Give Me a Decision Tree": Zero-Shot Decision Tree Induction and Embedding with Large Language Models
    Ricardo Knauer, Mario Koddenbrock, Raphael Wallsberger, Nicholas M. Brisson, Georg N. Duda, Deborah Falla, David W. Evans, Erik Rodner (27 Sep 2024)
  • PCEvE: Part Contribution Evaluation Based Model Explanation for Human Figure Drawing Assessment and Beyond
    Jongseo Lee, Geo Ahn, Seong Tae Kim, Jinwoo Choi (26 Sep 2024)
  • Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations [AAML]
    Supriya Manna, Niladri Sett (26 Sep 2024)
  • Recent advances in interpretable machine learning using structure-based protein representations [AI4CE]
    L. Vecchietti, Minji Lee, Begench Hangeldiyev, Hyunkyu Jung, Hahnbeom Park, Tae-Kyun Kim, Meeyoung Cha, Ho Min Kim (26 Sep 2024)
  • Criticality and Safety Margins for Reinforcement Learning [AAML]
    Alexander Grushin, Walt Woods, Alvaro Velasquez, Simon Khan (26 Sep 2024)
  • Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI [TDI]
    Elisa Nguyen, Johannes Bertram, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh (25 Sep 2024)