ResearchTrend.AI
Why model why? Assessing the strengths and limitations of LIME
Jurgen Dieber, S. Kirrane
30 November 2020 · arXiv:2012.00093 · [FAtt]

Papers citing "Why model why? Assessing the strengths and limitations of LIME"

19 papers shown.

Coherent Local Explanations for Mathematical Optimization
Daan Otto, Jannis Kurtz, S. Ilker Birbil · 07 Feb 2025 · 73 / 0 / 0

Enhancing IoT Network Security through Adaptive Curriculum Learning and XAI
Sathwik Narkedimilli, Sujith Makam, Amballa Venkata Sriram, Sai Prashanth Mallellu, MSVPJ Sathvik, Ranga Rao Venkatesha Prasad · 20 Jan 2025 · 66 / 0 / 0

A Comparative Analysis of DNN-based White-Box Explainable AI Methods in Network Security
Osvaldo Arreche, Mustafa Abdallah · [AAML] · 14 Jan 2025 · 137 / 1 / 0

Model Agnostic Contrastive Explanations for Structured Data
Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchi Puri · [FAtt] · 31 May 2019 · 56 / 82 / 0

Model-Agnostic Counterfactual Explanations for Consequential Decisions
Amir-Hossein Karimi, Gilles Barthe, Borja Balle, Isabel Valera · 27 May 2019 · 71 / 320 / 0

CERTIFAI: Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence models
Shubham Sharma, Jette Henderson, Joydeep Ghosh · 20 May 2019 · 28 / 87 / 0

Local Interpretable Model-agnostic Explanations of Bayesian Predictive Models via Kullback-Leibler Projections
Tomi Peltola · [FAtt, BDL] · 05 Oct 2018 · 29 / 39 / 0

Visualizing the Feature Importance for Black Box Models
Giuseppe Casalicchio, Christoph Molnar, B. Bischl · [FAtt] · 18 Apr 2018 · 31 / 182 / 0

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
Nicolas Papernot, Patrick McDaniel · [OOD, AAML] · 13 Mar 2018 · 81 / 505 / 0

Interpretability via Model Extraction
Osbert Bastani, Carolyn Kim, Hamsa Bastani · [FAtt] · 29 Jun 2017 · 47 / 129 / 0

Real Time Image Saliency for Black Box Classifiers
P. Dabkowski, Y. Gal · 22 May 2017 · 40 / 586 / 0

Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong, Andrea Vedaldi · [FAtt, AAML] · 11 Apr 2017 · 45 / 1,514 / 0

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang · [TDI] · 14 Mar 2017 · 134 / 2,854 / 0

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan · [OOD, FAtt] · 04 Mar 2017 · 108 / 5,920 / 0

"What is Relevant in a Text Document?": An Interpretable Machine Learning Approach
L. Arras, F. Horn, G. Montavon, K. Müller, Wojciech Samek · 23 Dec 2016 · 36 / 288 / 0

An unexpected unity among methods for interpreting model predictions
Scott M. Lundberg, Su-In Lee · [FAtt] · 22 Nov 2016 · 42 / 110 / 0

Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · [FAtt] · 17 Nov 2016 · 39 / 64 / 0

Learning Deep Features for Discriminative Localization
Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba · [SSL, SSeg, FAtt] · 14 Dec 2015 · 150 / 9,266 / 0

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus · [FAtt, SSL] · 12 Nov 2013 · 258 / 15,825 / 0