DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems

24 June 2019
Muhammad Rehman Zafar, N. Khan
FAtt

Papers citing "DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems"

22 papers shown

Display Content, Display Methods and Evaluation Methods of the HCI in Explainable Recommender Systems: A Survey
Weiqing Li, Yue Xu, Yuefeng Li, Yinghui Huang
14 May 2025

ExplainReduce: Summarising local explanations via proxies
Lauri Seppäläinen, Mudong Guo, Kai Puolamäki
FAtt
17 Feb 2025

Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse
Seung Hyun Cheon, Anneke Wernerfelt, Sorelle A. Friedler, Berk Ustun
FaML, FAtt
29 Oct 2024

SurvBeX: An explanation method of the machine learning survival models based on the Beran estimator
Lev V. Utkin, Danila Eremenko, A. Konstantinov
07 Aug 2023

Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
Xiao-lan Wu, P. Bell, A. Rajan
29 May 2023

BELLA: Black box model Explanations by Local Linear Approximations
N. Radulovic, Albert Bifet, Fabian M. Suchanek
FAtt
18 May 2023

Explanations for Automatic Speech Recognition
Xiao-lan Wu, P. Bell, A. Rajan
27 Feb 2023

Explainable AI for clinical and remote health applications: a survey on tabular and time series data
Flavio Di Martino, Franca Delmastro
AI4TS
14 Sep 2022

A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation Metrics
Yiqiao Li, Jianlong Zhou, Sunny Verma, Fang Chen
XAI
26 Jul 2022

TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security
Maede Zolanvari, Zebo Yang, K. Khan, Rajkumar Jain, N. Meskin
02 May 2022

Enriching Artificial Intelligence Explanations with Knowledge Fragments
Jože M. Rožanec, Elena Trajkova, I. Novalija, Patrik Zajec, K. Kenda, B. Fortuna, Dunja Mladenić
12 Apr 2022

Using Decision Tree as Local Interpretable Model in Autoencoder-based LIME
Niloofar Ranjbar, Reza Safabakhsh
FAtt
07 Apr 2022

Interpretation of Black Box NLP Models: A Survey
Shivani Choudhary, N. Chatterjee, S. K. Saha
FAtt
31 Mar 2022

Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits
OOD
05 Dec 2021

LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations
Weronika Hryniewska, Adrianna Grudzień, P. Biecek
FAtt
15 Nov 2021

Knowledge-intensive Language Understanding for Explainable AI
A. Sheth, Manas Gaur, Kaushik Roy, Keyur Faldu
02 Aug 2021

Why model why? Assessing the strengths and limitations of LIME
Jurgen Dieber, S. Kirrane
FAtt
30 Nov 2020

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad
XAI
16 Jun 2020

OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms
Giorgio Visani, Enrico Bagli, F. Chesani
FAtt
10 Jun 2020

A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds
M. Kovalev, Lev V. Utkin
AAML
05 May 2020

An explanation method for Siamese neural networks
Lev V. Utkin, M. Kovalev, E. Kasimov
18 Nov 2019

A Random Forest Guided Tour
Gérard Biau, Erwan Scornet
AI4TS
18 Nov 2015