Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective

9 January 2024
Haoyi Xiong
Xuhong Li
Xiaofei Zhang
Jiamin Chen
Xinhao Sun
Yuchen Li
Zeyi Sun
Jundong Li
XAI

Papers citing "Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective"

30 / 80 papers shown
A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
Erico Tjoa
Cuntai Guan
XAI
122
1,452
0
17 Jul 2019
A study on the Interpretability of Neural Retrieval Models using DeepSHAP
Zeon Trevor Fernando
Jaspreet Singh
Avishek Anand
FAtt, AAML
54
68
0
15 Jul 2019
The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler
Mahima Pushkarna
Tolga Bolukbasi
Martin Wattenberg
F. Viégas
Jimbo Wilson
VLM
81
495
0
09 Jul 2019
Does Learning Require Memorization? A Short Tale about a Long Tail
Vitaly Feldman
TDI
142
502
0
12 Jun 2019
Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned
Elena Voita
David Talbot
F. Moiseev
Rico Sennrich
Ivan Titov
119
1,148
0
23 May 2019
Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas
Shibani Santurkar
Dimitris Tsipras
Logan Engstrom
Brandon Tran
Aleksander Madry
SILM
95
1,845
0
06 May 2019
Heavy-Tailed Universality Predicts Trends in Test Accuracies for Very Large Pre-Trained Deep Neural Networks
Charles H. Martin
Michael W. Mahoney
73
56
0
24 Jan 2019
ULDor: A Universal Lesion Detector for CT Scans with Pseudo Masks and Hard Negative Example Mining
Youbao Tang
Ke Yan
Yuxing Tang
Jiamin Liu
Jing Xiao
Ronald M. Summers
MedIm
111
61
0
18 Jan 2019
An Empirical Study of Example Forgetting during Deep Neural Network Learning
Mariya Toneva
Alessandro Sordoni
Rémi Tachet des Combes
Adam Trischler
Yoshua Bengio
Geoffrey J. Gordon
134
741
0
12 Dec 2018
Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Charles H. Martin
Michael W. Mahoney
AI4CE
123
201
0
02 Oct 2018
This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen
Oscar Li
Chaofan Tao
A. Barnett
Jonathan Su
Cynthia Rudin
255
1,187
0
27 Jun 2018
Learning Adversarially Fair and Transferable Representations
David Madras
Elliot Creager
T. Pitassi
R. Zemel
FaML
384
685
0
17 Feb 2018
A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti
A. Monreale
Salvatore Ruggieri
Franco Turini
D. Pedreschi
F. Giannotti
XAI
150
3,979
0
06 Feb 2018
Interpreting CNNs via Decision Trees
Quanshi Zhang
Yu Yang
Ying Nian Wu
Song-Chun Zhu
FAtt
73
323
0
01 Feb 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim
Martin Wattenberg
Justin Gilmer
Carrie J. Cai
James Wexler
F. Viégas
Rory Sayres
FAtt
248
1,849
0
30 Nov 2017
Distilling a Neural Network Into a Soft Decision Tree
Nicholas Frosst
Geoffrey E. Hinton
430
639
0
27 Nov 2017
Interpreting Deep Visual Representations via Network Dissection
Bolei Zhou
David Bau
A. Oliva
Antonio Torralba
FAtt, MILM
63
325
0
15 Nov 2017
Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks
Aditya Chattopadhyay
Anirban Sarkar
Prantik Howlader
V. Balasubramanian
FAtt
119
2,311
0
30 Oct 2017
A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg
Su-In Lee
FAtt
1.1K
22,090
0
22 May 2017
Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau
Bolei Zhou
A. Khosla
A. Oliva
Antonio Torralba
MILM, FAtt
158
1,526
1
19 Apr 2017
Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar
Peyton Greenside
A. Kundaje
FAtt
203
3,884
0
10 Apr 2017
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh
Percy Liang
TDI
219
2,910
0
14 Mar 2017
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju
Michael Cogswell
Abhishek Das
Ramakrishna Vedantam
Devi Parikh
Dhruv Batra
FAtt
335
20,110
0
07 Oct 2016
Training Region-based Object Detectors with Online Hard Example Mining
Abhinav Shrivastava
Abhinav Gupta
Ross B. Girshick
ObjD
157
2,422
0
12 Apr 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt, FaML
1.2K
17,071
0
16 Feb 2016
Learning Deep Features for Discriminative Localization
Bolei Zhou
A. Khosla
Àgata Lapedriza
A. Oliva
Antonio Torralba
SSL, SSeg, FAtt
253
9,342
0
14 Dec 2015
Understanding Deep Image Representations by Inverting Them
Aravindh Mahendran
Andrea Vedaldi
FAtt
131
1,968
0
26 Nov 2014
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan
Andrea Vedaldi
Andrew Zisserman
FAtt
317
7,321
0
20 Dec 2013
Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler
Rob Fergus
FAtt, SSL
603
15,907
0
12 Nov 2013
Identifying Mislabeled Training Data
C. Brodley
M. Friedl
111
972
0
01 Jun 2011