ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement

15 March 2022
Leander Weber
Sebastian Lapuschkin
Alexander Binder
Wojciech Samek

Papers citing "Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement"

50 / 52 papers shown
Cross- and Intra-image Prototypical Learning for Multi-label Disease Diagnosis and Interpretation
Chong Wang
Fengbei Liu
Yuanhong Chen
Helen Frazer
Gustavo Carneiro
75
2
0
07 Nov 2024
Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha
Khang Lam
Joseph Wood
Ali AlShami
Jugal Kalita
XAI, AI4TS
283
34
0
30 Aug 2024
Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
Frederik Pahde
Maximilian Dreyer
Leander Weber
Moritz Weckbecker
Christopher J. Anders
Thomas Wiegand
Wojciech Samek
Sebastian Lapuschkin
127
10
0
07 Feb 2022
Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail
H. C. Bravo
Soheil Feizi
FAtt
97
83
0
29 Nov 2021
This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation
Srishti Gautam
Marina M.-C. Höhne
Stine Hansen
Robert Jenssen
Michael C. Kampffmeyer
60
49
0
27 Aug 2021
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders
David Neumann
Wojciech Samek
K. Müller
Sebastian Lapuschkin
85
66
0
24 Jun 2021
Interpretability-Driven Sample Selection Using Self Supervised Learning For Disease Classification And Segmentation
Dwarikanath Mahapatra
50
55
0
13 Apr 2021
Unbox the Black-box for the Medical Explainable AI via Multi-modal and Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond
Guang Yang
Qinghao Ye
Jun Xia
132
501
0
03 Feb 2021
Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Wolfgang Stammer
P. Schramowski
Kristian Kersting
FAtt
87
110
0
25 Nov 2020
Utilizing Explainable AI for Quantization and Pruning of Deep Neural Networks
Muhammad Sabih
Frank Hannig
J. Teich
MQ
96
25
0
20 Aug 2020
Fairwashing Explanations with Off-Manifold Detergent
Christopher J. Anders
Plamen Pasliev
Ann-Kathrin Dombrowski
K. Müller
Pan Kessel
FAtt, FaML
58
97
0
20 Jul 2020
Explanation-Guided Training for Cross-Domain Few-Shot Classification
Jiamei Sun
Sebastian Lapuschkin
Wojciech Samek
Yunqing Zhao
Ngai-Man Cheung
Alexander Binder
64
89
0
17 Jul 2020
Explainable Artificial Intelligence: a Systematic Review
Giulia Vilone
Luca Longo
XAI
87
271
0
29 May 2020
Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T)
Arturo Marbán
Daniel Becking
Simon Wiedemann
Wojciech Samek
MQ
39
12
0
02 Apr 2020
Concept Whitening for Interpretable Image Recognition
Zhi Chen
Yijie Bei
Cynthia Rudin
FAtt
89
322
0
05 Feb 2020
Making deep neural networks right for the right scientific reasons by interacting with their explanations
P. Schramowski
Wolfgang Stammer
Stefano Teso
Anna Brugger
Xiaoting Shao
Hans-Georg Luigs
Anne-Katrin Mahlein
Kristian Kersting
111
213
0
15 Jan 2020
Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning
Seul-Ki Yeom
P. Seegerer
Sebastian Lapuschkin
Alexander Binder
Simon Wiedemann
K. Müller
Wojciech Samek
CVBM
65
209
0
18 Dec 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
130
6,293
0
22 Oct 2019
Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
Laura Rieger
Chandan Singh
W. James Murdoch
Bin Yu
FAtt
96
215
0
30 Sep 2019
Learning Credible Deep Neural Networks with Rationale Regularization
Mengnan Du
Ninghao Liu
Fan Yang
Helen Zhou
FaML
82
46
0
13 Aug 2019
BCN20000: Dermoscopic Lesions in the Wild
Marc Combalia
Noel Codella
V. Rotemberg
Brian Helba
Verónica Vilaplana
...
Cristina Carrera
Alicia Barreiro
Allan Halpern
S. Puig
J. Malvehy
66
446
0
06 Aug 2019
A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
Erico Tjoa
Cuntai Guan
XAI
112
1,452
0
17 Jul 2019
Incorporating Priors with Feature Attribution on Text Classification
Frederick Liu
Besim Avci
FAtt, FaML
93
120
0
19 Jun 2019
Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski
Maximilian Alber
Christopher J. Anders
M. Ackermann
K. Müller
Pan Kessel
AAML, FAtt
81
334
0
19 Jun 2019
Embedding Human Knowledge into Deep Neural Network via Attention Map
Masahiro Mitsuhara
Hiroshi Fukui
Yusuke Sakashita
Takanori Ogata
Tsubasa Hirakawa
Takayoshi Yamashita
H. Fujiyoshi
61
73
0
09 May 2019
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin
S. Wäldchen
Alexander Binder
G. Montavon
Wojciech Samek
K. Müller
104
1,017
0
26 Feb 2019
Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded
Ramprasaath R. Selvaraju
Stefan Lee
Yilin Shen
Hongxia Jin
Shalini Ghosh
Larry Heck
Dhruv Batra
Devi Parikh
FAtt, VLM
64
254
0
11 Feb 2019
Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC)
Noel Codella
V. Rotemberg
P. Tschandl
M. E. Celebi
Stephen W. Dusza
...
Aadi Kalloo
Konstantinos Liopyris
Michael Marchetti
Harald Kittler
Allan Halpern
111
1,192
0
09 Feb 2019
Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
Zeyuan Allen-Zhu
Yuanzhi Li
Yingyu Liang
MLT
201
775
0
12 Nov 2018
This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen
Oscar Li
Chaofan Tao
A. Barnett
Jonathan Su
Cynthia Rudin
250
1,188
0
27 Jun 2018
Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs
W. James Murdoch
Peter J. Liu
Bin Yu
83
210
0
16 Jan 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim
Martin Wattenberg
Justin Gilmer
Carrie J. Cai
James Wexler
F. Viégas
Rory Sayres
FAtt
232
1,850
0
30 Nov 2017
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
A. Ross
Finale Doshi-Velez
AAML
157
683
0
26 Nov 2017
Learning to Compare: Relation Network for Few-Shot Learning
Flood Sung
Yongxin Yang
Li Zhang
Tao Xiang
Philip Torr
Timothy M. Hospedales
308
4,049
0
16 Nov 2017
Few-Shot Learning with Graph Neural Networks
Victor Garcia Satorras
Joan Bruna
GNN
174
1,240
0
10 Nov 2017
Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau
Bolei Zhou
A. Khosla
A. Oliva
Antonio Torralba
MILM, FAtt
158
1,523
1
19 Apr 2017
Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong
Andrea Vedaldi
FAtt, AAML
76
1,525
0
11 Apr 2017
Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar
Peyton Greenside
A. Kundaje
FAtt
203
3,883
0
10 Apr 2017
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
A. Ross
M. C. Hughes
Finale Doshi-Velez
FAtt
133
591
0
10 Mar 2017
Deep Bayesian Active Learning with Image Data
Y. Gal
Riashat Islam
Zoubin Ghahramani
BDL, UQCV
73
1,739
0
08 Mar 2017
Axiomatic Attribution for Deep Networks
Mukund Sundararajan
Ankur Taly
Qiqi Yan
OOD, FAtt
193
6,024
0
04 Mar 2017
A Survey on Deep Learning in Medical Image Analysis
G. Litjens
Thijs Kooi
B. Bejnordi
A. Setio
F. Ciompi
Mohsen Ghafoorian
Jeroen van der Laak
Bram van Ginneken
C. I. Sánchez
OOD
697
10,799
0
19 Feb 2017
Understanding Neural Networks through Representation Erasure
Jiwei Li
Will Monroe
Dan Jurafsky
AAML, MILM
99
567
0
24 Dec 2016
Pruning Filters for Efficient ConvNets
Hao Li
Asim Kadav
Igor Durdanovic
H. Samet
H. Graf
3DPC
195
3,705
0
31 Aug 2016
Matching Networks for One Shot Learning
Oriol Vinyals
Charles Blundell
Timothy Lillicrap
Koray Kavukcuoglu
Daan Wierstra
VLM
378
7,333
0
13 Jun 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt, FaML
1.2K
17,033
0
16 Feb 2016
Learning Deep Features for Discriminative Localization
Bolei Zhou
A. Khosla
Àgata Lapedriza
A. Oliva
Antonio Torralba
SSL, SSeg, FAtt
253
9,338
0
14 Dec 2015
Striving for Simplicity: The All Convolutional Net
Jost Tobias Springenberg
Alexey Dosovitskiy
Thomas Brox
Martin Riedmiller
FAtt
254
4,681
0
21 Dec 2014
Intriguing properties of neural networks
Christian Szegedy
Wojciech Zaremba
Ilya Sutskever
Joan Bruna
D. Erhan
Ian Goodfellow
Rob Fergus
AAML
284
14,963
1
21 Dec 2013
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan
Andrea Vedaldi
Andrew Zisserman
FAtt
314
7,317
0
20 Dec 2013