Explaining NonLinear Classification Decisions with Deep Taylor Decomposition (arXiv:1512.02479)
G. Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Müller
FAtt
8 December 2015

Papers citing "Explaining NonLinear Classification Decisions with Deep Taylor Decomposition"

50 / 100 papers shown
A Unified Framework with Novel Metrics for Evaluating the Effectiveness of XAI Techniques in LLMs
Melkamu Mersha, Mesay Gemeda Yigezu, Hassan Shakil, Ali Al shami, SangHyun Byun, Jugal Kalita
06 Mar 2025

Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
Itay Benou, Tammy Riklin-Raviv
27 Feb 2025

Extending Information Bottleneck Attribution to Video Sequences
Veronika Solopova, Lucas Schmidt, Dorothea Kolossa
28 Jan 2025

Evaluating the Effectiveness of XAI Techniques for Encoder-Based Language Models
Melkamu Mersha, Mesay Gemeda Yigezu, Jugal Kalita
ELM
26 Jan 2025

Interpreting Object-level Foundation Models via Visual Precision Search
Ruoyu Chen, Siyuan Liang, Jingzhi Li, Shiming Liu, Maosen Li, Zheng Huang, Hua Zhang, Xiaochun Cao
FAtt
25 Nov 2024

Tackling the Accuracy-Interpretability Trade-off in a Hierarchy of Machine Learning Models for the Prediction of Extreme Heatwaves
Alessandro Lovo, Amaury Lancelin, Corentin Herbert, Freddy Bouchet
AI4CE
01 Oct 2024

Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
XAI
22 Sep 2024

Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita
XAI, AI4TS
30 Aug 2024

Evaluating the Reliability of Self-Explanations in Large Language Models
Korbinian Randl, John Pavlopoulos, Aron Henriksson, Tony Lindgren
LRM
19 Jul 2024

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
FAtt
16 Jul 2024

Revealing the Learning Process in Reinforcement Learning Agents Through Attention-Oriented Metrics
Charlotte Beylier, Simon M. Hofmann, Nico Scherf
20 Jun 2024

Explainable automatic industrial carbon footprint estimation from bank transaction classification using natural language processing
Jaime González-González, Silvia García-Méndez, Francisco de Arriba-Pérez, Francisco J. González Castaño, Oscar Barba-Seara
23 May 2024

Explaining Text Similarity in Transformer Models
Alexandros Vasileiou, Oliver Eberle
10 May 2024

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
FAtt, LRM
03 May 2024

Audio Anti-Spoofing Detection: A Survey
Menglu Li, Yasaman Ahmadiadli, Xiao-Ping Zhang
22 Apr 2024

A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research
Sicong Cao, Xiaobing Sun, Ratnadira Widyasari, David Lo, Xiaoxue Wu, ..., Jiale Zhang, Bin Li, Wei Liu, Di Wu, Yixin Chen
26 Jan 2024

Respect the model: Fine-grained and Robust Explanation with Sharing Ratio Decomposition
Sangyu Han, Yearim Kim, Nojun Kwak
AAML
25 Jan 2024

B-Cos Aligned Transformers Learn Human-Interpretable Features
Manuel Tran, Amal Lahiani, Yashin Dicente Cid, Melanie Boxberg, Peter Lienemann, C. Matek, S. J. Wagner, Fabian J. Theis, Eldad Klaiman, Tingying Peng
MedIm, ViT
16 Jan 2024

Prototypical Self-Explainable Models Without Re-training
Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer
13 Dec 2023

Improving Interpretation Faithfulness for Vision Transformers
Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
29 Nov 2023

Occlusion Sensitivity Analysis with Augmentation Subspace Perturbation in Deep Feature Space
Pedro Valois, Koichiro Niinuma, Kazuhiro Fukui
AAML
25 Nov 2023

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain
FaML
20 Nov 2023

AI-based association analysis for medical imaging using latent-space geometric confounder correction
Xianjing Liu, Bo-wen Li, Meike W. Vernooij, E. Wolvius, Gennady V. Roshchupkin, Esther E. Bron
MedIm
03 Oct 2023

GAMER-MRIL identifies Disability-Related Brain Changes in Multiple Sclerosis
Po-Jui Lu, Benjamin Odry, M. Barakovic, Matthias Weigel, Robin Sandkühler, ..., Mario Ocampo Pineda, J. Kuhle, L. Kappos, Philippe C. Cattin, Cristina Granziera
15 Aug 2023

HOPE: High-order Polynomial Expansion of Black-box Neural Networks
Tingxiong Xiao, Weihang Zhang, Yuxiao Cheng, J. Suo
17 Jul 2023

Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted?
A. Brankovic, David Cook, Jessica Rahman, Wenjie Huang, Sankalp Khanna
21 Jun 2023

Are Deep Neural Networks Adequate Behavioural Models of Human Visual Perception?
Felix Wichmann, Robert Geirhos
26 May 2023

XAI-based Comparison of Input Representations for Audio Event Classification
A. Frommholz, Fabian Seipel, Sebastian Lapuschkin, Wojciech Samek, Johanna Vielhaben
AAML, AI4TS
27 Apr 2023

A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
M. Rubaiyat Hossain Mondal, Prajoy Podder
10 Apr 2023

Towards Learning and Explaining Indirect Causal Effects in Neural Networks
Abbaavaram Gowtham Reddy, Saketh Bachu, Harsh Nilesh Pathak, Ben Godfrey, V. Balasubramanian, V. Varshaneya, Satya Narayanan Kar
CML
24 Mar 2023

Explaining text classifiers through progressive neighborhood approximation with realistic samples
Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder
AI4TS
11 Feb 2023

PAMI: partition input and aggregate outputs for model interpretation
Wei Shi, Wentao Zhang, Weishi Zheng, Ruixuan Wang
FAtt
07 Feb 2023

Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations
Maximilian Schlüter, Gerrit Nolte, Alnis Murtovi, Bernhard Steffen
19 Jan 2023

Negative Flux Aggregation to Estimate Feature Attributions
X. Li, Deng Pan, Chengyin Li, Yao Qiang, D. Zhu
FAtt
17 Jan 2023

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
FAtt
30 Dec 2022

Explainable AI for Bioinformatics: Methods, Tools, and Applications
Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich Rebholz-Schuhmann, Stefan Decker
25 Dec 2022

Interpretable Diabetic Retinopathy Diagnosis based on Biomarker Activation Map
P. Zang, T. Hormel, Jie Wang, Yukun Guo, Steven T. Bailey, C. Flaxel, David Huang, T. Hwang, Yali Jia
MedIm
13 Dec 2022

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao
07 Dec 2022

Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations
Yuying Zhao, Yu-Chiang Frank Wang, Tyler Derr
FaML
07 Dec 2022

COmic: Convolutional Kernel Networks for Interpretable End-to-End Learning on (Multi-)Omics Data
Jonas C. Ditz, Bernhard Reuter, Nícolas Pfeifer
02 Dec 2022

Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek, Deeksha Kamath
27 Nov 2022

Reconnoitering the class distinguishing abilities of the features, to know them better
Payel Sadhukhan, S. Palit, Kausik Sengupta
23 Nov 2022

Analysis of a Deep Learning Model for 12-Lead ECG Classification Reveals Learned Features Similar to Diagnostic Criteria
Theresa Bender, J. Beinecke, D. Krefting, Carolin Müller, Henning Dathe, T. Seidler, Nicolai Spicher, Anne-Christin Hauschild
FAtt
03 Nov 2022

Explainable Deep Learning to Profile Mitochondrial Disease Using High Dimensional Protein Expression Data
Atif Khan, C. Lawless, Amy Vincent, Satish Pilla, S. Ramesh, A. Mcgough
31 Oct 2022

Machine Learning for a Sustainable Energy Future
Zhenpeng Yao, Yanwei Lum, Andrew K. Johnston, L. M. Mejia-Mendoza, Xiaoxia Zhou, Yonggang Wen, Alán Aspuru-Guzik, E. Sargent, Z. Seh
19 Oct 2022

InFIP: An Explainable DNN Intellectual Property Protection Method based on Intrinsic Features
Mingfu Xue, Xin Wang, Ying-Chang Wu, S. Ni, Yushu Zhang, Weiqiang Liu
14 Oct 2022

Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
Federico Baldassarre, Quentin Debard, Gonzalo Fiz Pontiveros, Tri Kurniawan Wijaya
07 Oct 2022

Artificial Intelligence in Concrete Materials: A Scientometric View
Zhanzhao Li, Aleksandra Radlińska
AI4CE
17 Sep 2022

Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities
Subash Neupane, Jesse Ables, William Anderson, Sudip Mittal, Shahram Rahimi, I. Banicescu, Maria Seale
AAML
13 Jul 2022

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
FAtt
07 Jun 2022