Investigating the influence of noise and distractors on the interpretation of neural networks (arXiv:1611.07270)

22 November 2016
Pieter-Jan Kindermans
Kristof T. Schütt
K. Müller
Sven Dähne
    FAtt

Papers citing "Investigating the influence of noise and distractors on the interpretation of neural networks"

30 / 30 papers shown
Explanation Regularisation through the Lens of Attributions
Pedro Ferreira
Wilker Aziz
Ivan Titov
46
1
0
23 Jul 2024
NeuralSentinel: Safeguarding Neural Network Reliability and Trustworthiness
Xabier Echeberria-Barrio
Mikel Gorricho
Selene Valencia
Francesco Zola
AAML
26
1
0
12 Feb 2024
XAI-CLASS: Explanation-Enhanced Text Classification with Extremely Weak Supervision
Daniel Hajialigol
Hanwen Liu
Xuan Wang
VLM
21
5
0
31 Oct 2023
Reconstruct Before Summarize: An Efficient Two-Step Framework for Condensing and Summarizing Meeting Transcripts
Haochen Tan
Han Wu
Wei Shao
Xinyun Zhang
Mingjie Zhan
Zhaohui Hou
Ding Liang
Linqi Song
47
0
0
13 May 2023
Guide the Learner: Controlling Product of Experts Debiasing Method Based on Token Attribution Similarities
Ali Modarressi
Hossein Amirkhani
Mohammad Taher Pilehvar
29
2
0
06 Feb 2023
Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
Federico Baldassarre
Quentin Debard
Gonzalo Fiz Pontiveros
Tri Kurniawan Wijaya
44
4
0
07 Oct 2022
Identifying and Characterizing Active Citizens who Refute Misinformation in Social Media
Yida Mu
Pu Niu
Nikolaos Aletras
34
12
0
21 Apr 2022
Backdooring Explainable Machine Learning
Maximilian Noppel
Lukas Peter
Christian Wressnegger
AAML
16
5
0
20 Apr 2022
Model Doctor: A Simple Gradient Aggregation Strategy for Diagnosing and Treating CNN Classifiers
Zunlei Feng
Jiacong Hu
Sai Wu
Xiaotian Yu
Mingli Song
Xiuming Zhang
45
13
0
09 Dec 2021
Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail
H. C. Bravo
S. Feizi
FAtt
25
80
0
29 Nov 2021
"How Does It Detect A Malicious App?" Explaining the Predictions of
  AI-based Android Malware Detector
"How Does It Detect A Malicious App?" Explaining the Predictions of AI-based Android Malware Detector
Zhi Lu
V. Thing
AAML
24
4
0
06 Nov 2021
Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience
G. Chrysostomou
Nikolaos Aletras
32
16
0
31 Aug 2021
CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency
M. Jalwana
Naveed Akhtar
Bennamoun
Ajmal Mian
27
54
0
20 Jun 2021
CiteWorth: Cite-Worthiness Detection for Improved Scientific Document Understanding
Dustin Wright
Isabelle Augenstein
16
24
0
23 May 2021
Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification
G. Chrysostomou
Nikolaos Aletras
27
37
0
06 May 2021
Flexible Instance-Specific Rationalization of NLP Models
G. Chrysostomou
Nikolaos Aletras
31
14
0
16 Apr 2021
On the Impact of Interpretability Methods in Active Image Augmentation Method
F. Santos
Cleber Zanchettin
L. Matos
P. Novais
AAML
33
2
0
24 Feb 2021
Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs
Ruigang Fu
Qingyong Hu
Xiaohu Dong
Yulan Guo
Yinghui Gao
Biao Li
FAtt
24
266
0
05 Aug 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras
Ning Xie
Marcel van Gerven
Derek Doran
AAML
XAI
41
371
0
30 Apr 2020
What went wrong and when? Instance-wise Feature Importance for Time-series Models
S. Tonekaboni
Shalmali Joshi
Kieran Campbell
David Duvenaud
Anna Goldenberg
FAtt
OOD
AI4TS
51
14
0
05 Mar 2020
When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt
Maximilian Granz
Tim Landgraf
BDL
FAtt
XAI
13
132
0
20 Dec 2019
Software and application patterns for explanation methods
Maximilian Alber
38
11
0
09 Apr 2019
Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation
Sahil Singla
Eric Wallace
Shi Feng
S. Feizi
FAtt
31
59
0
01 Feb 2019
ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU
William Caicedo-Torres
Jairo Gutiérrez
14
78
0
24 Jan 2019
Understanding Individual Decisions of CNNs via Contrastive Backpropagation
Jindong Gu
Yinchong Yang
Volker Tresp
FAtt
17
94
0
05 Dec 2018
Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras
Marcel van Gerven
W. Haselager
XAI
17
217
0
20 Mar 2018
Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen
Le Song
Martin J. Wainwright
Michael I. Jordan
MLT
FAtt
26
561
0
21 Feb 2018
Explaining First Impressions: Modeling, Recognizing, and Explaining Apparent Personality from Videos
Hugo Jair Escalante
Heysem Kaya
A. A. Salah
Sergio Escalera
Yağmur Güçlütürk
...
Furkan Gürpinar
Achmadnoer Sukma Wicaksana
Cynthia C. S. Liem
Marcel van Gerven
R. Lier
25
61
0
02 Feb 2018
The (Un)reliability of saliency methods
Pieter-Jan Kindermans
Sara Hooker
Julius Adebayo
Maximilian Alber
Kristof T. Schütt
Sven Dähne
D. Erhan
Been Kim
FAtt
XAI
45
678
0
02 Nov 2017
Learning how to explain neural networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans
Kristof T. Schütt
Maximilian Alber
K. Müller
D. Erhan
Been Kim
Sven Dähne
XAI
FAtt
27
338
0
16 May 2017