SmoothGrad: removing noise by adding noise
arXiv:1706.03825 · 12 June 2017
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
FAtt · ODL

Papers citing "SmoothGrad: removing noise by adding noise"

50 / 1,161 papers shown

CoSy: Evaluating Textual Explanations of Neurons
  Laura Kopf, P. Bommer, Anna Hedström, Sebastian Lapuschkin, Marina M.-C. Höhne, Kirill Bykov
  44 · 7 · 0 · 30 May 2024

Locally Testing Model Detections for Semantic Global Concepts
  Franz Motzkus, Georgii Mikriukov, Christian Hellert, Ute Schmid
  35 · 2 · 0 · 27 May 2024

Listenable Maps for Zero-Shot Audio Classifiers
  Francesco Paissan, Luca Della Libera, Mirco Ravanelli, Cem Subakan
  32 · 4 · 0 · 27 May 2024

Understanding Stakeholders' Perceptions and Needs Across the LLM Supply Chain
  Agathe Balayn, Lorenzo Corti, Fanny Rancourt, Fabio Casati, U. Gadiraju
  29 · 5 · 0 · 25 May 2024

Explainable Molecular Property Prediction: Aligning Chemical Concepts with Predictions via Language Models
  Zhenzhong Wang, Zehui Lin, Wanyu Lin, Ming Yang, Minggang Zeng, Kay Chen Tan
  23 · 3 · 0 · 25 May 2024

Advancing Transportation Mode Share Analysis with Built Environment: Deep Hybrid Models with Urban Road Network
  Dingyi Zhuang, Qingyi Wang, Yunhan Zheng, Xiaotong Guo, Shenhao Wang, Haris N. Koutsopoulos, Jinhua Zhao
  19 · 0 · 0 · 23 May 2024
Exploring the Relationship Between Feature Attribution Methods and Model Performance
  Priscylla Silva, Claudio T. Silva, L. G. Nonato
  FAtt · 25 · 1 · 0 · 22 May 2024

Part-based Quantitative Analysis for Heatmaps
  Osman Tursun, Sinan Kalkan, Simon Denman, S. Sridharan, Clinton Fookes
  35 · 0 · 0 · 22 May 2024

FFAM: Feature Factorization Activation Map for Explanation of 3D Detectors
  Shuai Liu, Boyang Li, Zhiyu Fang, Mingyue Cui, Kai Huang
  40 · 0 · 0 · 21 May 2024

Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing
  Yunlong Zhao, Xiaoheng Deng, Yijing Liu, Xin-jun Pei, Jiazhi Xia, Wei Chen
  AAML · 37 · 3 · 0 · 18 May 2024

Towards Gradient-based Time-Series Explanations through a SpatioTemporal Attention Network
  Min Hun Lee
  AI4TS · ViT · FAtt · 33 · 3 · 0 · 18 May 2024

Enhancing the analysis of murine neonatal ultrasonic vocalizations: Development, evaluation, and application of different mathematical models
  Rudolf Herdt, Louisa Kinzel, Johann Georg Maass, Marvin Walther, Henning Fröhlich, Tim Schubert, Peter Maass, C. Schaaf
  24 · 0 · 0 · 17 May 2024

Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution
  Eslam Zaher, Maciej Trzaskowski, Quan Nguyen, Fred Roosta
  AAML · 24 · 4 · 0 · 16 May 2024

Error-margin Analysis for Hidden Neuron Activation Labels
  Abhilekha Dalal, R. Rayan, Pascal Hitzler
  FAtt · 31 · 1 · 0 · 14 May 2024

Certified ℓ₂ Attribution Robustness via Uniformly Smoothed Attributions
  Fan Wang, Adams Wai-Kin Kong
  43 · 1 · 0 · 10 May 2024

Interpretability Needs a New Paradigm
  Andreas Madsen, Himabindu Lakkaraju, Siva Reddy, Sarath Chandar
  39 · 4 · 0 · 08 May 2024

A Fresh Look at Sanity Checks for Saliency Maps
  Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
  FAtt · LRM · 37 · 5 · 0 · 03 May 2024
Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
  Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova
  61 · 5 · 0 · 02 May 2024

Backdoor-based Explainable AI Benchmark for High Fidelity Evaluation of Attribution Methods
  Peiyu Yang, Naveed Akhtar, Jiantong Jiang, Ajmal Saeed Mian
  XAI · 32 · 2 · 0 · 02 May 2024

Flow AM: Generating Point Cloud Global Explanations by Latent Alignment
  Hanxiao Tan
  39 · 1 · 0 · 29 Apr 2024

A Comparative Analysis of Adversarial Robustness for Quantum and Classical Machine Learning Models
  Maximilian Wendlinger, Kilian Tscharke, Pascal Debus
  AAML · 18 · 8 · 0 · 24 Apr 2024

Guided AbsoluteGrad: Magnitude of Gradients Matters to Explanation's Localization and Saliency
  Jun Huang, Yan Liu
  FAtt · 52 · 0 · 0 · 23 Apr 2024

A Learning Paradigm for Interpretable Gradients
  Felipe Figueroa, Hanwei Zhang, R. Sicre, Yannis Avrithis, Stéphane Ayache
  FAtt · 18 · 0 · 0 · 23 Apr 2024

CA-Stream: Attention-based pooling for interpretable image recognition
  Felipe Torres, Hanwei Zhang, R. Sicre, Stéphane Ayache, Yannis Avrithis
  50 · 0 · 0 · 23 Apr 2024

CoProNN: Concept-based Prototypical Nearest Neighbors for Explaining Vision Models
  Teodor Chiaburu, Frank Haußer, Felix Bießmann
  40 · 4 · 0 · 23 Apr 2024

Mechanistic Interpretability for AI Safety -- A Review
  Leonard Bereska, E. Gavves
  AI4CE · 40 · 112 · 0 · 22 Apr 2024

On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis
  Abhilekha Dalal, R. Rayan, Adrita Barua, Eugene Y. Vasserman, Md Kamruzzaman Sarker, Pascal Hitzler
  27 · 4 · 0 · 21 Apr 2024
Uncovering Safety Risks of Large Language Models through Concept Activation Vector
  Zhihao Xu, Ruixuan Huang, Changyu Chen, Shuai Wang, Xiting Wang
  LLMSV · 32 · 10 · 0 · 18 Apr 2024

Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
  Niklas Koenen, Marvin N. Wright
  FAtt · 39 · 5 · 0 · 17 Apr 2024

CNN-based explanation ensembling for dataset, representation and explanations evaluation
  Weronika Hryniewska-Guzik, Luca Longo, P. Biecek
  FAtt · 45 · 0 · 0 · 16 Apr 2024

Epistemic Uncertainty Quantification For Pre-trained Neural Network
  Hanjing Wang, Qiang Ji
  UQCV · 39 · 2 · 0 · 15 Apr 2024

MCPNet: An Interpretable Classifier via Multi-Level Concept Prototypes
  Bor-Shiun Wang, Chien-Yi Wang, Wei-Chen Chiu
  30 · 3 · 0 · 13 Apr 2024

Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models
  Marvin Pafla, Kate Larson, Mark Hancock
  35 · 6 · 0 · 11 Apr 2024

Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training
  Shizhan Gong, Qi Dou, Farzan Farnia
  FAtt · 40 · 2 · 0 · 06 Apr 2024

LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity
  Walid Bousselham, Angie Boggust, Sofian Chaybouti, Hendrik Strobelt, Hilde Kuehne
  96 · 10 · 0 · 04 Apr 2024

Smooth Deep Saliency
  Rudolf Herdt, Maximilian Schmidt, Daniel Otero Baguer, Peter Maass
  MedIm · FAtt · 18 · 0 · 0 · 02 Apr 2024

Using Interpretation Methods for Model Enhancement
  Zhuo Chen, Chengyue Jiang, Kewei Tu
  19 · 2 · 0 · 02 Apr 2024

On the Faithfulness of Vision Transformer Explanations
  Junyi Wu, Weitai Kang, Hao Tang, Yuan Hong, Yan Yan
  24 · 6 · 0 · 01 Apr 2024
A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures
  Thanh Tam Nguyen, T. T. Huynh, Zhao Ren, Thanh Toan Nguyen, Phi Le Nguyen, Hongzhi Yin, Quoc Viet Hung Nguyen
  65 · 8 · 0 · 31 Mar 2024

A Peg-in-hole Task Strategy for Holes in Concrete
  André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata
  18 · 15 · 0 · 29 Mar 2024

Forward Learning for Gradient-based Black-box Saliency Map Generation
  Zeliang Zhang, Mingqian Feng, Jinyang Jiang, Rongyi Zhu, Yijie Peng, Chenliang Xu
  FAtt · 32 · 2 · 0 · 22 Mar 2024

Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer
  Junyi Wu, Bin Duan, Weitai Kang, Hao Tang, Yan Yan
  36 · 6 · 0 · 21 Mar 2024

Listenable Maps for Audio Classifiers
  Francesco Paissan, Mirco Ravanelli, Cem Subakan
  32 · 7 · 0 · 19 Mar 2024

Demystifying the Physics of Deep Reinforcement Learning-Based Autonomous Vehicle Decision-Making
  Hanxi Wan, Pei Li, A. Kusari
  AI4CE · 32 · 0 · 0 · 18 Mar 2024

SelfIE: Self-Interpretation of Large Language Model Embeddings
  Haozhe Chen, Carl Vondrick, Chengzhi Mao
  19 · 18 · 0 · 16 Mar 2024

Gradient based Feature Attribution in Explainable AI: A Technical Review
  Yongjie Wang, Tong Zhang, Xu Guo, Zhiqi Shen
  XAI · 21 · 18 · 0 · 15 Mar 2024

Interpretable Machine Learning for Survival Analysis
  Sophie Hanna Langbein, Mateusz Krzyzinski, Mikolaj Spytek, Hubert Baniecki, P. Biecek, Marvin N. Wright
  43 · 2 · 0 · 15 Mar 2024

What Sketch Explainability Really Means for Downstream Tasks
  Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
  30 · 4 · 0 · 14 Mar 2024

Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
  F. Mumuni, A. Mumuni
  AAML · 37 · 5 · 0 · 11 Mar 2024

Feature CAM: Interpretable AI in Image Classification
  Frincy Clement, Ji Yang, Irene Cheng
  FAtt · 28 · 1 · 0 · 08 Mar 2024