SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
12 June 2017. arXiv:1706.03825. [FAtt, ODL]
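For orientation before the citation list: the method works the way its title suggests. A gradient-based sensitivity map is denoised by averaging the gradients of the class score over several noise-perturbed copies of the input. Below is a minimal PyTorch-style sketch of that idea; the function name smoothgrad, the single-image model interface, and the defaults (50 samples, noise at 15% of the input value range) are illustrative assumptions rather than details taken from this page.

    import torch

    def smoothgrad(model, x, target_class, n_samples=50, noise_level=0.15):
        """Average input gradients over Gaussian-perturbed copies of x.

        x: a single input tensor, e.g. an image of shape (C, H, W).
        noise_level: noise standard deviation as a fraction of x's value range.
        """
        model.eval()
        sigma = noise_level * (x.max() - x.min()).item()
        avg_grad = torch.zeros_like(x)
        for _ in range(n_samples):
            # Perturb the input and track gradients with respect to the noisy copy.
            noisy = (x.detach() + sigma * torch.randn_like(x)).requires_grad_(True)
            score = model(noisy.unsqueeze(0))[0, target_class]
            score.backward()
            avg_grad += noisy.grad
        return avg_grad / n_samples  # smoothed sensitivity map, same shape as x

The original paper reports that a few dozen noisy samples and noise levels of roughly 10-20% of the input range are usually enough to visibly reduce the noise in the resulting sensitivity maps.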

Papers citing "SmoothGrad: removing noise by adding noise"

50 of 1,161 citing papers shown.
PnPXAI: A Universal XAI Framework Providing Automatic Explanations Across Diverse Modalities and Models
Seongun Kim, Sol-A. Kim, Geonhyeong Kim, Enver Menadjiev, Chanwoo Lee, Seongwook Chung, Nari Kim, Jaesik Choi. 15 May 2025.

Attention-aggregated Attack for Boosting the Transferability of Facial Adversarial Examples
Jian-Wei Li, Wen-Ze Shao. 06 May 2025. [AAML]

ABE: A Unified Framework for Robust and Faithful Attribution-Based Explainability
Zhiyu Zhu, Jiayu Zhang, Zhibo Jin, Fang Chen, Jianlong Zhou. 03 May 2025. [FAtt]

Overview and practical recommendations on using Shapley Values for identifying predictive biomarkers via CATE modeling
David Svensson, Erik Hermansson, N. Nikolaou, Konstantinos Sechidis, Ilya Lipkovich. 02 May 2025. [CML]

Learning to Attribute with Attention
Benjamin Cohen-Wang, Yung-Sung Chuang, Aleksander Madry. 18 Apr 2025.

Set You Straight: Auto-Steering Denoising Trajectories to Sidestep Unwanted Concepts
Leyang Li, Shilin Lu, Yan Ren, A. Kong. 17 Apr 2025. [DiffM]

Are We Merely Justifying Results ex Post Facto? Quantifying Explanatory Inversion in Post-Hoc Model Explanations
Zhen Tan, Song Wang, Yifan Li, Yu Kong, Jundong Li, Tianlong Chen, Huan Liu. 11 Apr 2025. [FAtt]

Uncovering the Structure of Explanation Quality with Spectral Analysis
Johannes Maeß, G. Montavon, Shinichi Nakajima, Klaus-Robert Müller, Thomas Schnake. 11 Apr 2025. [FAtt]

A Meaningful Perturbation Metric for Evaluating Explainability Methods
Danielle Cohen, Hila Chefer, Lior Wolf. 09 Apr 2025. [AAML]

PRIMEDrive-CoT: A Precognitive Chain-of-Thought Framework for Uncertainty-Aware Object Interaction in Driving Scene Scenario
Sriram Mandalika, Lalitha V, Athira Nambiar. 08 Apr 2025.

Fourier Feature Attribution: A New Efficiency Attribution Method
Zechen Liu, Feiyang Zhang, Wei Song, X. Li, Wei Wei. 02 Apr 2025. [FAtt]

An Explainable Neural Radiomic Sequence Model with Spatiotemporal Continuity for Quantifying 4DCT-based Pulmonary Ventilation
Rihui Zhang, Haiming Zhu, Jingtong Zhao, Lei Zhang, F. Yin, Chunhao Wang, Zhenyu Yang. 31 Mar 2025.

VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow
Ada Gorgun, Bernt Schiele, Jonas Fischer. 28 Mar 2025.

Interpretability of Graph Neural Networks to Assess Effects of Global Change Drivers on Ecological Networks
Emré Anakok, Pierre Barbillon, Colin Fontaine, Elisa Thébault. 19 Mar 2025.

A Digital Twin Simulator of a Pastillation Process with Applications to Automatic Control based on Computer Vision
Leonardo D. González, J. Pulsipher, Shengli Jiang, Tyler A. Soderstrom, Victor M. Zavala. 18 Mar 2025.

Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes
Nhi Pham, Bernt Schiele, Adam Kortylewski, Jonas Fischer. 17 Mar 2025.

Axiomatic Explainer Globalness via Optimal Transport
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy. 13 Mar 2025.

Discovering Influential Neuron Path in Vision Transformers
Yifan Wang, Yifei Liu, Yingdong Shi, C. Li, Anqi Pang, Sibei Yang, Jingyi Yu, Kan Ren. 12 Mar 2025. [ViT]

Tangentially Aligned Integrated Gradients for User-Friendly Explanations
Lachlan Simpson, Federico Costanza, Kyle Millar, A. Cheng, Cheng-Chew Lim, Hong-Gunn Chew. 11 Mar 2025. [FAtt]

Now you see me! A framework for obtaining class-relevant saliency maps
Nils Philipp Walter, Jilles Vreeken, Jonas Fischer. 10 Mar 2025. [FAtt]

Interactive Medical Image Analysis with Concept-based Similarity Reasoning
Ta Duc Huy, Sen Kim Tran, Phan Nguyen, Nguyen Hoang Tran, Tran Bao Sam, A. Hengel, Zhibin Liao, Johan W. Verjans, Minh Nguyen Nhat To, Vu Minh Hieu Phan. 10 Mar 2025.

Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations
Eren Erogullari, Sebastian Lapuschkin, Wojciech Samek, Frederik Pahde. 07 Mar 2025. [LLMSV, CoGe]

Enhancing Network Security Management in Water Systems using FM-based Attack Attribution
Aleksandar Avdalovic, Joseph Khoury, Ahmad Taha, E. Bou-Harb. 03 Mar 2025. [AAML]

Riemannian Integrated Gradients: A Geometric View of Explainable AI
Federico Costanza, Lachlan Simpson. 02 Mar 2025.

Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
Itay Benou, Tammy Riklin-Raviv. 27 Feb 2025.

Constraining Sequential Model Editing with Editing Anchor Compression
Hao-Xiang Xu, Jun-Yu Ma, Zhen-Hua Ling, Ningyu Zhang, Jia-Chen Gu. 25 Feb 2025. [KELM]

Class-Dependent Perturbation Effects in Evaluating Time Series Attributions
Gregor Baer, Isel Grau, Chao Zhang, Pieter Van Gorp. 24 Feb 2025. [AAML]

NeurFlow: Interpreting Neural Networks through Neuron Groups and Functional Interactions
Tue Cao, Nhat X. Hoang, Hieu H. Pham, P. Nguyen, My T. Thai. 22 Feb 2025.

A Close Look at Decomposition-based XAI-Methods for Transformer Language Models
L. Arras, Bruno Puri, Patrick Kahardipraja, Sebastian Lapuschkin, Wojciech Samek. 21 Feb 2025.

Time-series attribution maps with regularized contrastive learning
Steffen Schneider, Rodrigo González Laiz, Anastasiia Filippova, Markus Frey, Mackenzie W. Mathis. 17 Feb 2025. [BDL, FAtt, CML, AI4TS]

Using the Path of Least Resistance to Explain Deep Networks
Sina Salek, Joseph Enguehard. 17 Feb 2025. [FAtt]

Mechanistic Unveiling of Transformer Circuits: Self-Influence as a Key to Model Reasoning
L. Zhang, Lijie Hu, Di Wang. 17 Feb 2025. [LRM]

ExplainReduce: Summarising local explanations via proxies
Lauri Seppäläinen, Mudong Guo, Kai Puolamäki. 17 Feb 2025. [FAtt]

Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution
Shichang Zhang, Tessa Han, Usha Bhalla, Hima Lakkaraju. 17 Feb 2025. [FAtt]

Recent Advances in Malware Detection: Graph Learning and Explainability
Hossein Shokouhinejad, Roozbeh Razavi-Far, Hesamodin Mohammadian, Mahdi Rabbani, Samuel Ansong, Griffin Higgins, Ali Ghorbani. 14 Feb 2025. [AAML]

DejAIvu: Identifying and Explaining AI Art on the Web in Real-Time with Saliency Maps
Jocelyn Dzuong. 12 Feb 2025.

Do we really have to filter out random noise in pre-training data for language models?
Jinghan Ru, Yuxin Xie, Xianwei Zhuang, Yuguo Yin, Zhihui Guo, Zhiming Liu, Qianli Ren, Yuexian Zou. 10 Feb 2025.

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis. 06 Feb 2025.

Generating visual explanations from deep networks using implicit neural representations
Michal Byra, Henrik Skibbe. 20 Jan 2025. [GAN, FAtt]

MedGrad E-CLIP: Enhancing Trust and Transparency in AI-Driven Skin Lesion Diagnosis
Sadia Kamal, Tim Oates. 12 Jan 2025. [MedIm]

Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci. 10 Jan 2025.

Visual Large Language Models for Generalized and Specialized Applications
Yifan Li, Zhixin Lai, Wentao Bao, Zhen Tan, Anh Dao, Kewei Sui, Jiayi Shen, Dong Liu, Huan Liu, Yu Kong. 06 Jan 2025. [VLM]

Physically Constrained Generative Adversarial Networks for Improving Precipitation Fields from Earth System Models
P. Hess, Markus Drüke, S. Petri, Felix M. Strnad, Niklas Boers. 03 Jan 2025.

Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger. 03 Jan 2025. [XAI, ELM]

A Tale of Two Imperatives: Privacy and Explainability
Supriya Manna, Niladri Sett. 30 Dec 2024.

Attribution for Enhanced Explanation with Transferable Adversarial eXploration
Zhiyu Zhu, Jiayu Zhang, Zhibo Jin, Huaming Chen, Jianlong Zhou, Fang Chen. 27 Dec 2024. [AAML, ViT]

Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory
Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer, J. Herbinger. 22 Dec 2024. [FAtt]

One Pixel is All I Need
Deng Siqin, Zhou Xiaoyi. 14 Dec 2024. [ViT]

Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation
Davor Vukadin, Petar Afrić, Marin Šilić, Goran Delač. 12 Dec 2024. [FAtt]

FaceX: Understanding Face Attribute Classifiers through Summary Model Explanations
Ioannis Sarridis, C. Koutlis, Symeon Papadopoulos, Christos Diou. 10 Dec 2024. [CVBM]