SmoothGrad: removing noise by adding noise

12 June 2017 · arXiv: 1706.03825
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
Topics: FAtt, ODL
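
For context: the paper's method, SmoothGrad, sharpens gradient-based sensitivity maps by averaging the vanilla gradient over several noise-perturbed copies of the input, roughly M̂_c(x) = (1/n) Σ M_c(x + N(0, σ²)). Below is a minimal PyTorch sketch of that idea, not the authors' reference implementation; `model`, `x`, and `target_class` are placeholder names, and the defaults follow the ranges suggested in the paper (noise at 10-20% of the input value range, on the order of 50 samples).

```python
import torch

def smoothgrad(model, x, target_class, n_samples=50, noise_frac=0.15):
    """Average vanilla gradients over noisy copies of the input.

    model: a classifier returning logits of shape (1, num_classes)
    x: input tensor, e.g. shape (1, C, H, W)
    noise_frac: sigma as a fraction of the input's value range
    """
    sigma = noise_frac * (x.max() - x.min())
    grad_sum = torch.zeros_like(x)
    model.eval()
    for _ in range(n_samples):
        # Perturb the input, then track gradients on the noisy copy only.
        noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        score = model(noisy)[0, target_class]  # class score M_c(x + noise)
        score.backward()
        grad_sum += noisy.grad
    return grad_sum / n_samples  # averaged sensitivity map
```

As in the paper, taking absolute values (or the channel-wise maximum) of the returned map before visualization tends to give cleaner saliency images.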

Papers citing "SmoothGrad: removing noise by adding noise"

Showing 11 of 1,161 citing papers, newest first.

1. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
   Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
   Topics: FAtt, XAI · 02 Feb 2018

2. Training Set Debugging Using Trusted Items
   Xuezhou Zhang, Xiaojin Zhu, Stephen J. Wright
   24 Jan 2018

3. Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers
   Fred Hohman, Minsuk Kahng, Robert S. Pienta, Duen Horng Chau
   Topics: OOD, HAI · 21 Jan 2018

4. Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation
   Heyi Li, Yunke Tian, Klaus Mueller, Xin Chen
   Topics: FAtt · 22 Dec 2017

5. A Perceptual Measure for Deep Single Image Camera Calibration
   Yannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisenmann, Matt Fisher, Emiliano Gambaretto, Sunil Hadap, Jean-François Lalonde
   Topics: 3DV · 02 Dec 2017

6. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
   Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
   Topics: FAtt · 30 Nov 2017

7. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
   A. Ross, Finale Doshi-Velez
   Topics: AAML · 26 Nov 2017

8. No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World
   S. Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, D. Sculley
   22 Nov 2017

9. Towards better understanding of gradient-based attribution methods for Deep Neural Networks
   Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross
   Topics: FAtt · 16 Nov 2017

10. The (Un)reliability of saliency methods
    Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
    Topics: FAtt, XAI · 02 Nov 2017

11. Learning how to explain neural networks: PatternNet and PatternAttribution
    Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, K. Müller, D. Erhan, Been Kim, Sven Dähne
    Topics: XAI, FAtt · 16 May 2017