ResearchTrend.AI

MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope
Jingwei Zhang, Farzan Farnia · 8 January 2023 · arXiv: 2302.05294 · [UQCV]

Papers citing "MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope" (22 papers)

Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation
Dohun Lim, Hyeonseok Lee, Sungchan Kim · 26 Mar 2021 · [FAtt, AAML]

There and Back Again: Revisiting Backpropagation Saliency Methods
Sylvestre-Alvise Rebuffi, Ruth C. Fong, Xu Ji, Andrea Vedaldi · 06 Apr 2020 · [FAtt, XAI]

Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks
Jörg Wagner, Jan M. Köhler, Tobias Gindele, Leon Hetzel, Thaddäus Wiedemer, Sven Behnke · 07 Aug 2019 · [AAML, FAtt]

Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel · 19 Jun 2019 · [AAML, FAtt]

Certifiably Robust Interpretation in Deep Learning
Alexander Levine, Sahil Singla, Soheil Feizi · 28 May 2019 · [FAtt, AAML]

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Mingxing Tan, Quoc V. Le · 28 May 2019 · [3DV, MedIm]

Fooling Neural Network Interpretations via Adversarial Model Manipulation
Juyeon Heo, Sunghwan Joo, Taesup Moon · 06 Feb 2019 · [AAML, FAtt]

Structured Adversarial Attack: Towards General Implementation and Better Interpretability
Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin · 05 Aug 2018 · [AAML]

Object Detection with Deep Learning: A Review
Zhong-Qiu Zhao, Peng Zheng, Shou-tao Xu, Xindong Wu · 15 Jul 2018 · [ObjD]

Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks
Aditya Chattopadhyay, Anirban Sarkar, Prantik Howlader, V. Balasubramanian · 30 Oct 2017 · [FAtt]

Interpretation of Neural Networks is Fragile
Amirata Ghorbani, Abubakar Abid, James Zou · 29 Oct 2017 · [FAtt, AAML]

Deep Learning for Medical Image Analysis
Mina Rezaei, Haojin Yang, Christoph Meinel · 17 Aug 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · 19 Jun 2017 · [SILM, OOD]

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg · 12 Jun 2017 · [FAtt, ODL]

Real Time Image Saliency for Black Box Classifiers
P. Dabkowski, Y. Gal · 22 May 2017

Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong, Andrea Vedaldi · 11 Apr 2017 · [FAtt, AAML]

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje · 10 Apr 2017 · [FAtt]

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan · 04 Mar 2017 · [OOD, FAtt]

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra · 07 Oct 2016 · [FAtt]

Learning Deep Features for Discriminative Localization
Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba · 14 Dec 2015 · [SSL, SSeg, FAtt]

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 10 Dec 2015 · [MedIm]

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · 20 Dec 2013 · [FAtt]