ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Saliency strikes back: How filtering out high frequencies improves white-box explanations

18 July 2023
Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, R. VanRullen, Thomas Serre
FAtt

Papers citing "Saliency strikes back: How filtering out high frequencies improves white-box explanations"

18 / 18 papers shown

 1. Learning with Explanation Constraints
    Rattana Pukdee, Dylan Sam, J. Zico Kolter, Maria-Florina Balcan, Pradeep Ravikumar
    FAtt · 25 Mar 2023 · 52 / 6 / 0

 2. On the coalitional decomposition of parameters of interest
    Marouane Il Idrissi, Nicolas Bousquet, Fabrice Gamboa, Bertrand Iooss, Jean-Michel Loubes
    06 Jan 2023 · 24 / 8 / 0

 3. Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
    Paul Novello, Thomas Fel, David Vigouroux
    FAtt · 13 Jun 2022 · 37 / 28 / 0

 4. Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
    Tessa Han, Suraj Srinivas, Himabindu Lakkaraju
    FAtt · 02 Jun 2022 · 52 / 86 / 0

 5. Rethinking Stability for Attribution-based Explanations
    Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju
    FAtt · 14 Mar 2022 · 49 / 50 / 0

 6. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
    Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
    15 Oct 2020 · 281 / 436 / 0

 7. Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
    Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
    FAtt · 11 Aug 2020 · 40 / 162 / 0

 8. Evaluating and Aggregating Feature-based Model Explanations
    Umang Bhatt, Adrian Weller, J. M. F. Moura
    XAI · 01 May 2020 · 70 / 219 / 0

 9. RISE: Randomized Input Sampling for Explanation of Black-box Models
    Vitali Petsiuk, Abir Das, Kate Saenko
    FAtt · 19 Jun 2018 · 97 / 1,159 / 0

10. Noise-adding Methods of Saliency Map as Series of Higher Order Partial Derivative
    Junghoon Seo, J. Choe, Jamyoung Koo, Seunghyeon Jeon, Beomsu Kim, Taegyun Jeon
    FAtt, ODL · 08 Jun 2018 · 16 / 29 / 0

11. Interpretable Explanations of Black Boxes by Meaningful Perturbation
    Ruth C. Fong, Andrea Vedaldi
    FAtt, AAML · 11 Apr 2017 · 30 / 1,514 / 0

12. Learning Important Features Through Propagating Activation Differences
    Avanti Shrikumar, Peyton Greenside, A. Kundaje
    FAtt · 10 Apr 2017 · 82 / 3,848 / 0

13. Axiomatic Attribution for Deep Networks
    Mukund Sundararajan, Ankur Taly, Qiqi Yan
    OOD, FAtt · 04 Mar 2017 · 70 / 5,920 / 0

14. The Shattered Gradients Problem: If resnets are the answer, then what is the question?
    David Balduzzi, Marcus Frean, Lennox Leary, J. P. Lewis, Kurt Wan-Duo Ma, Brian McWilliams
    ODL · 28 Feb 2017 · 46 / 399 / 0

15. Identity Mappings in Deep Residual Networks
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
    16 Mar 2016 · 262 / 10,149 / 0

16. Learning Deep Features for Discriminative Localization
    Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba
    SSL, SSeg, FAtt · 14 Dec 2015 · 119 / 9,266 / 0

17. Striving for Simplicity: The All Convolutional Net
    Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
    FAtt · 21 Dec 2014 · 144 / 4,653 / 0

18. Visualizing and Understanding Convolutional Networks
    Matthew D. Zeiler, Rob Fergus
    FAtt, SSL · 12 Nov 2013 · 208 / 15,825 / 0