Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency Map Comparison (arXiv:2101.10977)

26 January 2021
Lukas Brunke
Prateek Agrawal
Nikhil George
    AAML
    FAtt
ArXivPDFHTML

Papers citing "Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency Map Comparison" (3 papers shown)
On the Evaluation Consistency of Attribution-based Explanations
Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Mingli Song
Topics: XAI
28 Jul 2024

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
14 Feb 2023

BOREx: Bayesian-Optimization-Based Refinement of Saliency Map for Image- and Video-Classification Models
Atsushi Kikuchi, Kotaro Uchida, Masaki Waga, Kohei Suenaga
Topics: FAtt
31 Oct 2022