Defense Against Explanation Manipulation (arXiv 2111.04303)

8 November 2021
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Helen Zhou
AAML

Papers citing "Defense Against Explanation Manipulation"

8 / 8 papers shown
Pixel-level Certified Explanations via Randomized Smoothing
Alaa Anani, Tobias Lorenz, Mario Fritz, Bernt Schiele
FAtt, AAML · 18 Jun 2025
Revealing Vulnerabilities of Neural Networks in Parameter Learning and Defense Against Explanation-Aware Backdoors
Md Abdul Kadir, G. Addluri, Daniel Sonntag
AAML · 25 Mar 2024
Are Classification Robustness and Explanation Robustness Really Strongly Correlated? An Analysis Through Input Loss Landscape
Tiejin Chen, Wenwang Huang, Linsey Pang, Dongsheng Luo, Hua Wei
OOD · 09 Mar 2024
Adversarial attacks and defenses in explainable artificial intelligence: A survey
Hubert Baniecki, P. Biecek
AAML · 06 Jun 2023
Towards More Robust Interpretation via Local Gradient Alignment
Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
FAtt · 29 Nov 2022
SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability
Wei Huang, Xingyu Zhao, Gao Jin, Xiaowei Huang
AAML · 19 Aug 2022
When and How to Fool Explainable Models (and Humans) with Adversarial Examples
Jon Vadillo, Roberto Santana, Jose A. Lozano
SILM, AAML · 05 Jul 2021
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML · 16 Feb 2016