ResearchTrend.AI
Assessing the Reliability of Visual Explanations of Deep Models with Adversarial Perturbations

22 April 2020
Dan Valle, Tiago Pimentel, Adriano Veloso
Tags: FAtt, XAI, AAML

Papers citing "Assessing the Reliability of Visual Explanations of Deep Models with Adversarial Perturbations"

1 / 1 papers shown

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
FaML · 24 Jun 2017