
Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles

18 March 2021
G. Cantareira
R. Mello
F. Paulovich
    AAML

Papers citing "Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles"

3 / 3 papers shown
VisCUIT: Visual Auditor for Bias in CNN Image Classifier
Seongmin Lee
Zijie J. Wang
Judy Hoffman
Duen Horng Chau
12 Apr 2022
TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks
C. Amarnath
Aishwarya H. Balwani
Kwondo Ma
Abhijit Chatterjee
AAML
16 Oct 2021
Analyzing the Noise Robustness of Deep Neural Networks
Kelei Cao
Mengchen Liu
Hang Su
Jing Wu
Jun Zhu
Shixia Liu
AAML
26 Jan 2020