Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks
arXiv:1801.04693 · 15 January 2018
Bo Luo, Yannan Liu, Lingxiao Wei, Q. Xu
AAML

Papers citing "Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks"

Showing 13 of 63 citing papers.
Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification
Eitan Rothberg, Tingting Chen, Luo Jie, Hao Ji
AAML · 8 / 0 / 0 · 10 Sep 2019

Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods
Arif Siddiqi
AAML · 27 / 11 / 0 · 17 Jul 2019

Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations
Yuezun Li, Xin Yang, Baoyuan Wu, Siwei Lyu
AAML, PICV, CVBM · 26 / 38 / 0 · 21 Jun 2019

The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks
F. Assion, Peter Schlicht, Florens Greßner, W. Günther, Fabian Hüger, Nico M. Schmidt, Umair Rasheed
AAML · 25 / 14 / 0 · 17 Jun 2019

Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain
Lei Bu, Yuchao Duan, Fu Song, Zhe Zhao
AAML · 37 / 18 / 0 · 19 May 2019

Towards a Robust Deep Neural Network in Texts: A Survey
Wenqi Wang, Benxiao Tang, Run Wang, Lina Wang, Aoshuang Ye
AAML · 26 / 39 / 0 · 12 Feb 2019

Model Compression with Adversarial Robustness: A Unified Optimization Framework
Shupeng Gui, Haotao Wang, Chen Yu, Haichuan Yang, Zhangyang Wang, Ji Liu
MQ · 19 / 137 / 0 · 10 Feb 2019

Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Mohamed Bennai
AAML · 13 / 7 / 0 · 04 Feb 2019

CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks
Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Mohamed Bennai
GAN, AAML · 30 / 26 / 0 · 28 Jan 2019

Disentangling Adversarial Robustness and Generalization
David Stutz, Matthias Hein, Bernt Schiele
AAML, OOD · 194 / 277 / 0 · 03 Dec 2018

Exploring the Vulnerability of Single Shot Module in Object Detectors via Imperceptible Background Patches
Yuezun Li, Xiao Bian, Ming-Ching Chang, Siwei Lyu
AAML, ObjD · 25 / 31 / 0 · 16 Sep 2018

Robustness of Rotation-Equivariant Networks to Adversarial Perturbations
Beranger Dumont, Simona Maggio, Pablo Montalvo
AAML · 16 / 23 / 0 · 19 Feb 2018

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 368 / 5,849 / 0 · 08 Jul 2016