ResearchTrend.AI
Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout

3 June 2021 · Pengfei Xie, Linyuan Wang, Ruoxi Qin, Kai Qiao, S. Shi, Guoen Hu, Bin Yan · AAML · arXiv:2106.01617
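The title refers to applying dropout to the *input* during an iterative attack so that each gradient step sees a slightly different view of the image, which tends to reduce overfitting to the source model and improve transferability. As a rough illustration of that general idea only (not the authors' exact method; the toy model, function names, and all parameters below are assumptions), here is a minimal NumPy sketch of an iterative sign-gradient attack with per-step input dropout:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "model": a linear score w @ x whose input-gradient is
# simply w. A real attack would backpropagate through a neural network.
w = rng.normal(size=64)
x_clean = rng.normal(size=64)

def input_dropout_attack(x, w, eps=0.3, alpha=0.02, steps=40, drop_p=0.1):
    """Iterative sign-gradient attack with input dropout: at every step the
    gradient is computed on a randomly masked copy of the input, acting like
    an ensemble of input transformations."""
    x_adv = x.copy()
    for _ in range(steps):
        # Randomly zero out ~drop_p of the input coordinates this step.
        mask = (rng.random(x.shape) > drop_p).astype(x.dtype)
        # Gradient of the linear score w.r.t. the masked input is w * mask.
        grad = w * mask
        x_adv = x_adv + alpha * np.sign(grad)       # ascend the score
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project into the eps-ball
    return x_adv

x_adv = input_dropout_attack(x_clean, w)
print(np.max(np.abs(x_adv - x_clean)))  # never exceeds eps due to projection
```

The projection step keeps the perturbation inside the L∞ ball of radius `eps`; the dropout mask is resampled every iteration, which is what distinguishes this from plain iterative FGSM.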

Papers citing "Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout" (10 of 10 shown):
- "Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting" — Junhua Zou, Zhisong Pan, Junyang Qiu, Xin Liu, Ting Rui, Wei Li. 67 citations. 11 Dec 2021.
- "AdvHat: Real-world adversarial attack on ArcFace Face ID system" — Stepan Alekseevich Komkov, Aleksandr Petiushko. Tags: AAML, CVBM. 284 citations. 23 Aug 2019.
- "Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment" — Di Jin, Zhijing Jin, Qiufeng Wang, Peter Szolovits. Tags: SILM, AAML. 1,064 citations. 27 Jul 2019.
- "Feature Denoising for Improving Adversarial Robustness" — Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He. 907 citations. 09 Dec 2018.
- "Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks" — Siyue Wang, Tianlin Li, Pu Zhao, Wujie Wen, David Kaeli, S. Chin, Xinyu Lin. Tags: AAML. 70 citations. 13 Sep 2018.
- "Audio Adversarial Examples: Targeted Attacks on Speech-to-Text" — Nicholas Carlini, D. Wagner. Tags: AAML. 1,076 citations. 05 Jan 2018.
- "Towards Evaluating the Robustness of Neural Networks" — Nicholas Carlini, D. Wagner. Tags: OOD, AAML. 8,513 citations. 16 Aug 2016.
- "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning" — Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alexander A. Alemi. 14,196 citations. 23 Feb 2016.
- "Domain-Adversarial Training of Neural Networks" — Yaroslav Ganin, E. Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, M. Marchand, Victor Lempitsky. Tags: GAN, OOD. 9,418 citations. 28 May 2015.
- "Intriguing properties of neural networks" — Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus. Tags: AAML. 14,831 citations. 21 Dec 2013.