Delving into Transferable Adversarial Examples and Black-box Attacks
arXiv:1611.02770 (v3, latest) · 8 November 2016 · AAML
Yanpei Liu, Xinyun Chen, Chang-rui Liu, Basel Alomair

Papers citing "Delving into Transferable Adversarial Examples and Black-box Attacks"

Showing 50 of 928 citing papers.
  • Protecting JPEG Images Against Adversarial Attacks · Aaditya (Adi) Prakash, N. Moran, Solomon Garber, Antonella DiLillo, J. Storer · AAML · 02 Mar 2018
  • Understanding and Enhancing the Transferability of Adversarial Examples · Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan · AAML, SILM · 27 Feb 2018
  • Adversarial vulnerability for any classifier · Alhussein Fawzi, Hamza Fawzi, Omar Fawzi · AAML · 23 Feb 2018
  • Adversarial Examples that Fool both Computer Vision and Time-Limited Humans · Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alexey Kurakin, Ian Goodfellow, Jascha Narain Sohl-Dickstein · AAML · 22 Feb 2018
  • Divide, Denoise, and Defend against Adversarial Attacks · Seyed-Mohsen Moosavi-Dezfooli, A. Shrivastava, Oncel Tuzel · AAML · 19 Feb 2018
  • DARTS: Deceiving Autonomous Cars with Toxic Signs · Chawin Sitawarin, A. Bhagoji, Arsalan Mosenia, M. Chiang, Prateek Mittal · AAML · 18 Feb 2018
  • Fooling OCR Systems with Adversarial Text Images · Congzheng Song, Vitaly Shmatikov · AAML · 15 Feb 2018
  • Stealing Hyperparameters in Machine Learning · Binghui Wang, Neil Zhenqiang Gong · AAML · 14 Feb 2018
  • Understanding Membership Inferences on Well-Generalized Learning Models · Yunhui Long, Vincent Bindschaedler, Lei Wang, Diyue Bu, Xiaofeng Wang, Haixu Tang, Carl A. Gunter, Kai Chen · MIALM, MIACV · 13 Feb 2018
  • Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models · Mengying Sun, Fengyi Tang, Jinfeng Yi, Fei Wang, Jiayu Zhou · AAML, OOD, MedIm · 13 Feb 2018
  • Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints · Di Tang, Xiaofeng Wang, Kehuan Zhang · AAML · 13 Feb 2018
  • Hardening Deep Neural Networks via Adversarial Model Cascades · Deepak Vijaykeerthy, Anshuman Suri, S. Mehta, Ponnurangam Kumaraguru · AAML · 02 Feb 2018
  • Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach · Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, D. Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel · AAML · 31 Jan 2018
  • Deflecting Adversarial Attacks with Pixel Deflection · Aaditya (Adi) Prakash, N. Moran, Solomon Garber, Antonella DiLillo, J. Storer · AAML · 26 Jan 2018
  • Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality · Xingjun Ma, Yue Liu, Yisen Wang, S. Erfani, S. Wijewickrema, Grant Schoenebeck, Basel Alomair, Michael E. Houle, James Bailey · AAML · 08 Jan 2018
  • Spatially Transformed Adversarial Examples · Chaowei Xiao, Jun-Yan Zhu, Yue Liu, Warren He, M. Liu, Basel Alomair · AAML · 08 Jan 2018
  • Generating Adversarial Examples with Adversarial Networks · Chaowei Xiao, Yue Liu, Jun-Yan Zhu, Warren He, M. Liu, Basel Alomair · GAN, AAML · 08 Jan 2018
  • Audio Adversarial Examples: Targeted Attacks on Speech-to-Text · Nicholas Carlini, D. Wagner · AAML · 05 Jan 2018
  • Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey · Naveed Akhtar, Ajmal Mian · AAML · 02 Jan 2018
  • Exploring the Space of Black-box Attacks on Deep Neural Networks · A. Bhagoji, Warren He, Yue Liu, Basel Alomair · AAML · 27 Dec 2017
  • Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks · Siqi Yang, Arnold Wiliem, Shaokang Chen, Brian C. Lovell · CVBM, AAML · 22 Dec 2017
  • Query-Efficient Black-box Adversarial Examples (superceded) · Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin · AAML, MLAU · 19 Dec 2017
  • Adversarial Examples: Attacks and Defenses for Deep Learning · Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li · SILM, AAML · 19 Dec 2017
  • Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning · Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, Basel Alomair · AAML, SILM · 15 Dec 2017
  • Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models · Wieland Brendel, Jonas Rauber, Matthias Bethge · AAML · 12 Dec 2017
  • NAG: Network for Adversary Generation · Konda Reddy Mopuri, Utkarsh Ojha, Utsav Garg, R. Venkatesh Babu · AAML · 09 Dec 2017
  • Adversarial Examples that Fool Detectors · Jiajun Lu, Hussein Sibai, Evan Fabry · AAML · 07 Dec 2017
  • Generative Adversarial Perturbations · Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge J. Belongie · AAML, GAN, WIGM · 06 Dec 2017
  • Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning · Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Cho-Jui Hsieh · GAN, AAML · 06 Dec 2017
  • Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems · Kexin Pei, Linjie Zhu, Yinzhi Cao, Junfeng Yang, Carl Vondrick, Suman Jana · AAML · 05 Dec 2017
  • Towards Robust Neural Networks via Random Self-ensemble · Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh · FedML, AAML · 02 Dec 2017
  • On the Robustness of Semantic Segmentation Models to Adversarial Attacks · Anurag Arnab, O. Mikšík, Philip Torr · AAML · 27 Nov 2017
  • Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation · Y. Yoo, Seonguk Park, Junyoung Choi, Sangdoo Yun, Nojun Kwak · AAML · 27 Nov 2017
  • MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples · Nicholas Carlini, D. Wagner · AAML · 22 Nov 2017
  • Adversarial Attacks Beyond the Image Space · Fangyin Wei, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, Alan Yuille · AAML · 20 Nov 2017
  • How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models · Kathrin Grosse, David Pfaff, M. Smith, Michael Backes · AAML · 17 Nov 2017
  • Defense against Universal Adversarial Perturbations · Naveed Akhtar, Jian Liu, Ajmal Mian · AAML · 16 Nov 2017
  • Robust Multilingual Part-of-Speech Tagging via Adversarial Training · Michihiro Yasunaga, Jungo Kasai, Dragomir R. Radev · 14 Nov 2017
  • Synthetic and Natural Noise Both Break Neural Machine Translation · Yonatan Belinkov, Yonatan Bisk · 06 Nov 2017
  • Mitigating Adversarial Effects Through Randomization · Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille · AAML · 06 Nov 2017
  • Towards Reverse-Engineering Black-Box Neural Networks · Seong Joon Oh, Maximilian Augustin, Bernt Schiele, Mario Fritz · AAML · 06 Nov 2017
  • Countering Adversarial Images using Input Transformations · Chuan Guo, Mayank Rana, Moustapha Cissé, Laurens van der Maaten · AAML · 31 Oct 2017
  • PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples · Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, Nate Kushman · AAML · 30 Oct 2017
  • Attacking the Madry Defense Model with $L_1$-based Adversarial Examples · Yash Sharma, Pin-Yu Chen · 30 Oct 2017
  • Feature-Guided Black-Box Safety Testing of Deep Neural Networks · Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska · AAML · 21 Oct 2017
  • Boosting Adversarial Attacks with Momentum · Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li · AAML · 17 Oct 2017
  • Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight · Yen-Chen Lin, Ming-Yuan Liu, Min Sun, Jia-Bin Huang · AAML · 02 Oct 2017
  • Improving image generative models with human interactions · Andrew Kyle Lampinen, David R. So, Douglas Eck, Fred Bertsch · GAN · 29 Sep 2017
  • Fooling Vision and Language Models Despite Localization and Attention Mechanism · Xiaojun Xu, Xinyun Chen, Chang-rui Liu, Anna Rohrbach, Trevor Darrell, Basel Alomair · AAML · 25 Sep 2017
  • Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification · Xiaoyu Cao, Neil Zhenqiang Gong · AAML · 17 Sep 2017