Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

24 May 2016
Nicolas Papernot
Patrick McDaniel
Ian Goodfellow
SILM
AAML

Papers citing "Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples"

50 / 360 papers shown
Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data
Lu Wang
Huan Zhang
Jinfeng Yi
Cho-Jui Hsieh
Yuan Jiang
AAML
35
12
0
11 May 2020
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
Pu Zhao
Pin-Yu Chen
Payel Das
Karthikeyan N. Ramamurthy
Xue Lin
AAML
64
185
0
30 Apr 2020
Imitation Attacks and Defenses for Black-box Machine Translation Systems
Eric Wallace
Mitchell Stern
D. Song
AAML
27
120
0
30 Apr 2020
Transferable Perturbations of Deep Feature Distributions
Nathan Inkawhich
Kevin J Liang
Lawrence Carin
Yiran Chen
AAML
30
84
0
27 Apr 2020
Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu
Mengnan Du
Ruocheng Guo
Huan Liu
Xia Hu
AAML
31
8
0
23 Apr 2020
PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning
Chenglin Yang
Adam Kortylewski
Cihang Xie
Yinzhi Cao
Alan Yuille
AAML
45
109
0
12 Apr 2020
PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
Junfeng Guo
Zelun Kong
Cong Liu
AAML
32
1
0
24 Mar 2020
Adversarial Perturbations Fool Deepfake Detectors
Apurva Gandhi
Shomik Jain
AAML
16
103
0
24 Mar 2020
Face-Off: Adversarial Face Obfuscation
Varun Chandrasekaran
Chuhan Gao
Brian Tang
Kassem Fawaz
S. Jha
Suman Banerjee
PICV
27
44
0
19 Mar 2020
Diversity can be Transferred: Output Diversification for White- and Black-box Attacks
Y. Tashiro
Yang Song
Stefano Ermon
AAML
14
13
0
15 Mar 2020
MAB-Malware: A Reinforcement Learning Framework for Attacking Static Malware Classifiers
Wei Song
Xuezixiang Li
Sadia Afroz
D. Garg
Dmitry Kuznetsov
Heng Yin
AAML
53
27
0
06 Mar 2020
Towards Practical Lottery Ticket Hypothesis for Adversarial Training
Bai Li
Shiqi Wang
Yunhan Jia
Yantao Lu
Zhenyu Zhong
Lawrence Carin
Suman Jana
AAML
31
14
0
06 Mar 2020
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi
M. Shafiee
Michelle Karg
C. Scharfenberger
A. Wong
OOD
AAML
72
63
0
02 Mar 2020
On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks
Yue Zhao
Yuwei Wu
Caihua Chen
A. Lim
3DPC
16
70
0
27 Feb 2020
Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia
Christopher A. Choquette-Choo
Varun Chandrasekaran
Nicolas Papernot
WaLM
AAML
13
218
0
27 Feb 2020
Adversarial Ranking Attack and Defense
Mo Zhou
Zhenxing Niu
Le Wang
Qilin Zhang
G. Hua
36
38
0
26 Feb 2020
Real-Time Detectors for Digital and Physical Adversarial Inputs to Perception Systems
Y. Kantaros
Taylor J. Carpenter
Kaustubh Sridhar
Yahan Yang
Insup Lee
James Weimer
AAML
17
12
0
23 Feb 2020
Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition
Ziwen He
Wei Wang
Jing Dong
Tieniu Tan
AAML
22
23
0
22 Feb 2020
On Adaptive Attacks to Adversarial Example Defenses
Florian Tramèr
Nicholas Carlini
Wieland Brendel
A. Madry
AAML
109
823
0
19 Feb 2020
Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence
S. Raschka
Joshua Patterson
Corey J. Nolet
AI4CE
29
485
0
12 Feb 2020
Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks
Lu Chen
Wenyuan Xu
AAML
24
21
0
08 Feb 2020
Over-the-Air Adversarial Attacks on Deep Learning Based Modulation Classifier over Wireless Channels
Brian Kim
Y. Sagduyu
Kemal Davaslioglu
T. Erpek
S. Ulukus
AAML
48
68
0
05 Feb 2020
Minimax Defense against Gradient-based Adversarial Attacks
Blerta Lindqvist
R. Izmailov
AAML
19
0
0
04 Feb 2020
Adversarial Machine Learning -- Industry Perspectives
Ramnath Kumar
Magnus Nyström
J. Lambert
Andrew Marshall
Mario Goertzel
Andi Comissoneru
Matt Swann
Sharon Xia
AAML
SILM
29
232
0
04 Feb 2020
Towards Sharper First-Order Adversary with Quantized Gradients
Zhuanghua Liu
Ivor W. Tsang
AAML
22
0
0
01 Feb 2020
GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems
Yanmao Man
Ming Li
Ryan M. Gerdes
AAML
22
8
0
21 Jan 2020
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet
Sizhe Chen
Zhengbao He
Chengjin Sun
Jie Yang
Xiaolin Huang
AAML
31
104
0
16 Jan 2020
Efficient Adversarial Training with Transferable Adversarial Examples
Haizhong Zheng
Ziqi Zhang
Juncheng Gu
Honglak Lee
A. Prakash
AAML
24
108
0
27 Dec 2019
Label-Consistent Backdoor Attacks
Alexander Turner
Dimitris Tsipras
A. Madry
AAML
11
383
0
05 Dec 2019
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas
Yuxuan Zhang
Florian Kerschbaum
MLAU
FedML
AAML
39
145
0
02 Dec 2019
DeepSmartFuzzer: Reward Guided Test Generation For Deep Learning
Samet Demir
Hasan Ferit Eniser
A. Sen
AAML
11
28
0
24 Nov 2019
Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction
Yantao Lu
Yunhan Jia
Jianyu Wang
Bai Li
Weiheng Chai
Lawrence Carin
Senem Velipasalar
AAML
24
81
0
22 Nov 2019
Defective Convolutional Networks
Tiange Luo
Tianle Cai
Mengxiao Zhang
Siyu Chen
Di He
Liwei Wang
AAML
35
3
0
19 Nov 2019
Privacy Leakage Avoidance with Switching Ensembles
R. Izmailov
Peter Lin
Chris Mesterharm
S. Basu
25
2
0
18 Nov 2019
Adversarial Examples in Modern Machine Learning: A Review
R. Wiyatno
Anqi Xu
Ousmane Amadou Dia
A. D. Berker
AAML
21
104
0
13 Nov 2019
Patch augmentation: Towards efficient decision boundaries for neural networks
Marcus D. Bloice
P. Roth
Andreas Holzinger
AAML
18
2
0
08 Nov 2019
White-Box Target Attack for EEG-Based BCI Regression Problems
Lubin Meng
Chin-Teng Lin
T. Jung
Dongrui Wu
AAML
31
42
0
07 Nov 2019
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye
Rana Abou-Khamis
Mohamed el Shehaby
Ashraf Matrawy
M. O. Shafiq
AAML
39
68
0
06 Nov 2019
Towards Robust and Stable Deep Learning Algorithms for Forward Backward Stochastic Differential Equations
Batuhan Güler
Alexis Laignelet
P. Parpas
OOD
21
16
0
25 Oct 2019
Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples
Mauro Barni
Ehsan Nowroozi
B. Tondi
Bowen Zhang
AAML
16
17
0
25 Oct 2019
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks
Sekitoshi Kanai
Yasutoshi Ida
Yasuhiro Fujiwara
Masanori Yamada
S. Adachi
AAML
23
1
0
19 Sep 2019
Metric Learning for Adversarial Robustness
Chengzhi Mao
Ziyuan Zhong
Junfeng Yang
Carl Vondrick
Baishakhi Ray
OOD
27
184
0
03 Sep 2019
advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns
Zhibo Wang
Siyan Zheng
Mengkai Song
Qian Wang
Alireza Rahimpour
Hairong Qi
AAML
OOD
19
59
0
25 Aug 2019
Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks
Ka-Ho Chow
Wenqi Wei
Yanzhao Wu
Ling Liu
AAML
25
15
0
21 Aug 2019
DAPAS : Denoising Autoencoder to Prevent Adversarial attack in Semantic Segmentation
Seungju Cho
Tae Joon Jun
Byungsoo Oh
Daeyoung Kim
27
31
0
14 Aug 2019
BlurNet: Defense by Filtering the Feature Maps
Ravi Raju
Mikko H. Lipasti
AAML
42
15
0
06 Aug 2019
Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems
Lea Schonherr
Thorsten Eisenhofer
Steffen Zeiler
Thorsten Holz
D. Kolossa
AAML
54
63
0
05 Aug 2019
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
Pu Zhao
Sijia Liu
Pin-Yu Chen
Nghia Hoang
Kaidi Xu
B. Kailkhura
Xue Lin
AAML
32
54
0
26 Jul 2019
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Tribhuvanesh Orekondy
Bernt Schiele
Mario Fritz
AAML
19
164
0
26 Jun 2019
Quantitative Verification of Neural Networks And its Security Applications
Teodora Baluta
Shiqi Shen
Shweta Shinde
Kuldeep S. Meel
P. Saxena
AAML
24
104
0
25 Jun 2019