ResearchTrend.AI
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

24 May 2016
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow
SILM, AAML

Papers citing "Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples"

50 / 360 papers shown
HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples
Deqiang Li, Ramesh Baral, Tao Li, Han Wang, Qianmu Li, Shouhuai Xu
AAML
18 Sep 2018

On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions
Yusuke Tsuzuku, Issei Sato
AAML
11 Sep 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
SILM, AAML
08 Sep 2018

Adversarial Reprogramming of Text Classification Neural Networks
Paarth Neekhara, Shehzeen Samarah Hussain, Shlomo Dubnov, F. Koushanfar
AAML, SILM
06 Sep 2018

Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
Lea Schönherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, D. Kolossa
AAML
16 Aug 2018
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
D. Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, Yupeng Gao
VLM
05 Aug 2018

Enabling Trust in Deep Learning Models: A Digital Forensics Case Study
Aditya K, Slawomir Grzonkowski, NhienAn Lekhac
03 Aug 2018

MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
L. Hanzlik, Yang Zhang, Kathrin Grosse, A. Salem, Maximilian Augustin, Michael Backes, Mario Fritz
OffRL
01 Aug 2018

Harmonic Adversarial Attack Method
Wen Heng, Shuchang Zhou, Tingting Jiang
AAML
18 Jul 2018

Adversarial Examples in Deep Learning: Characterization and Divergence
Wenqi Wei, Ling Liu, Margaret Loper, Stacey Truex, Lei Yu, Mehmet Emre Gursoy, Yanzhao Wu
AAML, SILM
29 Jun 2018
Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data
Jacson Rodrigues Correia-Silva, Rodrigo Berriel, C. Badue, Alberto F. de Souza, Thiago Oliveira-Santos
MLAU
14 Jun 2018

Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders
Partha Ghosh, Arpan Losalka, Michael J. Black
AAML
31 May 2018

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Shin-Ming Cheng
MLAU, AAML
30 May 2018

Laplacian Networks: Bounding Indicator Function Smoothness for Neural Network Robustness
Carlos Lassance, Vincent Gripon, Antonio Ortega
AAML
24 May 2018

Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients
Fuxun Yu, Zirui Xu, Yanzhi Wang, Chenchen Liu, Xiang Chen
AAML
23 May 2018
Adversarially Robust Training through Structured Gradient Regularization
Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann
22 May 2018

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing
Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang
AAML
14 May 2018

Black-box Adversarial Attacks with Limited Queries and Information
Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin
MLAU, AAML
23 Apr 2018

Adversarial Attacks Against Medical Deep Learning Systems
S. G. Finlayson, Hyung Won Chung, I. Kohane, Andrew L. Beam
SILM, AAML, OOD, MedIm
15 Apr 2018

Bypassing Feature Squeezing by Increasing Adversary Strength
Yash Sharma, Pin-Yu Chen
AAML
27 Mar 2018

On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples
Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu
AAML
26 Mar 2018
Clipping free attacks against artificial neural networks
B. Addad, Jérôme Kodjabachian, Christophe Meyer
AAML
26 Mar 2018

Security Theater: On the Vulnerability of Classifiers to Exploratory Attacks
Tegjyot Singh Sethi, M. Kantardzic, J. Ryu
AAML
24 Mar 2018

A Dynamic-Adversarial Mining Approach to the Security of Machine Learning
Tegjyot Singh Sethi, M. Kantardzic, Lingyu Lyu, Jiashun Chen
AAML
24 Mar 2018

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML
19 Mar 2018

Understanding and Enhancing the Transferability of Adversarial Examples
Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan
AAML, SILM
27 Feb 2018

Are Generative Classifiers More Robust to Adversarial Attacks?
Yingzhen Li, John Bradshaw, Yash Sharma
AAML
19 Feb 2018
DARTS: Deceiving Autonomous Cars with Toxic Signs
Chawin Sitawarin, A. Bhagoji, Arsalan Mosenia, M. Chiang, Prateek Mittal
AAML
18 Feb 2018

Few-shot learning of neural networks from scratch by pseudo example optimization
Akisato Kimura, Zoubin Ghahramani, Koh Takeuchi, Tomoharu Iwata, N. Ueda
08 Feb 2018

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
Xingjun Ma, Bo-wen Li, Yisen Wang, S. Erfani, S. Wijewickrema, Grant Schoenebeck, D. Song, Michael E. Houle, James Bailey
AAML
08 Jan 2018

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini, D. Wagner
AAML
05 Jan 2018

Query-limited Black-box Attacks to Classifiers
Fnu Suya, Yuan Tian, David Evans, Paolo Papotti
AAML
23 Dec 2017
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Bo-wen Li, Kimberly Lu, D. Song
AAML, SILM
15 Dec 2017

Generative Adversarial Perturbations
Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge J. Belongie
AAML, GAN, WIGM
06 Dec 2017

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning
Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Cho-Jui Hsieh
GAN, AAML
06 Dec 2017

Hardening Quantum Machine Learning Against Adversaries
N. Wiebe, Ramnath Kumar
AAML
17 Nov 2017

Towards Reverse-Engineering Black-Box Neural Networks
Seong Joon Oh, Maximilian Augustin, Bernt Schiele, Mario Fritz
AAML
06 Nov 2017

Generating Natural Adversarial Examples
Zhengli Zhao, Dheeru Dua, Sameer Singh
GAN, AAML
31 Oct 2017
Neural Trojans
Yuntao Liu, Yang Xie, Ankur Srivastava
AAML
03 Oct 2017

PassGAN: A Deep Learning Approach for Password Guessing
Briland Hitaj, Paolo Gasti, G. Ateniese, Fernando Perez-Cruz
GAN
01 Sep 2017

Towards Crafting Text Adversarial Samples
Suranjana Samanta, S. Mehta
AAML
10 Jul 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
A. Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD
19 Jun 2017

Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong
Warren He, James Wei, Xinyun Chen, Nicholas Carlini, D. Song
AAML
15 Jun 2017

Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
Yizhen Wang, S. Jha, Kamalika Chaudhuri
AAML
13 Jun 2017
Towards Robust Detection of Adversarial Examples
Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu
AAML
02 Jun 2017

Black-Box Attacks against RNN based Malware Detection Algorithms
Weiwei Hu, Ying Tan
23 May 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner
AAML
20 May 2017

The Space of Transferable Adversarial Examples
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
AAML, SILM
11 Apr 2017

Adversarial Image Perturbation for Privacy Protection -- A Game Theory Perspective
Seong Joon Oh, Mario Fritz, Bernt Schiele
CVBM, AAML
28 Mar 2017

Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains
Tegjyot Singh Sethi, M. Kantardzic
AAML
23 Mar 2017