ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1704.03453 (Cited By)
The Space of Transferable Adversarial Examples
11 April 2017
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
AAML SILM

Papers citing "The Space of Transferable Adversarial Examples"

50 / 302 papers shown
SoK: Certified Robustness for Deep Neural Networks
  Linyi Li, Tao Xie, Yue Liu · AAML · 09 Sep 2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
  G. R. Machado, Eugênio Silva, R. Goldschmidt · AAML · 08 Sep 2020
Simulating Unknown Target Models for Query-Efficient Black-box Attacks
  Chen Ma, Lixing Chen, Junhai Yong · MLAU OOD · 02 Sep 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey
  A. Serban, E. Poll, Joost Visser · AAML · 07 Aug 2020
TREND: Transferability based Robust ENsemble Design
  Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy · OOD AAML · 04 Aug 2020
SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
  H. Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor · AAML · 13 Jul 2020
Knowledge Distillation Beyond Model Compression
  F. Sarfraz, Elahe Arani, Bahram Zonooz · 03 Jul 2020
Generating Adversarial Examples with Controllable Non-transferability
  Renzhi Wang, Tianwei Zhang, Xiaofei Xie, Lei Ma, Cong Tian, Felix Juefei Xu, Yang Liu · SILM AAML · 02 Jul 2020
Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability
  Kaizhao Liang, Jacky Y. Zhang, Wei Ping, Zhuolin Yang, Oluwasanmi Koyejo, Yangqiu Song · AAML · 25 Jun 2020
Adversarial Attacks for Multi-view Deep Models
  Xuli Sun, Shiliang Sun · AAML · 19 Jun 2020
The Pitfalls of Simplicity Bias in Neural Networks
  Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli · AAML · 13 Jun 2020
Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors
  Gerda Bortsova, C. González-Gonzalo, S. Wetstein, Florian Dubost, Ioannis Katramados, ..., Bram van Ginneken, J. Pluim, M. Veta, Clara I. Sánchez, Marleen de Bruijne · AAML MedIm · 11 Jun 2020
On the Effectiveness of Regularization Against Membership Inference Attacks
  Yigitcan Kaya, Sanghyun Hong, Tudor Dumitras · 09 Jun 2020
Tricking Adversarial Attacks To Fail
  Blerta Lindqvist · AAML · 08 Jun 2020
Enhancing Resilience of Deep Learning Networks by Means of Transferable Adversaries
  M. Seiler, Heike Trautmann, P. Kerschke · AAML · 27 May 2020
Universalization of any adversarial attack using very few test examples
  Sandesh Kamath, Amit Deshpande, K. Subrahmanyam, Vineeth N. Balasubramanian · FedML AAML · 18 May 2020
Increased-confidence adversarial examples for deep learning counter-forensics
  Wenjie Li, B. Tondi, R. Ni, Mauro Barni · AAML · 12 May 2020
Blind Backdoors in Deep Learning Models
  Eugene Bagdasaryan, Vitaly Shmatikov · AAML FedML SILM · 08 May 2020
MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
  Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi · AAML · 06 May 2020
Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability
  Nathan Inkawhich, Kevin J. Liang, Binghui Wang, Matthew J. Inkawhich, Lawrence Carin, Yiran Chen · AAML · 29 Apr 2020
Transferable Perturbations of Deep Feature Distributions
  Nathan Inkawhich, Kevin J. Liang, Lawrence Carin, Yiran Chen · AAML · 27 Apr 2020
Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks
  William Aiken, Hyoungshick Kim, Simon S. Woo · 22 Apr 2020
SOAR: Second-Order Adversarial Regularization
  A. Ma, Fartash Faghri, Nicolas Papernot, Amir-massoud Farahmand · AAML · 04 Apr 2020
Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks
  Zain Khan, Jirong Yi, R. Mudumbai, Xiaodong Wu, Weiyu Xu · AAML MLAU · 26 Mar 2020
PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
  Junfeng Guo, Zelun Kong, Cong Liu · AAML · 24 Mar 2020
Defense Through Diverse Directions
  Christopher M. Bender, Yang Li, Yifeng Shi, Michael K. Reiter, Junier B. Oliva · AAML · 24 Mar 2020
Adversarial Perturbations Fool Deepfake Detectors
  Apurva Gandhi, Shomik Jain · AAML · 24 Mar 2020
Investigating Image Applications Based on Spatial-Frequency Transform and Deep Learning Techniques
  Qinkai Zheng, Han Qiu, G. Memmi, Isabelle Bloch · 20 Mar 2020
Face-Off: Adversarial Face Obfuscation
  Varun Chandrasekaran, Chuhan Gao, Brian Tang, Kassem Fawaz, S. Jha, Suman Banerjee · PICV · 19 Mar 2020
GeoDA: a geometric framework for black-box adversarial attacks
  A. Rahmati, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard, H. Dai · MLAU AAML · 13 Mar 2020
Denoised Smoothing: A Provable Defense for Pretrained Classifiers
  Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, J. Zico Kolter · 04 Mar 2020
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
  Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong · OOD AAML · 02 Mar 2020
Randomization matters. How to defend against strong adversarial attacks
  Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Y. Chevaleyre, Jamal Atif · AAML · 26 Feb 2020
On the Matrix-Free Generation of Adversarial Perturbations for Black-Box Attacks
  Hisaichi Shibata, S. Hanaoka, Y. Nomura, Naoto Hayashi, O. Abe · AAML · 18 Feb 2020
Analysis of Random Perturbations for Robust Convolutional Neural Networks
  Adam Dziedzic, S. Krishnan · OOD AAML · 08 Feb 2020
Minimax Defense against Gradient-based Adversarial Attacks
  Blerta Lindqvist, R. Izmailov · AAML · 04 Feb 2020
Tiny noise, big mistakes: Adversarial perturbations induce errors in Brain-Computer Interface spellers
  Xiao Zhang, Dongrui Wu, L. Ding, Hanbin Luo, Chin-Teng Lin, T. Jung, Ricardo Chavarriaga · AAML · 30 Jan 2020
Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning
  Inaam Ilahi, Muhammad Usama, Junaid Qadir, M. Janjua, Ala I. Al-Fuqaha, D. Hoang, Dusit Niyato · AAML · 27 Jan 2020
Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks
  Rehana Mahfuz, R. Sahay, Aly El Gamal · AAML · 26 Jan 2020
Quantum Adversarial Machine Learning
  Sirui Lu, L. Duan, D. Deng · AAML · 31 Dec 2019
A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks
  P. Dasgupta, J. B. Collins · AAML · 04 Dec 2019
Walking on the Edge: Fast, Low-Distortion Adversarial Examples
  Hanwei Zhang, Yannis Avrithis, Teddy Furon, Laurent Amsaleg · AAML · 04 Dec 2019
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
  Nils Lukas, Yuxuan Zhang, Florian Kerschbaum · MLAU FedML AAML · 02 Dec 2019
Towards Security Threats of Deep Learning Systems: A Survey
  Yingzhe He, Guozhu Meng, Kai Chen, Xingbo Hu, Jinwen He · AAML ELM · 28 Nov 2019
Shared Visual Abstractions
  Tom White · 19 Nov 2019
SMART: Skeletal Motion Action Recognition aTtack
  He Wang, Feixiang He, Zexi Peng, Yong-Liang Yang, Tianjia Shao, Kun Zhou, David C. Hogg · AAML · 16 Nov 2019
Defending Against Model Stealing Attacks with Adaptive Misinformation
  Sanjay Kariyappa, Moinuddin K. Qureshi · MLAU AAML · 16 Nov 2019
Learning To Characterize Adversarial Subspaces
  Xiaofeng Mao, YueFeng Chen, Yuhong Li, Yuan He, Hui Xue · AAML · 15 Nov 2019
Adversarial Examples in Modern Machine Learning: A Review
  R. Wiyatno, Anqi Xu, Ousmane Amadou Dia, A. D. Berker · AAML · 13 Nov 2019
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
  Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq · AAML · 06 Nov 2019