Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

12 December 2017
Wieland Brendel
Jonas Rauber
Matthias Bethge
    AAML
ArXiv (abs) · PDF · HTML · GitHub (2865★)

Papers citing "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models"

50 / 423 papers shown
AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients
Xiaodan Li
YueFeng Chen
Yuan He
Hui Xue
OOD, AAML
38
9
0
15 Nov 2019
Adversarial Examples in Modern Machine Learning: A Review
R. Wiyatno
Anqi Xu
Ousmane Amadou Dia
A. D. Berker
AAML
127
105
0
13 Nov 2019
Learning From Brains How to Regularize Machines
Zhe Li
Wieland Brendel
Edgar Y. Walker
Erick Cobos
Taliah Muhammad
Jacob Reimer
Matthias Bethge
Fabian H. Sinz
Xaq Pitkow
A. Tolias
OOD, AAML
69
63
0
11 Nov 2019
Active Learning for Black-Box Adversarial Attacks in EEG-Based Brain-Computer Interfaces
Xue Jiang
Xiao Zhang
Dongrui Wu
AAML
76
16
0
07 Nov 2019
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye
Rana Abou-Khamis
Mohamed el Shehaby
Ashraf Matrawy
M. O. Shafiq
AAML
95
70
0
06 Nov 2019
Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks
Qiyang Li
Saminul Haque
Cem Anil
James Lucas
Roger C. Grosse
Joern-Henrik Jacobsen
127
116
0
03 Nov 2019
Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems
Guangke Chen
Sen Chen
Lingling Fan
Xiaoning Du
Zhe Zhao
Fu Song
Yang Liu
AAML
114
197
0
03 Nov 2019
An Alternative Surrogate Loss for PGD-based Adversarial Testing
Sven Gowal
J. Uesato
Chongli Qin
Po-Sen Huang
Timothy A. Mann
Pushmeet Kohli
AAML
107
90
0
21 Oct 2019
A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Tao Yu
Shengyuan Hu
Chuan Guo
Wei-Lun Chao
Kilian Q. Weinberger
AAML
120
103
0
16 Oct 2019
DeepSearch: A Simple and Effective Blackbox Attack for Deep Neural Networks
Fuyuan Zhang
Sankalan Pal Chowdhury
M. Christakis
AAML
58
8
0
14 Oct 2019
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack
Yang Zhang
Shiyu Chang
Mo Yu
Kaizhi Qian
AAML
29
2
0
01 Oct 2019
Black-box Adversarial Attacks with Bayesian Optimization
Satya Narayan Shukla
Anit Kumar Sahu
Devin Willmott
J. Zico Kolter
AAML, MLAU
68
31
0
30 Sep 2019
Lower Bounds on Adversarial Robustness from Optimal Transport
A. Bhagoji
Daniel Cullina
Prateek Mittal
OOD, OT, AAML
70
94
0
26 Sep 2019
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
Minhao Cheng
Simranjit Singh
Patrick H. Chen
Pin-Yu Chen
Sijia Liu
Cho-Jui Hsieh
AAML
237
224
0
24 Sep 2019
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks
Sekitoshi Kanai
Yasutoshi Ida
Yasuhiro Fujiwara
Masanori Yamada
S. Adachi
AAML
44
1
0
19 Sep 2019
An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms
Qianyu Guo
Sen Chen
Xiaofei Xie
Lei Ma
Q. Hu
Hongtao Liu
Yang Liu
Jianjun Zhao
Xiaohong Li
94
125
0
15 Sep 2019
Sparse and Imperceivable Adversarial Attacks
Francesco Croce
Matthias Hein
AAML
110
199
0
11 Sep 2019
Neural Belief Reasoner
Haifeng Qian
NAI, BDL
26
1
0
10 Sep 2019
FDA: Feature Disruptive Attack
Aditya Ganeshan
Vivek B.S.
R. Venkatesh Babu
AAML
118
105
0
10 Sep 2019
Adversarial Robustness Against the Union of Multiple Perturbation Models
Pratyush Maini
Eric Wong
J. Zico Kolter
OOD, AAML
63
151
0
09 Sep 2019
Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information
Yiren Zhao
Ilia Shumailov
Han Cui
Xitong Gao
Robert D. Mullins
Ross J. Anderson
AAML
82
28
0
06 Sep 2019
Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries
Fnu Suya
Jianfeng Chi
David Evans
Yuan Tian
AAML
94
86
0
19 Aug 2019
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
Pu Zhao
Sijia Liu
Pin-Yu Chen
Nghia Hoang
Kaidi Xu
B. Kailkhura
Xue Lin
AAML
119
54
0
26 Jul 2019
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
Haichao Zhang
Jianyu Wang
AAML
112
231
0
24 Jul 2019
Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods
Arif Siddiqi
AAML
64
11
0
17 Jul 2019
Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning
Bao Wang
Stanley J. Osher
AAML, AI4CE
77
10
0
16 Jul 2019
Stateful Detection of Black-Box Adversarial Attacks
Steven Chen
Nicholas Carlini
D. Wagner
AAML, MLAU
69
126
0
12 Jul 2019
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce
Matthias Hein
AAML
137
493
0
03 Jul 2019
Diminishing the Effect of Adversarial Perturbations via Refining Feature Representation
Nader Asadi
AmirMohammad Sarfi
Mehrdad Hosseinzadeh
Sahba Tahsini
M. Eftekhari
AAML
32
2
0
01 Jul 2019
Accurate, reliable and fast robustness evaluation
Wieland Brendel
Jonas Rauber
Matthias Kümmerer
Ivan Ustyuzhaninov
Matthias Bethge
AAML, OOD
94
113
0
01 Jul 2019
Defending Against Adversarial Examples with K-Nearest Neighbor
Chawin Sitawarin
David Wagner
AAML
91
29
0
23 Jun 2019
Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations
Yuezun Li
Xin Yang
Baoyuan Wu
Siwei Lyu
AAML, PICV, CVBM
95
38
0
21 Jun 2019
Convergence of Adversarial Training in Overparametrized Neural Networks
Ruiqi Gao
Tianle Cai
Haochuan Li
Liwei Wang
Cho-Jui Hsieh
Jason D. Lee
AAML
113
109
0
19 Jun 2019
The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks
F. Assion
Peter Schlicht
Florens Greßner
W. Günther
Fabian Hüger
Nico M. Schmidt
Umair Rasheed
AAML
75
14
0
17 Jun 2019
Improving Black-box Adversarial Attacks with a Transfer-based Prior
Shuyu Cheng
Yinpeng Dong
Tianyu Pang
Hang Su
Jun Zhu
AAML
92
274
0
17 Jun 2019
Defending Against Adversarial Attacks Using Random Forests
Yifan Ding
Liqiang Wang
Huan Zhang
Jinfeng Yi
Deliang Fan
Boqing Gong
AAML
61
14
0
16 Jun 2019
Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks
T. Brunner
Frederik Diehl
Alois Knoll
AAML
44
8
0
14 Jun 2019
Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks
Ziang Yan
Yiwen Guo
Changshui Zhang
AAML
79
111
0
11 Jun 2019
Robustness Verification of Tree-based Models
Hongge Chen
Huan Zhang
Si Si
Yang Li
Duane S. Boning
Cho-Jui Hsieh
AAML
103
77
0
10 Jun 2019
On the Vulnerability of Capsule Networks to Adversarial Attacks
Félix D. P. Michels
Tobias Uelwer
Eric Upschulte
Stefan Harmeling
AAML
77
24
0
09 Jun 2019
Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang
Tianyun Zhang
Sijia Liu
Pin-Yu Chen
Jiacen Xu
M. Fardad
Yangqiu Song
AAML
72
37
0
09 Jun 2019
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko
Matthias Hein
84
62
0
08 Jun 2019
ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang
Jianbo Chen
Cho-Jui Hsieh
Jane-ling Wang
Michael I. Jordan
AAML
93
101
0
08 Jun 2019
Making targeted black-box evasion attacks effective and efficient
Mika Juuti
B. Atli
Nadarajah Asokan
AAML, MIACV, MLAU
49
8
0
08 Jun 2019
Robustness for Non-Parametric Classification: A Generic Attack and Defense
Yao-Yuan Yang
Cyrus Rashtchian
Yizhen Wang
Kamalika Chaudhuri
SILM, AAML
92
43
0
07 Jun 2019
Query-efficient Meta Attack to Deep Neural Networks
Jiawei Du
Hu Zhang
Qiufeng Wang
Yi Yang
Jiashi Feng
AAML
66
84
0
06 Jun 2019
Multi-way Encoding for Robustness
Donghyun Kim
Sarah Adel Bargal
Jianming Zhang
Stan Sclaroff
AAML
38
2
0
05 Jun 2019
Understanding the Limitations of Conditional Generative Models
Ethan Fetaya
J. Jacobsen
Will Grathwohl
R. Zemel
96
54
0
04 Jun 2019
Enhancing Transformation-based Defenses using a Distribution Classifier
C. Kou
H. Lee
E. Chang
Teck Khim Ng
67
3
0
01 Jun 2019
Functional Adversarial Attacks
Cassidy Laidlaw
Soheil Feizi
AAML
100
185
0
29 May 2019