ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

12 December 2017 · arXiv: 1712.04248
Wieland Brendel, Jonas Rauber, Matthias Bethge
AAML

Papers citing "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models"

30 / 280 papers shown.

Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study
  Xurong Li, S. Ji, Men Han, Juntao Ji, Zhenyu Ren, Yushan Liu, Chunming Wu
  AAML · 31 citations · 04 Jan 2019

Adversarial Attack and Defense on Graph Data: A Survey
  Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Yixin Liu, Philip S. Yu, Lifang He, Yangqiu Song
  GNN, AAML · 275 citations · 26 Dec 2018

Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples
  Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li
  AAML · 1 citation · 05 Dec 2018

Mathematical Analysis of Adversarial Attacks
  Zehao Dou, Stanley J. Osher, Bao Wang
  AAML · 18 citations · 15 Nov 2018

Exploring Connections Between Active Learning and Model Extraction
  Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Shane Walker, Songbai Yan
  MIACV · 157 citations · 05 Nov 2018

SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters
  Hassan Ali, Faiq Khalid, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Mohamed Bennai
  AAML · 14 citations · 04 Nov 2018

Efficient Neural Network Robustness Certification with General Activation Functions
  Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel
  AAML · 747 citations · 02 Nov 2018

Analyzing biological and artificial neural networks: challenges with opportunities for synergy?
  David Barrett, Ari S. Morcos, Jakob H. Macke
  AI4CE · 110 citations · 31 Oct 2018

Improved robustness to adversarial examples using Lipschitz regularization of the loss
  Chris Finlay, Adam M. Oberman, B. Abbasi
  34 citations · 01 Oct 2018

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
  Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu
  AAML · 67 citations · 30 Sep 2018

Unrestricted Adversarial Examples
  Tom B. Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul Christiano, Ian Goodfellow
  AAML · 101 citations · 22 Sep 2018

Query-Efficient Black-Box Attack by Active Learning
  Pengcheng Li, Jinfeng Yi, Lijun Zhang
  AAML, MLAU · 54 citations · 13 Sep 2018

Certified Adversarial Robustness with Additive Noise
  Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin
  AAML · 341 citations · 10 Sep 2018

Metamorphic Relation Based Adversarial Attacks on Differentiable Neural Computer
  Alvin Chan, Lei Ma, Felix Juefei Xu, Xiaofei Xie, Yang Liu, Yew-Soon Ong
  OOD, AAML · 17 citations · 07 Sep 2018

DeepHunter: Hunting Deep Neural Network Defects via Coverage-Guided Fuzzing
  Xiaofei Xie, Lei Ma, Felix Juefei Xu, Hongxu Chen, Minhui Xue, Bo Li, Yang Liu, Jianjun Zhao, Jianxiong Yin, Simon See
  40 citations · 04 Sep 2018

Structured Adversarial Attack: Towards General Implementation and Better Interpretability
  Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xinyu Lin
  AAML · 160 citations · 05 Aug 2018

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach
  Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, Cho-Jui Hsieh
  AAML · 346 citations · 12 Jul 2018

Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks
  Saeid Asgari Taghanaki, A. Das, Ghassan Hamarneh
  MedIm · 52 citations · 09 Jul 2018

Hierarchical interpretations for neural network predictions
  Chandan Singh, W. James Murdoch, Bin Yu
  145 citations · 14 Jun 2018

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
  Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Shin-Ming Cheng
  MLAU, AAML · 395 citations · 30 May 2018

Towards the first adversarially robust neural network model on MNIST
  Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel
  AAML, OOD · 369 citations · 23 May 2018

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing
  Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang
  AAML · 41 citations · 14 May 2018

Black-box Adversarial Attacks with Limited Queries and Information
  Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin
  MLAU, AAML · 1,191 citations · 23 Apr 2018

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples
  Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh
  SILM, AAML · 242 citations · 03 Mar 2018

Understanding and Enhancing the Transferability of Adversarial Examples
  Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan
  AAML, SILM · 97 citations · 27 Feb 2018

Retrieval-Augmented Convolutional Neural Networks for Improved Robustness against Adversarial Examples
  Jake Zhao, Kyunghyun Cho
  AAML · 20 citations · 26 Feb 2018

Are Generative Classifiers More Robust to Adversarial Attacks?
  Yingzhen Li, John Bradshaw, Yash Sharma
  AAML · 78 citations · 19 Feb 2018

DARTS: Deceiving Autonomous Cars with Toxic Signs
  Chawin Sitawarin, A. Bhagoji, Arsalan Mosenia, M. Chiang, Prateek Mittal
  AAML · 233 citations · 18 Feb 2018

Towards Deep Learning Models Resistant to Adversarial Attacks
  Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
  SILM, OOD · 11,884 citations · 19 Jun 2017

Adversarial examples in the physical world
  Alexey Kurakin, Ian Goodfellow, Samy Bengio
  SILM, AAML · 5,849 citations · 08 Jul 2016