arXiv:1003.2751
Near-Optimal Evasion of Convex-Inducing Classifiers
14 March 2010
B. Nelson, Benjamin I. P. Rubinstein, Ling Huang, A. Joseph, S. Lau, Steven J. Lee, Satish Rao, Anthony Tran, J. D. Tygar
Papers citing "Near-Optimal Evasion of Convex-Inducing Classifiers" (8 papers):
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks. Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz (AAML). 26 Jun 2019.
Attacking Graph-based Classification via Manipulating the Graph Structure. Binghui Wang, Neil Zhenqiang Gong (AAML). 01 Mar 2019.
Securing Behavior-based Opinion Spam Detection. Xingyu Lin, Guixiang Ma, B. Epureanu, Philip S. Yu (AAML). 09 Nov 2018.
Security Theater: On the Vulnerability of Classifiers to Exploratory Attacks. Tegjyot Singh Sethi, M. Kantardzic, J. Ryu (AAML). 24 Mar 2018.
DARTS: Deceiving Autonomous Cars with Toxic Signs. Chawin Sitawarin, A. Bhagoji, Arsalan Mosenia, M. Chiang, Prateek Mittal (AAML). 18 Feb 2018.
Stealing Hyperparameters in Machine Learning. Binghui Wang, Neil Zhenqiang Gong (AAML). 14 Feb 2018.
Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains. Tegjyot Singh Sethi, M. Kantardzic (AAML). 23 Mar 2017.
Learning convex bodies is hard. Navin Goyal, Luis Rademacher. 07 Apr 2009.