ResearchTrend.AI

Provable defenses against adversarial examples via the convex outer adversarial polytope

2 November 2017 · arXiv:1711.00851
Eric Wong
J. Zico Kolter
AAML

Papers citing "Provable defenses against adversarial examples via the convex outer adversarial polytope"

50 / 386 papers shown
Quantum noise protects quantum classifiers against adversaries
Yuxuan Du, Min-hsiu Hsieh, Tongliang Liu, Dacheng Tao, Nana Liu
AAML · 20 Mar 2020

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations
Huan Zhang, Hongge Chen, Chaowei Xiao, Bo-wen Li, Mingyan D. Liu, Duane S. Boning, Cho-Jui Hsieh
AAML · 19 Mar 2020

Diversity can be Transferred: Output Diversification for White- and Black-box Attacks
Y. Tashiro, Yang Song, Stefano Ermon
AAML · 15 Mar 2020

Topological Effects on Attacks Against Vertex Classification
B. A. Miller, Mustafa Çamurcu, Alexander J. Gomez, Kevin S. Chan, Tina Eliassi-Rad
AAML · 12 Mar 2020

Exploiting Verified Neural Networks via Floating Point Numerical Error
Kai Jia, Martin Rinard
AAML · 06 Mar 2020

Overfitting in adversarially robust deep learning
Leslie Rice, Eric Wong, Zico Kolter
26 Feb 2020

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli
AAML · 26 Feb 2020

Lagrangian Decomposition for Neural Network Verification
Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip Torr, M. P. Kumar
24 Feb 2020

FR-Train: A Mutual Information-Based Approach to Fair and Robust Training
Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
24 Feb 2020

Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework
Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu
AAML · 21 Feb 2020

Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks
Tsubasa Takahashi
GNN, AAML · 19 Feb 2020

Deflecting Adversarial Attacks
Yao Qin, Nicholas Frosst, Colin Raffel, G. Cottrell, Geoffrey E. Hinton
AAML · 18 Feb 2020

Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
Yi Zhang, Orestis Plevrakis, S. Du, Xingguo Li, Zhao Song, Sanjeev Arora
16 Feb 2020

Robustness Verification for Transformers
Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh
AAML · 16 Feb 2020

More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models
Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi
OOD · 11 Feb 2020

Adversarial Robustness for Code
Pavol Bielik, Martin Vechev
AAML · 11 Feb 2020

Semialgebraic Optimization for Lipschitz Constants of ReLU Networks
Tong Chen, J. Lasserre, Victor Magron, Edouard Pauwels
10 Feb 2020

Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness
Aounon Kumar, Alexander Levine, Tom Goldstein, S. Feizi
08 Feb 2020

Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks
Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht
22 Jan 2020

GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems
Yanmao Man, Ming Li, Ryan M. Gerdes
AAML · 21 Jan 2020

Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter
AAML, OOD · 12 Jan 2020

ReluDiff: Differential Verification of Deep Neural Networks
Brandon Paulsen, Jingbo Wang, Chao Wang
10 Jan 2020

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang
OOD, AAML · 08 Jan 2020

Lossless Compression of Deep Neural Networks
Thiago Serra, Abhinav Kumar, Srikumar Ramalingam
01 Jan 2020

Efficient Adversarial Training with Transferable Adversarial Examples
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, A. Prakash
AAML · 27 Dec 2019

Benchmarking Adversarial Robustness
Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
AAML · 26 Dec 2019

Malware Makeover: Breaking ML-based Static Analysis by Modifying Executable Bytes
Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, S. Shintre
AAML · 19 Dec 2019

Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation
Alexander Levine, S. Feizi
AAML · 21 Nov 2019

Fine-grained Synthesis of Unrestricted Adversarial Examples
Omid Poursaeed, Tianxing Jiang, Yordanos Goshu, Harry Yang, Serge J. Belongie, Ser-Nam Lim
AAML · 20 Nov 2019

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
20 Nov 2019

Adversarial Examples in Modern Machine Learning: A Review
R. Wiyatno, Anqi Xu, Ousmane Amadou Dia, A. D. Berker
AAML · 13 Nov 2019

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance
Zhengyu Zhao, Zhuoran Liu, Martha Larson
AAML · 06 Nov 2019

Counterexample-Guided Synthesis of Perception Models and Control
Shromona Ghosh, Yash Vardhan Pant, H. Ravanbakhsh, S. Seshia
04 Nov 2019

Enhancing Certifiable Robustness via a Deep Model Ensemble
Huan Zhang, Minhao Cheng, Cho-Jui Hsieh
31 Oct 2019

A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Tao Yu, Shengyuan Hu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger
AAML · 16 Oct 2019

Probabilistic Verification and Reachability Analysis of Neural Networks via Semidefinite Programming
Mahyar Fazlyab, M. Morari, George J. Pappas
AAML · 09 Oct 2019

Adversarial Examples for Cost-Sensitive Classifiers
Mahdi Akbari Zarkesh, A. Lohn, Ali Movaghar
SILM, AAML · 04 Oct 2019

Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt
TTA, OOD · 29 Sep 2019

Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre
AAML, MQ · 27 Sep 2019

Towards neural networks that provably know when they don't know
Alexander Meinke, Matthias Hein
OODD · 26 Sep 2019

Defending Against Physically Realizable Attacks on Image Classification
Tong Wu, Liang Tong, Yevgeniy Vorobeychik
AAML · 20 Sep 2019

Implicit Deep Learning
L. Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, Alicia Y. Tsai
AI4CE · 17 Aug 2019

Adversarial shape perturbations on 3D point clouds
Daniel Liu, Ronald Yu, Hao Su
3DPC · 16 Aug 2019

ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks
Xuankang Lin, He Zhu, R. Samanta, Suresh Jagannathan
AAML · 17 Jul 2019

Accurate, reliable and fast robustness evaluation
Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge
AAML, OOD · 01 Jul 2019

Certifiable Robustness and Robust Training for Graph Convolutional Networks
Daniel Zügner, Stephan Günnemann
OffRL · 28 Jun 2019

Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness
Fanny Yang, Zuowen Wang, C. Heinze-Deml
26 Jun 2019

Quantitative Verification of Neural Networks And its Security Applications
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, P. Saxena
AAML · 25 Jun 2019

Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective
Lu Wang, Xuanqing Liu, Jinfeng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh
AAML · 10 Jun 2019

Robustness Verification of Tree-based Models
Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane S. Boning, Cho-Jui Hsieh
AAML · 10 Jun 2019