Provable defenses against adversarial examples via the convex outer adversarial polytope

2 November 2017
Eric Wong, J. Zico Kolter
AAML

Papers citing "Provable defenses against adversarial examples via the convex outer adversarial polytope"

Showing 50 of 380 citing papers.
Adversarially Robust Classification based on GLRT
  Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani
  VLM, AAML · 16 Nov 2020

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
  Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong
  15 Nov 2020

Adversarial Robust Training of Deep Learning MRI Reconstruction Models
  Francesco Calivá, Kaiyang Cheng, Rutwik Shah, V. Pedoia
  OOD, AAML, MedIm · 30 Oct 2020

Reliable Graph Neural Networks via Robust Aggregation
  Simon Geisler, Daniel Zügner, Stephan Günnemann
  AAML, OOD · 29 Oct 2020

Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?
  Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, Stephan Günnemann
  28 Oct 2020

An efficient nonconvex reformulation of stagewise convex optimization problems
  Rudy Bunel, Oliver Hinder, Srinadh Bhojanapalli, Krishnamurthy Dvijotham
  OffRL · 27 Oct 2020

Precise Statistical Analysis of Classification Accuracies for Adversarial Training
  Adel Javanmard, Mahdi Soltanolkotabi
  AAML · 21 Oct 2020

RobustBench: a standardized adversarial robustness benchmark
  Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
  VLM · 19 Oct 2020
Understanding Catastrophic Overfitting in Single-step Adversarial Training
  Hoki Kim, Woojin Lee, Jaewook Lee
  AAML · 05 Oct 2020

Geometry-aware Instance-reweighted Adversarial Training
  Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli
  AAML · 05 Oct 2020

Data-Driven Certification of Neural Networks with Random Input Noise
  Brendon G. Anderson, Somayeh Sojoudi
  AAML · 02 Oct 2020

Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
  Maungmaung Aprilpyone, Hitoshi Kiya
  02 Oct 2020

Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients
  Yifei Huang, Yaodong Yu, Hongyang R. Zhang, Yi Ma, Yuan Yao
  AAML · 28 Sep 2020

Deep Learning & Software Engineering: State of Research and Future Directions
  P. Devanbu, Matthew B. Dwyer, Sebastian G. Elbaum, M. Lowry, Kevin Moran, Denys Poshyvanyk, Baishakhi Ray, Rishabh Singh, Xiangyu Zhang
  17 Sep 2020

Certifying Confidence via Randomized Smoothing
  Aounon Kumar, Alexander Levine, S. Feizi, Tom Goldstein
  UQCV · 17 Sep 2020

Defending Against Multiple and Unforeseen Adversarial Videos
  Shao-Yuan Lo, Vishal M. Patel
  AAML · 11 Sep 2020

SoK: Certified Robustness for Deep Neural Networks
  Linyi Li, Tao Xie, Bo-wen Li
  AAML · 09 Sep 2020
Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
  Wei-An Lin, Chun Pong Lau, Alexander Levine, Ramalingam Chellappa, S. Feizi
  AAML · 05 Sep 2020

Adversarial Training and Provable Robustness: A Tale of Two Objectives
  Jiameng Fan, Wenchao Li
  AAML · 13 Aug 2020

Adversarial Examples on Object Recognition: A Comprehensive Survey
  A. Serban, E. Poll, Joost Visser
  AAML · 07 Aug 2020

Stronger and Faster Wasserstein Adversarial Attacks
  Kaiwen Wu, Allen Wang, Yaoliang Yu
  AAML · 06 Aug 2020

Robust Deep Reinforcement Learning through Adversarial Loss
  Tuomas P. Oikarinen, Wang Zhang, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng
  AAML · 05 Aug 2020

Robust Machine Learning via Privacy/Rate-Distortion Theory
  Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, T. Koike-Akino, P. Moulin
  OOD · 22 Jul 2020

Scaling Polyhedral Neural Network Verification on GPUs
  Christoph Müller, F. Serre, Gagandeep Singh, Markus Püschel, Martin Vechev
  AAML · 20 Jul 2020

Do Adversarially Robust ImageNet Models Transfer Better?
  Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, A. Madry
  16 Jul 2020

Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples
  S. Goldwasser, Adam Tauman Kalai, Y. Kalai, Omar Montasser
  AAML · 10 Jul 2020
The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification
  Christian Tjandraatmadja, Ross Anderson, Joey Huchette, Will Ma, Krunal Patel, J. Vielma
  AAML · 24 Jun 2020

Verifying Individual Fairness in Machine Learning Models
  Philips George John, Deepak Vijaykeerthy, Diptikalyan Saha
  FaML · 21 Jun 2020

Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs
  Christopher Brix, T. Noll
  AAML · 16 Jun 2020

Counterexample-Guided Learning of Monotonic Neural Networks
  Aishwarya Sivaraman, G. Farnadi, T. Millstein, Mathias Niepert
  16 Jun 2020

On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
  Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk
  AAML · 15 Jun 2020

Provable tradeoffs in adversarially robust classification
  Yan Sun, Hamed Hassani, David Hong, Alexander Robey
  09 Jun 2020

Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity
  Nam Ho-Nguyen, Stephen J. Wright
  OOD · 28 May 2020

Calibrated Surrogate Losses for Adversarially Robust Classification
  Han Bao, Clayton Scott, Masashi Sugiyama
  28 May 2020

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
  Chong Xiang, A. Bhagoji, Vikash Sehwag, Prateek Mittal
  AAML · 17 May 2020
Encryption Inspired Adversarial Defense for Visual Classification
  Maungmaung Aprilpyone, Hitoshi Kiya
  16 May 2020

Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition
  Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Baochun Li, K. Ren
  AAML · 14 May 2020

Training robust neural networks using Lipschitz bounds
  Patricia Pauli, Anne Koch, J. Berberich, Paul Kohler, Frank Allgöwer
  06 May 2020

Provably robust deep generative models
  Filipe Condessa, Zico Kolter
  AAML, OOD · 22 Apr 2020

Certifying Joint Adversarial Robustness for Model Ensembles
  M. Jonas, David Evans
  AAML · 21 Apr 2020

Single-step Adversarial training with Dropout Scheduling
  B. S. Vivek, R. Venkatesh Babu
  OOD, AAML · 18 Apr 2020

Verification of Deep Convolutional Neural Networks Using ImageStars
  Hoang-Dung Tran, Stanley Bak, Weiming Xiang, Taylor T. Johnson
  AAML · 12 Apr 2020

Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
  Michael Everett, Bjorn Lutjens, Jonathan P. How
  AAML · 11 Apr 2020

Safety-Aware Hardening of 3D Object Detection Neural Network Systems
  Chih-Hong Cheng
  3DPC · 25 Mar 2020
Quantum noise protects quantum classifiers against adversaries
  Yuxuan Du, Min-hsiu Hsieh, Tongliang Liu, Dacheng Tao, Nana Liu
  AAML · 20 Mar 2020

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations
  Huan Zhang, Hongge Chen, Chaowei Xiao, Bo-wen Li, Mingyan D. Liu, Duane S. Boning, Cho-Jui Hsieh
  AAML · 19 Mar 2020

Diversity can be Transferred: Output Diversification for White- and Black-box Attacks
  Y. Tashiro, Yang Song, Stefano Ermon
  AAML · 15 Mar 2020

Topological Effects on Attacks Against Vertex Classification
  B. A. Miller, Mustafa Çamurcu, Alexander J. Gomez, Kevin S. Chan, Tina Eliassi-Rad
  AAML · 12 Mar 2020

Exploiting Verified Neural Networks via Floating Point Numerical Error
  Kai Jia, Martin Rinard
  AAML · 06 Mar 2020

Overfitting in adversarially robust deep learning
  Leslie Rice, Eric Wong, Zico Kolter
  26 Feb 2020