Certified Robustness to Adversarial Examples with Differential Privacy

arXiv:1802.03471 · 9 February 2018
Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana
Communities: SILM, AAML

Papers citing "Certified Robustness to Adversarial Examples with Differential Privacy"

15 / 215 papers shown.

Privacy Risks of Securing Machine Learning Models against Adversarial Examples
  Liwei Song, Reza Shokri, Prateek Mittal · SILM, MIACV, AAML · 24 May 2019
Percival: Making In-Browser Perceptual Ad Blocking Practical With Deep Learning
  Z. Din, P. Tigas, Samuel T. King, B. Livshits · VLM · 17 May 2019
Adversarial Training for Free!
  Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein · AAML · 29 Apr 2019
Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness
  J. Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot · AAML · 25 Mar 2019
Data Poisoning against Differentially-Private Learners: Attacks and Defenses
  Yuzhe Ma, Xiaojin Zhu, Justin Hsu · SILM · 23 Mar 2019
Scalable Differential Privacy with Certified Robustness in Adversarial Learning
  Nhathai Phan, My T. Thai, Han Hu, R. Jin, Tong Sun, Dejing Dou · 23 Mar 2019
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
  Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang · AAML · 23 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing
  Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter · AAML · 08 Feb 2019
A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm
  Xiaolei Liu, Yuheng Luo, Xiaosong Zhang, Qingxin Zhu · AAML · 26 Jan 2019
MixTrain: Scalable Training of Verifiably Robust Neural Networks
  Yue Zhang, Yizheng Chen, Ahmed Abdou, Mohsen Guizani · AAML · 06 Nov 2018
Certified Adversarial Robustness with Additive Noise
  Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin · AAML · 10 Sep 2018
Towards Robust Neural Networks via Random Self-ensemble
  Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh · FedML, AAML · 02 Dec 2017
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
  Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer · AAML · 03 Feb 2017
Safety Verification of Deep Neural Networks
  Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu · AAML · 21 Oct 2016
Adversarial examples in the physical world
  Alexey Kurakin, Ian Goodfellow, Samy Bengio · SILM, AAML · 08 Jul 2016