MadNet: Using a MAD Optimization for Defending Against Adversarial Attacks

3 November 2019
Shai Rozenberg, G. Elidan, Ran El-Yaniv
Topics: AAML
arXiv: 1911.00870

Papers citing "MadNet: Using a MAD Optimization for Defending Against Adversarial Attacks"

15 / 15 papers shown
| Title | Authors | Topics | Citations | Date |
|---|---|---|---|---|
| When Does Label Smoothing Help? | Rafael Müller, Simon Kornblith, Geoffrey E. Hinton | UQCV | 1,945 | 06 Jun 2019 |
| On Evaluating Adversarial Robustness | Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin | ELM, AAML | 901 | 18 Feb 2019 |
| Certified Adversarial Robustness via Randomized Smoothing | Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter | AAML | 2,038 | 08 Feb 2019 |
| Evaluating and Understanding the Robustness of Adversarial Logit Pairing | Logan Engstrom, Andrew Ilyas, Anish Athalye | AAML | 141 | 26 Jul 2018 |
| Scaling provable adversarial defenses | Eric Wong, Frank R. Schmidt, J. H. Metzen, J. Zico Kolter | AAML | 448 | 31 May 2018 |
| Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models | Pouya Samangouei, Maya Kabkab, Rama Chellappa | AAML, GAN | 1,177 | 17 May 2018 |
| Stochastic Activation Pruning for Robust Adversarial Defense | Guneet Singh Dhillon, Kamyar Azizzadenesheli, Zachary Chase Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, Anima Anandkumar | AAML | 547 | 05 Mar 2018 |
| Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks | Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama | AAML | 307 | 12 Feb 2018 |
| Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | Anish Athalye, Nicholas Carlini, D. Wagner | AAML | 3,185 | 01 Feb 2018 |
| Towards Deep Learning Models Resistant to Adversarial Attacks | Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu | SILM, OOD | 12,063 | 19 Jun 2017 |
| Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods | Nicholas Carlini, D. Wagner | AAML | 1,857 | 20 May 2017 |
| Detecting Adversarial Samples from Artifacts | Reuben Feinman, Ryan R. Curtin, S. Shintre, Andrew B. Gardner | AAML | 893 | 01 Mar 2017 |
| On Detecting Adversarial Perturbations | J. H. Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff | AAML | 950 | 14 Feb 2017 |
| Adversarial examples in the physical world | Alexey Kurakin, Ian Goodfellow, Samy Bengio | SILM, AAML | 5,897 | 08 Jul 2016 |
| Practical Black-Box Attacks against Machine Learning | Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami | MLAU, AAML | 3,677 | 08 Feb 2016 |