Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks
arXiv:1809.05165

13 September 2018
Siyue Wang
Tianlin Li
Pu Zhao
Wujie Wen
David Kaeli
S. Chin
X. Lin
    AAML

Papers citing "Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks"

9 / 9 papers shown
Sparta: Spatially Attentive and Adversarially Robust Activation
Qing Guo
Felix Juefei-Xu
Changqing Zhou
Wei Feng
Yang Liu
Song Wang
AAML
33
4
0
18 May 2021
Deep Video Inpainting Detection
Peng Zhou
Ning Yu
Zuxuan Wu
L. Davis
Abhinav Shrivastava
Ser-Nam Lim
30
18
0
26 Jan 2021
Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training
Alfred Laugros
A. Caplier
Matthieu Ospici
AAML
24
19
0
19 Aug 2020
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
Pu Zhao
Pin-Yu Chen
Payel Das
Karthikeyan N. Ramamurthy
Xue Lin
AAML
53
184
0
30 Apr 2020
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
Tianlin Li
Siyue Wang
Pin-Yu Chen
Yanzhi Wang
Brian Kulis
Xue Lin
S. Chin
AAML
16
42
0
20 Aug 2019
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
Pu Zhao
Sijia Liu
Pin-Yu Chen
Nghia Hoang
Kaidi Xu
B. Kailkhura
Xue Lin
AAML
27
54
0
26 Jul 2019
Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples
Huangyi Ge
Sze Yiu Chau
Bruno Ribeiro
Ninghui Li
AAML
27
1
0
05 Dec 2018
Adversarial examples in the physical world
Alexey Kurakin
Ian Goodfellow
Samy Bengio
SILM
AAML
308
5,842
0
08 Jul 2016
Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton
Nitish Srivastava
A. Krizhevsky
Ilya Sutskever
Ruslan Salakhutdinov
VLM
266
7,638
0
03 Jul 2012