HYDRA: Pruning Adversarially Robust Neural Networks (arXiv:2002.10509)
24 February 2020
Vikash Sehwag
Shiqi Wang
Prateek Mittal
Suman Jana
    AAML

Papers citing "HYDRA: Pruning Adversarially Robust Neural Networks"

5 / 5 papers shown
Improving Robustness by Enhancing Weak Subnets
Yong Guo
David Stutz
Bernt Schiele
AAML
30 Jan 2022
How to Certify Machine Learning Based Safety-critical Systems? A Systematic Literature Review
Florian Tambon
Gabriel Laberge
Le An
Amin Nikanjam
Paulina Stevia Nouwou Mindom
Y. Pequignot
Foutse Khomh
G. Antoniol
E. Merlo
François Laviolette
26 Jul 2021
Taxonomy of Machine Learning Safety: A Survey and Primer
Sina Mohseni
Haotao Wang
Zhiding Yu
Chaowei Xiao
Zhangyang Wang
J. Yadawa
09 Jun 2021
Automated Discovery of Adaptive Attacks on Adversarial Defenses
Chengyuan Yao
Pavol Bielik
Petar Tsankov
Martin Vechev
AAML
23 Feb 2021
RobustBench: a standardized adversarial robustness benchmark
Francesco Croce
Maksym Andriushchenko
Vikash Sehwag
Edoardo Debenedetti
Nicolas Flammarion
M. Chiang
Prateek Mittal
Matthias Hein
VLM
19 Oct 2020