ResearchTrend.AI

Edge Detectors Can Make Deep Convolutional Neural Networks More Robust
arXiv:2402.16479 · 26 February 2024
Jin Ding, Jie-Chao Zhao, Yong-zhi Sun, Ping Tan, Jia-Wei Wang, Ji-en Ma, You-tong Fang
AAML

Papers citing "Edge Detectors Can Make Deep Convolutional Neural Networks More Robust"

13 / 13 papers shown

Injecting Logical Constraints into Neural Networks via Straight-Through Estimators
Zhun Yang, Joohyung Lee, Chi-youn Park · 10 Jul 2023 · 50 / 20 / 0

A2: Efficient Automated Attacker for Boosting Adversarial Training
Zhuoer Xu, Guanghui Zhu, Changhua Meng, Shiwen Cui, ZhenZhe Ying, Weiqiang Wang, GU Ming, Yihua Huang · AAML · 07 Oct 2022 · 61 / 14 / 0

Part-Based Models Improve Adversarial Robustness
Chawin Sitawarin, Kornrapat Pongmala, Yizheng Chen, Nicholas Carlini, David Wagner · 15 Sep 2022 · 63 / 12 / 0

Subspace Adversarial Training
Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang · AAML, OOD · 24 Nov 2021 · 87 / 59 / 0

Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients
Yifei Huang, Yaodong Yu, Hongyang R. Zhang, Yi-An Ma, Yuan Yao · AAML · 28 Sep 2020 · 39 / 27 / 0

Adversarial Examples: Opportunities and Challenges
Jiliang Zhang, Chen Li · AAML · 13 Sep 2018 · 53 / 233 / 0

Robustness May Be at Odds with Accuracy
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry · AAML · 30 May 2018 · 93 / 1,776 / 0

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner · AAML · 01 Feb 2018 · 185 / 3,180 / 0

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 19 Jun 2017 · 265 / 12,029 / 0

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio · AAML · 04 Nov 2016 · 461 / 3,138 / 0

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner · OOD, AAML · 16 Aug 2016 · 222 / 8,533 / 0

Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman · FAtt, MDE · 04 Sep 2014 · 1.3K / 100,213 / 0

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 21 Dec 2013 · 235 / 14,893 / 1