
Rearchitecting Classification Frameworks For Increased Robustness
arXiv: 1905.10900 (v3, latest) · 26 May 2019
Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, S. Jha, Xi Wu
AAML, OOD

Papers citing "Rearchitecting Classification Frameworks For Increased Robustness"

27 / 27 papers shown.

 1. Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving
    Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Wonseok Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao
    AAML · 16 Jul 2019

 2. Adversarial Examples Are Not Bugs, They Are Features
    Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
    SILM · 06 May 2019

 3. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
    Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong
    AAML · 02 May 2019

 4. On Evaluating Adversarial Robustness
    Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin
    ELM, AAML · 18 Feb 2019

 5. Certified Adversarial Robustness via Randomized Smoothing
    Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter
    AAML · 08 Feb 2019

 6. Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers
    Daniel Liu, Ronald Yu, Hao Su
    3DPC · 10 Jan 2019

 7. Excessive Invariance Causes Adversarial Vulnerability
    J. Jacobsen, Jens Behrmann, R. Zemel, Matthias Bethge
    AAML · 01 Nov 2018

 8. Humans can decipher adversarial images
    Zhenglong Zhou, C. Firestone
    AAML · 11 Sep 2018

 9. Are adversarial examples inevitable?
    Ali Shafahi, Wenjie Huang, Christoph Studer, Soheil Feizi, Tom Goldstein
    SILM · 06 Sep 2018

10. Generalisation in humans and deep neural networks
    Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schutt, Matthias Bethge, Felix Wichmann
    OOD · 27 Aug 2018

11. Robustness May Be at Odds with Accuracy
    Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
    AAML · 30 May 2018

12. Adversarial examples from computational constraints
    Sébastien Bubeck, Eric Price, Ilya P. Razenshteyn
    AAML · 25 May 2018

13. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
    Pouya Samangouei, Maya Kabkab, Rama Chellappa
    AAML, GAN · 17 May 2018

14. Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
    Nicolas Papernot, Patrick McDaniel
    OOD, AAML · 13 Mar 2018

15. Stochastic Activation Pruning for Robust Adversarial Defense
    Guneet Singh Dhillon, Kamyar Azizzadenesheli, Zachary Chase Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, Anima Anandkumar
    AAML · 05 Mar 2018

16. Certified Robustness to Adversarial Examples with Differential Privacy
    Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana
    SILM, AAML · 09 Feb 2018

17. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
    Anish Athalye, Nicholas Carlini, D. Wagner
    AAML · 01 Feb 2018

18. Evasion Attacks against Machine Learning at Test Time
    Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli
    AAML · 21 Aug 2017

19. Robust Physical-World Attacks on Deep Learning Models
    Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Yue Liu, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Basel Alomair
    AAML · 27 Jul 2017

20. Towards Deep Learning Models Resistant to Adversarial Attacks
    Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
    SILM, OOD · 19 Jun 2017

21. Axiomatic Attribution for Deep Networks
    Mukund Sundararajan, Ankur Taly, Qiqi Yan
    OOD, FAtt · 04 Mar 2017

22. Neural Architecture Search with Reinforcement Learning
    Barret Zoph, Quoc V. Le
    05 Nov 2016

23. Universal adversarial perturbations
    Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard
    AAML · 26 Oct 2016

24. Adversarial examples in the physical world
    Alexey Kurakin, Ian Goodfellow, Samy Bengio
    SILM, AAML · 08 Jul 2016

25. Practical Black-Box Attacks against Machine Learning
    Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami
    MLAU, AAML · 08 Feb 2016

26. DeepFool: a simple and accurate method to fool deep neural networks
    Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
    AAML · 14 Nov 2015

27. Visualizing and Understanding Convolutional Networks
    Matthew D. Zeiler, Rob Fergus
    FAtt, SSL · 12 Nov 2013