Are adversarial examples inevitable?
Ali Shafahi, Yifan Jiang, Christoph Studer, S. Feizi, Tom Goldstein
arXiv:1809.02104 · SILM · 6 September 2018

Papers citing "Are adversarial examples inevitable?"

18 of 68 citing papers shown.

One Man's Trash is Another Man's Treasure: Resisting Adversarial Examples by Adversarial Examples
Chang Xiao, Changxi Zheng
AAML · 25 · 19 · 0 · 25 Nov 2019

Understanding and Quantifying Adversarial Examples Existence in Linear Classification
Xupeng Shi, A. Ding
AAML · 16 · 3 · 0 · 27 Oct 2019

Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets
Yogesh Balaji, Tom Goldstein, Judy Hoffman
AAML · 134 · 103 · 0 · 17 Oct 2019

A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Tao Yu, Shengyuan Hu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger
AAML · 58 · 101 · 0 · 16 Oct 2019

Universal Adversarial Audio Perturbations
Sajjad Abdoli, L. G. Hafemann, Jérôme Rony, Ismail Ben Ayed, P. Cardinal, Alessandro Lameiras Koerich
AAML · 25 · 51 · 0 · 08 Aug 2019

A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks
R. Sahay, Rehana Mahfuz, Aly El Gamal
AAML · 22 · 5 · 0 · 13 Jun 2019

Adversarially Robust Learning Could Leverage Computational Hardness
Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody
AAML · 23 · 24 · 0 · 28 May 2019

Robust Classification using Robust Feature Augmentation
Kevin Eykholt, Swati Gupta, Atul Prakash, Amir Rahmati, Pratik Vaishnavi, Haizhong Zheng
AAML · 19 · 2 · 0 · 26 May 2019

Adversarial Policies: Attacking Deep Reinforcement Learning
Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, Stuart J. Russell
AAML · 27 · 349 · 0 · 25 May 2019

Enhancing Adversarial Defense by k-Winners-Take-All
Chang Xiao, Peilin Zhong, Changxi Zheng
AAML · 24 · 97 · 0 · 25 May 2019

Adversarial Training and Robustness for Multiple Perturbations
Florian Tramèr, Dan Boneh
AAML, SILM · 28 · 375 · 0 · 30 Apr 2019

Adversarial Training for Free!
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein
AAML · 68 · 1,227 · 0 · 29 Apr 2019

Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter
AAML · 22 · 1,995 · 0 · 08 Feb 2019

CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks
Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique
GAN, AAML · 19 · 26 · 0 · 28 Jan 2019

On the Security of Randomized Defenses Against Adversarial Samples
K. Sharad, G. Marson, H. Truong, Ghassan O. Karame
AAML · 29 · 1 · 0 · 11 Dec 2018

Adversarial Defense based on Structure-to-Signal Autoencoders
Joachim Folz, Sebastián M. Palacio, Jörn Hees, Damian Borth, Andreas Dengel
AAML · 26 · 32 · 0 · 21 Mar 2018

Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li-Wei Chen, Michael E. Kounavis, Duen Horng Chau
FedML, AAML · 45 · 225 · 0 · 19 Feb 2018

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 317 · 5,847 · 0 · 08 Jul 2016