Two Souls in an Adversarial Image: Towards Universal Adversarial Example Detection using Multi-view Inconsistency

25 September 2021
Sohaib Kiani
S. Awan
Chao Lan
Fengjun Li
Bo Luo
    GAN
    AAML

Papers citing "Two Souls in an Adversarial Image: Towards Universal Adversarial Example Detection using Multi-view Inconsistency"

48 / 48 papers shown
Anytime Sampling for Autoregressive Models via Ordered Autoencoding
Yilun Xu
Yang Song
Sahaj Garg
Linyuan Gong
Rui Shu
Aditya Grover
Stefano Ermon
DiffM
55
11
0
23 Feb 2021
Beating Attackers At Their Own Games: Adversarial Example Detection Using Adversarial Gradient Directions
Yuhang Wu
Sunpreet S. Arora
Yanhong Wu
Hao Yang
AAML
26
9
0
31 Dec 2020
An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
Ruixiang Tang
Mengnan Du
Ninghao Liu
Fan Yang
Xia Hu
AAML
41
185
0
15 Jun 2020
Adversarial Detection and Correction by Matching Prediction Distributions
G. Vacanti
A. V. Looveren
AAML
90
16
0
21 Feb 2020
On Adaptive Attacks to Adversarial Example Defenses
Florian Tramèr
Nicholas Carlini
Wieland Brendel
Aleksander Madry
AAML
197
827
0
19 Feb 2020
A Survey of Deep Learning Techniques for Autonomous Driving
Sorin Grigorescu
Bogdan Trasnea
Tiberiu T. Cocias
G. Macesanu
3DPC
62
1,392
0
17 Oct 2019
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors
Gilad Cohen
Guillermo Sapiro
Raja Giryes
TDI
32
125
0
15 Sep 2019
Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection
Di Tang
Xiaofeng Wang
Haixu Tang
Kehuan Zhang
AAML
43
198
0
02 Aug 2019
Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples
Hossein Hosseini
Sreeram Kannan
Radha Poovendran
AAML
34
18
0
28 Jul 2019
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
Yao Qin
Nicholas Frosst
S. Sabour
Colin Raffel
G. Cottrell
Geoffrey E. Hinton
GAN
AAML
34
72
0
05 Jul 2019
Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas
Shibani Santurkar
Dimitris Tsipras
Logan Engstrom
Brandon Tran
Aleksander Madry
SILM
80
1,825
0
06 May 2019
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei
Xin Liu
SILM
AAML
108
46
0
08 Apr 2019
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
Kevin Roth
Yannic Kilcher
Thomas Hofmann
AAML
44
175
0
13 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen
Elan Rosenfeld
J. Zico Kolter
AAML
106
2,018
0
08 Feb 2019
Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang R. Zhang
Yaodong Yu
Jiantao Jiao
Eric Xing
L. Ghaoui
Michael I. Jordan
98
2,525
0
24 Jan 2019
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Kimin Lee
Kibok Lee
Honglak Lee
Jinwoo Shin
OODD
120
2,024
0
10 Jul 2018
Robustness May Be at Odds with Accuracy
Dimitris Tsipras
Shibani Santurkar
Logan Engstrom
Alexander Turner
Aleksander Madry
AAML
88
1,772
0
30 May 2018
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
Pouya Samangouei
Maya Kabkab
Rama Chellappa
AAML
GAN
67
1,172
0
17 May 2018
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi
Wenjie Huang
Mahyar Najibi
Octavian Suciu
Christoph Studer
Tudor Dumitras
Tom Goldstein
AAML
78
1,080
0
03 Apr 2018
Adversarial Examples that Fool both Computer Vision and Time-Limited Humans
Gamaleldin F. Elsayed
Shreya Shankar
Brian Cheung
Nicolas Papernot
Alexey Kurakin
Ian Goodfellow
Jascha Narain Sohl-Dickstein
AAML
64
261
0
22 Feb 2018
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye
Nicholas Carlini
D. Wagner
AAML
164
3,171
0
01 Feb 2018
Generating Adversarial Examples with Adversarial Networks
Chaowei Xiao
Yue Liu
Jun-Yan Zhu
Warren He
M. Liu
D. Song
GAN
AAML
107
893
0
08 Jan 2018
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
Yang Song
Taesup Kim
Sebastian Nowozin
Stefano Ermon
Nate Kushman
AAML
97
787
0
30 Oct 2017
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
Xiaoyu Cao
Neil Zhenqiang Gong
AAML
47
209
0
17 Sep 2017
Evasion Attacks against Machine Learning at Test Time
Battista Biggio
Igino Corona
Davide Maiorca
B. Nelson
Nedim Srndic
Pavel Laskov
Giorgio Giacinto
Fabio Roli
AAML
106
2,142
0
21 Aug 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM
OOD
236
11,962
0
19 Jun 2017
Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt
Pang Wei Koh
Percy Liang
AAML
75
751
0
09 Jun 2017
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini
D. Wagner
AAML
103
1,851
0
20 May 2017
Fast Generation for Convolutional Autoregressive Models
Prajit Ramachandran
T. Paine
Pooya Khorrami
Mohammad Babaeizadeh
Shiyu Chang
Yang Zhang
Mark Hasegawa-Johnson
R. Campbell
Thomas S. Huang
BDL
49
67
0
20 Apr 2017
Adversarial and Clean Data Are Not Twins
Zhitao Gong
Wenlu Wang
Wei-Shinn Ku
AAML
46
156
0
17 Apr 2017
Detecting Adversarial Samples from Artifacts
Reuben Feinman
Ryan R. Curtin
S. Shintre
Andrew B. Gardner
AAML
77
892
0
01 Mar 2017
On the (Statistical) Detection of Adversarial Examples
Kathrin Grosse
Praveen Manoharan
Nicolas Papernot
Michael Backes
Patrick McDaniel
AAML
65
710
0
21 Feb 2017
On Detecting Adversarial Perturbations
J. H. Metzen
Tim Genewein
Volker Fischer
Bastian Bischoff
AAML
49
947
0
14 Feb 2017
PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications
Tim Salimans
A. Karpathy
Xi Chen
Diederik P. Kingma
62
933
0
19 Jan 2017
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library
Nicolas Papernot
Fartash Faghri
Nicholas Carlini
Ian Goodfellow
Reuben Feinman
...
David Berthelot
P. Hendricks
Jonas Rauber
Rujun Long
Patrick McDaniel
AAML
49
512
0
03 Oct 2016
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini
D. Wagner
OOD
AAML
174
8,513
0
16 Aug 2016
Early Methods for Detecting Adversarial Images
Dan Hendrycks
Kevin Gimpel
AAML
61
236
0
01 Aug 2016
Conditional Image Generation with PixelCNN Decoders
Aaron van den Oord
Nal Kalchbrenner
Oriol Vinyals
L. Espeholt
Alex Graves
Koray Kavukcuoglu
VLM
138
2,495
0
16 Jun 2016
Wide Residual Networks
Sergey Zagoruyko
N. Komodakis
272
7,951
0
23 May 2016
Neural Autoregressive Distribution Estimation
Benigno Uria
Marc-Alexandre Côté
Karol Gregor
Iain Murray
Hugo Larochelle
70
314
0
07 May 2016
Pixel Recurrent Neural Networks
Aaron van den Oord
Nal Kalchbrenner
Koray Kavukcuoglu
SSeg
GAN
405
2,563
0
25 Jan 2016
The Limitations of Deep Learning in Adversarial Settings
Nicolas Papernot
Patrick McDaniel
S. Jha
Matt Fredrikson
Z. Berkay Celik
A. Swami
AAML
69
3,947
0
24 Nov 2015
DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
P. Frossard
AAML
100
4,878
0
14 Nov 2015
DeepID3: Face Recognition with Very Deep Neural Networks
Yi Sun
Ding Liang
Xiaogang Wang
Xiaoou Tang
CVBM
73
940
0
03 Feb 2015
Explaining and Harnessing Adversarial Examples
Ian Goodfellow
Jonathon Shlens
Christian Szegedy
AAML
GAN
184
18,922
0
20 Dec 2014
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
Anh Totti Nguyen
J. Yosinski
Jeff Clune
AAML
136
3,261
0
05 Dec 2014
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky
Jia Deng
Hao Su
J. Krause
S. Satheesh
...
A. Karpathy
A. Khosla
Michael S. Bernstein
Alexander C. Berg
Li Fei-Fei
VLM
ObjD
1.2K
39,383
0
01 Sep 2014
Intriguing properties of neural networks
Christian Szegedy
Wojciech Zaremba
Ilya Sutskever
Joan Bruna
D. Erhan
Ian Goodfellow
Rob Fergus
AAML
192
14,831
1
21 Dec 2013