Universal adversarial perturbations

26 October 2016
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard
AAML

Papers citing "Universal adversarial perturbations"

Showing 50 of 1,266 citing papers.

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Bo-wen Li, Kimberly Lu, D. Song
AAML · SILM · 44 · 1,808 · 0 · 15 Dec 2017

NAG: Network for Adversary Generation
Konda Reddy Mopuri, Utkarsh Ojha, Utsav Garg, R. Venkatesh Babu
AAML · 24 · 144 · 0 · 09 Dec 2017

Adversarial Examples that Fool Detectors
Jiajun Lu, Hussein Sibai, Evan Fabry
AAML · 19 · 144 · 0 · 07 Dec 2017

Generative Adversarial Perturbations
Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge J. Belongie
AAML · GAN · WIGM · 31 · 351 · 0 · 06 Dec 2017

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning
Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Cho-Jui Hsieh
GAN · AAML · 35 · 49 · 0 · 06 Dec 2017

A+D Net: Training a Shadow Detector with Adversarial Shadow Attenuation
Hieu M. Le, T. F. Y. Vicente, Vu Nguyen, Minh Hoai, Dimitris Samaras
26 · 114 · 0 · 04 Dec 2017

Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images
Tribhuvanesh Orekondy, Mario Fritz, Bernt Schiele
PICV · 24 · 80 · 0 · 04 Dec 2017

Improving Network Robustness against Adversarial Attacks with Compact Convolution
Rajeev Ranjan, S. Sankaranarayanan, Carlos D. Castillo, Rama Chellappa
AAML · 24 · 14 · 0 · 03 Dec 2017

Where Classification Fails, Interpretation Rises
Chanh Nguyen, Georgi Georgiev, Yujie Ji, Ting Wang
AAML · 15 · 0 · 0 · 02 Dec 2017

Measuring the tendency of CNNs to Learn Surface Statistical Regularities
Jason Jo, Yoshua Bengio
AAML · 26 · 249 · 0 · 30 Nov 2017

Adversary Detection in Neural Networks via Persistent Homology
Thomas Gebhart, Paul Schrater
AAML · 24 · 25 · 0 · 28 Nov 2017

On the Robustness of Semantic Segmentation Models to Adversarial Attacks
Anurag Arnab, O. Mikšík, Philip Torr
AAML · 33 · 304 · 0 · 27 Nov 2017

Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation
Y. Yoo, Seonguk Park, Junyoung Choi, Sangdoo Yun, Nojun Kwak
AAML · 24 · 4 · 0 · 27 Nov 2017

Adversarial Attacks Beyond the Image Space
Fangyin Wei, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, Alan Yuille
AAML · 41 · 145 · 0 · 20 Nov 2017

Defense against Universal Adversarial Perturbations
Naveed Akhtar, Jian Liu, Ajmal Mian
AAML · 28 · 207 · 0 · 16 Nov 2017

Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples
Jihun Hamm, Akshay Mehra
AAML · 29 · 7 · 0 · 12 Nov 2017

LatentPoison - Adversarial Attacks On The Latent Space
Antonia Creswell, Anil A. Bharath, B. Sengupta
AAML · OOD · 22 · 20 · 0 · 08 Nov 2017

Adversarial Frontier Stitching for Remote Neural Network Watermarking
Erwan Le Merrer, P. Pérez, Gilles Trédan
MLAU · AAML · 42 · 336 · 0 · 06 Nov 2017

Towards Reverse-Engineering Black-Box Neural Networks
Seong Joon Oh, Maximilian Augustin, Bernt Schiele, Mario Fritz
AAML · 289 · 3 · 0 · 06 Nov 2017

Countering Adversarial Images using Input Transformations
Chuan Guo, Mayank Rana, Moustapha Cissé, Laurens van der Maaten
AAML · 30 · 1,388 · 0 · 31 Oct 2017

Generating Natural Adversarial Examples
Zhengli Zhao, Dheeru Dua, Sameer Singh
GAN · AAML · 38 · 596 · 0 · 31 Oct 2017

Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior
Charles H. Martin, Michael W. Mahoney
AI4CE · 30 · 62 · 0 · 26 Oct 2017

One pixel attack for fooling deep neural networks
Jiawei Su, Danilo Vasconcellos Vargas, Kouichi Sakurai
AAML · 24 · 2,306 · 0 · 24 Oct 2017

Feature-Guided Black-Box Safety Testing of Deep Neural Networks
Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska
AAML · 12 · 232 · 0 · 21 Oct 2017

ADA: A Game-Theoretic Perspective on Data Augmentation for Object Detection
Sima Behpour, Kris Kitani, Brian Ziebart
AAML · 35 · 4 · 0 · 21 Oct 2017

Boosting Adversarial Attacks with Momentum
Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li
AAML · 26 · 83 · 0 · 17 Oct 2017

Standard detectors aren't (currently) fooled by physical adversarial stop signs
Jiajun Lu, Hussein Sibai, Evan Fabry, David A. Forsyth
AAML · 16 · 59 · 0 · 09 Oct 2017

Verification of Binarized Neural Networks via Inter-Neuron Factoring
Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess
AAML · 25 · 21 · 0 · 09 Oct 2017

Fooling Vision and Language Models Despite Localization and Attention Mechanism
Xiaojun Xu, Xinyun Chen, Chang-rui Liu, Anna Rohrbach, Trevor Darrell, D. Song
AAML · 10 · 41 · 0 · 25 Sep 2017

Verifying Properties of Binarized Deep Neural Networks
Nina Narodytska, S. Kasiviswanathan, L. Ryzhyk, Shmuel Sagiv, T. Walsh
AAML · 21 · 215 · 0 · 19 Sep 2017

A Learning and Masking Approach to Secure Learning
Linh Nguyen, Sky Wang, Arunesh Sinha
AAML · 32 · 2 · 0 · 13 Sep 2017

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh
AAML · 24 · 637 · 0 · 13 Sep 2017

Art of singular vectors and universal adversarial perturbations
Valentin Khrulkov, Ivan Oseledets
AAML · 27 · 132 · 0 · 11 Sep 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML · 27 · 625 · 0 · 29 Aug 2017

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM · 17 · 1,740 · 0 · 22 Aug 2017

Learning Universal Adversarial Perturbations with Generative Models
Jamie Hayes, G. Danezis
AAML · 13 · 54 · 0 · 17 Aug 2017

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh
AAML · 22 · 1,854 · 0 · 14 Aug 2017

Robust Physical-World Attacks on Deep Learning Models
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo-wen Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, D. Song
AAML · 6 · 590 · 0 · 27 Jul 2017

Adversarial Examples for Evaluating Reading Comprehension Systems
Robin Jia, Percy Liang
AAML · ELM · 157 · 1,584 · 0 · 23 Jul 2017

Confidence estimation in Deep Neural networks via density modelling
Akshayvarun Subramanya, Suraj Srinivas, R. Venkatesh Babu
22 · 51 · 0 · 21 Jul 2017

Efficient Defenses Against Adversarial Attacks
Valentina Zantedeschi, Maria-Irina Nicolae, Ambrish Rawat
AAML · 13 · 297 · 0 · 21 Jul 2017

Fast Feature Fool: A data independent approach to universal adversarial perturbations
Konda Reddy Mopuri, Utsav Garg, R. Venkatesh Babu
AAML · 36 · 205 · 0 · 18 Jul 2017

NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles
Jiajun Lu, Hussein Sibai, Evan Fabry, David A. Forsyth
AAML · 17 · 280 · 0 · 12 Jul 2017

Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks
John Bradshaw, A. G. Matthews, Zoubin Ghahramani
BDL · AAML · 72 · 171 · 0 · 08 Jul 2017

UPSET and ANGRI : Breaking High Performance Image Classifiers
Sayantan Sarkar, Ankan Bansal, U. Mahbub, Rama Chellappa
AAML · 30 · 108 · 0 · 04 Jul 2017

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
Shiyu Liang, Yixuan Li, R. Srikant
UQCV · OODD · 32 · 2,031 · 0 · 08 Jun 2017

Robustness of classifiers to universal perturbations: a geometric perspective
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard, Stefano Soatto
AAML · 29 · 118 · 0 · 26 May 2017

MagNet: a Two-Pronged Defense against Adversarial Examples
Dongyu Meng, Hao Chen
AAML · 13 · 1,198 · 0 · 25 May 2017

Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
Matthias Hein, Maksym Andriushchenko
AAML · 45 · 505 · 0 · 23 May 2017

MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
Sailik Sengupta, Tathagata Chakraborti, S. Kambhampati
AAML · 26 · 63 · 0 · 19 May 2017