Universal adversarial perturbations

26 October 2016
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
Omar Fawzi
P. Frossard
    AAML
ArXiv (abs) · PDF · HTML

Papers citing "Universal adversarial perturbations"

50 / 1,270 papers shown
When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning
Yuchen Sun
Qianqian Xu
Zitai Wang
Qingming Huang
AAML
107
1
0
27 Jul 2023
FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal Adversarial Masks
Buse G. A. Tekgul
Nadarajah Asokan
AAML
53
2
0
27 Jul 2023
A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning
Heng Zhang
Danilo Vasconcellos Vargas
AI4CE
68
22
0
27 Jul 2023
Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations
Yi Han
Matthew Chan
Eric Wengrowski
Zhuo Li
Nils Ole Tippenhauer
Mani B. Srivastava
S. Zonouz
L. Garcia
AAML
48
1
0
24 Jul 2023
Latent Code Augmentation Based on Stable Diffusion for Data-free Substitute Attacks
Mingwen Shao
Lingzhuang Meng
Yuanjian Qiao
Lixu Zhang
W. Zuo
DiffM
94
1
0
24 Jul 2023
An Estimator for the Sensitivity to Perturbations of Deep Neural Networks
Naman Maheshwari
Nicholas Malaya
Scott A. Moe
J. Kulkarni
S. Gurumurthi
AAML
30
0
0
24 Jul 2023
Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation
Neel Bhandari
Pin-Yu Chen
AAML SILM
84
3
0
24 Jul 2023
Downstream-agnostic Adversarial Examples
Ziqi Zhou
Shengshan Hu
Rui-Qing Zhao
Qian Wang
L. Zhang
Junhui Hou
Hai Jin
SILM AAML
88
25
0
23 Jul 2023
Adversarial Attacks on Traffic Sign Recognition: A Survey
Svetlana Pavlitska
Nico Lambing
J. Marius Zöllner
AAML
88
18
0
17 Jul 2023
Single-Class Target-Specific Attack against Interpretable Deep Learning Systems
Eldor Abdukhamidov
Mohammed Abuhamad
George K. Thiruvathukal
Hyoungshick Kim
Tamer Abuhmed
AAML
50
2
0
12 Jul 2023
Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection
Hao Fu
Prashanth Krishnamurthy
S. Garg
Farshad Khorrami
AAML
73
14
0
11 Jul 2023
Membership Inference Attacks on DNNs using Adversarial Perturbations
Hassan Ali
Adnan Qayyum
Ala I. Al-Fuqaha
Junaid Qadir
AAML
105
3
0
11 Jul 2023
Scaling Model Checking for DNN Analysis via State-Space Reduction and Input Segmentation (Extended Version)
Mahum Naseer
Osman Hasan
Mohamed Bennai
36
2
0
29 Jun 2023
NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes
Hao Sun
Lei Hsiung
Nandhini Chandramoorthy
Pin-Yu Chen
Tsung-Yi Ho
AAML
88
0
0
29 Jun 2023
Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack
Jie Ning
Jiebao Sun
Yao Li
Zhichang Guo
Wangmeng Zuo
69
6
0
28 Jun 2023
A Survey on Out-of-Distribution Evaluation of Neural NLP Models
Xinzhe Li
Ming Liu
Shang Gao
Wray Buntine
74
20
0
27 Jun 2023
On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection
Songyang Gao
Shihan Dou
Qi Zhang
Xuanjing Huang
Jin Ma
Yingchun Shan
AAML
63
3
0
27 Jun 2023
The race to robustness: exploiting fragile models for urban camouflage and the imperative for machine learning security
Harriet Farlow
Matthew A. Garratt
G. Mount
T. Lynar
AAML
62
0
0
26 Jun 2023
A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking
Shaohui Mei
Jiawei Lian
Xiaofei Wang
Yuru Su
Mingyang Ma
Lap-Pui Chau
AAML
126
12
0
21 Jun 2023
Universal adversarial perturbations for multiple classification tasks with quantum classifiers
Yun-Zhong Qiu
AAML
72
1
0
21 Jun 2023
Self-Supervised Learning for Time Series Analysis: Taxonomy, Progress, and Prospects
Kexin Zhang
Qingsong Wen
Chaoli Zhang
Rongyao Cai
Ming Jin
...
James Y. Zhang
Yuxuan Liang
Guansong Pang
Dongjin Song
Shirui Pan
AI4TS
229
115
0
16 Jun 2023
OVLA: Neural Network Ownership Verification using Latent Watermarks
Feisi Fu
Wenchao Li
AAML
131
1
0
15 Jun 2023
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios
Ziqiang Li
Hong Sun
Pengfei Xia
Heng Li
Beihao Xia
Yi Wu
Bin Li
AAML
101
10
0
14 Jun 2023
A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks
Ziqiang Li
Hong Sun
Pengfei Xia
Beihao Xia
Xue Rui
Wei Zhang
Qinglang Guo
Bin Li
AAML
134
8
0
14 Jun 2023
Malafide: a novel adversarial convolutive noise attack against deepfake and spoofing detection systems
Michele Panariello
W. Ge
Hemlata Tak
Massimiliano Todisco
Nicholas W. D. Evans
AAML
62
14
0
13 Jun 2023
A Linearly Convergent GAN Inversion-based Algorithm for Reverse Engineering of Deceptions
D. Thaker
Paris V. Giampouras
René Vidal
AAML
59
0
0
07 Jun 2023
PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
Kaijie Zhu
Jindong Wang
Jiaheng Zhou
Zichen Wang
Hao Chen
...
Linyi Yang
Weirong Ye
Yue Zhang
Neil Zhenqiang Gong
Xingxu Xie
SILM
135
144
0
07 Jun 2023
Adversarial Sample Detection Through Neural Network Transport Dynamics
Skander Karkar
Patrick Gallinari
A. Rakotomamonjy
AAML
47
1
0
07 Jun 2023
UMD: Unsupervised Model Detection for X2X Backdoor Attacks
Zhen Xiang
Zidi Xiong
Yue Liu
AAML
92
20
0
29 May 2023
NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks
Kan Yang
Kunhao Lai
AAML
78
0
0
29 May 2023
DeepSeaNet: Improving Underwater Object Detection using EfficientDet
Sanyam Jain
AAML
55
14
0
26 May 2023
A Guide Through the Zoo of Biased SGD
Yury Demidovich
Grigory Malinovsky
Igor Sokolov
Peter Richtárik
102
28
0
25 May 2023
Adversarial Demonstration Attacks on Large Language Models
Jiong Wang
Zi-yang Liu
Keun Hee Park
Zhuojun Jiang
Zhaoheng Zheng
Zhuofeng Wu
Muhao Chen
Chaowei Xiao
SILM
112
56
0
24 May 2023
Impact of Light and Shadow on Robustness of Deep Neural Networks
Chen-Hao Hu
Weiwen Shi
Chaoxian Li
Jialiang Sun
Donghua Wang
Jun Wu
Guijian Tang
AAML
59
2
0
23 May 2023
Adversarial Defenses via Vector Quantization
Zhiyi Dong
Yongyi Mao
AAML
53
1
0
23 May 2023
Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors
Pia Hanfeld
Marina M.-C. Höhne
Michael Bussmann
Wolfgang Hönig
AAML
56
1
0
22 May 2023
How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses
Joana Cabral Costa
Tiago Roxo
Hugo Manuel Proença
Pedro R. M. Inácio
AAML
120
62
0
18 May 2023
Inter-frame Accelerate Attack against Video Interpolation Models
Junpei Liao
Zhikai Chen
Liang Yi
Wenyuan Yang
Baoyuan Wu
Xiaochun Cao
AAML
95
1
0
11 May 2023
SepMark: Deep Separable Watermarking for Unified Source Tracing and Deepfake Detection
Xiaoshuai Wu
Xin Liao
Bo Ou
96
39
0
10 May 2023
Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization
Z. Yin
Shaowei Zhu
Han Su
Jianteng Peng
Wanli Lyu
Bin Luo
AAML
60
2
0
08 May 2023
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks
Nils Lukas
Florian Kerschbaum
95
1
0
07 May 2023
FVP: Fourier Visual Prompting for Source-Free Unsupervised Domain Adaptation of Medical Image Segmentation
Yan Wang
Jian Cheng
Yixin Chen
Shuai Shao
Lanyun Zhu
Zhenzhou Wu
Tianming Liu
Haogang Zhu
OOD MedIm
126
26
0
26 Apr 2023
Generating Adversarial Examples with Task Oriented Multi-Objective Optimization
Anh-Vu Bui
Trung Le
He Zhao
Quan Hung Tran
Paul Montague
Dinh Q. Phung
AAML
64
0
0
26 Apr 2023
Evaluating Adversarial Robustness on Document Image Classification
Timothée Fronteau
Arnaud Paran
A. Shabou
AAML
85
3
0
24 Apr 2023
SketchXAI: A First Look at Explainability for Human Sketches
Zhiyu Qu
Yulia Gryaditskaya
Ke Li
Kaiyue Pang
Tao Xiang
Yi-Zhe Song
86
8
0
23 Apr 2023
Universal Adversarial Backdoor Attacks to Fool Vertical Federated Learning in Cloud-Edge Collaboration
Peng Chen
Xin Du
Zhihui Lu
Hongfeng Chai
FedML AAML
92
11
0
22 Apr 2023
RoboBEV: Towards Robust Bird's Eye View Perception under Corruptions
Shaoyuan Xie
Lingdong Kong
Wenwei Zhang
Jiawei Ren
Liang Pan
Kai-xiang Chen
Ziwei Liu
95
25
0
13 Apr 2023
Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence
Hanbin Hong
Xinyu Zhang
Binghui Wang
Zhongjie Ba
Yuan Hong
AAML
79
3
0
10 Apr 2023
AI Model Disgorgement: Methods and Choices
Alessandro Achille
Michael Kearns
Carson Klingenberg
Stefano Soatto
MU
98
13
0
07 Apr 2023
Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
Jonas Ngnawé
Marianne Abémgnigni Njifon
Jonathan Heek
Yann N. Dauphin
OOD
42
5
0
06 Apr 2023