ResearchTrend.AI

arXiv:2110.09903
Unrestricted Adversarial Attacks on ImageNet Competition

17 October 2021
Yuefeng Chen
Xiaofeng Mao
Yuan He
Hui Xue
Chao Li
Yinpeng Dong
Qi-An Fu
Xiao Yang
Wenzhao Xiang
Tianyu Pang
Hang Su
Jun Zhu
Fangcheng Liu
Chaoning Zhang
Hongyang R. Zhang
Yichi Zhang
Shilong Liu
Chang-rui Liu
Wenzhao Xiang
Yajie Wang
Huipeng Zhou
Haoran Lyu
Yidan Xu
Zixuan Xu
Taoyu Zhu
Wenjun Li
Xianfeng Gao
Guoqiu Wang
Huanqian Yan
Yingjie Guo
Chaoning Zhang
Zheng Fang
Yang Wang
Bingyang Fu
Yunfei Zheng
Yekui Wang
Haorong Luo
Zhen Yang
Abstract

Many works have investigated adversarial attacks and defenses under the setting where a bounded, imperceptible perturbation is added to the input. In the real world, however, an attacker need not comply with this restriction. In fact, more threats to deep models come from unrestricted adversarial examples: the attacker makes large, visible modifications to an image that cause the model to misclassify it, yet do not affect normal human recognition. Unrestricted adversarial attack is a popular and practical direction, but it has not been studied thoroughly. We organize this competition to explore more effective unrestricted adversarial attack algorithms and thereby accelerate academic research on model robustness under stronger unbounded attacks. The competition is held on the TianChi platform (\url{https://tianchi.aliyun.com/competition/entrance/531853/introduction}) as part of the AI Security Challengers Program series.
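The contrast between the two threat models the abstract describes can be sketched in a few lines. This is an illustrative toy example, not the competition's method: `bounded_perturbation` shows the classical setting, where the change to each pixel is capped by an L-infinity budget `eps` (the value 8/255 is a common convention, assumed here), while `unrestricted_modification` applies a large, clearly visible edit (a global brightness shift, chosen arbitrarily) whose only constraint is that a human can still recognize the image.

```python
import numpy as np

def bounded_perturbation(x, grad_sign, eps=8 / 255):
    """FGSM-style step: the perturbation is capped by an L-infinity budget eps."""
    x_adv = x + eps * grad_sign                # grad_sign is +/-1 per pixel
    x_adv = np.clip(x_adv, x - eps, x + eps)   # enforce ||x_adv - x||_inf <= eps
    return np.clip(x_adv, 0.0, 1.0)            # keep a valid image in [0, 1]

def unrestricted_modification(x, shift=0.3):
    """Large, visible edit with no norm bound -- only human recognizability
    is required (here: a global brightness shift, purely for illustration)."""
    return np.clip(x + shift, 0.0, 1.0)
```

The bounded attack stays within a small, formally verifiable distance of the original image, whereas the unrestricted edit can move every pixel far outside any such budget, which is why defenses tuned to norm-bounded threats often fail against it.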
