ResearchTrend.AI
Re-thinking Data Availablity Attacks Against Deep Neural Networks
arXiv:2305.10691 · 18 May 2023
Bin Fang, Yue Liu, Shuang Wu, Ran Yi, Shouhong Ding, Lizhuang Ma
Topics: AAML

Papers citing "Re-thinking Data Availablity Attacks Against Deep Neural Networks"

12 / 12 papers shown:

  1. CUDA: Convolution-based Unlearnable Datasets (07 Mar 2023)
     Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, Soheil Feizi · MU · 25 citations
  2. Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples (31 Dec 2022)
     Jiaming Zhang, Xingjun Ma, Qiaomin Yi, Jitao Sang, Yugang Jiang, Yaowei Wang, Changsheng Xu · 25 citations
  3. The Creativity of Text-to-Image Generation (13 May 2022)
     J. Oppenlaender · 194 citations
  4. Robustness and Accuracy Could Be Reconcilable by (Proper) Definition (21 Feb 2022)
     Tianyu Pang, Min Lin, Xiao Yang, Junyi Zhu, Shuicheng Yan · 121 citations
  5. Availability Attacks Create Shortcuts (01 Nov 2021)
     Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu · AAML · 57 citations
  6. Adversarial Examples Make Strong Poisons (21 Jun 2021)
     Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein · SILM · 134 citations
  7. Unlearnable Examples: Making Personal Data Unexploitable (13 Jan 2021)
     Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang · MIACV · 192 citations
  8. Input-Aware Dynamic Backdoor Attack (16 Oct 2020)
     A. Nguyen, Anh Tran · AAML · 425 citations
  9. Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks (05 Jul 2020)
     Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu · AAML · 509 citations
  10. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning (15 Dec 2017)
      Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song · AAML, SILM · 1,822 citations
  11. ImageNet Large Scale Visual Recognition Challenge (01 Sep 2014)
      Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei · VLM, ObjD · 39,383 citations
  12. Poisoning Attacks against Support Vector Machines (27 Jun 2012)
      Battista Biggio, B. Nelson, Pavel Laskov · AAML · 1,580 citations