Generative Poisoning Attack Method Against Neural Networks

3 March 2017
Chaofei Yang, Qing Wu, Hai Helen Li, Yiran Chen
AAML

Papers citing "Generative Poisoning Attack Method Against Neural Networks"

Showing 50 of 99 citing papers:
  • Data Poisoning in Deep Learning: A Survey
    Pinlong Zhao, Weiyao Zhu, Pengfei Jiao, Di Gao, Ou Wu
    AAML · 27 Mar 2025
  • Poisoning Bayesian Inference via Data Deletion and Replication
    Matthieu Carreau, Roi Naveiro, William N. Caballero
    AAML, KELM · 06 Mar 2025
  • Towards Autonomous Reinforcement Learning for Real-World Robotic Manipulation with Large Language Models
    Niccolò Turcato, Matteo Iovino, Aris Synodinos, Alberto Dalla Libera, R. Carli, Pietro Falco
    LM&Ro · 06 Mar 2025
  • CONTINUUM: Detecting APT Attacks through Spatial-Temporal Graph Neural Networks
    Atmane Ayoub Mansour Bahar, Kamel Soaid Ferrahi, Mohamed-Lamine Messai, H. Seba, Karima Amrouche
    08 Jan 2025
  • On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning
    Yongyi Su, Yushu Li, Nanqing Liu, Kui Jia, Xulei Yang, Chuan-Sheng Foo, Xun Xu
    TTA, AAML · 07 Oct 2024
  • PACE: Poisoning Attacks on Learned Cardinality Estimation
    Jintao Zhang, Chao Zhang, Guoliang Li, Chengliang Chai
    24 Sep 2024
  • High-Frequency Anti-DreamBooth: Robust Defense against Personalized Image Synthesis
    Takuto Onikubo, Yusuke Matsui
    DiffM, AAML · 12 Sep 2024
  • A Survey of Trojan Attacks and Defenses to Deep Neural Networks
    Lingxin Jin, Xianyu Wen, Wei Jiang, Jinyu Zhan
    AAML · 15 Aug 2024
  • Algorithmic Complexity Attacks on Dynamic Learned Indexes
    Rui Yang, Evgenios M. Kornaropoulos, Yue Cheng
    AAML · 19 Mar 2024
  • Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking
    Weixiang Sun, Yixin Liu, Zhiling Yan, Kaidi Xu, Lichao Sun
    AAML · 15 Mar 2024
  • Quantifying and Mitigating Privacy Risks for Tabular Generative Models
    Chaoyi Zhu, Jiayi Tang, Hans Brouwer, Juan F. Pérez, Marten van Dijk, Lydia Y. Chen
    12 Mar 2024
  • Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks
    Ehsan Nowroozi, Imran Haider, R. Taheri, Mauro Conti
    AAML · 05 Mar 2024
  • Preference Poisoning Attacks on Reward Model Learning
    Junlin Wu, Jiong Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik
    AAML · 02 Feb 2024
  • Manipulating Predictions over Discrete Inputs in Machine Teaching
    Xiaodong Wu, Yufei Han, H. Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang
    31 Jan 2024
  • Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them
    Chao-Jung Liu, Boxi Chen, Wei Shao, Chris Zhang, Kelvin Wong, Yi Zhang
    22 Jan 2024
  • RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models
    Jiong Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao
    AAML · 16 Nov 2023
  • Machine Unlearning Methodology base on Stochastic Teacher Network
    Xulong Zhang, Jianzong Wang, Ning Cheng, Yifu Sun, Chuanyao Zhang, Jing Xiao
    MU · 28 Aug 2023
  • Test-Time Poisoning Attacks Against Test-Time Adaptation Models
    Tianshuo Cong, Xinlei He, Yun Shen, Yang Zhang
    AAML, TTA · 16 Aug 2023
  • A Blockchain-based Platform for Reliable Inference and Training of Large-Scale Models
    Sanghyeon Park, Junmo Lee, Soo-Mook Moon
    06 May 2023
  • Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
    Ethan Wisdom, Tejas Gokhale, Chaowei Xiao, Yezhou Yang
    30 Mar 2023
  • The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models
    H. M. Dolatabadi, S. Erfani, C. Leckie
    DiffM · 15 Mar 2023
  • Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector
    Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam
    AAML · 03 Jan 2023
  • Learned Systems Security
    R. Schuster, Jinyi Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot
    AAML · 20 Dec 2022
  • A Review of Speech-centric Trustworthy Machine Learning: Privacy, Safety, and Fairness
    Tiantian Feng, Rajat Hebbar, Nicholas Mehlman, Xuan Shi, Aditya Kommineni, Shrikanth Narayanan
    18 Dec 2022
  • ConfounderGAN: Protecting Image Data Privacy with Causal Confounder
    Qi Tian, Kun Kuang, Ke Jiang, Furui Liu, Zhihua Wang, Fei Wu
    04 Dec 2022
  • New data poison attacks on machine learning classifiers for mobile exfiltration
    M. A. Ramírez, Sangyoung Yoon, Ernesto Damiani, H. A. Hamadi, C. Ardagna, Nicola Bena, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
    AAML · 20 Oct 2022
  • Few-shot Backdoor Attacks via Neural Tangent Kernels
    J. Hayase, Sewoong Oh
    12 Oct 2022
  • The Value of Out-of-Distribution Data
    Ashwin De Silva, Rahul Ramesh, Carey E. Priebe, Pratik Chaudhari, Joshua T. Vogelstein
    OODD · 23 Aug 2022
  • Integrity Authentication in Tree Models
    Weijie Zhao, Yingjie Lao, Ping Li
    30 May 2022
  • One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
    Shutong Wu, Sizhe Chen, Cihang Xie, Xiaolin Huang
    AAML · 24 May 2022
  • Towards a Defense Against Federated Backdoor Attacks Under Continuous Training
    Shuai Wang, J. Hayase, Giulia Fanti, Sewoong Oh
    FedML · 24 May 2022
  • MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic
    Hang Wang, Zhen Xiang, David J. Miller, G. Kesidis
    AAML · 13 May 2022
  • Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
    Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
    AAML · 04 May 2022
  • Robust Unlearnable Examples: Protecting Data Against Adversarial Learning
    Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, Dacheng Tao
    28 Mar 2022
  • Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
    M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
    AAML · 21 Feb 2022
  • Privacy protection based on mask template
    Hao Wang, Yunkun Bai, Guangmin Sun, Jie Liu
    PICV · 13 Feb 2022
  • A Survey on Poisoning Attacks Against Supervised Machine Learning
    Wenjun Qiu
    AAML · 05 Feb 2022
  • Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations
    Weiqi Peng, Jinghui Chen
    AAML · 03 Feb 2022
  • Rank List Sensitivity of Recommender Systems to Interaction Perturbations
    Sejoon Oh, Berk Ustun, Julian McAuley, Srijan Kumar
    29 Jan 2022
  • Towards Adversarial Evaluations for Inexact Machine Unlearning
    Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru
    AAML, ELM, MU · 17 Jan 2022
  • Evaluation of Neural Networks Defenses and Attacks using NDCG and Reciprocal Rank Metrics
    Haya Brama, L. Dery, Tal Grinshpoun
    AAML · 10 Jan 2022
  • Distributed Machine Learning and the Semblance of Trust
    Dmitrii Usynin, Alexander Ziller, Daniel Rueckert, Jonathan Passerat-Palmbach, Georgios Kaissis
    21 Dec 2021
  • Adversarial Attacks Against Deep Generative Models on Data: A Survey
    Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou
    AAML · 01 Dec 2021
  • Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning
    Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage
    AAML · 23 Aug 2021
  • Poisoning Online Learning Filters: DDoS Attacks and Countermeasures
    W. Tann, E. Chang
    AAML · 27 Jul 2021
  • Generative Models for Security: Attacks, Defenses, and Opportunities
    L. A. Bauer, Vincent Bindschaedler
    21 Jul 2021
  • Poisoning the Search Space in Neural Architecture Search
    Robert Wu, Nayan Saxena, Rohan Jain
    OOD, AAML · 28 Jun 2021
  • Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
    Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, Maosong Sun
    SILM · 26 May 2021
  • De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
    Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu
    AAML · 08 May 2021
  • SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
    J. Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
    AAML · 22 Apr 2021