ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

3 April 2018
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
    AAML

Papers citing "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks"

50 / 258 papers shown
Sybil-based Virtual Data Poisoning Attacks in Federated Learning
Changxun Zhu, Qilong Wu, Lingjuan Lyu, Shibei Xue
AAML, FedML · 15 May 2025

Adversarial Attacks in Multimodal Systems: A Practitioner's Survey
Shashank Kapoor, Sanjay Surendranath Girija, Lakshit Arora, Dipen Pradhan, Ankit Shetgaonkar, Aman Raj
AAML · 06 May 2025

Adversarial Robustness of Deep Learning Models for Inland Water Body Segmentation from SAR Images
Siddharth Kothari, Srinivasan Murali, Sankalp Kothari, Ujjwal Verma, Jaya Sreevalsan-Nair
03 May 2025

FFCBA: Feature-based Full-target Clean-label Backdoor Attacks
Yangxu Yin, H. Chen, Yudong Gao, Peng Sun, Liantao Wu, Zehan Li, Wen Liu
AAML · 29 Apr 2025

Erased but Not Forgotten: How Backdoors Compromise Concept Erasure
Jonas Henry Grebe, Tobias Braun, Marcus Rohrbach, Anna Rohrbach
AAML · 29 Apr 2025

SFIBA: Spatial-based Full-target Invisible Backdoor Attacks
Yangxu Yin, H. Chen, Yudong Gao, Peng Sun, Zehan Li, Wen Liu
AAML · 29 Apr 2025

BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts
Qingyue Wang, Qi Pang, Xixun Lin, Shuai Wang, Daoyuan Wu
MoE · 24 Apr 2025

Poisoned Source Code Detection in Code Models
Ehab Ghannoum, Mohammad Ghafari
AAML · 19 Feb 2025

"I am bad": Interpreting Stealthy, Universal and Robust Audio Jailbreaks in Audio-Language Models
Isha Gupta, David Khachaturov, Robert D. Mullins
AAML, AuLLM · 02 Feb 2025

Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists
Joachim Baumann, Celestine Mendler-Dünner
17 Jan 2025

A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification
Yuan Ma, Xu Ma, Jiankang Wei, Jinmeng Tang, Xiaoyu Zhang, Yilun Lyu, Kehao Chen, Jingtong Huang
22 Dec 2024

Adversarial Hubness in Multi-Modal Retrieval
Tingwei Zhang, Fnu Suya, Rishi Jha, Collin Zhang, Vitaly Shmatikov
AAML · 18 Dec 2024

BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation
Haiyang Yu, Tian Xie, Jiaping Gui, Pengyang Wang, P. Yi, Yue Wu
17 Nov 2024

On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning
Yongyi Su, Yushu Li, Nanqing Liu, Kui Jia, Xulei Yang, Chuan-Sheng Foo, Xun Xu
TTA, AAML · 07 Oct 2024

Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
AAML · 01 Oct 2024

Sample-Independent Federated Learning Backdoor Attack in Speaker Recognition
Weida Xu, Yang Xu, Sicong Zhang
FedML, AAML · 25 Aug 2024

On ADMM in Heterogeneous Federated Learning: Personalization, Robustness, and Fairness
Shengkun Zhu, Jinshan Zeng, Sheng Wang, Yuan Sun, Xiaodong Li, Yuan Yao, Zhiyong Peng
23 Jul 2024

Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget
Vikash Sehwag, Xianghao Kong, Jingtao Li, Michael Spranger, Lingjuan Lyu
DiffM · 22 Jul 2024

Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning
Shihua Sun, Shridatt Sugrim, Angelos Stavrou, Haining Wang
AAML · 13 Jul 2024

Model-agnostic clean-label backdoor mitigation in cybersecurity environments
Giorgio Severi, Simona Boboila, J. Holodnak, K. Kratkiewicz, Rauf Izmailov, Alina Oprea
AAML · 11 Jul 2024

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
AAML, MU · 25 Jun 2024

When Swarm Learning meets energy series data: A decentralized collaborative learning design based on blockchain
Lei Xu, Yulong Chen, Yuntian Chen, Longfeng Nie, Xuetao Wei, Liang Xue, Dongxiao Zhang
07 Jun 2024

SAVA: Scalable Learning-Agnostic Data Valuation
Samuel Kessler, Tam Le, Vu Nguyen
TDI · 03 Jun 2024

Data Quality in Edge Machine Learning: A State-of-the-Art Survey
M. D. Belgoumri, Mohamed Reda Bouadjenek, Sunil Aryal, Hakim Hacid
01 Jun 2024

PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
AAML · 28 May 2024

Federated Behavioural Planes: Explaining the Evolution of Client Behaviour in Federated Learning
Dario Fenoglio, Gabriele Dominici, Pietro Barbiero, Alberto Tonda, M. Gjoreski, Marc Langheinrich
FedML · 24 May 2024

The Mosaic Memory of Large Language Models
Igor Shilov, Matthieu Meeus, Yves-Alexandre de Montjoye
24 May 2024

Effective and Robust Adversarial Training against Data and Label Corruptions
Pengfei Zhang, Zi Huang, Xin-Shun Xu, Guangdong Bai
07 May 2024

Corrective Machine Unlearning
Shashwat Goel, Ameya Prabhu, Philip Torr, Ponnurangam Kumaraguru, Amartya Sanyal
OnRL · 21 Feb 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

Manipulating Predictions over Discrete Inputs in Machine Teaching
Xiaodong Wu, Yufei Han, H. Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang
31 Jan 2024

End-to-End Anti-Backdoor Learning on Images and Time Series
Yujing Jiang, Xingjun Ma, S. Erfani, Yige Li, James Bailey
06 Jan 2024

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang
AAML · 14 Dec 2023

SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan
AAML · 07 Dec 2023

PACOL: Poisoning Attacks Against Continual Learners
Huayu Li, G. Ditzler
AAML · 18 Nov 2023

RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models
Jiong Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao
AAML · 16 Nov 2023

Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models
Yueqing Liang, Lu Cheng, Ali Payani, Kai Shu
15 Nov 2023

On the Proactive Generation of Unsafe Images From Text-To-Image Models Using Benign Prompts
Yixin Wu, Ning Yu, Michael Backes, Yun Shen, Yang Zhang
DiffM · 25 Oct 2023

Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm
S. M. Fazle, J. Mondal, Meem Arafat Manab, Xi Xiao, Sarfaraz Newaz
AAML · 18 Oct 2023

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Sze Jue Yang, Q. Nguyen, Chee Seng Chan, Khoa D. Doan
AAML, DiffM · 31 Aug 2023

Test-Time Poisoning Attacks Against Test-Time Adaptation Models
Tianshuo Cong, Xinlei He, Yun Shen, Yang Zhang
AAML, TTA · 16 Aug 2023

Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein
AAML · 15 Aug 2023

XGBD: Explanation-Guided Graph Backdoor Detection
Zihan Guan, Mengnan Du, Ninghao Liu
AAML · 08 Aug 2023

Exposing Hidden Attackers in Industrial Control Systems using Micro-distortions
S. Sourav, Binbin Chen
AAML · 29 Jul 2023

FedDefender: Client-Side Attack-Tolerant Federated Learning
Sungwon Park, Sungwon Han, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, Meeyoung Cha
FedML, AAML · 18 Jul 2023

Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives
Daniele Lunghi, A. Simitsis, O. Caelen, Gianluca Bontempi
AAML, FaML · 03 Jul 2023

On the Exploitability of Instruction Tuning
Manli Shu, Jiong Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein
SILM · 28 Jun 2023

A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking
Shaohui Mei, Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, Lap-Pui Chau
AAML · 21 Jun 2023

Poisoning Network Flow Classifiers
Giorgio Severi, Simona Boboila, Alina Oprea, J. Holodnak, K. Kratkiewicz, J. Matterer
AAML · 02 Jun 2023

Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness
Zhan Hu, Jun Zhu, Bo Zhang, Xiaolin Hu
AAML · 28 May 2023