ResearchTrend.AI
Poison in the Well: Feature Embedding Disruption in Backdoor Attacks

26 May 2025
Zhou Feng
Jiahao Chen
Chunyi Zhou
Yuwen Pu
Qingming Li
Shouling Ji
    AAML
Main: 5 pages · 3 figures · Bibliography: 1 page
Abstract

Backdoor attacks embed malicious triggers into training data, enabling attackers to manipulate neural network behavior during inference while maintaining high accuracy on benign inputs. However, existing backdoor attacks face limitations that hinder their effectiveness in real-world applications: excessive reliance on training data, poor stealth, and instability. This paper therefore introduces ShadowPrint, a versatile backdoor attack that targets feature embeddings within neural networks to achieve high attack success rates (ASRs) and stealthiness. Unlike traditional approaches, ShadowPrint reduces reliance on training-data access and operates effectively at exceedingly low poison rates (as low as 0.01%). It leverages a clustering-based optimization strategy to align feature embeddings, ensuring robust performance across diverse scenarios while maintaining stability and stealth. Extensive evaluations demonstrate that ShadowPrint achieves superior ASR (up to 100%), steady clean accuracy (CA, decaying by no more than 1% in most cases), and low DDR (averaging below 5%) in both clean-label and dirty-label settings, with poison rates ranging from 0.01% to 0.05%. These results set a new standard for backdoor attack capabilities and underscore the need for advanced defense strategies focused on feature-space manipulations.
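The abstract describes a clustering-based optimization that pulls the feature embeddings of poisoned samples into a tight cluster. As a minimal, hedged sketch of that idea (the function names and the specific loss are illustrative assumptions, not taken from the paper), such an objective can be written as the mean squared distance of each embedding to the shared centroid; minimizing it aligns the embeddings in feature space:

```python
def centroid(embeddings):
    """Mean vector of a list of equal-length embedding vectors."""
    n = len(embeddings)
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / n for i in range(dim)]


def alignment_loss(embeddings):
    """Mean squared Euclidean distance of each embedding to the centroid.

    Driving this toward zero collapses the embeddings into one cluster,
    the kind of feature-space alignment the abstract attributes to
    ShadowPrint's clustering-based optimization (illustrative only).
    """
    c = centroid(embeddings)
    n = len(embeddings)
    return sum(
        sum((x - cx) ** 2 for x, cx in zip(e, c)) for e in embeddings
    ) / n
```

In practice such a term would be optimized jointly with the model's task loss so that clean accuracy is preserved while poisoned inputs share an aligned embedding; this sketch only shows the alignment objective itself.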

@article{feng2025_2505.19821,
  title={Poison in the Well: Feature Embedding Disruption in Backdoor Attacks},
  author={Zhou Feng and Jiahao Chen and Chunyi Zhou and Yuwen Pu and Qingming Li and Shouling Ji},
  journal={arXiv preprint arXiv:2505.19821},
  year={2025}
}