ResearchTrend.AI

Exploiting Vulnerabilities in Speech Translation Systems through Targeted Adversarial Attacks

2 March 2025
Chang Liu
Haolin Wu
Xi Yang
Kui Zhang
Cong Wu
Weiming Zhang
Nenghai Yu
Tianwei Zhang
Qing Guo
Jie Zhang
    AAML
Abstract

As speech translation (ST) systems become increasingly prevalent, understanding their vulnerabilities is crucial for ensuring robust and reliable communication. However, limited work has explored this issue in depth. This paper explores methods of compromising these systems through imperceptible audio manipulations. Specifically, we present two innovative approaches: (1) the injection of perturbations into source audio, and (2) the generation of adversarial music designed to guide targeted translation; we also conduct more practical over-the-air attacks in the physical world. Our experiments reveal that carefully crafted audio perturbations can mislead translation models into producing targeted, harmful outputs, while adversarial music achieves this goal more covertly by exploiting the natural imperceptibility of music. These attacks prove effective across multiple languages and translation models, highlighting a systemic vulnerability in current ST architectures. The implications of this research extend beyond immediate security concerns, shedding light on the interpretability and robustness of neural speech processing systems. Our findings underscore the need for advanced defense mechanisms and more resilient architectures in the realm of audio systems. More details and samples can be found at this https URL.
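The targeted-perturbation idea in approach (1) can be illustrated with a toy sketch: a gradient-based, PGD-style attack that finds a small additive perturbation pushing a model's output toward an attacker-chosen target. This is not the paper's actual method or model; the linear "translation" model, shapes, and hyperparameters below are stand-in assumptions for illustration only.

```python
import numpy as np

# Toy stand-in for a speech translation model: a fixed linear map from
# "audio" features to class logits. The real paper attacks neural ST
# models; everything here is an illustrative assumption.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 16))  # 4 "translation" classes, 16 audio features

def logits(x):
    return W @ x

def targeted_pgd(x, target, eps=0.1, step=0.02, iters=200):
    """Craft a small perturbation delta (||delta||_inf <= eps) that pushes
    the model's prediction toward `target`, PGD-style."""
    delta = np.zeros_like(x)
    for _ in range(iters):
        z = logits(x + delta)
        p = np.exp(z - z.max())
        p /= p.sum()                                 # softmax probabilities
        # Gradient of cross-entropy (w.r.t. the input) for the target class
        grad = W.T @ (p - np.eye(len(z))[target])
        delta -= step * np.sign(grad)                # step toward the target
        delta = np.clip(delta, -eps, eps)            # keep perturbation small
    return delta

x = rng.standard_normal(16)        # clean "audio" feature vector
delta = targeted_pgd(x, target=2)
print(int(np.argmax(logits(x + delta))))  # predicted class after the attack
```

The `eps` bound is what makes the perturbation "imperceptible" in spirit: the adversarial input stays within a small L-infinity ball around the clean audio while the loss toward the target translation is driven down.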

@article{liu2025_2503.00957,
  title={Exploiting Vulnerabilities in Speech Translation Systems through Targeted Adversarial Attacks},
  author={Chang Liu and Haolin Wu and Xi Yang and Kui Zhang and Cong Wu and Weiming Zhang and Nenghai Yu and Tianwei Zhang and Qing Guo and Jie Zhang},
  journal={arXiv preprint arXiv:2503.00957},
  year={2025}
}