SPIRIT: Patching Speech Language Models against Jailbreak Attacks

18 May 2025
Amirbek Djanibekov
Nurdaulet Mukhituly
Kentaro Inui
Hanan Aldarmaki
Nils Lukas
Abstract

Speech Language Models (SLMs) enable natural interactions via spoken instructions, which more effectively capture user intent by detecting nuances in speech. The richer speech signal introduces new security risks compared to text-based models, as adversaries can more easily bypass safety mechanisms by injecting imperceptible noise into speech. We analyze adversarial attacks and find that SLMs are substantially more vulnerable to jailbreak attacks, which achieve a 100% attack success rate in some instances. To improve security, we propose post-hoc patching defenses that intervene during inference by modifying the SLM's activations, improving robustness by up to 99% with (i) negligible impact on utility and (ii) no re-training. We conduct ablation studies to maximize the efficacy of our defenses and improve the utility/security trade-off, validated with large-scale benchmarks unique to SLMs.
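The abstract does not specify the exact intervention, so the sketch below illustrates one common form of inference-time activation patching: a PyTorch forward hook that suppresses a chosen set of hidden units in one transformer layer, with no re-training. The model attribute path, the layer index, and the neuron indices are all hypothetical placeholders, not details taken from the paper.

```python
# Minimal sketch of inference-time activation patching, assuming a
# HuggingFace-style decoder model. Zeroing a fixed set of neuron indices
# is an illustrative choice; the paper's actual intervention may differ.
import torch

def make_patch_hook(neuron_ids):
    """Return a forward hook that zeroes selected hidden units."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        # Suppress units hypothesized to mediate unsafe behavior.
        hidden[..., neuron_ids] = 0.0
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return hook

def patch_model(model, layer_idx, unsafe_neurons):
    """Attach the patch to one decoder layer; no weights are changed."""
    layer = model.model.layers[layer_idx]  # attribute path depends on the architecture
    return layer.register_forward_hook(make_patch_hook(unsafe_neurons))

# Usage (placeholder values):
# handle = patch_model(slm, layer_idx=12, unsafe_neurons=[17, 204, 991])
# ... run inference as usual ...
# handle.remove()  # detach the hook to restore original behavior
```

Because the hook is attached and removed at runtime, this style of defense is "post-hoc" in the sense the abstract describes: the base model is untouched, and the patch can be ablated per layer or per unit to trade off utility against robustness.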

View on arXiv
@article{djanibekov2025_2505.13541,
  title={SPIRIT: Patching Speech Language Models against Jailbreak Attacks},
  author={Amirbek Djanibekov and Nurdaulet Mukhituly and Kentaro Inui and Hanan Aldarmaki and Nils Lukas},
  journal={arXiv preprint arXiv:2505.13541},
  year={2025}
}