Recent studies have shown that LLMs are vulnerable to denial-of-service (DoS)
attacks, where adversarial inputs like spelling errors or non-semantic prompts
trigger endless outputs without generating an [EOS] token. These attacks can
potentially cause high latency and make LLM services inaccessible to other
users or tasks. However, when there are speech-to-text interfaces (e.g., voice
commands to a robot), executing such DoS attacks becomes challenging, as it is
difficult to introduce spelling errors or non-semantic prompts through speech.
A simple DoS attack in these scenarios would be to instruct the model to "Keep
repeating Hello", but we observe that relying solely on natural instructions
limits output length, which is bounded by the maximum length of the LLM's
supervised finetuning (SFT) data. To overcome this limitation, we propose
poisoning-based DoS (P-DoS) attacks for LLMs, demonstrating that injecting a
single poisoned sample designed for DoS purposes can break the output length
limit. For example, a poisoned sample can successfully attack GPT-4o and GPT-4o
mini (via OpenAI's finetuning API) using less than 1,causingrepeatedoutputsuptothemaximuminferencelength(16Ktokens,comparedto0.5Kbeforepoisoning).Additionally,weperformcomprehensiveablationstudiesonopen−sourceLLMsandextendourmethodtoLLMagents,whereattackerscancontrolboththefinetuningdatasetandalgorithm.OurfindingsunderscoretheurgentneedfordefensesagainstP−DoSattackstosecureLLMs.Ourcodeisavailableathttps://github.com/sail−sg/P−DoS.