ResearchTrend.AI


StealthInk: A Multi-bit and Stealthy Watermark for Large Language Models

5 June 2025
Ya Jiang
Chuxiong Wu
Massieh Kordi Boroujeny
Brian L. Mark
Kai Zeng
Community: WaLM
Links: arXiv (abs) · PDF · HTML
Main: 9 pages · 9 figures · Bibliography: 3 pages · 11 tables · Appendix: 13 pages
Abstract

Watermarking for large language models (LLMs) offers a promising approach to identifying AI-generated text. Existing approaches, however, either compromise the distribution of the text the LLM would originally generate or are limited to embedding zero-bit information, which allows watermark detection but not identification of the source. We present StealthInk, a stealthy multi-bit watermarking scheme that preserves the original text distribution while embedding provenance data, such as userID, TimeStamp, and modelID, in LLM-generated text. This enables fast traceability without requiring access to the language model's API or prompts. We derive a lower bound on the number of tokens needed for watermark detection at a fixed equal error rate, which provides insight into how to enhance capacity. Comprehensive empirical evaluations across diverse tasks highlight the stealthiness, detectability, and resilience of StealthInk, establishing it as an effective solution for LLM watermarking applications.
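To make the multi-bit idea concrete, here is a deliberately simplified toy sketch (not StealthInk's actual algorithm, and not distribution-preserving): provenance fields are packed into a bit string, each generation step is steered toward a candidate token whose keyed-hash bit matches the payload bit for that position, and extraction recovers each bit by majority vote. All names (`encode_payload`, `embed`, `extract`, the 4-bit field widths, the demo key) are hypothetical illustrations.

```python
import hashlib
import string

def _prf_bit(context, token, key="demo-key"):
    """Keyed pseudorandom bit for a (context, token) pair."""
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return digest[0] & 1

def encode_payload(user_id, model_id, n_bits=8):
    """Pack toy provenance fields (4 bits each here) into a bit list."""
    value = (user_id << 4) | model_id
    return [(value >> i) & 1 for i in range(n_bits)]

def embed(candidate_lists, payload, key="demo-key"):
    """At step t, prefer a candidate token whose PRF bit matches
    payload[t mod n]; fall back to the top candidate otherwise."""
    out = []
    for t, candidates in enumerate(candidate_lists):
        want = payload[t % len(payload)]
        chosen = next((tok for tok in candidates
                       if _prf_bit(tuple(out), tok, key) == want),
                      candidates[0])
        out.append(chosen)
    return out

def extract(tokens, n_bits, key="demo-key"):
    """Recover each payload bit by majority vote over its positions."""
    votes = [[0, 0] for _ in range(n_bits)]
    for t, tok in enumerate(tokens):
        bit = _prf_bit(tuple(tokens[:t]), tok, key)
        votes[t % n_bits][bit] += 1
    return [int(ones >= zeros) for zeros, ones in votes]

# Round trip: 32 steps, 26 candidate "tokens" per step.
payload = encode_payload(user_id=5, model_id=3)
text = embed([list(string.ascii_lowercase)] * 32, payload)
recovered = extract(text, n_bits=8)
```

A real scheme differs in the two ways the abstract emphasizes: StealthInk preserves the model's output distribution (the greedy biasing above does not), and its detection threshold is governed by the derived lower bound on the number of tokens needed at a fixed equal error rate.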

@article{jiang2025_2506.05502,
  title={StealthInk: A Multi-bit and Stealthy Watermark for Large Language Models},
  author={Ya Jiang and Chuxiong Wu and Massieh Kordi Boroujeny and Brian L. Mark and Kai Zeng},
  journal={arXiv preprint arXiv:2506.05502},
  year={2025}
}