ResearchTrend.AI
Entropy-Guided Watermarking for LLMs: A Test-Time Framework for Robust and Traceable Text Generation

16 April 2025
Shizhan Cai
Liang Ding
Dacheng Tao
Abstract

The rapid development of Large Language Models (LLMs) has intensified concerns about content traceability and potential misuse. Existing watermarking schemes for sampled text often face a trade-off between maintaining text quality and ensuring robust detection against various attacks. To address this, we propose a novel watermarking scheme that improves both detectability and text quality by introducing a cumulative watermark entropy threshold. Our approach is compatible with and generalizes existing sampling functions, enhancing adaptability. Experimental results across multiple LLMs show that our scheme significantly outperforms existing methods, achieving over 80% improvements on widely used datasets, e.g., MATH and GSM8K, while maintaining high detection accuracy.
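The abstract does not spell out the paper's exact sampling function or threshold mechanics, but the core idea (accumulate per-step entropy and only embed the watermark once the budget clears a threshold, so low-entropy, quality-critical steps stay untouched) can be sketched as follows. This is a minimal illustration in the spirit of green-list biasing watermarks; the function names, the `green_frac` and `bias` parameters, the per-step green list keyed on the step index, and the reset-after-embedding behavior are all assumptions for the sketch, not the authors' method.

```python
import math
import random

def entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def sample_with_entropy_gate(step_probs, threshold=2.0, green_frac=0.5,
                             bias=2.0, seed=0):
    """Illustrative sketch: accumulate per-step entropy and bias sampling
    toward a pseudo-random 'green' vocabulary subset only once the
    cumulative entropy exceeds the threshold. Low-entropy steps (where
    the model is near-deterministic) are sampled unmodified."""
    rng = random.Random(seed)
    tokens, cum_entropy = [], 0.0
    for probs in step_probs:
        cum_entropy += entropy(probs)
        if cum_entropy >= threshold:
            # Green list keyed on the RNG state (a stand-in for a hash
            # of the preceding context, as in green-list watermarks).
            k = max(1, int(green_frac * len(probs)))
            green = set(rng.sample(range(len(probs)), k))
            weights = [p * math.exp(bias) if i in green else p
                       for i, p in enumerate(probs)]
            total = sum(weights)
            probs = [w / total for w in weights]
            cum_entropy = 0.0  # reset the budget after embedding a bit
        tokens.append(rng.choices(range(len(probs)), weights=probs)[0])
    return tokens
```

Detection would mirror this gating: a verifier with the same key recomputes which steps carried enough entropy to be watermarked and tests only those tokens for green-list membership.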

@article{cai2025_2504.12108,
  title={Entropy-Guided Watermarking for LLMs: A Test-Time Framework for Robust and Traceable Text Generation},
  author={Shizhan Cai and Liang Ding and Dacheng Tao},
  journal={arXiv preprint arXiv:2504.12108},
  year={2025}
}