Invariant-based Robust Weights Watermark for Large Language Models

Qingxiao Guo
Xinjie Zhu
Yilong Ma
Hui Jin
Yunhao Wang
Weifeng Zhang
Xiaobing Guo
Main: 10 pages · Bibliography: 4 pages · Appendix: 8 pages · 8 figures · 7 tables
Abstract

Watermarking technology has gained significant attention due to the increasing importance of intellectual property (IP) rights, particularly as large language models (LLMs) are deployed on billions of resource-constrained edge devices. To counter the threat of IP theft by malicious users, this paper introduces a robust watermarking scheme for transformer models that requires no retraining or fine-tuning. The scheme generates a unique key for each user and derives a stable watermark value by solving linear constraints constructed from model invariants. Moreover, it uses a noise mechanism to hide watermark locations in multi-user scenarios, defending against collusion attacks. The approach is evaluated on three popular models (Llama3, Phi3, Gemma), and the experimental results confirm strong robustness across a range of attacks (fine-tuning, pruning, quantization, permutation, scaling, reversible-matrix, and collusion attacks).
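The core idea can be illustrated with a toy sketch. This is not the paper's implementation: the choice of invariant (the product of two consecutive weight matrices, which is unchanged by any invertible reparameterization of the shared hidden dimension, covering permutation and scaling), the keyed index selection, and all function names below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 8, 16, 8
W1 = rng.standard_normal((d_h, d_in))   # first layer weights
W2 = rng.standard_normal((d_out, d_h))  # second layer weights

def invariant(W1, W2):
    # Product of consecutive weight matrices: for any invertible T,
    # (W2 @ inv(T)) @ (T @ W1) == W2 @ W1, so this quantity is stable
    # under permutation, scaling, and general invertible-matrix attacks
    # on the shared hidden dimension.
    return W2 @ W1

def derive_watermark(W1, W2, user_key, n=4):
    # Hypothetical derivation: a per-user key seeds the selection of
    # invariant entries and a coefficient matrix; the watermark value is
    # the solution of the resulting linear system A x = b.
    P = invariant(W1, W2).ravel()
    key_rng = np.random.default_rng(user_key)
    idx = key_rng.choice(P.size, size=n, replace=False)  # key-selected entries
    A = key_rng.standard_normal((n, n))  # keyed coefficients (a.s. invertible)
    b = P[idx]                           # right-hand side built from invariants
    return np.linalg.solve(A, b)         # stable watermark value

# Simulate a reparameterization attack with a random invertible matrix T:
# the derived watermark is unchanged because the invariant is unchanged.
T = rng.standard_normal((d_h, d_h))
wm_before = derive_watermark(W1, W2, user_key=42)
wm_after = derive_watermark(T @ W1, W2 @ np.linalg.inv(T), user_key=42)
assert np.allclose(wm_before, wm_after, atol=1e-6)
print("watermark survives the attack:", wm_before)
```

Because the right-hand side is built entirely from invariant quantities, any attack that only reparameterizes the hidden dimension leaves the solved watermark value intact; robustness to fine-tuning, pruning, and quantization would instead rest on the stability of the invariant under small weight perturbations.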
