
ForgetMark: Stealthy Fingerprint Embedding via Targeted Unlearning in Language Models

Zhenhua Xu
Haobo Zhang
Zhebo Wang
Qichen Liu
Haitao Xu
Wenpeng Xing
Meng Han
Main: 4 pages
2 figures
Bibliography: 2 pages
8 tables
Abstract

Existing invasive (backdoor) fingerprints suffer from high-perplexity triggers that are easily filtered, fixed response patterns exposed by heuristic detectors, and spurious activations on benign inputs. We introduce ForgetMark, a stealthy fingerprinting framework that encodes provenance via targeted unlearning. It builds a compact, human-readable key-value set with an assistant model and predictive-entropy ranking, then trains lightweight LoRA adapters to suppress the original values on their keys while preserving general capabilities. Ownership is verified under black- or gray-box access by aggregating likelihood and semantic evidence into a fingerprint success rate. By relying on probabilistic forgetting traces rather than fixed trigger-response patterns, ForgetMark avoids high-perplexity triggers, reduces detectability, and lowers the rate of false triggers on benign inputs. Across diverse architectures and settings, it achieves 100% ownership verification on fingerprinted models while maintaining standard performance, surpasses backdoor baselines in stealthiness and robustness to model merging, and remains effective under moderate incremental fine-tuning. Our code and data are available at this https URL.
