
IMMACULATE: A Practical LLM Auditing Framework via Verifiable Computation

Yanpei Guo
Wenjie Qu
Linyu Wu
Shengfang Zhai
Lionel Z. Wang
Ming Xu
Yue Liu
Binhang Yuan
Dawn Song
Jiaheng Zhang
Main: 8 pages · 8 figures · 6 tables · Bibliography: 2 pages · Appendix: 4 pages
Abstract

Commercial large language models are typically deployed as black-box API services, requiring users to trust providers to execute inference correctly and report token usage honestly. We present IMMACULATE, a practical auditing framework that detects economically motivated deviations, such as model substitution, quantization abuse, and token overbilling, without trusted hardware or access to model internals. IMMACULATE selectively audits a small fraction of requests using verifiable computation, achieving strong detection guarantees while amortizing cryptographic overhead. Experiments on dense and MoE models show that IMMACULATE reliably distinguishes benign and malicious executions with under 1% throughput overhead. Our code is published at this https URL.
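The intuition behind selective auditing can be illustrated with a back-of-the-envelope model (our own illustration, not the paper's actual protocol): if each request is independently audited with probability p via a verifiable-computation check, and the provider deviates on a fraction f of requests, then the chance of catching at least one deviation after n requests is 1 − (1 − pf)^n, which approaches 1 quickly even for small p.

```python
import random

def detection_probability(audit_rate: float, cheat_rate: float, num_requests: int) -> float:
    """Probability that at least one deviated request gets audited,
    assuming independent Bernoulli sampling (illustrative model only)."""
    return 1 - (1 - audit_rate * cheat_rate) ** num_requests

def simulate(audit_rate: float, cheat_rate: float, num_requests: int, seed: int = 0) -> bool:
    """Monte Carlo version: return True if any cheated request is audited."""
    rng = random.Random(seed)
    for _ in range(num_requests):
        cheated = rng.random() < cheat_rate   # provider deviates on this request
        audited = rng.random() < audit_rate   # auditor samples this request
        if cheated and audited:
            return True  # the verifiable check would expose the deviation
    return False

# Even auditing only 5% of requests catches a provider that deviates
# on 10% of traffic with >99% probability over 1,000 requests.
print(detection_probability(0.05, 0.10, 1000))
```

The parameter names and the independence assumption are hypothetical simplifications; the paper's actual sampling strategy and detection bound may differ.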
