
GPT, But Backwards: Exactly Inverting Language Model Outputs

Adrians Skapars
Edoardo Manino
Youcheng Sun
Lucas C. Cordeiro
Main: 9 pages · 5 figures · Bibliography: 2 pages · 11 tables · Appendix: 4 pages
Abstract

While existing auditing techniques attempt to identify potential unwanted behaviours in large language models (LLMs), we address the complementary forensic problem of reconstructing the exact input that led to an existing LLM output, enabling post-incident analysis and potentially the detection of fake output reports. We formalise exact input reconstruction as a discrete optimisation problem with a unique global minimum and introduce SODA, an efficient gradient-based algorithm that operates on a continuous relaxation of the input search space with periodic restarts and parameter decay. Through comprehensive experiments on LLMs ranging in size from 33M to 3B parameters, we demonstrate that SODA significantly outperforms existing approaches. We succeed in fully recovering 79.5% of shorter out-of-distribution inputs from next-token logits, without a single false positive, but struggle to extract private information from the outputs of longer (15+ token) input sequences. This suggests that standard deployment practices may currently provide adequate protection against malicious use of our method. Our code is available at this https URL.
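To make the optimisation recipe in the abstract concrete, below is a minimal, self-contained PyTorch sketch of the general idea: gradient descent over a continuous relaxation of the one-hot input matrix, with periodic restarts and a decaying temperature parameter. This is not the authors' SODA implementation; the toy stand-in model, the MSE loss, the schedules, and all hyperparameters are illustrative assumptions made for this sketch.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, SEQ_LEN = 50, 16, 5

# Stand-in "LLM": token embeddings plus a linear head over the flattened
# sequence. The real method targets actual LLMs; this toy just keeps the
# sketch runnable and order-sensitive.
embed = torch.nn.Embedding(VOCAB, DIM)
head = torch.nn.Linear(SEQ_LEN * DIM, VOCAB)
for p in list(embed.parameters()) + list(head.parameters()):
    p.requires_grad_(False)  # the model is fixed; only the input is optimised

def next_token_logits(soft_onehot):
    # soft_onehot: (SEQ_LEN, VOCAB), each row a distribution over tokens.
    emb = soft_onehot @ embed.weight   # soft token embeddings, (SEQ_LEN, DIM)
    return head(emb.flatten())         # next-token logits, (VOCAB,)

# Observed output we want to invert, produced by an unknown discrete input.
true_ids = torch.randint(0, VOCAB, (SEQ_LEN,))
target = next_token_logits(F.one_hot(true_ids, VOCAB).float())

best_loss, best_ids = float("inf"), None
for restart in range(5):                       # periodic restarts
    z = torch.randn(SEQ_LEN, VOCAB, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.5)
    temp = 1.0
    for _ in range(300):
        soft = F.softmax(z / temp, dim=-1)     # continuous relaxation of one-hots
        loss = F.mse_loss(next_token_logits(soft), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        temp *= 0.995                          # parameter decay: anneal toward discrete
    ids = z.argmax(dim=-1)                     # project back to discrete tokens
    hard = F.mse_loss(next_token_logits(F.one_hot(ids, VOCAB).float()), target)
    if hard.item() < best_loss:
        best_loss, best_ids = hard.item(), ids

# A (near-)zero loss on the discrete candidate means it reproduces the
# observed logits exactly; recovery of the true input is not guaranteed
# on this toy, which is why restarts help.
print("recovered == true input:", bool(torch.equal(best_ids, true_ids)))

The key design choice the sketch illustrates is that the discrete search over token sequences is relaxed into a differentiable one, while the temperature decay and the final argmax projection pull the solution back onto the discrete vocabulary, where an exact logit match can be checked directly.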

@article{skapars2025_2507.01693,
  title={GPT, But Backwards: Exactly Inverting Language Model Outputs},
  author={Adrians Skapars and Edoardo Manino and Youcheng Sun and Lucas C. Cordeiro},
  journal={arXiv preprint arXiv:2507.01693},
  year={2025}
}