On the Fine-Grained Hardness of Inverting Generative Models

11 September 2023
Feyza Duman Keles
Chinmay Hegde
arXiv:2309.05795
Abstract

The objective of generative model inversion is to identify a size-$n$ latent vector that produces a generative model output closely matching a given target. This operation is a core computational primitive in numerous modern applications in computer vision and NLP. However, the problem is known to be computationally challenging, and NP-hard in the worst case. This paper provides a fine-grained view of the landscape of computational hardness for this problem. We establish several new hardness lower bounds for both exact and approximate model inversion. In exact inversion, the goal is to determine whether a target is contained in the range of a given generative model. Under the strong exponential time hypothesis (SETH), we show that the computational complexity of exact inversion is lower bounded by $\Omega(2^n)$ via a reduction from $k$-SAT; this strengthens known results. For the more practically relevant problem of approximate inversion, the goal is to determine whether a point in the model's range is close to a given target with respect to the $\ell_p$-norm. When $p$ is a positive odd integer, under SETH, we provide an $\Omega(2^n)$ complexity lower bound via a reduction from the closest vector problem (CVP). Finally, when $p$ is even, under the exponential time hypothesis (ETH), we provide a $2^{\Omega(n)}$ lower bound via reductions from Half-Clique and Vertex-Cover.
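To make the two decision problems concrete (notation reconstructed from the abstract, so read this as a sketch rather than the paper's formal statement): exact inversion asks, given a generative model $G : \mathbb{R}^n \to \mathbb{R}^d$ and a target $x$, whether some latent $z$ satisfies $G(z) = x$; approximate inversion asks whether some $z$ satisfies $\|G(z) - x\|_p \le \delta$ for a given threshold $\delta$. The sketch below illustrates the naive exponential-time baseline that the SETH lower bound says is essentially unavoidable: a brute-force sweep over $2^n$ candidate latents. The single-layer ReLU generator, the restriction to binary latents, and all parameter values are hypothetical choices for illustration only.

```python
import itertools
import numpy as np

def exact_invert_brute_force(G, n, target, atol=1e-9):
    """Brute-force exact-inversion check over binary latent vectors.

    Purely illustrative: enumerates all 2^n candidates z in {0, 1}^n and
    tests whether G(z) reproduces the target. The paper's SETH-based bound
    says no worst-case algorithm can do substantially better than this sweep.
    """
    for bits in itertools.product((0.0, 1.0), repeat=n):
        z = np.array(bits)
        if np.allclose(G(z), target, atol=atol):
            return z  # witness found: the target lies in the model's range
    return None  # no binary latent maps to the target

# Hypothetical single-layer ReLU generator, used only as a toy example.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 5))
G = lambda z: np.maximum(W @ z, 0.0)

z_true = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
print(exact_invert_brute_force(G, 5, G(z_true)))  # recovers a valid preimage
```

The approximate variant would replace the equality test with a norm check such as `np.linalg.norm(G(z) - target, ord=p) <= delta`; the paper's $\ell_p$ results show that, under SETH and ETH, this relaxation does not escape exponential worst-case complexity either.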
