Highly Compressed Tokenizer Can Generate Without Training

9 June 2025
Lukas Lao Beyer
Tianhong Li
Xinlei Chen
Sertac Karaman
Kaiming He
arXiv: 2506.08257 (abs | PDF | HTML)
Abstract

Commonly used image tokenizers produce a 2D grid of spatially arranged tokens. In contrast, so-called 1D image tokenizers represent images as highly compressed one-dimensional sequences of as few as 32 discrete tokens. We find that the high degree of compression achieved by a 1D tokenizer with vector quantization enables image editing and generative capabilities through heuristic manipulation of tokens. Even very crude manipulations, such as copying and replacing tokens between the latent representations of two images, enable fine-grained image editing by transferring appearance and semantic attributes. Motivated by the expressivity of the 1D tokenizer's latent space, we construct an image generation pipeline that leverages gradient-based test-time optimization of tokens with plug-and-play loss functions such as reconstruction loss or CLIP similarity. We demonstrate our approach on inpainting and text-guided image editing, and show that it can generate diverse and realistic samples without training any generative model.
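
To make the copy-and-replace idea concrete, here is a minimal PyTorch sketch, assuming a pretrained 1D VQ tokenizer that exposes encode/decode between images and (B, 32) sequences of discrete token ids. The object and method names are placeholders, not the authors' actual interface.

import torch

def transfer_tokens(tokenizer, img_src, img_tgt, positions):
    """Copy the tokens at `positions` from the source image's latent
    sequence into the target's, then decode the edited sequence."""
    with torch.no_grad():
        tok_src = tokenizer.encode(img_src)  # (B, 32) discrete token ids
        tok_tgt = tokenizer.encode(img_tgt)  # (B, 32) discrete token ids
        edited = tok_tgt.clone()
        edited[:, positions] = tok_src[:, positions]  # crude copy-and-replace
        return tokenizer.decode(edited)  # image with transferred attributes

Because the 32 tokens are not arranged on a spatial grid, swapping even a few positions transfers appearance or semantic attributes, per the abstract, rather than a rectangular patch.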
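The test-time optimization pipeline can be sketched similarly, here instantiated for inpainting with a masked reconstruction loss. Since discrete VQ codes are not differentiable, this sketch optimizes continuous (B, 32, D) token embeddings directly; how the paper handles quantization (e.g. a straight-through estimator) is an assumption here, and `decoder` stands in for the tokenizer's decoder network.

import torch

def inpaint_by_token_optimization(decoder, z_init, image, mask, steps=200, lr=0.1):
    """`mask` is 1 on observed pixels and 0 in the hole; optimizes the
    token embeddings so the decoded image matches the observed region."""
    z = z_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        recon = decoder(z)
        # Plug-and-play loss: reconstruction on the known region only; a
        # CLIP-similarity term could be swapped in for text-guided editing.
        loss = ((recon - image) ** 2 * mask).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return decoder(z)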

@article{beyer2025_2506.08257,
  title={Highly Compressed Tokenizer Can Generate Without Training},
  author={Lukas Lao Beyer and Tianhong Li and Xinlei Chen and Sertac Karaman and Kaiming He},
  journal={arXiv preprint arXiv:2506.08257},
  year={2025}
}