CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation

Abstract

Diffusion models have emerged as a dominant approach for text-to-image generation. Key components such as human preference alignment and classifier-free guidance play a crucial role in ensuring generation quality. However, applying them independently, as current text-to-image models do, still falls short of strong text-image alignment, high generation quality, and consistency with human aesthetic standards. In this work, we explore, for the first time, facilitating the collaboration between human preference alignment and test-time sampling to unlock the potential of text-to-image models. We introduce CHATS (Combining Human-Aligned optimization and Test-time Sampling), a novel generative framework that separately models the preferred and dispreferred distributions and employs a proxy-prompt-based sampling strategy to exploit the useful information contained in both. We observe that CHATS exhibits exceptional data efficiency, achieving strong performance with only a small, high-quality fine-tuning dataset. Extensive experiments demonstrate that CHATS surpasses traditional preference-alignment methods, setting a new state of the art across standard benchmarks.
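The abstract does not spell out the sampling rule, but a test-time combination of separately modeled preferred and dispreferred distributions can be sketched in the style of classifier-free guidance: steer the denoiser toward the preferred prediction and away from the dispreferred one. The function below is a minimal illustrative sketch under that assumption; the weights `w_pref` and `w_dispref` and the exact combination are hypothetical, not the paper's formula.

```python
import numpy as np

def guided_noise_prediction(eps_uncond, eps_pref, eps_dispref,
                            w_pref=5.0, w_dispref=1.0):
    """CFG-style combination of noise predictions (illustrative only).

    eps_uncond:  unconditional noise prediction
    eps_pref:    prediction conditioned on the preferred distribution
    eps_dispref: prediction conditioned on the dispreferred distribution

    Pushes the sample toward the preferred direction and away from
    the dispreferred one, relative to the unconditional baseline.
    """
    eps_uncond = np.asarray(eps_uncond, dtype=float)
    eps_pref = np.asarray(eps_pref, dtype=float)
    eps_dispref = np.asarray(eps_dispref, dtype=float)
    return (eps_uncond
            + w_pref * (eps_pref - eps_uncond)
            - w_dispref * (eps_dispref - eps_uncond))

# Toy example: scalar "predictions" at one denoising step.
out = guided_noise_prediction(0.0, 1.0, 0.5)
```

With `eps_uncond=0.0`, `eps_pref=1.0`, `eps_dispref=0.5`, the result is `0 + 5*1 - 1*0.5 = 4.5`, showing how the dispreferred term subtracts from the guided update.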

@article{fu2025_2502.12579,
  title={CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation},
  author={Minghao Fu and Guo-Hua Wang and Liangfu Cao and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang},
  journal={arXiv preprint arXiv:2502.12579},
  year={2025}
}