Replace in Translation: Boost Concept Alignment in Counterfactual Text-to-Image

20 May 2025
Sifan Li
Ming Tao
Hao Zhao
Ling Shao
Hao Tang
Abstract

Text-to-Image (T2I) generation has become prevalent in recent years, and the most common conditional tasks have been optimized well. Counterfactual Text-to-Image, however, still stands in the way of a more versatile AIGC experience. For scenes that cannot happen in the real world or that violate physics, we should spare no effort to improve both the factual feel, i.e., synthesizing images that people find plausible, and the concept alignment, i.e., ensuring that all required objects appear in the same frame. In this paper, we focus on concept alignment. Since controllable T2I models have achieved satisfactory performance in real applications, we leverage this technology to replace the objects in a synthesized image in latent space step by step, turning the image from a common scene into a counterfactual scene that matches the prompt. We propose a strategy to instruct this replacing process, called Explicit Logical Narrative Prompt (ELNP), which uses the state-of-the-art language model DeepSeek to generate the instructions. Furthermore, to evaluate models' performance in counterfactual T2I, we design a metric that measures how many of the concepts required by the prompt are covered, on average, in the synthesized images. Extensive experiments and qualitative comparisons demonstrate that our strategy can boost concept alignment in counterfactual T2I.
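The abstract's coverage metric averages, over the synthesized images, the fraction of prompt-required concepts present in each image. The sketch below is one minimal way to compute such an average; `detect_concepts` is a hypothetical placeholder for whatever concept detector (e.g., an open-vocabulary detector or a VQA model) is plugged in, and none of these names come from the paper.

```python
# Minimal sketch of an average concept-coverage metric (assumed interpretation
# of the abstract's description, not the authors' released code).
from typing import Iterable, List, Set


def detect_concepts(image, candidate_concepts: Set[str]) -> Set[str]:
    """Hypothetical helper: return the subset of candidate concepts found in `image`."""
    raise NotImplementedError("plug in an open-vocabulary detector or VQA model here")


def average_concept_coverage(images: Iterable, required_concepts: Set[str]) -> float:
    """Average fraction of required prompt concepts present per synthesized image."""
    coverages: List[float] = []
    for image in images:
        found = detect_concepts(image, required_concepts)
        coverages.append(len(found & required_concepts) / len(required_concepts))
    return sum(coverages) / len(coverages) if coverages else 0.0
```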

@article{li2025_2505.14341,
  title={Replace in Translation: Boost Concept Alignment in Counterfactual Text-to-Image},
  author={Sifan Li and Ming Tao and Hao Zhao and Ling Shao and Hao Tang},
  journal={arXiv preprint arXiv:2505.14341},
  year={2025}
}