Text2Stereo: Repurposing Stable Diffusion for Stereo Generation with Consistency Rewards

Main: 8 pages
9 figures
Bibliography: 3 pages
1 table
Abstract

In this paper, we propose a novel diffusion-based approach to generate stereo images given a text prompt. Since stereo image datasets with large baselines are scarce, training a diffusion model from scratch is not feasible. Therefore, we propose leveraging the strong priors learned by Stable Diffusion and fine-tuning it on stereo image datasets to adapt it to the task of stereo generation. To improve stereo consistency and text-to-image alignment, we further tune the model using prompt alignment and our proposed stereo consistency reward functions. Comprehensive experiments demonstrate the superiority of our approach in generating high-quality stereo images across diverse scenarios, outperforming existing methods.
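The abstract does not spell out how the stereo consistency reward is computed. Purely as an illustrative assumption (the function name, the disparity input, and the negative-L1 scoring below are not from the paper), a common proxy in stereo work is a photometric score obtained by warping the right view into the left view with a per-pixel disparity map and penalizing the residual:

```python
import numpy as np

def stereo_consistency_reward(left, right, disparity):
    """Hypothetical sketch of a photometric stereo-consistency reward.

    For each left-view pixel (y, x), its match in the right view sits at
    (y, x - d). We sample that pixel, compare it to the left view, and
    return the negative mean absolute error (higher = more consistent).
    """
    h, w = left.shape[:2]
    xs = np.tile(np.arange(w), (h, 1))          # column index per pixel
    ys = np.repeat(np.arange(h)[:, None], w, 1)  # row index per pixel

    src_x = np.round(xs - disparity).astype(int)  # matching right-view column
    valid = (src_x >= 0) & (src_x < w)            # drop out-of-frame samples

    warped = right[ys, np.clip(src_x, 0, w - 1)]  # right view warped to left
    err = np.abs(left.astype(float) - warped.astype(float))
    if err.ndim == 3:                             # average over color channels
        err = err.mean(axis=2)

    # Negative mean photometric error over valid pixels as the reward.
    return -err[valid].mean()
```

With a right view that is an exact horizontal shift of the left view and the matching constant disparity, this reward is maximal (zero); mismatched pairs score lower. A trained fine-tuning loop would use a differentiable variant of this idea.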

@article{garg2025_2506.05367,
  title={Text2Stereo: Repurposing Stable Diffusion for Stereo Generation with Consistency Rewards},
  author={Aakash Garg and Libing Zeng and Andrii Tsarov and Nima Khademi Kalantari},
  journal={arXiv preprint arXiv:2506.05367},
  year={2025}
}