Image inpainting is the task of reconstructing missing or damaged parts of an image so that they blend seamlessly with the surrounding content. With the advent of advanced generative models, especially diffusion models and generative adversarial networks, inpainting has achieved remarkable improvements in visual quality and coherence. However, achieving seamless continuity along the border between generated and existing content remains a significant challenge. In this work, we propose two novel methods to address such discrepancies in diffusion-based inpainting models. First, we introduce a modified Variational Autoencoder that corrects color imbalances, ensuring that the final inpainted results are free of color mismatches. Second, we propose a two-step training strategy that improves the blending of generated and existing image content during the diffusion process. Through extensive experiments, we demonstrate that our methods effectively reduce discontinuity and produce high-quality inpainting results that are coherent and visually appealing.
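For context, the standard way diffusion-based inpainting keeps the known region fixed is per-step compositing (as popularized by RePaint): at every denoising step the known pixels are re-noised to the current noise level and hard-pasted outside the mask. The sketch below illustrates this baseline, not the authors' method; it assumes a diffusers-style scheduler exposing an add_noise method, and all function and parameter names are illustrative.

import torch

def blend_known_region(x_t, x_known, mask, scheduler, t):
    # Per-step compositing used by standard diffusion inpainting
    # (RePaint-style): re-noise the known image to the current
    # timestep and hard-paste it outside the mask, so the model
    # only synthesizes the masked region.
    #   x_t     -- current latent/image estimate at timestep t
    #   x_known -- original image (or its VAE latent)
    #   mask    -- 1 where content must be generated, 0 where known
    noise = torch.randn_like(x_known)
    # diffusers-style schedulers expose add_noise(sample, noise, timesteps)
    x_known_t = scheduler.add_noise(x_known, noise, torch.tensor([t]))
    # Hard composite along the mask border; the seam this step can
    # leave behind is the kind of discontinuity the paper targets.
    return mask * x_t + (1.0 - mask) * x_known_t

Because neither the diffusion model nor the VAE decoder is trained on such hard composites, color and texture mismatches tend to concentrate exactly along the mask edge, which is the failure mode the two proposed fixes address.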
@article{hou2025_2506.12530,
  title={Towards Seamless Borders: A Method for Mitigating Inconsistencies in Image Inpainting and Outpainting},
  author={Xingzhong Hou and Jie Wu and Boxiao Liu and Yi Zhang and Guanglu Song and Yunpeng Liu and Yu Liu and Haihang You},
  journal={arXiv preprint arXiv:2506.12530},
  year={2025}
}