Composed Image Retrieval (CIR) aims to retrieve target images from a gallery based on a reference image and modification text as a combined query. Recent approaches focus on balancing global information from the two modalities and encoding the query into a unified feature for retrieval. However, due to insufficient attention to fine-grained details, these coarse fusion methods often struggle to handle subtle visual alterations or intricate textual instructions. In this work, we propose DetailFusion, a novel dual-branch framework that effectively coordinates information across global and detailed granularities, thereby enabling detail-enhanced CIR. Our approach leverages atomic detail variation priors derived from an image editing dataset, supplemented by a detail-oriented optimization strategy, to develop a Detail-oriented Inference Branch. Furthermore, we design an Adaptive Feature Compositor that dynamically fuses global and detailed features based on the fine-grained information of each multimodal query. Extensive experiments and ablation analyses not only demonstrate that our method achieves state-of-the-art performance on both the CIRR and FashionIQ datasets but also validate the effectiveness and cross-domain adaptability of detail enhancement for CIR.
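The abstract only states the behavior of the Adaptive Feature Compositor (a query-dependent blend of global and detail features), not its implementation. The following is a minimal sketch of one plausible gated-fusion design; the module name, layer sizes, and sigmoid gating are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class AdaptiveFeatureCompositor(nn.Module):
    """Hypothetical sketch: blend a global query feature and a detail-oriented
    query feature using a gate predicted from the query itself. The gating form
    and dimensions are assumptions, not the paper's implementation."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Predict a per-query fusion weight from both branch features.
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, global_feat: torch.Tensor, detail_feat: torch.Tensor) -> torch.Tensor:
        # global_feat, detail_feat: (batch, dim) query embeddings from the two branches.
        alpha = self.gate(torch.cat([global_feat, detail_feat], dim=-1))  # (batch, 1)
        fused = alpha * detail_feat + (1.0 - alpha) * global_feat
        # L2-normalize so the fused query can be matched to gallery features by cosine similarity.
        return nn.functional.normalize(fused, dim=-1)


if __name__ == "__main__":
    compositor = AdaptiveFeatureCompositor(dim=512)
    g = torch.randn(4, 512)  # global-branch query features
    d = torch.randn(4, 512)  # detail-branch query features
    print(compositor(g, d).shape)  # torch.Size([4, 512])
```

Under this assumed design, queries whose modification text hinges on subtle details would push the gate toward the detail branch, while coarse edits would rely more on the global feature.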
@article{yang2025_2505.17796,
  title   = {DetailFusion: A Dual-branch Framework with Detail Enhancement for Composed Image Retrieval},
  author  = {Yuxin Yang and Yinan Zhou and Yuxin Chen and Ziqi Zhang and Zongyang Ma and Chunfeng Yuan and Bing Li and Lin Song and Jun Gao and Peng Li and Weiming Hu},
  journal = {arXiv preprint arXiv:2505.17796},
  year    = {2025}
}