FOCUS: Unified Vision-Language Modeling for Interactive Editing Driven by Referential Segmentation

20 June 2025
Fan Yang, Yousong Zhu, Xin Li, Yufei Zhan, Hongyin Zhao, Shurong Zheng, Yaowei Wang, Ming Tang, Jinqiao Wang
Topics: MLLM, VLM
Main: 9 pages · 8 figures · 10 tables · Bibliography: 5 pages · Appendix: 8 pages
Abstract

Recent Large Vision-Language Models (LVLMs) demonstrate promising capabilities in unifying visual understanding and generative modeling, enabling both accurate content understanding and flexible editing. However, current approaches treat "what to see" and "how to edit" separately: they either perform isolated object segmentation or utilize segmentation masks merely as conditional prompts for local edit generation tasks, often relying on multiple disjointed models. To bridge these gaps, we introduce FOCUS, a unified LVLM that integrates segmentation-aware perception and controllable object-centric generation within an end-to-end framework. FOCUS employs a dual-branch visual encoder to simultaneously capture global semantic context and fine-grained spatial details. In addition, we leverage a MoVQGAN-based visual tokenizer to produce discrete visual tokens that enhance generation quality. To enable accurate and controllable image editing, we propose a progressive multi-stage training pipeline, where segmentation masks are jointly optimized and used as spatial condition prompts to guide the diffusion decoder. This strategy aligns visual encoding, segmentation, and generation modules, effectively bridging segmentation-aware perception with fine-grained visual synthesis. Extensive experiments across three core tasks (multimodal understanding, referring segmentation, and controllable image generation) demonstrate that FOCUS achieves strong performance by jointly optimizing visual perception and generative capabilities.
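
The abstract outlines an architecture rather than implementation details, so below is a minimal, self-contained PyTorch sketch of how the described components (a dual-branch visual encoder, a discrete visual tokenizer, a referring-segmentation head, and a mask-conditioned decoder) might be wired together. Every class name, module choice, and dimension here is an illustrative assumption: the MoVQGAN tokenizer and the diffusion decoder are replaced with toy stand-ins, and nothing below reflects the authors' actual code.

import torch
import torch.nn as nn

class DualBranchEncoder(nn.Module):
    """Hypothetical dual-branch visual encoder: one branch for global semantic
    context, one for fine-grained spatial detail (simple conv stand-ins here)."""
    def __init__(self, dim=256):
        super().__init__()
        self.semantic_branch = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=16, stride=16), nn.GELU())
        self.spatial_branch = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=4, stride=4), nn.GELU())

    def forward(self, image):
        return self.semantic_branch(image), self.spatial_branch(image)

class SegmentationHead(nn.Module):
    """Predicts a referring-segmentation mask from the fine-grained branch."""
    def __init__(self, dim=256):
        super().__init__()
        self.head = nn.Conv2d(dim, 1, kernel_size=1)

    def forward(self, spatial_feats):
        return torch.sigmoid(self.head(spatial_feats))

class MaskConditionedDecoder(nn.Module):
    """Toy placeholder for a diffusion decoder that consumes discrete visual
    tokens plus the predicted mask as a spatial condition prompt."""
    def __init__(self, dim=256):
        super().__init__()
        self.fuse = nn.Conv2d(dim + 1, dim, kernel_size=3, padding=1)
        self.to_rgb = nn.Conv2d(dim, 3, kernel_size=1)

    def forward(self, token_feats, mask):
        mask = nn.functional.interpolate(mask, size=token_feats.shape[-2:])
        return self.to_rgb(self.fuse(torch.cat([token_feats, mask], dim=1)))

class FocusSketch(nn.Module):
    """End-to-end wiring of the components named in the abstract (assumed)."""
    def __init__(self, dim=256, codebook_size=1024):
        super().__init__()
        self.encoder = DualBranchEncoder(dim)
        self.seg_head = SegmentationHead(dim)
        # Stand-in for a MoVQGAN-style quantizer producing discrete tokens.
        self.codebook = nn.Embedding(codebook_size, dim)
        self.decoder = MaskConditionedDecoder(dim)

    def forward(self, image):
        semantic, spatial = self.encoder(image)
        mask = self.seg_head(spatial)                       # "what to see"
        # Nearest-codebook quantization of the semantic features.
        b, c, h, w = semantic.shape
        flat = semantic.permute(0, 2, 3, 1).reshape(-1, c)
        tokens = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        token_feats = self.codebook(tokens).reshape(b, h, w, c).permute(0, 3, 1, 2)
        edited = self.decoder(token_feats, mask)            # "how to edit"
        return mask, edited

# Shape check only; no pretrained weights or real diffusion sampling involved.
model = FocusSketch()
mask, edited = model(torch.randn(1, 3, 256, 256))
print(mask.shape, edited.shape)

The point of the sketch is the data flow described in the abstract: a single forward pass produces the segmentation mask and immediately reuses it as a spatial condition for generation, rather than handing it off to a separate editing model.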

View on arXiv: https://arxiv.org/abs/2506.16806
@article{yang2025_2506.16806,
  title={FOCUS: Unified Vision-Language Modeling for Interactive Editing Driven by Referential Segmentation},
  author={Fan Yang and Yousong Zhu and Xin Li and Yufei Zhan and Hongyin Zhao and Shurong Zheng and Yaowei Wang and Ming Tang and Jinqiao Wang},
  journal={arXiv preprint arXiv:2506.16806},
  year={2025}
}