DragNeXt: Rethinking Drag-Based Image Editing

Main: 9 pages, 22 figures, 1 table; Bibliography: 2 pages; Appendix: 21 pages
Abstract

Drag-Based Image Editing (DBIE), which allows users to manipulate images by directly dragging objects within them, has recently attracted much attention from the community. However, it faces two key challenges: (i) point-based drag is often highly ambiguous and difficult to align with users' intentions; (ii) current DBIE methods primarily rely on alternating between motion supervision and point tracking, which is not only cumbersome but also fails to produce high-quality results. These limitations motivate us to explore DBIE from a new perspective: we redefine it as the deformation, rotation, and translation of user-specified handle regions. By requiring users to explicitly specify both drag regions and drag types, this formulation effectively resolves the ambiguity issue. Furthermore, we propose a simple yet effective editing framework, dubbed DragNeXt. It unifies DBIE as a Latent Region Optimization (LRO) problem and solves it through Progressive Backward Self-Intervention (PBSI), simplifying the overall DBIE procedure while further improving quality by fully leveraging region-level structural information and progressive guidance from intermediate drag states. We validate DragNeXt on our NextBench benchmark, and extensive experiments demonstrate that our method significantly outperforms existing approaches. Code will be released on GitHub.
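
The sketch below illustrates, in hypothetical PyTorch, what a region-level drag specification and one progressive latent-region optimization step could look like. It is only a minimal reading of the abstract: the names (RegionDrag, warp_mask, lro_step), the translation-only drag type, and the L1 motion-supervision-style loss are assumptions, not the authors' released implementation or API.

# Hypothetical sketch of region-level drag + one LRO/PBSI-style step.
# All names and the loss design are illustrative assumptions.
from dataclasses import dataclass
import torch
import torch.nn.functional as F

@dataclass
class RegionDrag:
    mask: torch.Tensor   # (H, W) boolean mask of the user-specified handle region
    drag_type: str       # e.g. "translate" | "rotate" | "deform"
    params: dict         # e.g. {"dx": 12, "dy": -5} for a translation drag

def warp_mask(mask: torch.Tensor, dx: int, dy: int) -> torch.Tensor:
    """Shift a boolean region mask by (dx, dy) pixels (illustrative warp)."""
    return torch.roll(mask, shifts=(dy, dx), dims=(0, 1))

def lro_step(latent: torch.Tensor, feat_fn, drag: RegionDrag,
             step_frac: float, lr: float = 0.01) -> torch.Tensor:
    """One progressive step: move handle-region features a fraction
    `step_frac` of the way toward the user-specified target location."""
    latent = latent.clone().requires_grad_(True)
    dx = int(drag.params["dx"] * step_frac)
    dy = int(drag.params["dy"] * step_frac)
    tgt_mask = warp_mask(drag.mask, dx, dy)   # intermediate drag state

    feats = feat_fn(latent)                   # (C, H, W) features of the latent
    src = feats[:, drag.mask]                 # features inside the handle region
    tgt = feats[:, tgt_mask]                  # features at the intermediate target
    loss = F.l1_loss(tgt, src.detach())       # pull target-region features toward source

    loss.backward()
    with torch.no_grad():
        latent = latent - lr * latent.grad    # update the latent region directly
    return latent.detach()

In such a sketch, feat_fn would stand in for a differentiable feature extractor (for instance, intermediate diffusion-UNet features at a fixed timestep), and calling lro_step repeatedly with step_frac increasing from 0 to 1 would play the role of the progressive guidance from intermediate drag states described in the abstract.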

@article{zhou2025_2506.07611,
  title={DragNeXt: Rethinking Drag-Based Image Editing},
  author={Yuan Zhou and Junbao Zhou and Qingshan Xu and Kesen Zhao and Yuxuan Wang and Hao Fei and Richang Hong and Hanwang Zhang},
  journal={arXiv preprint arXiv:2506.07611},
  year={2025}
}