ComplexBench-Edit: Benchmarking Complex Instruction-Driven Image Editing via Compositional Dependencies

15 June 2025
Chenglin Wang
Yucheng Zhou
Qianning Wang
Zhe Wang
Kai Zhang
Main: 6 pages, 6 figures, 4 tables; bibliography: 1 page
Abstract

Text-driven image editing has achieved remarkable success in following single instructions. However, real-world scenarios often involve complex, multi-step instructions, particularly "chain" instructions where operations are interdependent. Current models struggle with these intricate directives, and existing benchmarks inadequately evaluate such capabilities. Specifically, they often overlook multi-instruction and chain-instruction complexities, and common consistency metrics are flawed. To address this, we introduce ComplexBench-Edit, a novel benchmark designed to systematically assess model performance on complex, multi-instruction, and chain-dependent image editing tasks. ComplexBench-Edit also features a new vision consistency evaluation method that accurately assesses non-modified regions by excluding edited areas. Furthermore, we propose a simple yet powerful Chain-of-Thought (CoT)-based approach that significantly enhances the ability of existing models to follow complex instructions. Our extensive experiments demonstrate ComplexBench-Edit's efficacy in differentiating model capabilities and highlight the superior performance of our CoT-based method in handling complex edits. The data and code are released at this https URL.
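The region-excluded consistency idea described above can be illustrated with a short sketch: score how well the image is preserved only over pixels outside the region an edit was allowed to change. This is a minimal illustration assuming an SSIM-based metric and a per-edit binary mask; the function name, metric choice, and mask convention are assumptions, not the benchmark's actual procedure.

import numpy as np
from skimage.metrics import structural_similarity as ssim

def consistency_outside_edits(source: np.ndarray,
                              edited: np.ndarray,
                              edit_mask: np.ndarray) -> float:
    """Score preservation of non-edited regions (hypothetical sketch).

    source, edited: HxWx3 uint8 images of identical size.
    edit_mask: HxW boolean array, True where the instruction was
               allowed to change pixels (excluded from scoring).
    """
    keep = ~edit_mask  # pixels that should remain untouched
    if keep.sum() == 0:
        return 1.0  # nothing outside the edit region to compare
    # Compute a full-resolution SSIM map, then average it only
    # over the unedited pixels so edited areas do not dilute the score.
    _, ssim_map = ssim(source, edited, channel_axis=2, full=True)
    return float(ssim_map[keep].mean())

In practice the mask could come from the benchmark's annotations or from a grounding model; the key point is that edited areas are dropped before averaging, rather than comparing whole images.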

@article{wang2025_2506.12830,
  title={ComplexBench-Edit: Benchmarking Complex Instruction-Driven Image Editing via Compositional Dependencies},
  author={Chenglin Wang and Yucheng Zhou and Qianning Wang and Zhe Wang and Kai Zhang},
  journal={arXiv preprint arXiv:2506.12830},
  year={2025}
}