
What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models

Abstract

Instruction-based image editing models offer increased personalization opportunities in generative tasks. However, properly evaluating their results is challenging, and most existing metrics lag in alignment with human judgment and explainability. To tackle these issues, we introduce DICE (DIfference Coherence Estimator), a model designed to detect localized differences between the original and the edited image and to assess their relevance to the given modification request. DICE consists of two key components: a difference detector and a coherence estimator, both built on an autoregressive Multimodal Large Language Model (MLLM) and trained using a strategy that leverages self-supervision, distillation from inpainting networks, and full supervision. Through extensive experiments, we evaluate each stage of our pipeline and compare different MLLMs within the proposed framework. We demonstrate that DICE effectively identifies coherent edits and evaluates images generated by different editing models with a strong correlation with human judgment. We publicly release our source code, models, and data.
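
The abstract describes a two-stage pipeline: an MLLM-based difference detector followed by a coherence estimator that relates each detected change to the edit instruction. Below is a minimal Python sketch of how such an evaluator could be wired together. The DICE class, the Difference record, the generate(images, prompt) interface, the prompts, and the yes/no scoring rule are all illustrative assumptions, not the authors' released implementation.

    # Hypothetical sketch of a two-stage detect-then-score evaluator,
    # loosely following the pipeline described in the abstract.
    from dataclasses import dataclass

    @dataclass
    class Difference:
        bbox: tuple       # (x, y, w, h) region where the images differ
        description: str  # MLLM caption of what changed in that region

    class DICE:
        def __init__(self, mllm):
            # `mllm` stands in for any autoregressive multimodal LLM
            # exposing a generate(images, prompt) -> str interface
            # (an assumption made for this sketch).
            self.mllm = mllm

        def detect_differences(self, original, edited):
            """Stage 1: prompt the MLLM to enumerate localized changes."""
            raw = self.mllm.generate(
                images=[original, edited],
                prompt="List each localized difference as 'x,y,w,h: description'.",
            )
            diffs = []
            for line in raw.splitlines():
                coords, _, desc = line.partition(":")
                bbox = tuple(int(v) for v in coords.split(","))
                diffs.append(Difference(bbox=bbox, description=desc.strip()))
            return diffs

        def estimate_coherence(self, diffs, instruction):
            """Stage 2: score each detected difference against the edit request."""
            scores = []
            for d in diffs:
                answer = self.mllm.generate(
                    images=[],
                    prompt=(
                        f"Edit request: {instruction}\n"
                        f"Observed change: {d.description}\n"
                        "Is this change coherent with the request? Answer yes or no."
                    ),
                )
                scores.append(1.0 if answer.strip().lower().startswith("yes") else 0.0)
            # Overall score: fraction of detected changes coherent with the request.
            return sum(scores) / len(scores) if scores else 1.0

    # Usage with a trivial stand-in backbone, just to show the flow:
    class EchoMLLM:
        def generate(self, images, prompt):
            if "localized" in prompt:
                return "10,20,30,40: the sky was recolored"
            return "yes"

    dice = DICE(EchoMLLM())
    diffs = dice.detect_differences("original.png", "edited.png")
    print(dice.estimate_coherence(diffs, "make the sky pink"))  # -> 1.0

In this sketch the final score is just the fraction of detected changes judged coherent with the request; the actual model is trained end-to-end with self-supervision, inpainting distillation, and full supervision rather than relying on a hand-written aggregation rule.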

@article{baraldi2025_2505.20405,
  title={What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models},
  author={Lorenzo Baraldi and Davide Bucciarelli and Federico Betti and Marcella Cornia and Lorenzo Baraldi and Nicu Sebe and Rita Cucchiara},
  journal={arXiv preprint arXiv:2505.20405},
  year={2025}
}
