ReasonEdit: Editing Vision-Language Models using Human Reasoning

Jiaxing Qiu
Kaihua Hou
Roxana Daneshjou
Ahmed Alaa
Thomas Hartvigsen
Main: 7 pages · 13 figures · 2 tables · Bibliography: 3 pages · Appendix: 11 pages
Abstract

Model editing aims to correct errors in large, pretrained models without altering unrelated behaviors. While some recent works have edited vision-language models (VLMs), no existing editors tackle reasoning-heavy tasks, which typically require humans and models to reason about this http URL. We therefore propose ReasonEdit, the first VLM editor that lets users explain their reasoning during editing, introducing a new, practical model editing setup. ReasonEdit continuously stores human reasoning in a codebook and retrieves only relevant facts during inference using a novel topology-balanced multimodal embedding method inspired by network science. Across four VLMs on multiple rationale-based visual question answering datasets, ReasonEdit achieves state-of-the-art editing performance, ultimately showing that using human reasoning during editing greatly improves edit generalization.
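The store-then-retrieve mechanism the abstract describes can be sketched minimally: keep (embedding, rationale) pairs in a codebook and return only entries close to the query, so unrelated inputs leave the base model's behavior untouched. This is an illustrative sketch only; the class name, cosine threshold, and toy embeddings are assumptions, not the paper's actual topology-balanced embedding method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ReasoningCodebook:
    """Hypothetical codebook: store human rationales with their embeddings,
    retrieve only those similar enough to the query embedding."""

    def __init__(self, threshold=0.8):
        self.entries = []          # list of (embedding, rationale) pairs
        self.threshold = threshold  # similarity cutoff (assumed value)

    def add(self, embedding, rationale):
        self.entries.append((embedding, rationale))

    def retrieve(self, query):
        # Unrelated queries match nothing, so edits stay local.
        return [r for e, r in self.entries
                if cosine(query, e) >= self.threshold]

book = ReasoningCodebook()
book.add([1.0, 0.0], "The lesion is annular, so the earlier label was wrong.")
hits = book.retrieve([0.9, 0.1])    # similar query: rationale is retrieved
misses = book.retrieve([0.0, 1.0])  # unrelated query: nothing retrieved
```

In this sketch, editing is just appending to the codebook, and locality comes for free from the retrieval threshold; the paper's contribution lies in how the multimodal embeddings are balanced so that retrieval generalizes across rephrased questions.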
