
Chain-of-Talkers (CoTalk): Fast Human Annotation of Dense Image Captions

Main: 7 pages
Appendix: 10 pages
Bibliography: 3 pages
Figures: 11
Tables: 13
Abstract

While densely annotated image captions significantly facilitate the learning of robust vision-language alignment, methodologies for systematically optimizing human annotation efforts remain underexplored. We introduce Chain-of-Talkers (CoTalk), an AI-in-the-loop methodology designed to maximize the number of annotated samples and improve their comprehensiveness under a fixed budget (e.g., total human annotation time). The framework is built on two key insights. First, sequential annotation reduces redundant workload compared with conventional parallel annotation: each subsequent annotator only needs to annotate the "residual", the visual information that previous annotations have not yet covered. Second, humans take in textual input faster by reading, but produce annotations with much higher throughput by talking; a multimodal read-and-speak interface therefore combines the two for optimal efficiency. We evaluate the framework from two aspects: an intrinsic evaluation assesses the comprehensiveness of semantic units, obtained by parsing detailed captions into object-attribute trees and analyzing their effective connections; an extrinsic evaluation measures the practical utility of the annotated captions for vision-language alignment. Experiments with eight participants show that CoTalk improves annotation speed (0.42 vs. 0.30 units/sec) and retrieval performance (41.13% vs. 40.52%) over the parallel method.
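To make the intrinsic metric and the "residual" idea concrete, here is a minimal sketch of parsing captions into object-attribute trees, counting semantic units, and crediting a later annotator in the chain only for units not already covered. The toy grammar and every helper name below (ObjectNode, parse_caption, merge) are hypothetical illustrations under simplifying assumptions, not the authors' implementation, which would use a full semantic parser.

```python
from dataclasses import dataclass, field

ATTRIBUTES = {"red", "small", "wooden", "striped"}   # toy attribute lexicon
STOPWORDS = {"a", "an", "the", "on", "of", "and"}    # skipped entirely


@dataclass
class ObjectNode:
    """One object in the object-attribute tree."""
    name: str
    attributes: set = field(default_factory=set)


def parse_caption(caption: str) -> dict:
    """Toy parse: words in ATTRIBUTES modify the next non-stopword noun."""
    nodes, pending = {}, set()
    for word in caption.lower().replace(",", " ").split():
        if word in STOPWORDS:
            continue
        if word in ATTRIBUTES:
            pending.add(word)
        else:
            node = nodes.setdefault(word, ObjectNode(word))
            node.attributes |= pending
            pending = set()
    return nodes


def semantic_units(nodes) -> int:
    """One unit per object plus one per attached attribute edge."""
    return sum(1 + len(n.attributes) for n in nodes.values())


def merge(base, extra):
    """Sequential ('chain') annotation: a later caption only contributes
    the units still missing from the accumulated tree."""
    for name, node in extra.items():
        acc = base.setdefault(name, ObjectNode(name))
        acc.attributes |= node.attributes
    return base


if __name__ == "__main__":
    chain = parse_caption("a red ball on a wooden table")
    before = semantic_units(chain)                    # 4 units so far
    merge(chain, parse_caption("a small striped ball"))
    gained = semantic_units(chain) - before           # 2 residual units
    print(f"residual units: {gained}; speed if 10 s: {gained / 10:.2f} units/sec")
```

Under this toy scoring, the second annotator is credited only for the two new attributes on "ball"; the same division of counted units by elapsed annotation time yields the units/sec figure reported above.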

@article{shen2025_2505.22627,
  title={Chain-of-Talkers (CoTalk): Fast Human Annotation of Dense Image Captions},
  author={Yijun Shen and Delong Chen and Fan Liu and Xingyu Wang and Chuanyi Zhang and Liang Yao and Yuhui Zheng},
  journal={arXiv preprint arXiv:2505.22627},
  year={2025}
}