
Deep-Reporter: Deep Research for Grounded Multimodal Long-Form Generation

Fangda Ye
Zhifei Xie
Yuxin Hu
Yihang Yin
Shurui Huang
Shikai Dong
Jianzhu Bao
Shuicheng Yan
Main: 9 pages, Bibliography: 5 pages, Appendix: 27 pages; 6 figures, 9 tables
Abstract

Recent agentic search frameworks enable deep research via iterative planning and retrieval, reducing hallucinations and enhancing factual grounding. However, they remain text-centric, overlooking the multimodal evidence that characterizes real-world expert reports. We introduce a pressing task: multimodal long-form generation. Accordingly, we propose Deep-Reporter, a unified agentic framework for grounded multimodal long-form generation. It orchestrates: (i) Agentic Multimodal Search and Filtering to retrieve and filter textual passages and information-dense visuals; (ii) Checklist-Guided Incremental Synthesis to ensure coherent image-text integration and optimal citation placement; and (iii) Recurrent Context Management to balance long-range coherence with local fluency. We develop a rigorous curation pipeline that produces 8K high-quality agentic traces for model optimization. We further introduce M2LongBench, a comprehensive testbed comprising 247 research tasks across 9 domains and a stable multimodal sandbox. Extensive experiments demonstrate that long-form multimodal generation remains challenging, especially in the selection and integration of multimodal evidence, and that effective post-training can bridge the gap.
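To make the three-stage loop concrete, here is a minimal Python sketch of how search-and-filter, checklist-guided synthesis, and recurrent context management might compose. All names, interfaces, and stub logic below are hypothetical illustrations, not Deep-Reporter's actual implementation.

```python
# Hypothetical sketch of the three-stage loop sketched in the abstract.
# Retrieval and generation are stubbed; only the control flow is shown.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    kind: str      # "text" or "image"
    content: str   # passage text or image reference
    source: str    # citation target

@dataclass
class ReportState:
    sections: list[str] = field(default_factory=list)
    summary: str = ""  # recurrent context: compressed view of prior sections

def search_and_filter(query: str) -> list[Evidence]:
    """(i) Agentic multimodal search and filtering: retrieve candidate
    passages and visuals, then keep only on-topic items (stubbed)."""
    candidates = [
        Evidence("text", f"passage about {query}", "doc-1"),
        Evidence("image", f"figure illustrating {query}", "doc-2"),
    ]
    return [e for e in candidates if query in e.content]

def synthesize_section(topic: str, evidence: list[Evidence],
                       checklist: list[str], summary: str) -> str:
    """(ii) Checklist-guided incremental synthesis: draft one section,
    interleaving evidence with citations, conditioned on the running summary."""
    body = " ".join(f"{e.content} [{e.source}]" for e in evidence)
    unmet = [item for item in checklist if item not in body]
    note = f" (checklist gaps: {unmet})" if unmet else ""
    ctx = f"[context: {summary}] " if summary else ""
    return f"## {topic}\n{ctx}{body}{note}"

def update_summary(state: ReportState) -> str:
    """(iii) Recurrent context management: compress prior sections so later
    generation keeps long-range coherence without the full history."""
    return " | ".join(s.splitlines()[0] for s in state.sections)

def deep_report(topics: list[str], checklist: list[str]) -> str:
    state = ReportState()
    for topic in topics:
        evidence = search_and_filter(topic)
        section = synthesize_section(topic, evidence, checklist, state.summary)
        state.sections.append(section)
        state.summary = update_summary(state)
    return "\n\n".join(state.sections)

print(deep_report(["solar power", "wind power"], ["figure"]))
```

In the paper, each step would be carried out by an agent over real search tools and a multimodal corpus; the stubs above only mirror the orchestration pattern the abstract describes.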
