LongVideoAgent: Multi-Agent Reasoning with Long Videos

Runtao Liu
Ziyi Liu
Jiaqi Tang
Yue Ma
Renjie Pi
Jipeng Zhang
Qifeng Chen
Main: 7 pages · 2 figures · 6 tables · Bibliography: 3 pages · Appendix: 1 page
Abstract

Recent advances in multimodal LLMs and in systems that use tools for long-video QA point to the promise of reasoning over hour-long episodes. However, many methods still compress content into lossy summaries or rely on limited toolsets, weakening temporal grounding and missing fine-grained cues. We propose a multi-agent framework in which a master LLM coordinates a grounding agent that localizes question-relevant segments and a vision agent that extracts targeted textual observations. The master agent plans under a step limit and is trained with reinforcement learning to encourage concise, correct, and efficient multi-agent cooperation. This design helps the master agent focus on relevant clips via grounding, complements subtitles with visual detail, and yields interpretable trajectories. On our proposed LongTVQA and LongTVQA+, episode-level datasets aggregated from TVQA/TVQA+, our multi-agent system significantly outperforms strong non-agent baselines. Experiments also show that reinforcement learning further strengthens the trained agent's reasoning and planning. Code and data will be shared at this https URL.
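The coordination pattern the abstract describes can be sketched as a step-limited loop: the master agent calls a grounding agent to localize relevant segments, then a vision agent to turn them into textual observations, and stops when it judges the evidence sufficient. This is a minimal illustrative sketch only; all class and method names below are hypothetical stand-ins, not the paper's actual implementation (in which the master is an LLM making these decisions, not a fixed loop).

```python
from dataclasses import dataclass


@dataclass
class Segment:
    """A time interval in the video, in seconds."""
    start: float
    end: float


class GroundingAgent:
    """Stub: localizes question-relevant segments in a long video."""

    def localize(self, question: str, video_id: str) -> list[Segment]:
        # A real grounding agent would run temporal grounding here.
        return [Segment(120.0, 150.0)]


class VisionAgent:
    """Stub: extracts a targeted textual observation from one segment."""

    def observe(self, segment: Segment, query: str) -> str:
        # A real vision agent would caption / answer over the clip.
        return f"observation for {segment.start:.0f}-{segment.end:.0f}s"


class MasterAgent:
    """Plans under a step limit, coordinating the two tool agents."""

    def __init__(self, max_steps: int = 5):
        self.max_steps = max_steps
        self.grounder = GroundingAgent()
        self.vision = VisionAgent()

    def answer(self, question: str, video_id: str) -> tuple[str, list[str]]:
        trajectory: list[str] = []   # interpretable record of each step
        observations: list[str] = []
        for step in range(self.max_steps):
            segments = self.grounder.localize(question, video_id)
            trajectory.append(f"step {step}: grounded {len(segments)} segment(s)")
            for seg in segments:
                observations.append(self.vision.observe(seg, question))
            # A real master LLM would decide here whether the collected
            # observations suffice; this stub stops after the first batch.
            if observations:
                break
        return "; ".join(observations), trajectory


if __name__ == "__main__":
    agent = MasterAgent(max_steps=3)
    answer, trajectory = agent.answer("What did the character pick up?", "ep01")
    print(answer)
    print(trajectory)
```

The step limit bounds tool calls per question, and the recorded trajectory is what makes the system's reasoning inspectable after the fact.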
