PreMind: Multi-Agent Video Understanding for Advanced Indexing of Presentation-style Videos

Abstract

In recent years, online lecture videos have become an increasingly popular resource for acquiring new knowledge. Systems that can effectively understand and index lecture videos are thus highly desirable, enabling downstream tasks such as question answering that help users efficiently locate specific information within videos. This work proposes PreMind, a novel multi-agent, multimodal framework that leverages various large models for advanced understanding and indexing of presentation-style videos. PreMind first segments videos into slide-presentation segments, using a Vision-Language Model (VLM) to enhance modern shot-detection techniques. Each segment is then analyzed to generate multimodal indexes through three key steps: (1) extracting slide visual content, (2) transcribing speech narratives, and (3) consolidating the visual and speech content into an integrated understanding. Three innovative mechanisms are also proposed to improve performance: leveraging prior lecture knowledge to refine visual understanding, detecting and correcting speech transcription errors with a VLM, and employing a critic agent for dynamic, iterative self-reflection during vision analysis. Compared to traditional video indexing methods, PreMind captures rich and reliable multimodal information, allowing users to search for details such as abbreviations shown only on slides. Systematic evaluations on the public LPM dataset and an internal enterprise dataset, supported by detailed analyses, validate PreMind's effectiveness.
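As a rough illustration of the segment-then-index flow described above, the Python sketch below stubs out each stage (segmentation, slide-content extraction with prior lecture context, speech transcription and VLM-assisted correction, consolidation, and a critic-driven reflection loop). All names here (e.g., detect_slide_segments, critic_ok, index_video) are hypothetical placeholders, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the multi-step indexing pipeline described in the
# abstract. Model calls are stubbed; a real system would back them with a
# shot detector, a VLM, and an ASR model.

@dataclass
class Segment:
    start: float
    end: float
    slide_text: str = ""
    transcript: str = ""
    summary: str = ""

def detect_slide_segments(video_path: str) -> List[Segment]:
    """Stub: split the video into slide-presentation segments
    (shot detection refined by a VLM in the paper)."""
    return [Segment(0.0, 60.0), Segment(60.0, 120.0)]

def extract_slide_content(segment: Segment, prior_knowledge: str) -> str:
    """Stub: VLM reads the slide frame, conditioned on prior lecture context."""
    return f"slide content for [{segment.start:.0f}-{segment.end:.0f}s]"

def transcribe_speech(segment: Segment) -> str:
    """Stub: ASR transcription of the narration in this segment."""
    return f"raw transcript for [{segment.start:.0f}-{segment.end:.0f}s]"

def correct_transcript(transcript: str, slide_text: str) -> str:
    """Stub: VLM-assisted correction of ASR errors using the slide text."""
    return transcript  # corrections would be applied here

def consolidate(slide_text: str, transcript: str) -> str:
    """Stub: merge visual and speech content into one index entry."""
    return f"{slide_text} | {transcript}"

def critic_ok(slide_text: str) -> bool:
    """Stub: critic agent decides whether the vision analysis is acceptable."""
    return True

def index_video(video_path: str, max_reflections: int = 2) -> List[Segment]:
    prior = ""  # running lecture context carried across segments
    segments = detect_slide_segments(video_path)
    for seg in segments:
        for _ in range(max_reflections):
            seg.slide_text = extract_slide_content(seg, prior)
            if critic_ok(seg.slide_text):
                break  # critic accepts; otherwise retry with self-reflection
        seg.transcript = correct_transcript(transcribe_speech(seg), seg.slide_text)
        seg.summary = consolidate(seg.slide_text, seg.transcript)
        prior += " " + seg.slide_text
    return segments

if __name__ == "__main__":
    for seg in index_video("lecture.mp4"):
        print(seg.summary)
```

The resulting per-segment summaries would serve as the multimodal index entries that downstream search or question answering operates over.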

@article{wei2025_2503.00162,
  title={PreMind: Multi-Agent Video Understanding for Advanced Indexing of Presentation-style Videos},
  author={Kangda Wei and Zhengyu Zhou and Bingqing Wang and Jun Araki and Lukas Lange and Ruihong Huang and Zhe Feng},
  journal={arXiv preprint arXiv:2503.00162},
  year={2025}
}