Optimizing Multimodal LLMs for Egocentric Video Understanding: A Solution for the HD-EPIC VQA Challenge
Sicheng Yang
Yukai Huang
Shitong Sun
Weitong Cai
Jiankang Deng
Jifei Song
Zhensong Zhang
Abstract
Multimodal Large Language Models (MLLMs) struggle with complex video QA benchmarks like HD-EPIC VQA due to ambiguous queries and answer options, poor long-range temporal reasoning, and non-standardized outputs. We propose a framework integrating query/choice pre-processing, domain-specific fine-tuning of Qwen2.5-VL, a novel Temporal Chain-of-Thought (T-CoT) prompting strategy for multi-step reasoning, and robust post-processing. This system achieves 41.6% accuracy on HD-EPIC VQA, highlighting the need for holistic pipeline optimization in demanding video understanding tasks. Our code and fine-tuned models are available at this https URL.
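
To make the pipeline concrete, the sketch below illustrates the three text-side stages named in the abstract: query/choice pre-processing, T-CoT prompting, and answer post-processing. The prompt wording, function names, the option-letter regex, and the `model.generate` interface are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the described pipeline, under assumed interfaces.
import re

# Hypothetical T-CoT template: elicit a temporally ordered event list
# before the final answer letter (wording is an assumption).
T_COT_PROMPT = (
    "You are answering a multiple-choice question about an egocentric video.\n"
    "Step 1: list the relevant events in temporal order.\n"
    "Step 2: reason step by step over that timeline.\n"
    "Step 3: answer with a single option letter.\n\n"
    "Question: {question}\nOptions:\n{options}\n"
)

def preprocess(question: str, choices: list[str]) -> str:
    """Normalize the query and enumerate choices as 'A. ...', 'B. ...'."""
    options = "\n".join(f"{chr(65 + i)}. {c.strip()}" for i, c in enumerate(choices))
    return T_COT_PROMPT.format(question=question.strip(), options=options)

def postprocess(raw_output: str, num_choices: int) -> str:
    """Map free-form model output to a standardized option letter."""
    valid = [chr(65 + i) for i in range(num_choices)]
    matches = re.findall(rf"\b({'|'.join(valid)})\b", raw_output)
    # Take the last standalone letter: the final answer follows the reasoning.
    return matches[-1] if matches else valid[0]  # fall back to 'A'

def answer(model, video, question: str, choices: list[str]) -> str:
    prompt = preprocess(question, choices)
    raw = model.generate(video=video, prompt=prompt)  # e.g., a fine-tuned Qwen2.5-VL
    return postprocess(raw, len(choices))
```

Taking the last valid letter rather than the first matters here: with chain-of-thought outputs, option letters often appear inside the reasoning before the model commits to its final answer.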
