Understanding Complexity in VideoQA via Visual Program Generation

We propose a data-driven approach to analyzing query complexity in Video Question Answering (VideoQA). Previous efforts in benchmark design have relied on human expertise to craft challenging questions, yet we experimentally show that humans struggle to predict which questions are difficult for machine learning models. Our automatic approach leverages recent advances in code generation for visual question answering, using the complexity of generated code as a proxy for question difficulty. We demonstrate that this measure correlates significantly better with model performance than human estimates. To operationalize this insight, we propose an algorithm for estimating question complexity from code. It identifies fine-grained primitives that correlate with the hardest questions for any given set of models, making it easy to scale to new approaches in the future. Finally, to further illustrate the utility of our method, we extend it to automatically generate complex questions, constructing a new benchmark that is 1.9 times harder than the popular NExT-QA.
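To make the core idea concrete, below is a minimal sketch of how code complexity could serve as a difficulty proxy: count the calls to visual primitives in a generated program. This is an illustration only, not the paper's actual algorithm; the `PRIMITIVES` vocabulary, the `complexity_score` helper, and the toy program are all hypothetical.

```python
import ast
from collections import Counter

# Hypothetical vocabulary of visual primitives a code-generation model
# (e.g., a ViperGPT-style system) might emit; the paper's actual primitive
# set and weighting may differ.
PRIMITIVES = {"find", "exists", "query", "filter", "count", "track"}

def complexity_score(program: str) -> tuple[int, Counter]:
    """Count primitive calls in a generated visual program as a rough
    complexity proxy: harder questions are assumed to yield programs
    that invoke more (and more varied) primitives."""
    tree = ast.parse(program)
    calls = Counter(
        node.func.attr if isinstance(node.func, ast.Attribute) else node.func.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, (ast.Attribute, ast.Name))
    )
    primitive_counts = Counter({k: v for k, v in calls.items() if k in PRIMITIVES})
    return sum(primitive_counts.values()), primitive_counts

# Toy program a code generator might produce for the question
# "What does the dog do after it finds the ball?"
example = """
dog = video.find("dog")
ball = video.find("ball")
moment = dog.filter("finds the ball")
answer = moment.query("what does the dog do next?")
"""
score, counts = complexity_score(example)
print(score, dict(counts))  # 4 {'find': 2, 'filter': 1, 'query': 1}
```

Under this sketch, per-primitive counts (rather than a single scalar) are what would let one identify which fine-grained operations correlate with the questions a given set of models fails on.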
@article{eyzaguirre2025_2505.13429,
  title={Understanding Complexity in VideoQA via Visual Program Generation},
  author={Cristobal Eyzaguirre and Igor Vasiljevic and Achal Dave and Jiajun Wu and Rares Andrei Ambrus and Thomas Kollar and Juan Carlos Niebles and Pavel Tokmakov},
  journal={arXiv preprint arXiv:2505.13429},
  year={2025}
}