
Efficient Masked AutoEncoder for Video Object Counting and A Large-Scale Benchmark

Abstract

The dynamic imbalance between foreground and background is a major challenge in video object counting, and is usually caused by the sparsity of foreground objects. It often leads to severe under- and over-prediction and has been little studied in existing works. To tackle this issue, we propose a density-embedded Efficient Masked Autoencoder Counting (E-MAC) framework. To effectively capture dynamic variations across frames, we employ an optical flow-based temporal collaborative fusion that aligns features to derive multi-frame density residuals, so the counting accuracy of the current frame is boosted by harnessing information from adjacent frames. More importantly, to strengthen the intra-frame representation of dynamic foreground objects, we take the density map as an auxiliary modality and perform Density-Embedded Masked mOdeling (DEMO) for multimodal self-representation learning to regress the density map. However, while DEMO provides effective cross-modal regression guidance, it also brings in redundant background information and makes it hard to focus on foreground regions. To handle this dilemma, we further propose a spatial adaptive masking derived from the density maps to boost efficiency. In addition, considering that most existing datasets are limited to human-centric scenarios, we propose DroneBird, a large-scale video bird counting dataset captured in natural scenarios for migratory bird protection. Extensive experiments on three crowd datasets and our DroneBird validate the superiority of our method over its counterparts.
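
To make the spatial adaptive masking idea above concrete, the following is a minimal, illustrative sketch (not the authors' implementation): it assumes a ViT-style tokenization, and the patch size, keep ratio, and top-k selection rule are our own assumptions rather than details taken from the paper. It shows how an auxiliary density map could decide which patches a masked autoencoder keeps visible, so that background-dominated regions are masked more aggressively.

import torch

def density_guided_mask(density_map, patch_size=16, keep_ratio=0.25):
    # density_map: (B, 1, H, W) auxiliary density map; H and W are assumed divisible by patch_size.
    B, _, H, W = density_map.shape
    # Aggregate the density inside each non-overlapping patch -> one score per token.
    patches = density_map.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    scores = patches.sum(dim=(-1, -2)).flatten(1)        # (B, N) per-patch density
    n_keep = max(1, int(scores.shape[1] * keep_ratio))
    keep_idx = scores.topk(n_keep, dim=1).indices        # highest-density (foreground-rich) patches
    mask = torch.ones_like(scores, dtype=torch.bool)     # True = patch is masked out
    mask.scatter_(1, keep_idx, False)                    # keep foreground patches visible
    return mask

The returned boolean mask could then be used to drop masked tokens before the encoder, so computation concentrates on foreground regions while sparse background patches are skipped, in line with the efficiency argument above.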

@article{cao2025_2411.13056,
  title={Efficient Masked AutoEncoder for Video Object Counting and A Large-Scale Benchmark},
  author={Bing Cao and Quanhao Lu and Jiekang Feng and Qilong Wang and Qinghua Hu and Pengfei Zhu},
  journal={arXiv preprint arXiv:2411.13056},
  year={2025}
}