Super Encoding Network: Recursive Association of Multi-Modal Encoders for Video Understanding

Main: 10 pages, 6 figures, 15 tables; Bibliography: 3 pages
Abstract

Video understanding is considered a critical step toward world modeling, a long-standing goal of AI research. Recently, multi-modal foundation models have shown such potential via large-scale pretraining. However, these models simply align encoders of different modalities via contrastive learning, lacking the deeper multi-modal interactions that are critical for understanding complex target movements across diverse video scenes. To fill this gap, we propose a unified Super Encoding Network (SEN) for video understanding, which builds up such interactions through recursive association of the multi-modal encoders in foundation models. Specifically, we treat these well-trained encoders as "super neurons" in our SEN. Via a Recursive Association (RA) block, we progressively fuse multi-modalities with the input video by integrating, distributing, and prompting the knowledge of super neurons in a recursive manner. In this way, our SEN effectively encodes deeper multi-modal interactions to prompt various downstream video understanding tasks. Extensive experiments show that our SEN remarkably boosts the four most representative video tasks: tracking, recognition, chatting, and editing. For pixel-level tracking, the average Jaccard index improves by 2.7% and temporal coherence (TC) drops by 8.8% compared to the popular CaDeX++ approach; for one-shot video editing, textual alignment improves by 6.4% and frame consistency increases by 4.1% compared to the popular Tune-A-Video approach.
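
The recursive-association idea sketched in the abstract can be pictured with a minimal PyTorch illustration. Everything below is a hypothetical sketch, not the authors' implementation: the module names, the learnable shared-token state, the use of cross-attention for knowledge integrating, and the shortcut of concatenating prompts to encoder outputs are all assumptions, made under the further assumption that the frozen pre-trained encoders emit token sequences of a common dimension.

# Illustrative sketch only; names, shapes, and operators are placeholders.
import torch
import torch.nn as nn

class RecursiveAssociationBlock(nn.Module):
    """One RA step: integrate per-modality tokens into a shared state,
    then distribute that state back as per-modality prompts."""

    def __init__(self, dim: int, num_modalities: int):
        super().__init__()
        # Knowledge integrating: shared tokens attend over all modality tokens.
        self.integrate = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # Knowledge distributing: project the shared state into one prompt per modality.
        self.distribute = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_modalities)]
        )

    def forward(self, shared_state, modality_feats):
        tokens = torch.cat(modality_feats, dim=1)            # (B, total_len, dim)
        shared_state, _ = self.integrate(shared_state, tokens, tokens)
        prompts = [proj(shared_state) for proj in self.distribute]
        return shared_state, prompts

class SuperEncodingNetwork(nn.Module):
    """Frozen pre-trained encoders act as 'super neurons'; the RA block is
    applied recursively to deepen multi-modal interaction."""

    def __init__(self, encoders: nn.ModuleList, dim: int = 512, num_steps: int = 3):
        super().__init__()
        self.encoders = encoders                             # well-trained, kept frozen
        for enc in self.encoders:
            enc.requires_grad_(False)
        self.ra = RecursiveAssociationBlock(dim, len(encoders))
        self.state = nn.Parameter(torch.zeros(1, 16, dim))   # learnable shared tokens
        self.num_steps = num_steps

    def forward(self, inputs):
        # `inputs`: one tensor per modality (e.g. video, text, audio).
        batch = inputs[0].shape[0]
        state = self.state.expand(batch, -1, -1)
        prompts = [None] * len(self.encoders)
        for _ in range(self.num_steps):                      # recursive association
            feats = []
            for enc, x, p in zip(self.encoders, inputs, prompts):
                f = enc(x)                                   # (B, len, dim), frozen encoder
                if p is not None:
                    # Knowledge prompting, simplified here as concatenation
                    # with the frozen encoder's output tokens.
                    f = torch.cat([p, f], dim=1)
                feats.append(f)
            state, prompts = self.ra(state, feats)
        return state                                         # fused features for downstream heads

Under this reading, only the RA block and the shared tokens are trainable, so the fusion stays lightweight while the frozen encoders contribute their pretrained knowledge at every recursion step.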

@article{chen2025_2506.07576,
  title={Super Encoding Network: Recursive Association of Multi-Modal Encoders for Video Understanding},
  author={Boyu Chen and Siran Chen and Kunchang Li and Qinglin Xu and Yu Qiao and Yali Wang},
  journal={arXiv preprint arXiv:2506.07576},
  year={2025}
}