Augmented Deep Contexts for Spatially Embedded Video Coding

Abstract

Most Neural Video Codecs (NVCs) employ only temporal references to generate temporal-only contexts and a latent prior. These temporal-only NVCs fail to handle large motions or emerging objects due to limited contexts and a misaligned latent prior. To alleviate these limitations, we propose a Spatially Embedded Video Codec (SEVC), in which a low-resolution version of the video is compressed to provide spatial references. First, SEVC leverages both spatial and temporal references to generate augmented motion vectors and hybrid spatial-temporal contexts. Second, to address the misalignment of the latent prior and to enrich the prior information, we introduce a spatial-guided latent prior augmented by multiple temporal latent representations. Finally, we design a joint spatial-temporal optimization that learns quality-adaptive bit allocation for the spatial references, further boosting rate-distortion performance. Experimental results show that SEVC effectively alleviates the limitations in handling large motions or emerging objects, and achieves an 11.9% bitrate reduction over the previous state-of-the-art NVC while providing an additional low-resolution bitstream. Our code and model are available at this https URL.
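
Two of the ideas in the abstract lend themselves to a short illustration: fusing a motion-compensated temporal context with a context lifted from the low-resolution reconstruction, and conditioning the latent prior on a spatial latent alongside multiple temporal latents. Below is a minimal PyTorch sketch of both under stated assumptions; the module names, channel widths, warp formulation, and prior parameterization are illustrative, not the authors' actual SEVC implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(feat, flow):
    """Backward-warp a feature map with a dense motion field.

    feat: (B, C, H, W), flow: (B, 2, H, W) in pixel units.
    """
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid_x = 2.0 * (xs + flow[:, 0]) / max(w - 1, 1) - 1.0
    grid_y = 2.0 * (ys + flow[:, 1]) / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)
    return F.grid_sample(feat, grid, align_corners=True)

class HybridContextFusion(nn.Module):
    """Fuse a motion-compensated temporal context with a spatial context
    upsampled from the low-resolution reconstruction (hypothetical fusion)."""
    def __init__(self, ch=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, temporal_feat, spatial_feat_lr, mv):
        temporal_ctx = warp(temporal_feat, mv)  # align past features via MV
        spatial_ctx = F.interpolate(            # lift LR features to full size
            spatial_feat_lr, size=temporal_feat.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        return self.fuse(torch.cat([temporal_ctx, spatial_ctx], dim=1))

class SpatialGuidedPrior(nn.Module):
    """Predict entropy-model parameters from a spatial latent plus several
    temporal latents (sketch of a 'spatial-guided latent prior')."""
    def __init__(self, ch=64, num_temporal=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d((1 + num_temporal) * ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, padding=1),  # mean and scale maps
        )

    def forward(self, spatial_latent, temporal_latents):
        prior_in = torch.cat([spatial_latent, *temporal_latents], dim=1)
        mean, scale = self.net(prior_in).chunk(2, dim=1)
        return mean, F.softplus(scale)  # keep the scale positive

In a full codec these modules would sit inside the conditional coding pipeline; here they only illustrate the data flow the abstract describes, from spatial and temporal references into hybrid contexts and prior parameters.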

@article{bian2025_2505.05309,
  title={Augmented Deep Contexts for Spatially Embedded Video Coding},
  author={Yifan Bian and Chuanbo Tang and Li Li and Dong Liu},
  journal={arXiv preprint arXiv:2505.05309},
  year={2025}
}