Text-Driven Video Style Transfer with State-Space Models: Extending StyleMamba for Temporal Coherence

StyleMamba has recently demonstrated efficient text-driven image style transfer by leveraging state-space models (SSMs) and masked directional losses. In this paper, we extend the StyleMamba framework to video sequences. We introduce a \emph{Video State-Space Fusion Module} that models inter-frame dependencies, together with a \emph{Temporal Masked Directional Loss} that enforces style consistency while handling scene changes and partial occlusions. Additionally, we propose a \emph{Temporal Second-Order Loss} that suppresses abrupt style variations across consecutive frames. Experiments on DAVIS and UCF101 show that our approach outperforms competing methods in style consistency, temporal smoothness, and computational efficiency. We believe this framework paves the way toward real-time text-driven video stylization with state-of-the-art perceptual quality.
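
The abstract does not spell out the exact form of the Temporal Second-Order Loss. Below is a minimal sketch, assuming it penalizes the mean squared discrete second-order temporal difference of the stylized frames; the function name and tensor layout are illustrative and not taken from the paper.

    import torch

    def temporal_second_order_loss(stylized_frames: torch.Tensor) -> torch.Tensor:
        # stylized_frames: tensor of shape (T, C, H, W) with T >= 3 consecutive stylized frames.
        # Penalize f_{t+1} - 2*f_t + f_{t-1} for t = 1 .. T-2, i.e. the discrete
        # second-order temporal difference, which is zero for linearly varying styles
        # and grows with abrupt frame-to-frame style changes.
        second_diff = stylized_frames[2:] - 2.0 * stylized_frames[1:-1] + stylized_frames[:-2]
        return second_diff.pow(2).mean()

Under this assumption, the loss vanishes when the stylized appearance drifts linearly over time and rises sharply for sudden jumps, which matches the stated goal of suppressing abrupt style variations across consecutive frames.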
@article{li2025_2503.12291,
  title   = {Text-Driven Video Style Transfer with State-Space Models: Extending StyleMamba for Temporal Coherence},
  author  = {Chao Li and Minsu Park and Cristina Rossi and Zhuang Li},
  journal = {arXiv preprint arXiv:2503.12291},
  year    = {2025}
}