Image-to-video (I2V) generation seeks to produce realistic motion sequences from a single reference image. Although recent methods exhibit strong temporal consistency, they often struggle with complex, non-repetitive human movements, leading to unnatural deformations. To tackle this issue, we present LatentMove, a DiT-based framework specifically tailored for highly dynamic human animation. Our architecture incorporates a conditional control branch and learnable face/body tokens to preserve consistency as well as fine-grained details across frames. We introduce Complex-Human-Videos (CHV), a dataset featuring diverse, challenging human motions designed to benchmark the robustness of I2V systems. We also introduce two metrics that assess the flow and silhouette consistency of generated videos against their ground truth. Experimental results indicate that LatentMove substantially improves human animation quality, particularly when handling rapid, intricate movements, thereby pushing the boundaries of I2V generation. The code, the CHV dataset, and the evaluation metrics will be available at this https URL.
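The abstract does not define the two consistency metrics, so the following is only a hypothetical sketch of what flow- and silhouette-consistency measures commonly look like (mean end-point error between optical-flow fields, and IoU between foreground masks); the function names and exact formulations are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def flow_consistency(flow_gen, flow_gt):
    """Hypothetical flow-consistency metric: mean end-point error between
    the optical flow of the generated video and that of the ground truth.
    flow_gen, flow_gt: (H, W, 2) arrays of per-pixel displacements.
    Lower is better (0 means identical motion fields)."""
    epe = np.linalg.norm(flow_gen - flow_gt, axis=-1)  # per-pixel flow error
    return float(epe.mean())

def silhouette_consistency(mask_gen, mask_gt):
    """Hypothetical silhouette-consistency metric: intersection-over-union
    between binary foreground (human silhouette) masks of corresponding
    frames. Higher is better (1 means identical silhouettes)."""
    inter = np.logical_and(mask_gen, mask_gt).sum()
    union = np.logical_or(mask_gen, mask_gt).sum()
    return float(inter / union) if union else 1.0
```

In practice such metrics would be averaged over all frames of a clip, with flows estimated by an off-the-shelf optical-flow model and masks by a segmentation model.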
@article{taghipour2025_2505.22046,
  title={LatentMove: Towards Complex Human Movement Video Generation},
  author={Ashkan Taghipour and Morteza Ghahremani and Mohammed Bennamoun and Farid Boussaid and Aref Miri Rekavandi and Zinuo Li and Qiuhong Ke and Hamid Laga},
  journal={arXiv preprint arXiv:2505.22046},
  year={2025}
}