Animate Anyone 2: High-Fidelity Character Image Animation with Environment Affordance

10 February 2025
Li Hu, Guangyuan Wang, Zhen Shen, Xin Gao, Dechao Meng, Lian Zhuo, Peng Zhang, Bang Zhang, Liefeng Bo
Abstract

Recent diffusion-based character image animation methods, such as Animate Anyone, have made significant progress in generating consistent and generalizable character animations. However, these approaches fail to produce reasonable associations between characters and their environments. To address this limitation, we introduce Animate Anyone 2, which aims to animate characters with environment affordance. Beyond extracting motion signals from the source video, we additionally capture environmental representations as conditional inputs. The environment is formulated as the region excluding the character, and our model generates characters that populate this region while remaining coherent with the environmental context. We propose a shape-agnostic mask strategy that more effectively characterizes the relationship between character and environment. Furthermore, to enhance the fidelity of object interactions, we leverage an object guider to extract features of interacting objects and employ spatial blending to inject these features. We also introduce a pose modulation strategy that enables the model to handle more diverse motion patterns. Experimental results demonstrate the superior performance of the proposed method.
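
The abstract points to two concrete mechanisms: a shape-agnostic mask that hides the character's exact silhouette, and spatial blending that injects object-guider features where character and object interact. The PyTorch sketch below is only a minimal illustration of those two ideas; the names shape_agnostic_mask and spatial_blend, and the max-pool dilation used to coarsen the mask, are hypothetical stand-ins, not the paper's actual formulation.

import torch
import torch.nn.functional as F

def shape_agnostic_mask(char_mask: torch.Tensor, kernel: int = 25) -> torch.Tensor:
    # char_mask: (B, 1, H, W) binary character silhouette.
    # Dilating with max-pooling coarsens the exact outline into a rough
    # occupancy region, so the model cannot simply copy the source shape
    # and must synthesize a character that fits the surrounding scene.
    dilated = F.max_pool2d(char_mask, kernel, stride=1, padding=kernel // 2)
    return (dilated > 0).float()

def spatial_blend(latent: torch.Tensor, obj_feat: torch.Tensor,
                  obj_mask: torch.Tensor) -> torch.Tensor:
    # latent, obj_feat: (B, C, H, W); obj_mask: (B, 1, H, W) in [0, 1].
    # Object-guider features are injected only inside the object region,
    # leaving the rest of the latent untouched.
    return obj_mask * obj_feat + (1.0 - obj_mask) * latent

# Toy usage: a rectangular "character" silhouette grows into a coarse region.
mask = torch.zeros(1, 1, 64, 64)
mask[:, :, 20:44, 24:40] = 1.0
coarse = shape_agnostic_mask(mask)  # covers more area than the input mask

Blending rather than overwriting keeps the injected object features spatially localized, which matches the abstract's emphasis on preserving coherence with the environmental context.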

@article{hu2025_2502.06145,
  title={Animate Anyone 2: High-Fidelity Character Image Animation with Environment Affordance},
  author={Li Hu and Guangyuan Wang and Zhen Shen and Xin Gao and Dechao Meng and Lian Zhuo and Peng Zhang and Bang Zhang and Liefeng Bo},
  journal={arXiv preprint arXiv:2502.06145},
  year={2025}
}