PRIMAL: Physically Reactive and Interactive Motor Model for Avatar Learning

21 March 2025
Yan Zhang
Yao Feng
Alpár Cseke
Nitin Saini
Nathan Bajandas
Nicolas Heron
Michael J. Black
    DiffM
    VGen
Abstract

To build the motor system of an interactive avatar, it is essential to develop a generative motion model that drives the body to move through 3D space in a perpetual, realistic, controllable, and responsive manner. Although motion generation has been extensively studied, most methods do not support "embodied intelligence" due to their offline setting, slow speed, limited motion lengths, or unnatural movements. To overcome these limitations, we propose PRIMAL, an autoregressive diffusion model that is learned with a two-stage paradigm, inspired by recent advances in foundation models. In the pretraining stage, the model learns motion dynamics from a large number of sub-second motion segments, providing "motor primitives" from which more complex motions are built. In the adaptation phase, we employ a ControlNet-like adaptor to fine-tune the motor control for semantic action generation and spatial target reaching. Experiments show that physics effects emerge from our training. Given a single-frame initial state, our model not only generates unbounded, realistic, and controllable motion, but also enables the avatar to be responsive to induced impulses in real time. In addition, we can effectively and efficiently adapt our base model to few-shot personalized actions and the task of spatial control. Evaluations show that our proposed method outperforms state-of-the-art baselines. We leverage the model to create a real-time character animation system in Unreal Engine that is highly responsive and natural. Code, models, and more results are available at: this https URL
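The abstract describes an autoregressive diffusion model that repeatedly generates sub-second motion segments conditioned on the current body state, which is what allows perpetual rollout and real-time reaction to impulses. The sketch below is a minimal, hypothetical illustration of that rollout loop only: the state dimensions, segment length, diffusion schedule, and the stub denoiser are all assumptions, not PRIMAL's actual architecture or training setup.

```python
# Minimal sketch (not the authors' code) of an autoregressive diffusion rollout:
# sample a short motion segment conditioned on the current state, then chain
# segments so the avatar keeps moving. The denoiser is a stand-in stub.
import numpy as np

STATE_DIM = 6   # hypothetical per-frame state size (e.g., pose + velocity features)
SEG_LEN = 15    # frames per sub-second "motor primitive" segment (assumed)
N_STEPS = 50    # number of diffusion denoising steps (assumed)

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, N_STEPS)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, cond):
    """Stand-in for a learned noise-prediction network.

    x_t:  noisy motion segment, shape (SEG_LEN, STATE_DIM)
    t:    integer diffusion step
    cond: conditioning state (last generated frame), shape (STATE_DIM,)
    """
    # A real model would predict the injected noise from (x_t, t, cond);
    # here we return zeros so the reverse process is well-defined but trivial.
    return np.zeros_like(x_t)

def sample_segment(cond):
    """Reverse-diffuse one motion segment conditioned on the current state."""
    x = rng.standard_normal((SEG_LEN, STATE_DIM))
    for t in reversed(range(N_STEPS)):
        eps = denoiser(x, t, cond)
        # Standard DDPM posterior-mean update.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

def rollout(init_state, n_segments=4, impulse=None):
    """Chain segments autoregressively; optionally perturb the state mid-rollout."""
    state, frames = init_state, []
    for i in range(n_segments):
        if impulse is not None and i == n_segments // 2:
            state = state + impulse   # external push the model must react to
        seg = sample_segment(state)
        frames.append(seg)
        state = seg[-1]               # last frame conditions the next segment
    return np.concatenate(frames, axis=0)

motion = rollout(np.zeros(STATE_DIM), impulse=0.5 * np.ones(STATE_DIM))
print(motion.shape)  # (SEG_LEN * n_segments, STATE_DIM)
```

Because each segment is conditioned only on the most recent frame, the loop can run indefinitely and any perturbation of that frame (the "impulse" above) is naturally folded into the next generated segment, which mirrors the responsiveness the paper claims.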

@article{zhang2025_2503.17544,
  title={PRIMAL: Physically Reactive and Interactive Motor Model for Avatar Learning},
  author={Yan Zhang and Yao Feng and Alpár Cseke and Nitin Saini and Nathan Bajandas and Nicolas Heron and Michael J. Black},
  journal={arXiv preprint arXiv:2503.17544},
  year={2025}
}