DiSPo: Diffusion-SSM based Policy Learning for Coarse-to-Fine Action Discretization

23 September 2024
Nayoung Oh
Jaehyeong Jang
Moonkyeong Jung
Daehyung Park
Abstract

We aim to solve the problem of generating coarse-to-fine skills via learning from demonstration (LfD). To scale precision, traditional LfD approaches often rely on extensive fine-grained demonstrations with external interpolation, or on dynamics models with limited generalization capability. For memory-efficient learning and convenient changes of granularity, we propose a novel diffusion-SSM based policy (DiSPo) that learns from diverse coarse skills and produces actions at varying control scales by leveraging a state-space model, Mamba. Our evaluations show that the adoption of Mamba and the proposed step-scaling method enable DiSPo to outperform baselines in three coarse-to-fine benchmarks, with up to an 81% higher success rate. In addition, DiSPo improves inference efficiency by generating coarse motions in less critical regions. Finally, we demonstrate the scalability of actions in simulated and real-world manipulation tasks.
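The abstract describes a single policy that denoises action sequences at a chosen temporal granularity ("step scale"). As a loose, hypothetical illustration of that coarse-to-fine idea only — not the authors' DiSPo architecture, and with a trivial stand-in for the learned Mamba-based denoiser — one could sketch it as:

```python
import numpy as np

def toy_denoise_policy(obs, step_scale, horizon=16, n_diffusion_steps=8, seed=0):
    """Illustrative sketch: generate an action sequence at a chosen
    temporal granularity. A larger `step_scale` yields a coarser
    (shorter) sequence, mimicking the idea of conditioning one policy
    on the desired control scale."""
    rng = np.random.default_rng(seed)
    n_actions = max(1, int(round(horizon / step_scale)))
    # Start from Gaussian noise, as in diffusion-style policies.
    actions = rng.standard_normal((n_actions, obs.shape[-1]))
    for t in range(n_diffusion_steps):
        # Stand-in for a learned denoiser: pull the noisy actions
        # toward a trivial "target" derived from the observation.
        target = np.tile(obs, (n_actions, 1))
        alpha = (t + 1) / n_diffusion_steps
        actions = (1 - alpha) * actions + alpha * target
    return actions

obs = np.array([0.5, -0.2])
coarse = toy_denoise_policy(obs, step_scale=4.0)  # 4 coarse actions
fine = toy_denoise_policy(obs, step_scale=1.0)    # 16 fine actions
print(coarse.shape, fine.shape)  # (4, 2) (16, 2)
```

The names `toy_denoise_policy` and `step_scale` are assumptions for this sketch; the real method presumably interleaves coarse and fine actions within one rollout, generating coarse motions in less critical regions.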

@article{oh2025_2409.14719,
  title={DiSPo: Diffusion-SSM based Policy Learning for Coarse-to-Fine Action Discretization},
  author={Nayoung Oh and Jaehyeong Jang and Moonkyeong Jung and Daehyung Park},
  journal={arXiv preprint arXiv:2409.14719},
  year={2025}
}