STAR: Learning Diverse Robot Skill Abstractions through Rotation-Augmented Vector Quantization

4 June 2025
Hao Li
Qi Lv
Rui Shao
Xiang Deng
Yinchuan Li
Jianye Hao
Liqiang Nie
Main: 8 pages · Appendix: 8 pages · Bibliography: 3 pages · 11 figures · 5 tables
Abstract

Transforming complex actions into discrete skill abstractions has demonstrated strong potential for robotic manipulation. Existing approaches mainly leverage latent variable models, e.g., VQ-VAE, to learn skill abstractions through learned vectors (codebooks), but they suffer from codebook collapse and struggle to model the causal relationships between learned skills. To address these limitations, we present \textbf{S}kill \textbf{T}raining with \textbf{A}ugmented \textbf{R}otation (\textbf{STAR}), a framework that advances both skill learning and skill composition to complete complex behaviors. Specifically, to prevent codebook collapse, we devise rotation-augmented residual skill quantization (RaRSQ), which encodes the relative angles between encoder outputs into the gradient flow through a rotation-based gradient mechanism: points assigned to the same skill code are either pushed apart or pulled closer together depending on the gradient directions. Further, to capture the causal relationships between skills, we present the causal skill transformer (CST), which explicitly models dependencies between skill representations through an autoregressive mechanism for coherent action generation. Extensive experiments demonstrate the superiority of STAR on both the LIBERO benchmark and real-world tasks, with around a 12\% improvement over the baselines.
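
The rotation-based gradient mechanism is not detailed on this page; the sketch below illustrates one plausible reading, assuming a rotation-trick-style estimator in which the encoder output is rotated and rescaled onto its assigned code, with the rotation treated as a constant so that the backward signal depends on the relative angle between encoder output and code rather than being copied straight through. The function name rotate_to_code and all implementation details are illustrative assumptions, not the authors' code.

import torch

def rotate_to_code(e, q, eps=1e-6):
    # e: encoder outputs, q: their nearest codebook vectors, both shaped (..., D).
    # Forward pass returns q exactly; backward pass propagates gradients through a
    # detached rotation that maps e onto q, so the update each point receives depends
    # on its angle to the code -- points sharing a code can be pushed apart or pulled
    # together instead of all being moved identically as with a straight-through copy.
    e_norm = e.norm(dim=-1, keepdim=True).clamp_min(eps)
    q_norm = q.norm(dim=-1, keepdim=True).clamp_min(eps)
    e_hat, q_hat = e / e_norm, q / q_norm
    # Rotation R = I - 2 r r^T + 2 q_hat e_hat^T, with r = (e_hat + q_hat) normalized,
    # applied to e without materializing the matrix; all coefficients are detached.
    r = e_hat + q_hat
    r = (r / r.norm(dim=-1, keepdim=True).clamp_min(eps)).detach()
    e_hat_d, q_hat_d = e_hat.detach(), q_hat.detach()
    rotated = (e
               - 2.0 * (e * r).sum(-1, keepdim=True) * r
               + 2.0 * (e * e_hat_d).sum(-1, keepdim=True) * q_hat_d)
    scale = (q_norm / e_norm).detach()
    return scale * rotated

In a residual quantization setup, a step like this could replace the straight-through copy at each residual stage; how RaRSQ combines it with residual codebooks is specified in the paper itself, not here.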

@article{li2025_2506.03863,
  title={STAR: Learning Diverse Robot Skill Abstractions through Rotation-Augmented Vector Quantization},
  author={Hao Li and Qi Lv and Rui Shao and Xiang Deng and Yinchuan Li and Jianye Hao and Liqiang Nie},
  journal={arXiv preprint arXiv:2506.03863},
  year={2025}
}