
Policy Contrastive Decoding for Robotic Foundation Models

Abstract

Robotic foundation models, or generalist robot policies, hold immense potential to enable flexible, general-purpose, and dexterous robotic systems. Despite their advancements, our empirical experiments reveal that existing robot policies are prone to learning spurious correlations from pre-training trajectories, adversely affecting their generalization capabilities beyond the training data. To tackle this, we propose a novel Policy Contrastive Decoding (PCD) approach, which redirects the robot policy's focus toward object-relevant visual clues by contrasting action probability distributions derived from original and object-masked visual inputs. As a training-free method, our PCD can be used as a plugin to improve different types of robot policies without needing to finetune or access model weights. We conduct extensive experiments on top of three open-source robot policies, including the autoregressive policy OpenVLA and the diffusion-based policies Octo and π0. The obtained results in both simulation and real-world environments prove PCD's flexibility and effectiveness, e.g., PCD enhances the state-of-the-art policy π0 by 8% in the simulation environment and by 108% in the real-world environment. Code and demos are publicly available at: this https URL.
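The core idea described above, contrasting the action distribution conditioned on the full image against the one conditioned on an object-masked image, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the combination rule `(1 + alpha) * original - alpha * masked` is a common contrastive-decoding form, and the hyperparameter name `alpha` is an assumption.

```python
import numpy as np

def policy_contrastive_decoding(logits_original, logits_masked, alpha=0.5):
    """Contrast action logits from the original observation with logits
    from an object-masked observation, amplifying the probability shift
    that object-relevant visual cues induce.

    Note: the (1 + alpha) * original - alpha * masked weighting is a
    standard contrastive-decoding form assumed here for illustration;
    the paper's exact rule may differ.
    """
    contrastive = (1.0 + alpha) * logits_original - alpha * logits_masked
    # Softmax with max-subtraction for numerical stability.
    exp = np.exp(contrastive - contrastive.max())
    return exp / exp.sum()

# Toy example over 4 discrete action bins (e.g., autoregressive action tokens).
orig = np.array([2.0, 1.0, 0.5, 0.1])    # policy logits given the full image
masked = np.array([1.8, 1.2, 0.4, 0.2])  # policy logits given the masked image
probs = policy_contrastive_decoding(orig, masked)
```

Because the adjustment happens purely at decoding time on the output distributions, it requires no access to model weights, which is what makes the approach usable as a plugin across different policy architectures.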

@article{wu2025_2505.13255,
  title={Policy Contrastive Decoding for Robotic Foundation Models},
  author={Shihan Wu and Ji Zhang and Xu Luo and Junlin Xie and Jingkuan Song and Heng Tao Shen and Lianli Gao},
  journal={arXiv preprint arXiv:2505.13255},
  year={2025}
}