Point Cloud Mixture-of-Domain-Experts Model for 3D Self-supervised Learning

Point clouds, as a primary representation of 3D data, can be categorized into scene-domain and object-domain point clouds. Point cloud self-supervised learning (SSL) has become a mainstream paradigm for learning 3D representations. However, existing point cloud SSL methods primarily learn domain-specific representations within a single domain, neglecting the complementary nature of cross-domain knowledge and thereby limiting the comprehensiveness of the learned 3D representations. In this paper, we propose to learn a comprehensive Point cloud Mixture-of-Domain-Experts model (Point-MoDE) via a block-to-scene pre-training strategy. Specifically, we first propose a mixture-of-domain-experts model consisting of scene-domain experts and multiple shared object-domain experts. Furthermore, we propose a block-to-scene pre-training strategy that leverages the features of point blocks in the object domain to regress their initial positions in the scene domain, through object-level block mask reconstruction and scene-level block position regression. By integrating the complementary knowledge between objects and scenes, this strategy simultaneously facilitates the learning of both object-domain and scene-domain representations, leading to a more comprehensive 3D representation. Extensive experiments on downstream tasks demonstrate the superiority of our model.
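The abstract describes the architecture and pre-training objective only at a high level. The following is a minimal, hypothetical PyTorch sketch of the two ideas it names (a mixture of one scene-domain expert and several shared object-domain experts, and a block-to-scene objective combining object-level masked-block reconstruction with scene-level block position regression); all class names, dimensions, and heads (PointMoDE, recon_head, pos_head, the soft router, etc.) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class PointMoDE(nn.Module):
    """Toy mixture-of-domain-experts operating on per-block point features."""

    def __init__(self, dim=256, num_object_experts=4):
        super().__init__()
        # One scene-domain expert plus several shared object-domain experts (hypothetical layout).
        self.scene_expert = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.object_experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_object_experts)
        )
        self.router = nn.Linear(dim, num_object_experts)  # soft routing over object experts
        self.recon_head = nn.Linear(dim, dim)              # object-level masked-block reconstruction
        self.pos_head = nn.Linear(dim, 3)                  # scene-level block position regression

    def forward(self, block_feats):
        # block_feats: (B, N_blocks, dim) features of point blocks.
        weights = self.router(block_feats).softmax(dim=-1)                               # (B, N, E)
        expert_out = torch.stack([e(block_feats) for e in self.object_experts], dim=-2)  # (B, N, E, dim)
        object_feats = (weights.unsqueeze(-1) * expert_out).sum(dim=-2)                  # (B, N, dim)
        scene_feats = self.scene_expert(object_feats)
        return object_feats, scene_feats


def block_to_scene_loss(model, block_feats, block_centers, mask_ratio=0.6):
    """Combine object-level mask reconstruction with scene-level position regression."""
    B, N, _ = block_feats.shape
    mask = torch.rand(B, N, device=block_feats.device) < mask_ratio
    masked_in = block_feats.masked_fill(mask.unsqueeze(-1), 0.0)

    object_feats, scene_feats = model(masked_in)
    # Object level: reconstruct the features of the masked blocks.
    recon = model.recon_head(object_feats)
    recon_loss = ((recon - block_feats) ** 2)[mask].mean()
    # Scene level: regress each block's original position in the scene.
    pos_pred = model.pos_head(scene_feats)
    pos_loss = ((pos_pred - block_centers) ** 2).mean()
    return recon_loss + pos_loss


if __name__ == "__main__":
    model = PointMoDE()
    feats = torch.randn(2, 64, 256)   # placeholder pre-extracted block features
    centers = torch.randn(2, 64, 3)   # each block's center in scene coordinates
    loss = block_to_scene_loss(model, feats, centers)
    loss.backward()
    print(float(loss))

In this sketch the two losses are simply summed; how the paper weights the object-level and scene-level terms, and how block features are actually extracted, is not specified in the abstract.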
@article{zha2025_2410.09886,
  title={Point Cloud Mixture-of-Domain-Experts Model for 3D Self-supervised Learning},
  author={Yaohua Zha and Tao Dai and Hang Guo and Yanzi Wang and Bin Chen and Ke Chen and Shu-Tao Xia},
  journal={arXiv preprint arXiv:2410.09886},
  year={2025}
}