Aligning robot navigation with human preferences is essential for ensuring comfortable and predictable robot movement in shared spaces, facilitating seamless human-robot coexistence. While preference-based learning methods, such as reinforcement learning from human feedback (RLHF), enable this alignment, the choice of the preference collection interface may influence the feedback that users provide. Traditional 2D interfaces provide structured views but lack spatial depth, whereas immersive VR offers richer perception, potentially affecting preference articulation. This study systematically examines how the interface modality impacts human preference collection and navigation policy alignment. We introduce a novel dataset of 2,325 human preference queries collected through both VR and 2D interfaces, revealing significant differences in user experience, preference consistency, and policy outcomes. Our findings highlight the trade-offs between immersion, perception, and preference reliability, emphasizing the importance of interface selection in preference-based robot learning. The dataset will be publicly released to support future research.
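To make the preference-based learning setting concrete, the sketch below shows the standard Bradley-Terry approach to fitting a reward model from pairwise trajectory preferences, which RLHF-style methods commonly build on. This is a minimal illustration assuming PyTorch and toy trajectory features; the class and function names (`RewardModel`, `preference_loss`) are hypothetical and not the authors' implementation or dataset format.

```python
# Minimal sketch: learning a reward model from pairwise preference queries
# via the Bradley-Terry model. Assumes PyTorch; data here is random toy data.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, feature_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, traj_features: torch.Tensor) -> torch.Tensor:
        # Sum per-step rewards over the trajectory to get a scalar return.
        return self.net(traj_features).sum(dim=1).squeeze(-1)

def preference_loss(model, traj_a, traj_b, prefers_a):
    # Bradley-Terry: P(A preferred over B) = sigmoid(R(A) - R(B)).
    logits = model(traj_a) - model(traj_b)
    return nn.functional.binary_cross_entropy_with_logits(
        logits, prefers_a.float()
    )

# Toy usage: 8 preference queries, trajectories of 20 steps, 10-D features.
model = RewardModel(feature_dim=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
traj_a = torch.randn(8, 20, 10)
traj_b = torch.randn(8, 20, 10)
prefers_a = torch.randint(0, 2, (8,))  # 1 if the user preferred trajectory A

loss = preference_loss(model, traj_a, traj_b, prefers_a)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The learned reward can then be used to train or fine-tune a navigation policy; how interface modality (VR vs. 2D) affects the consistency of the preference labels feeding this loss is exactly what the study examines.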
@article{heuvel2025_2503.16500,
  title   = {The Impact of VR and 2D Interfaces on Human Feedback in Preference-Based Robot Learning},
  author  = {Jorge de Heuvel and Daniel Marta and Simon Holk and Iolanda Leite and Maren Bennewitz},
  journal = {arXiv preprint arXiv:2503.16500},
  year    = {2025}
}