PPTP: Performance-Guided Physiological Signal-Based Trust Prediction in Human-Robot Collaboration

20 June 2025
Hao Guo
Wei Fan
Shaohui Liu
Feng Jiang
Chunzhi Yi
Main: 6 pages · 7 figures · Bibliography: 2 pages
Abstract

Trust prediction is a key issue in human-robot collaboration, especially in construction scenarios where maintaining appropriate trust calibration is critical for safety and efficiency. This paper introduces Performance-guided Physiological signal-based Trust Prediction (PPTP), a novel framework designed to improve trust assessment. We designed a human-robot construction scenario with three difficulty levels to induce different trust states. Our approach integrates synchronized multimodal physiological signals (ECG, GSR, and EMG) with collaboration performance evaluation to predict human trust levels. Individual physiological signals are processed using collaboration performance information as guiding cues, leveraging the standardized nature of collaboration performance to compensate for individual variations in physiological responses. Extensive experiments demonstrate that our cross-modality fusion method significantly improves trust classification performance. Our model achieves over 81% accuracy in three-level trust classification, outperforming the best baseline method by 6.7%, and reaches 74.3% accuracy in high-resolution seven-level classification, a first in trust prediction research. Ablation experiments further confirm the benefit of guiding physiological signal processing with collaboration performance assessment.
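The abstract describes the performance-guided fusion only at a high level. The sketch below is a purely illustrative PyTorch implementation of the general idea, not the authors' published model: each physiological modality (ECG, GSR, EMG) is encoded separately, and collaboration-performance features produce gating weights that decide how much each modality contributes before a trust-level classifier. All layer sizes, the gating mechanism, the performance feature dimension, and the sampling parameters are assumptions made for illustration.

# Hypothetical sketch of performance-guided fusion of physiological signals
# (ECG, GSR, EMG) for trust-level classification. Architecture details are
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class SignalEncoder(nn.Module):
    """1-D convolutional encoder mapping a raw signal window to a fixed-size embedding."""
    def __init__(self, in_channels: int = 1, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, embed_dim, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.conv(x).squeeze(-1)  # (batch, embed_dim)


class PerformanceGuidedFusion(nn.Module):
    """Fuse per-modality embeddings, weighting each modality by a gate
    computed from collaboration-performance features."""
    def __init__(self, embed_dim: int = 64, perf_dim: int = 4, num_classes: int = 3):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {m: SignalEncoder(embed_dim=embed_dim) for m in ("ecg", "gsr", "emg")}
        )
        # Performance features produce one gate weight per modality.
        self.gate = nn.Sequential(
            nn.Linear(perf_dim, 32), nn.ReLU(), nn.Linear(32, 3), nn.Softmax(dim=-1)
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, signals: dict, performance: torch.Tensor):
        # signals: {"ecg": (B, 1, T), "gsr": (B, 1, T), "emg": (B, 1, T)}
        embeddings = torch.stack(
            [self.encoders[m](signals[m]) for m in ("ecg", "gsr", "emg")], dim=1
        )  # (B, 3, embed_dim)
        weights = self.gate(performance).unsqueeze(-1)  # (B, 3, 1)
        fused = (weights * embeddings).sum(dim=1)       # (B, embed_dim)
        return self.classifier(fused)                   # trust-level logits


# Example forward pass on random data: 8 windows of 5-second signals at 256 Hz.
model = PerformanceGuidedFusion(num_classes=3)
batch = {m: torch.randn(8, 1, 1280) for m in ("ecg", "gsr", "emg")}
perf = torch.randn(8, 4)  # hypothetical performance features (e.g. completion time, errors)
logits = model(batch, perf)
print(logits.shape)  # torch.Size([8, 3])

A softmax gate over modalities is just one way to use collaboration performance as a "guiding cue"; attention over time steps within each signal, conditioned on the same performance features, would be another plausible reading of the abstract. For the seven-level setting reported in the paper, num_classes would simply be set to 7.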

View on arXiv
@article{guo2025_2506.16677,
  title={PPTP: Performance-Guided Physiological Signal-Based Trust Prediction in Human-Robot Collaboration},
  author={Hao Guo and Wei Fan and Shaohui Liu and Feng Jiang and Chunzhi Yi},
  journal={arXiv preprint arXiv:2506.16677},
  year={2025}
}