ResearchTrend.AI

Machine-learning for photoplethysmography analysis: Benchmarking feature, image, and signal-based approaches

27 February 2025
Mohammad Moulaeifard
Loic Coquelin
Mantas Rinkevičius
Andrius Sološenko
Oskar Pfeffer
Ciaran Bench
Nando Hegemann
Sara Vardanega
Manasi Nandi
Jordi Alastruey
Christian Heiss
Vaidotas Marozas
Andrew Thompson
Philip J. Aston
Peter H. Charlton
Nils Strodthoff
Abstract

Photoplethysmography (PPG) is a widely used non-invasive physiological sensing technique suitable for various clinical applications. Such applications are increasingly supported by machine learning methods, raising the question of the most appropriate input representation and model choice. Comprehensive comparisons, in particular across different input representations, are scarce. We address this gap in the research landscape with a comprehensive benchmarking study covering three kinds of input representations (interpretable features, image representations, and raw waveforms) across prototypical regression and classification use cases: blood pressure and atrial fibrillation prediction. In both cases, the best results are achieved by deep neural networks operating on raw time series as input representations. Within this model class, the best results are achieved by modern convolutional neural networks (CNNs), but depending on the task setup, shallow CNNs are often also very competitive. We envision that these results will guide researchers in their choice of machine learning approaches for PPG data, even beyond the use cases presented in this work.
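To illustrate the "raw waveform" input representation that the abstract identifies as most effective, here is a minimal sketch of a shallow 1D CNN applied to a raw signal: one convolutional layer, ReLU, global average pooling, and a linear output for a binary label (e.g. atrial fibrillation vs. not). This is not the paper's architecture; all filter counts, kernel sizes, and the toy input are illustrative assumptions.

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid-mode 1D convolution: x of shape (T,), kernels of shape (K, W) -> (K, T')."""
    K, W = kernels.shape
    out_len = (len(x) - W) // stride + 1
    out = np.empty((K, out_len))
    for i in range(out_len):
        seg = x[i * stride : i * stride + W]   # current receptive-field window
        out[:, i] = kernels @ seg              # all K filter responses at once
    return out

def shallow_cnn_logit(x, kernels, w_out, b_out=0.0):
    """One conv layer -> ReLU -> global average pooling -> scalar logit."""
    h = np.maximum(conv1d(x, kernels), 0.0)    # ReLU feature maps, (K, T')
    pooled = h.mean(axis=1)                    # global average pool over time, (K,)
    return float(pooled @ w_out + b_out)

# Toy PPG-like waveform and randomly initialized (untrained) parameters.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 1000))    # 1000-sample pseudo-periodic signal
kernels = rng.normal(size=(8, 25))             # 8 filters, 25-sample receptive field
w_out = rng.normal(size=8)

logit = shallow_cnn_logit(x, kernels, w_out)
prob = 1.0 / (1.0 + np.exp(-logit))            # sigmoid -> class probability
```

In a real pipeline the kernels and output weights would be trained by gradient descent on labeled PPG windows; the point of the sketch is only that the raw time series enters the model directly, with no hand-crafted features or image conversion in between.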

View on arXiv
@article{moulaeifard2025_2502.19949,
  title={Machine-learning for photoplethysmography analysis: Benchmarking feature, image, and signal-based approaches},
  author={Mohammad Moulaeifard and Loic Coquelin and Mantas Rinkevičius and Andrius Sološenko and Oskar Pfeffer and Ciaran Bench and Nando Hegemann and Sara Vardanega and Manasi Nandi and Jordi Alastruey and Christian Heiss and Vaidotas Marozas and Andrew Thompson and Philip J. Aston and Peter H. Charlton and Nils Strodthoff},
  journal={arXiv preprint arXiv:2502.19949},
  year={2025}
}