Accurate face alignment and adaptive patch selection for heart rate estimation from videos under realistic scenarios

PLoS One. 2018 May 11;13(5):e0197275. doi: 10.1371/journal.pone.0197275. eCollection 2018.

Abstract

Non-contact heart rate (HR) measurement from facial videos has attracted considerable interest due to its convenience and cost effectiveness. However, accurate and robust HR estimation under various realistic scenarios remains a very challenging problem. In this paper, we develop a novel system which achieves robust and accurate HR estimation under such challenging scenarios. First, to minimize tracking artifacts arising from large head motions and facial expressions, we propose a joint face detection and alignment method which produces alignment-friendly facial bounding boxes with reliable initial facial shapes, facilitating accurate and robust face alignment even in the presence of large pose variations and expressions. Second, unlike most existing methods [1-5], which derive pulse signals from predetermined grid cells (i.e. local patches), our patches are varying-sized triangles generated adaptively to exclude negative effects from non-rigid facial motions. Third, we propose an adaptive patch selection method to choose patches which contain skin regions and are more likely to carry useful information, followed by an independent component analysis, for an accurate HR estimate. Extensive experiments on both public datasets and our own dataset demonstrate that, compared with the state-of-the-art methods [1-3], our method reduces the root mean square error (RMSE) by a large margin, ranging from 12% to 63%, and provides robust and accurate estimation under various challenging scenarios.
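The final step the abstract describes, turning a recovered pulse signal into an HR estimate, is typically done in the frequency domain. The following is a minimal toy sketch of that step only (not the paper's implementation, and omitting the face alignment, triangular patch selection, and ICA stages); the 30 fps sampling rate, the 0.7-4 Hz physiological band, and the function name `estimate_hr_bpm` are illustrative assumptions, not details from the paper.

```python
# Toy sketch: estimate heart rate from a single 1-D pulse signal by
# locating the dominant spectral peak within a plausible HR band.
# Assumptions (not from the paper): 30 fps video, 0.7-4 Hz band
# (42-240 BPM), and a clean pulse signal already extracted upstream.
import numpy as np

def estimate_hr_bpm(signal, fps=30.0, band=(0.7, 4.0)):
    """Return the dominant frequency in the HR band, in beats per minute."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()              # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_freq

# Synthetic 20 s pulse at 72 BPM (1.2 Hz) corrupted by noise.
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1.0 / 30.0)
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
print(round(estimate_hr_bpm(pulse)))
```

The in-band restriction matters in practice: out-of-band peaks from slow illumination drift or high-frequency sensor noise would otherwise dominate the spectrum and corrupt the estimate.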

Publication types

  • Evaluation Study
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Face*
  • Facial Expression
  • Female
  • Head Movements
  • Heart Rate Determination / methods*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Light
  • Male
  • Skin
  • Skin Pigmentation
  • Videotape Recording*

Grants and funding

This work was supported by the National Natural Science Foundation of China grant 61502188 to XY. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.