In clinical practice, functional limitations in patients with low back pain are assessed subjectively, potentially leading to misdiagnosis and prolonged pain. This paper proposes an objective deep learning (DL) markerless motion capture system that uses a red-green-blue-depth (RGB-D) camera to measure spine kinematics during flexion-extension (FE) through: 1) the development and validation of a DL semantic segmentation algorithm that segments the back and pelvis into four anatomical classes, and 2) the development and validation of a framework that uses these segmentations to measure spine kinematics during FE. Twenty participants performed ten cycles of FE with drawn-on point markers while being recorded with an RGB-D camera; five of these participants also performed an additional trial in which they were recorded with an optical motion capture (OPT) system. The DL algorithm was trained to segment the back and pelvis into four anatomical classes: upper back, lower back, spine, and pelvis. A kinematic framework was then developed to refine these segmentations into upper-spine, lower-spine, and pelvis masks, from which spine kinematics were measured after obtaining the 3D global coordinates of the mask corners. The segmentation algorithm achieved high accuracy, and the root mean square error (RMSE) between ground-truth and predicted lumbar kinematics was < 4°; between markerless and OPT kinematics, RMSE values were < 6°. This work demonstrates the feasibility of using markerless motion capture to assess FE spine movement in clinical settings. Future work will expand the studied movement directions and test the system on different demographics.
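The abstract reports agreement between kinematic time series as RMSE values (e.g., < 4° against ground truth, < 6° against OPT). As a minimal sketch of how such a metric is typically computed, assuming two aligned lumbar-angle time series in degrees (the sample values below are hypothetical, not data from the study):

```python
import numpy as np

def rmse(predicted, ground_truth):
    """Root mean square error between two aligned kinematic angle series (degrees)."""
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean((predicted - ground_truth) ** 2)))

# Hypothetical lumbar flexion angles (degrees) sampled over one FE cycle
markerless = [0.0, 15.2, 30.1, 44.8, 30.5, 15.0, 0.3]
opt        = [0.0, 14.0, 28.9, 46.1, 31.7, 14.2, -0.4]

print(rmse(markerless, opt))
```

This assumes the two systems' outputs have already been temporally synchronized and resampled to a common rate, which is a standard preprocessing step when comparing markerless and optical motion capture.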
Keywords: Deep Learning; RGB-D; Spine Kinematics.
Copyright © 2024 Elsevier Ltd. All rights reserved.