Quantification of motor symptoms in Parkinson's disease (PD) is crucial for assessing disease progression and for optimizing therapeutic interventions, such as dopaminergic medication and deep brain stimulation. Cumulative clinical experience has heuristically identified various clinical signs associated with PD severity, but these are neither objectively quantifiable nor robustly validated. Video-based objective symptom quantification enabled by machine learning (ML) offers a potential solution. However, video-based diagnostic tools often face implementation challenges due to expensive, inaccessible technology, and typical "black-box" ML implementations are not designed to be clinically interpretable. Here, we address these needs by releasing a comprehensive kinematic dataset and developing an interpretable video-based framework that predicts high versus low PD motor symptom severity according to MDS-UPDRS Part III metrics. This data-driven approach validated and robustly quantified canonical movement features and yielded new clinical insights, identifying features not previously appreciated as related to clinical severity, including pinkie finger movements and lower-limb and axial features of gait. Our framework operates on retrospective, single-view, seconds-long videos recorded on consumer-grade devices such as smartphones, tablets, and digital cameras, thereby eliminating the requirement for specialized equipment. Following interpretable ML principles, the framework enforces robustness and interpretability by integrating (1) automatic, data-driven evaluation of kinematic metrics guided by pre-defined digital features of movement, (2) combination of bi-domain (body and hand) kinematic features, and (3) sparsity-inducing, stability-driven ML analysis with simple-to-interpret models. Together, these elements ensure that the framework quantifies clinically meaningful motor features useful for both ML prediction and clinical analysis.
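The "sparsity-inducing and stability-driven" analysis described above can be illustrated with a minimal sketch: repeatedly subsample the data, fit an L1-penalized (sparse) logistic regression for the high- versus low-severity label, and retain only features selected in most subsamples. This is a generic stability-selection pattern, not the authors' exact pipeline; the synthetic data, feature count, and thresholds below are illustrative assumptions.

```python
# Hypothetical sketch of sparsity-inducing, stability-driven feature selection.
# Synthetic stand-in data: 200 subjects, 20 kinematic features, binary severity label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
# Label depends only on features 0 and 3 (plus noise).
y = (X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=n) > 0).astype(int)

n_rounds, frac = 100, 0.5
counts = np.zeros(p)
for _ in range(n_rounds):
    # Fit a sparse (L1-penalized) model on a random half of the data.
    idx = rng.choice(n, size=int(frac * n), replace=False)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
    clf.fit(X[idx], y[idx])
    counts += (np.abs(clf.coef_[0]) > 1e-8)

# Keep features that survive the L1 penalty in >= 80% of subsamples.
selection_freq = counts / n_rounds
stable = np.where(selection_freq >= 0.8)[0]
print("stably selected features:", stable)
```

Because each retained feature corresponds to a named kinematic metric and the final model is a simple linear classifier, the resulting predictions remain directly interpretable by clinicians.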
© 2024. The Author(s).