Objective: To evaluate the use of virtual reality-based infrared pupillometry (VIP) to detect individuals suffering from long COVID.
Design: Prospective, case-control cross-sectional study.
Participants: Participants aged 20-60 years were recruited from a community eye screening programme.
Methods: Pupillary light responses (PLR) were recorded in response to 3 intensities of light stimuli (L6, L7 and L8) using a virtual reality head-mounted display (VR-HMD). Nine PLR waveform features per stimulus were extracted by 2 masked observers and statistically analysed. The whole PLR waveform was also analysed by machine learning (ML) models (trained, validated and tested on a 6:3:1 split), including Multi-layer Perceptron, Support Vector Machine, K-Nearest Neighbors, Logistic Regression, Decision Tree, Random Forest and Long Short-Term Memory (LSTM) models, for two- and three-class classification into long COVID (LCVD), post-COVID (PCVD) or control.
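For illustration, a minimal sketch of such a feature-based classification pipeline is shown below, using scikit-learn. The placeholder data, hyperparameters and the stratified 6:3:1 splitting strategy are assumptions for demonstration only, not the authors' actual code.

```python
# Sketch of a feature-based LCVD-vs-control pipeline with a 6:3:1 split.
# `features` and `labels` are placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
features = rng.normal(size=(185, 27))   # placeholder: 9 PLR features x 3 stimuli
labels = rng.integers(0, 2, size=185)   # placeholder binary labels (LCVD/control)

# 6:3:1 split: carve off the 10% test set, then split the remainder 6:3.
X_rest, X_test, y_rest, y_test = train_test_split(
    features, labels, test_size=0.1, stratify=labels, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=1 / 3, stratify=y_rest, random_state=0)
# X_val/y_val would be used for hyperparameter tuning (omitted in this sketch).

models = {
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=2000),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    clf = make_pipeline(StandardScaler(), model).fit(X_train, y_train)
    prob = clf.predict_proba(X_test)[:, 1]
    print(name,
          f"acc={accuracy_score(y_test, clf.predict(X_test)):.4f}",
          f"AUC={roc_auc_score(y_test, prob):.4f}")
```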
Main outcome measures: Accuracy and area under the receiver operating characteristic curve (AUC) of individual PLR features, combinations of PLR features, and ML models using PLR features or the whole pupillometric waveform.
Results: PLRs from a total of 185 subjects, including 112 LCVD, 44 PCVD and 29 age- and sex-matched controls, were analysed. Models examined the independent effects of age and sex. Constriction time (CT) after the brightest stimulus (L8) was significantly associated with LCVD status (two-way ANOVA, false discovery rate (FDR) < 0.001; multinomial logistic regression, FDR < 0.05). The overall accuracy/AUC of CT-L8 alone in differentiating LCVD from control and from PCVD was 0.7808/0.8711 and 0.8654/0.8140, respectively. Using cross-validated backward stepwise variable selection, CT-L8, CT-L6 and constriction velocity (CV)-L6 were most useful for detecting LCVD, while CV-L8 was most useful for distinguishing PCVD from the other groups. The accuracy/AUC of the selected features was 0.8000/0.9000 (control versus LCVD) and 0.9062/0.9710 (PCVD versus LCVD), better than when all 27 pupillometric features were combined. An LSTM model analysing the whole pupillometric waveform achieved the highest accuracy/AUC of 0.9375/1.000 in differentiating LCVD from PCVD, and a slightly lower accuracy of 0.7838 for three-class classification (LCVD-PCVD-control).
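As a sketch of the variable-selection step, scikit-learn's SequentialFeatureSelector can perform cross-validated backward elimination over the 27 pupillometric features. The placeholder data, the logistic-regression base estimator, the AUC scoring and the 5-fold CV are assumptions standing in for the authors' exact procedure.

```python
# Sketch of cross-validated backward stepwise selection over 27 features.
# Data, base estimator and CV scheme are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(185, 27))    # placeholder 27-feature matrix
y = rng.integers(0, 2, size=185)  # placeholder LCVD-vs-control labels

selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=2000),
    direction="backward",            # start from all 27, drop the least useful
    n_features_to_select=3,          # the study retained 3 features for LCVD
    scoring="roc_auc",
    cv=StratifiedKFold(5, shuffle=True, random_state=0),
)
selector.fit(X, y)
print(np.flatnonzero(selector.get_support()))  # indices of retained features
```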
Conclusions: We report, for the first time, specific pupillometric signatures differentiating LCVD from PCVD or control subjects using a VR-HMD. Combining statistical methods to identify specific pupillometric features with ML algorithms to analyse the whole waveform further enhances the performance of VIP as a non-intrusive, low-cost, portable and objective method to detect and monitor long COVID.
Keywords: infrared pupillometry; long COVID; machine learning algorithm; pupillary light response (PLR); virtual reality head-mounted display (VR-HMD).