Background and objective: Human fatigue is a major cause of road traffic accidents. Widely used fatigue-driving detection methods rely on eyelid closure, vehicle information, or physiological parameters. However, each single feature has limitations that reduce detection accuracy and the feasibility and efficiency of prediction.
Methods: This paper introduces a novel driver fatigue detection framework that fuses facial features, head pose, and photoplethysmography (PPG) signals to build a fatigue detection model. To validate the approach, a real-road driving experiment was conducted, yielding multi-source feature signals from 30 drivers. Using a 68-point facial landmark localization method, we extracted 2D facial and 3D head-pose feature parameters; in addition, five-dimensional heart rate variability (HRV) features were extracted from the PPG signals. These ten-dimensional features were fused to construct a fatigue driving dataset. A Long Short-Term Memory (LSTM) fatigue detection model was then established and tuned with four optimization algorithms: Momentum, RMSprop, Adam, and SGD. For comparison, Decision Tree (DT), Random Forest (RF), and Bidirectional LSTM (BiLSTM) models were also evaluated. Of the dataset, 2880 samples formed the training set and 720 samples served as the test set.
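The pipeline above can be sketched as a minimal LSTM classifier over the fused ten-dimensional feature vectors. This is an illustrative sketch only: the layer sizes, sequence length, learning rate, and class count are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an LSTM fatigue classifier: the input at each time
# step is the fused 10-dimensional feature vector (2D facial + 3D head-pose
# + 5 HRV features); hidden size and sequence length are assumed values.
class FatigueLSTM(nn.Module):
    def __init__(self, input_dim=10, hidden_dim=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        out, _ = self.lstm(x)
        # classify from the last time step's hidden state
        return self.head(out[:, -1, :])

model = FatigueLSTM()
# Adam, the best-performing optimizer in the paper's comparison
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data
x = torch.randn(8, 30, 10)      # 8 sequences, 30 time steps, 10 features
y = torch.randint(0, 2, (8,))   # binary fatigue labels (assumed)
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

Swapping `torch.optim.Adam` for `torch.optim.SGD` (with or without `momentum=`) or `torch.optim.RMSprop` reproduces the kind of optimizer comparison the paper reports.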
Results: The Adam-optimized LSTM fatigue detection model is the most effective, achieving an accuracy of 97.36%, precision of 97.4%, recall of 97.4%, and an F1 score of 0.97. This indicates that the model can provide timely and accurate prediction and warning for drivers who are already fatigued.
Keywords: Driver fatigue; Facial feature; Head posture; LSTM model; PPG signals.
© 2024 The Authors. Published by Elsevier Ltd.