Standard Ergonomic Risk Assessment (ERA) from video analysis is highly time-consuming and affected by the subjectivity of the ergonomist. Motion Capture (MOCAP) addresses these limitations by enabling objective ERA. Here, a depth camera that is among the most commonly used MOCAP systems for ERA (the Azure Kinect) is used to evaluate the NIOSH Lifting Equation through a tool named AzKNIOSH. First, to validate the tool, we compared its performance with that of a commercial software package, Siemens Jack TAT, based on an Inertial Measurement Unit (IMU) suit, and found high agreement between them. Secondly, a Convolutional Neural Network (CNN) was employed for task recognition, automatically identifying the lifting actions. This procedure was evaluated by comparing the results obtained from manual detection with those obtained through automatic detection. Thus, through automated task detection and the implementation of Auto-AzKNIOSH, we achieved a fully automated ERA.

Practitioner Summary: The standard evaluation of the NIOSH Lifting Equation is time-consuming and subjective; we therefore designed a new automatic tool that integrates motion capture data provided by the Azure Kinect with task recognition. We found high agreement between our tool and the Siemens Jack TAT suit, a gold-standard technology for motion capture.
Keywords: Motion capture; ergonomics; kinect; picking; task recognition.
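For context, the revised NIOSH Lifting Equation computes a Recommended Weight Limit (RWL) as a load constant scaled by a product of multipliers, and a Lifting Index (LI) as the ratio of the actual load to the RWL. A minimal sketch in Python of the metric formulation follows; the frequency and coupling multipliers (FM, CM) are normally read from the NIOSH tables, so here they are simply passed in as arguments, and the function names are illustrative, not taken from AzKNIOSH:

```python
def niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm, cm, lc_kg=23.0):
    """Recommended Weight Limit (kg), metric revised NIOSH Lifting Equation.

    h_cm:  horizontal distance of the hands from the midpoint between the ankles
    v_cm:  vertical height of the hands at the origin of the lift
    d_cm:  vertical travel distance of the lift
    a_deg: asymmetry angle in degrees
    fm, cm: frequency and coupling multipliers (table look-ups in practice)
    """
    hm = min(1.0, 25.0 / h_cm)            # horizontal multiplier, capped at 1
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)   # vertical multiplier
    dm = min(1.0, 0.82 + 4.5 / d_cm)      # distance multiplier, capped at 1
    am = 1.0 - 0.0032 * a_deg             # asymmetry multiplier
    return lc_kg * hm * vm * dm * am * fm * cm

def lifting_index(load_kg, rwl_kg):
    """LI = actual load / RWL; LI > 1 indicates elevated lifting risk."""
    return load_kg / rwl_kg
```

In an automated pipeline of the kind the abstract describes, the geometric inputs (H, V, D, A) would be derived from the tracked joint positions for each lifting action identified by the task-recognition step.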