In skull base surgery, acquiring intraoperative facial point clouds for spatial registration with a tracked probe or a handheld 3D scanner presents several issues: manual manipulation is inefficient and yields poor consistency, traditional point-cloud registration algorithms depend heavily on the initial pose, and the complexity of these algorithms further extends the required time. To address these issues, we used an RGB-D camera to capture facial point clouds in real time during surgery. Initial registration between the 3D model reconstructed from preoperative CT/MR images and the intraoperatively acquired point cloud is accomplished through corresponding facial landmarks. Intraoperative facial point clouds often contain rotations introduced by the free-angle camera. Benefiting from the close spatial-geometric relationship between head pose and facial landmark coordinates, we propose a facial landmark localization network assisted by head pose estimation. The shared-representation head pose estimation module boosts network performance by enhancing its perception of global facial features. The proposed network localizes landmarks in both preoperative and intraoperative point clouds, enabling rapid automatic registration. A free-view facial landmark dataset, 3D-FVL, was synthesized from clinical CT images for training. The proposed network achieves leading localization accuracy and robustness on two public datasets and on 3D-FVL. In clinical experiments with the Artec Eva scanner, the trained network reduced the average registration time to 0.28 s with an average registration error of 2.33 mm. The proposed method significantly reduces registration time while meeting the clinical accuracy requirements of surgical navigation. Our research will help improve the efficiency and quality of skull base surgery.
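The abstract states that initial registration is computed from corresponding facial landmarks in the preoperative model and the intraoperative point cloud, but does not give the solver. As a non-authoritative illustration only, the sketch below uses the standard closed-form Kabsch/SVD solution for rigid alignment of corresponding 3D landmarks; the landmark arrays are hypothetical stand-ins for the network's outputs, not data from the paper.

```python
# Minimal sketch: landmark-based rigid initial registration via Kabsch/SVD.
# Landmark arrays are placeholders; in the described pipeline they would come
# from the facial landmark localization network (preoperative vs. intraoperative).
import numpy as np


def rigid_registration(src: np.ndarray, dst: np.ndarray):
    """Estimate rotation R and translation t minimizing ||R @ src_i + t - dst_i||.

    src, dst: (N, 3) arrays of corresponding facial landmarks.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t


# Hypothetical usage: five corresponding landmarks, intraoperative cloud rotated
# and translated relative to the preoperative CT/MR model (units in mm).
preop_landmarks = np.random.rand(5, 3) * 100.0
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
intraop_landmarks = preop_landmarks @ R_true.T + np.array([10.0, -5.0, 3.0])

R, t = rigid_registration(preop_landmarks, intraop_landmarks)
aligned = preop_landmarks @ R.T + t
print("RMS landmark error (mm):",
      np.sqrt(((aligned - intraop_landmarks) ** 2).sum(axis=1).mean()))
```

Because the solution is closed-form, its runtime is negligible compared with iterative point-cloud registration, which is consistent with the sub-second registration times reported in the abstract.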
Keywords: Computer-assisted surgery; Deep learning; Facial landmark detection; Marker-less registration.
Copyright © 2025 Elsevier Ltd. All rights reserved.