Machine Learning for Automated Identification of Anatomical Landmarks in Ultrasound Periodontal Imaging

Dentomaxillofac Radiol. 2025 Jan 7:twaf001. doi: 10.1093/dmfr/twaf001. Online ahead of print.

Abstract

Objectives: To identify anatomical landmarks in ultrasound periodontal images and automate the image-based measurement of gingival recession (iGR), gingival height (iGH), and alveolar bone level (iABL) using machine learning.

Methods: We imaged 184 teeth from 29 human subjects. The dataset included 1580 frames for training and validating a U-Net convolutional neural network (CNN), and 250 frames from new teeth, not used in training, for testing generalization performance. The predicted landmarks, including the tooth, gingiva, bone, gingival margin (GM), cementoenamel junction (CEJ), and alveolar bone crest (ABC), were compared with manual annotations. We further demonstrated automated measurement of the clinical metrics iGR, iGH, and iABL.
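The abstract does not give implementation details for the segmentation network. As a minimal sketch only, the following shows a U-Net-style encoder-decoder in PyTorch for per-pixel tissue labeling; the name `TinyUNet`, the depth, the channel counts, and the four-class output (background/tooth/gingiva/bone) are illustrative assumptions, not the authors' architecture.

```python
# Minimal U-Net-style sketch for tissue segmentation in ultrasound frames.
# Hypothetical architecture: depth and channel counts are not from the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=4):  # e.g. background/tooth/gingiva/bone
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                      # full-resolution features
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # per-pixel class logits

# Example: one 256x256 grayscale ultrasound frame
logits = TinyUNet()(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 4, 256, 256])
```

Point landmarks such as GM, CEJ, and ABC could then be derived from the predicted tissue boundaries, with iGR, iGH, and iABL measured as distances between those points.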

Results: Over 98% of predicted GM, CEJ, and ABC distances were within 200 µm of the manual annotations. Bland-Altman analysis revealed biases (machine learning minus ground truth) of -0.1 µm, -37.6 µm, and -40.9 µm, with 95% limits of agreement of [-281.3, 281.0] µm, [-203.1, 127.9] µm, and [-297.6, 215.8] µm for iGR, iGH, and iABL, respectively, relative to manual annotations. On the test dataset, the biases were 167.5 µm, 40.1 µm, and 78.7 µm, with 95% limits of agreement of [-1175, 1510] µm, [-910.3, 990.4] µm, and [-1954, 1796] µm for iGR, iGH, and iABL, respectively.
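For reference, Bland-Altman bias and 95% limits of agreement are conventionally computed as the mean of the paired differences and bias ± 1.96 × SD of those differences. The sketch below illustrates this computation on simulated placeholder measurements in µm; the arrays and noise level are hypothetical, not the study's data.

```python
# Conventional Bland-Altman statistics: bias = mean(auto - manual),
# 95% limits of agreement (LoA) = bias ± 1.96 * SD of the differences.
# The measurement arrays below are illustrative placeholders, not study data.
import numpy as np

def bland_altman(auto_um, manual_um):
    diff = np.asarray(auto_um, float) - np.asarray(manual_um, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                        # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(0)
manual = rng.uniform(500, 3000, size=100)        # simulated manual iGR values, µm
auto = manual + rng.normal(-0.1, 140, size=100)  # simulated model output
bias, (lo, hi) = bland_altman(auto, manual)
print(f"bias = {bias:.1f} µm, 95% LoA = [{lo:.1f}, {hi:.1f}] µm")
```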

Conclusions: The proposed machine learning model demonstrates robust prediction performance and has the potential to improve the efficiency of clinical periodontal diagnosis by automating landmark identification and the measurement of clinical metrics.