Objectives: This study aimed to harmonize panoramic radiographs acquired with different equipment at a single institution so that they display similar image styles.
Methods: A total of 15,624 panoramic images were acquired using two different units: 8079 images from a Rayscan Alpha Plus (R-unit) and 7545 images from a Pax-i plus (P-unit). Among these, 222 image pairs (444 images) from the same patients comprised the test dataset used to harmonize the P-unit images with the R-unit image style using CycleGAN. Objective evaluation employed the Fréchet Inception Distance (FID) and Learned Perceptual Image Patch Similarity (LPIPS). Additionally, two oral and maxillofacial radiologists performed an expert evaluation of the transformed P-unit and R-unit images. LPIPS values were compared statistically using Student's t-test.
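For reference, the FID used here is the standard definition (not specific to this study): it models the Inception-feature distributions of the two image sets as Gaussians with means $\mu_R, \mu_P$ and covariances $\Sigma_R, \Sigma_P$, and measures the distance between them:

$$\mathrm{FID} = \lVert \mu_R - \mu_P \rVert_2^2 + \operatorname{Tr}\!\left(\Sigma_R + \Sigma_P - 2\left(\Sigma_R \Sigma_P\right)^{1/2}\right)$$

A lower FID indicates that the transformed P-unit images are statistically closer to the R-unit image distribution.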
Results: The FID and mean LPIPS values of the transformed P-unit images (7.362, 0.488) were lower than those of the original P-unit images (8.380, 0.519), with a significant difference in LPIPS (p < 0.05). The experts classified 43.3-46.7% of the transformed P-unit images as R-unit images, 20.0-28.3% as P-unit images, and 28.3-33.3% as undetermined.
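The LPIPS comparison above uses a two-sample Student's t-test. A minimal sketch of the pooled-variance statistic, using only the Python standard library and hypothetical data (not the study's measurements):

```python
import math
from statistics import mean, variance

def student_t(a, b):
    """Two-sample Student's t-test statistic with pooled variance.

    Returns the t statistic and degrees of freedom (na + nb - 2).
    Assumes equal population variances, as the classic Student's test does.
    """
    na, nb = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical LPIPS-like values for illustration only
transformed = [0.47, 0.49, 0.48, 0.50]
original = [0.51, 0.53, 0.52, 0.52]
t_stat, df = student_t(transformed, original)
```

The resulting t statistic would then be compared against the t distribution with the returned degrees of freedom to obtain the p-value (e.g. via `scipy.stats`).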
Conclusions: CycleGAN has the potential to harmonize panoramic radiograph image styles. Further enhancement of the model is anticipated to enable its application to images produced by additional units.
Keywords: Computer; Deep learning; Neural networks; Panoramic; Radiographic image enhancement; Radiography.
© 2024. The Author(s) under exclusive licence to Japanese Society for Oral and Maxillofacial Radiology.