Objective. The mean squared error (MSE), also known as the L2 loss, has been widely used as a loss function to optimize image denoising models because it is an effective mean estimator under a Gaussian noise model. Recently, various low-dose computed tomography (LDCT) image denoising methods combining deep learning with the MSE loss have been developed; however, this approach suffers from the regression-to-the-mean problem, leading to over-smoothed edges and degraded texture in the image.

Approach. To overcome this issue, we propose adding a stochastic term to the loss function to improve the texture of denoised CT images, rather than relying on complicated networks or feature-space losses. The proposed loss function combines the MSE loss, which learns the mean distribution, with a Pearson divergence loss, which learns feature textures. Specifically, the Pearson divergence loss is computed in image space as the distance between the intensity measures of denoised low-dose and normal-dose CT images. We evaluate the proposed model with a novel multi-metric quantitative analysis based on relative texture feature distance.

Results. Our experimental results show that the proposed Pearson divergence loss yields a significant improvement in texture over the conventional MSE loss and a generative adversarial network (GAN), both qualitatively and quantitatively.

Significance. Consistent texture preservation in LDCT denoising is difficult for conventional GAN-based methods because adversarial training must trade off noise reduction against texture preservation. By incorporating the Pearson regularizer into the loss function, we can easily balance these two conflicting properties. Consistently high-quality CT images can significantly help clinicians in diagnosis and support researchers in developing AI diagnostic models.
Keywords: Pearson divergence; deep learning; denoising; low-dose CT; texture.
© 2024 Institute of Physics and Engineering in Medicine.
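To make the Approach concrete, the sketch below shows one plausible way to combine an MSE term with a Pearson (chi-squared) divergence, D(P||Q) = E_Q[(p/q - 1)^2], computed between differentiable intensity histograms of the denoised and normal-dose images. This is a minimal illustration, not the paper's exact formulation: the soft-histogram binning, kernel width `sigma`, bin count, and weight `lam` are all hypothetical choices introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def soft_histogram(x, bins=64, vmin=0.0, vmax=1.0, sigma=0.01):
    """Differentiable intensity histogram via Gaussian kernel binning.
    The bin count and kernel width are illustrative assumptions."""
    centers = torch.linspace(vmin, vmax, bins, device=x.device)
    # (num_pixels, bins): soft assignment weight of each pixel to each bin.
    diff = x.reshape(-1, 1) - centers.reshape(1, -1)
    weights = torch.exp(-0.5 * (diff / sigma) ** 2)
    hist = weights.sum(dim=0)
    return hist / (hist.sum() + 1e-12)  # normalize to a probability measure

def pearson_divergence(p, q, eps=1e-12):
    """Pearson (chi-squared) divergence D(P||Q) = E_Q[(p/q - 1)^2]."""
    ratio = p / (q + eps)
    return torch.sum(q * (ratio - 1.0) ** 2)

def combined_loss(denoised, target, lam=0.1):
    """MSE learns the mean; the Pearson term matches intensity measures
    of the denoised and normal-dose images in image space."""
    mse = F.mse_loss(denoised, target)
    p = soft_histogram(denoised)
    q = soft_histogram(target)
    return mse + lam * pearson_divergence(p, q)

# Example usage on dummy image tensors (batch, channel, height, width):
denoised = torch.rand(1, 1, 64, 64)
target = torch.rand(1, 1, 64, 64)
loss = combined_loss(denoised, target)
```

Because both terms operate directly on image intensities, the regularizer weight `lam` provides a single knob for trading off mean accuracy (noise reduction) against texture fidelity, in line with the balance described in the Significance section.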