Background: Human brown adipose tissue (BAT), mostly located in the cervical/supraclavicular region, is a promising target in obesity treatment. Magnetic resonance imaging (MRI) allows quantitative mapping of fat content. However, the complex, heterogeneous distribution of BAT has made it difficult to establish a standardized segmentation routine based on magnetic resonance (MR) images. Here, we propose a multi-modal deep neural network to detect the supraclavicular fat pocket.
Methods: A total of 50 healthy subjects [median age 36 years, median body mass index (BMI) 24.3 kg/m²] underwent MRI scans of the neck region on a 3 T Ingenia scanner (Philips Healthcare, Best, the Netherlands). Manual segmentations following fixed rules for anatomical borders served as ground-truth labels. A deep learning-based method, termed BAT-Net, was proposed for the segmentation of BAT on MRI scans. It jointly leverages two-dimensional (2D) and three-dimensional (3D) convolutional neural network (CNN) architectures to efficiently encode multi-modal and 3D context information from multi-modal MRI scans of the supraclavicular region. We compared the performance of BAT-Net against 2D U-Net and 3D U-Net baselines. For 2D U-Net, we analyzed the effect of the slicing plane by training separate networks on axial, coronal, and sagittal slices, denoted as 2D U-Net (axial), 2D U-Net (coronal), and 2D U-Net (sagittal).
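The joint use of 2D and 3D CNN branches can be illustrated with a minimal sketch. This is a hypothetical toy, not the published BAT-Net: the class name `TwoThreeDFusion`, the channel counts, and fusion by simple concatenation are all assumptions made for illustration. A shared 2D convolution is applied slice-wise, a 3D convolution is applied to the whole volume, and the two feature maps are concatenated before a per-voxel prediction:

```python
import torch
import torch.nn as nn


class TwoThreeDFusion(nn.Module):
    """Illustrative 2D+3D fusion sketch (NOT the published BAT-Net)."""

    def __init__(self, in_ch=2, feat=8):
        super().__init__()
        # 2D branch: one shared convolution applied to every axial slice
        self.enc2d = nn.Conv2d(in_ch, feat, kernel_size=3, padding=1)
        # 3D branch: convolution over the full multi-modal volume
        self.enc3d = nn.Conv3d(in_ch, feat, kernel_size=3, padding=1)
        # fuse concatenated features into a per-voxel foreground probability
        self.head = nn.Conv3d(2 * feat, 1, kernel_size=1)

    def forward(self, x):  # x: (batch, channels, depth, height, width)
        b, c, d, h, w = x.shape
        # fold the depth axis into the batch so Conv2d sees individual slices
        slices = x.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        f2d = self.enc2d(slices).reshape(b, d, -1, h, w).permute(0, 2, 1, 3, 4)
        f3d = self.enc3d(x)
        return torch.sigmoid(self.head(torch.cat([f2d, f3d], dim=1)))


# Toy input: two MR contrasts, 4 slices of 16x16 voxels
x = torch.randn(1, 2, 4, 16, 16)
print(TwoThreeDFusion()(x).shape)  # torch.Size([1, 1, 4, 16, 16])
```

The intuition is that the 2D branch captures fine in-plane detail while the 3D branch supplies through-plane context; the actual BAT-Net fusion strategy is described in the full paper.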
Results: The proposed model achieved an average Dice similarity coefficient (DSC) of 0.878 with a standard deviation of 0.020. The segmented volume was on average 9.20 mL smaller than the ground-truth labels, with a mean absolute increase in proton density fat fraction (PDFF) inside the segmented regions of 1.19 percentage points. BAT-Net outperformed all implemented 2D U-Nets and the 3D U-Net, with average DSC improvements ranging from 0.016 to 0.023.
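The DSC reported above measures voxel-wise overlap between a predicted and a ground-truth binary mask, DSC = 2|A ∩ B| / (|A| + |B|). A minimal reference implementation is sketched below; the function name and the toy masks are illustrative, not study data:

```python
import numpy as np


def dice_similarity_coefficient(pred, truth):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # convention: two empty masks count as a perfect match
    return 2.0 * intersection / total if total > 0 else 1.0


# Toy 3D masks (illustrative only)
truth = np.zeros((4, 4, 4), dtype=bool)
truth[1:3, 1:3, 1:3] = True            # 8 voxels
pred = np.zeros_like(truth)
pred[1:3, 1:3, 1:4] = True             # 12 voxels, 8 overlapping
print(dice_similarity_coefficient(pred, truth))  # 2*8/(12+8) = 0.8
```

A DSC of 1.0 indicates perfect overlap; values around 0.88, as reported here, indicate close but imperfect agreement with the manual ground truth.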
Conclusions: The current work integrates deep neural network-based segmentation into an automated pipeline for delineating the supraclavicular fat depot for quantitative evaluation of BAT. Experiments show that the presented multi-modal method benefits from leveraging both 2D and 3D CNN architectures and outperforms the independent use of 2D or 3D networks. Deep learning-based segmentation methods show potential for fully automated segmentation of the supraclavicular fat depot.
Keywords: Human brown adipose tissue (human BAT); automated medical image segmentation; convolutional neural network (CNN); deep neural network.
2023 Quantitative Imaging in Medicine and Surgery. All rights reserved.