Optical coherence tomography (OCT) has been used to investigate heart development because of its capability to image both the structure and function of beating embryonic hearts. Cardiac structure segmentation is a prerequisite for quantifying embryonic heart motion and function with OCT. Because manual segmentation is time-consuming and labor-intensive, an automatic method is needed to enable high-throughput studies. The purpose of this study is to develop an image-processing pipeline that facilitates the segmentation of beating embryonic heart structures from a 4-D OCT dataset. Sequential OCT images were obtained at multiple planes of a beating quail embryonic heart and reassembled into a 4-D dataset using image-based retrospective gating. Multiple image volumes at different time points were selected as key-volumes, and their cardiac structures, including the myocardium, cardiac jelly, and lumen, were manually labeled. Registration-based data augmentation was then used to synthesize additional labeled image volumes by learning transformations between the key-volumes and other, unlabeled volumes. The synthesized labeled images were used to train a fully convolutional network (U-Net) for heart structure segmentation. The proposed deep-learning-based pipeline achieved high segmentation accuracy with only two labeled image volumes and reduced the time needed to segment one 4-D OCT dataset from a week to two hours. Using this method, one could carry out cohort studies that quantify complex cardiac motion and function in developing hearts.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.
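To illustrate the registration-based augmentation step described above, the sketch below warps a manually labeled key-volume onto an unlabeled volume so that its labels can be reused as synthetic training data. It is a minimal sketch only: the choice of SimpleITK, B-spline deformable registration, mutual-information metric, and the function synthesize_labeled_volume are assumptions for illustration and are not the paper's stated implementation (which may instead use a learned registration network).

import SimpleITK as sitk

def synthesize_labeled_volume(key_img, key_lbl, target_img):
    """Warp a labeled key-volume onto an unlabeled target volume.

    key_img, target_img: 3-D OCT intensity volumes (sitk.Image).
    key_lbl: integer label map for key_img (e.g., myocardium, cardiac jelly, lumen).
    Returns the warped image and label map aligned to target_img.
    Hypothetical example; not the authors' code.
    """
    fixed = sitk.Cast(target_img, sitk.sitkFloat32)
    moving = sitk.Cast(key_img, sitk.sitkFloat32)

    # Deformable (B-spline) registration from the key-volume to the target volume.
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    init_tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
    reg.SetInitialTransform(init_tx, inPlace=False)
    tx = reg.Execute(fixed, moving)

    # Apply the learned transform to both the image and its labels;
    # nearest-neighbour interpolation keeps the label values discrete.
    warped_img = sitk.Resample(key_img, target_img, tx, sitk.sitkLinear, 0.0)
    warped_lbl = sitk.Resample(key_lbl, target_img, tx, sitk.sitkNearestNeighbor, 0)
    return warped_img, warped_lbl

Image-label pairs synthesized this way would then serve as the training set for the U-Net segmentation network; the network architecture and training details are beyond the scope of this sketch.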