Megavoltage computed tomography (MVCT) plays a crucial role in patient positioning and dose reconstruction during tomotherapy. However, due to the limited scan field of view (sFOV), the cross-sections of some patients may not be fully covered, resulting in projection data truncation. Truncation artifacts in MVCT can compromise registration accuracy with the planning kilovoltage computed tomography (KVCT) and hinder subsequent MVCT-based adaptive planning. To address this issue, we propose Prior-FOVNet, which corrects truncation artifacts and produces an extended field of view (eFOV) by leveraging material and shape priors learned from the KVCT of the same patient. Specifically, to address the intensity discrepancies between the two imaging modalities, we employ a contrastive learning-based GAN, named TransNet, to translate KVCT images into synthesized MVCT (sMVCT) images. The sMVCT images, together with MVCT images pre-corrected via sinogram extrapolation, are then fed into a Swin Transformer-based image inpainting network for artifact correction and FOV extension. Experimental results on both simulated and real patient data demonstrate that our method outperforms existing truncation correction techniques in reducing truncation artifacts and reconstructing anatomical structures beyond the sFOV. It achieves the lowest MAE of 23.8 ± 5.6 HU and the highest SSIM of 97.8 ± 0.6 on the test dataset, thereby enhancing the reliability and clinical applicability of MVCT in adaptive radiotherapy.
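To make the two-stage pipeline concrete, the following is a minimal inference sketch in PyTorch. It is not the authors' released implementation: the module and function names (TransNet, SwinInpaintNet, extrapolate_sinogram, prior_fovnet_inference) are hypothetical placeholders, and the network bodies are stand-ins for the actual contrastive-learning GAN and Swin Transformer inpainting network described above.

```python
# Hypothetical sketch of the Prior-FOVNet inference flow; names and
# architectures are placeholders, not the paper's released code.
import torch
import torch.nn as nn


class TransNet(nn.Module):
    """Stand-in for the contrastive learning-based GAN generator (KVCT -> sMVCT)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, kvct):
        return self.body(kvct)


class SwinInpaintNet(nn.Module):
    """Stand-in for the Swin Transformer-based inpainting network.
    Takes the pre-corrected MVCT and the sMVCT prior as two input channels."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, mvct_precorrected, smvct_prior):
        x = torch.cat([mvct_precorrected, smvct_prior], dim=1)
        return self.body(x)


def extrapolate_sinogram(mvct_truncated):
    """Placeholder for sinogram-extrapolation pre-correction; a real
    implementation would extrapolate projections beyond the sFOV before
    reconstruction."""
    return mvct_truncated


def prior_fovnet_inference(kvct, mvct_truncated, transnet, inpaint_net):
    """Full pipeline: translate the KVCT prior to sMVCT, then inpaint the
    pre-corrected MVCT jointly with that prior to extend the FOV."""
    with torch.no_grad():
        smvct = transnet(kvct)                           # modality translation (prior)
        mvct_pre = extrapolate_sinogram(mvct_truncated)  # coarse truncation correction
        return inpaint_net(mvct_pre, smvct)              # artifact correction + FOV extension


if __name__ == "__main__":
    kvct = torch.randn(1, 1, 256, 256)
    mvct = torch.randn(1, 1, 256, 256)
    out = prior_fovnet_inference(kvct, mvct, TransNet(), SwinInpaintNet())
    print(out.shape)  # torch.Size([1, 1, 256, 256])
```

The key design point illustrated here is that the inpainting stage conditions on two inputs: the sinogram-extrapolated MVCT supplies measured anatomy inside the sFOV, while the sMVCT derived from the planning KVCT supplies material and shape priors for the region beyond it.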
Keywords: contrastive learning; field-of-view extension; megavoltage computed tomography; swin transformer; truncation artifacts.