Current Transformer architectures utilize the self-attention mechanism to model global contextual relevance within an image, which has benefited medical image registration. However, the use of Transformers for large-deformation lung CT registration remains relatively straightforward: existing models focus only on single-image feature representation and neglect to employ attention mechanisms to capture cross-image correspondence, which hinders further improvement in registration performance. To address these limitations, we propose a cascaded registration method, the Cascaded Swin Deformable Cross-Attention Transformer-based U-shaped structure (SD-CATU), for large-deformation lung CT registration. In SD-CATU, we introduce a Cross-Attention-based Transformer (CAT) block that incorporates the Shifted-Regions Multi-head Cross-Attention (SR-MCA) mechanism to flexibly exchange feature information while reducing computational complexity. In addition, a consistency constraint in the loss function ensures topology preservation and inverse consistency of the transformations. Experiments on public lung datasets demonstrate that the Cascaded SD-CATU outperforms current state-of-the-art registration methods (Dice similarity coefficient of 93.19% and target registration error of 0.98 mm). The results further highlight its potential to achieve excellent registration accuracy while maintaining desirable smoothness and consistency in the deformed images.
Keywords: Cross attention; Inverse consistency; Lung CT registration; Transformer.
© 2024. The Author(s) under exclusive licence to Society for Imaging Informatics in Medicine.