Objective: To evaluate the accuracy and efficiency of spine 2D/3D registration between preoperative CT and intraoperative X-ray images using a single-vertebra navigation registration framework based on the fusion of dual-position image features.

Methods: Preoperative CT and intraoperative anteroposterior (AP) and lateral (LAT) X-ray images of 140 patients with lumbar spine disease treated at Huashan Hospital, Fudan University, from January 2020 to December 2023 were retrospectively selected. To achieve fast, high-precision single-vertebra registration in clinical orthopedic surgery, a purpose-designed transformation-parameter feature extraction module, combined with a lightweight convolutional block attention module (CBAM) applying channel and spatial attention, accurately extracted the local transformation information of the single vertebra. A fusion regression module then combined the complementary features of the AP and LAT images to improve the accuracy of the regressed registration parameters, and two 1×1 convolutions reduced the parameter count, improving computational efficiency and shortening intraoperative registration time. Finally, the regression module output the transformation parameters (an illustrative sketch of such a dual-position network follows this abstract). Traditional iterative methods (Opt-MI, Opt-NCC, Opt-C2F) and an existing deep learning method based on a convolutional neural network (CNN) served as controls, and the mean reprojection distance (mRPD; see the formula following this abstract), registration time, and registration success rate were compared across all methods.

Results: Experiments on real CT data verified the image-guided registration accuracy of the proposed method, which achieved an mRPD of (0.81±0.41) mm, a rotational angle error of 0.57°±0.24°, and a translation error of (0.41±0.21) mm. In comparisons across mainstream backbone models, the selected DenseNet achieved significantly better registration accuracy than ResNet and VGG (both P<0.05). Compared with the existing deep learning method [mRPD: (2.97±0.99) mm, rotational angle error: 2.64°±0.54°, translation error: (2.15±0.41) mm, registration time: (0.03±0.05) s], the proposed method significantly improved registration accuracy (all P<0.05). Its registration success rate reached 97%, with a mean single-registration time of only (0.04±0.02) s; compared with the traditional iterative methods [mRPD: (0.78±0.26) mm, rotational angle error: 0.84°±0.57°, translation error: (1.05±0.28) mm, registration time: (35.5±10.5) s], registration efficiency was significantly improved (all P<0.05). The dual-position design also compensated for the limitations of a single viewpoint, yielding significantly smaller transformation-parameter errors than either the AP or LAT single view alone (both P<0.05).

Conclusion: Compared with existing methods, the proposed CT/X-ray registration method substantially reduces registration time while maintaining high registration accuracy, achieving efficient and precise single-vertebra registration.
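Note: As referenced in Methods, below is a minimal PyTorch sketch of a dual-position fusion regression network of the kind described above: per-view feature extraction followed by CBAM (channel then spatial attention), two 1×1 convolutions to cut the channel count before fusion, and a head regressing six rigid-body parameters. The backbone, channel widths, and head dimensions are illustrative assumptions (the paper's model uses a DenseNet backbone whose exact configuration the abstract does not specify), so treat this as a sketch of the general technique, not the authors' implementation.

```python
# Hypothetical sketch of dual-position (AP + LAT) fusion regression.
# Backbone, channel widths, and head sizes are assumptions, not the paper's spec.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: conv over channel-wise avg/max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                        # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))              # spatial attention

class DualViewRegressor(nn.Module):
    """Extract per-view features, fuse AP and LAT, regress 6-DoF parameters."""
    def __init__(self, feat_ch=256):
        super().__init__()
        # Stand-in feature extractor; the paper reports a DenseNet backbone.
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
                nn.Conv2d(64, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                CBAM(feat_ch),
            )
        self.ap_branch, self.lat_branch = branch(), branch()
        # Two 1x1 convolutions reduce channels before fusion, cutting computation.
        self.reduce_ap = nn.Conv2d(feat_ch, feat_ch // 4, 1)
        self.reduce_lat = nn.Conv2d(feat_ch, feat_ch // 4, 1)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch // 2, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 6),  # 3 rotations + 3 translations
        )

    def forward(self, ap, lat):
        f = torch.cat([self.reduce_ap(self.ap_branch(ap)),
                       self.reduce_lat(self.lat_branch(lat))], dim=1)
        return self.head(f)

# Example: one AP/LAT image pair at an assumed 256x256 resolution.
model = DualViewRegressor()
params = model(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256))
print(params.shape)  # torch.Size([1, 6])
```

The two 1×1 convolutions mirror the abstract's efficiency argument: they shrink the channel dimension of each view's feature map before fusion, so the regression head operates on a much smaller tensor.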
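Note: For context, the mean reprojection distance reported above is commonly defined, following the standardized 2D/3D registration evaluation convention, roughly as below; the abstract does not state the paper's exact target-point protocol, so take this as the generic form:

\[
\mathrm{mRPD} = \frac{1}{N}\sum_{i=1}^{N}\left\| \Pi\!\left(T_{\mathrm{est}}\,p_i\right) - \Pi\!\left(T_{\mathrm{gt}}\,p_i\right) \right\|_2
\]

where the \(p_i\) are \(N\) target points on the vertebra, \(T_{\mathrm{est}}\) and \(T_{\mathrm{gt}}\) are the estimated and ground-truth rigid transforms, \(\Pi\) denotes projection along the X-ray imaging geometry, and distances are reported in millimetres.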