Purpose: Subject motion during positron emission tomography (PET) brain scans can reduce image quality and may lead to incorrect biological outcome measures, especially for data acquired with high-resolution tomographs. A semiautomatic method is proposed and evaluated for assessing the quality of the frame-to-frame image realignments used to compensate for subject motion in dynamic brain PET.
Methods: A test set of 256 11C-raclopride (a dopamine D2-type receptor antagonist) brain PET image frames was used to develop and evaluate the proposed method. The transformation matrix to be applied to each image to achieve a frame-to-frame realignment was calculated with two independent methods: using motion data measured with the Polaris Vicra optical tracking device, and using the image-based realignment algorithm AIR (automated image registration). The quality assessment method is based on the observation that two independent approaches to motion detection are very unlikely to produce equal but incorrect results. Agreement between the transformation matrices was therefore taken as a signature of an accurate motion determination, and thus of an accurate realignment. Each pair of realignment matrices was compared and used to calculate a metric describing the frame-to-frame image realignment accuracy. To determine the range of metric values corresponding to a successful realignment, the metric was compared with a detailed visual inspection of each frame-to-frame realigned image in the test set. The acceptance threshold on the metric was then selected to maximize the number of true positives (realignments accepted by both the protocol and the operator) while minimizing the number of false positives (realignments accepted by the protocol but not the operator).
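The abstract does not specify the exact form of the agreement metric, so the following Python/NumPy sketch is illustrative only. It uses one common way to summarize the disagreement between two rigid-body transforms: the mean displacement of sample points within a brain-sized sphere under the residual transform (one estimate composed with the inverse of the other), together with a simple threshold sweep against operator labels. The metric form, the 80 mm sphere radius, the 2 mm threshold, and the scoring rule are all assumptions, not the published protocol.

```python
import numpy as np

def mean_point_displacement(T_vicra, T_air, radius_mm=80.0, n_points=1000, seed=0):
    """Summarize disagreement between two 4x4 rigid-body transform estimates
    as the mean displacement (mm) of points sampled inside a brain-sized
    sphere under the residual transform inv(T_air) @ T_vicra.

    NOTE: metric form, radius, and sampling are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    # Sample points uniformly within a sphere of the given radius.
    pts = rng.normal(size=(n_points, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    pts *= radius_mm * rng.random((n_points, 1)) ** (1.0 / 3.0)
    # Homogeneous coordinates so the 4x4 matrices apply directly.
    homog = np.hstack([pts, np.ones((n_points, 1))])
    # Residual transform: the identity if the two methods agree exactly.
    residual = np.linalg.inv(T_air) @ T_vicra
    moved = homog @ residual.T
    return float(np.mean(np.linalg.norm(moved[:, :3] - pts, axis=1)))

def choose_threshold(metrics, operator_ok, candidates):
    """Pick the acceptance threshold by sweeping candidate values against
    operator (visual-inspection) labels: reward true positives, penalize
    false positives. The scoring rule here is an assumed stand-in for the
    paper's trade-off.
    """
    best_t, best_score = None, -np.inf
    for t in candidates:
        accepted = metrics <= t
        tp = np.sum(accepted & operator_ok)
        fp = np.sum(accepted & ~operator_ok)
        score = tp - fp
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def accept_realignment(T_vicra, T_air, threshold_mm=2.0):
    """Accept a frame realignment when the two independent transform
    estimates agree within the tolerance (default threshold is assumed)."""
    return mean_point_displacement(T_vicra, T_air) <= threshold_mm
```

In practice, the threshold would be fixed once by running `choose_threshold` over the labeled test set and then applied prospectively; frames failing the check would be routed to visual inspection rather than rejected outright.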
Results: The proposed method categorized 53% of the image realignments in the test dataset as successful, of which 11% were incorrectly categorized (6% of the total dataset). Using the proposed assessment tool reduced operator time by 45% compared with the same visual inspection applied to all image realignments.
Conclusions: The frame-to-frame image realignment assessment tool presented here required less operator time to evaluate realignment success than a method requiring visual inspection of all realigned images, while maintaining the same level of accuracy in the realigned dataset. This practical method can be easily implemented at any center with motion-monitoring capabilities or, for centers lacking this technology, with methods of estimating image realignment parameters that use independent information. In addition, the procedure is flexible, allowing modifications to be made for different tracer types and/or downstream analysis goals.