Deformation field correction for spatial normalization of PET images using a population-derived partial least squares model
Deformable medical image registration is essential for aligning a population of images, performing voxelwise association studies, and tracking longitudinal changes. While within-modality spatial normalization of structural medical images has been studied extensively, work on anatomically accurate spatial normalization of positron emission tomography (PET) images remains limited. Anatomical alignment of PET images is a difficult problem because they reflect metabolism and function rather than anatomy, the observed intensities depend on the amount of radiotracer administered, and the spatial detail is confounded by radiotracer spillover. In this study, we present a method for the spatial normalization of PET images without a corresponding structural image, based on a deformation correction model learned from structural image registration.
Our method is based on the observation that PET-to-PET registration produces deformations that are systematically biased in certain regions, and that these biases can be characterized as a function of location and estimated within small neighborhoods. The correction operates on the PET-to-PET deformation fields produced by a deformable registration algorithm and uses partial least squares (PLS) regression models, learned from a population of subjects, that relate the local PET intensities and deformation fields to the corresponding structural imaging deformation fields. The learned relationship between the deformation fields accounts for the anatomical inaccuracies present in the alignment of PET images, while the use of PET intensity information accommodates inter-subject variability in radiotracer binding due to differences in physiology.
To construct our model, we need the deformation fields that bring both the PET images and their structural counterparts to a common template. The model is then trained using the resulting deformation fields for the PET and structural images, together with the warped PET image intensities, yielding a correction that can be applied to PET deformation fields.
The process starts with image template generation. To create an anatomically accurate PET template image, we rely on the associated structural images. The structural images are first rigidly co-registered with the subject PET images and then affinely registered to a common space. The affinely co-registered structural images are used to create a structural population template image. The affine transformations and diffeomorphisms obtained from the structural template construction are then applied to the corresponding PET images in order to bring them into the same template space. The PET template is defined as the mean of the spatially normalized PET images.
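The final averaging step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the PET volumes have already been warped into the template space and loaded as same-shape NumPy arrays, and the function name is hypothetical.

```python
import numpy as np

def build_pet_template(warped_pet_images):
    """Define the PET template as the voxelwise mean of a list of
    spatially normalized PET volumes (all arrays share one shape)."""
    stack = np.stack(warped_pet_images, axis=0)  # (n_subjects, X, Y, Z)
    return stack.mean(axis=0)

# Toy example: three uniform 4x4x4 "volumes" with values 1, 2, 3.
vols = [np.full((4, 4, 4), v, dtype=float) for v in (1.0, 2.0, 3.0)]
template = build_pet_template(vols)  # uniformly 2.0
```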
Computing a training set happens next. Using a set of subjects for whom both a structural image and a PET image are available, we perform deformable registration to map the PET images onto the PET template. For each subject in the training data, the deformable registration consists of an affine transformation followed by a diffeomorphic mapping, yielding the PET image registered onto the PET template. Constraining the affine transformation to be the same as that obtained from the PET-to-PET registration, we then perform a second registration to find the deformation field that must be applied to the structural image.
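The diffeomorphic mappings in this step resample an image through a dense displacement field. As a hedged sketch of that operation (in 2D for brevity, using SciPy rather than any registration package named in the paper, with a hypothetical function name):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, displacement):
    """Apply a dense displacement field u to a 2D image.

    displacement has shape (2, H, W): the warped image samples the
    input at x + u(x) for each output voxel x, the usual convention
    when a deformable registration returns a displacement field.
    """
    h, w = image.shape
    grid = np.mgrid[0:h, 0:w].astype(float)  # identity coordinates
    coords = grid + displacement             # x + u(x)
    return map_coordinates(image, coords, order=1, mode="nearest")

# Sanity check: a zero displacement field leaves the image unchanged.
img = np.arange(16, dtype=float).reshape(4, 4)
u = np.zeros((2, 4, 4))
warped = warp_image(img, u)
```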
We then perform the model training. The goal is to train a model at each voxel describing the relationship between the estimated PET deformation field and the structural image deformation field.
We compared our method against direct PET-to-PET template registration and an implementation of [J. Fripp et al.] that modifies the PET template according to a whole-brain PCA model following an affine registration of the subject's PET image, then uses the modified template to perform the deformable registration. Ventricle size is overestimated by both the PET-to-PET method and the method of [J. Fripp et al.], whereas our method results in a better registration, as shown in the difference image (Figure 1). The putamen, a structure that exhibits higher activity in the PET image and thus causes spillover, is also better aligned by our method. In Figure 2 we compare the root mean square (RMS) error of the deformation fields; our method achieves the lowest overall RMS error.
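The RMS error metric above can be written out as follows. This is a minimal sketch assuming the estimated and reference displacement fields are NumPy arrays of shape (3, X, Y, Z); the RMS is taken over the per-voxel Euclidean error magnitudes.

```python
import numpy as np

def deformation_rms_error(estimated, reference):
    """RMS of the per-voxel Euclidean error between two displacement
    fields of shape (3, X, Y, Z)."""
    diff = estimated - reference       # componentwise error
    sq_mag = (diff ** 2).sum(axis=0)   # squared error magnitude per voxel
    return float(np.sqrt(sq_mag.mean()))

# A constant 1-voxel offset along each axis gives error sqrt(3) everywhere.
ref = np.zeros((3, 4, 4, 4))
est = np.ones((3, 4, 4, 4))
rms = deformation_rms_error(est, ref)  # sqrt(3) ≈ 1.732
```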
To assess the accuracy of anatomical alignment, the FreeSurfer [R.S. Desikan] segmentations of the original MPRAGE images were brought into the template space by applying the mappings from the previously performed registrations. Using the FreeSurfer labels, we calculated the Dice coefficients [L.R. Dice]. Table 1 shows the summary statistics of the Dice coefficients for the major brain tissue types, and Figure 3 shows the Dice coefficient box plots for cortical regions. Our method consistently achieves higher Dice coefficients than either of the methods compared against.
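For reference, the Dice coefficient between two binary label masks is 2|A∩B| / (|A| + |B|); a minimal sketch, assuming the warped segmentations are available as NumPy arrays:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary label masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define overlap as perfect
    return 2.0 * np.logical_and(a, b).sum() / denom

# Half-overlapping masks: 8 voxels each, 4 shared -> Dice = 8/16 = 0.5
a = np.zeros((4, 4), dtype=int); a[:, :2] = 1
b = np.zeros((4, 4), dtype=int); b[:, 1:3] = 1
d = dice_coefficient(a, b)  # 0.5
```

Per-region scores are obtained by computing this over each FreeSurfer label in turn.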
We presented a deformation correction method that can improve the anatomical alignment of PET images in PET-to-PET registration. Cross validation results show that our deformation correction method reduces the deformation field error and improves the anatomical alignment of PET images as evidenced by the higher Dice coefficients calculated using the deformed segmentations.
The improvement in anatomical alignment across multiple brain regions achieved by our method points to the systematic errors present in PET-to-PET registration. Our method compensates for those errors by learning locally from the structural image registrations. While we used SyN for registration, the presented method can be applied to any deformable PET-to-PET registration method.
- A. Carass, J. Cuzzocreo, M.B. Wheeler, P.L. Bazin, S.M. Resnick, J.L. Prince, "Simple paradigm for extra-cerebral tissue removal: algorithm and analysis", NI, 56(4):1982-1992, 2011.
- J. Fripp, P. Bourgeat, O. Acosta, G. Jones, V. Villemagne, S. Ourselin, C. Rowe, O. Salvado, "Generative atlases and atlas selection for C11-PIB PET-PET registration of elderly mild cognitive impaired and Alzheimer disease patients", Fifth IEEE International Symposium on Biomedical Imaging (ISBI 2008), Paris, France, May 14-17, 2008.
- L.R. Dice, "Measures of the amount of ecologic association between species", Ecology, 26(3): 297-302, 1945.
- R.S. Desikan, B. Fischl, B.T. Quinn, B.C. Dickerson, D. Blacker, R.L. Buckner, A.M. Dale, R.P. Maguire, B.T. Hyman, M.S. Albert, R.J. Killiany, "An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest", NI, 31(3):968-980, 2006.