Deformation field correction for spatial normalization of PET images using a population-derived partial least squares model



Murat Bilgel, Aaron Carass, Susan M. Resnick, Dean F. Wong, and Jerry L. Prince


Deformable medical image registration is essential for aligning a population of images, performing voxelwise association studies, and tracking longitudinal changes. Although deformable registration of positron emission tomography (PET) images is essential for population studies, work on anatomically accurate PET-to-PET registration remains limited. In this study, we present a method for the spatial normalization of PET images based on a deformation correction model learned from structural image registration.


Our method is based on the observation that PET-to-PET registration produces deformations that are systematically biased in certain regions, and that these biases can be characterized as a function of location and estimated within small neighborhoods. The correction operates on the PET-to-PET deformation fields and is obtained from a model, learned from a population of subjects, that relates the local PET intensities and deformation fields to the corresponding structural imaging deformation fields. The learned relationship between the deformation fields accounts for the anatomical inaccuracies present in the alignment of the PET images, while the use of PET intensity information accounts for inter-subject variability in radiotracer binding due to differences in physiology.

To construct our model, we need the deformation fields that are to be applied to the PET images and their structural counterparts to bring the images to a common template. Our model is then trained using the resulting deformation fields for the PET and the structural images as well as the warped PET image intensities, yielding a correction that can be applied to PET deformation fields.

DFC Fig1.png
Figure 1: Visual comparison of deformed images for a sample subject. First row: PET deformed using (A) the deformation DFC Symbol1.png from MPRAGE-to-MPRAGE template registration, (B) the deformation DFC Symbol2.png from PET-to-PET template registration, (C) the deformation given by [J. Fripp et al.], and (D) the deformation DFC Symbol3.png predicted using the PLS model. Second row: MPRAGE deformed using (E) DFC Symbol1.png, (F) DFC Symbol2.png, (G) the deformation given by [J. Fripp et al.], and (H) DFC Symbol3.png. Third row: (I) MPRAGE template, (J) difference of E and F, (K) difference of E and G, (L) difference of E and H.

The process starts with template generation. To create an anatomically accurate PET template image, we rely on the associated structural images. Each structural image is first rigidly co-registered with the subject's PET image and then affinely registered to a common space. The affinely co-registered structural images are used to create a structural population template. The affine transformations and diffeomorphisms obtained from the structural template construction are then applied to the corresponding PET images to bring them into the same template space, and the PET template is defined as the mean of the spatially normalized PET images.
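The final template-averaging step can be sketched as follows, assuming all PET images have already been warped into the structural template space (the volumes here are toy arrays, not real data):

```python
import numpy as np

def build_pet_template(normalized_pet_images):
    """Average a list of spatially normalized PET volumes into a template.

    Assumes every volume has already been warped into the common
    structural template space by upstream registration (not shown here).
    """
    stack = np.stack(normalized_pet_images, axis=0)
    return stack.mean(axis=0)

# Toy example with two uniform 3-D volumes
vols = [np.full((4, 4, 4), v, dtype=float) for v in (1.0, 3.0)]
template = build_pet_template(vols)  # voxelwise mean of the inputs
```

In practice the averaging would be preceded by intensity normalization of each PET image, since radiotracer uptake scales differ across subjects.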

Next, we compute a training set. Using a set of subjects for whom both a structural image and a PET image are available, we perform deformable registration to map each PET image onto the PET template. For each subject in the training data, the deformable registration consists of an affine transformation followed by a diffeomorphic mapping, yielding the PET image registered onto the PET template. Constraining the affine transformation to be the same as that obtained from the PET-to-PET registration, we then perform another registration to find the deformation field that must be applied to the structural image.
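The affine-plus-deformation composition used above can be sketched for a single image as follows; the coordinate conventions and array layout are illustrative assumptions, not the actual registration implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, affine, displacement):
    """Warp an image by a displacement field followed by an affine map.

    `affine` is a 3x4 matrix taking template voxel coordinates into the
    moving image; `displacement` has shape (3, X, Y, Z) and holds the
    deformation field sampled on the template grid (toy convention).
    """
    # Template voxel grid, shape (3, X, Y, Z)
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in image.shape],
                                indexing="ij"), axis=0).astype(float)
    coords = grid + displacement                          # deformable part
    hom = np.concatenate([coords.reshape(3, -1),
                          np.ones((1, coords[0].size))])  # homogeneous coords
    mapped = affine @ hom                                 # affine part
    # Trilinear interpolation at the mapped coordinates
    return map_coordinates(image, mapped, order=1,
                           mode="nearest").reshape(image.shape)
```

With an identity affine and a zero displacement field, this reduces to the identity warp, which is a convenient sanity check for the coordinate conventions.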

We then train the model. The goal is to learn, at each voxel, a relationship between the estimated PET deformation field and the structural image deformation field.


DFC Fig2.png
Figure 2: Root mean square (RMS) error (in mm) of the PET deformation fields, calculated across 79 subjects. Left to right: MPRAGE template, RMS error of DFC Symbol2.png, RMS error of the deformation given by [J. Fripp et al.], and RMS error of DFC Symbol3.png.

We compared our method against the PET-to-PET template registration and an implementation of [J. Fripp et al.] that modifies the PET template according to a whole-brain PCA model following an affine registration of the subject's PET image, and then uses the modified template to perform the deformable registration. Ventricle size is overestimated by both the PET-to-PET method and the method described in [J. Fripp et al.], whereas our method results in a better registration, as shown in the difference images (Figure 1). The putamen, a structure that exhibits higher activity in the PET image and thus causes spillover, is also better aligned by our method. Figure 2 shows a comparison of the root mean square (RMS) error of the deformation fields; our method achieves the lowest overall RMS error.
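One way to compute voxelwise RMS error maps like those in Figure 2, under an assumed array layout (this is an illustrative sketch, not the actual evaluation code):

```python
import numpy as np

def rms_error(deformation, reference):
    """Voxelwise RMS error (in mm) between two sets of deformation fields.

    Both arrays have assumed shape (N, 3, X, Y, Z): N subjects and a
    3-component displacement per voxel. Returns an (X, Y, Z) error map,
    taking the structural-registration field as the reference.
    """
    diff = deformation - reference
    # Squared displacement magnitude per voxel, averaged over subjects
    return np.sqrt((diff ** 2).sum(axis=1).mean(axis=0))
```

The resulting map can be overlaid on the MPRAGE template to visualize where a given PET registration method deviates most from the structural ground truth.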

To assess the accuracy of anatomical alignment, the FreeSurfer [R.S. Desikan] segmentations of the original MPRAGE images were brought into the template space by applying the mappings from the previously performed registrations. Using the FreeSurfer labels, we calculated the Dice coefficients [L.R. Dice]. Table 1 shows summary statistics for the Dice coefficients for major brain tissue types, and Figure 3 shows box plots of the Dice coefficients for cortical regions. Our method consistently achieves higher Dice coefficients than either of the compared methods.
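The Dice coefficient for a single label can be computed as follows (a generic sketch on toy label maps, not the FreeSurfer-based pipeline itself):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b, label):
    """Dice overlap between two label volumes for one label value."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

# Toy example: two 1-D label maps sharing part of label 1
seg1 = np.array([0, 1, 1, 1, 0])
seg2 = np.array([0, 0, 1, 1, 1])
overlap = dice_coefficient(seg1, seg2, 1)  # 2*2/(3+3) = 0.666...
```

A Dice coefficient of 1 indicates perfect overlap and 0 indicates none; in the evaluation above, one such value is computed per FreeSurfer label per subject.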

DFC Table1.png
Table 1: Dice coefficients (mean ± st. dev., N = 79) for major brain tissue types. Dice coefficients for our method are statistically different (p < 0.01 for all three tissue types) from both other compared methods.
DFC Fig3.png
Figure 3: Box plots of Dice coefficients for cortical labels calculated using the deformations obtained from PET-to-PET registration (blue), the method proposed by [J. Fripp et al.] (green), and our method (red). Dice coefficients for our method are statistically different (p < 0.05 for cuneus, temporal pole, and transverse temporal, and p < 0.01 for all other regions) from both other compared methods.


We presented a deformation correction method that can improve the anatomical alignment of PET images in PET-to-PET registration. Cross-validation results show that our deformation correction method reduces the deformation field error and improves the anatomical alignment of PET images, as evidenced by the higher Dice coefficients calculated using the deformed segmentations.

The improvement in anatomical alignment across multiple brain regions achieved by our method points to the systematic errors present in PET-to-PET registration. Our method can compensate for these errors by learning locally from the structural image registrations. While we used SyN for registration, the presented method can be applied to any deformable PET-to-PET registration method.


  • A. Carass, J. Cuzzocreo, M.B. Wheeler, P.L. Bazin, S.M. Resnick, J.L. Prince, "Simple paradigm for extra-cerebral tissue removal: algorithm and analysis", NI, 56(4): 1982-1992, 2011.
  • J. Fripp, P. Bourgeat, O. Acosta, G. Jones, V. Villemagne, S. Ourselin, C. Rowe, O. Salvado, "Generative atlases and atlas selection for C11-PIB PET-PET registration of elderly mild cognitive impaired and Alzheimer disease patients", Fifth IEEE International Symposium on Biomedical Imaging (ISBI 2008), Paris, France, May 14-17, 2008.
  • L.R. Dice, "Measures of the amount of ecologic association between species", Ecology, 26(3): 297-302, 1945.
  • R.S. Desikan, B. Fischl, B.T. Quinn, B.C. Dickerson, D. Blacker, R.L. Buckner, A.M. Dale, R.P. Maguire, B.T. Hyman, M.S. Albert, R.J. Killiany, "An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest", NI, 31(3): 968-980, 2006.