
Feature-Based Alignment of Volumetric Multi-modal Images

Matthew Toews1, Lilla Zöllei2, and William M. Wells1

1Brigham and Women’s Hospital, Harvard Medical School, USA
mt@bwh.harvard.edu
sw@bwh.harvard.edu

2A.A. Martinos Center, Massachusetts General Hospital, Harvard Medical School, USA
lzollei@nmr.mgh.harvard.edu

Abstract. This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection across image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning feature data to the model, converging to a stable alignment solution with minimal pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency, and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD, and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology.
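The alternating estimate/realign scheme described in the abstract can be sketched in highly simplified form. The toy below is not the paper's method: it uses corresponding point features and per-subject translations in place of the full invariant feature geometry and probabilistic model, only to illustrate the iteration structure (model estimation from feature data, then realignment of feature data to the model, until convergence):

```python
import numpy as np

def groupwise_align(feature_sets, n_iters=50, tol=1e-9):
    """Toy group-wise alignment loop (illustrative only).

    feature_sets: list of (n_features, dim) arrays with corresponding rows.
    Alternates between (1) estimating a 'model' as the mean feature
    locations across subjects and (2) realigning each subject to the
    model by a least-squares translation, until the updates vanish.
    """
    offsets = [np.zeros(f.shape[1]) for f in feature_sets]
    aligned = [f.copy() for f in feature_sets]
    for _ in range(n_iters):
        model = np.mean(aligned, axis=0)        # model-estimation step
        max_shift = 0.0
        for i, f in enumerate(feature_sets):
            # Least-squares translation aligning subject i to the model.
            shift = np.mean(model - (f + offsets[i]), axis=0)
            offsets[i] += shift
            aligned[i] = f + offsets[i]         # realignment step
            max_shift = max(max_shift, float(np.linalg.norm(shift)))
        if max_shift < tol:                     # stable alignment reached
            break
    return aligned, offsets
```

With two subjects that differ only by a translation, the loop drives their feature sets onto a common model in one or two passes; the full method replaces the translations with invariant feature correspondences and the mean with a probabilistic feature model.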

LNCS 7917, p. 25 ff.



© Springer-Verlag Berlin Heidelberg 2013