Feature-Based Alignment of Volumetric Multi-modal Images

Matthew Toews1, Lilla Zöllei2, and William M. Wells1

1Brigham and Women's Hospital, Harvard Medical School, USA
2A.A. Martinos Center, Massachusetts General Hospital, Harvard Medical School, USA
Abstract. This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection across image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology.

LNCS 7917, p. 25 ff.
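The group-wise scheme described in the abstract, alternating between model estimation and realignment until convergence, can be illustrated with a minimal sketch. This is not the paper's method: the model here is reduced to a mean feature location and the transforms to pure translations (both hypothetical simplifications), whereas the paper uses a full probabilistic feature-based model with invariant feature geometry.

```python
import numpy as np

def estimate_model(aligned_sets):
    # Toy "model": the mean feature location across subjects,
    # a stand-in for the paper's probabilistic feature-based model.
    return np.mean([s.mean(axis=0) for s in aligned_sets], axis=0)

def groupwise_align(feature_sets, n_iters=5):
    """Alternate (1) estimating a model from currently aligned features
    and (2) realigning each feature set to that model. Transforms are
    pure translations here, a hypothetical simplification."""
    translations = [np.zeros(3) for _ in feature_sets]
    for _ in range(n_iters):
        aligned = [f + t for f, t in zip(feature_sets, translations)]
        model = estimate_model(aligned)                 # step 1: model fit
        translations = [model - f.mean(axis=0)         # step 2: realign
                        for f in feature_sets]
    return model, translations

# Synthetic example: four translated copies of one 3D feature cloud.
rng = np.random.default_rng(0)
base = rng.normal(size=(20, 3))
sets = [base + rng.normal(scale=5.0, size=3) for _ in range(4)]
model, ts = groupwise_align(sets)
```

After convergence, every translated feature set shares a common centroid with the model, mirroring the stable group-wise solution described above.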