
A Probabilistic, Non-parametric Framework for Inter-modality Label Fusion

Juan Eugenio Iglesias1, Mert Rory Sabuncu1, and Koen Van Leemput1, 2, 3

1Martinos Center for Biomedical Imaging, MGH, Harvard Medical School, USA

2Department of Applied Mathematics and Computer Science, DTU, Denmark

3Departments of Information and Computer Science and of Biomedical Engineering and Computational Science, Aalto University, Finland

Abstract. Multi-atlas techniques are commonplace in medical image segmentation due to their high performance and ease of implementation. Locally weighting the contributions of the different atlases in the label fusion process can improve segmentation quality. However, how to define these weights in a principled way in inter-modality scenarios remains an open problem. Here we propose a label fusion scheme that does not require voxel intensity consistency between the atlases and the target image to segment. The method is based on a generative model of image data in which each intensity in the atlases has an associated conditional distribution of corresponding intensities in the target. The segmentation is computed with variational expectation maximization (VEM) in a Bayesian framework. The method was evaluated on a dataset of eight proton-density-weighted brain MRI scans with nine labeled structures of interest. The results show that the algorithm outperforms majority voting and a recently published inter-modality label fusion algorithm.
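The core idea of intensity-conditional label fusion can be illustrated with a minimal sketch: each registered atlas votes for its label at a voxel, weighted by an estimate of p(target intensity | atlas intensity) learned from paired intensity samples. The Parzen-window conditional model, the function names, and the toy inverted-contrast data below are illustrative assumptions for exposition; they are not the paper's actual generative model or its VEM inference.

```python
import numpy as np

def parzen_conditional(t, a, samples_a, samples_t, sigma_a=1.0, sigma_t=1.0):
    """Non-parametric estimate of p(t | a) from paired intensity samples
    (Parzen windows); a stand-in for the learned conditional distributions
    of target intensities given atlas intensities."""
    wa = np.exp(-0.5 * ((a - samples_a) / sigma_a) ** 2)
    wt = np.exp(-0.5 * ((t - samples_t) / sigma_t) ** 2)
    return np.sum(wa * wt) / (np.sum(wa) + 1e-12) / (np.sqrt(2 * np.pi) * sigma_t)

def fuse_labels(t, atlas_intensities, atlas_labels, n_labels, samples_a, samples_t):
    """Weighted label fusion at one voxel: each atlas votes for its label
    with weight proportional to p(t | a_j), so no intensity consistency
    between atlas and target modalities is required."""
    votes = np.zeros(n_labels)
    for a_j, l_j in zip(atlas_intensities, atlas_labels):
        votes[l_j] += parzen_conditional(t, a_j, samples_a, samples_t)
    return int(np.argmax(votes))

# Toy inter-modality example: target contrast is (hypothetically) inverted,
# t = 100 - a, learned from paired samples of the two modalities.
samples_a = np.array([10.0, 20.0, 30.0, 70.0, 80.0, 90.0])
samples_t = 100.0 - samples_a

# Target voxel intensity 80 corresponds to atlas intensity 20, so the
# atlas voxel with intensity 20 (label 0) should dominate the vote,
# even though 80 is numerically closer to the other atlas intensity.
label = fuse_labels(80.0, [20.0, 80.0], [0, 1], n_labels=2,
                    samples_a=samples_a, samples_t=samples_t)
```

By contrast, a direct intensity-similarity weighting (or plain majority voting with two disagreeing atlases) would fail here, which is the inter-modality problem the paper addresses.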

LNCS 8151, p. 576 ff.



© Springer-Verlag Berlin Heidelberg 2013