
Towards Realtime Multimodal Fusion for Image-Guided Interventions Using Self-similarities

Mattias Paul Heinrich1, 2, Mark Jenkinson2, Bartłomiej W. Papież1, Sir Michael Brady3, and Julia A. Schnabel1

1Institute of Biomedical Engineering, Department of Engineering, University of Oxford, UK
mattias.heinrich@eng.ox.ac.uk
http://users.ox.ac.uk/~shil3388

2Oxford University, Centre for Functional MRI of the Brain, UK

3Department of Oncology, University of Oxford, UK

Abstract. Image-guided interventions often rely on deformable multi-modal registration to align pre-treatment and intra-operative scans. Automated image registration for this task must meet a number of requirements: a similarity metric that is robust across modalities with different noise distributions and contrast, an efficient optimisation of the cost function to enable fast registration for this time-sensitive application, and robustness to the choice of registration parameters to avoid delays in practical clinical use. In this work, we build upon the concept of structural image representation for multi-modal similarity. Discriminative descriptors are densely extracted for the multi-modal scans based on the “self-similarity context”. An efficient quantised representation is derived that enables very fast computation of point-wise distances between descriptors. A symmetric multi-scale discrete optimisation with diffusion regularisation is used to find smooth transformations. The method is evaluated for the registration of 3D ultrasound and MRI brain scans for neurosurgery and demonstrates a significantly reduced registration error (on average 2.1 mm) compared to commonly used similarity metrics, with computation times of less than 30 seconds per 3D registration.
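The abstract's "efficient quantised representation" refers to packing each real-valued self-similarity descriptor into a compact bit string, so that a point-wise descriptor distance reduces to a Hamming distance (an XOR followed by a popcount). The sketch below illustrates this general idea only; the thresholding scheme, bit length, and function names are hypothetical and not taken from the paper.

```python
import numpy as np

def quantise_descriptor(d, n_bits=64):
    """Quantise a real-valued descriptor into a single integer bit code.

    Hypothetical scheme for illustration: each descriptor entry is
    thresholded against the descriptor mean, yielding one bit per entry
    (truncated to n_bits entries).
    """
    d = np.asarray(d, dtype=float).ravel()[:n_bits]
    bits = d > d.mean()          # one bit per descriptor entry
    code = 0
    for i, b in enumerate(bits):
        code |= int(b) << i      # pack bits into an integer
    return code

def hamming_distance(a, b):
    """Point-wise descriptor distance as Hamming distance: popcount(a XOR b)."""
    return bin(a ^ b).count("1")
```

In this formulation, comparing two descriptors costs a single XOR and popcount regardless of descriptor dimensionality, which is what makes dense evaluation over a large displacement search space fast in practice.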

Keywords: multimodal similarity, discrete optimisation, neurosurgery

LNCS 8149, p. 187 ff.



© Springer-Verlag Berlin Heidelberg 2013