Learning from Partially Annotated OPT Images by Contextual Relevance Ranking

Wenqi Li1, Jianguo Zhang1, Wei-Shi Zheng2, Maria Coats3, Frank A. Carey3, and Stephen J. McKenna1

1CVIP, School of Computing, University of Dundee, Dundee, UK

2School of Information Science and Technology, Sun Yat-sen University, China

3Ninewells Hospital & Medical School, University of Dundee, UK

Abstract. Annotations delineating regions of interest can provide valuable information for training medical image classification and segmentation methods. However, the process of obtaining annotations is tedious and time-consuming, especially for high-resolution volumetric images. In this paper we present a novel learning framework that reduces the need for manual annotation while achieving competitive classification performance. The approach is evaluated on a dataset of 59 3D optical projection tomography images of colorectal polyps. The results show that the proposed method can robustly infer patterns from partially annotated images at low computational cost.

LNCS 8151, p. 429 ff.


© Springer-Verlag Berlin Heidelberg 2013