MGDM segmentation of macular OCT images

{{TOCright}}
Aaron Carass, Andrew Lang, Matthew Hauser, Peter A. Calabresi, Howard S. Ying, and Jerry L. Prince
{{h3|Introduction}}
Optical coherence tomography (OCT) is the de facto standard imaging modality for ophthalmological assessment of retinal disease, and is of increasing importance in the study of neurological disorders. Quantification of the thicknesses of the various retinal layers within the macular cube provides unique diagnostic insights for many diseases, but the capability for automatic segmentation and quantification remains quite limited. While manual segmentation has been used in many scientific studies, it is extremely time consuming and is subject to intra- and inter-rater variation. This work presents a new computational domain, referred to as flat space, and a segmentation method for specific retinal layers in the macular cube using a recently developed deformable model approach for multiple objects. The framework maintains object relationships and topology while preventing overlaps and gaps. The algorithm segments eight retinal layers over the whole macular cube, with each boundary defined at sub-voxel precision. Evaluation of the method on single-eye OCT scans from 37 subjects, each with manual ground truth, shows improvement over a state-of-the-art method.
<div style="background: white; border: 2px solid rgb(150, 150, 150); padding: 2px; text-align: justify;">
{| align=center
|-
|
{|
|-
| align=center | [[Image:MGDMOCT_Figure_1.png|600px]]
|-
| '''Figure 1:''' (a) A B-scan from a subject in our cohort with annotations indicating the locations of the vitreous, choroid, clivus, and fovea. (The image has been rescaled by a factor of three along each A-scan for display purposes.) The red box is shown magnified (×3) in (b) with markings to denote various layers and boundaries. The layers are: RNFL; ganglion cell (GCL); inner plexiform (IPL); inner nuclear (INL); outer plexiform (OPL); outer nuclear (ONL); inner segment (IS); outer segment (OS); retinal pigment epithelium (RPE). The named boundaries are: inner limiting membrane (ILM); external limiting membrane (ELM); Bruch's membrane (BrM). The OS and RPE are collectively referred to as the hyperreflectivity complex (HRC), and the ganglion cell complex (GCC) comprises the RNFL, GCL, and IPL.
|-
|}
|}
</div>
{{h3|Methods}}
Our method builds upon our random forest (RF) based segmentation of the macula and also provides a new computational domain, which we refer to as flat space. First, we estimate the boundaries of the retina with the vitreous and the choroid. From these estimated boundaries, we apply a mapping, learned by regression on a collection of manual segmentations, that takes the retinal space between the vitreous and choroid to a domain in which each of the layers is approximately flat; this domain is the flat space. In the original space, we use the RF layer-boundary estimation to compute probabilities for the boundaries of each layer and then map them to flat space. These probabilities drive MGDM, providing a segmentation in flat space that is then mapped back to the original space.
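As an illustration of the flattening idea, the mapping can be sketched as a per-A-scan resampling between the two estimated retinal boundaries, so that every column spans the same extent in flat space. The actual mapping is learned by regression on manual segmentations, so the linear interpolation and the function names below are our own simplification, not the published method.

```python
import numpy as np

def flatten_column(ascan, top, bottom, n_samples=64):
    """Resample one A-scan between its estimated inner (vitreous side) and
    outer (choroid side) retinal boundaries onto a fixed grid, so that in
    the resulting 'flat space' every column spans the same retinal extent."""
    z = np.arange(len(ascan))
    # Target depths: n_samples evenly spaced positions between the boundaries.
    target = np.linspace(top, bottom, n_samples)
    return np.interp(target, z, ascan)

def flatten_bscan(bscan, top_bnd, bot_bnd, n_samples=64):
    """Flatten a B-scan (depth x width) given per-column boundary estimates."""
    return np.stack(
        [flatten_column(bscan[:, x], top_bnd[x], bot_bnd[x], n_samples)
         for x in range(bscan.shape[1])],
        axis=1)
```

Mapping a segmentation computed in flat space back to the original image would invert the same per-column interpolation.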
Data from the right eyes of 37 subjects (16 healthy controls and 21 MS patients) were obtained using a Spectralis OCT system. Seven subjects from the cohort were picked at random and used to train the RF boundary classifier. The RF classifier has previously been shown [A. Lang et al., 2013] to be robust and independent of the training data, and thus should not have introduced any bias in the results. The research protocol was approved by the local Institutional Review Board, and written informed consent was obtained from all participants. All scans were screened and found to be free of microcystic macular edema, which is found in a small percentage of MS subjects.
<div style="background: white; border: 2px solid rgb(150, 150, 150); padding: 2px; text-align: justify;">
{| align=center
|-
|
{|
|-
| align=center | [[Image:MGDMOCT_Figure_4.jpg|600px]]
|-
| '''Figure 2:''' Shown is a magnified (×18) region around the fovea for each of (a) the original image, (b) the manual delineation, and the automated segmentations generated by (c) RF+Graph and (d) our method. The result in (d) is generated from the continuous representation of the level sets in the subject's native space; shown in (e) is the voxelated equivalent for our method. The RF+Graph method has to keep each layer at least one voxel thick (the GCIP and INL in this case). We also observe the voxelated nature of the RF+Graph result, whereas our approach has a continuous representation, shown in (d), due to its use of level sets, but can also be converted to a voxelated format (e).
|-
|}
|}
</div>
{{h3|Experiments and Results}}
We compared our multi-object geometric deformable model (MGDM) based approach to our previous work (RF+Graph) on all 37 subjects. In terms of computational performance, we are currently not competitive with RF+Graph, which takes only four minutes on a 3D volume of 49 B-scans. However, our MGDM implementation is written in a generic framework, and an optimized implementation on a GPU could offer a 10- to 20-fold speed-up. To compare the two methods, we computed the Dice coefficient of the automatically segmented layers against manual delineations. The Dice coefficient measures how much two segmentations agree with each other: it has a range of [0,1], with a score of 0 meaning no overlap between the two, while 1 represents perfect agreement.
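The Dice coefficient used for this evaluation has a standard closed form, 2|A∩B| / (|A|+|B|), over two binary label masks. A minimal sketch (the function name is ours):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree trivially
    return 2.0 * np.logical_and(a, b).sum() / denom
```

For example, identical masks score 1.0, disjoint masks score 0.0, and `dice([1, 1, 0, 0], [1, 0, 0, 0])` gives 2·1 / (2+1) = 2/3.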
<div style="background: white; border: 2px solid rgb(150, 150, 150); padding: 2px; text-align: justify;">
{| align=center
|-
| '''Table 1:''' Dice coefficients for MGDM and RF+Graph on each of the eight segmented layers, with p-values from the paired Wilcoxon test.
|}
</div>
The resulting Dice coefficients are shown in Table 1. The mean Dice coefficient is larger for MGDM than RF+Graph in all layers. Further, we used a paired Wilcoxon rank sum test to compare the distributions of the Dice coefficients. The resulting p-values in Table 1 show that six of the eight layers reach significance (''α'' level of 0.001). Therefore, MGDM is statistically better than RF+Graph in six of the eight layers. The two remaining layers (INL and OPL) lack statistical significance because of their large variance.
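The paired Wilcoxon comparison described above operates on per-subject differences (the paired form of the test is the Wilcoxon signed-rank test). A sketch using SciPy with synthetic Dice scores — the numbers below are illustrative only, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic per-subject Dice scores for one layer (NOT the paper's data):
# one score per subject for each method, paired by subject.
rng = np.random.default_rng(0)
dice_mgdm = rng.uniform(0.85, 0.95, size=37)
dice_rf_graph = dice_mgdm - rng.uniform(0.005, 0.03, size=37)  # consistently lower

# Paired test: ranks the per-subject differences between the two methods.
stat, p = wilcoxon(dice_mgdm, dice_rf_graph)
print(f"p = {p:.2e}, significant at alpha = 0.001: {p < 0.001}")
```

With 37 paired samples that all favor one method, as in this synthetic setup, the p-value falls far below the 0.001 significance level.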
An example of the manual delineations as well as the result of our method on the same subject are shown in Fig. 3, with a magnified region about the fovea in Fig. 4. Table 2 includes the absolute boundary error for the nine boundaries we estimate; these errors are measured along the A-scans in comparison to the same manual rater. We again used a paired Wilcoxon rank sum test to compute p-values between the two methods for the absolute distance error; six of the nine boundaries reach significance (''α'' level of 0.001).
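The absolute boundary error used here is, per boundary, the mean absolute distance along the A-scans between the automated and manual boundary surfaces. A minimal sketch — the function name and the axial voxel size (a Spectralis-like value) are our assumptions, not taken from this page:

```python
import numpy as np

def mean_abs_boundary_error(auto_bnd, manual_bnd, axial_um_per_voxel=3.9):
    """Mean absolute distance along the A-scans between an automated and a
    manual boundary surface, each given as one depth index per A-scan.
    axial_um_per_voxel converts voxel units to micrometers (assumed value)."""
    diff = np.abs(np.asarray(auto_bnd, float) - np.asarray(manual_bnd, float))
    return diff.mean() * axial_um_per_voxel
```

Because the level-set boundaries are continuous, the automated depth indices need not be integers, and the error can be reported at sub-voxel precision.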
{{h3|Conclusion}}
The Dice coefficients and absolute boundary errors, in conjunction with the comparison to RF+Graph, suggest that our method has very good performance characteristics. Our new algorithm uses a multi-object geometric deformable model of the retinal layers in a unique computational domain, which we refer to as flat space. The forces used for each layer were built from the same principles; these could be refined or modified on a per-layer basis to help improve the results.
{{h3|Acknowledgments}}
This work was supported by the NIH/NEI grant R21-EY022150 and the NIH/NINDS grant R01-NS082347.
{{h3|Publications}}
{{iaclpub|author=A. Carass, A. Lang, M. Hauser, P.A. Calabresi, H.S. Ying, and J.L. Prince|title=[[MGDM segmentation of macular OCT images|Multiple-object geometric deformable model for segmentation of macular OCT]]|jrnl=bmoe|number=5(4):1062-1074|when=2014|doi=10.1364/BOE.5.001062|pdf=/proceedings/iacl/2014/CarxBOE14MGDM_OCT_Segmentation.pdf}}
{{iaclpub|author=A. Lang, A. Carass, M. Hauser, E.S. Sotirchos, P.A. Calabresi, H.S. Ying, and J.L. Prince|title=[[Retinal layer segmentation of macular OCT images|Retinal layer segmentation of macular OCT images using boundary classification]]|jrnl=bmoe|number=4(7):1133-1152|when=2013|doi=10.1364/BOE.4.001133|pmcid=3704094|pdf=/proceedings/iacl/2013/LanxBOE13Retinal_layer_OCT.pdf}}
{{iaclpub|author=M. Chen, A. Lang, H.S. Ying, P.A. Calabresi, J.L. Prince, and A. Carass|title=[[Analysis of macular OCT images using deformable registration]]|jrnl=bmoe|number=5(7):2196-2214|when=2014|doi=10.1364/BOE.5.002196|pdf=/proceedings/iacl/2014/ChexBOE14Analysis_Macular_OCT_Deformable_Registration.pdf}}
{{iaclpub|author=M. Chen, A. Lang, E. Sotirchos, H.S. Ying, P.A. Calabresi, J.L. Prince, and A. Carass|title=Deformable Registration of Macular OCT Using A-mode Scan Similarity|number=476-479|conf=isbi2013|doi=10.1109/ISBI.2013.6556515|pmcid=3892764|pdf=/proceedings/iacl/2013/ChexISBI13Macular_OCT_using_Amode_scan_similarity.pdf}}
{{iaclpub|author=A. Lang, A. Carass, E. Sotirchos, and J.L. Prince|title=Segmentation of Retinal OCT Images using a Random Forest Classifier|conf=spie2013|doi=10.1117/12.2006649|pdf=/proceedings/iacl/2013/LanxSPIE13Segmentation_of_retinal_OCT_images_forest_classifier.pdf}}
{{iaclpub|author=A. Lang, A. Carass, P.A. Calabresi, H.S. Ying, and J.L. Prince|title=An adaptive grid for graph-based segmentation in macular cube OCT|conf=spie2014|doi=10.1117/12.2043040}}
{{iaclpub|author=J.A. Bogovic, J.L. Prince, and P.L. Bazin|title=A Multiple Object Geometric Deformable Model for Image Segmentation|jrnl=cviu|number=117(2):145-157|when=2013|doi=10.1016/j.cviu.2012.10.006|pubmed=23316110|pdf=/proceedings/iacl/2013/BogxCVIU13MGDM.pdf}}