Revision as of 01:47, 24 September 2022 by Aaron (Link fix.)

Variational Intensity Cross Channel Encoder for Unsupervised Vessel Segmentation on OCT Angiography (VICCE)

Variational intensity cross channel encoder (VICCE) is a vessel segmentation algorithm for 2D OCT angiography images. The associated publication is:

Source code 10kB
Pretrained model 80.2MB
License (GPL v3.0) 35kB


Prepare data

The data loader requires pairs of Spectralis and Cirrus scans as input. Name the Spectralis scan "H.png" (for Heidelberg) and the Cirrus scan "Z.png" (for Zeiss), and also provide a mask image "M.png" that outlines the common field of view of the two scans. These three images are stored in subfolders inside "data/train/" and "data/val/". The directory structure of the whole project is as follows:

   .
   ├── data
   │    ├── train
   │    │    └── subject_*
   │    │         ├── H.png
   │    │         ├── Z.png
   │    │         └── M.png
   │    └── val
   │         └── subject_*
   │              ├── H.png
   │              ├── Z.png
   │              └── M.png
   ├── checkpoints
   ├── params.json
   └── pytorch_env.yml
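Before training, it can help to verify that every subject folder actually contains all three images the data loader expects. The following sketch (not part of the released code; the folder layout is taken from the structure above) walks "data/train/" and "data/val/" and reports incomplete subject folders:

```python
from pathlib import Path

REQUIRED = ("H.png", "Z.png", "M.png")

def check_subjects(root):
    """Return a dict mapping subject folders under data/train and
    data/val to the list of required images they are missing."""
    missing = {}
    for split in ("train", "val"):
        for subject in sorted(Path(root, "data", split).glob("*")):
            if not subject.is_dir():
                continue
            absent = [f for f in REQUIRED if not (subject / f).is_file()]
            if absent:
                missing[str(subject)] = absent
    return missing

if __name__ == "__main__":
    for folder, files in check_subjects(".").items():
        print(f"{folder}: missing {', '.join(files)}")
```

Run it from the project root; an empty report means every subject folder is complete.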

Install packages

To install the required packages, run conda env create --name pytorch --file=pytorch_env.yml
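After activating the environment, a quick sanity check confirms the expected packages are importable. The package names below are assumptions (the actual list lives in "pytorch_env.yml"), so adjust them to match:

```python
import importlib.util

def check_packages(names):
    """Report which of the given packages are importable in the active env."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

# "torch" and "numpy" are assumed requirements; edit to match pytorch_env.yml.
print(check_packages(["torch", "numpy"]))
```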


Train and test

  • If you have prepared registered pairs of Spectralis and Cirrus scans in "data/train/", you can run python --output vicce to train VICCE on your own data. It will load hyperparameters from "params.json" and store checkpoints in the folder "checkpoints/vicce/".
  • To test images using existing models, run python --output vicce. This will load parameters from "checkpoints/vicce/". This step also assumes that "H.png", "Z.png", and "M.png" all exist in "data/val/*/"; you might need to create dummy images for the data loader to work properly.
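When only one of the two scans is available at test time, the dummy images mentioned above can be generated without any imaging library. The sketch below (my own helper, not part of the released code) writes a minimal constant-valued grayscale PNG using only the standard library; the 304×304 default is an assumption based on common Cirrus en-face sizes, so match it to your real scans:

```python
import struct
import zlib

def write_dummy_png(path, width=304, height=304, value=255):
    """Write a minimal 8-bit grayscale PNG filled with a constant value."""
    def chunk(tag, data):
        # Each PNG chunk: length, tag, data, CRC over tag+data
        return (struct.pack(">I", len(data)) + tag + data
                + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

    # IHDR: width, height, 8-bit depth, color type 0 (grayscale)
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    # Each scanline: filter byte 0, then one byte per pixel
    raw = b"".join(b"\x00" + bytes([value]) * width for _ in range(height))
    png = (b"\x89PNG\r\n\x1a\n"
           + chunk(b"IHDR", ihdr)
           + chunk(b"IDAT", zlib.compress(raw))
           + chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(png)
```

For example, write_dummy_png("data/val/subject_1/M.png", value=255) produces an all-white mask covering the full field of view.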

Use pretrained model

Create a folder "checkpoints/pretrained/" in the root directory. Download the pretrained models and put them in that folder. Prepare your testing images and run python --output pretrained to use the pretrained model.
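The folder setup is a one-liner; a small sketch (run from the project root) that creates it and reminds you where the downloaded files go:

```python
from pathlib import Path

# Create the folder the pretrained checkpoints are expected in,
# relative to the project root.
ckpt_dir = Path("checkpoints") / "pretrained"
ckpt_dir.mkdir(parents=True, exist_ok=True)
print(f"Place the downloaded model files in {ckpt_dir}/")
```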

If you have questions regarding the method or software, please contact Yihao Liu.