ACRROSS
ACRROSS: Artifacts and Contrast Robust Representation for OCTA Semi-supervised Segmentation is a vessel segmentation algorithm for 2D en face OCTA images. It can handle scans captured from multiple devices. The associated publication is:
- Yihao Liu, Aaron Carass, Lianrui Zuo, Yufan He, Shuo Han, Lorenzo Gregori, Sean Murray, Rohit Mishra, Jianqin Lei, Peter A. Calabresi, Shiv Saidha, Jerry L. Prince, "Disentangled Representation Learning for OCTA Vessel Segmentation with Limited Training Data", https://ieeexplore.ieee.org/abstract/document/9834971.
If you have questions regarding the method or software, please contact Yihao Liu.