Deep OCT Angiography Image Generation for Motion Artifact Suppression

Part of the Informatik aktuell book series (INFORMAT)

Bibliographic Details
Main Authors: Hossbach, Julian (Author), Husvogt, Lennart (Author), Kraus, Martin F. (Author), Fujimoto, James G. (Author), Maier, Andreas K. (Author)
Other Authors: Massachusetts Institute of Technology. Research Laboratory of Electronics (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Springer Fachmedien Wiesbaden, 2021-01-08.
Description
Summary:
Eye movements, blinking, and other motion during the acquisition of optical coherence tomography (OCT) data can lead to artifacts when the scans are processed into OCT angiography (OCTA) images. Affected scans appear as high-intensity (white) or missing (black) regions, resulting in lost information. The aim of this research is to fill these gaps using a deep generative model for OCT-to-OCTA image translation that relies on a single intact OCT scan. To this end, a U-Net is trained to extract the angiographic information from OCT patches. At inference time, a detection algorithm identifies outlier OCTA scans based on their surroundings, and these are then replaced by the output of the trained network. We show that generative models can augment the missing scans. The augmented volumes could then be used for 3-D segmentation or to increase the diagnostic value.
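The summary describes a three-step pipeline: train a U-Net to map OCT B-scans to OCTA, detect outlier OCTA B-scans from their neighbourhood, and replace the flagged scans with the network's prediction. The following is a minimal, hypothetical PyTorch sketch of that pipeline, not the authors' implementation; the tiny network, the mean-intensity outlier criterion, the threshold, and all tensor shapes are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): replace artifact-affected
# OCTA B-scans with ones generated from the corresponding intact OCT B-scans.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Small encoder-decoder standing in for the OCT-to-OCTA U-Net."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x))

def detect_outliers(octa_volume, threshold=0.3):
    """Flag B-scans whose mean intensity deviates strongly from that of their
    neighbours (white-line or black-line artifacts). Threshold is illustrative."""
    means = octa_volume.mean(dim=(1, 2))                 # one value per B-scan
    neighbours = 0.5 * (means.roll(1) + means.roll(-1))  # local reference
    return (means - neighbours).abs() > threshold        # boolean outlier mask

def inpaint(oct_volume, octa_volume, model):
    """Replace flagged OCTA B-scans with predictions from the intact OCT B-scans."""
    mask = detect_outliers(octa_volume)
    with torch.no_grad():
        for i in torch.nonzero(mask).flatten():
            pred = model(oct_volume[i][None, None])      # (1, 1, H, W) input
            octa_volume[i] = pred[0, 0]
    return octa_volume

# Usage with random data standing in for real OCT/OCTA volumes (B-scans, H, W):
oct_vol = torch.rand(64, 128, 128)
octa_vol = torch.rand(64, 128, 128)
repaired = inpaint(oct_vol, octa_vol, TinyUNet())
```

In this sketch the outlier test only compares each B-scan's mean intensity to its two neighbours; the paper's detection algorithm operates on the surrounding scans in a way not specified in the summary, so this criterion should be read purely as a placeholder.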