Efficient BFCN for Automatic Retinal Vessel Segmentation
Main Authors: Yun Jiang, Falin Wang, Jing Gao, Wenhuan Liu
Format: Article
Language: English
Published: Hindawi Limited, 2020-01-01
Series: Journal of Ophthalmology
ISSN: 2090-0058
Online Access: http://dx.doi.org/10.1155/2020/6439407
Record ID: doaj-ca9fe927e9de473f9963cd8ffa359dde
Description:
Retinal vessel segmentation is highly valuable for research on the diagnosis of diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular diseases. Most methods based on deep convolutional neural networks (DCNNs) lack large receptive fields and rich spatial information and cannot capture global context over larger areas, which makes lesion areas difficult to identify and degrades segmentation performance. This paper presents a butterfly fully convolutional neural network (BFCN). First, because of the low contrast between blood vessels and the background in retinal images, automatic color enhancement (ACE) is used to increase that contrast. Second, a multiscale information extraction (MSIE) module in the backbone network captures global contextual information over a larger area, reducing the loss of feature information. At the same time, a transfer layer (T_Layer) not only alleviates the vanishing-gradient problem and recovers information lost during downsampling but also preserves rich spatial information. Finally, the segmented image is postprocessed, a step introduced here for the first time, using Laplacian sharpening to improve the accuracy of vessel segmentation. The method was evaluated on the DRIVE, STARE, and CHASE datasets, achieving accuracies of 0.9627, 0.9735, and 0.9688, respectively.
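The description above attributes part of the method's robustness to raising vessel/background contrast before segmentation. The paper's exact ACE procedure is not reproduced in this record, so the following is only a minimal sketch that uses CLAHE on the green channel of a fundus image as a common stand-in; the file path and CLAHE parameters are illustrative assumptions, not values from the paper.

```python
# Minimal contrast-enhancement sketch (a CLAHE stand-in, NOT the paper's exact ACE step).
import cv2
import numpy as np

def enhance_fundus_contrast(bgr_image: np.ndarray) -> np.ndarray:
    """Return a single-channel, contrast-enhanced view of a fundus image.

    Vessels are usually most visible in the green channel, so local contrast
    is equalized there. Parameter values are illustrative only.
    """
    green = bgr_image[:, :, 1]
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(green)

if __name__ == "__main__":
    img = cv2.imread("drive_sample.tif")  # hypothetical DRIVE image path
    if img is None:
        raise FileNotFoundError("drive_sample.tif not found")
    cv2.imwrite("drive_sample_enhanced.png", enhance_fundus_contrast(img))
```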
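The postprocessing step mentioned in the description, Laplacian sharpening of the segmentation output, can be illustrated as follows. The 4-neighbour kernel, the blending weight, and the 0.5 threshold are assumptions for illustration; the paper's actual postprocessing parameters are not given in this record.

```python
# Sketch of Laplacian sharpening applied to a soft vessel-probability map.
import numpy as np
from scipy.ndimage import convolve

# 4-neighbour Laplacian kernel (illustrative choice, not taken from the paper).
LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=float)

def laplacian_sharpen(prob_map: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Sharpen a probability map in [0, 1] by subtracting a weighted Laplacian."""
    lap = convolve(prob_map.astype(float), LAPLACIAN_KERNEL, mode="nearest")
    return np.clip(prob_map - alpha * lap, 0.0, 1.0)  # vessel borders get emphasized

# Usage (hypothetical network output `prob` of shape (H, W) with values in [0, 1]):
# vessel_mask = laplacian_sharpen(prob) > 0.5
```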
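The accuracies reported for DRIVE, STARE, and CHASE are pixel-level figures. A straightforward way to compute such a metric from a predicted binary mask and the manual annotation is sketched below; restricting the comparison to the field-of-view (FOV) mask is a common convention for these datasets, though whether the paper does so is not stated in this record, and all variable names are illustrative.

```python
# Pixel-level accuracy of a binary vessel mask against a manual annotation.
from typing import Optional
import numpy as np

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray,
                   fov: Optional[np.ndarray] = None) -> float:
    """Fraction of pixels (optionally restricted to the FOV) where pred matches truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    if fov is not None:
        keep = fov.astype(bool)
        pred, truth = pred[keep], truth[keep]
    return float(np.mean(pred == truth))

# Toy example: three of four pixels agree, so the accuracy is 0.75.
# pixel_accuracy(np.array([[1, 0], [1, 1]]), np.array([[1, 0], [0, 1]]))
```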