A Multi-Branch U-Net for Steel Surface Defect Type and Severity Segmentation

Automating the visual inspection of sheet steel can improve quality and reduce costs during production. While many manufacturers still rely on manual or traditional inspection methods, deep learning-based approaches have proven their efficiency. In this paper, we go beyond the state of the art in this domain by proposing a multi-task model that performs both pixel-based defect segmentation and severity estimation of the defects in a single two-branch network. Additionally, we show how incorporating the production process parameters improves the model's performance. After manually constructing a real-life industrial dataset, we first implemented and trained two single-task models performing the defect segmentation and severity estimation tasks separately. Next, we compared these to a multi-task model that performs the two tasks simultaneously. By combining the tasks into one model, the two segmentation tasks improved by 2.5% and 3% mIoU, respectively. In the next step, we extended the multi-task model using sensor fusion with the process parameters. We demonstrate that incorporating the process parameters resulted in a further mIoU increase of 6.8% and 2.9% for the defect segmentation and severity estimation tasks, respectively.
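
To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of a two-branch U-Net: a shared encoder, one decoder branch per task (defect type and severity), and the process parameters fused at the bottleneck by broadcast addition. This illustrates the idea only and is not the authors' published implementation; all names (MultiBranchUNet, conv_block, Decoder), layer widths, the fusion scheme, and the class and parameter counts are assumptions.

    # Sketch of a two-branch U-Net for joint defect-type and severity
    # segmentation, with process parameters fused at the bottleneck.
    # NOT the paper's exact architecture; sizes and fusion are assumed.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    class Decoder(nn.Module):
        """One U-Net decoder branch with skip connections from the shared encoder."""
        def __init__(self, n_classes):
            super().__init__()
            self.up1 = nn.ConvTranspose2d(256, 128, 2, stride=2)
            self.dec1 = conv_block(256, 128)           # 128 upsampled + 128 skip
            self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec2 = conv_block(128, 64)            # 64 upsampled + 64 skip
            self.head = nn.Conv2d(64, n_classes, 1)    # per-pixel class logits

        def forward(self, x, skips):
            x = self.dec1(torch.cat([self.up1(x), skips[1]], dim=1))
            x = self.dec2(torch.cat([self.up2(x), skips[0]], dim=1))
            return self.head(x)

    class MultiBranchUNet(nn.Module):
        def __init__(self, n_defect_classes=5, n_severity_classes=4, n_process_params=8):
            super().__init__()
            self.enc1 = conv_block(1, 64)              # greyscale input assumed
            self.enc2 = conv_block(64, 128)
            self.bottleneck = conv_block(128, 256)
            self.pool = nn.MaxPool2d(2)
            # Embed the process parameters and add them to every spatial
            # position of the bottleneck (one plausible fusion scheme).
            self.param_embed = nn.Linear(n_process_params, 256)
            self.defect_branch = Decoder(n_defect_classes)
            self.severity_branch = Decoder(n_severity_classes)

        def forward(self, image, process_params):
            s1 = self.enc1(image)                      # shared encoder
            s2 = self.enc2(self.pool(s1))
            b = self.bottleneck(self.pool(s2))
            p = self.param_embed(process_params)       # (B, 256)
            b = b + p[:, :, None, None]                # broadcast over H, W
            skips = (s1, s2)
            return self.defect_branch(b, skips), self.severity_branch(b, skips)

    # Usage: a batch of 1-channel 256x256 crops plus 8 process parameters.
    model = MultiBranchUNet()
    defect_logits, severity_logits = model(torch.randn(2, 1, 256, 256), torch.randn(2, 8))
    print(defect_logits.shape, severity_logits.shape)  # (2, 5, 256, 256), (2, 4, 256, 256)

Sharing the encoder between branches is what lets the two tasks regularise each other, which is consistent with the reported multi-task gains over the two single-task models.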

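The 2.5%, 3%, 6.8%, and 2.9% gains quoted in the abstract are in mean Intersection-over-Union (mIoU). For reference, this is the standard definition of the metric as a short sketch, not code from the paper:

    # mIoU: per-class IoU averaged over classes. pred/target are integer
    # class maps of the same shape. Standard definition, assumed here.
    import numpy as np

    def mean_iou(pred, target, n_classes):
        """Average over classes of |pred ∩ target| / |pred ∪ target|."""
        ious = []
        for c in range(n_classes):
            inter = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            if union > 0:                  # skip classes absent from both maps
                ious.append(inter / union)
        return float(np.mean(ious))

    # Worked example on 2x2 class maps: class 0 IoU = 1/2, class 1 IoU = 2/3.
    pred = np.array([[0, 1], [1, 1]])
    target = np.array([[0, 1], [0, 1]])
    print(mean_iou(pred, target, n_classes=2))  # (1/2 + 2/3) / 2 ≈ 0.583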

Bibliographic Details
Main Authors: Robby Neven, Toon Goedemé
Format: Article
Language: English
Published: MDPI AG, 2021-05-01
Series: Metals
ISSN: 2075-4701
Collection: DOAJ
Subjects: steel surface defects; visual inspection; computer vision; deep learning; semantic segmentation
Online Access: https://www.mdpi.com/2075-4701/11/6/870