An Analysis of the Vulnerability of Two Common Deep Learning-Based Medical Image Segmentation Techniques to Model Inversion Attacks

Recent research in computer vision has shown that original images used for training of deep learning models can be reconstructed using so-called inversion attacks. However, the feasibility of this attack type has not been investigated for complex 3D medical images. Thus, the aim of this study was to...

Full description

Bibliographic Details
Main Authors: Nagesh Subbanna, Matthias Wilms, Anup Tuladhar, Nils D. Forkert
Format: Article
Language: English
Published: MDPI AG 2021-06-01
Series: Sensors
Subjects: medical imaging, deep neural networks, inversion attacks, patient privacy
Online Access: https://www.mdpi.com/1424-8220/21/11/3874
id doaj-9aedc5dab50a4393a672352bc147e219
record_format Article
doi 10.3390/s21113874
volume 21
affiliation Department of Radiology, University of Calgary, Calgary, AB T2N 1N4, Canada (all four authors)
collection DOAJ
sources DOAJ
issn 1424-8220
description Recent research in computer vision has shown that original images used for training of deep learning models can be reconstructed using so-called inversion attacks. However, the feasibility of this attack type has not been investigated for complex 3D medical images. Thus, the aim of this study was to examine the vulnerability of deep learning techniques used in medical imaging to model inversion attacks and to investigate multiple quantitative metrics for evaluating the quality of the reconstructed images. For the development and evaluation of model inversion attacks, the public LPBA40 database, consisting of 40 brain MRI scans with corresponding segmentations of the gyri and deep grey matter brain structures, was used to train two popular deep convolutional neural networks, a U-Net and a SegNet, together with corresponding inversion decoders. The Matthews correlation coefficient, the structural similarity index measure (SSIM), and the magnitude of the deformation field resulting from non-linear registration of the original and reconstructed images were used to evaluate the reconstruction accuracy. A comparison of the similarity metrics revealed that the SSIM is best suited to evaluate the reconstruction accuracy, followed closely by the magnitude of the deformation field. The quantitative evaluation of the reconstructed images revealed SSIM scores of 0.73 ± 0.12 and 0.61 ± 0.12 for the U-Net and the SegNet, respectively. The qualitative evaluation showed that training images can be reconstructed with some degradation due to blurring but can be correctly matched to the original images in the majority of cases. In conclusion, the results of this study indicate that it is possible to reconstruct patient data used for training of convolutional neural networks and that the SSIM is a good metric to assess the reconstruction accuracy.
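The SSIM scores quoted in the description above (0.73 ± 0.12 for the U-Net, 0.61 ± 0.12 for the SegNet) can be made concrete with a minimal sketch of the structural similarity formula. This simplified variant applies the formula once over the whole image rather than with the sliding Gaussian window used by standard SSIM implementations; the function name, constants, and single-window simplification are illustrative assumptions, not the paper's actual pipeline.

```python
def global_ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over two equal-length intensity sequences.

    Combines the luminance term (means), contrast term (variances), and
    structure term (covariance) exactly as in the SSIM formula, but over
    the whole image at once instead of per local window.
    """
    assert len(x) == len(y) and len(x) > 1
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((a - mu_x) ** 2 for a in x) / (n - 1)
    var_y = sum((b - mu_y) ** 2 for b in y) / (n - 1)
    cov_xy = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / (n - 1)
    # Stabilising constants from the SSIM definition: C1 = (K1*L)^2, C2 = (K2*L)^2,
    # where L is the dynamic range of the intensity values.
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Identical images score exactly 1.0, and dissimilar images score lower, which is why blurred reconstructions such as those reported here land well below 1 while still being visually matchable to their originals.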
topic medical imaging
deep neural networks
inversion attacks
patient privacy