An Analysis of the Vulnerability of Two Common Deep Learning-Based Medical Image Segmentation Techniques to Model Inversion Attacks
Recent research in computer vision has shown that the original images used to train deep learning models can be reconstructed using so-called model inversion attacks. However, the feasibility of this attack type has not been investigated for complex 3D medical images. Thus, the aim of this study was to...
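As a rough illustration of the attack class the abstract refers to, the sketch below shows a generic gradient-based model inversion: starting from a blank input, an attacker who can query a frozen model optimizes the input so that the model's output matches a chosen target, thereby approximating features of the training data. This is a minimal sketch; the tiny CNN, image shape, and hyperparameters are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch of a gradient-based model inversion attack.
# Assumption: the attacker can query the trained model and compute
# gradients with respect to its input; the architecture below is a
# stand-in, not the paper's segmentation network.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "victim" model (in the paper's setting this would be a
# trained 3D medical image segmentation network).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # the attacker never updates the model

target_class = 1
x = torch.zeros(1, 1, 32, 32, requires_grad=True)  # reconstruction canvas
opt = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    opt.zero_grad()
    logits = model(x)
    # Maximize the target-class logit; a small L2 prior keeps the
    # reconstruction from drifting to implausible intensities.
    loss = -logits[0, target_class] + 1e-3 * x.pow(2).sum()
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)  # keep the reconstruction in image range

print("final loss:", float(loss))
```

With a model trained on real data, the optimized `x` tends to expose class-typical features of the training images, which is the privacy risk the study evaluates for 3D medical segmentation models.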
| Main Authors: | Nagesh Subbanna, Matthias Wilms, Anup Tuladhar, Nils D. Forkert |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2021-06-01 |
| Series: | Sensors |
| Online Access: | https://www.mdpi.com/1424-8220/21/11/3874 |
Similar Items
- An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack
  by: Cheolhee Park, et al.
  Published: (2019-01-01)
- TensorClog: An Imperceptible Poisoning Attack on Deep Neural Network Applications
  by: Juncheng Shen, et al.
  Published: (2019-01-01)
- Privacy and Security Issues in Deep Learning: A Survey
  by: Ximeng Liu, et al.
  Published: (2021-01-01)
- Differential Privacy Preservation in Deep Learning: Challenges, Opportunities and Solutions
  by: Jingwen Zhao, et al.
  Published: (2019-01-01)
- Universal adversarial attacks on deep neural networks for medical image classification
  by: Hokuto Hirano, et al.
  Published: (2021-01-01)