Generating adversarial images to monitor the training state of a CNN model

Full Description

Bibliographic Details
Published in: Current Directions in Biomedical Engineering
Main Authors: Ding Ning, Möller Knut
Format: Article
Language: English
Published: De Gruyter 2021-10-01
Subjects:
Online Access: https://doi.org/10.1515/cdbme-2021-2077
Physical Description
Summary: Deep neural networks have shown effectiveness in many applications; however, in regulated applications such as automotive or medicine, quality guarantees are required. It is therefore important to understand the robustness of the solutions to perturbations in the input space. To identify the vulnerability of a trained classification model and to evaluate the effect of different input perturbations on the output class, two different methods for generating adversarial examples were implemented. The generated adversarial images were developed into a robustness index to monitor the training state and safety of a convolutional neural network model. In future work, some generated adversarial images will be included in the training phase to improve the model's robustness.
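The abstract does not name the two generation methods that were implemented. As a point of reference, one common baseline for crafting adversarial images is the Fast Gradient Sign Method (FGSM), which perturbs each input pixel by a small step in the direction that increases the loss. The sketch below illustrates the idea on a toy NumPy logistic-regression "model"; the function names, the epsilon value, and the toy model are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.05):
    """FGSM step: shift each pixel by epsilon in the sign
    direction of the loss gradient, then keep pixels valid."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

def input_gradient(x, w, y):
    """Gradient of the logistic loss w.r.t. the input x for a
    toy linear model with weights w and binary label y."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # sigmoid(w . x)
    return (p - y) * w

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=16)  # a tiny flattened "image"
w = rng.normal(size=16)             # toy classifier weights
y = 1.0                             # true label

g = input_gradient(x, w, y)
x_adv = fgsm_perturb(x, g, epsilon=0.05)
print(np.max(np.abs(x_adv - x)))  # perturbation is bounded by epsilon
```

Counting how often such bounded perturbations flip the predicted class, as a function of epsilon, is one simple way a robustness index of the kind described in the abstract could be derived.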
ISSN:2364-5504