LEADER 03488nam a2200409Ia 4500
001    10.1016-j.jbi.2022.104114
008    220718s2022 CNT 000 0 und d
022 __ |a 1532-0464 (ISSN)
245 10 |a Improving the robustness and accuracy of biomedical language models through adversarial training
260 _0 |b Academic Press Inc. |c 2022
856 __ |u https://doi.org/10.1016/j.jbi.2022.104114 |z View Fulltext in Publisher
520 3_ |a Deep transformer neural network models have improved the predictive accuracy of intelligent text processing systems in the biomedical domain, obtaining state-of-the-art performance on a wide variety of biomedical and clinical Natural Language Processing (NLP) benchmarks. However, the robustness and reliability of these models have so far been less explored. Neural NLP models can be easily fooled by adversarial samples, i.e., minor changes to the input that preserve the meaning and understandability of the text but force the NLP system to make erroneous decisions. This raises serious concerns about the security and trustworthiness of biomedical NLP systems, especially when they are intended for deployment in real-world use cases. We investigated the robustness of several transformer neural language models, namely BioBERT, SciBERT, BioMed-RoBERTa, and Bio-ClinicalBERT, on a wide range of biomedical and clinical text processing tasks. We implemented various adversarial attack methods to test the NLP systems under different attack scenarios. Experimental results showed that the biomedical NLP models are sensitive to adversarial samples; their performance dropped on average by 21 and 18.9 absolute percentage points under character-level and word-level adversarial noise, respectively, across Micro-F1, Pearson correlation, and accuracy measures. In extensive adversarial training experiments, we fine-tuned the NLP models on a mixture of clean samples and adversarial inputs. Results showed that adversarial training is an effective defense mechanism against adversarial noise; the models' robustness improved on average by 11.3 absolute percentage points. In addition, the models' performance on clean data increased on average by 2.4 absolute percentage points, demonstrating that adversarial training can boost the generalization abilities of biomedical NLP systems. This study takes an important step towards revealing vulnerabilities of deep neural language models in biomedical NLP applications. It also provides practical and effective strategies for developing secure, trustworthy, and accurate intelligent text processing systems in the biomedical domain. © 2022 The Authors
650 04 |a Adversarial attack
650 04 |a Adversarial training
650 04 |a Benchmarking
650 04 |a Biomedical natural language processing
650 04 |a Biomedical text
650 04 |a Computational linguistics
650 04 |a Correlation methods
650 04 |a Deep learning
650 04 |a Language model
650 04 |a Language processing
650 04 |a Natural language processing systems
650 04 |a Natural languages
650 04 |a Processing model
650 04 |a Robustness
650 04 |a Text processing
700 1_ |a Moradi, M. |e author
700 1_ |a Samwald, M. |e author
773 __ |t Journal of Biomedical Informatics