A pre-training and self-training approach for biomedical named entity recognition.

Named entity recognition (NER) is a key component of many scientific literature mining tasks, such as information retrieval, information extraction, and question answering; however, many modern approaches require large amounts of labeled training data in order to be effective. This severely limits the effectiveness of NER models in applications where expert annotations are difficult and expensive to obtain. In this work, we explore the effectiveness of transfer learning and semi-supervised self-training to improve the performance of NER models in biomedical settings with very limited labeled data (250-2000 labeled samples). We first pre-train a BiLSTM-CRF and a BERT model on a very large general biomedical NER corpus such as MedMentions or Semantic Medline, then fine-tune the model on a more specific target NER task that has very limited training data; finally, we apply semi-supervised self-training using unlabeled data to further boost model performance. We show that in NER tasks that focus on common biomedical entity types such as those in the Unified Medical Language System (UMLS), combining transfer learning with self-training enables an NER model such as a BiLSTM-CRF or BERT to achieve performance comparable to the same model trained on 3x-8x the amount of labeled data. We further show that our approach can also boost performance in a low-resource application where entity types are rarer and not specifically covered in UMLS.

Bibliographic Details
Main Authors: Shang Gao, Olivera Kotevska, Alexandre Sorokine, J. Blair Christian
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2021-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access: https://doi.org/10.1371/journal.pone.0246310
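
To make the pipeline in the abstract concrete, the sketch below walks through its three stages (pre-train on a large general biomedical NER corpus, fine-tune on the small labeled target task, then iteratively self-train on unlabeled text) using a toy PyTorch BiLSTM tagger and random data. The model, the confidence-threshold selection rule, and all hyperparameters are illustrative assumptions; this is not the authors' BiLSTM-CRF/BERT implementation.

```python
# Minimal sketch of the pre-train / fine-tune / self-train recipe described in the
# abstract, using a toy BiLSTM tagger in PyTorch. Corpus sizes, the label set, and
# the confidence threshold below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMTagger(nn.Module):
    """Tiny BiLSTM token classifier (a stand-in for the paper's BiLSTM-CRF or BERT)."""
    def __init__(self, vocab_size: int, num_labels: int, emb_dim: int = 32, hidden: int = 32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)  # (batch, seq_len, num_labels)

def train(model: BiLSTMTagger, data, epochs: int = 5, lr: float = 1e-3) -> None:
    """Supervised training on (token_ids, label_ids) pairs, one sentence per step."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for token_ids, label_ids in data:
            logits = model(token_ids.unsqueeze(0)).squeeze(0)
            loss = F.cross_entropy(logits, label_ids)
            opt.zero_grad()
            loss.backward()
            opt.step()

def pseudo_label(model: BiLSTMTagger, unlabeled, threshold: float = 0.9):
    """Self-training step (one possible selection rule): keep sentences whose
    least-confident token prediction exceeds the threshold, labeled by the model."""
    model.eval()
    selected = []
    with torch.no_grad():
        for token_ids in unlabeled:
            probs = F.softmax(model(token_ids.unsqueeze(0)).squeeze(0), dim=-1)
            conf, labels = probs.max(dim=-1)
            if conf.min().item() >= threshold:
                selected.append((token_ids, labels))
    return selected

# --- Toy data: random token ids and BIO-style label ids (made up for illustration) ---
vocab_size, num_labels = 100, 3  # e.g. O, B-Entity, I-Entity
general_corpus = [(torch.randint(0, vocab_size, (8,)), torch.randint(0, num_labels, (8,)))
                  for _ in range(50)]   # stands in for MedMentions / Semantic Medline
target_corpus = [(torch.randint(0, vocab_size, (8,)), torch.randint(0, num_labels, (8,)))
                 for _ in range(10)]    # the small labeled target task (250-2000 samples in the paper)
unlabeled = [torch.randint(0, vocab_size, (8,)) for _ in range(30)]

model = BiLSTMTagger(vocab_size, num_labels)
train(model, general_corpus)            # 1) pre-train on the large general NER corpus
train(model, target_corpus)             # 2) fine-tune on the limited target data
for _ in range(3):                      # 3) iterative semi-supervised self-training
    pseudo = pseudo_label(model, unlabeled)
    train(model, target_corpus + pseudo)
```

In a real setting the toy tagger would be replaced by the pre-trained BiLSTM-CRF or BERT model and the random tensors by tokenized sentences with BIO labels; the combination of transfer learning and self-training, rather than either stage alone, is what the abstract credits for the reported gains.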