Deep learning with sentence embeddings pre-trained on biomedical corpora improves the performance of finding similar sentences in electronic medical records


Bibliographic Details
Main Authors: Qingyu Chen, Jingcheng Du, Sun Kim, W. John Wilbur, Zhiyong Lu
Format: Article
Language: English
Published: BMC, 2020-04-01
Series: BMC Medical Informatics and Decision Making
Subjects: Sentence similarity; Electronic medical records; Deep learning; Machine learning
Online Access: http://link.springer.com/article/10.1186/s12911-020-1044-0
id doaj-fb5c3afe3de04d79b2a2d5fdbebf6d65
record_format Article
author_affiliation All authors (Qingyu Chen, Jingcheng Du, Sun Kim, W. John Wilbur, Zhiyong Lu): National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health
collection DOAJ
language English
format Article
sources DOAJ
author Qingyu Chen
Jingcheng Du
Sun Kim
W. John Wilbur
Zhiyong Lu
author_sort Qingyu Chen
title Deep learning with sentence embeddings pre-trained on biomedical corpora improves the performance of finding similar sentences in electronic medical records
publisher BMC
series BMC Medical Informatics and Decision Making
issn 1472-6947
publishDate 2020-04-01
description Abstract Background Capturing sentence semantics plays a vital role in a range of text mining applications. Despite continuous efforts to develop related datasets and models in the general domain, both datasets and models remain limited in the biomedical and clinical domains. The BioCreative/OHNLP2018 organizers made the first attempt to annotate 1068 sentence pairs from clinical notes and called for a community effort to tackle the Semantic Textual Similarity (BioCreative/OHNLP STS) challenge. Methods We developed models using traditional machine learning and deep learning approaches. For the post-challenge phase, we focused on two models: the Random Forest and the Encoder Network. We applied sentence embeddings pre-trained on PubMed abstracts and MIMIC-III clinical notes and updated the Random Forest and the Encoder Network accordingly. Results The official results demonstrated that our best submission, an ensemble of eight models, achieved a Pearson correlation coefficient of 0.8328, the highest performance among 13 submissions from 4 teams. In the post-challenge phase, the performance of both the Random Forest and the Encoder Network improved; in particular, the correlation of the Encoder Network improved by ~13%. During the challenge task, no end-to-end deep learning model outperformed machine learning models that take manually crafted features. In contrast, with the sentence embeddings pre-trained on biomedical corpora, the Encoder Network now achieves a correlation of ~0.84, which is higher than the original best model. An ensemble model taking the improved versions of the Random Forest and the Encoder Network as inputs further increased performance to 0.8528. Conclusions Deep learning models with sentence embeddings pre-trained on biomedical corpora achieve the highest performance on the test set. Through error analysis, we find that end-to-end deep learning models and traditional machine learning models with manually crafted features complement each other by finding different types of sentences. We suggest that a combination of these models can better identify similar sentences in practice.
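The evaluation metric named in the abstract, the Pearson correlation coefficient between predicted and gold similarity scores, and the idea of ensembling model outputs can be sketched in a few lines of Python. The scores and model names below are invented for illustration only; they are not data from the paper, and the paper's actual ensemble combines eight models, not two:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical gold similarity scores (STS tasks typically use a 0-5 scale)
# and predictions from two hypothetical models.
gold = [0.5, 4.0, 2.5, 5.0, 1.0]
model_a = [1.0, 3.5, 2.0, 4.5, 1.5]   # e.g. a feature-based model
model_b = [0.0, 4.5, 3.0, 4.0, 0.5]   # e.g. an encoder network

# A simple ensemble: average the two models' predictions per sentence pair.
ensemble = [(a + b) / 2 for a, b in zip(model_a, model_b)]

print(round(pearson(gold, ensemble), 4))
```

Systems are then ranked by this correlation against the human-annotated scores, as in the challenge's official evaluation.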
topic Sentence similarity
Electronic medical records
Deep learning
Machine learning
url http://link.springer.com/article/10.1186/s12911-020-1044-0