Large Scale Subject Category Classification of Scholarly Papers With Deep Attentive Neural Networks
Subject categories of scholarly papers generally refer to the knowledge domain(s) to which the papers belong, examples being computer science or physics. Subject category classification is a prerequisite for bibliometric studies, organizing scientific publications for domain knowledge extraction, and facilitating faceted searches for digital library search engines. Unfortunately, many academic papers do not have such information as part of their metadata. Most existing methods for solving this task focus on unsupervised learning that often relies on citation networks. However, a complete list of papers citing the current paper may not be readily available. In particular, new papers that have few or no citations cannot be classified using such methods. Here, we propose a deep attentive neural network (DANN) that classifies scholarly papers using only their abstracts. The network is trained using nine million abstracts from Web of Science (WoS). We also use the WoS schema that covers 104 subject categories. The proposed network consists of two bi-directional recurrent neural networks followed by an attention layer. We compare our model against baselines by varying the architecture and text representation. Our best model achieves a micro-F1 of 0.76, with F1 for individual subject categories ranging from 0.50 to 0.95. The results show the importance of retraining word embedding models to maximize vocabulary overlap and the effectiveness of the attention mechanism. The combination of word vectors with TFIDF outperforms character- and sentence-level embedding models. We discuss imbalanced samples and overlapping categories and suggest possible mitigation strategies. We also determine the subject category distribution in CiteSeerX by classifying a random sample of one million academic papers.
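The architecture the abstract describes — recurrent layers pooled by an attention layer, then a softmax over the 104 WoS subject categories — can be sketched in miniature. The NumPy sketch below illustrates only the attention-pooling and classification steps with random weights and assumed dimensions (`T`, `d`, the stand-in hidden states `H`); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d, n_classes = 12, 8, 104   # sequence length, hidden size, WoS categories (dims assumed)

# Stand-in for the per-token outputs of the bi-directional recurrent layers:
H = rng.normal(size=(T, d))

# Attention pooling: score each time step, softmax-normalize, and take the
# weighted sum of hidden states as a fixed-length abstract representation.
w = rng.normal(size=(d,))
scores = H @ w                          # (T,)
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                    # attention weights, sum to 1
context = alpha @ H                     # (d,) pooled representation

# Final linear + softmax layer over the 104 subject categories.
W_out = rng.normal(size=(d, n_classes))
logits = context @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()                    # predicted category distribution
```

The attention weights `alpha` also give a per-token importance score, which is what makes such models partially interpretable: high-weight tokens indicate which parts of the abstract drove the category decision.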
Main Authors: | Bharath Kandimalla, Shaurya Rohatgi, Jian Wu, C. Lee Giles |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2021-02-01 |
Series: | Frontiers in Research Metrics and Analytics |
Subjects: | text classification, text mining, scientific papers, digital library, neural networks, citeseerx |
Online Access: | https://www.frontiersin.org/articles/10.3389/frma.2020.600382/full |
id |
doaj-3568e5ef98954a2ebd8e97b55e27d26f |
doi |
10.3389/frma.2020.600382 |
affiliations |
Bharath Kandimalla: Computer Science and Engineering, Pennsylvania State University, University Park, PA, United States; Shaurya Rohatgi: Information Sciences and Technology, Pennsylvania State University, University Park, PA, United States; Jian Wu: Computer Science, Old Dominion University, Norfolk, VA, United States; C. Lee Giles: Computer Science and Engineering, and Information Sciences and Technology, Pennsylvania State University, University Park, PA, United States |
collection |
DOAJ |
author |
Bharath Kandimalla, Shaurya Rohatgi, Jian Wu, C. Lee Giles |
issn |
2504-0537 |
description |
Subject categories of scholarly papers generally refer to the knowledge domain(s) to which the papers belong, examples being computer science or physics. Subject category classification is a prerequisite for bibliometric studies, organizing scientific publications for domain knowledge extraction, and facilitating faceted searches for digital library search engines. Unfortunately, many academic papers do not have such information as part of their metadata. Most existing methods for solving this task focus on unsupervised learning that often relies on citation networks. However, a complete list of papers citing the current paper may not be readily available. In particular, new papers that have few or no citations cannot be classified using such methods. Here, we propose a deep attentive neural network (DANN) that classifies scholarly papers using only their abstracts. The network is trained using nine million abstracts from Web of Science (WoS). We also use the WoS schema that covers 104 subject categories. The proposed network consists of two bi-directional recurrent neural networks followed by an attention layer. We compare our model against baselines by varying the architecture and text representation. Our best model achieves micro-F1 measure of 0.76 with F1 of individual subject categories ranging from 0.50 to 0.95. The results showed the importance of retraining word embedding models to maximize the vocabulary overlap and the effectiveness of the attention mechanism. The combination of word vectors with TFIDF outperforms character and sentence level embedding models. We discuss imbalanced samples and overlapping categories and suggest possible strategies for mitigation. We also determine the subject category distribution in CiteSeerX by classifying a random sample of one million academic papers. |
topic |
text classification, text mining, scientific papers, digital library, neural networks, citeseerx |