An Abstractive Summarization Model Based on Joint-Attention Mechanism and a Priori Knowledge


Bibliographic Details
Published in: Applied Sciences
Main Authors: Yuanyuan Li, Yuan Huang, Weijian Huang, Junhao Yu, Zheng Huang
Format: Article
Language: English
Published: MDPI AG 2023-04-01
Online Access: https://www.mdpi.com/2076-3417/13/7/4610
Description
Summary: An abstractive summarization model based on the joint-attention mechanism and a priori knowledge is proposed to address two problems in abstractive summarization models: inadequate semantic understanding of the text and generated summaries that do not conform to human language habits. First, the word vectors most relevant to the original text are selected. Second, the original text is represented at two levels, word-level and sentence-level, as word vectors and sentence vectors, respectively. After this processing, relationships exist not only among the word-level vectors but also among the sentence-level vectors, and the decoder weighs word-level against sentence-level vectors according to their relationship with its hidden state. Then, the pointer-generator network is improved using a priori knowledge. Finally, reinforcement learning is applied to improve the quality of the generated summaries. Experiments on two classical datasets, CNN/DailyMail and DUC 2004, show that the model performs well and effectively improves the quality of the generated summaries.
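The two-level attention described in the abstract, in which the decoder weighs word-level against sentence-level context by their relationship with its hidden state, could be sketched roughly as follows. This is a minimal illustration, not the paper's actual architecture: the function names, the dot-product scoring, and the sigmoid gating form are all assumptions.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(vectors, query):
    # Standard attention: score each vector against the query
    # (here, the decoder hidden state) and return the weighted sum.
    weights = softmax([dot(v, query) for v in vectors])
    dim = len(query)
    return [sum(w * v[i] for w, v in zip(weights, vectors)) for i in range(dim)]

def joint_attention(word_vecs, sent_vecs, dec_hidden):
    # Attend separately at the word level and the sentence level.
    word_ctx = attend(word_vecs, dec_hidden)
    sent_ctx = attend(sent_vecs, dec_hidden)
    # Hypothetical gate: a sigmoid of the mean decoder activation
    # decides how much each level contributes to the joint context.
    g = 1.0 / (1.0 + math.exp(-sum(dec_hidden) / len(dec_hidden)))
    return [g * w + (1.0 - g) * s for w, s in zip(word_ctx, sent_ctx)]
```

In a full model the gate would be a learned function of the decoder state rather than a fixed sigmoid of its mean; the sketch only shows how the two attention levels can be combined into one context vector.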
ISSN:2076-3417