VS3‐NET: Neural variational inference model for machine‐reading comprehension

We propose the VS3‐NET model to solve a question‐answering task in machine‐reading comprehension, which searches for an appropriate answer in a given context. VS3‐NET trains latent variables for each question using variational inference, based on simple recurrent unit‐based sentence encoders and self‐matching networks.

Full description

Bibliographic Details
Main Authors: Cheoneum Park, Changki Lee, Heejun Song
Format: Article
Language: English
Published: Electronics and Telecommunications Research Institute (ETRI), 2019-07-01
Series: ETRI Journal
Subjects:
Online Access: https://doi.org/10.4218/etrij.2018-0467
id doaj-a28089dbf9e54571813db7b8e3470272
record_format Article
affiliations Cheoneum Park (Kangwon National University); Changki Lee (Kangwon National University); Heejun Song (Samsung Research)
collection DOAJ
language English
format Article
sources DOAJ
author Cheoneum Park
Changki Lee
Heejun Song
spellingShingle Cheoneum Park
Changki Lee
Heejun Song
VS3‐NET: Neural variational inference model for machine‐reading comprehension
ETRI Journal
machine reading comprehension
question answering
squad
variational inference
vs3-net
author_facet Cheoneum Park
Changki Lee
Heejun Song
author_sort Cheoneum Park
title VS3‐NET: Neural variational inference model for machine‐reading comprehension
title_short VS3‐NET: Neural variational inference model for machine‐reading comprehension
title_full VS3‐NET: Neural variational inference model for machine‐reading comprehension
title_fullStr VS3‐NET: Neural variational inference model for machine‐reading comprehension
title_full_unstemmed VS3‐NET: Neural variational inference model for machine‐reading comprehension
title_sort vs3‐net: neural variational inference model for machine‐reading comprehension
publisher Electronics and Telecommunications Research Institute (ETRI)
series ETRI Journal
issn 1225-6463
publishDate 2019-07-01
description We propose the VS3‐NET model to solve a question‐answering task in machine‐reading comprehension, which searches for an appropriate answer in a given context. VS3‐NET trains latent variables for each question using variational inference, based on simple recurrent unit‐based sentence encoders and self‐matching networks. The types of questions vary, and the answers depend on the question type. To perform efficient inference and learning, we introduce neural question‐type models to approximate the prior and posterior distributions of the latent variables, and we use these approximated distributions to optimize a reparameterized variational lower bound. The context given in machine‐reading comprehension usually comprises several sentences, and performance degrades as the context becomes longer. Therefore, we model a hierarchical structure using sentence encoding to mitigate this degradation. Experimental results show that the proposed VS3‐NET model achieves an exact‐match score of 76.8% and an F1 score of 84.5% on the SQuAD test set.
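The "reparameterized variational lower bound" mentioned in the description can be illustrated with a minimal toy sketch. This is not the authors' VS3‐NET implementation: the encoder networks that would produce the posterior parameters (from the question and context) and the prior parameters (from a question‐type model) are replaced here by fixed toy vectors, and the answer likelihood is a simple Gaussian score. It only shows the general structure of a one‐sample reparameterized lower bound with diagonal‐Gaussian prior and posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def elbo(mu_q, logvar_q, mu_p, logvar_p, log_likelihood_fn):
    """One-sample reparameterized variational lower bound:
       E_q[log p(answer | z)] - KL( q(z | x) || p(z) )."""
    eps = rng.standard_normal(mu_q.shape)
    # Reparameterization trick: z is a deterministic, differentiable
    # function of the distribution parameters and an external noise eps.
    z = mu_q + np.exp(0.5 * logvar_q) * eps
    return log_likelihood_fn(z) - kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)

# Toy posterior parameters (in the paper's setup these would come from
# encoding the question/context) and toy prior parameters (which would
# come from a neural question-type model).
mu_q, logvar_q = np.array([0.5, -0.2]), np.array([-1.0, -1.0])
mu_p, logvar_p = np.zeros(2), np.zeros(2)

# Toy answer log-likelihood: a Gaussian score centred at a target vector.
target = np.array([0.4, 0.0])
log_lik = lambda z: -0.5 * np.sum((z - target) ** 2)

print(elbo(mu_q, logvar_q, mu_p, logvar_p, log_lik))
```

In training, the lower bound would be averaged over a batch and maximized by gradient ascent; because of the reparameterization, gradients flow through `z` into the posterior parameters.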
topic machine reading comprehension
question answering
squad
variational inference
vs3-net
url https://doi.org/10.4218/etrij.2018-0467
work_keys_str_mv AT cheoneumpark vs3netneuralvariationalinferencemodelformachinereadingcomprehension
AT changkilee vs3netneuralvariationalinferencemodelformachinereadingcomprehension
AT heejunsong vs3netneuralvariationalinferencemodelformachinereadingcomprehension
_version_ 1724677456117366784