Pre-Training on Mixed Data for Low-Resource Neural Machine Translation
The pre-training and fine-tuning paradigm has been shown to be effective for low-resource neural machine translation. In this paradigm, models pre-trained on monolingual data are used to initialize translation models, transferring knowledge from the monolingual data into the translation models. In recent years, pr...
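The initialization idea the abstract describes can be sketched abstractly: parameters learned by a model pre-trained on monolingual data seed the translation model before fine-tuning on parallel data. The sketch below is a minimal illustration of this general scheme, not the paper's implementation; all names (`init_from_pretrained`, the parameter dictionaries) are hypothetical placeholders.

```python
# Minimal sketch of pre-train/fine-tune initialization for NMT:
# a model pre-trained on monolingual data supplies starting values
# for every parameter it shares with the translation model; parameters
# unique to the translation model (e.g. decoder cross-attention) keep
# their fresh initialization. All names here are illustrative.

def init_from_pretrained(translation_params, pretrained_params):
    """Copy each pre-trained parameter whose name also exists in the
    translation model; leave translation-only parameters untouched."""
    return {
        name: pretrained_params.get(name, value)
        for name, value in translation_params.items()
    }

# Toy dictionaries standing in for real weight tensors.
pretrained = {"encoder.embed": [0.5, 0.5], "encoder.layer0": [1.0]}
translation = {"encoder.embed": [0.0, 0.0], "encoder.layer0": [0.0],
               "decoder.layer0": [0.2]}

initialized = init_from_pretrained(translation, pretrained)
# "encoder.*" weights now come from pre-training; "decoder.layer0"
# retains its original value and is learned during fine-tuning.
```

In practice the copy happens over framework state dictionaries (e.g. matching tensor names between checkpoints), but the name-matching logic is the same.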
Main Authors: Wenbo Zhang, Xiao Li, Yating Yang, Rui Dong
Format: Article
Language: English
Published: MDPI AG, 2021-03-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/12/3/133
Similar Items
- A Diverse Data Augmentation Strategy for Low-Resource Neural Machine Translation
  by: Yu Li, et al. Published: (2020-05-01)
- Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation
  by: Wenbo Zhang, et al. Published: (2020-11-01)
- Improving Low-Resource Neural Machine Translation With Teacher-Free Knowledge Distillation
  by: Xinlu Zhang, et al. Published: (2020-01-01)
- Neural Machine Translation
  by: Francisco Casacuberta Nolla, et al. Published: (2017-12-01)
- Hierarchical Transfer Learning Architecture for Low-Resource Neural Machine Translation
  by: Gongxu Luo, et al. Published: (2019-01-01)