Deep Hierarchical Sequence Generation with Self-Attention

Master's thesis === National Chiao Tung University === Institute of Communications Engineering === 107 (2018)

In recent years, deep generative models, which offer the promise of learning from unlabeled data and synthesizing realistic data, have been developing rapidly for image, speech, and text processing. Popular approaches, such as the variational autoencoder (VAE)...

Bibliographic Details
Main Authors: Wang, Chun-Wei, 王俊煒
Other Authors: Chien, Jen-Tzung
Format: Others
Language: en_US
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/63b5zq