Deep Hierarchical Sequence Generation with Self-Attention
Master's thesis === National Chiao Tung University === Institute of Communications Engineering === Academic year 107 (2018) === In recent years, deep generative models, which offer the promise of learning from unlabeled data and synthesizing realistic data, have been developing rapidly for image, speech, and text processing. Popular approaches, such as the variational autoencoder (VAE...
Format: Others
Language: en_US
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/63b5zq