Unpaired Translation between Images and Texts Using Generative Adversarial Networks
Master's === National Chiao Tung University === Institute of Computer Science and Engineering === 106 === Translation between images and texts can be regarded as a combination of two tasks: generating images conditioned on texts, and generating texts conditioned on images. Traditional supervised learning algorithms require not only labels but also the p...
Main Author: Wong, Ching-Nian (翁慶年)
Other Authors: Chuang, Jen-Hui; Lee, Chia-Hoang; Liu, Chien-Liang
Format: Others
Language: en_US
Published: 2017
Online Access: http://ndltd.ncl.edu.tw/handle/8xayrv
id: ndltd-TW-106NCTU5394020
record_format: oai_dc
spelling: ndltd-TW-106NCTU5394020 2019-05-16T00:08:11Z http://ndltd.ncl.edu.tw/handle/8xayrv Unpaired Translation between Images and Texts Using Generative Adversarial Networks 產生性對抗式網路於無配對條件之圖片與文字間轉換 Wong, Ching-Nian 翁慶年. Master's thesis, National Chiao Tung University, Institute of Computer Science and Engineering, academic year 106. Advisors: Chuang, Jen-Hui (莊仁輝), Lee, Chia-Hoang (李嘉晃), Liu, Chien-Liang (劉建良). 2017. Thesis, 42 pp. en_US.
collection: NDLTD
language: en_US
format: Others
sources: NDLTD
description:
Master's === National Chiao Tung University === Institute of Computer Science and Engineering === 106 === Translation between images and texts can be regarded as a combination of two tasks: generating images conditioned on texts, and generating texts conditioned on images. Traditional supervised learning algorithms require not only labels but also the pairing information between samples and labels in order to learn the relations between images and their corresponding text labels. Moreover, traditional supervised learning algorithms allow a single label per sample, yet multi-label outcomes arise in many application settings, which is why multi-label classification has attracted researchers' attention for decades. Labeling is a time-consuming and labor-intensive task, and in many settings the labeling and pairing information may be unavailable altogether. This thesis focuses on the setting in which pair information is absent from the data. Translation between images and texts without pair information can be viewed as learning the implicit relationship between two different datasets, one in a continuous domain and the other in a discrete domain. We propose a model for this task and demonstrate that, trained without pair information, it can describe images with attribute tokens and generate images from attribute tokens.
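The abstract names the task (mapping images to attribute tokens and back, with no paired supervision) but does not spell out the model. As an illustrative sketch only, and not the author's actual architecture, an unpaired objective of this kind is commonly built from two mappers plus adversarial and cycle-consistency terms, in the style of CycleGAN. Everything below (the linear mappers `G` and `F`, the toy dimensions, the linear discriminator) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): images live in a continuous space,
# texts are binary attribute-token vectors.
IMG_DIM, ATTR_DIM = 16, 8

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# G: image -> attribute probabilities (the "describe images by
# attribute tokens" direction). A single linear layer stands in
# for a real generator network.
W_g = rng.normal(scale=0.1, size=(IMG_DIM, ATTR_DIM))

def G(x):
    return sigmoid(x @ W_g)

# F: attribute tokens -> image (the "generate images according to
# attribute tokens" direction).
W_f = rng.normal(scale=0.1, size=(ATTR_DIM, IMG_DIM))

def F(a):
    return a @ W_f

def cycle_loss(x):
    # Cycle consistency: mapping an image to tokens and back should
    # approximately reconstruct the image, which is what ties the two
    # unpaired domains together.
    return float(np.mean((F(G(x)) - x) ** 2))

def adversarial_loss(real, fake, w_d):
    # Standard GAN discriminator loss with a linear discriminator w_d:
    # score real samples high and generated samples low.
    d_real = sigmoid(real @ w_d)
    d_fake = sigmoid(fake @ w_d)
    eps = 1e-8
    return float(-np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))

# Unpaired batches: images and attribute vectors are drawn
# independently, with no correspondence between them.
images = rng.normal(size=(4, IMG_DIM))
attrs = rng.integers(0, 2, size=(4, ATTR_DIM)).astype(float)

w_d_img = rng.normal(scale=0.1, size=IMG_DIM)  # discriminator on images
total = adversarial_loss(images, F(attrs), w_d_img) + cycle_loss(images)
print(total)
```

In a real system the linear maps would be deep networks and the losses would be minimized jointly over both translation directions; the sketch only shows how an objective can be formed from unpaired image and attribute-token batches.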
author2: Chuang, Jen-Hui
author_facet: Chuang, Jen-Hui; Wong, Ching-Nian 翁慶年
author: Wong, Ching-Nian 翁慶年
spellingShingle: Wong, Ching-Nian 翁慶年; Unpaired Translation between Images and Texts Using Generative Adversarial Networks
author_sort: Wong, Ching-Nian
title: Unpaired Translation between Images and Texts Using Generative Adversarial Networks
title_short: Unpaired Translation between Images and Texts Using Generative Adversarial Networks
title_full: Unpaired Translation between Images and Texts Using Generative Adversarial Networks
title_fullStr: Unpaired Translation between Images and Texts Using Generative Adversarial Networks
title_full_unstemmed: Unpaired Translation between Images and Texts Using Generative Adversarial Networks
title_sort: unpaired translation between images and texts using generative adversarial networks
publishDate: 2017
url: http://ndltd.ncl.edu.tw/handle/8xayrv
work_keys_str_mv: AT wongchingnian unpairedtranslationbetweenimagesandtextsusinggenerativeadversarialnetworks; AT wēngqìngnián unpairedtranslationbetweenimagesandtextsusinggenerativeadversarialnetworks; AT wongchingnian chǎnshēngxìngduìkàngshìwǎnglùyúwúpèiduìtiáojiànzhītúpiànyǔwénzìjiānzhuǎnhuàn; AT wēngqìngnián chǎnshēngxìngduìkàngshìwǎnglùyúwúpèiduìtiáojiànzhītúpiànyǔwénzìjiānzhuǎnhuàn
_version_: 1719161705918889984