From Audio to Topics: Learning Semantics with Convolutional Neural Network


Bibliographic Details
Main Authors: Siao-Yun Dai, 戴筱芸
Other Authors: 鄭卜壬
Format: Others
Language: en_US
Published: 2017
Online Access:http://ndltd.ncl.edu.tw/handle/4n9sxe
Description
Summary: Master === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 105 === Nowadays, music has become an important part of our lives. As cloud-based streaming services grow popular, people depend on music more than ever. As a tool for expressing emotions, music is rich in semantics. In previous genre and mood classification tasks, researchers have shown that combining lyrics and audio features improves results, which indicates a potential relationship between audio and lyrics. Lyrics directly describe a song’s topic, while audio can amplify its emotions. Nevertheless, lyrics can be incomplete or missing. If we can learn topics from audio, we can infer the likely topics of a song without using its lyrics. We propose an unsupervised two-stage method. First, we learn the latent topics in lyrics with a topic model. Second, we map the audio signal to a topic distribution via a convolutional neural network. We show that this framework indeed learns a semantic representation from audio and can be applied directly to song retrieval. We can not only search songs that have lyrics; for songs without lyrics, e.g. classical pieces, we can also return reasonable results.
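The abstract does not specify which topic model or network architecture the thesis uses, so the following is only a minimal sketch of the two-stage idea, assuming LDA as the topic model and a toy 1-D convolution + softmax standing in for the CNN; all names, the example lyrics, and the KL training objective are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# --- Stage 1: learn latent topics from lyrics (toy corpus, 2 topics) ---
lyrics = [
    "love heart tears goodbye",
    "dance party night lights",
    "love tears lonely night",
    "party dance beat lights",
]
counts = CountVectorizer().fit_transform(lyrics)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_targets = lda.fit_transform(counts)  # per-song topic distributions (rows sum to 1)

# --- Stage 2: map audio features to a topic distribution ---
def conv1d(x, w):
    """Valid 1-D convolution (cross-correlation, as CNN layers compute it)."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def audio_to_topics(band, kernel, proj):
    """Toy forward pass: conv -> ReLU -> global average pool -> linear -> softmax."""
    h = np.maximum(conv1d(band, kernel), 0.0)
    pooled = h.mean()
    logits = pooled * proj
    e = np.exp(logits - logits.max())
    return e / e.sum()  # predicted topic distribution

rng = np.random.default_rng(0)
band = rng.random(64)  # stand-in for one spectrogram band of a song
pred = audio_to_topics(band, rng.standard_normal(5), rng.standard_normal(2))

# Training would minimise a divergence between the lyrics-derived target
# and the audio-derived prediction, e.g. KL(target || prediction):
kl = np.sum(topic_targets[0] * np.log(topic_targets[0] / pred))
```

Once trained, the CNN alone produces topic distributions for songs that have no lyrics, which is what makes retrieval over instrumental (e.g. classical) tracks possible.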