Deep Learning of Approximate Message Passing Algorithm based on Sparse Superposition Code

Bibliographic Details
Main Authors: CHUANG, HSIANG-LUNG, 莊翔隆
Other Authors: CHIU, MAO-CHING
Format: Others
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/9n346y
Description
Summary: Master's === National Chung Cheng University === Graduate Institute of Communications Engineering === 107 === In coding and information theory, developing a computationally efficient, capacity-achieving code has long been a central problem. This thesis builds on two schemes: Sparse Superposition Codes (also known as Sparse Regression Codes) and the Approximate Message Passing (AMP) algorithm; in particular, we improve and optimize the AMP algorithm through machine learning. Over the additive white Gaussian noise (AWGN) channel under a power constraint, Sparse Superposition Codes are computationally feasible in both encoding and decoding and can come close to the channel capacity. Their codewords are defined by a Gaussian design matrix that is divided into several sections: a codeword is generated by selecting column vectors from each section and forming their linear combination. During iterative decoding, the AMP algorithm performs many matrix multiplications and quantization-like nonlinear steps, an operation very similar to that of a deep learning model, so it can readily be realized as a neural network. However, the traditional AMP algorithm requires many parameters to be prepared before computation, which makes it a natural candidate for improvement by deep learning. Finally, we propose two different approaches to handle different codeword lengths.
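
The summary describes Sparse Superposition Code encoding (columns selected section by section from a Gaussian design matrix and linearly combined) and AMP decoding as alternating matrix multiplications and nonlinear steps. The following NumPy sketch illustrates both under common simplifying assumptions from the SPARC/AMP literature (flat power allocation, one selected column per section, the standard section-wise softmax denoiser); the variable names and parameter values are illustrative, not the thesis's notation or settings.

```python
# Minimal sketch of SPARC encoding + AMP decoding over an AWGN channel.
# Assumptions: flat power allocation, one nonzero per section, standard
# section-wise soft-decision denoiser. Parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

L, M = 32, 64          # L sections, M columns per section
n = 512                # codeword length
P = 1.0                # average power constraint
snr = 15.0             # channel SNR (linear)
sigma2 = P / snr       # AWGN variance
n_iter = 20            # AMP iterations

# Gaussian design matrix, entries ~ N(0, 1/n), divided into L sections of M columns.
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))

# Message: one column index per section; beta has one nonzero entry per section.
msg = rng.integers(0, M, size=L)
beta_true = np.zeros(L * M)
beta_true[np.arange(L) * M + msg] = np.sqrt(n * P / L)   # flat power allocation

x = A @ beta_true                                        # encode: linear combination
y = x + rng.normal(0.0, np.sqrt(sigma2), size=n)         # AWGN channel

def denoise(s, tau2):
    """Section-wise denoiser: a softmax over each section of M entries."""
    c = np.sqrt(n * P / L)
    logits = (s * c / tau2).reshape(L, M)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return (c * w).reshape(-1)

# AMP iterations: each step is a matrix multiply followed by a nonlinearity,
# which mirrors the structure of one layer of a neural network.
beta = np.zeros(L * M)
z = y.copy()
for _ in range(n_iter):
    tau2 = np.dot(z, z) / n                              # effective noise variance
    s = beta + A.T @ z                                   # effective observation
    beta_new = denoise(s, tau2)
    # Residual update with the Onsager correction term.
    z = y - A @ beta_new + (z / tau2) * (P - np.dot(beta_new, beta_new) / n)
    beta = beta_new

# Hard decision: pick the largest entry in each section.
msg_hat = beta.reshape(L, M).argmax(axis=1)
print("section error rate:", np.mean(msg_hat != msg))
```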
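One common way to combine AMP with deep learning, in the spirit of learned AMP, is to "unfold" the iterations above into network layers with a few trainable per-layer scalars in place of the fixed parameters that classical AMP requires in advance. The sketch below follows that generic idea only; it is not the thesis's architecture, and the class name `UnfoldedAMP` and parameters `alpha`/`theta` are hypothetical. PyTorch is assumed for the trainable parameters.

```python
# Hedged sketch of unfolding AMP into a trainable network (LAMP-style idea),
# not the thesis's actual method. Per-layer scalars alpha/theta are learned.
import torch
import torch.nn as nn

class UnfoldedAMP(nn.Module):
    def __init__(self, A, L, M, P, n_layers=10):
        super().__init__()
        self.A = A                              # fixed design matrix, shape (n, L*M)
        self.L, self.M, self.P = L, M, P
        self.n = A.shape[0]
        # Classical AMP fixes these quantities from theory; here they are learned.
        self.alpha = nn.Parameter(torch.ones(n_layers))   # per-layer step size
        self.theta = nn.Parameter(torch.ones(n_layers))   # per-layer temperature

    def forward(self, y):
        c = (self.n * self.P / self.L) ** 0.5
        beta = torch.zeros(self.L * self.M, device=y.device)
        z = y.clone()
        for a, t in zip(self.alpha, self.theta):
            tau2 = z.dot(z) / self.n
            s = beta + a * (self.A.T @ z)                          # learned step size
            logits = (s * c / (t * tau2)).view(self.L, self.M)     # learned temperature
            beta_new = (c * torch.softmax(logits, dim=1)).view(-1)
            z = y - self.A @ beta_new + (z / tau2) * (self.P - beta_new.dot(beta_new) / self.n)
            beta = beta_new
        return beta
```

Training such a network would typically mean generating random messages, running the forward pass on the noisy channel outputs, and minimizing the mean-squared error against the true beta; how the thesis's two proposed approaches handle different codeword lengths is not reproduced here.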