Based on the Tensorflow Framework HCMAC Neural Network Design
Master's === National Taichung University of Science and Technology === Department of Computer Science and Information Engineering, Master Program === 106
Main Author: Guan-Min Wei (魏冠民)
Other Author: Yung-Feng Lu (盧永豐)
Format: Others
Language: zh-TW
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/t8m2vy
id: ndltd-TW-106NTTI5392020
record_format: oai_dc
spelling:
ndltd-TW-106NTTI5392020 2019-07-18T03:56:06Z http://ndltd.ncl.edu.tw/handle/t8m2vy Based on the Tensorflow Framework HCMAC Neural Network Design 基於Tensorflow之HCMAC神經網路設計 Guan-Min Wei 魏冠民 Master's === National Taichung University of Science and Technology === Department of Computer Science and Information Engineering, Master Program === 106 Yung-Feng Lu 盧永豐 2018 學位論文 (thesis) 24 zh-TW
collection: NDLTD
language: zh-TW
format: Others
sources: NDLTD
description: Master's === National Taichung University of Science and Technology === Department of Computer Science and Information Engineering, Master Program === 106 === ABSTRACT
Deep neural networks require high-performance computing and large numbers of operations. Among machine learning frameworks, TensorFlow is a tool for implementing neural networks. Computations expressed with TensorFlow can be applied flexibly to a variety of systems, from Raspberry Pi, Android, Windows, iOS, and Linux to servers; with little or no modification, the same computation is portable from handheld devices to large distributed systems, even ones spanning tens of millions of computing devices. The system as a whole is very flexible and is aimed mainly at exploring very large deep neural networks. Its dataflow-graph representation makes deep learning convenient, and many subgraphs of the dataflow graph can be executed in a distributed fashion, with these executions defined manually through step-level API calls.
CMAC can be regarded as a mapping method. It combines the characteristics of a basis function network (BFN) with supervised learning: it converges quickly, approximates well, and has a simple structure with good local generalization. However, because the hypercube blocks of CMAC are evenly distributed, the output values within each partition are equal, and the huge storage required by high-dimensional problems makes its performance depend largely on how the input space is quantized. To solve these two problems, a Gaussian function is added to each hypercube block, forming non-uniform, differentiable partitions, and the resulting HCMAC is applied to high-dimensional classification problems. It learns quickly and has low memory requirements; compared with the traditional CMAC, it is a better neural network for solving high-dimensional classification problems. In this paper, simulation results show that HCMAC implemented on TensorFlow has good approximation ability and convergence performance.
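The contrast drawn in the abstract between CMAC's piecewise-constant hypercube partitions and HCMAC's Gaussian-augmented blocks can be illustrated with a small self-contained sketch. This is not the thesis's TensorFlow implementation: it is a hypothetical one-dimensional toy in plain Python, and the cell count, Gaussian width, LMS training loop, and every name in it are assumptions made purely for illustration.

```python
import math
import random

# Illustrative 1-D CMAC vs. HCMAC-style response (assumed toy setup, not the thesis code).

N_CELLS = 8                 # uniform quantization of the input range [0, 1)
weights = [0.0] * N_CELLS   # one trainable weight per hypercube cell
LR = 0.2                    # learning rate for simple LMS updates (assumed)
SIGMA = 1.0 / N_CELLS       # Gaussian width attached to each cell (assumed)

def cell_of(x):
    """Classic CMAC quantization: every x in the same cell maps to the same index."""
    return min(int(x * N_CELLS), N_CELLS - 1)

def cmac_output(x):
    """Piecewise-constant: equal output for all inputs inside one partition."""
    return weights[cell_of(x)]

def activations(x):
    """Gaussian basis value of each cell centre at x (all strictly positive)."""
    return [math.exp(-((x - (j + 0.5) / N_CELLS) ** 2) / (2 * SIGMA ** 2))
            for j in range(N_CELLS)]

def hcmac_output(x):
    """Normalized Gaussian mixture -> smooth, differentiable response."""
    acts = activations(x)
    return sum(w * a for w, a in zip(weights, acts)) / sum(acts)

def train(target, steps=2000):
    """LMS training of the shared weights against a target function."""
    rng = random.Random(0)
    for _ in range(steps):
        x = rng.random()
        err = target(x) - hcmac_output(x)
        acts = fp = activations(x)
        s = sum(acts)
        for j in range(N_CELLS):
            # each weight moves in proportion to its normalized activation
            weights[j] += LR * err * acts[j] / s

train(lambda x: math.sin(math.pi * x))
print(hcmac_output(0.5))   # close to sin(pi/2) = 1 after training
```

The classic response `cmac_output` returns `weights[cell_of(x)]`, so every input falling inside one partition produces exactly the same output, which is the limitation the abstract attributes to CMAC; the Gaussian-weighted version varies smoothly with x, which is the property it credits to HCMAC.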
author2: Yung-Feng Lu
author_facet: Yung-Feng Lu; Guan-Min Wei 魏冠民
author: Guan-Min Wei 魏冠民
spellingShingle: Guan-Min Wei 魏冠民; Based on the Tensorflow Framework HCMAC Neural Network Design
author_sort: Guan-Min Wei
title: Based on the Tensorflow Framework HCMAC Neural Network Design
title_short: Based on the Tensorflow Framework HCMAC Neural Network Design
title_full: Based on the Tensorflow Framework HCMAC Neural Network Design
title_fullStr: Based on the Tensorflow Framework HCMAC Neural Network Design
title_full_unstemmed: Based on the Tensorflow Framework HCMAC Neural Network Design
title_sort: based on the tensorflow framework hcmac neural network design
publishDate: 2018
url: http://ndltd.ncl.edu.tw/handle/t8m2vy
work_keys_str_mv: AT guanminwei basedonthetensorflowframeworkhcmacneuralnetworkdesign; AT wèiguānmín basedonthetensorflowframeworkhcmacneuralnetworkdesign; AT guanminwei jīyútensorflowzhīhcmacshénjīngwǎnglùshèjì; AT wèiguānmín jīyútensorflowzhīhcmacshénjīngwǎnglùshèjì
_version_: 1719228126437834752