The Implementations of High Robustness Recurrent Neural Networks
Master's thesis === Asia University (亞洲大學) === In-service Master Program, Department of Photonics and Communication Engineering (光電與通訊學系碩士在職專班) === Academic year 106 === In this thesis, a recurrent neural network (RNN) based on a highly robust state-space structure is proposed. The proposed RNN structure uses local feedback. Through L2-sensitivity minimization, an optimal state-space realization can be synthesized with respect to finite-precision implementation. The proposed structure not only attains the minimal L2-sensitivity measure but also satisfies the L2-scaling constraint, which may reduce the probability of overflow. Combined with the back-propagation learning algorithm, the proposed approach may yield better performance when the RNN is implemented on finite-precision devices. Finally, numerical examples illustrate the effectiveness of the proposed approach.
Main Authors: Wen-Chung Hsu (許文仲)
Other Authors: Hsien-Ju Ko (柯賢儒)
Format: Others
Language: zh-TW
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/3gt6h8
id: ndltd-TW-106THMU1652004
record_format: oai_dc
spelling: ndltd-TW-106THMU1652004 2019-05-16T00:30:18Z http://ndltd.ncl.edu.tw/handle/3gt6h8 The Implementations of High Robustness Recurrent Neural Networks 具高強健性之遞迴神經網路架構實現 Wen-Chung Hsu 許文仲 Master's thesis, Asia University, In-service Master Program, Department of Photonics and Communication Engineering, academic year 106 (abstract as given in the description field) Hsien-Ju Ko 柯賢儒 2018 學位論文 ; thesis 42 zh-TW
collection: NDLTD
language: zh-TW
format: Others
sources: NDLTD
description:
Master's thesis === Asia University (亞洲大學) === In-service Master Program, Department of Photonics and Communication Engineering (光電與通訊學系碩士在職專班) === Academic year 106 === In this thesis, a recurrent neural network (RNN) based on a highly robust state-space structure is proposed. The proposed RNN structure uses local feedback. Through L2-sensitivity minimization, an optimal state-space realization can be synthesized with respect to finite-precision implementation. The proposed structure not only attains the minimal L2-sensitivity measure but also satisfies the L2-scaling constraint, which may reduce the probability of overflow. Combined with the back-propagation learning algorithm, the proposed approach may yield better performance when the RNN is implemented on finite-precision devices. Finally, numerical examples illustrate the effectiveness of the proposed approach.
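The L2-scaling constraint mentioned in the abstract can be illustrated with a minimal sketch. This is my own illustration, not the thesis's algorithm; the helper names `dlyap` and `l2_scale` and the example matrices are invented. The idea shown is standard: a diagonal coordinate transform normalizes the diagonal of the controllability Gramian to one, which bounds the power of each internal state signal and so limits the chance of fixed-point overflow.

```python
import numpy as np

def dlyap(A, Q):
    """Solve the discrete Lyapunov equation X = A X A^T + Q
    via the vectorized (Kronecker) form (illustrative helper)."""
    n = A.shape[0]
    x = np.linalg.solve(np.eye(n * n) - np.kron(A, A), Q.flatten())
    return x.reshape(n, n)

def l2_scale(A, B, C):
    """Return an equivalent realization of (A, B, C) whose
    controllability Gramian has a unit diagonal (L2-scaling)."""
    Wc = dlyap(A, B @ B.T)                 # controllability Gramian
    T = np.diag(np.sqrt(np.diag(Wc)))      # diagonal scaling transform
    Ti = np.linalg.inv(T)
    return Ti @ A @ T, Ti @ B, C @ T       # similarity-transformed system

# Hypothetical stable 2-state example
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])

As, Bs, Cs = l2_scale(A, B, C)
Wc_s = dlyap(As, Bs @ Bs.T)
print(np.diag(Wc_s))  # diagonal is ~1 after scaling
```

The transfer function is unchanged by the similarity transform; only the internal coordinates (and hence the finite-precision behavior) differ, which is why such constrained realizations matter for fixed-point hardware.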
author2: Hsien-Ju Ko
author_facet: Hsien-Ju Ko; Wen-Chung Hsu 許文仲
author: Wen-Chung Hsu 許文仲
spellingShingle: Wen-Chung Hsu 許文仲 The Implementations of High Robustness Recurrent Neural Networks
author_sort: Wen-Chung Hsu
title: The Implementations of High Robustness Recurrent Neural Networks
title_short: The Implementations of High Robustness Recurrent Neural Networks
title_full: The Implementations of High Robustness Recurrent Neural Networks
title_fullStr: The Implementations of High Robustness Recurrent Neural Networks
title_full_unstemmed: The Implementations of High Robustness Recurrent Neural Networks
title_sort: implementations of high robustness recurrent neural networks
publishDate: 2018
url: http://ndltd.ncl.edu.tw/handle/3gt6h8
work_keys_str_mv: AT wenchunghsu theimplementationsofhighrobustnessrecurrentneuralnetworks AT xǔwénzhòng theimplementationsofhighrobustnessrecurrentneuralnetworks AT wenchunghsu jùgāoqiángjiànxìngzhīdìhuíshénjīngwǎnglùjiàgòushíxiàn AT xǔwénzhòng jùgāoqiángjiànxìngzhīdìhuíshénjīngwǎnglùjiàgòushíxiàn AT wenchunghsu implementationsofhighrobustnessrecurrentneuralnetworks AT xǔwénzhòng implementationsofhighrobustnessrecurrentneuralnetworks
_version_: 1719168197363499008