Using the Erroneous Data Clustering to Improve the Feature Extraction Weights of Images
Master's === National Ilan University === Master's Program, Department of Computer Science and Information Engineering === 104 === At present, many artificial neural network (ANN) methods are used to solve data-mining tasks, and their accuracy can be considerable if the methods are adequately trained. Training an ANN involves various aspects, including the number of labeled sam...
Main Authors: | Tse Chang 張冊 |
---|---|
Other Authors: | Tin-Yu Wu 吳庭育 |
Format: | Others |
Language: | zh-TW |
Published: | 2016 |
Online Access: | http://ndltd.ncl.edu.tw/handle/78268734257991391636 |
id |
ndltd-TW-104NIU00392012 |
record_format |
oai_dc |
spelling |
ndltd-TW-104NIU003920122017-10-29T04:35:02Z http://ndltd.ncl.edu.tw/handle/78268734257991391636 Using the Erroneous Data Clustering to Improve the Feature Extraction Weights of Images 藉由錯誤數據的聚類方式改善影像特徵點提取的權重 Tse Chang 張冊 Master's === National Ilan University === Master's Program, Department of Computer Science and Information Engineering === 104 Tin-Yu Wu 吳庭育 2016 Thesis (學位論文) 45 zh-TW |
collection |
NDLTD |
language |
zh-TW |
format |
Others |
sources |
NDLTD |
description |
Master's === National Ilan University === Master's Program, Department of Computer Science and Information Engineering === 104 === At present, many artificial neural network (ANN) methods are used to solve data-mining tasks, and their accuracy can be considerable if the methods are adequately trained. Training an ANN involves various aspects, including the number of labeled samples, training duration, training efficiency, the number of hidden layers, transfer functions, and so on. When the actual test results differ from the expected results, we cannot tell which data dimension causes the errors. An ANN is usually trained by modifying its weights: instead of retraining the algorithm that extracts the features, the ANN applies suitable weights to the results to move them toward more accurate values.
Therefore, this paper proposes an approach to improve ANN-based image classification. When dealing with images, a parameter is usually set to obtain the feature vectors, and we use that parameter as the weight. In the simulation, the image feature points extracted by Speeded Up Robust Features (SURF) are taken for training; SURF extracts different feature points according to the chosen parameter value. Based on these data, we perform an initial semi-supervised clustering and use the Modified Fuzzy K-Nearest Neighbor (MFKNN) algorithm for training and clustering. To handle unknown images, our approach compares them against the cluster centers rather than making one-by-one comparisons, which improves efficiency, and then proceeds to analyze the results. Based on the characteristics of the image feature points, we add additional values to the high-error-rate groups to generate new feature points for training in the input layer of the ANN. The final outcome after weight training is compared with the Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network (GA-ANN).
|
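The cluster-center comparison described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's code: random NumPy vectors stand in for 64-dimensional SURF descriptors of two image groups, and the helper names `cluster_centers` and `classify_by_center` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 64-dimensional SURF descriptors of two image groups
# (real descriptors would come from a SURF detector with a chosen threshold).
group_a = rng.normal(loc=0.0, scale=0.1, size=(50, 64))
group_b = rng.normal(loc=1.0, scale=0.1, size=(50, 64))
descriptors = np.vstack([group_a, group_b])
labels = np.array([0] * 50 + [1] * 50)

def cluster_centers(desc, lab):
    """Mean descriptor of each cluster -- the 'center' compared at query time."""
    return {c: desc[lab == c].mean(axis=0) for c in np.unique(lab)}

def classify_by_center(query, centers):
    """Assign a query descriptor to the nearest cluster center (Euclidean
    distance), instead of comparing it against every stored descriptor."""
    return min(centers, key=lambda c: np.linalg.norm(query - centers[c]))

centers = cluster_centers(descriptors, labels)
query = rng.normal(loc=1.0, scale=0.1, size=64)  # unseen descriptor near group 1
print(classify_by_center(query, centers))
```

Comparing a query against k cluster centers instead of all n stored descriptors reduces each classification from O(n) to O(k) distance computations, which matches the efficiency argument in the abstract.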
author2 |
Tin-Yu Wu |
author_facet |
Tin-Yu Wu Tse Chang 張冊 |
author |
Tse Chang 張冊 |
spellingShingle |
Tse Chang 張冊 Using the Erroneous Data Clustering to Improve the Feature Extraction Weights of Images |
author_sort |
Tse Chang |
title |
Using the Erroneous Data Clustering to Improve the Feature Extraction Weights of Images |
title_short |
Using the Erroneous Data Clustering to Improve the Feature Extraction Weights of Images |
title_full |
Using the Erroneous Data Clustering to Improve the Feature Extraction Weights of Images |
title_fullStr |
Using the Erroneous Data Clustering to Improve the Feature Extraction Weights of Images |
title_full_unstemmed |
Using the Erroneous Data Clustering to Improve the Feature Extraction Weights of Images |
title_sort |
using the erroneous data clustering to improve the feature extraction weights of images |
publishDate |
2016 |
url |
http://ndltd.ncl.edu.tw/handle/78268734257991391636 |
work_keys_str_mv |
AT tsechang usingtheerroneousdataclusteringtoimprovethefeatureextractionweightsofimages AT zhāngcè usingtheerroneousdataclusteringtoimprovethefeatureextractionweightsofimages AT tsechang jíyóucuòwùshùjùdejùlèifāngshìgǎishànyǐngxiàngtèzhēngdiǎntíqǔdequánzhòng AT zhāngcè jíyóucuòwùshùjùdejùlèifāngshìgǎishànyǐngxiàngtèzhēngdiǎntíqǔdequánzhòng |
_version_ |
1718558201968852992 |