Highly Accurate Associative Image Classification by Incorporating SIFT Descriptors

Bibliographic Details
Main Authors: Tsung-Che Li, 李宗哲
Other Authors: Ming-Yen Lin
Format: Others
Language: en_US
Published: 2012
Online Access: http://ndltd.ncl.edu.tw/handle/95581746689737737434
id ndltd-TW-100FCU05392058
record_format oai_dc
spelling ndltd-TW-100FCU05392058 2015-10-13T21:27:32Z http://ndltd.ncl.edu.tw/handle/95581746689737737434 Highly Accurate Associative Image Classification by Incorporating SIFT Descriptors 結合SIFT描述子之高準確率關聯式圖像分類方法 Tsung-Che Li 李宗哲 Master's, Feng Chia University, Institute of Information Engineering, 100 Associative image classification, which analyzes images in classes to establish a classifier for predicting the class label of a new image, is an important technique in content-based image retrieval. An associative image classifier is characterized by extracting representative image features and mining classification rules from frequent patterns in each image class using those features. Traditional image classifiers use low-level features such as colors, but their results leave room for improvement. In this thesis, we propose using the Scale-Invariant Feature Transform (SIFT) descriptor, which extracts interest points from the inherent objects in an image, as the representative image feature to capture high-level semantics. Our experimental results show that a classifier using SIFT descriptors achieves about four times the accuracy of one using low-level features. Moreover, we design an algorithm that transforms the SIFT descriptors of an image class into a transactional dataset so that representative patterns can be mined. This transformation is necessary because SIFT descriptors express matched interest-point pairs between only two images at a time. In addition, we develop an algorithm called ICRP to improve rule selection in associative classifiers for higher accuracy. Most classifiers keep only one rule when classes have conflicting rules; ICRP instead keeps useful rules even when they conflict, and prunes redundant rules through general-rule and k-data coverage mechanisms that take class labels into account. Experimental results on UCI datasets show that ICRP improves accuracy over CPAR by approximately 7%, over CBA by 5.93%, and over CMAR by 1.88%. Furthermore, the proposed associative image classifier using SIFT and ICRP achieves 91% accuracy in classifying real images from Corel Gallery 1000000. Ming-Yen Lin 林明言 2012 學位論文 (degree thesis) ; thesis 84 en_US
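The abstract above describes transforming the SIFT descriptors of an image class into a transactional dataset so that frequent patterns can be mined. The thesis's actual transformation algorithm is not reproduced in this record; the following is a minimal sketch of one common route (quantizing descriptors against a small "visual vocabulary" of codewords, so each image becomes a set of word IDs). The codebook, the toy 4-D descriptors, and all names here are illustrative assumptions; real SIFT descriptors are 128-dimensional.

```python
# Sketch: per-image SIFT descriptors -> transactions for pattern mining,
# via nearest-codeword quantization (a hypothetical stand-in for the
# thesis's transformation; toy 4-D vectors instead of 128-D SIFT).

def nearest_word(desc, codebook):
    """Index of the codebook vector closest to desc (squared Euclidean)."""
    best, best_d = 0, float("inf")
    for i, word in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(desc, word))
        if d < best_d:
            best, best_d = i, d
    return best

def to_transactions(images, codebook):
    """Map each image (a list of descriptors) to a set of visual-word IDs."""
    return [{nearest_word(d, codebook) for d in img} for img in images]

# Toy 4-D "descriptors" and a 3-word codebook.
codebook = [(0, 0, 0, 0), (10, 10, 10, 10), (20, 20, 20, 20)]
images = [
    [(1, 0, 0, 1), (9, 11, 10, 10)],   # quantizes to words 0 and 1
    [(19, 20, 21, 20), (0, 1, 1, 0)],  # quantizes to words 2 and 0
]
transactions = to_transactions(images, codebook)
print(transactions)  # [{0, 1}, {0, 2}]
```

Once images are itemsets of word IDs, any standard frequent-itemset miner can extract the representative patterns per class.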
collection NDLTD
language en_US
format Others
sources NDLTD
description Master's === Feng Chia University === Institute of Information Engineering === 100 === Associative image classification, which analyzes images in classes to establish a classifier for predicting the class label of a new image, is an important technique in content-based image retrieval. An associative image classifier is characterized by extracting representative image features and mining classification rules from frequent patterns in each image class using those features. Traditional image classifiers use low-level features such as colors, but their results leave room for improvement. In this thesis, we propose using the Scale-Invariant Feature Transform (SIFT) descriptor, which extracts interest points from the inherent objects in an image, as the representative image feature to capture high-level semantics. Our experimental results show that a classifier using SIFT descriptors achieves about four times the accuracy of one using low-level features. Moreover, we design an algorithm that transforms the SIFT descriptors of an image class into a transactional dataset so that representative patterns can be mined. This transformation is necessary because SIFT descriptors express matched interest-point pairs between only two images at a time. In addition, we develop an algorithm called ICRP to improve rule selection in associative classifiers for higher accuracy. Most classifiers keep only one rule when classes have conflicting rules; ICRP instead keeps useful rules even when they conflict, and prunes redundant rules through general-rule and k-data coverage mechanisms that take class labels into account. Experimental results on UCI datasets show that ICRP improves accuracy over CPAR by approximately 7%, over CBA by 5.93%, and over CMAR by 1.88%. Furthermore, the proposed associative image classifier using SIFT and ICRP achieves 91% accuracy in classifying real images from Corel Gallery 1000000.
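The description field mentions two pruning mechanisms in ICRP: general-rule pruning and k-data coverage, both class-aware. The real ICRP algorithm is not detailed in this record; the sketch below shows one plausible reading of each idea. The rule and record shapes, the confidence-ordered processing, and the default k=2 are illustrative assumptions, not the thesis's definitions.

```python
# Sketch of two class-aware pruning ideas attributed to ICRP:
# (1) general-rule pruning: drop a rule if a strictly more general rule
#     (subset antecedent, same class) has at least its confidence;
# (2) k-data coverage: keep a rule only while it still covers some
#     same-class training record that fewer than k kept rules cover.

def prune_general(rules):
    """rules: list of (antecedent frozenset, class, confidence)."""
    kept = []
    for ante, cls, conf in rules:
        redundant = any(a < ante and c == cls and cf >= conf  # a < ante: proper subset
                        for a, c, cf in rules)
        if not redundant:
            kept.append((ante, cls, conf))
    return kept

def k_coverage(rules, records, k=2):
    """records: list of (item set, class label). Rules scanned by confidence."""
    counts = [0] * len(records)
    kept = []
    for ante, cls, conf in sorted(rules, key=lambda r: -r[2]):
        hits = [i for i, (items, label) in enumerate(records)
                if ante <= items and label == cls and counts[i] < k]
        if hits:  # still useful: covers an under-covered same-class record
            kept.append((ante, cls, conf))
            for i in hits:
                counts[i] += 1
    return kept

rules = [
    (frozenset({"a"}), "pos", 0.9),
    (frozenset({"a", "b"}), "pos", 0.8),  # redundant: {"a"} is more general
    (frozenset({"a"}), "neg", 0.6),       # conflicts with the first rule, yet kept
]
records = [({"a", "b"}, "pos"), ({"a"}, "neg")]
survivors = k_coverage(prune_general(rules), records)
print(survivors)
```

Note how the conflicting "neg" rule survives, matching the stated design goal of keeping useful rules even when classes have conflicting rules.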
author2 Ming-Yen Lin
author_facet Ming-Yen Lin
Tsung-Che Li
李宗哲
author Tsung-Che Li
李宗哲
spellingShingle Tsung-Che Li
李宗哲
Highly Accurate Associative Image Classification by Incorporating SIFT Descriptors
author_sort Tsung-Che Li
title Highly Accurate Associative Image Classification by Incorporating SIFT Descriptors
title_short Highly Accurate Associative Image Classification by Incorporating SIFT Descriptors
title_full Highly Accurate Associative Image Classification by Incorporating SIFT Descriptors
title_fullStr Highly Accurate Associative Image Classification by Incorporating SIFT Descriptors
title_full_unstemmed Highly Accurate Associative Image Classification by Incorporating SIFT Descriptors
title_sort highly accurate associative image classification by incorporating sift descriptors
publishDate 2012
url http://ndltd.ncl.edu.tw/handle/95581746689737737434
work_keys_str_mv AT tsungcheli highlyaccurateassociativeimageclassificationbyincorporatingsiftdescriptors
AT lǐzōngzhé highlyaccurateassociativeimageclassificationbyincorporatingsiftdescriptors
AT tsungcheli jiéhésiftmiáoshùzizhīgāozhǔnquèlǜguānliánshìtúxiàngfēnlèifāngfǎ
AT lǐzōngzhé jiéhésiftmiáoshùzizhīgāozhǔnquèlǜguānliánshìtúxiàngfēnlèifāngfǎ
_version_ 1718063218047320064