Highly Accurate Associative Image Classification by Incorporating SIFT Descriptors


Bibliographic Details
Main Authors: Tsung-Che Li, 李宗哲
Other Authors: Ming-Yen Lin
Format: Others
Language: en_US
Published: 2012
Online Access: http://ndltd.ncl.edu.tw/handle/95581746689737737434
Description
Summary: Master's Thesis === Feng Chia University === Department of Information Engineering === 100 === Associative image classification, which analyzes images by class to build a classifier that predicts the class label of a new image, is an important technique in content-based image retrieval. An associative image classifier is characterized by the extraction of representative image features and of classification rules, obtained by frequent pattern mining over the representative features in each image class. Traditional image classifiers use low-level features such as color, but the results leave room for improvement. In this thesis, we propose using the Scale-Invariant Feature Transform (SIFT) descriptor, which extracts interest points from the inherent objects in an image, as the representative image feature to capture high-level semantics. Our experimental results show that a classifier using SIFT descriptors achieves four times the accuracy of one using low-level features. Moreover, we design an algorithm that transforms the SIFT descriptors of an image class into a transactional dataset so that representative patterns can be mined; this transformation is necessary because SIFT descriptors only express matched interest-point pairs between two images. In addition, we develop an algorithm called ICRP that improves rule selection in associative classifiers for higher accuracy. Most classifiers keep only one rule when classes have conflicting rules, whereas ICRP retains useful rules even when they conflict and prunes redundant rules through general-rule and k-data-coverage mechanisms that take class labels into account. Experimental results on UCI datasets show that ICRP outperforms CPAR by approximately 7% in accuracy, CBA by 5.93%, and CMAR by 1.88%. Furthermore, the proposed associative image classifier using SIFT and ICRP achieves an accuracy of 91% when classifying real images from Corel Gallery 1000000.
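The descriptor-to-transaction step described in the abstract can be pictured as a bag-of-visual-words quantization: each 128-dimensional SIFT descriptor is mapped to its nearest cluster center ("visual word"), and each image becomes the set of words it contains, which is a transactional record suitable for frequent pattern mining. The following is a minimal sketch under that assumption, not the thesis's actual algorithm; it uses a hand-rolled k-means on random descriptors, where a real pipeline would use descriptors from a SIFT implementation (e.g., OpenCV):

```python
import numpy as np

def build_transactions(descriptors_per_image, k=8, iters=20, seed=0):
    """Quantize descriptors into k visual words via plain k-means,
    then represent each image as the sorted set of words it contains."""
    rng = np.random.default_rng(seed)
    all_desc = np.vstack(descriptors_per_image)
    # initialize centers from randomly chosen descriptors
    centers = all_desc[rng.choice(len(all_desc), size=k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest center
        dists = np.linalg.norm(all_desc[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers; keep old center if a cluster went empty
        for j in range(k):
            pts = all_desc[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    transactions = []
    for desc in descriptors_per_image:
        dists = np.linalg.norm(desc[:, None] - centers[None], axis=2)
        transactions.append(sorted(set(dists.argmin(axis=1).tolist())))
    return transactions

# toy example: three "images", each with ten random 128-dim descriptors
rng = np.random.default_rng(1)
images = [rng.normal(size=(10, 128)) for _ in range(3)]
tx = build_transactions(images, k=4)
```

Each entry of `tx` is one transaction (a set of visual-word IDs for one image); running a frequent-pattern miner per class over such transactions yields the candidate classification rules the abstract refers to.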