FAST FULL SEARCH FOR THE NEAREST NEIGHBOR - METHODS AND APPLICATIONS

Bibliographic Details
Main Authors: Wu, Kuang-Shyr (Keith), 吳匡時
Other Authors: Lin, Ja-Chen
Format: Others
Language: zh-TW
Published: 1997
Online Access: http://ndltd.ncl.edu.tw/handle/39151857804541275676
Description
Summary: Ph.D. === National Chiao Tung University === Department of Computer and Information Science === 86 === ABSTRACT: In this dissertation, we propose several fast searching algorithms for various applications of Nearest Neighbour (NN) search. The first is a reference-point-based fast searching method with N*(k/2 + 1) memory overhead, where N is the number of codewords and k is the dimensionality of each codeword. This method is used to accelerate the LBG codebook generation process in Vector Quantization (VQ) design. As for the second, when the high/low means generated by the Block Truncation Coding (BTC) image compression technique are to be quantized using VQ, the computation time needed to search for the nearest high/low mean sample can be reduced significantly; the memory overhead of this approach is only 2N. The third is a new approach that kicks out many impossible candidates using a single kick-out condition derived from the Schwarz Inequality. Owing to the efficiency and simplicity of the proposed condition, a considerable saving of the CPU time needed to encode a data set (using a given codebook) can be achieved, and the memory overhead is as low as 1N, which is quite competitive. The performance is demonstrated by encoding images with vector quantization (without BTC) when a codebook is given. Finally, we introduce a fast motion estimation method based on the Hierarchical Use of Minkowski's Inequality. Experimental results and complexity analysis are both given to show that the method outperforms many well-known methods such as PDE, SEA and TSS.
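
The abstract's third method relies on a single kick-out condition derived from the Schwarz Inequality with only 1N extra storage (one scalar per codeword). The dissertation's exact condition is not reproduced here; the sketch below shows one rejection test that follows from that inequality under the stated 1N budget, namely ||x - c||^2 >= (||x|| - ||c||)^2, so a codeword whose stored norm is too far from the query's norm cannot beat the current best match. The function name vq_encode_with_kickout and the random codebook are illustrative assumptions, not part of the dissertation.

    import numpy as np

    def vq_encode_with_kickout(x, codebook, code_norms):
        """Nearest-codeword search that skips codewords ruled out by a
        norm-based test.  code_norms[i] == np.linalg.norm(codebook[i]) is the
        only extra storage: one scalar per codeword, i.e. the 1N overhead
        mentioned in the abstract.  Illustrative sketch only."""
        x_norm = np.linalg.norm(x)
        best_idx = 0
        best_d2 = float(np.sum((x - codebook[0]) ** 2))
        for i in range(1, len(codebook)):
            gap = x_norm - code_norms[i]
            # Kick-out: since ||x - c||^2 >= (||x|| - ||c||)^2, this codeword
            # cannot beat the current best, so the full distance is skipped.
            if gap * gap >= best_d2:
                continue
            d2 = float(np.sum((x - codebook[i]) ** 2))
            if d2 < best_d2:
                best_idx, best_d2 = i, d2
        return best_idx, best_d2

    # Example: encode one 16-dimensional vector against a 256-codeword codebook.
    rng = np.random.default_rng(0)
    codebook = rng.standard_normal((256, 16))
    code_norms = np.linalg.norm(codebook, axis=1)   # precomputed once (1N)
    query = rng.standard_normal(16)
    print(vq_encode_with_kickout(query, codebook, code_norms))

Similarly, the fast motion estimation method is said to rest on a hierarchical use of Minkowski's Inequality. One plausible reading, sketched below under that assumption, is a sum-based lower bound on the block SAD: on every sub-block, the sum of absolute pixel differences is at least the absolute difference of the sub-block sums, and adding these bounds over a partition tightens the test, so a candidate displacement whose bound already exceeds the best SAD found so far can be rejected without computing its full SAD. The helper name, the NumPy block arrays, and the four-strip partition are hypothetical; the dissertation's construction may differ.

    def sad_lower_bound(cur_block, ref_block, strips=4):
        """Lower bound on sum(|cur - ref|) for two NumPy blocks, obtained by
        applying the Minkowski (triangle) inequality on each horizontal strip
        and summing the per-strip bounds.  Hypothetical sketch only."""
        h = cur_block.shape[0] // strips
        bound = 0.0
        for r in range(strips):
            rows = slice(r * h, (r + 1) * h)
            bound += abs(float(cur_block[rows].sum()) - float(ref_block[rows].sum()))
        return bound

    # A candidate displacement is skipped whenever sad_lower_bound(...) >= best_sad,
    # because its true SAD can only be larger or equal.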