Improved classification of remote sensing imagery using image fusion techniques

Bibliographic Details
Main Author: Gormus, Esra Tunc
Published: University of Bristol 2013
Subjects:
Online Access:http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.601185
Description
Summary: Remote sensing is a quick and inexpensive way of gathering information about the Earth. It enables one to obtain constantly updated information from satellite images for real-time local and global mapping of environmental changes. Current classification methods used for extracting relevant knowledge from this huge information pool are not very efficient because of the limited number of training samples and the high dimensionality of the images. Information fusion is often used to improve classification accuracy, either before or after classification is performed. However, these techniques cannot always overcome the aforementioned issues. Therefore, in this thesis, new methods are introduced to increase the classification accuracy of remotely sensed data by means of information fusion techniques. The thesis is structured in three parts.

In the first part, a novel pixel-based image fusion technique is introduced to fuse optical and SAR image data in order to increase classification accuracy. Fused images obtained via conventional fusion methods may not contain enough information for subsequent processing such as classification or feature extraction. The proposed method aims to retain the maximum contextual and spatial information from the source data by exploiting the relationship between spatial-domain cumulants and wavelet-domain cumulants. The novelty of the method lies in integrating this relationship between the spatial- and wavelet-domain cumulants of the source images into the image fusion process, and in employing these wavelet cumulants to optimise the weights in a Cauchy-convolution-based image fusion scheme.

In the second part, a novel feature-based image fusion method is proposed in order to increase the classification accuracy of hyperspectral images.
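The pixel-level, wavelet-domain fusion idea from the first part can be illustrated with a minimal sketch. The abstract does not give the cumulant or Cauchy-convolution details, so the snippet below substitutes a simple local-energy weighting for the thesis's cumulant-derived weights, and uses a hand-rolled one-level Haar transform; function names (`haar_dwt2`, `fuse_wavelet`) are illustrative, not from the thesis.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def fuse_wavelet(img1, img2):
    """Fuse two co-registered images (e.g. optical and SAR) in the
    wavelet domain. Each subband is a weighted mix of the two sources;
    the weight here is a plain coefficient-energy ratio, standing in
    for the cumulant-optimised weights used in the thesis."""
    fused = []
    for b1, b2 in zip(haar_dwt2(img1), haar_dwt2(img2)):
        e1, e2 = b1 ** 2, b2 ** 2
        w = e1 / (e1 + e2 + 1e-12)        # per-coefficient weight in [0, 1]
        fused.append(w * b1 + (1 - w) * b2)
    return haar_idwt2(*fused)
```

Because the Haar pair reconstructs perfectly, fusing an image with itself returns the image unchanged, which is a convenient sanity check for any weighting rule plugged into this scheme.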
An application of Empirical Mode Decomposition (EMD) to wavelet-based dimensionality reduction is presented, with the aim of generating the smallest set of features that leads to better classification accuracy than either technique used alone. Useful spectral information for hyperspectral image classification can be obtained by applying the Wavelet Transform (WT) to each hyperspectral signature. As EMD has the ability to describe short-term spatial changes in frequencies, it helps to give a better understanding of the spatial information in the signal. In order to take advantage of both spectral and spatial information, a novel dimensionality reduction method is introduced which relies on taking the wavelet transform of EMD features. This leads to better class separability and hence to better classification.
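The EMD-then-wavelet feature pipeline described above can be sketched for a single hyperspectral signature. The thesis does not specify its exact implementation here, so this is an illustrative stand-in: a crude single-IMF sift with linear envelopes (full EMD uses cubic-spline envelopes, a stopping criterion, and several IMFs) followed by a one-level Haar transform; all function names are hypothetical.

```python
import numpy as np

def extract_imf(x, n_sift=8):
    """Very simplified sift of one intrinsic mode function (IMF):
    subtract the mean of linear upper/lower envelopes a fixed number
    of times. Endpoints are treated as extrema to keep np.interp
    defined over the whole signal."""
    h = np.asarray(x, dtype=float).copy()
    idx = np.arange(len(h))
    for _ in range(n_sift):
        n = len(h)
        mx = [0] + [i for i in range(1, n - 1)
                    if h[i] >= h[i - 1] and h[i] >= h[i + 1]] + [n - 1]
        mn = [0] + [i for i in range(1, n - 1)
                    if h[i] <= h[i - 1] and h[i] <= h[i + 1]] + [n - 1]
        upper = np.interp(idx, mx, h[mx])   # linear upper envelope
        lower = np.interp(idx, mn, h[mn])   # linear lower envelope
        h = h - (upper + lower) / 2.0       # remove local mean
    return h

def haar_features(sig):
    """One-level 1-D Haar transform: approximation + detail coefficients."""
    sig = sig[: len(sig) // 2 * 2]          # drop odd trailing sample
    approx = (sig[0::2] + sig[1::2]) / 2.0
    detail = (sig[0::2] - sig[1::2]) / 2.0
    return np.concatenate([approx, detail])

def emd_wavelet_features(spectrum):
    """Spectral-spatial feature vector for one hyperspectral signature:
    sift an IMF from the spectrum, then take its wavelet coefficients."""
    return haar_features(extract_imf(spectrum))
```

In a full system, features like these would be computed per pixel, pruned to the smallest discriminative subset, and passed to a classifier; the dimensionality reduction comes from keeping only the most informative coefficients.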