Unsupervised Transfer Learning via Relative Distance Comparisons

Bibliographic Details
Main Authors: Rakesh Kumar Sanodiya, Leehter Yao
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9117096/
Description
Summary: Classical machine learning methods such as the Support Vector Machine (SVM) or k-Nearest Neighbor (k-NN) face a major challenge when their training and test data exhibit large variations in lighting conditions, color, background, size, etc. Such variation can arise because the training and test data come from related but distinct domains. Considerable effort has been devoted to developing transfer learning methods. However, most current work focuses only on the following objectives: i) preservation of source-domain discriminative information with Linear Discriminant Analysis (LDA); ii) maximization of target-domain variance; iii) subspace alignment; iv) minimization of the marginal and conditional distribution differences using the Maximum Mean Discrepancy (MMD) criterion; v) preservation of the original similarity of the data samples. Existing approaches to preserving source-domain discriminative information can easily misclassify target-domain samples that lie near the edge of a cluster. To overcome the limitations of existing transfer learning methods, we propose a novel Unsupervised Transfer Learning via Relative Distance Comparisons (UTRDC) method. UTRDC optimizes all of the aforementioned objectives jointly, with a common projection matrix for both domains, and uses relative distance constraints to achieve better inter-class separability and intra-class compactness. Furthermore, we extend UTRDC to a kernelized version to handle datasets that are not linearly separable. Extensive experiments on two real-world datasets (PIE face and Office+Caltech) show that the proposed methods outperform several non-transfer learning and transfer learning approaches.
ISSN:2169-3536
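
For orientation, the sketch below illustrates two of the ingredients named in the abstract, an empirical MMD estimate and triplet-style relative distance constraints, in plain NumPy. This is not the authors' code: the function names, the choice of a linear kernel for the MMD estimate, and the toy data are all illustrative assumptions.

import numpy as np

def linear_mmd(Xs, Xt):
    """Empirical Maximum Mean Discrepancy (MMD) with a linear kernel:
    the squared Euclidean distance between the source and target
    sample means. Methods like UTRDC minimize terms of this kind to
    align the marginal distributions of the two domains."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def triplet_violations(X, triplets):
    """Count violated relative distance constraints. Each triplet
    (i, j, k) encodes 'sample i should lie closer to j (same class)
    than to k (different class)' in the projected space."""
    viol = 0
    for i, j, k in triplets:
        if np.sum((X[i] - X[j]) ** 2) >= np.sum((X[i] - X[k]) ** 2):
            viol += 1
    return viol

# Toy usage: project both domains with one shared matrix A (mirroring
# the abstract's common projection matrix for both domains), then
# inspect the marginal-distribution mismatch after projection.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(60, 10))   # source samples
Xt = rng.normal(0.5, 1.0, size=(40, 10))   # shifted target samples
A = rng.normal(size=(10, 3))               # shared projection matrix
print(linear_mmd(Xs @ A, Xt @ A))

In an actual method of this kind, A would be learned by jointly minimizing the MMD term and the number (or hinge penalty) of triplet violations, rather than drawn at random as in this toy example.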