Deep Multiview Learning From Sequentially Unaligned Data


Bibliographic Details
Main Authors: Doan Phong Tung, Atsuhiro Takasu
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9279307/
Description
Summary: Multiview learning is concerned with machine learning problems where data are represented by distinct feature sets or views. Recently, this definition has been extended to accommodate sequential data, i.e., each view of the data is in the form of a sequence. Multiview sequential data pose major challenges for representation learning, including i) the absence of sample correspondence information between the views, ii) complex relations among samples within each view, and iii) the high complexity of handling multiple sequences. In this article, we first introduce a generalized deep learning model that can simultaneously discover sample correspondence and capture cross-view relations among the data sequences. The model parameters can be optimized using a gradient descent-based algorithm, and computing the gradient requires at most quadratic time and space with respect to the sequence lengths. Building on this model, we propose a second model that integrates the objective with the reconstruction losses of autoencoders. This allows the second model to provide a better trade-off between view-specific and cross-view relations in the data. Finally, to handle multiple (more than two) data sequences, we develop a third model along with a convergence-guaranteed optimization algorithm. Extensive experiments on public datasets demonstrate the superior performance of our models over competing methods.
ISSN: 2169-3536
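As a rough illustration of the kind of objective the abstract describes (a cross-view alignment term with quadratic cost in the sequence lengths, later combined with autoencoder reconstruction losses), the sketch below uses a soft-DTW-style alignment cost between two linearly embedded sequences plus simple reconstruction terms. This is not the authors' code: the choice of a soft-DTW-like recurrence, the linear encoders W_x and W_y standing in for deep networks, and the names soft_alignment_cost, lam, and gamma are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's implementation): a soft-DTW-style
# alignment cost between two embedded, unaligned sequences, combined with
# autoencoder-style reconstruction terms.
import numpy as np

def soft_min(values, gamma=1.0):
    """Smooth minimum (log-sum-exp form), a differentiable surrogate for min."""
    values = np.asarray(values) / -gamma
    m = values.max()
    return -gamma * (m + np.log(np.exp(values - m).sum()))

def soft_alignment_cost(X, Y, gamma=1.0):
    """Soft alignment cost between sequences X (n x d) and Y (m x d).
    The recurrence fills an (n+1) x (m+1) table, so time and space are
    quadratic in the sequence lengths."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sum((X[i - 1] - Y[j - 1]) ** 2)  # pairwise squared distance
            D[i, j] = cost + soft_min(
                [D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]], gamma
            )
    return D[n, m]

def total_loss(X, Y, W_x, W_y, lam=0.1, gamma=1.0):
    """Cross-view alignment loss on the embeddings plus reconstruction losses.
    W_x and W_y play the role of (here, linear) encoders; lam trades off the
    cross-view term against the view-specific reconstruction terms."""
    Zx, Zy = X @ W_x, Y @ W_y                   # view-specific embeddings
    align = soft_alignment_cost(Zx, Zy, gamma)  # cross-view correspondence term
    recon = np.mean((Zx @ W_x.T - X) ** 2) + np.mean((Zy @ W_y.T - Y) ** 2)
    return align + lam * recon

# Toy usage: two unaligned sequences with different lengths and feature sizes.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(12, 5)), rng.normal(size=(9, 7))
W_x, W_y = rng.normal(size=(5, 3)), rng.normal(size=(7, 3))
print(total_loss(X, Y, W_x, W_y))
```

In this toy setting the alignment term discourages embeddings that cannot be matched across views, while the reconstruction term keeps each embedding faithful to its own view, mirroring the trade-off between cross-view and view-specific relations mentioned in the abstract.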