Learning Aligned Cross-Modal Representations from Weakly Aligned Data

People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.
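The abstract describes regularizing cross-modal convolutional networks so that their higher layers share a modality-agnostic representation. As a rough illustration only, the following minimal PyTorch sketch shows one way such a setup can look: per-modality encoders feeding shared layers, trained with a classification loss plus a penalty that matches activation statistics across modalities. All names, layer sizes, and the specific penalty are assumptions for illustration, not the authors' implementation.

    # Minimal sketch (PyTorch): modality-specific encoders with shared top
    # layers, plus a statistics-matching penalty that encourages the shared
    # representation to be agnostic of the input modality. Architecture and
    # penalty are illustrative assumptions, not the paper's exact method.
    import torch
    import torch.nn as nn

    class CrossModalNet(nn.Module):
        def __init__(self, feat_dim=256, num_classes=205):
            super().__init__()
            # One lightweight encoder per modality (e.g. natural image, sketch).
            self.encoders = nn.ModuleDict({
                "image":  nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU()),
                "sketch": nn.Sequential(nn.Flatten(), nn.Linear(1 * 64 * 64, feat_dim), nn.ReLU()),
            })
            # Shared layers: the representation here should not depend on modality.
            self.shared = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
            self.classifier = nn.Linear(feat_dim, num_classes)

        def forward(self, x, modality):
            h = self.shared(self.encoders[modality](x))
            return self.classifier(h), h

    def alignment_penalty(h_a, h_b):
        # Match per-unit mean activations across the two modality batches;
        # a simple proxy for a modality-agnostic shared representation.
        return (h_a.mean(dim=0) - h_b.mean(dim=0)).pow(2).mean()

    # Usage: classification loss on each modality plus the alignment term.
    model = CrossModalNet()
    ce = nn.CrossEntropyLoss()
    img, img_y = torch.randn(8, 3, 64, 64), torch.randint(0, 205, (8,))
    skc, skc_y = torch.randn(8, 1, 64, 64), torch.randint(0, 205, (8,))
    logits_i, h_i = model(img, "image")
    logits_s, h_s = model(skc, "sketch")
    loss = ce(logits_i, img_y) + ce(logits_s, skc_y) + 0.1 * alignment_penalty(h_i, h_s)
    loss.backward()

Matching only first-order activation statistics is the simplest possible alignment choice; treat the penalty above as a placeholder for the regularization methods the paper actually presents.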


Bibliographic Details
Main Authors: Castrejon, Lluis (Author), Pirsiavash, Hamed (Author), Aytar, Yusuf (Contributor), Vondrick, Carl Martin (Contributor), Torralba, Antonio (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2017-12-29T19:43:54Z.
Online Access: Get fulltext: http://hdl.handle.net/1721.1/112989
LEADER 02020 am a22002773u 4500
001 112989
042 |a dc 
100 1 0 |a Castrejon, Lluis  |e author 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
100 1 0 |a Aytar, Yusuf  |e contributor 
100 1 0 |a Vondrick, Carl Martin  |e contributor 
100 1 0 |a Torralba, Antonio  |e contributor 
700 1 0 |a Pirsiavash, Hamed  |e author 
700 1 0 |a Aytar, Yusuf  |e author 
700 1 0 |a Vondrick, Carl Martin  |e author 
700 1 0 |a Torralba, Antonio  |e author 
245 0 0 |a Learning Aligned Cross-Modal Representations from Weakly Aligned Data 
260 |b Institute of Electrical and Electronics Engineers (IEEE),   |c 2017-12-29T19:43:54Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/112989 
520 |a People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality. 
520 |a National Science Foundation (U.S.) (Grant IIS-1524817) 
520 |a Google (Firm) (Faculty Research Award) 
520 |a Google (Firm) (Ph.D. Fellowship) 
546 |a en_US 
655 7 |a Article 
773 |t 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)