Learning invariant representations and applications to face verification

Bibliographic Details
Main Authors: Liao, Qianli (Contributor), Leibo, Joel Z. (Contributor), Poggio, Tomaso A. (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor), McGovern Institute for Brain Research at MIT (Contributor)
Format: Article
Language: English
Published: Neural Information Processing Systems Foundation, 2014-12-16T15:01:38Z.
Subjects:
Online Access: Get fulltext
LEADER 02672 am a22002413u 4500
001 92318
042 |a dc 
100 1 0 |a Liao, Qianli  |e author 
100 1 0 |a Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
100 1 0 |a McGovern Institute for Brain Research at MIT  |e contributor 
100 1 0 |a Liao, Qianli  |e contributor 
100 1 0 |a Leibo, Joel Z.  |e contributor 
100 1 0 |a Poggio, Tomaso A.  |e contributor 
700 1 0 |a Leibo, Joel Z.  |e author 
700 1 0 |a Poggio, Tomaso A.  |e author 
245 0 0 |a Learning invariant representations and applications to face verification 
260 |b Neural Information Processing Systems Foundation,   |c 2014-12-16T15:01:38Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/92318 
520 |a One approach to computer object recognition and modeling the brain's ventral stream involves unsupervised learning of representations that are invariant to common transformations. However, applications of these ideas have usually been limited to 2D affine transformations, e.g., translation and scaling, since they are easiest to solve via convolution. In accord with a recent theory of transformation-invariance, we propose a model that, while capturing other common convolutional networks as special cases, can also be used with arbitrary identity-preserving transformations. The model's wiring can be learned from videos of transforming objects---or any other grouping of images into sets by their depicted object. Through a series of successively more complex empirical tests, we study the invariance/discriminability properties of this model with respect to different transformations. First, we empirically confirm theoretical predictions for the case of 2D affine transformations. Next, we apply the model to non-affine transformations: as expected, it performs well on face verification tasks requiring invariance to the relatively smooth transformations of 3D rotation-in-depth and changes in illumination direction. Surprisingly, it can also tolerate "clutter transformations" which map an image of a face on one background to an image of the same face on a different background. Motivated by these empirical findings, we tested the same model on face verification benchmark tasks from the computer vision literature: Labeled Faces in the Wild, PubFig and a new dataset we gathered---achieving strong performance in these highly unconstrained cases as well.
546 |a en_US 
655 7 |a Article 
773 |t Advances in Neural Information Processing Systems (NIPS)
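
Illustrative note (not part of the bibliographic record): the abstract in field 520 describes a template-pooling approach to transformation invariance, in which an image is projected onto stored sets of transformed template images and the projections are pooled to discard the transformation parameter. The sketch below is a minimal, hypothetical rendering of that idea in NumPy; all function names, array shapes, and thresholds are assumptions for illustration and do not reproduce the authors' implementation.

import numpy as np

def invariant_signature(image, template_orbits, n_bins=16):
    """Return a signature of `image` approximately invariant to the
    transformations spanned by each template orbit.

    image           : 1-D array, a flattened, unit-normalized image.
    template_orbits : list of 2-D arrays; each has shape
                      (n_transformed_copies, n_pixels), rows unit-normalized.
    """
    signature = []
    for orbit in template_orbits:
        # Dot products with every transformed copy of one template ...
        projections = orbit @ image
        # ... then pool over the orbit; a histogram of projections is one
        # pooling choice that discards which transformation produced each copy.
        hist, _ = np.histogram(projections, bins=n_bins, range=(-1.0, 1.0))
        signature.append(hist / max(len(projections), 1))
    return np.concatenate(signature)

def same_identity(img_a, img_b, template_orbits, threshold=0.9):
    """Toy verification decision: compare invariant signatures of two images."""
    sig_a = invariant_signature(img_a, template_orbits)
    sig_b = invariant_signature(img_b, template_orbits)
    cos = sig_a @ sig_b / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b) + 1e-12)
    return cos > threshold

In this sketch the "wiring learned from videos or groupings of images" mentioned in the abstract corresponds simply to how template_orbits is assembled: each orbit groups images depicting the same template object under different views, illuminations, or backgrounds.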