Integrating Illumination, Motion, and Shape Models for Robust Face Recognition in Video

The use of video sequences for face recognition has been relatively less studied compared to image-based approaches. In this paper, we present an analysis-by-synthesis framework for face recognition from video sequences that is robust to large changes in facial pose and lighting conditions. This requires tracking the video sequence, as well as recognition algorithms that are able to integrate information over the entire video; we address both these problems. Our method is based on a recently obtained theoretical result that can integrate the effects of motion, lighting, and shape in generating an image using a perspective camera. This result can be used to estimate the pose and structure of the face and the illumination conditions for each frame in a video sequence in the presence of multiple point and extended light sources. We propose a new inverse compositional estimation approach for this purpose. We then synthesize images using the face model estimated from the training data corresponding to the conditions in the probe sequences. Similarity between the synthesized and the probe images is computed using suitable distance measurements. The method can handle situations where the pose and lighting conditions in the training and testing data are completely disjoint. We show detailed performance analysis results and recognition scores on a large video dataset.
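
For orientation, below is a minimal Python sketch of the analysis-by-synthesis matching loop the abstract describes. The model representation, the pose/lighting estimator, and the renderer are hypothetical placeholders (the paper itself uses an inverse compositional estimator and a 3D face model), and the L2 distance and per-frame averaging are illustrative choices, not the authors' exact measures.

```python
import numpy as np

def estimate_pose_and_lighting(frame, face_model):
    # Placeholder: the paper estimates 3D pose and a multi-source lighting
    # description per frame; here we just return fixed dummy parameters.
    return np.zeros(6), np.ones(9)

def render_face(face_model, pose, lighting):
    # Placeholder: the paper renders the textured 3D face model under the
    # estimated pose and illumination; here we simply return the model image.
    return face_model

def recognize_sequence(probe_frames, gallery_models):
    """Return the gallery identity whose synthesized frames best match the probe video."""
    scores = {}
    for identity, face_model in gallery_models.items():
        distances = []
        for frame in probe_frames:
            pose, lighting = estimate_pose_and_lighting(frame, face_model)
            synthesized = render_face(face_model, pose, lighting)
            # Compare synthesized and observed frames; plain L2 distance
            # stands in for the paper's "suitable distance measurements".
            distances.append(np.linalg.norm(synthesized - frame))
        # Integrate evidence over the entire video; averaging the per-frame
        # distances is one simple choice.
        scores[identity] = float(np.mean(distances))
    return min(scores, key=scores.get)

# Tiny usage example with random 8x8 "images" for two identities.
rng = np.random.default_rng(0)
gallery = {"subject_a": rng.random((8, 8)), "subject_b": rng.random((8, 8))}
probe = [gallery["subject_a"] + 0.05 * rng.random((8, 8)) for _ in range(5)]
print(recognize_sequence(probe, gallery))  # expected: subject_a
```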


Bibliographic Details
Main Authors: Keyur Patel, Amit Roy-Chowdhury, Yilei Xu
Format: Article
Language: English
Published: SpringerOpen 2008-05-01
Series: EURASIP Journal on Advances in Signal Processing
ISSN: 1687-6172
Online Access: http://dx.doi.org/10.1155/2008/469698