A view transformation model based on sparse and redundant representation for human gait recognition

Background: Human gait, as an effective behavioral biometric identifier, has received much attention in recent years. However, several challenges reduce its performance in practice. In this work, we aim to improve the performance of gait recognition systems under variations in view angle, which present one of the major challenges to gait algorithms. Methods: We propose employing a view transformation model based on sparse and redundant (SR) representation. More specifically, the proposed method trains a set of corresponding dictionaries for each viewing angle, which are then used in the identification of a probe. In particular, the view transformation is performed by first obtaining the SR representation of the input image using the appropriate dictionary, and then multiplying this representation by the dictionary of the destination angle to obtain a corresponding image in the intended angle. Results: Experiments performed on CASIA Gait Database, Dataset B, support the satisfactory performance of our method. In most tests, the proposed method outperforms the methods it is compared with; this is especially the case for large changes in view angle, as well as for the average recognition rate. Conclusion: A comparison with state-of-the-art methods in the literature shows the superior performance of the proposed method, especially in the case of large variations in view angle.

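The Methods passage above outlines a two-step transformation: compute the sparse and redundant (SR) representation of a probe image over the dictionary trained for its viewing angle, then multiply that code by the dictionary of the destination angle to synthesise the probe as it would appear from that angle. The Python sketch below illustrates only this transformation step, under assumptions not stated in the abstract: the coupled per-angle dictionaries D_src and D_dst are taken as already trained on corresponding gait images, and the function name, array shapes, and the orthogonal-matching-pursuit sparsity level are illustrative choices, not the authors' exact configuration.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit


def sr_view_transform(x_src, D_src, D_dst, n_nonzero=20):
    """Map a vectorised probe image from the source view to the destination
    view via its sparse code over coupled dictionaries (illustrative sketch).

    x_src : (n_pixels,) probe image at the source angle.
    D_src : (n_pixels, n_atoms) dictionary trained for the source angle.
    D_dst : (n_pixels, n_atoms) corresponding dictionary for the destination angle.
    """
    # Step 1: sparse and redundant representation of the probe over D_src.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D_src, x_src)
    alpha = omp.coef_                      # sparse code, shape (n_atoms,)

    # Step 2: re-synthesise the image in the destination view with D_dst.
    return D_dst @ alpha


# Toy usage with random data; real inputs would be gait images
# (e.g. 64 x 64 silhouettes flattened to 4096-dimensional vectors).
rng = np.random.default_rng(0)
n_pixels, n_atoms = 256, 512
D_src = rng.standard_normal((n_pixels, n_atoms))
D_dst = rng.standard_normal((n_pixels, n_atoms))
x_probe = rng.standard_normal(n_pixels)
x_at_destination = sr_view_transform(x_probe, D_src, D_dst)
print(x_at_destination.shape)              # (256,)

In a full pipeline, the transformed image would then be matched against the gallery at the destination angle with an ordinary same-view gait classifier; that matching step, and however the paper actually trains the corresponding dictionaries, are outside this sketch.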

Bibliographic Details
Main Authors: Abbas Ghebleh, Mohsen Ebrahimi Moghaddam
Format: Article
Language: English
Published: Wolters Kluwer Medknow Publications, 2020-01-01
Series: Journal of Medical Signals and Sensors
ISSN: 2228-7477
DOI: 10.4103/jmss.JMSS_59_19
Subjects: biometrics; gait analysis; human identification; sparse and redundant representation; view transformation model; view-invariant
Online Access: http://www.jmssjournal.net/article.asp?issn=2228-7477;year=2020;volume=10;issue=3;spage=135;epage=144;aulast=Ghebleh