Unsupervised Region Attention Network for Person Re-Identification

Bibliographic Details
Main Authors: Chenrui Zhang, Yangxu Wu, Tao Lei
Format: Article
Language: English
Published: IEEE 2019-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/8897584/
Description
Summary: Supervised person re-identification (Re-Id) requires massive amounts of labeled pedestrian data, which are very difficult to collect in practice, so unsupervised Re-Id approaches have attracted much more attention. Existing unsupervised person Re-Id models learn global pedestrian features from whole images or from several fixed patches. These models ignore the differences among regions of a pedestrian image, such as occluded or pose-invariant regions, when building feature representations, and thus reduce the robustness of cross-view feature learning. To address these issues, we propose an Unsupervised Region Attention Network (URAN) that learns cross-view region attention features from cropped pedestrian images, weighted by region importance weights. The proposed URAN designs a Pedestrian Region Biased Enhance (PRBE) loss to produce high attention weights for the most important regions of pedestrian images. Furthermore, the URAN employs a first neighbor relation grouping algorithm and a First Neighbor Relation Constraint (FNRC) loss to provide the training direction of the unsupervised region attention network, so that the region attention features are discriminative enough for the unsupervised person Re-Id task. In experiments, we use two popular datasets, Market1501 and DukeMTMC-reID, to evaluate the PRBE and FNRC losses and their balance parameter, demonstrating the effectiveness and efficiency of the proposed URAN; the experimental results show that the URAN outperforms state-of-the-art methods by at least 1.1%.
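As a reading aid (not taken from the article), the following is a minimal Python sketch of two ideas named in the abstract: grouping samples through their first neighbor relations and combining the FNRC and PRBE loss terms through a balance parameter. The function names, the use of cosine similarity, and the union-find style grouping are assumptions made for illustration; the article defines the actual losses and grouping algorithm.

# Minimal sketch (not the authors' code): first-neighbor grouping and a
# two-term loss with a balance parameter, using hypothetical names.
import numpy as np

def first_neighbor_groups(features):
    # features: (N, D) array of L2-normalised region attention features.
    # Returns one group label per sample, where samples connected through
    # chains of first-neighbor links share a label.
    sim = features @ features.T            # cosine similarity (rows are unit-norm)
    np.fill_diagonal(sim, -np.inf)         # ignore self-similarity
    first_nn = sim.argmax(axis=1)          # each sample's first neighbor

    labels = np.arange(len(features))      # union-find parents
    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]  # path compression
            i = labels[i]
        return i
    for i, j in enumerate(first_nn):       # link every sample to its first neighbor
        labels[find(i)] = find(j)
    return np.array([find(i) for i in range(len(features))])

def total_loss(fnrc_loss, prbe_loss, balance=0.5):
    # Combine the two abstract-level loss terms; "balance" stands in for the
    # balance parameter studied in the article's experiments.
    return fnrc_loss + balance * prbe_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(8, 16))
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    print(first_neighbor_groups(feats))    # pseudo-labels for unsupervised training
    print(total_loss(1.2, 0.4, balance=0.3))

In this sketch the grouping output plays the role of pseudo-labels that give the network its training direction, and the balance parameter trades off the two losses; the precise formulations are given in the article.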
ISSN: 2169-3536