Pedestrian alignment network for large-scale person re-identification

Zhedong Zheng, Liang Zheng, Yi Yang*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    301 Citations (Scopus)

    Abstract

    Person re-identification (re-ID) is mostly viewed as an image retrieval problem: the task aims to search for a query person in a large image pool. In practice, person re-ID usually adopts automatic detectors to obtain cropped pedestrian images. However, this process suffers from two types of detector errors: excessive background and missing body parts. Both errors deteriorate the quality of pedestrian alignment and may compromise pedestrian matching due to position and scale variance. To address the misalignment problem, we propose that alignment be learned from the identification procedure. We introduce the pedestrian alignment network (PAN), which jointly performs discriminative embedding learning and pedestrian alignment without extra annotations. We observe that when a convolutional neural network learns to discriminate between different identities, the learned feature maps usually exhibit strong activations on the human body rather than the background. The proposed network thus takes advantage of this attention mechanism to adaptively locate and align pedestrians within a bounding box. Visual examples show that pedestrians are better aligned with PAN. Experiments on three large-scale re-ID datasets confirm that PAN improves the discriminative ability of the feature embeddings and yields competitive accuracy with state-of-the-art methods.
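    The alignment mechanism described above can be sketched as a spatial-transformer-style module. The snippet below is a minimal illustration under assumed names and sizes (`AffineAligner`, `in_channels=2048`), not the authors' exact architecture: it pools the base network's activation maps (which respond mainly to the human body), regresses the parameters of a 2x3 affine transform, and resamples the crop so the pedestrian is re-centred and re-scaled.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AffineAligner(nn.Module):
        """Hypothetical attention-driven aligner in the spirit of PAN."""

        def __init__(self, in_channels=2048):
            super().__init__()
            # Regress the 6 parameters of a 2x3 affine transform from
            # globally pooled activation maps.
            self.fc = nn.Linear(in_channels, 6)
            # Start at the identity transform so early training leaves
            # the crop unchanged (a common spatial-transformer convention).
            nn.init.zeros_(self.fc.weight)
            with torch.no_grad():
                self.fc.bias.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

        def forward(self, feat_maps, image):
            # feat_maps: (N, C, H, W) activations that fire mostly on the
            # body; image: (N, 3, H', W') detector-cropped pedestrian.
            pooled = F.adaptive_avg_pool2d(feat_maps, 1).flatten(1)  # (N, C)
            theta = self.fc(pooled).view(-1, 2, 3)                   # (N, 2, 3)
            # Build a sampling grid from theta and warp the crop.
            grid = F.affine_grid(theta, image.size(), align_corners=False)
            return F.grid_sample(image, grid, align_corners=False)

    # Usage with dummy shapes (assumed, for illustration only):
    aligner = AffineAligner(in_channels=2048)
    crops = torch.randn(4, 3, 256, 128)   # pedestrian crops
    feats = torch.randn(4, 2048, 8, 4)    # backbone activation maps
    aligned = aligner(feats, crops)       # (4, 3, 256, 128), realigned
    ```

    Initialising the regressor at the identity transform means the aligned branch initially sees the original crop, so the warp is learned gradually from the identification loss, matching the paper's claim that no extra alignment annotations are needed.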

    Original language: English
    Article number: 8481710
    Pages (from-to): 3037-3045
    Number of pages: 9
    Journal: IEEE Transactions on Circuits and Systems for Video Technology
    Volume: 29
    Issue number: 10
    DOIs
    Publication status: Published - Oct 2019
