Cross-view and multi-view gait recognitions based on view transformation model using multi-layer perceptron

Worapan Kusakunniran*, Qiang Wu, Jian Zhang, Hongdong Li

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    62 Citations (Scopus)

    Abstract

    Gait has been shown to be an efficient biometric feature for human identification at a distance. However, the performance of gait recognition can be affected by view variation, which makes cross-view gait recognition difficult. A novel method is proposed to address this difficulty using a view transformation model (VTM). The VTM is constructed through regression, adopting a multi-layer perceptron (MLP) as the regression tool. The VTM estimates the gait feature in one view from a well-selected region of interest (ROI) on the gait feature in another view. Trained VTMs can therefore normalize gait features from different views into the same view before gait similarity is measured. Moreover, this paper proposes a new multi-view gait recognition method which estimates the gait feature in one view using selected gait features from several other views. Extensive experimental results demonstrate that the proposed method significantly outperforms baseline methods in the literature for both cross-view and multi-view gait recognition. In particular, average accuracies of 99%, 98% and 93% are achieved for multi-view gait recognition using 5, 4 and 3 cameras, respectively.
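
    The sketch below is a rough illustration of the regression-based VTM idea described in the abstract: an MLP is trained to map ROI features of a gait representation in one view to the full gait feature in another view, so that probe and gallery can be compared within a single view. It is a minimal sketch under stated assumptions, not the authors' implementation: the use of flattened gait energy images (GEIs), the fixed ROI crop, scikit-learn's MLPRegressor, and all array names and network sizes are illustrative assumptions.

    # Minimal sketch (not the authors' implementation) of a regression-based
    # view transformation model (VTM). Gait features are assumed here to be
    # gait energy images (GEIs); the fixed ROI crop, array names and MLP
    # settings are illustrative assumptions only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def select_roi(geis, rows=slice(20, 100), cols=slice(10, 70)):
        # Crop a (hypothetical) region of interest from each GEI and flatten it.
        return geis[:, rows, cols].reshape(len(geis), -1)

    # Hypothetical training data: GEIs of the same subjects under two views.
    n_subjects, h, w = 100, 128, 88
    rng = np.random.default_rng(0)
    gei_view_a = rng.random((n_subjects, h, w))   # source view
    gei_view_b = rng.random((n_subjects, h, w))   # target view

    # Train the VTM: regress target-view features from source-view ROI features.
    vtm = MLPRegressor(hidden_layer_sizes=(256,), max_iter=300, random_state=0)
    vtm.fit(select_roi(gei_view_a), gei_view_b.reshape(n_subjects, -1))

    # At test time, a probe GEI from view A is normalized into view B,
    # so that gait similarity can be measured within a single view.
    probe_a = rng.random((1, h, w))
    probe_in_view_b = vtm.predict(select_roi(probe_a)).reshape(h, w)

    For the multi-view variant described in the abstract, the regressor input would instead concatenate selected ROI features from several camera views before predicting the target-view feature; this, too, is a presumed arrangement rather than a detail stated in the abstract.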

    Original language: English
    Pages (from-to): 882-889
    Number of pages: 8
    Journal: Pattern Recognition Letters
    Volume: 33
    Issue number: 7
    Publication status: Published - 1 May 2012
