Fully automatic pose-invariant face recognition via 3D pose normalization

Akshay Asthana*, Tim K. Marks, Michael J. Jones, Kinh H. Tieu, M. V. Rohith

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    197 Citations (Scopus)

    Abstract

    An ideal approach to the problem of pose-invariant face recognition would handle continuous pose variations, would not be database specific, and would achieve high accuracy without any manual intervention. Most of the existing approaches fail to meet one or more of these goals. In this paper, we present a fully automatic system for pose-invariant face recognition that not only meets these requirements but also outperforms other comparable methods. We propose a 3D pose normalization method that is completely automatic and leverages the accurate 2D facial feature points found by the system. The current system can handle 3D pose variation up to ±45° in yaw and ±30° in pitch angles. Recognition experiments were conducted on the USF 3D, Multi-PIE, CMU-PIE, FERET, and FacePix databases. Our system not only shows excellent generalization by achieving high accuracy on all five databases but also convincingly outperforms other methods.
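    As a rough illustration of the head-pose estimation step that underlies 3D pose normalization from 2D facial feature points (this is a minimal sketch, not the authors' actual pipeline), the example below recovers yaw, pitch, and roll from a handful of 2D landmarks using a generic 3D mean-face point set, a simple pinhole camera approximation, and OpenCV's solvePnP. The landmark choice, 3D coordinates, and camera model are all assumptions made for the example; only the ±45° yaw / ±30° pitch limits come from the abstract.

import numpy as np
import cv2

# Illustrative 3D reference positions (in mm) for a few facial landmarks on a
# generic mean-face model. These values are placeholders, not from the paper.
MODEL_POINTS_3D = np.array([
    [  0.0,   0.0,   0.0],   # nose tip
    [  0.0, -63.6, -12.5],   # chin
    [-43.3,  32.7, -26.0],   # left eye outer corner
    [ 43.3,  32.7, -26.0],   # right eye outer corner
    [-28.9, -28.9, -24.1],   # left mouth corner
    [ 28.9, -28.9, -24.1],   # right mouth corner
], dtype=np.float64)


def estimate_head_pose(landmarks_2d, image_size):
    """Estimate (yaw, pitch, roll) in degrees from 2D landmarks.

    landmarks_2d: (6, 2) array of pixel coordinates, ordered as MODEL_POINTS_3D.
    image_size:   (height, width) of the input image.
    """
    h, w = image_size
    # Pinhole camera approximation: focal length ~ image width,
    # principal point at the image centre, no lens distortion.
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))

    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS_3D,
                                  np.asarray(landmarks_2d, dtype=np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")

    # Convert the rotation vector to a matrix and decompose into Euler angles
    # (x = pitch, y = yaw, z = roll).
    R, _ = cv2.Rodrigues(rvec)
    sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arctan2(-R[2, 0], sy))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return yaw, pitch, roll


def within_supported_range(yaw, pitch):
    """Check the pose limits quoted in the abstract (±45° yaw, ±30° pitch)."""
    return abs(yaw) <= 45.0 and abs(pitch) <= 30.0

    In a full pose-normalization pipeline, an estimate like this would then drive rendering of a synthetic frontal view of the face before matching; the sketch stops at the pose estimate itself.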

    Original language: English
    Title of host publication: 2011 International Conference on Computer Vision, ICCV 2011
    Pages: 937-944
    Number of pages: 8
    DOIs
    Publication status: Published - 2011
    Event: 2011 IEEE International Conference on Computer Vision, ICCV 2011 - Barcelona, Spain
    Duration: 6 Nov 2011 - 13 Nov 2011

    Publication series

    Name: Proceedings of the IEEE International Conference on Computer Vision

    Conference

    Conference: 2011 IEEE International Conference on Computer Vision, ICCV 2011
    Country/Territory: Spain
    City: Barcelona
    Period: 6/11/11 - 13/11/11
