Monocular and stereo methods for AAM learning from video

Jason Saragih*, Roland Goecke

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding - Conference contribution (peer-reviewed)

    15 Citations (Scopus)

    Abstract

    The active appearance model (AAM) is a powerful method for modeling deformable visual objects. One of its major drawbacks is that it requires a training set of pseudo-dense correspondences over the whole database. In this work, we investigate the utility of stereo constraints for automatic model building from video. First, we propose a new method for automatic correspondence finding in monocular images, based on an adaptive template tracking paradigm. We then extend this method to take the scene geometry into account, proposing three approaches that differ in the geometric information assumed available: none, the fundamental matrix only, or full calibration parameters. The performance of the monocular method was first evaluated on a pre-annotated database of a talking face. We then compared the monocular method against its three stereo extensions using a stereo database.
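    The stereo extensions hinge on epipolar geometry: when the fundamental matrix F is known, corresponding points x_l and x_r in the two views must satisfy x_r^T F x_l = 0. As a minimal illustration (not code from the paper), the NumPy sketch below computes a symmetric epipolar distance that could be used to filter tracked correspondences; the function name and the 2-pixel threshold are illustrative assumptions.

import numpy as np

def epipolar_residual(x_left, x_right, F):
    """Symmetric epipolar distance for putative correspondences.

    x_left, x_right : (N, 2) arrays of pixel coordinates in each view.
    F               : (3, 3) fundamental matrix with the convention
                      x_r^T F x_l = 0 for true correspondences.
    Returns an (N,) array of distances in pixels.
    """
    n = x_left.shape[0]
    ones = np.ones((n, 1))
    xl = np.hstack([x_left, ones])   # homogeneous coordinates, left view
    xr = np.hstack([x_right, ones])  # homogeneous coordinates, right view

    # Epipolar lines induced in each image.
    l_r = xl @ F.T                   # line of each left point in the right image
    l_l = xr @ F                     # line of each right point in the left image

    # Algebraic residual x_r^T F x_l for each pair.
    alg = np.sum(xr * l_r, axis=1)

    # Normalise by the line gradients to obtain a geometric (pixel) distance.
    d2 = alg**2 * (1.0 / (l_r[:, 0]**2 + l_r[:, 1]**2)
                   + 1.0 / (l_l[:, 0]**2 + l_l[:, 1]**2))
    return np.sqrt(d2)

# Hypothetical usage: keep only correspondences within 2 pixels of their
# epipolar lines; F, pts_l, pts_r would come from calibration and tracking.
# inliers = epipolar_residual(pts_l, pts_r, F) < 2.0

    A tracker that also knows the calibration parameters can go further and triangulate each correspondence, but the fundamental-matrix-only test above already rejects matches that violate the two-view geometry.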

    Original language: English
    Title of host publication: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
    DOIs
    Publication status: Published - 2007
    Event: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07 - Minneapolis, MN, United States
    Duration: 17 Jun 2007 - 22 Jun 2007

    Publication series

    Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
    ISSN (Print): 1063-6919

    Conference

    Conference: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
    Country/Territory: United States
    City: Minneapolis, MN
    Period: 17/06/07 - 22/06/07
