Facial performance transfer via deformable models and parametric correspondence

Akshay Asthana*, Miles De La Hunty, Abhinav Dhall, Roland Goecke

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review


    Abstract

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a meaningful transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, the shape and size of their faces, and the illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for cross-language facial performance transfer, which is an area of interest for the movie dubbing industry.
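
    The core idea of the parametric correspondence can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes paired training frames in which both AAMs have been fitted to the same performance, uses randomly generated arrays as stand-ins for real parameter tracks, and uses scikit-learn's Lasso as one concrete choice of sparse linear regressor.

        # Minimal sketch of learning a sparse linear mapping between the
        # parameters of two independent AAMs. Variable names, array shapes,
        # and the choice of scikit-learn's Lasso are illustrative assumptions,
        # not the paper's exact formulation.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)

        # Hypothetical paired training data: each row holds the combined
        # shape-and-appearance parameter vector of an AAM fitted to one frame.
        # src_params come from the source speaker's AAM, tgt_params from the
        # target speaker's AAM fitted to the corresponding frames.
        n_frames, n_src, n_tgt = 500, 40, 40
        src_params = rng.normal(size=(n_frames, n_src))
        tgt_params = rng.normal(size=(n_frames, n_tgt))

        # Sparse linear regression from source to target parameter space;
        # Lasso accepts a multi-output target, fitting each target
        # parameter as an independent sparse linear combination.
        mapper = Lasso(alpha=0.01)
        mapper.fit(src_params, tgt_params)

        # Runtime transfer: track the source face with its AAM, map the
        # resulting parameters, and feed the prediction to the target AAM's
        # synthesis step to render the transferred performance.
        new_src = rng.normal(size=(1, n_src))    # parameters from one tracked frame
        predicted_tgt = mapper.predict(new_src)  # drives the target speaker's AAM

    Because the mapping is learned directly between the two parameter spaces, no pixel-level correspondence between the faces is required at runtime; only the fitted AAM parameters cross from one model to the other.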

    Original language: English
    Article number: 6025350
    Pages (from-to): 1511-1519
    Number of pages: 9
    Journal: IEEE Transactions on Visualization and Computer Graphics
    Volume: 18
    Issue number: 9
    DOIs
    Publication status: Published - 2012
