Empowering Simple Binary Classifiers for Image Set Based Face Recognition

Munawar Hayat*, Salman H. Khan, Mohammed Bennamoun

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    22 Citations (Scopus)


    Face recognition from image sets has numerous real-life applications, including recognition in security and surveillance systems, multi-view camera networks and personal albums. An image set is an unordered collection of images (e.g., video frames, images acquired over long-term observations, and personal albums) which exhibits a wide range of appearance variations. The main focus of previously developed methods has therefore been to find a suitable representation to optimally model these variations. This paper argues that such a representation cannot necessarily encode all of the information contained in the set. The paper therefore proposes a different approach which does not resort to a single representation of an image set. Instead, the images of the set are retained in their original form, and an efficient classification strategy is developed which extends well-known simple binary classifiers to the task of multi-class image set classification. Unlike existing binary-to-multi-class extension strategies, which require multiple binary classifiers to be trained over a large number of images, the proposed approach is efficient since it trains only a few binary classifiers on very few images. Extensive experiments and comparisons with existing methods show that the proposed approach achieves state-of-the-art performance for image set classification based face and object recognition on a number of challenging datasets.
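    To make the abstract's core idea concrete, the sketch below illustrates one generic way simple binary classifiers can be extended to multi-class image set classification: a one-vs-rest linear classifier is trained per gallery class, each query image is scored, and the per-set scores are averaged so the whole set votes for a class. This is only a minimal illustration under assumed conventions (least-squares linear classifiers, a `gallery` dict of per-class feature arrays), not the paper's actual algorithm; all function names are hypothetical.

    ```python
    import numpy as np

    def train_binary_lsq(X_pos, X_neg):
        # Hypothetical helper: fit a linear binary classifier by least squares,
        # solving [X | 1] w ≈ y with labels +1 (positive set) and -1 (negative set).
        X = np.vstack([X_pos, X_neg])
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
        y = np.concatenate([np.ones(len(X_pos)), -np.ones(len(X_neg))])
        w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return w

    def classify_image_set(query_set, gallery):
        # gallery: dict mapping class label -> array of per-image feature vectors.
        # One-vs-rest: train one binary classifier per class (its images vs. all
        # others), score every query image, and let the set vote via the mean score.
        Q = np.hstack([query_set, np.ones((query_set.shape[0], 1))])
        scores = {}
        for label, X_pos in gallery.items():
            X_neg = np.vstack([X for l, X in gallery.items() if l != label])
            w = train_binary_lsq(X_pos, X_neg)
            scores[label] = float(np.mean(Q @ w))
        return max(scores, key=scores.get)
    ```

    On synthetic features where each class forms a tight cluster, a query set drawn from one cluster is assigned to that class; the set-level averaging is what makes the decision robust to individual outlier images.
    
    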

    Original language: English
    Pages (from-to): 479-498
    Number of pages: 20
    Journal: International Journal of Computer Vision
    Issue number: 3
    Publication status: Published - 1 Jul 2017


