Discriminative non-linear stationary subspace analysis for video classification

Mahsa Baktashmotlagh*, Mehrtash Harandi, Brian C. Lovell, Mathieu Salzmann

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    37 Citations (Scopus)

    Abstract

    Low-dimensional representations are key to the success of many video classification algorithms. However, commonly used dimensionality reduction techniques fail to account for the fact that only part of the signal is shared across all the videos in one class. As a consequence, the resulting representations contain instance-specific information, which introduces noise into the classification process. In this paper, we introduce non-linear stationary subspace analysis: a method that overcomes this issue by explicitly separating the stationary parts of the video signal (i.e., the parts shared across all videos in one class) from its non-stationary parts (i.e., the parts specific to individual videos). Our method also encourages the new representation to be discriminative, thus accounting for the underlying classification problem. We demonstrate the effectiveness of our approach on dynamic texture recognition, scene classification and action recognition.
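
    To make the stationary/non-stationary separation concrete, the toy sketch below illustrates the idea for a single class with a simple linear criterion. It is not the authors' kernel-based, discriminative formulation; the synthetic data, the dimensions, and the variance-of-per-video-means criterion are illustrative assumptions only. Directions along which per-video statistics vary least are taken as a crude "stationary" subspace, while the most variable directions capture video-specific information.

    import numpy as np

    # Toy, linear illustration of the stationary-subspace idea (assumptions only,
    # not the paper's method): synthesize several "videos" of one class whose
    # first k feature dimensions are shared (stationary) and whose remaining
    # dimensions carry a per-video mean shift (non-stationary).
    rng = np.random.default_rng(0)
    d, n_videos, n_frames, k = 10, 20, 200, 3

    videos = []
    for v in range(n_videos):
        shared = rng.normal(0.0, 1.0, size=(n_frames, k))              # stationary part
        offset = rng.normal(0.0, 3.0, size=(d - k,))                   # per-video shift
        specific = offset + rng.normal(0.0, 1.0, size=(n_frames, d - k))
        videos.append(np.hstack([shared, specific]))

    # Crude stationarity criterion: scatter of per-video means around the class mean.
    means = np.stack([x.mean(axis=0) for x in videos])                 # (n_videos, d)
    scatter = np.cov(means.T)                                          # (d, d) between-video scatter

    # Eigendirections with the smallest between-video scatter approximate the
    # stationary subspace; the largest-scatter directions are video-specific.
    eigvals, eigvecs = np.linalg.eigh(scatter)                         # ascending eigenvalues
    W_stationary = eigvecs[:, :k]
    W_specific = eigvecs[:, k:]

    print("scatter along recovered stationary directions:", np.round(eigvals[:k], 4))
    print("scatter along the most non-stationary directions:", np.round(eigvals[-3:], 2))

    Projecting the frames of each video onto W_stationary keeps mostly the class-shared signal, which is the representation one would then feed to a classifier; the paper's contribution is to perform this separation non-linearly and to make the retained subspace discriminative across classes.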

    Original language: English
    Article number: 6857376
    Pages (from-to): 2353-2366
    Number of pages: 14
    Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
    Volume: 36
    Issue number: 12
    DOIs
    Publication status: Published - 1 Dec 2014
