Breaking video into pieces for action recognition

Ying Zheng, Hongxun Yao*, Xiaoshuai Sun, Xuesong Jiang, Fatih Porikli

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    3 Citations (Scopus)

    Abstract

    We present a simple yet effective approach for human action recognition. Most existing solutions based on multi-class action classification aim to assign a single class label to the input video. However, the variety and complexity of real-life videos make it very challenging to achieve high classification accuracy. To address this problem, we propose to partition the input video into small clips and formulate action recognition as a joint decision-making task. First, we partition each video into two equal segments, which are processed in the same manner. We repeat this procedure to obtain three layers of video subsegments, organized in a binary tree structure. We train separate classifiers for each layer. By applying the corresponding classifiers to the video subsegments, we obtain a decision value matrix (DVM). Then, we construct an aggregated representation for the original full-length video by integrating the elements of the DVM. Finally, we train a new action recognition classifier on this DVM representation. Extensive experimental evaluations demonstrate that the proposed method achieves significant performance improvements over several competing methods on two benchmark datasets.
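    The partition-and-aggregate pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-layer classifier here is a hypothetical stand-in (a random linear projection of mean frame features) for whatever decision-valued classifiers the paper trains, and the frame features are synthetic.

    ```python
    import numpy as np

    def binary_partition(frames, num_layers=3):
        """Split a frame sequence into a binary tree of subsegments.

        Layer 0 holds the full video; each subsequent layer halves every
        segment of the layer above, so three layers yield 1 + 2 + 4 = 7
        segments in total.
        """
        layers = [[frames]]
        for _ in range(num_layers - 1):
            next_layer = []
            for seg in layers[-1]:
                mid = len(seg) // 2
                next_layer.extend([seg[:mid], seg[mid:]])
            layers.append(next_layer)
        return layers

    def build_dvm(layers, classifiers):
        """Stack per-segment decision values into a decision value matrix.

        classifiers[i] is applied to every segment in layer i and is
        assumed to return one decision value per action class.
        """
        rows = [clf(seg) for layer, clf in zip(layers, classifiers)
                for seg in layer]
        return np.vstack(rows)  # shape: (num_segments, num_classes)

    # Toy stand-in classifier: mean frame feature projected to class scores.
    rng = np.random.default_rng(0)
    num_classes = 5
    W = rng.normal(size=(10, num_classes))
    toy_clf = lambda seg: np.asarray(seg).mean(axis=0) @ W

    video = rng.normal(size=(32, 10))   # 32 frames, 10-dim features each
    layers = binary_partition(video, num_layers=3)
    dvm = build_dvm(layers, [toy_clf] * 3)
    rep = dvm.flatten()                 # aggregated DVM representation
    print(dvm.shape)                    # (7, 5): 1 + 2 + 4 segments
    ```

    The flattened `rep` vector corresponds to the aggregated representation on which the final action recognition classifier would be trained; the paper's actual integration of DVM elements may differ from simple flattening.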

    Original language: English
    Pages (from-to): 22195-22212
    Number of pages: 18
    Journal: Multimedia Tools and Applications
    Volume: 76
    Issue number: 21
    DOIs
    Publication status: Published - 1 Nov 2017
