Recognizing human interaction by multiple features

Zhen Dong*, Yu Kong, Cuiwei Liu, Hongdong Li, Yunde Jia

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    9 Citations (Scopus)

    Abstract

    In this paper, we address the problem of recognizing interactions between two persons in videos. We fuse global and local features to build a more expressive and discriminative action representation; this multi-feature representation is robust to motion ambiguity and partial occlusion in interactions. Moreover, action context information is utilized to capture the interdependencies between the interaction class and the individual action classes of the two persons. We introduce a hierarchical random field model that integrates the large-scale global feature, the local spatio-temporal feature, and action context information into a unified framework. Results on the UT-Interaction dataset show that our method is effective in recognizing human interactions.
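    As a rough illustration of the multi-feature fusion idea described in the abstract (not the paper's actual pipeline), one could concatenate L2-normalized global and local descriptors into a single representation vector. The feature dimensions and inputs below are placeholders:

    ```python
    import numpy as np

    def fuse_features(global_desc, local_hist):
        """Concatenate L2-normalized global and local descriptors
        into one fused action representation (illustrative sketch)."""
        g = global_desc / (np.linalg.norm(global_desc) + 1e-8)
        l = local_hist / (np.linalg.norm(local_hist) + 1e-8)
        return np.concatenate([g, l])

    # Placeholder inputs: a holistic motion descriptor and a
    # bag-of-words histogram of local spatio-temporal features.
    global_desc = np.random.rand(128)
    local_hist = np.random.rand(500)
    rep = fuse_features(global_desc, local_hist)
    print(rep.shape)  # (628,)
    ```

    Normalizing each descriptor before concatenation keeps either feature type from dominating the fused vector purely by scale; the fused representation would then feed a downstream classifier or random field model.
    
    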

    Original language: English
    Title of host publication: 1st Asian Conference on Pattern Recognition, ACPR 2011
    Pages: 77-81
    Number of pages: 5
    Publication status: Published - 2011
    Event: 1st Asian Conference on Pattern Recognition, ACPR 2011 - Beijing, China
    Duration: 28 Nov 2011 - 28 Nov 2011

    Publication series: 1st Asian Conference on Pattern Recognition, ACPR 2011

