Encouraging LSTMs to Anticipate Actions Very Early

Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, Lars Andersson

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    116 Citations (Scopus)


    In contrast to the widely studied problem of recognizing an action given a complete sequence, action anticipation aims to identify the action from only partially observed videos. It is therefore key to the success of computer vision applications that must react as early as possible, such as autonomous navigation. In this paper, we propose a new action anticipation method that achieves high prediction accuracy even when only a very small percentage of a video sequence is available. To this end, we develop a multi-stage LSTM architecture that leverages context-aware and action-aware features, and introduce a novel loss function that encourages the model to predict the correct class as early as possible. Our experiments on standard benchmark datasets evidence the benefits of our approach: we outperform the state-of-the-art action anticipation methods for early prediction by a relative increase in accuracy of 22.0% on JHMDB-21, 14.0% on UT-Interaction, and 49.9% on UCF-101.
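The abstract describes a loss that rewards committing to the correct class early in the sequence. A minimal sketch of that idea, assuming a simple linearly time-weighted per-frame cross-entropy (the weighting schedule and function name here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def anticipation_loss(frame_logits, true_class):
    """Time-weighted cross-entropy over per-frame predictions.

    Illustrative sketch only: the linear weighting schedule is an
    assumption, not the loss proposed in the paper.
    frame_logits: (T, C) array of per-timestep class scores.
    true_class: int, ground-truth action label.
    """
    T, C = frame_logits.shape
    # Numerically stable softmax per timestep.
    z = frame_logits - frame_logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Weight each timestep's loss by t/T: later frames, which have seen
    # more of the video, are penalized more for wrong predictions, while
    # early frames still contribute -- pushing the model to predict the
    # correct class as soon as the evidence allows.
    weights = np.arange(1, T + 1) / T
    nll = -np.log(probs[:, true_class] + 1e-12)
    return float((weights * nll).sum() / weights.sum())
```

Under this sketch, a model that is confidently correct from the first frames incurs a much smaller loss than one that is only correct near the end of the sequence.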

    Original language: English
    Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Number of pages: 10
    ISBN (Electronic): 9781538610329
    Publication status: Published - 22 Dec 2017
    Event: 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy
    Duration: 22 Oct 2017 - 29 Oct 2017

    Publication series

    Name: Proceedings of the IEEE International Conference on Computer Vision
    ISSN (Print): 1550-5499


    Conference: 16th IEEE International Conference on Computer Vision, ICCV 2017

