Video saliency detection via sparsity-based reconstruction and propagation

Runmin Cong, Jianjun Lei*, Huazhu Fu, Fatih Porikli, Qingming Huang, Chunping Hou

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    85 Citations (Scopus)

    Abstract

    Video saliency detection aims to continuously discover motion-related salient objects in video sequences. Because spatial and temporal constraints must be considered jointly, video saliency detection is more challenging than image saliency detection. In this paper, we propose a new method for detecting salient objects in videos based on sparse reconstruction and propagation. With the assistance of novel static and motion priors, a single-frame saliency model is first designed to represent the spatial saliency of each individual frame via sparsity-based reconstruction. Then, through progressive sparsity-based propagation, the sequential correspondence in the temporal space is captured to produce an inter-frame saliency map. Finally, these two maps are incorporated into a global optimization model to achieve spatio-temporal smoothness and global consistency of the salient object across the whole video. Experiments on three large-scale video saliency datasets demonstrate that the proposed method outperforms state-of-the-art algorithms both qualitatively and quantitatively.
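    The core idea of sparsity-based reconstruction for saliency can be illustrated in isolation: regions whose features are poorly reconstructed by a sparse combination of assumed-background atoms receive high saliency. The sketch below is not the authors' exact formulation (their feature design, static/motion priors, propagation, and global optimization are omitted); it only demonstrates the reconstruction-error principle, with a toy orthogonal-matching-pursuit solver and hypothetical region features.

    ```python
    import numpy as np

    def omp(D, x, k):
        """Greedy orthogonal matching pursuit: approximate x with at most k atoms of D."""
        residual = x.copy()
        idx = []
        for _ in range(k):
            # pick the atom most correlated with the current residual
            j = int(np.argmax(np.abs(D.T @ residual)))
            if j not in idx:
                idx.append(j)
            # re-fit coefficients over all selected atoms, update residual
            coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
            residual = x - D[:, idx] @ coef
        return residual

    def saliency_by_reconstruction(features, bg_idx, k=3):
        """features: (n_regions, d) region descriptors; bg_idx: indices of
        regions assumed to be background (e.g., image-boundary regions).
        Saliency = norm of the sparse reconstruction residual, normalized to [0, 1]."""
        D = features[bg_idx].T  # dictionary columns are background features
        sal = np.array([np.linalg.norm(omp(D, f, k)) for f in features])
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

    # Toy example: eight near-identical "background" regions and one outlier.
    rng = np.random.default_rng(0)
    bg = rng.normal(0.0, 0.05, (8, 4)) + np.array([1.0, 0.0, 0.0, 0.0])
    fg = np.array([[0.0, 1.0, 1.0, 0.0]])  # poorly explained by the background dictionary
    sal = saliency_by_reconstruction(np.vstack([bg, fg]), list(range(8)), k=2)
    ```

    In this toy setup the outlier region attains the largest reconstruction error and hence the highest saliency score; in the paper the same principle operates on per-frame region features and is then refined temporally.
    
    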

    Original language: English
    Article number: 8704996
    Pages (from-to): 4819-4831
    Number of pages: 13
    Journal: IEEE Transactions on Image Processing
    Volume: 28
    Issue number: 10
    DOIs
    Publication status: Published - Oct 2019
