Saliency-Aware Video Object Segmentation

Wenguan Wang, Jianbing Shen*, Ruigang Yang, Fatih Porikli

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    442 Citations (Scopus)

    Abstract

    Video saliency, which aims to estimate a single dominant object in a sequence, offers strong object-level cues for unsupervised video object segmentation. In this paper, we present a geodesic distance based technique that provides reliable and temporally consistent saliency measurement of superpixels as a prior for pixel-wise labeling. Using undirected intra-frame and inter-frame graphs constructed from spatiotemporal edges of appearance and motion, and a skeleton abstraction step to further enhance saliency estimates, our method formulates the pixel-wise segmentation task as an energy minimization problem on a function that consists of unary terms of global foreground and background models, dynamic location models, and pairwise terms of label smoothness potentials. We perform extensive quantitative and qualitative experiments on benchmark datasets. Our method achieves superior performance in comparison to the current state-of-the-art in terms of accuracy and speed.
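    The core saliency prior rests on geodesic distance over a superpixel graph: superpixels far (in accumulated appearance/motion dissimilarity) from the assumed-background frame boundary are more likely foreground. The sketch below illustrates that idea only; the function name, graph encoding, and edge weights are hypothetical illustrations, not the paper's implementation, which additionally uses inter-frame graphs and skeleton abstraction.

    ```python
    import heapq

    def geodesic_saliency(n_nodes, edges, boundary):
        """Hypothetical sketch: saliency of each superpixel as its geodesic
        distance to frame-boundary (assumed background) superpixels.

        edges: list of (u, v, w) pairs, w being an appearance/motion
               dissimilarity between adjacent superpixels.
        boundary: indices of superpixels touching the image border.
        Returns a list of geodesic distances; larger means more salient.
        """
        # Build an undirected adjacency list (intra-frame graph).
        adj = [[] for _ in range(n_nodes)]
        for u, v, w in edges:
            adj[u].append((v, w))
            adj[v].append((u, w))

        # Multi-source Dijkstra seeded from all boundary superpixels.
        dist = [float("inf")] * n_nodes
        heap = []
        for b in boundary:
            dist[b] = 0.0
            heap.append((0.0, b))
        heapq.heapify(heap)
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue  # stale entry
            for v, w in adj[u]:
                nd = d + w
                if nd < dist[v]:
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist
    ```

    On a toy 4-superpixel graph, a node with a large direct dissimilarity to the border can still receive a small geodesic distance if a cheap path through similar neighbors exists, which is what makes the measure robust to local contrast noise.
    
    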

    Original language: English
    Pages (from-to): 20-33
    Number of pages: 14
    Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
    Volume: 40
    Issue number: 1
    Publication status: Published - Jan 2018
