TY - JOUR
T1 - Video saliency detection via sparsity-based reconstruction and propagation
AU - Cong, Runmin
AU - Lei, Jianjun
AU - Fu, Huazhu
AU - Porikli, Fatih
AU - Huang, Qingming
AU - Hou, Chunping
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - Video saliency detection aims to continuously discover motion-related salient objects in video sequences. Since it must consider spatial and temporal constraints jointly, video saliency detection is more challenging than image saliency detection. In this paper, we propose a new method to detect salient objects in videos based on sparse reconstruction and propagation. With the assistance of novel static and motion priors, a single-frame saliency model is first designed to represent the spatial saliency of each individual frame via sparsity-based reconstruction. Then, through a progressive sparsity-based propagation, the sequential correspondence in the temporal space is captured to produce the inter-frame saliency map. Finally, these two maps are incorporated into a global optimization model to achieve spatio-temporal smoothness and global consistency of the salient object across the whole video. Experiments on three large-scale video saliency datasets demonstrate that the proposed method outperforms state-of-the-art algorithms both qualitatively and quantitatively.
AB - Video saliency detection aims to continuously discover motion-related salient objects in video sequences. Since it must consider spatial and temporal constraints jointly, video saliency detection is more challenging than image saliency detection. In this paper, we propose a new method to detect salient objects in videos based on sparse reconstruction and propagation. With the assistance of novel static and motion priors, a single-frame saliency model is first designed to represent the spatial saliency of each individual frame via sparsity-based reconstruction. Then, through a progressive sparsity-based propagation, the sequential correspondence in the temporal space is captured to produce the inter-frame saliency map. Finally, these two maps are incorporated into a global optimization model to achieve spatio-temporal smoothness and global consistency of the salient object across the whole video. Experiments on three large-scale video saliency datasets demonstrate that the proposed method outperforms state-of-the-art algorithms both qualitatively and quantitatively.
KW - Video saliency detection
KW - color and motion prior
KW - forward-backward propagation
KW - global optimization
KW - sparse reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85070442127&partnerID=8YFLogxK
U2 - 10.1109/TIP.2019.2910377
DO - 10.1109/TIP.2019.2910377
M3 - Article
SN - 1057-7149
VL - 28
SP - 4819
EP - 4831
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
IS - 10
M1 - 8704996
ER -