TY - JOUR
T1 - Edge Preserving and Multi-Scale Contextual Neural Network for Salient Object Detection
AU - Wang, Xiang
AU - Ma, Huimin
AU - Chen, Xiaozhi
AU - You, Shaodi
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2018/1
Y1 - 2018/1
N2 - In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects, since they process each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these observations, we first propose an end-to-end edge-preserving neural network based on the Fast R-CNN framework (named RegionNet) to efficiently generate saliency maps with sharp object boundaries. To further improve it, multi-scale spatial context is attached to RegionNet to model the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection via depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves optimized performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
AB - In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects, since they process each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these observations, we first propose an end-to-end edge-preserving neural network based on the Fast R-CNN framework (named RegionNet) to efficiently generate saliency maps with sharp object boundaries. To further improve it, multi-scale spatial context is attached to RegionNet to model the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection via depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves optimized performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
KW - RGB-D saliency detection
KW - salient object detection
KW - edge preserving
KW - multi-scale context
KW - object mask
UR - http://www.scopus.com/inward/record.url?scp=85030671350&partnerID=8YFLogxK
U2 - 10.1109/TIP.2017.2756825
DO - 10.1109/TIP.2017.2756825
M3 - Article
SN - 1057-7149
VL - 27
SP - 121
EP - 134
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
IS - 1
M1 - 8049485
ER -