TY - GEN
T1 - DSD
T2 - 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017
AU - Feng, David
AU - Barnes, Nick
AU - You, Shaodi
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/1
Y1 - 2017/7/1
N2 - Structural edge detection is the task of finding edges between significant surfaces in a scene. It can underpin many computer vision tasks, such as sketch recognition and 3D scene understanding, and is important for conveying scene structure for navigation with assistive vision. Identifying structural edges from a depth image can be challenging because the surface structure that differentiates edges is not well represented in this format. We derive a depth input encoding, the Depth Surface Descriptor (DSD), that captures the first-order properties of surfaces, allowing for improved classification of the surface geometry that corresponds to structural edges. We apply the DSD feature to salient edge detection on RGB-D images using a fully convolutional neural network with deep supervision. We evaluate our method on both a new RGB-D dataset containing prosthetic vision scenarios and the SUN RGB-D dataset, and show that our approach improves performance over existing methods by 4%.
UR - http://www.scopus.com/inward/record.url?scp=85046266292&partnerID=8YFLogxK
U2 - 10.1109/ICCVW.2017.181
DO - 10.1109/ICCVW.2017.181
M3 - Conference contribution
T3 - Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017
SP - 1536
EP - 1544
BT - Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 October 2017 through 29 October 2017
ER -