TY - GEN
T1 - Bringing Background into the Foreground
T2 - 16th IEEE International Conference on Computer Vision, ICCV 2017
AU - Saleh, Fatemeh Sadat
AU - Aliakbarian, Mohammad Sadegh
AU - Salzmann, Mathieu
AU - Petersson, Lars
AU - Alvarez, Jose M.
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/12/22
Y1 - 2017/12/22
N2 - Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recent years have seen great progress in weakly-supervised semantic segmentation, whether from a single image or from videos. However, most existing methods are designed to handle a single background class. In practical applications, such as autonomous navigation, it is often crucial to reason about multiple background classes. In this paper, we introduce an approach to doing so by making use of classifier heatmaps. We then develop a two-stream deep architecture that jointly leverages appearance and motion, and design a loss based on our heatmaps to train it. Our experiments demonstrate the benefits of our classifier heatmaps and of our two-stream architecture on challenging urban scene datasets and on the YouTube-Objects benchmark, where we obtain state-of-the-art results.
AB - Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recent years have seen great progress in weakly-supervised semantic segmentation, whether from a single image or from videos. However, most existing methods are designed to handle a single background class. In practical applications, such as autonomous navigation, it is often crucial to reason about multiple background classes. In this paper, we introduce an approach to doing so by making use of classifier heatmaps. We then develop a two-stream deep architecture that jointly leverages appearance and motion, and design a loss based on our heatmaps to train it. Our experiments demonstrate the benefits of our classifier heatmaps and of our two-stream architecture on challenging urban scene datasets and on the YouTube-Objects benchmark, where we obtain state-of-the-art results.
UR - http://www.scopus.com/inward/record.url?scp=85041893358&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2017.232
DO - 10.1109/ICCV.2017.232
M3 - Conference contribution
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 2125
EP - 2135
BT - Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 October 2017 through 29 October 2017
ER -