TY - GEN
T1 - Efficient transductive semantic segmentation
AU - Alvarez, Jose M.
AU - Salzmann, Mathieu
AU - Barnes, Nick
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/5/23
Y1 - 2016/5/23
N2 - Semantically describing the contents of images is one of the classical problems of computer vision. With huge numbers of images being made available daily, there is increasing interest in methods for semantic pixel labeling that exploit large image sets. Graph transduction provides a framework for the flexible inclusion of labeled data that can be exploited in the classification of unlabeled samples without requiring a trained classifier. Unfortunately, current approaches lack the scalability to tackle the joint segmentation of large image sets. Here we introduce an efficient, flexible graph transduction approach to semantic segmentation that allows simple and efficient leveraging of large image sets without requiring separate computation of unary potentials, or a trained classifier. We demonstrate that this technique can handle far larger graphs than previous methods, and that results continue to improve as more labeled images are made available. Furthermore, we show that the method is able to benefit from dense or sparse unary labels when they are available.
AB - Semantically describing the contents of images is one of the classical problems of computer vision. With huge numbers of images being made available daily, there is increasing interest in methods for semantic pixel labeling that exploit large image sets. Graph transduction provides a framework for the flexible inclusion of labeled data that can be exploited in the classification of unlabeled samples without requiring a trained classifier. Unfortunately, current approaches lack the scalability to tackle the joint segmentation of large image sets. Here we introduce an efficient, flexible graph transduction approach to semantic segmentation that allows simple and efficient leveraging of large image sets without requiring separate computation of unary potentials, or a trained classifier. We demonstrate that this technique can handle far larger graphs than previous methods, and that results continue to improve as more labeled images are made available. Furthermore, we show that the method is able to benefit from dense or sparse unary labels when they are available.
UR - http://www.scopus.com/inward/record.url?scp=84977640540&partnerID=8YFLogxK
U2 - 10.1109/WACV.2016.7477697
DO - 10.1109/WACV.2016.7477697
M3 - Conference contribution
T3 - 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016
BT - 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - IEEE Winter Conference on Applications of Computer Vision, WACV 2016
Y2 - 7 March 2016 through 10 March 2016
ER -