TY - GEN
T1 - K-NN boosting prototype learning for object classification
AU - Piro, Paolo
AU - Barlaud, Michel
AU - Nock, Richard
AU - Nielsen, Frank
PY - 2013
Y1 - 2013
N2 - Image classification is a challenging task in computer vision. For example, fully understanding real-world images may involve both scene and object recognition. Many approaches have been proposed to extract meaningful descriptors from images and classify them in a supervised learning framework. In this chapter, we revisit the classic k-nearest neighbors (k-NN) classification rule, which has been shown to be very effective when dealing with local image descriptors. However, k-NN still suffers from major drawbacks, mainly due to the uniform voting among the nearest prototypes in the feature space. We therefore propose a generalization of the classic k-NN rule in a supervised learning (boosting) framework. Namely, we redefine the voting rule as a strong classifier that linearly combines predictions from the k closest prototypes. In order to induce this classifier, we propose a novel learning algorithm, MLNN (Multiclass Leveraged Nearest Neighbors), which provides a simple and very efficient procedure for prototype selection. We tested our method first on object classification using 12 categories of objects, and then on scene recognition using 15 real-world categories. Experiments show a significant improvement over classic k-NN in terms of classification performance.
KW - Boosting
KW - Object recognition
KW - Scene categorization
KW - k-NN classification
UR - http://www.scopus.com/inward/record.url?scp=84865556399&partnerID=8YFLogxK
U2 - 10.1007/978-1-4614-3831-1_3
DO - 10.1007/978-1-4614-3831-1_3
M3 - Conference contribution
SN - 9781461438304
T3 - Lecture Notes in Electrical Engineering
SP - 37
EP - 53
BT - Analysis, Retrieval and Delivery of Multimedia Content
T2 - 11th International Workshop on Image Analysis for Multimedia Interactive Services
Y2 - 12 April 2010 through 14 April 2010
ER -
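
The sketch below is not from the chapter; it only illustrates the leveraged voting rule the abstract describes, under assumed details: a squared-Euclidean metric, per-prototype leveraging coefficients `alphas` (produced by some boosting procedure not shown), and each prototype voting for its own class. Uniform coefficients recover the classic k-NN rule, and coefficients driven to zero silence their prototypes, which is how leveraging can double as prototype selection.

```python
import numpy as np

def leveraged_knn_predict(x, prototypes, labels, alphas, k):
    """Classify x by linearly combining the votes of its k nearest prototypes.

    prototypes : (n, d) array of prototype feature vectors
    labels     : (n,) integer class labels of the prototypes
    alphas     : (n,) leveraging coefficients (assumed learned by boosting;
                 setting them all equal recovers the classic k-NN rule)
    """
    # Squared-Euclidean distances to all prototypes (the metric is an assumption).
    dists = np.sum((prototypes - x) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]

    # Accumulate leveraged votes per class instead of uniform counts;
    # prototypes with alpha == 0 contribute nothing (prototype selection).
    scores = np.zeros(labels.max() + 1)
    for j in nearest:
        scores[labels[j]] += alphas[j]
    return int(np.argmax(scores))

# Toy usage with hypothetical descriptors and stand-in coefficients.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 8))      # 100 prototype descriptors, 8-dimensional
y = rng.integers(0, 3, size=100)   # 3 classes
a = rng.uniform(size=100)          # stand-in leveraging coefficients
print(leveraged_knn_predict(P[0], P, y, a, k=7))
```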