TY - GEN
T1 - Boosting Bayesian MAP classification
AU - Piro, Paolo
AU - Nock, Richard
AU - Nielsen, Frank
AU - Barlaud, Michel
PY - 2010
Y1 - 2010
AB - In this paper we redefine and generalize the classic k-nearest neighbors (k-NN) voting rule in a Bayesian maximum-a-posteriori (MAP) framework. To this end, annotated examples are used to estimate pointwise class probabilities in the feature space, giving rise to a new instance-based classification rule. Specifically, we propose to "boost" the classic k-NN rule by inducing a strong classifier from a sparse subset of the training data, called "prototypes". To learn these prototypes, our MAPBOOST algorithm globally minimizes a multiclass exponential risk defined over the training data, which depends on the class probabilities estimated at the sample points themselves. We tested our method for image categorization on three benchmark databases. Experimental results show that MAPBOOST significantly outperforms the classic k-NN rule (by up to 8%). Interestingly, thanks to the supervised selection of sparse prototypes and the multiclass classification framework, this accuracy improvement comes with a considerable reduction in computational cost.
UR - http://www.scopus.com/inward/record.url?scp=78149482583&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2010.167
DO - 10.1109/ICPR.2010.167
M3 - Conference contribution
SN - 9780769541099
T3 - Proceedings - International Conference on Pattern Recognition
SP - 661
EP - 665
BT - Proceedings - 2010 20th International Conference on Pattern Recognition, ICPR 2010
T2 - 2010 20th International Conference on Pattern Recognition, ICPR 2010
Y2 - 23 August 2010 through 26 August 2010
ER -
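
Note: the abstract describes a Bayesian MAP reading of k-NN, in which the class posterior p(c | x) is estimated pointwise from annotated neighbors and the predicted class is the MAP one. The following is a minimal Python sketch of that baseline MAP k-NN rule only; it is not the authors' MAPBOOST algorithm, and the prototype learning step (minimizing the multiclass exponential risk) is omitted. All names here (knn_map_classify, etc.) are illustrative, not from the paper.

    # Minimal sketch of the MAP view of k-NN: estimate p(c | x) as the
    # fraction of x's k nearest annotated examples labeled c, then
    # predict argmax_c p(c | x). Not the paper's MAPBOOST algorithm.
    import numpy as np

    def knn_map_classify(X_train, y_train, x, k=5):
        """Return the MAP class of query point x under the k-NN
        pointwise class-probability estimate."""
        dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
        nn = np.argsort(dists)[:k]                    # indices of k nearest
        classes, counts = np.unique(y_train[nn], return_counts=True)
        posteriors = counts / k                       # estimated p(c | x)
        return classes[np.argmax(posteriors)]

    # Toy usage on two Gaussian blobs (hypothetical data).
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (20, 2)),
                   rng.normal(3.0, 1.0, (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    print(knn_map_classify(X, y, np.array([2.5, 2.5]), k=5))  # likely 1

In MAPBOOST, as the abstract states, the training set above would be replaced by a learned sparse set of prototypes, which is what yields both the accuracy gain and the reduced classification cost.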