TY - JOUR
T1 - A kernel-induced space selection approach to model selection in KLDA
AU - Wang, Lei
AU - Chan, Kap Luk
AU - Xue, Ping
AU - Zhou, Luping
PY - 2008
Y1 - 2008
AB - Model selection in kernel linear discriminant analysis (KLDA) refers to the selection of appropriate parameters of a kernel function and the regularizer. By following the principle of maximum information preservation, this paper formulates the model selection problem as a problem of selecting an optimal kernel-induced space in which different classes are maximally separated from each other. A scatter-matrix-based criterion is developed to measure the "goodness" of a kernel-induced space, and the kernel parameters are tuned by maximizing this criterion. This criterion is computationally efficient and is differentiable with respect to the kernel parameters. Compared with leave-one-out (LOO) or k-fold cross-validation (CV), the proposed approach achieves faster model selection, especially when the number of training samples is large or when many kernel parameters need to be tuned. To tune the regularization parameter in KLDA, our criterion is used together with the method proposed by Saadi et al. (2004). Experiments on benchmark data sets verify the effectiveness of this model selection approach.
KW - Kernel linear discriminant analysis (KLDA)
KW - Kernel parameter tuning
KW - Kernel-induced space selection
KW - Model selection
UR - http://www.scopus.com/inward/record.url?scp=57749100079&partnerID=8YFLogxK
U2 - 10.1109/TNN.2008.2005140
DO - 10.1109/TNN.2008.2005140
M3 - Article
SN - 1045-9227
VL - 19
SP - 2116
EP - 2131
JO - IEEE Transactions on Neural Networks
JF - IEEE Transactions on Neural Networks
IS - 12
ER -