A Kernel-induced space selection approach to model selection in KLDA

Lei Wang*, Kap Luk Chan, Ping Xue, Luping Zhou

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    26 Citations (Scopus)

    Abstract

    Model selection in kernel linear discriminant analysis (KLDA) refers to the selection of appropriate parameters of a kernel function and the regularizer. By following the principle of maximum information preservation, this paper formulates the model selection problem as a problem of selecting an optimal kernel-induced space in which different classes are maximally separated from each other. A scatter-matrix-based criterion is developed to measure the "goodness" of a kernel-induced space, and the kernel parameters are tuned by maximizing this criterion. This criterion is computationally efficient and is differentiable with respect to the kernel parameters. Compared with leave-one-out (LOO) or k-fold cross-validation (CV), the proposed approach achieves faster model selection, especially when the number of training samples is large or when many kernel parameters need to be tuned. To tune the regularization parameter in KLDA, our criterion is used together with the method proposed by Saadi et al. (2004). Experiments on benchmark data sets verify the effectiveness of this model selection approach.
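
    The abstract does not reproduce the criterion itself, so the sketch below is illustrative only: it computes a generic scatter-trace ratio tr(S_b)/tr(S_w) in the kernel-induced space directly from the Gram matrix (no explicit feature map is needed), and tunes an RBF kernel width by grid search rather than the gradient-based maximization the paper uses. The function names (rbf_kernel, scatter_trace_criterion, select_gamma) and the specific ratio form are assumptions for illustration, not the authors' formulation.

    import numpy as np

    def rbf_kernel(X, gamma):
        # RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
        sq = np.sum(X ** 2, axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X @ X.T), 0.0)
        return np.exp(-gamma * d2)

    def scatter_trace_criterion(K, y):
        # Generic stand-in criterion: tr(S_b) / tr(S_w) in the
        # kernel-induced space, computed from the Gram matrix alone.
        n = K.shape[0]
        # tr(S_t) = sum_i k(x_i, x_i) - (1/n) * sum_{i,j} k(x_i, x_j)
        tr_total = np.trace(K) - K.sum() / n
        # tr(S_w): same quantity accumulated within each class.
        tr_within = 0.0
        for c in np.unique(y):
            idx = np.flatnonzero(y == c)
            Kc = K[np.ix_(idx, idx)]
            tr_within += np.trace(Kc) - Kc.sum() / idx.size
        tr_between = tr_total - tr_within  # S_t = S_b + S_w
        return tr_between / max(tr_within, 1e-12)

    def select_gamma(X, y, gammas):
        # Grid search stand-in for the paper's gradient-based tuning,
        # possible here because the criterion is cheap to evaluate.
        scores = [scatter_trace_criterion(rbf_kernel(X, g), y) for g in gammas]
        return gammas[int(np.argmax(scores))]

    # Toy usage: pick gamma from a logarithmic grid for two Gaussian classes.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
    y = np.array([0] * 50 + [1] * 50)
    best_gamma = select_gamma(X, y, np.logspace(-3, 2, 30))

    Because the criterion depends on the kernel only through the Gram matrix, each candidate parameter costs one kernel evaluation pass, which is the source of the speed advantage over LOO or k-fold CV noted in the abstract.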

    Original language: English
    Pages (from-to): 2116-2131
    Number of pages: 16
    Journal: IEEE Transactions on Neural Networks
    Volume: 19
    Issue number: 12
    Publication status: Published - 2008
