Abstract
We present a framework for efficient extrapolation of reduced rank approximations, graph kernels, and locally linear embeddings (LLE) to unseen data. We also present a principled method to combine many of these kernels and then extrapolate them. Central to our method is a theorem for matrix approximation, and an extension of the representer theorem to handle multiple joint regularization constraints. Experiments in protein classification demonstrate the feasibility of our approach.
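The abstract's central idea, extending a reduced-rank kernel approximation to unseen data, is commonly realized with a Nyström-style extension. Below is a minimal sketch of that general technique, not the paper's specific method; the RBF kernel, the sizes, and all variable names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel between the rows of A and B -- an
    # illustrative choice, not the kernel used in the paper.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 3))  # seen (training) points
X_new = rng.normal(size=(5, 3))     # unseen points

# Reduced-rank approximation of the training kernel matrix:
# keep only the top-r eigenpairs.
K = rbf_kernel(X_train, X_train)
eigvals, eigvecs = np.linalg.eigh(K)
r = 4
vals, vecs = eigvals[-r:], eigvecs[:, -r:]

# Nystrom extension: extrapolate the r leading eigenfunctions
# to the unseen points via the cross-kernel.
K_cross = rbf_kernel(X_new, X_train)      # shape (5, 20)
phi_new = (K_cross @ vecs) / vals         # shape (5, r)

# Extrapolated rank-r kernel between the unseen points.
K_new_approx = (phi_new * vals) @ phi_new.T
```

The same extension applies to any positive semidefinite kernel evaluated between seen and unseen points; only the cross-kernel computation changes.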
| Original language | English |
|---|---|
| Pages (from-to) | 721-729 |
| Number of pages | 9 |
| Journal | Neurocomputing |
| Volume | 69 |
| Issue number | 7-9 (Special Issue) |
| DOIs | |
| Publication status | Published - Mar 2006 |