The loss rank principle for model selection

Marcus Hutter*

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    4 Citations (Scopus)

    Abstract

    A key issue in statistics and machine learning is to automatically select the "right" model complexity, e.g. the number of neighbors to average over in k-nearest-neighbor (kNN) regression or the degree in polynomial regression. We suggest a novel principle, the Loss Rank Principle (LoRP), for model selection in regression and classification. It is based on the loss rank, which counts how many other (fictitious) data sets would be fitted better, and LoRP selects the model with minimal loss rank. Unlike most penalized maximum likelihood variants (AIC, BIC, MDL), LoRP depends only on the regression functions and the loss function. It works without a stochastic noise model and is directly applicable to any non-parametric regressor, such as kNN.
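    As a rough illustration of the principle (a hypothetical sketch, not code from the paper), the following Python example applies the loss rank to choosing k in kNN regression. For a regressor that is linear in the observed targets, y_hat = M y, with quadratic loss, the log loss rank reduces (up to an additive constant) to (n/2) ln(Loss) - (1/2) ln det(S) with S = (I - M)^T (I - M); the smoother-matrix construction, the small regulariser alpha, and the synthetic data below are assumptions made for this sketch, not details taken from the abstract.

        import numpy as np

        def knn_smoother_matrix(x, k):
            # Smoother matrix M of kNN regression on 1-D inputs x,
            # so that the fitted values are y_hat = M @ y.
            n = len(x)
            M = np.zeros((n, n))
            for i in range(n):
                neighbors = np.argsort(np.abs(x - x[i]))[:k]  # k nearest inputs (incl. x_i itself)
                M[i, neighbors] = 1.0 / k
            return M

        def loss_rank(M, y, alpha=1e-3):
            # Log loss rank of a linear regressor y_hat = M y under quadratic loss,
            # up to an additive constant shared by all models.
            # alpha is a small regulariser keeping S non-singular (an assumption of
            # this sketch; the paper treats regularisation more carefully).
            n = len(y)
            S = (np.eye(n) - M).T @ (np.eye(n) - M) + alpha * np.eye(n)
            empirical_loss = y @ S @ y
            _, logdet = np.linalg.slogdet(S)
            return 0.5 * n * np.log(empirical_loss) - 0.5 * logdet

        # Select the k with minimal loss rank on synthetic data.
        rng = np.random.default_rng(0)
        x = np.sort(rng.uniform(0.0, 1.0, 50))
        y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=50)
        ranks = {k: loss_rank(knn_smoother_matrix(x, k), y) for k in range(1, 21)}
        print("selected k:", min(ranks, key=ranks.get))

    Note that no stochastic noise model enters anywhere: only the regressor (through M) and the quadratic loss are used, which is the point of the principle.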

    Original language: English
    Title of host publication: Learning Theory - 20th Annual Conference on Learning Theory, COLT 2007, Proceedings
    Publisher: Springer Verlag
    Pages: 589-603
    Number of pages: 15
    ISBN (Print): 9783540729259
    DOIs
    Publication status: Published - 2007
    Event: 20th Annual Conference on Learning Theory, COLT 2007 - San Diego, CA, United States
    Duration: 13 Jun 2007 – 15 Jun 2007

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 4539 LNAI
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Conference

    Conference: 20th Annual Conference on Learning Theory, COLT 2007
    Country/Territory: United States
    City: San Diego, CA
    Period: 13/06/07 – 15/06/07
