Random subclass bounds

Shahar Mendelson*, Petra Philips

*Corresponding author for this work

    Research output: Contribution to journal › Conference article › Peer-review


    Abstract

    It has recently been shown that sharp generalization bounds can be obtained when the function class from which the algorithm chooses its hypotheses is "small" in the sense that the Rademacher averages of this function class are small. Seemingly based on different arguments, generalization bounds were obtained in the compression scheme, luckiness, and algorithmic luckiness frameworks, in which the "size" of the function class is not specified a priori. We show that the bounds obtained in all these frameworks follow from the same general principle, namely that coordinate projections of this function subclass evaluated on random samples are "small" with high probability.
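
    The notion underlying the abstract can be made concrete: the coordinate projection of a class F on a sample x_1, ..., x_n is the set of vectors (f(x_1), ..., f(x_n)) for f in F, and its "size" can be gauged by the empirical Rademacher average. The following is a minimal Python sketch, not taken from the paper, assuming a toy finite class of one-dimensional threshold functions and an illustrative sample size and number of Monte Carlo draws.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical toy class (not from the paper): 1-D threshold functions
        # f_t(x) = sign(x - t), parameterized by a grid of thresholds t.
        thresholds = np.linspace(-1.0, 1.0, 21)
        x = rng.uniform(-1.0, 1.0, size=50)   # random sample x_1, ..., x_n
        n = len(x)

        # Coordinate projection of the class on the sample:
        # each row is the vector (f(x_1), ..., f(x_n)) for one threshold t.
        projection = np.sign(x[None, :] - thresholds[:, None])

        # Monte Carlo estimate of the empirical Rademacher average
        #   R_hat = E_sigma[ sup_{f in F} (1/n) * sum_i sigma_i f(x_i) ],
        # where the sigma_i are i.i.d. uniform on {-1, +1}.
        n_draws = 2000
        sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
        sup_per_draw = (sigma @ projection.T / n).max(axis=1)
        print("estimated empirical Rademacher average:", sup_per_draw.mean())

    A small estimate indicates that the projected class is "small" on this sample, which is the sense of "small with high probability" referred to in the abstract.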

    Original language: English
    Pages (from-to): 329-343
    Number of pages: 15
    Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 2777
    DOIs
    Publication status: Published - 2003
    Event: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003 - Washington, DC, United States
    Duration: 24 Aug 2003 - 27 Aug 2003
