Sharper lower bounds on the performance of the empirical risk minimization algorithm

Guillaume Lecué*, Shahar Mendelson

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

    9 Citations (Scopus)

    Abstract

We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under some geometric assumptions relating the target function T and the learning class F, the excess risk of the empirical risk minimization algorithm is lower bounded by E sup_{q∈Q} G_q · δ/√n, where (G_q)_{q∈Q} is a canonical Gaussian process associated with Q (a well-chosen subset of F) and δ is a parameter governing the oscillations of the empirical excess risk function over a small ball in F.
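To make the quantity in the bound concrete, the sketch below Monte-Carlo-estimates E sup_{q∈Q} G_q for a hypothetical finite class Q of linear functionals and plugs it into the δ/√n scaling. The class, the oscillation parameter δ, and the sample size n are illustrative assumptions for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy class (not from the paper): Q consists of M linear
# functionals q_j(x) = <theta_j, x> on R^d with X ~ N(0, I_d), so the
# canonical Gaussian process (G_q)_{q in Q} has Cov(G_j, G_k) = <theta_j, theta_k>.
d, M = 5, 20
thetas = rng.normal(size=(M, d))

# Realize the process exactly as G_j = <theta_j, g> with g ~ N(0, I_d)
# and Monte-Carlo-estimate E sup_{q in Q} G_q.
g = rng.normal(size=(20_000, d))
G = g @ thetas.T                      # row i is one draw of the process over Q
exp_sup = G.max(axis=1).mean()

# Plug into the scaling suggested by the abstract (up to absolute constants),
# with illustrative values for the oscillation parameter delta and sample size n.
delta, n = 0.1, 1000
print(f"E sup_q G_q ≈ {exp_sup:.3f}")
print(f"excess-risk lower bound ≈ {exp_sup * delta / np.sqrt(n):.5f}")
```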

Original language: English
Pages (from-to): 605-613
Number of pages: 9
Journal: Bernoulli
Volume: 16
Issue number: 3
DOIs:
Publication status: Published - Aug 2010
