Abstract
We present an argument, based on the multidimensional and the uniform central limit theorems, proving that, under some geometric assumptions relating the target function $T$ and the learning class $F$, the excess risk of the empirical risk minimization algorithm is lower bounded by $\mathbb{E}\sup_{q\in Q}G_q/(\delta\sqrt{n})$, where $(G_q)_{q\in Q}$ is a canonical Gaussian process associated with $Q$ (a well-chosen subset of $F$) and $\delta$ is a parameter governing the oscillations of the empirical excess risk function over a small ball in $F$.
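To make the statement concrete, the sketch below spells out the usual definitions of the excess risk and of a canonical Gaussian process indexed by a class $Q$, in standard notation that is assumed here rather than taken from the paper (an i.i.d. sample $(X_i,Y_i)_{i\le n}$, a loss $\ell$, the empirical risk minimizer $\hat f_n$); it is an illustrative setup, not the paper's exact formulation.

```latex
% Minimal sketch of the standard ERM setup (notation assumed for
% illustration; the paper's exact definitions may differ).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Given an i.i.d.\ sample $(X_i,Y_i)_{i=1}^{n}$, a loss $\ell$ and a class $F$,
the empirical risk minimizer and its excess risk are
\begin{align*}
  \hat f_n &\in \operatorname*{arg\,min}_{f\in F}\ \frac1n\sum_{i=1}^{n}\ell\bigl(f(X_i),Y_i\bigr),\\
  \mathcal{E}(\hat f_n) &= \mathbb{E}\bigl[\ell(\hat f_n(X),Y)\,\big|\,\hat f_n\bigr]
      -\inf_{f\in F}\mathbb{E}\,\ell\bigl(f(X),Y\bigr).
\end{align*}
A canonical Gaussian process indexed by a class $Q$ of square-integrable
functions is a centered Gaussian process $(G_q)_{q\in Q}$ whose covariance
mirrors that of the class, e.g.
\[
  \mathbb{E}\,G_qG_{q'}=\operatorname{Cov}\bigl(q(X),q'(X)\bigr),
\]
so that $\mathbb{E}\sup_{q\in Q}G_q$ is the Gaussian complexity of $Q$
entering the lower bound stated in the abstract.
\end{document}
```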
Original language | English |
---|---|
Pages (from-to) | 605-613 |
Number of pages | 9 |
Journal | Bernoulli |
Volume | 16 |
Issue number | 3 |
DOIs | |
Publication status | Published - Aug 2010 |