Abstract
In this correspondence, we present a simple argument showing that, under mild geometric assumptions on the class F and the set of target functions T, the empirical minimization algorithm cannot yield a uniform error rate faster than 1/√k in the function learning setup. This result holds for various loss functionals, and the target functions in T that cause the slow uniform error rate are explicitly exhibited.
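A schematic rendering of the stated lower bound may help fix the notation; the symbols here (k for the sample size, f̂_k for the empirical minimizer over F, ℓ for the loss functional, c for an unspecified constant) are illustrative assumptions and not verbatim from the paper.

```latex
% Schematic form of the claimed lower bound (illustrative notation only):
% \hat{f}_k is the empirical minimizer in \mathcal{F} computed from k samples,
% \ell is the loss functional, and c > 0 is some constant.
\[
  \sup_{T \in \mathcal{T}} \,
  \mathbb{E}\!\left[ \ell\bigl(\hat{f}_k\bigr) - \inf_{f \in \mathcal{F}} \ell(f) \right]
  \;\ge\; \frac{c}{\sqrt{k}}
  \qquad \text{for all sufficiently large } k .
\]
```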
| | |
|---|---|
| Original language | English |
| Pages (from-to) | 3797-3803 |
| Number of pages | 7 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 54 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - Aug 2008 |