Abstract
We study the performance of empirical risk minimization in prediction and estimation problems carried out in a convex class and with respect to a sufficiently smooth convex loss function. The framework is based on the small-ball method and is therefore well suited to heavy-tailed problems. Moreover, one of its outcomes is that a well-chosen loss, calibrated to the noise level of the problem, negates some of the ill effects of outliers and boosts the confidence level, leading to Gaussian-like behaviour even when the target random variable is heavy-tailed.
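The claim that a loss calibrated to the noise level tempers outliers can be illustrated with a toy experiment; the following is a minimal sketch and not the paper's procedure. It compares empirical risk minimization over the convex class of constants under squared loss and under a Huber loss, whose tuning parameter `delta` (a hypothetical choice here) stands in for "a well-chosen loss, calibrated to the noise level".

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Heavy-tailed sample: true mean 2.0 plus Student-t noise with 2.5
# degrees of freedom (finite variance, but far from sub-Gaussian tails).
y = 2.0 + rng.standard_t(df=2.5, size=500)

def erm(loss, y):
    """Empirical risk minimizer over the convex class of constant predictors."""
    return minimize_scalar(lambda t: np.mean(loss(y - t))).x

squared = lambda r: r**2          # quadratic growth amplifies outliers

delta = 1.0                       # hypothetical noise-level calibration
def huber(r):
    # Quadratic near zero, linear in the tails: large residuals
    # contribute only linearly to the empirical risk.
    return np.where(np.abs(r) <= delta,
                    0.5 * r**2,
                    delta * (np.abs(r) - 0.5 * delta))

print("squared-loss ERM:", erm(squared, y))
print("Huber-loss ERM:  ", erm(huber, y))
```

Across repeated draws, the Huber-loss minimizer fluctuates less around the true mean than the squared-loss minimizer, a small-scale analogue of the confidence-boosting effect described above.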
| Original language | English |
|---|---|
| Pages (from-to) | 459–502 |
| Number of pages | 44 |
| Journal | Probability Theory and Related Fields |
| Volume | 171 |
| Issue number | 1–2 |
| DOIs | |
| Publication status | Published - 1 Jun 2018 |