Learning without concentration for general loss functions

Shahar Mendelson*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    23 Citations (Scopus)

    Abstract

    We study the performance of empirical risk minimization in prediction and estimation problems carried out in a convex class relative to a sufficiently smooth convex loss function. The framework is based on the small-ball method and is therefore suited to heavy-tailed problems. Moreover, one of its outcomes is that a well-chosen loss, calibrated to the noise level of the problem, negates some of the ill effects of outliers and boosts the confidence level, leading to Gaussian-like behaviour even when the target random variable is heavy-tailed.
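    As an illustrative sketch only (not code from the paper), the abstract's point about a loss calibrated to the noise level can be seen by estimating a location parameter from heavy-tailed data. The class {θ ∈ ℝ} is convex and the Huber loss below is a smooth convex loss with a tuning parameter `delta` playing the role of the calibration; all function names are hypothetical.

    ```python
    import numpy as np

    def huber_grad(r, delta):
        # Gradient of the Huber loss: quadratic near zero, linear in the tails,
        # so outliers have bounded influence on the empirical risk.
        return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

    def erm_location(y, delta, steps=500, lr=0.1):
        # Empirical risk minimization for a scalar location parameter under
        # the Huber loss, via plain gradient descent on the empirical risk.
        theta = np.median(y)  # robust initialisation
        for _ in range(steps):
            theta -= lr * np.mean(huber_grad(theta - y, delta))
        return theta

    rng = np.random.default_rng(0)
    # Heavy-tailed sample: Student-t with 2 degrees of freedom (mean 1,
    # infinite variance), so the target random variable is heavy-tailed.
    y = 1.0 + rng.standard_t(df=2, size=2000)

    mean_est = y.mean()                     # ERM with the squared loss
    huber_est = erm_location(y, delta=1.0)  # ERM with a calibrated Huber loss
    print(mean_est, huber_est)
    ```

    On such samples the Huber-based minimizer concentrates around the true location far more reliably than the empirical mean, which is the squared-loss ERM and is sensitive to the infinite-variance tails.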

    Original language: English
    Pages (from-to): 459-502
    Number of pages: 44
    Journal: Probability Theory and Related Fields
    Volume: 171
    Issue number: 1-2
    Publication status: Published - 1 Jun 2018

