On aggregation for heavy-tailed classes

Shahar Mendelson*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

We introduce an alternative to the notion of ‘fast rate’ in Learning Theory, which coincides with the optimal error rate when the given class happens to be convex and regular in some sense. While it is well known that such a rate cannot always be attained by a learning procedure (i.e., a procedure that selects a function in the given class), we introduce an aggregation procedure that attains that rate under rather minimal assumptions—for example, that the $L_q$ and $L_2$ norms are equivalent on the linear span of the class for some $q > 2$, and the target random variable is square-integrable. The key components in the proof include a two-sided isomorphic estimator on distances between class members, which is based on the median-of-means; and an almost isometric lower bound of the form $N^{-1}\sum_{i=1}^{N} f^2(X_i) \ge (1-\zeta)\,\mathbb{E}f^2$ which holds uniformly in the class. Both results only require that the $L_q$ and $L_2$ norms are equivalent on the linear span of the class for some $q > 2$.
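For readers unfamiliar with the median-of-means device mentioned in the abstract, the following Python sketch illustrates the basic estimator on a scalar, heavy-tailed example. It is only an illustration of the building block under assumed, illustrative choices (the `median_of_means` helper, the Lomax design, the block count); the paper's procedure applies the device to distances between class members and couples it with the uniform lower bound above, which is not reproduced here.

```python
import numpy as np

def median_of_means(values, num_blocks):
    """Median-of-means estimate of the mean of a (possibly heavy-tailed) sample.

    The data are split into `num_blocks` disjoint blocks of equal size;
    the estimate is the median of the block-wise empirical means.
    """
    values = np.asarray(values, dtype=float)
    block_size = len(values) // num_blocks
    # Drop the remainder so every block has exactly `block_size` points.
    trimmed = values[: block_size * num_blocks]
    block_means = trimmed.reshape(num_blocks, block_size).mean(axis=1)
    return float(np.median(block_means))

rng = np.random.default_rng(0)

# Heavy-tailed design: a Lomax (Pareto II) sample with tail index 2.5, so
# for f(x) = x the second moment E f^2 is finite but higher moments are not.
X = rng.pareto(2.5, size=10_000)
f_squared = X ** 2

naive = f_squared.mean()                              # plain empirical mean
robust = median_of_means(f_squared, num_blocks=20)    # block-median estimate

print(f"empirical mean of f^2 : {naive:.3f}")
print(f"median-of-means       : {robust:.3f}")
# For this Lomax law, E f^2 = 2 / ((2.5 - 1) * (2.5 - 2)) ≈ 2.667.
```

The point of the blocking-and-median step is that its deviations around $\mathbb{E}f^2$ behave sub-Gaussianly even when only a second moment exists, which is the regime the abstract's $L_q$–$L_2$ equivalence assumption is designed for.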

Original language: English
Pages (from-to): 641-674
Number of pages: 34
Journal: Probability Theory and Related Fields
Volume: 168
Issue number: 3-4
DOIs
Publication status: Published - 1 Aug 2017
Externally published: Yes
