Regularization and the small-ball method I: sparse recovery

Guillaume Lecue, Shahar Mendelson

    Research output: Contribution to journal › Article › peer-review

    Abstract

    We obtain bounds on estimation error rates for regularization procedures of the form
    $$\hat{f} \in \operatorname{argmin}_{f \in F}\Big(\frac{1}{N}\sum_{i=1}^{N}\big(Y_i - f(X_i)\big)^2 + \lambda\Psi(f)\Big)$$
    when $\Psi$ is a norm and $F$ is convex. Our approach gives a common framework that may be used in the analysis of learning problems and regularization problems alike. In particular, it sheds some light on the role various notions of sparsity have in regularization and on their connection with the size of subdifferentials of $\Psi$ in a neighborhood of the true minimizer. As proof of concept we extend the known estimates for the LASSO, SLOPE and trace norm regularization.
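    To make the estimator concrete, the following is a minimal sketch (not from the paper) of the penalized empirical risk minimizer in the special case $\Psi(f) = \|w\|_1$ over linear functions $f(x) = \langle w, x\rangle$, i.e. the LASSO, solved by proximal gradient descent (ISTA). All names and parameter choices here are illustrative assumptions.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (component-wise shrinkage).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def lasso_ista(X, y, lam, n_iter=500):
        """Minimize (1/N) * ||y - X w||_2^2 + lam * ||w||_1 via ISTA.

        This is the estimator from the abstract with F = linear class
        and Psi = l1 norm; lam plays the role of the regularization
        parameter lambda.
        """
        N, d = X.shape
        w = np.zeros(d)
        # Step size 1/L, where L is the Lipschitz constant of the
        # gradient of the smooth part: L = (2/N) * sigma_max(X)^2.
        L = 2.0 / N * np.linalg.norm(X, 2) ** 2
        step = 1.0 / L
        for _ in range(n_iter):
            grad = 2.0 / N * X.T @ (X @ w - y)
            w = soft_threshold(w - step * grad, step * lam)
        return w
    ```

    On a sparse linear model with i.i.d. Gaussian design, the recovered vector is sparse and close to the truth, which is the kind of behavior the paper's estimation-error bounds quantify.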
    Original language: English
    Pages (from-to): 611-641
    Journal: Annals of Statistics
    Volume: 46
    Issue number: 2
    DOIs
    Publication status: Published - 2018
