Abstract
We obtain bounds on estimation error rates for regularization procedures of the form

$$\hat f \in \operatorname*{argmin}_{f\in F}\left(\frac{1}{N}\sum_{i=1}^{N}\big(Y_i - f(X_i)\big)^2 + \lambda\,\Psi(f)\right)$$

when $\Psi$ is a norm and $F$ is convex. Our approach gives a common framework that may be used in the analysis of learning problems and regularization problems alike. In particular, it sheds some light on the role various notions of sparsity play in regularization and on their connection with the size of the subdifferentials of $\Psi$ in a neighborhood of the true minimizer. As a "proof of concept," we extend the known estimates for the LASSO, SLOPE and trace norm regularization.
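As a minimal illustration of the penalized least-squares procedure above, consider the special case where $\Psi$ is the $\ell_1$ norm (the LASSO). The sketch below solves it by proximal gradient descent (ISTA); this is a standard solver for illustration only, not the method analyzed in the paper, and all function names and parameters here are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: shrinks each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize (1/N) * ||y - Xw||^2 + lam * ||w||_1 via ISTA.

    This is the abstract's objective with F the class of linear
    functionals f(x) = <w, x> and Psi(f) = ||w||_1.
    """
    N, d = X.shape
    # Step size 1/L, where L = 2 * sigma_max(X)^2 / N bounds the
    # Lipschitz constant of the gradient of the quadratic term.
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / N
    step = 1.0 / L
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = (2.0 / N) * X.T @ (X @ w - y)          # gradient step
        w = soft_threshold(w - step * grad, step * lam)  # proximal step
    return w
```

With a sparse ground-truth vector and Gaussian design, the estimator recovers the support up to the usual soft-thresholding bias of order $\lambda$:

```python
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[0], w_true[3] = 3.0, -2.0
y = X @ w_true + 0.1 * rng.standard_normal(200)
w_hat = lasso_ista(X, y, lam=0.1)
```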
Original language | English |
---|---|
Pages (from-to) | 611-641 |
Number of pages | 31 |
Journal | Annals of Statistics |
Volume | 46 |
Issue number | 2 |
DOIs | |
Publication status | Published - Apr 2018 |
Externally published | Yes |