ℓ1-regularized linear regression: Persistence and oracle inequalities

Peter L. Bartlett, Shahar Mendelson, Joseph Neeman*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

29 Citations (Scopus)

Abstract

We study the predictive performance of ℓ1-regularized linear regression in a model-free setting, including the case where the number of covariates is substantially larger than the sample size. We introduce a new analysis method that avoids the boundedness problems that typically arise in model-free empirical minimization. Our technique provides an answer to a conjecture of Greenshtein and Ritov (Bernoulli 10(6):971-988, 2004) regarding the "persistence" rate for linear regression and allows us to prove an oracle inequality for the error of the regularized minimizer. It also demonstrates that empirical risk minimization achieves the optimal rates (up to logarithmic factors) for convex aggregation of a set of estimators of a regression function.
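As a minimal sketch of the setting referred to in the abstract (the notation below is illustrative and not taken from the paper): given data (X_1, Y_1), ..., (X_n, Y_n) and squared-loss risk R(\beta) = \mathbb{E}\,(Y - \langle \beta, X \rangle)^2, the ℓ1-regularized empirical minimizer over the ball of radius b_n is

\hat{\beta}_n \in \arg\min_{\|\beta\|_1 \le b_n} \frac{1}{n} \sum_{i=1}^{n} \bigl( Y_i - \langle \beta, X_i \rangle \bigr)^2 ,

and, in the sense of Greenshtein and Ritov, the sequence (\hat{\beta}_n) is called persistent if

R(\hat{\beta}_n) - \inf_{\|\beta\|_1 \le b_n} R(\beta) \to 0 \quad \text{in probability as } n \to \infty ,

even when the number of covariates grows much faster than the sample size n.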

Original language: English
Pages (from-to): 193-224
Number of pages: 32
Journal: Probability Theory and Related Fields
Volume: 154
Issue number: 1-2
DOIs:
Publication status: Published - Oct 2012
Externally published: Yes
