Convergence rates in ℓ1-regularization when the basis is not smooth enough

Jens Flemming*, Markus Hegland

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    11 Citations (Scopus)

    Abstract

    Sparsity-promoting regularization is an important technique for signal reconstruction and several other ill-posed problems. Theoretical investigations are typically based on the assumption that the unknown solution has a sparse representation with respect to a fixed basis. We drop this sparsity assumption and provide error estimates for nonsparse solutions. After discussing a result in this direction published earlier by one of the authors and co-authors, we prove a similar error estimate under weaker assumptions. Two examples illustrate that this set of weaker assumptions indeed covers additional situations arising in applications.
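    For orientation, results of this kind usually refer to ℓ1-penalized Tikhonov regularization; the following display is a minimal sketch under generic assumptions (the operator A, noisy data y^δ, noise level δ, and exact solution x† are illustrative notation, not taken from the article):

        x_\alpha^\delta \in \operatorname*{arg\,min}_{x \in \ell^1} \; \|Ax - y^\delta\|_Y^2 + \alpha \|x\|_{\ell^1},
        \qquad \|y^\delta - A x^\dagger\|_Y \le \delta.

    A convergence-rate statement then bounds \|x_\alpha^\delta - x^\dagger\|_{\ell^1} by a function of δ for a suitable parameter choice α = α(δ); the article establishes such bounds without assuming that x^\dagger is sparse.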

    Original language: English
    Pages (from-to): 464-476
    Number of pages: 13
    Journal: Applicable Analysis
    Volume: 94
    Issue number: 3
    DOIs
    Publication status: Published - 4 Mar 2015
