Exploiting sparsity in adaptive filters

Richard K. Martin*, William A. Sethares, Robert C. Williamson, C. Richard Johnson

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    103 Citations (Scopus)

    Abstract

    This paper studies a class of algorithms called natural gradient (NG) algorithms. The least mean square (LMS) algorithm is derived within the NG framework, and a family of LMS variants that exploit sparsity is derived. This procedure is repeated for other algorithm families, such as the constant modulus algorithm (CMA) and decision-directed (DD) LMS. Mean squared error (MSE), stability, and convergence analyses of the family of sparse LMS algorithms are provided, and it is shown that if the system is sparse, then the new algorithms converge faster for a given total asymptotic MSE. Simulations confirm the analysis. In addition, Bayesian priors matching the statistics of a database of real channels are given, and algorithms that exploit these priors are derived. Simulations using measured channels demonstrate a realistic application of these algorithms.
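
    The flavor of the sparse LMS family can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact derivation: alongside standard LMS, it runs a reweighted variant whose per-tap step size is scaled by (|w_i| + eps), the kind of sparsity-favoring scaling a natural-gradient reparameterization produces. The channel, step sizes, and the particular scaling function are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse channel: only 3 of 32 taps are nonzero (illustration only).
L = 32
h = np.zeros(L)
h[[3, 11, 20]] = [1.0, -0.5, 0.25]

n_samples, mu, eps = 5000, 0.01, 0.1
x = rng.standard_normal(n_samples)
d = np.convolve(x, h)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

w_lms = np.zeros(L)  # standard LMS estimate
w_sp = np.zeros(L)   # sparsity-exploiting variant

for n in range(L, n_samples):
    u = x[n - L + 1 : n + 1][::-1]  # regressor, most recent sample first

    # Standard LMS: every tap shares the same step size mu.
    e = d[n] - w_lms @ u
    w_lms += mu * e * u

    # Reweighted variant: per-tap step mu * (|w_i| + eps), so taps that look
    # active adapt quickly while near-zero taps are held close to zero.
    e = d[n] - w_sp @ u
    w_sp += mu * e * u * (np.abs(w_sp) + eps)

print("LMS    squared tap error:", np.sum((w_lms - h) ** 2))
print("sparse squared tap error:", np.sum((w_sp - h) ** 2))
```

    On a sparse channel like the one above, the reweighted update concentrates adaptation on the few active taps, which is the qualitative behavior the paper quantifies through its MSE and convergence analyses.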

    Original language: English
    Pages (from-to): 1883-1894
    Number of pages: 12
    Journal: IEEE Transactions on Signal Processing
    Volume: 50
    Issue number: 8
    DOIs
    Publication status: Published - Aug 2002
