Convergence of exponentiated gradient algorithms

Simon I. Hill*, Robert C. Williamson

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

27 Citations (Scopus)

Abstract

This paper studies three related algorithms: the (traditional) gradient descent (GD) algorithm, the exponentiated gradient algorithm with positive and negative weights (EG± algorithm), and the exponentiated gradient algorithm with unnormalized positive and negative weights (EGU± algorithm). These algorithms have previously been analyzed using the "mistake-bound framework" in the computational learning theory community. In this paper, we perform a traditional signal processing analysis in terms of the mean squared error (MSE). A relationship between the learning rate and the MSE of predictions is found for this family of algorithms, and it is used to compare their performance by choosing learning rates such that they converge to the same steady-state MSE. We demonstrate that if the target weight vector is sparse, the EG± algorithm typically converges more quickly than the GD and EGU± algorithms, which themselves perform very similarly. A by-product of our analysis is a reparametrization of the algorithms that provides insight into their behavior. The general form of the results we obtain is consistent with those obtained in the mistake-bound framework. The application of the algorithms to acoustic echo cancellation is then studied, and it is shown that, in some circumstances, the EG± algorithm converges faster than the other two algorithms.
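For orientation, the three updates being compared can be sketched as follows. This is a minimal NumPy sketch of one squared-loss update step per algorithm, in the style of Kivinen and Warmuth's formulations, not code from the paper; the function names, the learning rate eta, and the total-weight parameter U are illustrative assumptions.

    import numpy as np

    def gd_step(w, x, y, eta):
        # GD: additive update, w <- w - eta * gradient of (w.x - y)^2.
        grad = 2.0 * (np.dot(w, x) - y) * x
        return w - eta * grad

    def egu_pm_step(w_pos, w_neg, x, y, eta):
        # EGU+/-: unnormalized multiplicative updates on two non-negative
        # weight vectors; the effective weight vector is w_pos - w_neg.
        grad = 2.0 * (np.dot(w_pos - w_neg, x) - y) * x
        return w_pos * np.exp(-eta * grad), w_neg * np.exp(eta * grad)

    def eg_pm_step(w_pos, w_neg, x, y, eta, U=1.0):
        # EG+/-: as EGU+/-, but jointly renormalized so the entries of
        # w_pos and w_neg together always sum to the total weight U.
        grad = 2.0 * (np.dot(w_pos - w_neg, x) - y) * x
        r_pos = w_pos * np.exp(-eta * grad)
        r_neg = w_neg * np.exp(eta * grad)
        scale = U / (r_pos.sum() + r_neg.sum())
        return r_pos * scale, r_neg * scale

The multiplicative form of the two EG updates is what favors sparse targets: weight components are rescaled geometrically at each step, so components that are irrelevant to the target decay toward zero quickly.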

Original language: English
Pages (from-to): 1208-1215
Number of pages: 8
Journal: IEEE Transactions on Signal Processing
Volume: 49
Issue number: 6
Publication status: Published - Jun 2001
Externally published: Yes

