Abstract
This paper studies three related algorithms: the traditional gradient descent (GD) algorithm, the exponentiated gradient algorithm with positive and negative weights (EG± algorithm), and the exponentiated gradient algorithm with unnormalized positive and negative weights (EGU± algorithm). These algorithms have previously been analyzed using the "mistake-bound framework" in the computational learning theory community. In this paper, we perform a traditional signal processing analysis in terms of the mean squared error (MSE). A relationship between the learning rate and the MSE of predictions is found for this family of algorithms. This is used to compare the performance of the algorithms by choosing learning rates such that they converge to the same steady-state MSE. We demonstrate that if the target weight vector is sparse, the EG± algorithm typically converges more quickly than the GD and EGU± algorithms, which perform very similarly to each other. A side effect of our analysis is a reparametrization of the algorithms that provides insights into their behavior. The general form of the results we obtain is consistent with those obtained in the mistake-bound framework. The application of the algorithms to acoustic echo cancellation is then studied, and it is shown that in some circumstances the EG± algorithm converges faster than the other two algorithms.
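The abstract does not reproduce the update rules themselves, so the following Python sketch shows the standard forms of the three updates for a linear predictor, as given in the exponentiated-gradient literature (Kivinen and Warmuth). The learning rate `eta`, total weight mass `U`, and the toy sparse-target run are illustrative assumptions, not the paper's experimental setup; the paper's exact step sizes and normalization may differ.

```python
import numpy as np

def gd_step(w, x, y, eta):
    """One gradient descent (LMS-style) update of a linear predictor."""
    err = w @ x - y                      # prediction error
    return w - eta * err * x             # gradient step on the squared error
                                         # (constant factors absorbed into eta)

def eg_pm_step(w_pos, w_neg, x, y, eta, U=1.0):
    """One EG± update: multiplicative steps on separate positive and
    negative weight vectors, renormalized to a fixed total mass U."""
    err = (w_pos - w_neg) @ x - y
    r = np.exp(-eta * err * x)           # componentwise multiplicative factor
    w_pos, w_neg = w_pos * r, w_neg / r
    Z = (w_pos.sum() + w_neg.sum()) / U  # shared normalizer
    return w_pos / Z, w_neg / Z

def egu_pm_step(w_pos, w_neg, x, y, eta):
    """One EGU± update: the same multiplicative step, left unnormalized."""
    err = (w_pos - w_neg) @ x - y
    r = np.exp(-eta * err * x)
    return w_pos * r, w_neg / r

# Toy run with a sparse target, the regime where the abstract says EG±
# typically converges faster (assumed parameters, not the paper's).
rng = np.random.default_rng(0)
n, T, eta = 20, 500, 0.05
target = np.zeros(n)
target[0] = 1.0                          # sparse target weight vector
w = np.zeros(n)
w_pos = np.full(n, 0.5 / n)              # EG± mass spread evenly, total U = 1
w_neg = np.full(n, 0.5 / n)
for _ in range(T):
    x = rng.standard_normal(n)
    y = target @ x                       # noiseless linear plant
    w = gd_step(w, x, y, eta)
    w_pos, w_neg = eg_pm_step(w_pos, w_neg, x, y, eta)
print("GD  weight error:", np.abs(w - target).sum())
print("EG± weight error:", np.abs(w_pos - w_neg - target).sum())
```

The effective EG± weight vector is the difference `w_pos - w_neg`; the multiplicative form of its update is what lets it concentrate mass on the few relevant components quickly when the target is sparse.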
| Original language | English |
| --- | --- |
| Pages (from-to) | 1208-1215 |
| Number of pages | 8 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 49 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - Jun 2001 |
| Externally published | Yes |