TY - JOUR
T1 - Multi-step gradient methods for networked optimization
AU - Ghadimi, Euhanna
AU - Shames, Iman
AU - Johansson, Mikael
PY - 2013
Y1 - 2013
N2 - We develop multi-step gradient methods for network-constrained optimization of strongly convex functions with Lipschitz-continuous gradients. Given the topology of the underlying network and bounds on the Hessian of the objective function, we determine the algorithm parameters that guarantee the fastest convergence and characterize situations when significant speed-ups over the standard gradient method are obtained. Furthermore, we quantify how uncertainty in problem data at design time affects the run-time performance of the gradient method and its multi-step counterpart, and conclude that in most cases the multi-step method outperforms gradient descent. Finally, we apply the proposed technique to three engineering problems: resource allocation under a network-wide budget constraint, distributed averaging, and Internet congestion control. In all cases, our proposed algorithms converge significantly faster than the state of the art.
AB - We develop multi-step gradient methods for network-constrained optimization of strongly convex functions with Lipschitz-continuous gradients. Given the topology of the underlying network and bounds on the Hessian of the objective function, we determine the algorithm parameters that guarantee the fastest convergence and characterize situations when significant speed-ups over the standard gradient method are obtained. Furthermore, we quantify how uncertainty in problem data at design time affects the run-time performance of the gradient method and its multi-step counterpart, and conclude that in most cases the multi-step method outperforms gradient descent. Finally, we apply the proposed technique to three engineering problems: resource allocation under a network-wide budget constraint, distributed averaging, and Internet congestion control. In all cases, our proposed algorithms converge significantly faster than the state of the art.
KW - Distributed optimization
KW - Accelerated gradient methods
KW - Fast convergence
KW - Primal and dual decomposition
KW - Robustness analysis
UR - http://www.scopus.com/inward/record.url?scp=84885164451&partnerID=8YFLogxK
U2 - 10.1109/TSP.2013.2278149
DO - 10.1109/TSP.2013.2278149
M3 - Article
SN - 1053-587X
VL - 61
SP - 5417
EP - 5429
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
IS - 21
M1 - 6578176
ER -