TY - JOUR
T1 - Asynchronous Distributed Optimization via Dual Decomposition and Block Coordinate Subgradient Methods
AU - Lin, Yankai
AU - Shames, Iman
AU - Nesic, Dragan
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/9
Y1 - 2021/9
N2 - In this article, we study the problem of minimizing the sum of potentially nondifferentiable convex cost functions with partially overlapping dependencies in an asynchronous manner, where communication in the network is not coordinated. We study the behavior of an asynchronous algorithm based on dual decomposition and block coordinate subgradient methods under assumptions weaker than those used in the literature. At the same time, we allow different agents to use local stepsizes with no global coordination. Sufficient conditions are provided for almost sure convergence to the solution of the optimization problem. Under additional assumptions, we establish a sublinear convergence rate that, in turn, can be strengthened to a linear convergence rate if the problem is strongly convex and has Lipschitz gradients. We also extend available results in the literature by allowing multiple and potentially overlapping blocks to be updated at the same time, with nonuniform and potentially time-varying probabilities assigned to different blocks. A numerical example is provided to illustrate the effectiveness of the algorithm.
KW - Asynchronous algorithms
KW - distributed optimization
KW - networked control systems
UR - http://www.scopus.com/inward/record.url?scp=85102703031&partnerID=8YFLogxK
U2 - 10.1109/TCNS.2021.3065644
DO - 10.1109/TCNS.2021.3065644
M3 - Article
SN - 2325-5870
VL - 8
SP - 1348
EP - 1359
JO - IEEE Transactions on Control of Network Systems
JF - IEEE Transactions on Control of Network Systems
IS - 3
ER -