Abstract
This paper considers a cross-layer adaptive modulation system that is modeled as a Markov decision process (MDP). We study how to utilize the monotonicity of the optimal transmission policy to reduce the computational complexity of dynamic programming (DP). In this system, a scheduler controls the bit rate of M-quadrature amplitude modulation (M-QAM) in order to minimize the long-term losses incurred by queue overflow in the data link layer and transmission power consumption in the physical layer. The work is done in two steps. First, we exploit the L-convexity and submodularity of DP to prove that the optimal policy is always nondecreasing in the queue occupancy/state and to derive a sufficient condition for it to be nondecreasing in both the queue and channel states. We also show that, due to the L-convexity of DP, the variation of the optimal policy in the queue state is restricted by a bounded marginal effect: the increment of the optimal policy between adjacent queue states is no greater than one. Second, we use the monotonicity results to present two low-complexity algorithms: monotonic policy iteration (MPI) based on L-convexity and discrete simultaneous perturbation stochastic approximation (DSPSA). Our experiments show that the time complexity of MPI based on L-convexity is much lower than that of DP and of the conventional MPI based on submodularity, and that DSPSA is able to adaptively track the optimal policy when the system parameters change.
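The key computational idea in the abstract is that monotonicity prunes the action search in policy improvement: since the optimal policy is nondecreasing in the queue state and, by the bounded marginal effect, increases by at most one between adjacent queue states, only two candidate actions need to be compared per queue state. The following is a minimal sketch of that restriction in a toy discounted MDP; the queue dynamics, stage cost, and dimensions (`B`, `A`, `gamma`) are illustrative assumptions, not the paper's system model.

```python
import numpy as np

B, A = 20, 6      # toy queue occupancy levels and candidate M-QAM rates (assumed)
gamma = 0.95      # discount factor (assumed)

# Placeholder stage cost c[b, a]: a convex mix standing in for the paper's
# weighted overflow-loss and transmission-power terms.
b_idx = np.arange(B)[:, None]
a_idx = np.arange(A)[None, :]
cost = 0.05 * (b_idx - a_idx) ** 2 + 0.3 * a_idx

def transition(a):
    """Toy deterministic queue dynamics: one arrival, 'a' departures per slot."""
    P = np.zeros((B, B))
    for b in range(B):
        P[b, np.clip(b + 1 - a, 0, B - 1)] = 1.0
    return P

P = [transition(a) for a in range(A)]

def evaluate(policy):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = c_pi."""
    P_pi = np.array([P[policy[b]][b] for b in range(B)])
    c_pi = cost[np.arange(B), policy]
    return np.linalg.solve(np.eye(B) - gamma * P_pi, c_pi)

def monotone_improve(V):
    """Policy improvement restricted by the monotonicity results:
    nondecreasing in queue state, with per-state increment at most one."""
    policy = np.zeros(B, dtype=int)
    lo = 0
    for b in range(B):
        hi = min(lo + 1, A - 1)  # bounded marginal effect: step of at most 1
        Q = [cost[b, a] + gamma * P[a][b] @ V for a in range(lo, hi + 1)]
        policy[b] = lo + int(np.argmin(Q))
        lo = policy[b]           # enforce nondecreasing policy in queue state
    return policy

policy = np.zeros(B, dtype=int)
for _ in range(50):
    V = evaluate(policy)
    new_policy = monotone_improve(V)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
print("monotone policy over queue states:", policy)
```

The speedup comes from `monotone_improve` evaluating at most two Q-values per queue state instead of all `A`, which is the complexity reduction the abstract attributes to MPI based on L-convexity; conventional MPI based on submodularity would only lower-bound the search at `lo` without the increment-of-one cap.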
| Original language | English |
| --- | --- |
| Article number | 7509618 |
| Pages (from-to) | 3771-3785 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Communications |
| Volume | 64 |
| Issue number | 9 |
| DOIs | |
| Publication status | Published - Sept 2016 |