TY - GEN
T1 - Optimal use of experience in first person shooter environments
AU - Aitchison, Matthew
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/8
Y1 - 2019/8
N2 - Although reinforcement learning has made great strides recently, a continuing limitation is that it requires an extremely large number of interactions with the environment. In this paper, we explore the effectiveness of reusing experience from the experience replay buffer in the Deep Q-Learning algorithm. We test the effect of applying learning update steps multiple times per environmental step in the VizDoom environment and show, first, that this requires a change in the learning rate and, second, that it does not improve the performance of the agent. Furthermore, we show that updating less frequently is effective up to a ratio of 4:1, after which performance degrades significantly. These results quantitatively confirm the widespread practice of performing a learning update every 4th environmental step.
AB - Although reinforcement learning has made great strides recently, a continuing limitation is that it requires an extremely large number of interactions with the environment. In this paper, we explore the effectiveness of reusing experience from the experience replay buffer in the Deep Q-Learning algorithm. We test the effect of applying learning update steps multiple times per environmental step in the VizDoom environment and show, first, that this requires a change in the learning rate and, second, that it does not improve the performance of the agent. Furthermore, we show that updating less frequently is effective up to a ratio of 4:1, after which performance degrades significantly. These results quantitatively confirm the widespread practice of performing a learning update every 4th environmental step.
KW - Deep Learning
KW - Experience Replay
KW - Game AI
KW - Reinforcement Learning
UR - http://www.scopus.com/inward/record.url?scp=85073110236&partnerID=8YFLogxK
U2 - 10.1109/CIG.2019.8848049
DO - 10.1109/CIG.2019.8848049
M3 - Conference contribution
T3 - IEEE Conference on Computational Intelligence and Games, CIG
BT - IEEE Conference on Games 2019, CoG 2019
PB - IEEE Computer Society
T2 - 2019 IEEE Conference on Games, CoG 2019
Y2 - 20 August 2019 through 23 August 2019
ER -