Adaptive optics control with multi-agent model-free reinforcement learning

B. Pou*, F. Ferreira, E. Quinones, D. Gratadour, M. Martin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

24 Citations (Scopus)

Abstract

We present a novel formulation of closed-loop adaptive optics (AO) control as a multi-agent reinforcement learning (MARL) problem, in which the controller learns a non-linear policy and requires no a priori information on the dynamics of the atmosphere. We identify the challenges of applying a reinforcement learning (RL) method to AO and, to address them, propose combining model-free MARL for control with an autoencoder neural network to mitigate the effect of noise. Moreover, we extend existing methods of error budget analysis to include an RL controller. Experimental results for an 8 m telescope equipped with a 40×40 Shack-Hartmann system show a significant increase in performance over the integrator baseline and performance comparable to a model-based predictive approach: a linear quadratic Gaussian controller with perfect knowledge of atmospheric conditions. Finally, the error budget analysis provides evidence that the RL controller partially compensates for bandwidth error and helps mitigate the propagation of aliasing.
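To make the architecture described in the abstract concrete, the following is a minimal, purely illustrative sketch of the data flow: noisy wavefront-sensor slopes pass through a shared autoencoder for denoising, and multiple agents, each responsible for a subset of deformable-mirror actuators, map the denoised slopes to commands. All class names, dimensions (a 40×40 system gives roughly 2×40×40 slope measurements), the number of agents, the random untrained weights, and the linear per-agent policies are assumptions for the sketch — the paper's actual networks, training procedure, and agent partitioning are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)


class DenoisingAutoencoder:
    """Sketch of an autoencoder used to attenuate measurement noise:
    encodes noisy WFS slopes into a low-dimensional code and decodes
    back. Weights are random stand-ins for trained parameters."""

    def __init__(self, n_slopes, n_latent):
        self.enc = rng.normal(0, 1 / np.sqrt(n_slopes), (n_latent, n_slopes))
        self.dec = rng.normal(0, 1 / np.sqrt(n_latent), (n_slopes, n_latent))

    def __call__(self, slopes):
        # Encode, apply a non-linearity, decode back to slope space.
        return self.dec @ np.tanh(self.enc @ slopes)


class ActuatorAgent:
    """One agent per group of DM actuators. A linear map stands in for
    the learned non-linear policy described in the abstract."""

    def __init__(self, n_slopes, n_act):
        self.W = rng.normal(0, 0.01, (n_act, n_slopes))

    def act(self, denoised_slopes):
        return self.W @ denoised_slopes


# Illustrative dimensions for a 40x40 Shack-Hartmann system:
# two slope measurements (x, y) per subaperture.
n_slopes = 2 * 40 * 40
n_act = 40 * 40
n_agents = 16  # assumed partition; not the paper's choice

autoencoder = DenoisingAutoencoder(n_slopes, n_latent=64)
agents = [ActuatorAgent(n_slopes, n_act // n_agents) for _ in range(n_agents)]

noisy_slopes = rng.normal(size=n_slopes)        # one frame of WFS measurements
clean_slopes = autoencoder(noisy_slopes)        # shared denoising step
command = np.concatenate([a.act(clean_slopes) for a in agents])
print(command.shape)  # (1600,): one command per DM actuator
```

The multi-agent decomposition is what keeps each policy's action space small: instead of one agent emitting all 1600 actuator commands, each agent handles a slice, which is one plausible motivation for the MARL formulation.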

Original language: English
Pages (from-to): 2991-3015
Number of pages: 25
Journal: Optics Express
Volume: 30
Issue number: 2
DOIs
Publication status: Published - 17 Jan 2022
