Abstract
We discuss a variant of Thompson sampling for nonparametric reinforcement learning over a countable class of general stochastic environments. These environments can be non-Markov, non-ergodic, and partially observable. We show that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges to the optimal value in mean, and (2) given a recoverability assumption, regret is sublinear.
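The algorithm studied here is posterior sampling over a class of environments: draw an environment from the current posterior, act optimally for the drawn environment, then update the posterior on what was observed. As a toy illustration only (not the paper's general non-Markov, partially observable setting), the following sketch runs this loop over a hypothetical finite class of two-armed Bernoulli bandit environments; the class, the prior, and the data-generating environment are all assumptions made up for the example.

```python
import random

# Hypothetical finite stand-in for a countable environment class: each
# "environment" is a two-armed Bernoulli bandit, given by its arm means.
ENVIRONMENTS = [(0.2, 0.8), (0.8, 0.2), (0.5, 0.5)]

def posterior_update(weights, arm, reward):
    """Exact Bayes update of the posterior over environments after
    observing a binary `reward` from pulling `arm`."""
    new = []
    for w, env in zip(weights, ENVIRONMENTS):
        p = env[arm]
        likelihood = p if reward == 1 else 1.0 - p
        new.append(w * likelihood)
    total = sum(new)
    return [w / total for w in new]

def thompson_step(weights, rng):
    """One round of Thompson sampling: sample an environment from the
    posterior, act optimally for the sample, observe, update."""
    env = rng.choices(ENVIRONMENTS, weights=weights)[0]
    arm = 0 if env[0] >= env[1] else 1      # optimal arm for the sampled env
    true_env = ENVIRONMENTS[0]              # assumed data-generating env
    reward = 1 if rng.random() < true_env[arm] else 0
    return posterior_update(weights, arm, reward)

rng = random.Random(0)
weights = [1 / 3, 1 / 3, 1 / 3]             # uniform prior over the class
for _ in range(200):
    weights = thompson_step(weights, rng)
# The posterior concentrates on the data-generating environment,
# mirroring the asymptotic-learning claim in this simplified setting.
print(weights)
```

In the paper's setting the class is countable and environments are general history-based processes, so the sampled environment is followed for an effective horizon rather than a single step; the toy above only conveys the sample/act/update structure.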
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence |
| Editors | Alexander Ihler and Dominik Janzing |
| Place of Publication | Canada |
| Publisher | AUAI Press |
| Pages | 417-426 |
| Edition | Peer reviewed |
| ISBN (Print) | 9781510827806 |
| Publication status | Published - 2016 |
| Event | 32nd Conference on Uncertainty in Artificial Intelligence 2016, Jersey City, New Jersey, USA. Duration: 1 Jan 2016 → … |
Conference
| Conference | 32nd Conference on Uncertainty in Artificial Intelligence 2016 |
|---|---|
| Period | 1/01/16 → … |
| Other | June 25-29, 2016 |