A Monte-Carlo AIXI approximation

Joel Veness*, Kee Siong Ng, Marcus Hutter, William Uther, David Silver

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    112 Citations (Scopus)

    Abstract

    This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. Our approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a new Monte-Carlo Tree Search algorithm along with an agent-specific extension to the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a variety of stochastic and partially observable domains. We conclude by proposing a number of directions for future research.
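    The abstract refers to a new Monte-Carlo Tree Search algorithm (the paper's search component). The paper's exact algorithm is not reproduced here; as a loose, hypothetical illustration of the UCB-style action selection that underlies Monte-Carlo Tree Search methods in general, consider the following minimal sketch (the function name `ucb_select`, the bandit setup, and all parameters are illustrative assumptions, not taken from the paper):

```python
import math
import random

def ucb_select(counts, values, c=1.0):
    """UCB1-style selection: pick the arm maximizing
    mean value + c * sqrt(log(total pulls) / pulls of arm).
    Untried arms are selected first. Illustrative only."""
    total = sum(counts)
    best, best_score = 0, float("-inf")
    for arm, (n, v) in enumerate(zip(counts, values)):
        if n == 0:
            return arm  # try each arm at least once
        score = v / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = arm, score
    return best

def run_bandit(probs, rounds=500, seed=0):
    """Drive ucb_select on a simple Bernoulli bandit and
    return how often each arm was pulled."""
    random.seed(seed)
    counts = [0] * len(probs)   # pulls per arm
    values = [0.0] * len(probs) # cumulative reward per arm
    for _ in range(rounds):
        arm = ucb_select(counts, values)
        reward = 1.0 if random.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += reward
    return counts

counts = run_bandit([0.2, 0.8])
```

In a tree-search setting this same exploration/exploitation trade-off is applied at every node of the search tree rather than over a flat set of arms; the paper extends this idea to the agent's history-based planning problem.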

    Original language: English
    Pages (from-to): 95-142
    Number of pages: 48
    Journal: Journal of Artificial Intelligence Research
    Volume: 40
    Publication status: Published - Jan 2011
