Competing with an infinite set of models in reinforcement learning

Phuong Nguyen, Odalric Ambrym Maillard, Daniil Ryabko, Ronald Ortner

    Research output: Contribution to journal › Conference article › peer-review

    8 Citations (Scopus)

    Abstract

    We consider a reinforcement learning setting where the learner also has to deal with the problem of finding a suitable state-representation function from a given set of models. This has to be done while interacting with the environment in an online fashion (no resets), and the goal is to have small regret with respect to any Markov model in the set. For this setting, the recently proposed BLB algorithm achieves regret of order T^{2/3}, provided that the given set of models is finite. Our first contribution is to extend this result to a countably infinite set of models. Moreover, the BLB regret bound suffers from an additive term that can be exponential in the diameter of the MDP involved, since the diameter has to be guessed. The algorithm we propose avoids guessing the diameter, thus improving the regret bound.

    Original language: English
    Pages (from-to): 463-471
    Number of pages: 9
    Journal: Journal of Machine Learning Research
    Volume: 31
    Publication status: Published - 2013
    Event: 16th International Conference on Artificial Intelligence and Statistics, AISTATS 2013 - Scottsdale, United States
    Duration: 29 Apr 2013 – 1 May 2013
