PAC bounds for discounted MDPs

Tor Lattimore*, Marcus Hutter

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    78 Citations (Scopus)

    Abstract

    We study upper and lower bounds on the sample-complexity of learning near-optimal behaviour in finite-state discounted Markov Decision Processes (MDPs). We prove a new bound for a modified version of Upper Confidence Reinforcement Learning (UCRL) with only cubic dependence on the horizon. The bound is unimprovable in all parameters except the size of the state/action space, where it depends linearly on the number of non-zero transition probabilities. The lower bound strengthens previous work by being both more general (it applies to all policies) and tighter. The upper and lower bounds match up to logarithmic factors provided the transition matrix is not too dense.
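
    To make the shape of such a guarantee concrete, the LaTeX snippet below sketches a schematic PAC bound. It is a hedged illustration rather than the paper's exact theorem statement: the symbol T for the number of non-zero transition probabilities, the 1/epsilon^2 accuracy term, and the log(1/delta) confidence term follow the usual PAC convention, while the cubic dependence on the effective horizon 1/(1-gamma) and the linear dependence on T are taken from the abstract above.

    % Schematic PAC sample-complexity bound (a sketch, not the paper's exact statement).
    % With probability at least $1-\delta$, the number of time steps on which the
    % learned policy is more than $\epsilon$ sub-optimal is bounded by
    \[
      \tilde{O}\!\left( \frac{T}{\epsilon^{2}\,(1-\gamma)^{3}} \log\frac{1}{\delta} \right),
    \]
    % where $T$ is the number of non-zero transition probabilities (at most $|S|^2|A|$),
    % $\gamma$ is the discount factor, and $1/(1-\gamma)$ plays the role of the horizon.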

    Original language: English
    Title of host publication: Algorithmic Learning Theory - 23rd International Conference, ALT 2012, Proceedings
    Pages: 320-334
    Number of pages: 15
    DOIs
    Publication status: Published - 2012
    Event: 23rd International Conference on Algorithmic Learning Theory, ALT 2012 - Lyon, France
    Duration: 29 Oct 2012 - 31 Oct 2012

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 7568 LNAI
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Conference

    Conference: 23rd International Conference on Algorithmic Learning Theory, ALT 2012
    Country/Territory: France
    City: Lyon
    Period: 29/10/12 - 31/10/12
