Compress and control

Joel Veness, Marc Bellemare, Marcus Hutter, Alvin Chua, Guillaume Desjardins

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    This paper describes a new information-theoretic policy evaluation technique for reinforcement learning. This technique converts any compression or density model into a corresponding estimate of value. Under appropriate stationarity and ergodicity conditions, we show that the use of a sufficiently powerful model gives rise to a consistent value function estimator. We also study the behavior of this technique when applied to various Atari 2600 video games, where the use of suboptimal modeling techniques is unavoidable. We consider three fundamentally different models, all too limited to perfectly model the dynamics of the system. Remarkably, we find that our technique provides sufficiently accurate value estimates for effective on-policy control. We conclude with a suggestive study highlighting the potential of our technique to scale to large problems.
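    As a rough illustration of the technique the abstract describes, the sketch below turns a simple density model into a value estimator: a count-based model of states is learned per discretized return level, and Bayes' rule recovers P(return | state), whose expectation gives the value. Everything here is an illustrative assumption (the class name, the return discretization, and the add-one-smoothed count model); the paper itself works with far more capable compression and density models.

        from collections import defaultdict

        class CompressAndControlSketch:
            def __init__(self, return_bins):
                # Discretized return levels z, with one simple count-based
                # density model over states for each level.
                self.return_bins = return_bins
                self.state_counts = {z: defaultdict(int) for z in return_bins}
                self.total_counts = {z: 0 for z in return_bins}
                self.return_counts = {z: 0 for z in return_bins}

            def update(self, state, observed_return):
                # Record one (state, return) pair gathered on-policy; the
                # return is snapped to its nearest discretized level.
                z = min(self.return_bins, key=lambda b: abs(b - observed_return))
                self.state_counts[z][state] += 1
                self.total_counts[z] += 1
                self.return_counts[z] += 1

            def state_density(self, z, state):
                # Crude add-one-smoothed estimate of p(state | return = z);
                # any compression or density model could stand in here.
                return (self.state_counts[z][state] + 1) / (self.total_counts[z] + 2)

            def value(self, state):
                # V(state) = sum_z z * P(z | state), with P(z | state)
                # recovered from the per-return models via Bayes' rule.
                total = sum(self.return_counts.values())
                if total == 0:
                    return 0.0
                joint = {
                    z: self.state_density(z, state) * self.return_counts[z] / total
                    for z in self.return_bins
                }
                norm = sum(joint.values())
                return sum(z * p / norm for z, p in joint.items()) if norm else 0.0

    For instance, after constructing CompressAndControlSketch([0.0, 0.5, 1.0]) and feeding it (state, return) pairs via update, value(state) returns the Bayes-weighted expected return for that state.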
    Original language: English
    Title of host publication: 29th AAAI Conference on Artificial Intelligence, AAAI 2015
    Editors: Q. Yang and M. Wooldridge
    Place of Publication: United States
    Publisher: American Association for Artificial Intelligence (AAAI) Press
    Pages: 3016-3023
    Edition: Peer Reviewed
    ISBN (Print): 9781577356981
    Publication status: Published - 2015
    Event: Conference on Artificial Intelligence (AAAI 2015) - Austin, United States
    Duration: 25 Jan 2015 → 30 Jan 2015

    Conference

    Conference: Conference on Artificial Intelligence (AAAI 2015)
    Period: 25/01/15 → 30/01/15
    Other: January 25-30, 2015
