Feature Markov Decision Processes

Marcus Hutter*

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    5 Citations (Scopus)

    Abstract

    General purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is well-developed for small finite-state Markov Decision Processes (MDPs). So far it is an art performed by human designers to extract the right state representation out of the bare observations, i.e. to reduce the agent setup to the MDP framework. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in the companion article [Hut09].
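    The core idea in the abstract — judging a candidate map Φ from observation histories to MDP states by how well the induced state process behaves like a compact MDP — can be sketched as follows. This is an illustrative stand-in only: the cost used here (a smoothed log-loss/code length of the induced state-reward sequence) is an assumption for demonstration, not the exact criterion developed in the paper, and all names (`mdp_code_length`, `phi`, `history`) are hypothetical.

    ```python
    import math
    from collections import defaultdict

    def mdp_code_length(history, phi):
        """Score a candidate feature map phi (history prefix -> state) by the
        code length, in bits, of the induced state/reward sequence under
        sequentially estimated MDP transition frequencies.

        history: list of (observation, action, reward) triples.
        Lower cost means the induced process is closer to a small MDP.
        """
        # Map each time step to a state using the candidate feature map.
        states = [phi(history[:t]) for t in range(len(history) + 1)]
        counts = defaultdict(lambda: defaultdict(int))
        cost = 0.0
        for t, (obs, action, reward) in enumerate(history):
            key = (states[t], action)          # current state and action
            nxt = (states[t + 1], reward)      # next state and observed reward
            total = sum(counts[key].values())
            # Laplace-smoothed predictive probability of the next state/reward.
            p = (counts[key][nxt] + 1) / (total + 2)
            cost += -math.log2(p)
            counts[key][nxt] += 1
        return cost
    ```

    As a usage sketch: on a history where the most recent observation determines the next reward, a Φ that keeps the last observation as the state should score a lower cost than a Φ that collapses everything to a single state — the kind of comparison a mechanized search over feature maps would perform.
    
    
    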

    Original language: English
    Title of host publication: Proceedings of the 2nd Conference on Artificial General Intelligence, AGI 2009
    Publisher: Atlantis Press
    Pages: 61-66
    Number of pages: 6
    ISBN (Print): 9789078677246
    DOIs
    Publication status: Published - 2009
    Event: 2nd Conference on Artificial General Intelligence, AGI 2009 - Arlington, VA, United States
    Duration: 6 Mar 2009 – 9 Mar 2009

    Publication series

    Name: Proceedings of the 2nd Conference on Artificial General Intelligence, AGI 2009

    Conference

    Conference: 2nd Conference on Artificial General Intelligence, AGI 2009
    Country/Territory: United States
    City: Arlington, VA
    Period: 6/03/09 – 9/03/09
