TY - GEN
T1 - Feature markov decision processes
AU - Hutter, Marcus
PY - 2009
Y1 - 2009
N2 - General purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is well-developed for small finite state Markov Decision Processes (MDPs). So far it is an art performed by human designers to extract the right state representation out of the bare observations, i.e. to reduce the agent setup to the MDP framework. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in the companion article [Hut09].
AB - General purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is well-developed for small finite state Markov Decision Processes (MDPs). So far it is an art performed by human designers to extract the right state representation out of the bare observations, i.e. to reduce the agent setup to the MDP framework. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in the companion article [Hut09].
UR - http://www.scopus.com/inward/record.url?scp=77955216758&partnerID=8YFLogxK
U2 - 10.2991/agi.2009.30
DO - 10.2991/agi.2009.30
M3 - Conference contribution
SN - 9789078677246
T3 - Proceedings of the 2nd Conference on Artificial General Intelligence, AGI 2009
SP - 61
EP - 66
BT - Proceedings of the 2nd Conference on Artificial General Intelligence, AGI 2009
PB - Atlantis Press
T2 - 2nd Conference on Artificial General Intelligence, AGI 2009
Y2 - 6 March 2009 through 9 March 2009
ER -