TY - GEN
T1 - Self-optimizing and Pareto-optimal policies in general environments based on Bayes-mixtures
AU - Hutter, Marcus
N1 - Publisher Copyright:
© Springer-Verlag Berlin Heidelberg 2002.
PY - 2002
Y1 - 2002
N2 - The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle t, action y_t results in perception x_t and reward r_t, where all quantities in general may depend on the complete history. The perception x_t and reward r_t are sampled from the (reactive) environmental probability distribution μ. This very general setting includes, but is not limited to, (partially observable, k-th order) Markov decision processes. Sequential decision theory tells us how to act in order to maximize the total expected reward, called value, if μ is known. Reinforcement learning is usually used if μ is unknown. In the Bayesian approach one defines a mixture distribution ξ as a weighted sum of distributions ν ∈ ℳ, where ℳ is any class of distributions including the true environment μ. We show that the Bayes-optimal policy p^ξ based on the mixture ξ is self-optimizing in the sense that the average value converges asymptotically for all ν ∈ ℳ to the optimal value achieved by the (infeasible) Bayes-optimal policy p^μ, which knows μ in advance. We show that the necessary condition that ℳ admits self-optimizing policies at all is also sufficient. No other structural assumptions are made on ℳ. As an example application, we discuss ergodic Markov decision processes, which allow for self-optimizing policies. Furthermore, we show that p^ξ is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in all environments ν ∈ ℳ and a strictly higher value in at least one.
UR - http://www.scopus.com/inward/record.url?scp=84937417436&partnerID=8YFLogxK
U2 - 10.1007/3-540-45435-7_25
DO - 10.1007/3-540-45435-7_25
M3 - Conference contribution
T3 - Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)
SP - 364
EP - 379
BT - Computational Learning Theory - 15th Annual Conference on Computational Learning Theory, COLT 2002, Proceedings
A2 - Kivinen, Jyrki
A2 - Sloan, Robert H.
PB - Springer Verlag
T2 - 15th Annual Conference on Computational Learning Theory, COLT 2002
Y2 - 8 July 2002 through 10 July 2002
ER -