Practical linear value-approximation techniques for first-order MDPs

Scott Sanner*, Craig Boutilier

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

20 Citations (Scopus)

Abstract

Recent work on approximate linear programming (ALP) techniques for first-order Markov Decision Processes (FOMDPs) represents the value function linearly w.r.t. a set of first-order basis functions and uses linear programming techniques to determine suitable weights. This approach offers the advantage that it does not require simplification of the first-order value function, and it allows one to solve FOMDPs independently of any specific domain instantiation. In this paper, we address several questions to enhance the applicability of this work: (1) Can we extend the first-order ALP framework to approximate policy iteration and, if so, how do these two algorithms compare? (2) Can we automatically generate basis functions and evaluate their impact on value function quality? (3) How can we decompose intractable problems with universally quantified rewards into tractable subproblems? We propose answers to these questions along with a number of novel optimizations and provide a comparative empirical evaluation on problems from the ICAPS 2004 Probabilistic Planning Competition.
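The core idea summarized in the abstract, representing the value function as a weighted sum of basis functions and choosing the weights with a linear program, can be illustrated on an ordinary ground MDP. The sketch below is only a propositional analogue of that idea, not a reproduction of the paper's first-order (lifted) method; it uses scipy and entirely illustrative transition, reward, and basis data.

```python
# Minimal propositional sketch of approximate linear programming (ALP) for
# value approximation V(s) ~= sum_i w_i * phi_i(s), assuming a small ground
# MDP with made-up numbers. The paper's first-order formulation avoids this
# explicit state enumeration; none of the data below comes from the paper.
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
n_states, n_actions = 3, 2

# Transition probabilities P[a][s, s'] and rewards R[s, a] (illustrative only).
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])
R = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 3.0]])

# Basis functions phi_i(s): a constant plus one simple feature (shape: states x k).
Phi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
alpha = np.full(n_states, 1.0 / n_states)   # state-relevance weights

# ALP constraints: for every (s, a),
#   sum_i w_i phi_i(s) >= R(s, a) + gamma * sum_s' P(s'|s,a) * sum_i w_i phi_i(s'),
# rewritten as -(Phi[s] - gamma * P[a][s] @ Phi) . w <= -R[s, a] for linprog.
A_ub, b_ub = [], []
for a in range(n_actions):
    for s in range(n_states):
        A_ub.append(-(Phi[s] - gamma * (P[a][s] @ Phi)))
        b_ub.append(-R[s, a])

# Objective: minimize the state-relevance-weighted sum of approximate values.
c = alpha @ Phi
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * Phi.shape[1])
print("basis weights:", res.x)
print("approximate values:", Phi @ res.x)
```

In this ground formulation, every state-action pair contributes one linear constraint on the weights. In the paper's first-order setting, the analogous constraints are expressed over first-order state abstractions rather than enumerated ground states, which is what allows the FOMDP to be solved independently of any particular domain instantiation.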

Original language: English
Title of host publication: Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, UAI 2006
Pages: 409-417
Number of pages: 9
Publication status: Published - 2006
Externally published: Yes
Event: 22nd Conference on Uncertainty in Artificial Intelligence, UAI 2006 - Cambridge, MA, United States
Duration: 13 Jul 2006 - 16 Jul 2006

Publication series

Name: Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, UAI 2006

Conference

Conference: 22nd Conference on Uncertainty in Artificial Intelligence, UAI 2006
Country/Territory: United States
City: Cambridge, MA
Period: 13/07/06 - 16/07/06
