Multilevel Monte Carlo for solving POMDPs on-line

Marcus Hoerger, Hanna Kurniawati*, Alberto Elfes

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    2 Citations (Scopus)

    Abstract

    Planning under partial observability is essential for autonomous robots. A principled way to address such planning problems is the Partially Observable Markov Decision Process (POMDP). Although solving POMDPs is computationally intractable, substantial progress has been made on approximate POMDP solvers over the past two decades. However, computing robust solutions for systems with complex dynamics remains challenging. Most on-line solvers rely on a large number of forward simulations and standard Monte Carlo methods to compute the expected outcomes of the actions the robot can perform. For systems with complex dynamics, for example, those with non-linear dynamics that admit no closed-form solution, even a single forward simulation can be prohibitively expensive. This issue is, of course, exacerbated for problems with long planning horizons. This paper aims to alleviate the above difficulty. To this end, we propose a new on-line POMDP solver, called Multilevel POMDP Planner (MLPP), that combines the well-known Monte Carlo Tree Search with the concept of Multilevel Monte Carlo to accelerate the computation of approximately optimal solutions for POMDPs with complex dynamics. Experiments on four different problems involving torque control, navigation and grasping indicate that MLPP substantially outperforms state-of-the-art POMDP solvers.
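    The Multilevel Monte Carlo concept the abstract refers to can be illustrated on a toy problem. The sketch below (plain Python; all names are hypothetical and this is not MLPP itself) estimates an expectation over a simulated stochastic process via the MLMC telescoping sum E[P_L] = E[P_0] + Σ_{l=1..L} E[P_l − P_{l−1}], where level l simulates with 2^l time steps. Coarse and fine simulations at each level share the same random increments so their difference has low variance, which lets most samples be drawn at the cheap, coarse levels.

```python
import math
import random

# Toy dynamics: geometric Brownian motion, simulated with Euler-Maruyama.
# Level l uses 2**l time steps over the horizon [0, T].

def mlmc_level_sample(l, rng, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Draw one sample of the level-l MLMC correction P_l - P_{l-1}.

    For l == 0 this is just P_0 (a single coarse simulation). For l >= 1,
    the fine (2**l steps) and coarse (2**(l-1) steps) paths are driven by
    the SAME Brownian increments, so their difference has small variance.
    """
    nf = 2 ** l          # fine-level steps
    hf = T / nf          # fine step size
    xf = x0
    if l == 0:
        dw = rng.gauss(0.0, math.sqrt(hf))
        return xf + mu * xf * hf + sigma * xf * dw
    nc = nf // 2         # coarse-level steps
    hc = T / nc
    xc = x0
    for _ in range(nc):
        dw1 = rng.gauss(0.0, math.sqrt(hf))
        dw2 = rng.gauss(0.0, math.sqrt(hf))
        # Two fine steps...
        xf += mu * xf * hf + sigma * xf * dw1
        xf += mu * xf * hf + sigma * xf * dw2
        # ...coupled with one coarse step using the summed increment.
        xc += mu * xc * hc + sigma * xc * (dw1 + dw2)
    return xf - xc

def mlmc_estimate(L, samples_per_level, seed=0):
    """Sum the per-level correction means: the MLMC telescoping estimator."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(L + 1):
        n = samples_per_level[l]
        total += sum(mlmc_level_sample(l, rng) for _ in range(n)) / n
    return total

if __name__ == "__main__":
    # Many cheap coarse samples, few expensive fine samples.
    print(mlmc_estimate(3, [4000, 2000, 1000, 500], seed=1))
```

    For this toy process the true expectation is x0 · exp(mu · T) ≈ 1.05, and the estimator approaches it as the per-level sample counts grow. MLPP applies the same coarse-to-fine coupling idea to the forward simulations inside Monte Carlo Tree Search, rather than to an SDE as in this sketch.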

    Original language: English
    Pages (from-to): 196-213
    Number of pages: 18
    Journal: International Journal of Robotics Research
    Volume: 42
    Issue number: 4-5
    DOIs
    Publication status: Published - Apr 2023
