TY - JOUR
T1 - Multilevel Monte Carlo for solving POMDPs on-line
AU - Hoerger, Marcus
AU - Kurniawati, Hanna
AU - Elfes, Alberto
N1 - Publisher Copyright:
© The Author(s) 2022.
PY - 2023/4
Y1 - 2023/4
AB - Planning under partial observability is essential for autonomous robots. A principled way to address such planning problems is the Partially Observable Markov Decision Process (POMDP). Although solving POMDPs is computationally intractable, substantial advancements have been achieved in developing approximate POMDP solvers over the past two decades. However, computing robust solutions for systems with complex dynamics remains challenging. Most on-line solvers rely on a large number of forward simulations and standard Monte Carlo methods to compute the expected outcomes of the actions the robot can perform. For systems with complex dynamics, for example, those with non-linear dynamics that admit no closed-form solution, even a single forward simulation can be prohibitively expensive. This issue is, of course, exacerbated for problems with long planning horizons. This paper aims to alleviate this difficulty. To this end, we propose a new on-line POMDP solver, called Multilevel POMDP Planner (MLPP), that combines the widely used Monte Carlo Tree Search with the concept of Multilevel Monte Carlo to accelerate the computation of approximately optimal solutions for POMDPs with complex dynamics. Experiments on four different problems involving torque control, navigation, and grasping indicate that MLPP substantially outperforms state-of-the-art POMDP solvers.
KW - Agent-based systems
KW - autonomous agents
KW - non-holonomic motion planning
UR - http://www.scopus.com/inward/record.url?scp=85132111957&partnerID=8YFLogxK
DO - 10.1177/02783649221093658
M3 - Article
SN - 0278-3649
VL - 42
SP - 196
EP - 213
JO - International Journal of Robotics Research
JF - International Journal of Robotics Research
IS - 4-5
ER -