TY - JOUR
T1 - Partially Observable Markov Decision Processes and Robotics
AU - Kurniawati, Hanna
N1 - Publisher Copyright:
Copyright © 2022 by Annual Reviews.
PY - 2022
Y1 - 2022
AB - Planning under uncertainty is critical to robotics. The partially observable Markov decision process (POMDP) is a mathematical framework for such planning problems. POMDPs are powerful because of their careful quantification of the nondeterministic effects of actions and the partial observability of the states. But for the same reason, they are notorious for their high computational complexity and have been deemed impractical for robotics. However, over the past two decades, the development of sampling-based approximate solvers has led to tremendous advances in POMDP-solving capabilities. Although these solvers do not generate the optimal solution, they can compute good POMDP solutions that significantly improve the robustness of robotics systems within reasonable computational resources, thereby making POMDPs practical for many realistic robotics problems. This article presents a review of POMDPs, emphasizing computational issues that have hindered their practicality in robotics and ideas in sampling-based solvers that have alleviated such difficulties, together with lessons learned from applying POMDPs to physical robots.
KW - Motion planning
KW - POMDP
KW - Planning under uncertainty
UR - http://www.scopus.com/inward/record.url?scp=85126691063&partnerID=8YFLogxK
U2 - 10.1146/annurev-control-042920-092451
DO - 10.1146/annurev-control-042920-092451
M3 - Review article
SN - 2573-5144
VL - 5
SP - 253
EP - 277
JO - Annual Review of Control, Robotics, and Autonomous Systems
JF - Annual Review of Control, Robotics, and Autonomous Systems
ER -