TY - CONF
T1 - Scalable, parallel best-first search for optimal sequential planning
AU - Kishimoto, Akihiro
AU - Fukunaga, Alex
AU - Botea, Adi
PY - 2009
Y1 - 2009
N2 - Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate parallel algorithms for optimal sequential planning, with an emphasis on exploiting distributed-memory computing clusters. In particular, we focus on an approach that distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in the optimal sequential version of the Fast Downward planner. The scaling behavior of the algorithm is evaluated experimentally on clusters using up to 128 processors, a significant increase compared to previous work on parallelizing planners. We show that this approach scales well, allowing us to effectively utilize the large amount of distributed memory to optimally solve problems that require hundreds of gigabytes of RAM. We also show that this approach scales well on a single, shared-memory multicore machine.
UR - http://www.scopus.com/inward/record.url?scp=78650608544&partnerID=8YFLogxK
M3 - Conference contribution
SN - 9781577354062
T3 - ICAPS 2009 - Proceedings of the 19th International Conference on Automated Planning and Scheduling
SP - 201
EP - 208
BT - ICAPS 2009 - Proceedings of the 19th International Conference on Automated Planning and Scheduling
T2 - 19th International Conference on Automated Planning and Scheduling, ICAPS 2009
Y2 - 19 September 2009 through 23 September 2009
ER -