Yuu Jinnai

Efficient Work Distribution Methods for Parallel Best-First Search

The A* algorithm is used in many areas of AI, including planning, scheduling, and path-finding. Parallelizing A* search can yield speedups and also a way to overcome memory limitations: the aggregate memory available in a cluster allows problems that cannot be solved on a single machine to be solved. Designing scalable, parallel search algorithms is therefore an important goal.

The major challenges that need to be addressed when designing parallel search algorithms are search overhead and communication overhead. Previous work on parallel best-first search addressed only one of these two overheads, and therefore did not scale to large distributed environments.
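To see where the two overheads come from, consider the hash-based work distribution used by methods such as HDA*: every state is assigned to an owning process by hashing it, which keeps duplicate detection local, but any successor owned by another process has to be sent over the network. The sketch below is a simplified illustration of this owner assignment, not the code used in our experiments; the flat-vector state encoding, the generic hash, and the process count are assumptions made for the example.

// Minimal sketch of hash-based work distribution (HDA*-style owner assignment).
// The state encoding, hash function, and process count are illustrative only.
#include <cstddef>
#include <iostream>
#include <vector>

using State = std::vector<int>;

// Combine the tiles into one hash value (generic, not Zobrist).
std::size_t hash_state(const State& s) {
    std::size_t h = 0;
    for (int tile : s) {
        h = h * 1000003u + static_cast<std::size_t>(tile);
    }
    return h;
}

// Each state has exactly one owning process: duplicate detection stays local
// (low search overhead), but every successor owned by another process must be
// sent over the network (communication overhead).
int owner(const State& s, int num_processes) {
    return static_cast<int>(hash_state(s) % num_processes);
}

int main() {
    const int num_processes = 48;
    State s    = {1, 2, 3, 4, 5, 6, 7, 8, 0};   // parent state
    State succ = {1, 2, 3, 4, 5, 6, 7, 0, 8};   // one tile moved
    std::cout << "parent owned by process    " << owner(s, num_processes) << "\n"
              << "successor owned by process " << owner(succ, num_processes) << "\n";
    // With a generic hash, a single move almost always changes the owner,
    // so nearly every node generation triggers a send.
    return 0;
}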

We developed Abstract Zobrist hashing, a new work distribution method for parallel best-first search. Abstract Zobrist hashing reduces communication overhead while keeping search overhead reasonably low. We evaluated parallel best-first search algorithms on sliding-tile puzzles, grid path-finding, multiple sequence alignment, and classical planning on a 48-core bare-metal cluster as well as a 128-core cloud cluster on EC2, and showed that Abstract Zobrist hashing significantly outperforms other methods.
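The core idea can be sketched in a few lines: instead of XOR-ing random bitstrings for the raw features of a state (plain Zobrist hashing), Abstract Zobrist hashing XORs the bitstrings of abstracted features, so successors whose changed feature stays inside the same abstract block hash to the same value and remain on the same process. The example below is an illustrative sketch rather than the experimental implementation; the 15-puzzle encoding, the 2x2 block abstraction of positions, and the random table are assumptions chosen for the example.

// Sketch of Zobrist hashing vs. Abstract Zobrist hashing (AZH) on a 4x4
// sliding-tile puzzle. Table, abstraction, and state encoding are illustrative.
#include <cstdint>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

constexpr int N = 4;                 // 4x4 board (15-puzzle)
constexpr int CELLS = N * N;

// Zobrist table: one random bitstring per (tile, position) feature.
std::vector<std::vector<std::uint64_t>> make_table(unsigned seed) {
    std::mt19937_64 rng(seed);
    std::vector<std::vector<std::uint64_t>> R(CELLS, std::vector<std::uint64_t>(CELLS));
    for (auto& row : R) for (auto& v : row) v = rng();
    return R;
}

// Abstraction on positions: map each cell to its 2x2 block.
int abstract_pos(int pos) {
    int row = pos / N, col = pos % N;
    return (row / 2) * (N / 2) + (col / 2);
}

// Plain Zobrist hash: XOR of R[tile][position] over all cells.
std::uint64_t zobrist(const std::vector<int>& tiles,
                      const std::vector<std::vector<std::uint64_t>>& R) {
    std::uint64_t h = 0;
    for (int pos = 0; pos < CELLS; ++pos) h ^= R[tiles[pos]][pos];
    return h;
}

// Abstract Zobrist hash: XOR of R[tile][abstract(position)], so a move that
// keeps a tile inside the same 2x2 block does not change the hash at all.
std::uint64_t abstract_zobrist(const std::vector<int>& tiles,
                               const std::vector<std::vector<std::uint64_t>>& R) {
    std::uint64_t h = 0;
    for (int pos = 0; pos < CELLS; ++pos) h ^= R[tiles[pos]][abstract_pos(pos)];
    return h;
}

int main() {
    const int num_processes = 48;
    auto R = make_table(0);

    std::vector<int> s(CELLS);
    for (int i = 0; i < CELLS; ++i) s[i] = i;      // blank (0) at cell 0, tiles 1..15
    std::vector<int> succ = s;
    std::swap(succ[0], succ[1]);                   // blank moves within one 2x2 block

    std::cout << "Zobrist owners:          " << zobrist(s, R) % num_processes
              << " -> " << zobrist(succ, R) % num_processes << "\n";
    std::cout << "Abstract Zobrist owners: " << abstract_zobrist(s, R) % num_processes
              << " -> " << abstract_zobrist(succ, R) % num_processes << "\n";
    return 0;
}

Running this sketch shows that the successor keeps the same owner under Abstract Zobrist hashing, so that node generation needs no network send, whereas plain Zobrist hashing usually assigns it to a different process.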

Code

All the code we used for the experiments is available on my GitHub.

Open Questions

Many questions remain unanswered, including:

Publications