ASNets: Deep Learning for Generalised Planning

Sam Toyer, Sylvie Thiébaux, Felipe Trevizan, Lexing Xie

    Research output: Contribution to journal › Article › peer-review

    50 Citations (Scopus)

    Abstract

    In this paper, we discuss the learning of generalised policies for probabilistic and classical planning problems using Action Schema Networks (ASNets). The ASNet is a neural network architecture that exploits the relational structure of (P)PDDL planning problems to learn a common set of weights that can be applied to any problem in a domain. By mimicking the actions chosen by a traditional, non-learning planner on a handful of small problems in a domain, ASNets are able to learn a generalised reactive policy that can quickly solve much larger instances from the domain. This work extends the ASNet architecture to make it more expressive, while still remaining invariant to a range of symmetries that exist in PPDDL problems. We also present a thorough experimental evaluation of ASNets, including a comparison with heuristic search planners on seven probabilistic and deterministic domains, an extended evaluation on over 18,000 Blocksworld instances, and an ablation study. Finally, we show that sparsity-inducing regularisation can produce ASNets that are compact enough for humans to understand, yielding insights into how the structure of ASNets allows them to generalise across a domain.
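
    To make the weight-sharing idea described above concrete, here is a minimal sketch, not the authors' implementation: the names SchemaModule and related_inputs are illustrative, and the encoding and activation are simplified. It shows how every ground action of the same action schema can reuse a single set of weights, which is what allows a policy learned on small instances to be applied unchanged to much larger instances of the same domain.

        import numpy as np

        class SchemaModule:
            """One weight matrix per action schema, shared by all of its ground actions."""
            def __init__(self, in_dim, out_dim, rng):
                self.W = rng.standard_normal((in_dim, out_dim)) * 0.1
                self.b = np.zeros(out_dim)

            def __call__(self, related_inputs):
                # related_inputs: features of the propositions appearing in this
                # ground action's precondition and effects (illustrative encoding).
                return np.maximum(0.0, related_inputs @ self.W + self.b)  # ReLU for brevity

        rng = np.random.default_rng(0)
        unstack = SchemaModule(in_dim=6, out_dim=4, rng=rng)  # one module for an "unstack" schema

        # Two different ground actions of the same schema share unstack's weights:
        h_ab = unstack(rng.standard_normal(6))  # e.g. unstack(a, b)
        h_cd = unstack(rng.standard_normal(6))  # e.g. unstack(c, d)

    Because the module is tied to the schema rather than to a particular grounding, the number of learned parameters does not grow with the number of objects in a problem, only with the number of schemas in the domain.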

    Original language: English
    Pages (from-to): 1-68
    Number of pages: 68
    Journal: Journal of Artificial Intelligence Research
    Volume: 68
    DOIs
    Publication status: Published - 4 May 2020

