Dynamic algorithm selection using reinforcement learning

Warren Armstrong*, Peter Christen, Eric McCreath, Alistair P. Rendell

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    22 Citations (Scopus)

    Abstract

    It is often the case that many algorithms exist to solve a single problem, each possessing different performance characteristics. The usual approach in this situation is to manually select the algorithm which has the best average performance. However, this strategy has drawbacks in cases where the optimal algorithm changes during an invocation of the program, in response to changes in the program's state and the computational environment. This paper presents a prototype tool that uses reinforcement learning to guide algorithm selection at runtime, matching the algorithm used to the current state of the computation. The tool is applied to a simulation similar to those used in some computational chemistry problems. It is shown that the low dimensionality of the problem enables the optimal choice of algorithm to be determined quickly, and that the learning system can react rapidly to phase changes in the target program.
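The paper does not give implementation details here, but the idea described — learning at runtime which of several algorithms to invoke, with a reward signal based on observed performance, while staying able to react to phase changes — can be sketched as a simple epsilon-greedy selector. Everything below (class name, constant-step-size update, the reward being negative wall-clock cost) is an illustrative assumption, not the authors' actual prototype:

```python
import random

class EpsilonGreedySelector:
    """Learn which algorithm index to pick based on observed reward
    (e.g. negative execution time). Illustrative sketch only."""

    def __init__(self, n_algorithms, epsilon=0.1, alpha=0.3):
        self.n = n_algorithms
        self.epsilon = epsilon                # exploration rate
        self.alpha = alpha                    # constant step size
        self.value = [0.0] * n_algorithms     # estimated reward per algorithm

    def select(self):
        # Explore occasionally so a previously poor algorithm can be
        # rediscovered after a phase change in the target program.
        if random.random() < self.epsilon:
            return random.randrange(self.n)
        return max(range(self.n), key=lambda i: self.value[i])

    def update(self, idx, reward):
        # Constant-step-size update: recent rewards dominate older ones,
        # so the estimates track a non-stationary environment.
        self.value[idx] += self.alpha * (reward - self.value[idx])
```

A constant step size (rather than a sample average) is what lets a learner of this kind react to the phase changes the abstract mentions: when the optimal algorithm flips mid-run, old reward observations decay and the value estimates cross over within a few dozen invocations.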

    Original language: English
    Title of host publication: Integrating AI and Data Mining - 1st International Workshop Proceedings, AIDM 2006
    Pages: 18-25
    Number of pages: 8
    Publication status: Published - 2006
    Event: 1st International Workshop on Integrating AI and Data Mining, AIDM 2006 - Hobart, Australia
    Duration: 4 Dec 2006 - 5 Dec 2006

    Publication series

    Name: Integrating AI and Data Mining - 1st International Workshop Proceedings, AIDM 2006

    Conference

    Conference: 1st International Workshop on Integrating AI and Data Mining, AIDM 2006
    Country/Territory: Australia
    City: Hobart
    Period: 4/12/06 - 5/12/06
