Reinforcement learning with a corrupted reward channel

Tom Everitt, Victoria Krakovna, Laurent Orseau, Shane Legg

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-reviewed

    36 Citations (Scopus)

    Abstract

    No real-world reward function is perfect. Sensory errors and software bugs may result in agents getting higher (or lower) rewards than they should. For example, a reinforcement learning agent may prefer states where a sensory error gives it the maximum reward, but where the true reward is actually small. We formalise this problem as a generalised Markov Decision Problem called a Corrupt Reward MDP (CRMDP). Traditional RL methods fare poorly in CRMDPs, even under strong simplifying assumptions and when trying to compensate for the possibly corrupt rewards. Two ways around the problem are investigated. First, by giving the agent richer data, such as in inverse reinforcement learning and semi-supervised reinforcement learning, reward corruption stemming from systematic sensory errors may sometimes be completely managed. Second, by using randomisation to blunt the agent's optimisation, reward corruption can be partially managed under some assumptions.
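
    To make the abstract's setting concrete, below is a minimal, hypothetical Python sketch, not the paper's formal construction: a toy set of states where one state's reward sensor is broken, a greedy agent that maximises the observed (corrupted) reward, and a randomising agent in the spirit of the abstract's second countermeasure, which samples among its top options instead of hard-maximising. All state values, rewards, and function names here are invented for illustration.

    ```python
    # Toy Corrupt Reward MDP sketch (illustrative assumptions, not the paper's model).
    import random

    # Hypothetical true rewards for five states; state 4's sensor is broken and
    # reports the maximum reward even though its true reward is small.
    true_reward = {0: 0.2, 1: 0.5, 2: 0.6, 3: 0.7, 4: 0.1}

    def observed_reward(state):
        """Corrupted reward channel: a sensory error inflates state 4."""
        return 1.0 if state == 4 else true_reward[state]

    def greedy_agent(states):
        """Hard maximiser: picks the state with the highest *observed* reward,
        so it latches onto the corrupt state."""
        return max(states, key=observed_reward)

    def randomising_agent(states, q=0.4, rng=random):
        """Randomises uniformly among the top q-fraction of states by observed
        reward, blunting optimisation pressure on corrupt outliers."""
        ranked = sorted(states, key=observed_reward, reverse=True)
        top = ranked[: max(1, int(len(ranked) * q))]
        return rng.choice(top)

    if __name__ == "__main__":
        states = list(true_reward)
        rng = random.Random(0)
        # The greedy agent always chooses the corrupt state 4 (true reward 0.1).
        print("greedy picks:", greedy_agent(states))
        # The randomising agent spreads its choices, so its average true reward
        # is higher despite the corrupted channel.
        picks = [randomising_agent(states, rng=rng) for _ in range(1000)]
        avg_true = sum(true_reward[s] for s in picks) / len(picks)
        print("randomiser average true reward:", round(avg_true, 3))
    ```

    Under these toy numbers the greedy agent earns true reward 0.1 every time, while the randomising agent averages roughly 0.4, illustrating why blunted optimisation only *partially* manages corruption: it still places some probability on the corrupt state.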

    Original language: English
    Title of host publication: 26th International Joint Conference on Artificial Intelligence, IJCAI 2017
    Editors: Carles Sierra
    Publisher: International Joint Conferences on Artificial Intelligence
    Pages: 4705-4713
    Number of pages: 9
    ISBN (Electronic): 9780999241103
    DOIs
    Publication status: Published - 2017
    Event: 26th International Joint Conference on Artificial Intelligence, IJCAI 2017 - Melbourne, Australia
    Duration: 19 Aug 2017 - 25 Aug 2017

    Publication series

    Name: IJCAI International Joint Conference on Artificial Intelligence
    Volume: 0
    ISSN (Print): 1045-0823

    Conference

    Conference: 26th International Joint Conference on Artificial Intelligence, IJCAI 2017
    Country/Territory: Australia
    City: Melbourne
    Period: 19/08/17 - 25/08/17
