Reward tampering problems and solutions in reinforcement learning: a causal influence diagram perspective

Tom Everitt*, Marcus Hutter, Ramana Kumar, Victoria Krakovna

*Corresponding author for this work

    Research output: Contribution to journalArticlepeer-review

    38 Citations (Scopus)

    Abstract

    Can humans get arbitrarily capable reinforcement learning (RL) agents to do their bidding? Or will sufficiently capable RL agents always find ways to bypass their intended objectives by shortcutting their reward signal? This question impacts how far RL can be scaled, and whether alternative paradigms must be developed in order to build safe artificial general intelligence. In this paper, we study when an RL agent has an instrumental goal to tamper with its reward process, and describe design principles that prevent instrumental goals for two different types of reward tampering (reward function tampering and RF-input tampering). Combined, the design principles can prevent reward tampering from being an instrumental goal. The analysis benefits from causal influence diagrams to provide intuitive yet precise formalizations.

    Original languageEnglish
    Pages (from-to)6435-6467
    Number of pages33
    JournalSynthese
    Volume198
    DOIs
    Publication statusPublished - Nov 2021

    Fingerprint

    Dive into the research topics of 'Reward tampering problems and solutions in reinforcement learning: a causal influence diagram perspective'. Together they form a unique fingerprint.

    Cite this