TY - GEN
T1 - Self-modification of policy and utility function in rational agents
AU - Everitt, Tom
AU - Filan, Daniel
AU - Daswani, Mayank
AU - Hutter, Marcus
N1 - Publisher Copyright:
© Springer International Publishing Switzerland 2016.
PY - 2016
Y1 - 2016
N2 - Any agent that is part of the environment it interacts with, and that has versatile actuators (such as arms and fingers), will in principle have the ability to self-modify – for example, by changing its own source code. As we continue to create more and more intelligent agents, the chances increase that they will learn about this ability. The question is: will they want to use it? For example, highly intelligent systems may find ways to change their goals to something more easily achievable, thereby ‘escaping’ the control of their creators. In an important paper, Omohundro (2008) argued that goal preservation is a fundamental drive of any intelligent system, since a goal is more likely to be achieved if future versions of the agent strive towards the same goal. In this paper, we formalise this argument in general reinforcement learning, and explore situations where it fails. Our conclusion is that the possibility of self-modification is harmless if and only if the value function of the agent anticipates the consequences of self-modifications and uses the current utility function when evaluating the future.
AB - Any agent that is part of the environment it interacts with, and that has versatile actuators (such as arms and fingers), will in principle have the ability to self-modify – for example, by changing its own source code. As we continue to create more and more intelligent agents, the chances increase that they will learn about this ability. The question is: will they want to use it? For example, highly intelligent systems may find ways to change their goals to something more easily achievable, thereby ‘escaping’ the control of their creators. In an important paper, Omohundro (2008) argued that goal preservation is a fundamental drive of any intelligent system, since a goal is more likely to be achieved if future versions of the agent strive towards the same goal. In this paper, we formalise this argument in general reinforcement learning, and explore situations where it fails. Our conclusion is that the possibility of self-modification is harmless if and only if the value function of the agent anticipates the consequences of self-modifications and uses the current utility function when evaluating the future.
UR - http://www.scopus.com/inward/record.url?scp=84977556831&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-41649-6_1
DO - 10.1007/978-3-319-41649-6_1
M3 - Conference contribution
SN - 9783319416489
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 1
EP - 11
BT - Artificial General Intelligence - 9th International Conference, AGI 2016, Proceedings
A2 - Steunebrink, Bas
A2 - Wang, Pei
A2 - Goertzel, Ben
PB - Springer Verlag
T2 - 9th International Conference on Artificial General Intelligence, AGI 2016
Y2 - 16 July 2016 through 19 July 2016
ER -