TY - GEN
T1 - Computational Dualism and Objective Superintelligence
AU - Bennett, Michael Timothy
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - The concept of intelligent software is flawed. The behaviour of software is determined by the hardware that “interprets” it. This undermines claims regarding the behaviour of theorised software superintelligence. Here we characterise this problem as “computational dualism”, where instead of mental and physical substance, we have software and hardware. We argue that to make objective claims regarding performance we must avoid computational dualism. We propose a pancomputational alternative wherein every aspect of the environment is a relation between irreducible states. We formalise systems as behaviour (inputs and outputs), and cognition as embodied, embedded, extended and enactive. The result is cognition formalised as a part of the environment, rather than as a disembodied policy interacting with the environment through an interpreter. This allows us to make objective claims regarding intelligence, which we argue is the ability to “generalise”, identify causes and adapt. We then establish objective upper bounds for intelligent behaviour. This suggests AGI will be safer, but more limited, than theorised.
KW - AGI
KW - AI safety
KW - computational dualism
KW - enactivism
KW - pancomputationalism
UR - http://www.scopus.com/inward/record.url?scp=85200656911&partnerID=8YFLogxK
DO - 10.1007/978-3-031-65572-2_3
M3 - Conference contribution
AN - SCOPUS:85200656911
SN - 9783031655715
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 22
EP - 32
BT - Artificial General Intelligence - 17th International Conference, AGI 2024, Proceedings
A2 - Thórisson, Kristinn R.
A2 - Sheikhlar, Arash
A2 - Isaev, Peter
PB - Springer Science and Business Media Deutschland GmbH
T2 - 17th International Conference on Artificial General Intelligence, AGI 2024
Y2 - 12 August 2024 through 15 August 2024
ER -