TY - JOUR
T1 - Philosophical Specification of Empathetic Ethical Artificial Intelligence
AU - Bennett, Michael Timothy
AU - Maruyama, Yoshihiro
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2022/6/1
Y1 - 2022/6/1
AB - In order to construct an ethical artificial intelligence (AI), two complex problems must be overcome. First, humans do not consistently agree on what is or is not ethical. Second, contemporary AI and machine learning methods tend to be blunt instruments that either search for solutions within the bounds of predefined rules or mimic behavior. An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, possessing and inferring intent, and explaining not just its actions but also its intent. Using enactivism, semiotics, perceptual symbol systems, and symbol emergence, we specify an agent that learns not just arbitrary relations between signs but their meaning in terms of the perceptual states of its sensorimotor system. Subsequently, it can learn what is meant by a sentence and infer the intent of others in terms of its own experiences. It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal. As such, it may learn a concept of what is most likely to be considered ethical by the majority within a population of humans, which may then be used as a goal. The meaning of abstract symbols is expressed using perceptual symbols of raw, multimodal sensorimotor stimuli as the weakest (consistent with Ockham's razor) necessary and sufficient concept: an intensional definition learned from an ostensive definition, from which the extensional definition, or category of all ethical decisions, may be obtained. Because these abstract symbols are the same for both situation and response, the same symbol is used when either performing or observing an action. This is akin to mirror neurons in the human brain. Mirror symbols may allow the agent to empathize, because its own experiences are associated with the symbol, which is also associated with the observation of another agent experiencing something that the symbol represents.
KW - Artificial intelligence (AI)
KW - robotics
KW - empathetic AI
KW - enactivism
KW - ethical AI
KW - symbol emergence
UR - http://www.scopus.com/inward/record.url?scp=85111598193&partnerID=8YFLogxK
U2 - 10.1109/TCDS.2021.3099945
DO - 10.1109/TCDS.2021.3099945
M3 - Article
SN - 2379-8920
VL - 14
SP - 292
EP - 300
JO - IEEE Transactions on Cognitive and Developmental Systems
JF - IEEE Transactions on Cognitive and Developmental Systems
IS - 2
ER -