Philosophical Specification of Empathetic Ethical Artificial Intelligence

Michael Timothy Bennett, Yoshihiro Maruyama*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    12 Citations (Scopus)

    Abstract

    In order to construct an ethical artificial intelligence (AI), two complex problems must be overcome. First, humans do not consistently agree on what is or is not ethical. Second, contemporary AI and machine learning methods tend to be blunt instruments that either search for solutions within the bounds of predefined rules or mimic behavior. An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, possessing and inferring intent, and explaining not just its actions but its intent. Using enactivism, semiotics, perceptual symbol systems, and symbol emergence, we specify an agent that learns not just arbitrary relations between signs but their meaning in terms of the perceptual states of its sensorimotor system. Subsequently, it can learn what is meant by a sentence and infer the intent of others in terms of its own experiences. Its intent is malleable because the meaning of symbols changes as it learns, and that intent is represented symbolically as a goal. As such, it may learn a concept of what is most likely to be considered ethical by the majority within a population of humans, which may then be used as a goal. The meaning of abstract symbols is expressed using perceptual symbols of raw, multimodal sensorimotor stimuli as the weakest (consistent with Ockham's Razor) necessary and sufficient concept: an intensional definition learned from an ostensive definition, from which the extensional definition, or category of all ethical decisions, may be obtained. Because these abstract symbols are the same for both situation and response, the same symbol is used whether an action is performed or observed, akin to mirror neurons in the human brain. Such mirror symbols may allow the agent to empathize, because its own experiences are associated with the symbol, which is also associated with the observation of another agent experiencing something that the symbol represents.
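    To make the idea of a "weakest necessary and sufficient concept" learned from an ostensive definition more concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes perceptual states are represented as sets of symbols (a strong simplification), treats an ostensive definition as positively and negatively labelled examples, and returns the least constrained concept (fewest required symbols, in the spirit of Ockham's Razor) satisfied by every positive example and by no negative one. All names here (weakest_concept, satisfies, the toy symbols) are hypothetical.

```python
from itertools import combinations

def satisfies(state: frozenset, concept: frozenset) -> bool:
    """A perceptual state satisfies a concept if it contains every symbol the concept requires."""
    return concept <= state

def weakest_concept(positives: list[frozenset], negatives: list[frozenset]) -> frozenset | None:
    """Search candidate concepts from weakest (fewest constraints) to strongest,
    returning the first that every positive example satisfies and no negative example does."""
    # Candidate required symbols are those shared by every positive example.
    shared = frozenset.intersection(*positives)
    for size in range(len(shared) + 1):  # weakest (smallest) concepts first
        for concept in map(frozenset, combinations(sorted(shared), size)):
            if all(satisfies(p, concept) for p in positives) and \
               not any(satisfies(n, concept) for n in negatives):
                return concept
    return None  # no consistent concept exists at this level of description

# Toy ostensive definition: perceptual states labelled as ethical / not ethical.
positives = [frozenset({"consent", "honesty", "benefit"}),
             frozenset({"consent", "honesty"})]
negatives = [frozenset({"benefit", "deception"})]
print(weakest_concept(positives, negatives))  # frozenset({'consent'})
```

    In this toy run the intensional definition returned is the single requirement shared by the positive examples but absent from the negative one; the corresponding extensional definition would be every state satisfying it.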

    Original language: English
    Pages (from-to): 292-300
    Number of pages: 9
    Journal: IEEE Transactions on Cognitive and Developmental Systems
    Volume: 14
    Issue number: 2
    DOIs
    Publication status: Published - 1 Jun 2022
