Responsible Artificial Intelligence and Journal Publishing

Research output: Contribution to journal › Editorial › peer-review

3 Citations (Scopus)

Abstract

The aim of this opinion piece is to examine the responsible use of artificial intelligence (AI) in relation to academic journal publishing. The work discusses approaches to AI, with particular attention to recent developments in generative AI. Consensus is noted around eight normative themes underlying principles for responsible AI, along with their associated risks. A framework from Shneiderman (2022) for human-centered AI is employed to consider journal publishing practices that can address the principles of responsible AI at different levels. The resultant AI principled governance matrix (AI-PGM) for journal publishing shows how countermeasures for risks can be employed at the levels of the author-researcher team, the organization, the industry, and government regulation. The AI-PGM allows a structured approach to responsible AI and may be modified as developments in AI unfold. It shows how the whole publishing ecosystem should be considered when looking at the responsible use of AI—not just journal policy itself.
Original language: English
Article number: 11
Pages (from-to): 48-60
Number of pages: 13
Journal: Journal of the Association for Information Systems
Volume: 25
Issue number: 1
DOIs
Publication status: Published - 2024
