A Scenario-Based Design Pack for Exploring Multimodal Human-GenAI Relations

Josh Andres*, Chris Danta, Andrea Bianchi, Sahar Farzanfar, Gloria Milena Fernandez-Nieto, Alexa Becker, Tara Capel, Frances Liddell, Shelby Hagemann, Ned Cooper, Sungyeon Hong, Li Lin, Eduardo Benitez Sandoval, Anna Brynskov, Hubert Dariusz Zajac, Zhuying Li, Tianyi Zhang, Arngeir Berge

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › peer-review

Abstract

Generative AI technologies are reshaping everyday environments by enabling multimodal interaction. As their ubiquity and agentic capacities grow, there is a pressing need to understand how these systems reshape human–computer interaction in relational, social, and systemic terms. We introduce a scenario-based design pack for investigating Human–GenAI relations. Grounded in assemblage theory and structured around a three-stage process—Prepare, Make, Reflect—the pack supports the prototyping, analysis, and critical reflection of emergent sociotechnical configurations. We evaluated the pack across three deployments: an ACM workshop (n=22), a multidisciplinary design session (n=20), and a university HCI class (n=260). Participants generated scenarios that surfaced relational issues of power, agency, visibility, and care. We contribute the design pack alongside an exploratory framework to advance relational enquiry into multimodal Human–GenAI relations, support more inclusive and socially responsive GenAI practices, and complement FATE approaches by grounding fairness, accountability, and transparency in lived, multimodal configurations.

Original language: English
Title of host publication: Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25), October 13–17, 2025, Canberra, ACT, Australia
Editors: Ram Subramanian, Yukiko I. Nakano, Tom Gedeon, Mohan Kankanhalli, Tanaya Guha, Jainendra Shukla, Gelareh Mohammadi, Oya Celiktutan
Place of publication: New York
Publisher: Association for Computing Machinery (ACM)
Pages: 145-154
Number of pages: 10
ISBN (Electronic): 9798400714993
Publication status: Published - 13 Oct 2025
Event: 27th ACM International Conference on Multimodal Interaction, ICMI 2025: Safe and responsible multimodal interactions - Canberra, Australia
Duration: 13 Oct 2025 – 17 Oct 2025
https://icmi.acm.org/2025/
https://dl.acm.org/doi/proceedings/10.1145/3716553 (ICMI '25 Proceedings)

Publication series

Name: Proceedings of the International Conference on Multimodal Interaction (ICMI)
Publisher: ACM
Number: 2025/10

Conference

Conference: 27th ACM International Conference on Multimodal Interaction, ICMI 2025
Abbreviated title: ICMI '25
Country/Territory: Australia
City: Canberra
Period: 13/10/25 – 17/10/25
Other: ICMI is the premier international forum that brings together multimodal artificial intelligence (AI) and social interaction research. Multimodal AI encompasses technical challenges in machine learning and computational modeling, such as representations, fusion, data, and systems. The study of social interactions covers both human-human and human-computer interactions. A unique aspect of ICMI is its multidisciplinary nature, which values scientific discoveries and technical modeling achievements equally, with an eye toward impactful applications for the good of people and society.