Abstract
Generative AI technologies are reshaping everyday environments by enabling multimodal interaction. As their ubiquity and agentic capacities grow, there is a pressing need to understand how these systems reshape human–computer interaction in relational, social, and systemic terms. We introduce a scenario-based design pack for investigating Human–GenAI relations. Grounded in assemblage theory and structured around a three-stage process—Prepare, Make, Reflect—the pack supports the prototyping, analysis, and critical reflection of emergent sociotechnical configurations. We evaluated the pack across three deployments: an ACM workshop (n=22), a multidisciplinary design session (n=20), and a university HCI class (n=260). Participants generated scenarios that surfaced relational issues of power, agency, visibility, and care. We contribute the design pack alongside an exploratory framework to advance relational enquiry into multimodal Human–GenAI relations, support more inclusive and socially responsive GenAI practices, and complement FATE approaches by grounding fairness, accountability, and transparency in lived, multimodal configurations.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 27th International Conference on Multimodal Interaction (ICMI ’25), October 13–17, 2025, Canberra, ACT, Australia. |
| Editors | Ram Subramanian, Yukiko I. Nakano, Tom Gedeon, Mohan Kankanhalli, Tanaya Guha, Jainendra Shukla, Gelareh Mohammadi, Oya Celiktutan |
| Place of Publication | New York |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 145-154 |
| Number of pages | 10 |
| ISBN (Electronic) | 9798400714993 |
| DOIs | |
| Publication status | Published - 13 Oct 2025 |
| Event | 27th ACM International Conference on Multimodal Interaction, ICMI 2025: Safe and responsible multimodal interactions, Canberra, Australia. Duration: 13 Oct 2025 → 17 Oct 2025. https://icmi.acm.org/2025/ · https://dl.acm.org/doi/proceedings/10.1145/3716553 (ICMI '25 Proceedings) |
Publication series
| Name | Proceedings of the International Conference on Multimodal Interaction (ICMI) |
|---|---|
| Publisher | ACM |
| Number | 2025/10 |
Conference
| Conference | 27th ACM International Conference on Multimodal Interaction, ICMI 2025 |
|---|---|
| Abbreviated title | ICMI '25 |
| Country/Territory | Australia |
| City | Canberra |
| Period | 13/10/25 → 17/10/25 |
| Other | ICMI is the premier international forum that brings together multimodal artificial intelligence (AI) and social interaction research. Multimodal AI encompasses technical challenges in machine learning and computational modeling such as representations, fusion, data, and systems. The study of social interactions covers both human–human and human–computer interactions. A unique aspect of ICMI is its multidisciplinary nature, which values scientific discoveries and technical modeling achievements equally, with an eye toward impactful applications for the good of people and society. |
| Internet address | https://icmi.acm.org/2025/ |
A Scenario-Based Design Pack for Exploring Multimodal Human-GenAI Relations