As a hub for research-creation, the Immersive Arts Space (IASpace) tackles a multifaceted challenge by pursuing and promoting an integrated program of transdisciplinary inquiry in and across research, teaching, service and production. The team members have their professional roots in film, game design, interaction design, music, computer science and engineering, and many have a background in more than one discipline. The IASpace is part of the research cluster of the Digitalization Initiative of the Zurich Higher Education Institutions (DIZH).
Towards new multi-user dramaturgies in interactive and immersive performance
Leading House Asia, 2024/25
This interdisciplinary project brings together two internationally recognized labs working at the intersection of arts and sciences – the Immersive Arts Space and the Visualization Research Center (VRC) at Hong Kong Baptist University (HKBU). The project will develop a pilot prototype for a new kind of multi-user interactive performance in which the audience can individually and collectively influence the dramaturgical evolution of the event. Through a major Hong Kong government innovation grant, the VRC, under the leadership of Jeffrey Shaw, a pioneer in the development of immersive technological environments, has developed nVis, a world-leading interactive multimedia visualization system.
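To make the idea of individual and collective audience influence concrete, here is a minimal sketch of how per-spectator inputs might be tallied into a collective decision that steers the dramaturgical evolution of a piece. It is an illustration only: the scene names, the voting scheme and the function are hypothetical and do not reflect the actual architecture of nVis or the project's prototype.

```python
from collections import Counter

# Hypothetical scene graph: each scene lists the successors the
# audience can steer the performance into. Names are illustrative.
SCENE_GRAPH = {
    "prologue": ["storm", "calm"],
    "storm":    ["calm", "finale"],
    "calm":     ["storm", "finale"],
    "finale":   [],
}

def next_scene(current: str, votes: list[str]) -> str:
    """Pick the successor scene that received the most audience votes."""
    options = SCENE_GRAPH[current]
    if not options:
        return current  # terminal scene: the performance ends here
    tally = Counter(v for v in votes if v in options)
    # Fall back to the first authored option if nobody cast a valid vote.
    return tally.most_common(1)[0][0] if tally else options[0]

# Example: five spectators collectively steer "prologue" into "storm".
print(next_scene("prologue", ["storm", "storm", "calm", "storm", "calm"]))
```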
‘Leading Houses’ are short-term competence networks mandated by the State Secretariat for Education, Research and Innovation (SERI) and intended as start-up funding for international research projects.
Collaborating Researchers: Martin Fröhlich, Stella Speziali, Chris Salter
Self-Organized Reality
Development of emergent person-environment behaviors in Mixed Reality environments, Leading House Japan, 2024/25
This interdisciplinary project is an international collaboration between the Ikegami Lab at the University of Tokyo and the Immersive Arts Space. The project aims to study social co-presence (the sense of being with others) in a shared mixed reality (MR) environment. Here, MR is achieved via “video passthrough”, a technique that delivers live video images through tiny cameras in the head-mounted display, approximating what one would see when looking directly at the real surrounding world. The project’s aim is to measure how self-organization in this MR space, characterized by changes in information richness and concurrency, could affect the sense of co-presence of others inhabiting the same MR space. Self-organization, here defined as emergent structures that evolve without central control and in response to environmental stimuli, will be achieved by creating audio-visual environments through generative AI processes that are controlled in real time by both physiological markers and markers of non-verbal interaction (interpersonal spatial distance).
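As a rough illustration of the real-time control described above, the sketch below maps one physiological marker (heart rate) and one marker of non-verbal interaction (interpersonal distance) onto parameters of a generative audio-visual process. All names, ranges and mappings are illustrative assumptions, not the project's actual pipeline.

```python
import math

def control_parameters(heart_rate_bpm: float, distance_m: float) -> dict:
    """Map body signals to generation parameters in [0, 1]."""
    # Normalize heart rate against an assumed resting/active range (50-150 bpm).
    arousal = min(max((heart_rate_bpm - 50.0) / 100.0, 0.0), 1.0)
    # Closer bodies yield higher "information richness"; an exponential
    # falloff keeps the mapping smooth as visitors move apart.
    richness = math.exp(-distance_m / 2.0)
    return {
        "visual_density": richness,           # e.g., particle count of the scene
        "audio_intensity": arousal,           # e.g., amplitude of generated sound
        "mutation_rate": arousal * richness,  # how fast the environment evolves
    }

# Example: two co-present visitors standing 1.5 m apart, one at 90 bpm.
print(control_parameters(heart_rate_bpm=90.0, distance_m=1.5))
```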
‘Leading Houses’ are short-term competence networks mandated by the State Secretariat for Education, Research and Innovation (SERI) and intended as start-up funding for international research projects.
Collaborating Researchers: Chris Elvis Leisi, Oliver Sahli, Chris Salter
Probing XR’s Futures. Design Fiction, Bodily Experience, Critical Inquiry
Swiss National Science Foundation (SNSF) project, 2023-2027
Extended reality (XR) devices like Apple’s recently announced Vision Pro or Meta’s Oculus Quest 3 enable new possibilities for mixing the real world with a computationally generated one, promising to “change interaction as we know it.” Yet there is little research on exactly how XR might reshape bodily subjectivity and experience. Probing XR’s Futures utilizes a critically and historically informed, practice-based design approach to examine how XR technologies reimagine bodily subjectivity, interaction and experience, on the one hand, and how bodily experience could reimagine XR, on the other.

The four-year project employs critical, creative, conceptual and empirical approaches to address three questions: How is everyday interaction in XR achieved? How will XR change interaction, and what social reciprocity and mutual access will be enabled? What concrete effects and forms of discipline will be enacted on disabled bodies interacting in XR? The objective is to use design fiction, a design research method that prototypes objects and scenarios to provoke new ways of thinking about the future, as a form of critical inquiry to probe the present and future of social interaction in XR in three different settings and contexts: the lab, public space and in collaboration with disabled researchers and communities.

Situated at the Immersive Arts Space at the Zurich University of the Arts, the project sits at the interdisciplinary intersection of critical VR studies, Science and Technology Studies (STS) and experimental media design. It will be one of the first projects in Swiss and German-speaking design research to develop alternative thinking and experimental aesthetic-design analysis, reflection and critique of XR directly in situated action and use with the general public.
Team:
Christopher Salter (Project Lead)
Puneet Jain (PhD Candidate)
Eric Larrieux (Researcher)
Chris Elvis Leisi (Researcher)
Oliver Sahli (Researcher)
Philippe Sormani (Senior Researcher)
Stella Speziali (Researcher)
Project Partners:
Andreas Uebelbacher (Access for All Foundation)
John David Howes (Concordia University Montreal, Sociology/Anthropology)
Sabine Himmelsbach (Haus der elektronischen Künste Basel, HeK)
Pilar Orero (Universitat Autònoma de Barcelona, Transmedia Research Group)
Lorenza Mondada (Universität Basel, Institut für Französische Sprach- und Literaturwissenschaft)
Performing AI
Swiss National Science Foundation (SNSF) project, 2025-2029
Performing AI’s goal is to contextualize AI as a dynamic social and cultural artifact that is discursively and practically constituted (that is, performed) in specific contexts and situations. In other words, what does “AI” do, why and how does it do what it does, and what effects does it produce across different disciplines? The project takes performance and performativity as theoretical and conceptual lenses for navigating AI’s messy entanglements between the social and political, the technical and the aesthetic.
The project has three core objectives: (1) understand how AI is performed differently in its multiple constitutions (discursive, material, situated) and in/across disciplines; (2) provide interdisciplinary research and training opportunities for the next generation of researchers to grapple with the complex, multi-scalar nature of AI; and (3) explore new forms of critical public engagement with AI across arts, science, policy and technology.
Performing AI will thus study AI’s performances in the making across three sites – the policy space, experimental scientific and artistic research labs, and otherwise mundane everyday spaces. Examining AI in the making, the project explores how AI is discursively enacted in policy and governance and examines the material agency of AI in robotics, artificial life and the digital arts, where human actors have to interact with machinic systems in real time. It also draws upon and develops ethnographic and ethnomethodological approaches to trace the situated action and production of AI in public settings of the everyday, including a museum, as well as in hybrid art, science and technology laboratories.
Project Partners:
Anna Jobin (University of Fribourg)
Olivier Glassey (University of Lausanne)
Takashi Ikegami (University of Tokyo)
Christopher Salter (Zurich University of the Arts)
Spatial Dis/Continuities in Telematic Performances
Unraveling distributed space through 3D audio, embodiment and spatial augmented reality. An SNSF project by the Institute for Computer Music and Sound Technology (ICST), 2020-2024.
Telematic performances connect two or more geographically distant stages via communication technologies in such a way that performers can interact with each other. At each location, the audience experiences a specific view of the overall event, with the real presence of the local performers and the virtual presence of the remote performers simultaneously overlapping.
Previous research has focused on technological developments that enable a high-quality connection with low latency, as well as on the minimization or artistic handling of latency, i.e. the delay in the transmission of audio and video signals. Because remote performers are usually represented flat on a static screen (video) and in stereo monitoring (audio), the spatial information of the remote stage is largely lost. The research project investigates how spatial characteristics of the stages involved can be superimposed. It draws on current technologies such as 3D audio (where sounds are reproduced in high spatial resolution) and spatial augmented reality (where visual information from the remote stage is integrated into the local scenography by means of motion tracking and video mapping). Particular attention is paid to forms of embodiment and communication processes that enable an immersive experience.
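As a simple illustration of how spatial information from a remote stage might be carried over into a local one, the sketch below converts a remote performer's tracked position into the azimuth, elevation and distance that a 3D-audio renderer needs in order to place their sound in the local scenography. The coordinate conventions and the function itself are assumptions for illustration, not the project's actual implementation.

```python
import math

def position_to_spherical(x: float, y: float, z: float) -> tuple:
    """Return (azimuth_deg, elevation_deg, distance_m) of a tracked position,
    relative to a local listening position at the origin.
    Assumed axes: x = right, y = front, z = up; units in metres."""
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, y))  # 0 deg = straight ahead
    elevation = math.degrees(math.asin(z / distance)) if distance else 0.0
    return azimuth, elevation, distance

# Example: a performer tracked 2 m to the right and 3 m in front of the
# local stage origin, at ear height.
print(position_to_spherical(2.0, 3.0, 0.0))
```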
Credits:
Patrick Müller (ICST, project lead)
Benjamin Burger (ICST)
Joel de Giovanni (ICST)
Martin Fröhlich (IAS, ICST)
Roman Haefeli (ICST)
Johannes Schütt (ICST)
Matthias Ziegler (ICST)
Hannah Walter (MA Transdisciplinary)