
Artistic approaches and avatarial creations

Composing in virtual immersion: avatar and representation

Questioning interactions and co-presence in a 3D audio spatialization environment for virtual reality
Christine Webster and Sophia Kourkoulakou
Translated by Kaylen Baker
This article is a translation of: Composer en immersion virtuelle : avatar et représentation [fr]

Abstract

This article presents a specific case study centered around the question of music composers in immersion, stemming from the EUR ArTeC project “VR Auditory Space, Sound Spatialization in VR Immersion.” How does creating a spatialization environment designed for virtual reality determine an electronic music composer’s way of interacting with this space? In-game programming and editing requires thinking about how to integrate a full-body or complete avatarial representation. This article seeks to address all of the problems we have encountered from a technical, philosophical, and critical point of view.


Full text

The VRAS project (Virtual Reality Auditory Space)

Since 2019, the MUSIDANSE research team at the Center for Computer Science Research and Musical Creation (CICM) at the Université Paris 8, along with the EnsadLab Spatial Media group at the École des Arts Décoratifs de Paris, have been collaborating on an academic project entitled “ArTeC VR Auditory Space.”[1] Intended as a proof of concept, the project tests the feasibility of an immersive 3D spatialization interface made for electronic music composers and digital artists.[2] It aims to develop a compatible extension for a virtual spatialization system in the HOA Library (a high-order ambisonic spatialization collection).[3]


Our goal is to create a virtual reality environment for composers based both on video game techniques and on 3D ambisonic and binaural spatialization. The first step of the project consisted of developing a high-order ambisonics (HOA) library for VR, which could then be inserted into Unity 3D as a software component. The second step, developing the in-game spatialization environment, involved transposing part of the Unity 3D[4] functionalities needed to manage HOA AudioSources directly into the virtual scene within the VR Auditory Space. Consequently, two issues arose: the subject’s interactions once immersed in the 3D interfaces we developed, and the need for metaphorical physical feedback, or avatarial representation. Is the presence of an avatar in the spatialization environment necessary, or not?
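
To give a concrete sense of this integration step, the sketch below shows how a spatialized source might hand its position to an ambisonic encoder inside a Unity scene, written in C#, Unity’s scripting language. The HoaEncoder stub and its SetDirection method are placeholders of our own invention; the actual API of the HOA Library binding is not documented here.

```csharp
using UnityEngine;

// Placeholder for the HOA Library binding: in the real project, a
// third-order encoder turns a source direction into spherical-harmonic
// gains on the ambisonic bus. This stub only marks the call site.
public class HoaEncoder
{
    public void SetDirection(float azimuth, float elevation, float distance)
    {
        // A real binding would forward these values to the native HOA plugin.
    }
}

// Hypothetical component: each frame, hand the source's position
// relative to the listener to the encoder.
public class HoaAudioSource : MonoBehaviour
{
    public Transform listener; // the listener / VR camera rig

    private readonly HoaEncoder encoder = new HoaEncoder();

    void Update()
    {
        // Express the source position in the listener's frame of reference.
        Vector3 rel = listener.InverseTransformPoint(transform.position);
        float azimuth = Mathf.Atan2(rel.x, rel.z);
        float elevation = Mathf.Atan2(rel.y, new Vector2(rel.x, rel.z).magnitude);
        encoder.SetDirection(azimuth, elevation, rel.magnitude);
    }
}
```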

While the use and integration of third-person avatars seems to have been steadily optimized since the emergence of computer-based virtual worlds and massively multiplayer online games, many problems remain in the case of VR immersion: the spread of stereoscopic headsets in the 2010s has limited their use, because the physical body must be partially or fully synchronized with the avatar. In our particular case, we had to take into account a whole spectrum of considerations, from the avatar’s design to various tracking systems (inverse kinematics, motion capture, etc.), alongside issues of network architecture. This prompted us to ask: What sort of avatar do we need? What type of appearance? And what technologies could we use to integrate it into VRAS?

Music composers and VR immersion: new multi-dimensional challenges

In practice, an electroacoustic music composer works in a hybrid environment: partly machines and instruments (acoustic instruments, hardware synthesizers, controllers, microphones, a mixing desk, hardware effects, and loudspeakers), which make up the classical recording and mixing studio, and partly computer representations of this space, shown in 2D. Digital audio workstations (DAWs) such as Pro Tools, Ableton Live, and Cubase have become digital metaphors of the original electroacoustic studio, synthesizing a set of digital audio processes in order to create a composition by remotely manipulating a console, effects, and virtual instruments within the confines of the computer screen. These two spaces communicate with each other through devices and processes such as sound cards and the MIDI and OSC protocols.

Furthermore, machine sound reproduction techniques virtualize the natural human hearing range through the transduction of acoustic energy into the electrical domain, following the capture-transformation-diffusion signal chain, which the computer translates digitally through an analog-to-digital conversion (ADC) at the input and a digital-to-analog conversion (DAC) at the output. The concept of virtuality and virtualization has therefore been at the heart of sound reproduction techniques since the 19th century (Sterne, 2015), notably with the reproduction of the eardrum’s operating mechanism in the diaphragm of the microphone and the loudspeaker.
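
Written out in the standard sampling formalism (a schematic summary of this chain, not taken from the article), the two conversions read:

```latex
% ADC at the input: sample the transduced signal x_a(t) at period T_s,
% then quantize with Q
x[n] = Q\bigl(x_a(nT_s)\bigr), \qquad n \in \mathbb{Z}

% DAC at the output: reconstruct an analog signal from the samples
% through a reconstruction filter h
\hat{x}_a(t) = \sum_{n} x[n]\, h(t - nT_s)
```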

When we transpose a sound diffusion device using loudspeakers into virtual reality, as described by Fuchs (2000), we move from one form of virtualization to another. We integrate several existing layers of virtualization, each operating at its own level (machine, computer), into a new set (the shared digital space) which contains and virtualizes them all.

In his book Le traité de la réalité virtuelle, Philippe Fuchs (2000) explains:

Virtual reality offers a complementary dimension for spectators: it provides them with an environment in which they become actors, thus enabling a sensory-motor experience in an artificial world that is either imaginary, symbolic, or a simulation of certain aspects of the real world. To be considered virtual reality, two conditions must be met: the subject must be immersed in the simulated environment, and the subject must interact with this environment. (p. 380)

With the VRAS project, the virtual auditory space and the compositional space (accessible through the graphical interface) merge in the shared digital space.[5] In this context, the place occupied by the composer-auditor in full VR immersion (equipped with a stereoscopic headset, an audio headset, and controllers, with positional tracking in six degrees of freedom) undergoes a certain number of sensory and operational transformations across several levels.

  • The subjective listening space. A virtual auditory space is synthesized by the VRAS 3D auditory display, i.e., a third-order HOA library developed for the project, which has been integrated into the Unity scene as a component. The sound field synthesized by HOA is decoded at the output as binaural, and reaches the listener through an audio headset. Head tracking is accomplished using the Unity virtual camera coupled with the stereoscopic headset, also called a head-mounted display (HMD), which incorporates an audio headset (headphones). Consequently, even the slightest movement made by the listener’s head gets synchronized with the camera, which scans the hearing range synthesized in-game. This device enables the positioning of monophonic, stereophonic, multiphonic, and ambisonic 2D and 3D sources in the VR scene, in relation to the human’s actual position in the 3D space.

  • The graphical user interface (GUI). With the VRAS project, editing the audio source configuration happens in real time (in editing/playback mode), through a graphical user interface shown in-game. The immersed composer-auditor edits the sources of each spatialization Unity preset, but she also manipulates the positioning and trajectories of point sources, which she can arrange freely in the space. All of these interactions are done using the HTC Vive headset controller. At the end of the process, each VRAS session can be saved, shared, and opened by a third party.

  • Mobility. VRAS was designed as a space of plural mobility. We envisioned three modes of locomotion for the composer-auditor: wandering in a natural, physical way within the confines of the tracking space; teleportation from one point in the VRAS space to another; and a Human Joystick mode. In the Human Joystick mode, simple forward-backward, left-right, and up-down leanings of the chest replace the classic stick movements of a gamepad or the arrow keys of an alphanumeric keyboard, allowing the body to move along all three axes, including the vertical one (a minimal sketch of this mode follows the list below). This type of mobility, associated with free spatialization, lets us conceive of complex, innovative spatial structures in all dimensions of the VRAS space.

  • Presence. The phenomenon of presence, or embodiment, in VR immersion describes the psychological sensation of being there, present within the virtual environment, of being an integral part of it. This sensation, one of the keystones of VR immersion, is reinforced by environmental copresence (objects in the space), social copresence (other people/avatars in the space), and one’s perception of oneself through an avatar.
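
As announced in the Mobility item above, here is a minimal C# sketch of a Human Joystick locomotion mode. The component name, the thresholds, and the rig wiring are our own assumptions rather than the VRAS implementation:

```csharp
using UnityEngine;

// Hypothetical "Human Joystick" locomotion: the lean of the tracked chest
// away from a calibrated rest pose is mapped to a velocity of the VR rig,
// replacing gamepad sticks or arrow keys.
public class HumanJoystickLocomotion : MonoBehaviour
{
    public Transform rig;          // root of the tracked play space
    public Transform chest;        // chest tracker, a child of the rig
    public float deadZone = 0.08f; // metres of lean ignored around the rest pose
    public float speed = 2.0f;     // metres per second per metre of lean

    private Vector3 restPosition;  // chest position captured at calibration

    // Call once while the user stands upright in their neutral pose.
    public void Calibrate() => restPosition = chest.localPosition;

    void Update()
    {
        Vector3 lean = chest.localPosition - restPosition;
        if (lean.magnitude < deadZone) return; // inside the dead zone: stay put

        // Remove the dead zone, then move the rig in the lean direction,
        // including the vertical axis for up-down displacement.
        Vector3 drive = lean - lean.normalized * deadZone;
        rig.position += rig.rotation * drive * speed * Time.deltaTime;
    }
}
```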

Our finalized version of VRAS from 2021 is single-user, uses a first-person view, and does not offer a metaphorical representation through a depicted avatar. The matter remains completely open and under study, especially since we hope to develop a multi-user version, an idea we began to explore in 2020 and 2021. Consequently, we need to revisit the history of the avatar and its capabilities in order to further our study.

Looking back at avatarial representations, from the 2000s to now

Avatarial representations have evolved alongside information science and technology in the fields of computer science and data networks, wherever applications require the partial or complete involvement of the body of the “immersant,” or immersed person (flight simulators, video games, virtual reality). The user is projected into the virtual space by their intermediary, a “steerable virtual shell,” or avatar (Lucas & Amato, 2013).

In the field of computer science, human-machine interaction really took off in 1983, with the Apple Lisa operating system’s 2D graphical interface, alphanumeric keyboard, and mouse, a configuration whose principle is still in effect in current computer systems. In this setup, an avatar is steered via a 2D screen, keyboard keys, and mouse, or even a joystick. This visualization and control system allows the user to alternate between third-person control and a first-person view.

In a massively multiplayer video game, the first-person view is preferred for an FPS (first-person shooter), while third-person control is preferred for a team game such as World of Warcraft, where the player needs a broad perspective when interacting with the environment, the game interface, and teammates. The choice of view is also related to the concept of character. Instead of adopting cinematic terms such as “first person” or “third person” when discussing avatars, as has been the case in the past, we suggest thinking about avatars as having “modified positions,” where one can play “inside” or “outside” of the character (Gazzard, 2009).

Furthermore:

Daniel and Nadia Thalmann proposed classifying virtual actors into 4 types (Thalmann, 1996): avatars corresponding to representations of visitors in virtual worlds; guided actors directed by the user, but with movements that do not exactly replicate those of the manipulator; autonomous actors, equipped with capabilities to act and perceive in the virtual world; and perceptive actors, aware of other actors and humans. (Plessiet, 2019, quoting Thalmann)

Virtual reality uses an audiovisual representation system and a control system that differ from those of 2D computing. The use of stereoscopy establishes a constant first-person view, which raises questions of bilocation and of the subject’s embodiment. Bilocation describes the fact of being present in two distinct places at the same time: the physical body remains in its base position in real life (IRL), while the subject is projected inside the digital world, “in world,” through sight and hearing. This projection generates a sense of corporeal inclusion, or corporeality: the sensation of being part of the digital space. The experience depends largely on the suspension of disbelief, when the bilocated physical body, under the control of sensory-motor manipulations coming from the stereoscopic headset, believes it is part of the digital world. This psychological and cognitive experience is perceived as one unified sensation across the body, making the subject feel as though they live entirely in some “elsewhere,” with or without avatarial representation. Not having an avatarial representation will only reinforce this sensation. Embodiment also occurs on an auditory level. Thanks to the tracking of the stereoscopic headset, and of the linked audio headset involved in rendering the binaural auditory scene, the subject has the impression of perceiving external sound, since a virtual auditory scene is perceived in just the same way as a natural auditory scene would be: coming from outside oneself, localized at points in the simulated world.

Since the 2000s, the use of avatars has become widespread in massively multiplayer online video games, metaverses, social networks, and collaborative VR platforms. The Sims, Second Life (SL), and World of Warcraft (WoW) all share the same approach, that of a third-person avatar, with SL and WoW sometimes incorporating a switch to first-person view, configured and controlled by an alphanumeric keyboard and mouse. Interpersonal relations are enriched through built-in voice chat, allowing users to speak with others present on the simulation. Means of locomotion include walking, running, flying, and teleportation. Since the 2010s, Big Tech companies such as Google and Meta (formerly Facebook) have been particularly interested in virtual reality. In 2012, Facebook tested a metaverse prototype, Cloud Party, accessible through a profile created on their social network. The proposed feature was a clone of Linden Lab’s Second Life, and was never fully implemented. As with many social networks (Twitter, LinkedIn, Discord, WhatsApp), avatarial representation often remains limited to the user’s profile image, text (merging with their publications), and the use of emoticons and likes.

In 2021, in response to the Covid-19 pandemic, which forced billions of people around the world into online spaces, Facebook launched the beta version of Horizon Workrooms, a metaverse made exclusively for Oculus Quest 2 headset owners. The tracking performed by the Oculus Quest 2 headset directs avatarial representation in Horizon Workrooms, and by default offers only three-point tracking: head (HMD) and hands (controllers). As a result, avatars there are only represented as half-bodies, cut off at the legs. Meanwhile, last year, the French trade show Laval Virtual was entirely transformed into Laval Virtual Worlds, a 3D virtual platform on the Virbela application, operated on the same model as Second Life or massively multiplayer online games. To date, the most advanced virtual reality social platform in terms of avatarial representation is VRChat, which allows players to interact across virtual rooms called “Worlds.” In VRChat, avatar configuration draws by default on an array of public templates and/or offers the option to customize an avatar through Unity 3D. The full-body avatars are controlled either in third person (in 2D on a computer screen) or in first-person view, meaning in full-body VR immersion, with HMD and controllers. Body tracking supports six points, and goes as far as to include lip synchronization and eye blinking. It is the most complete system to date in terms of avatar development and sound, since it also supports 3D audio (inherited from Unity 3D).

Why use an avatar? A case study

VR immersion through a headset requires the subject to experience the VRAS environment in first-person view. Consequently, the avatar’s perspective and the issue of full-body representation were taken into account from the very beginning of the project. As inspiration and reference point, we looked to OnSight, a mixed reality project developed by NASA’s Jet Propulsion Laboratory for Microsoft’s HoloLens glasses. Since the VRAS project is exclusively VR, with no hybridization of IRL reality and the VR scene, we preferred to work with the HTC Vive headset, which is versatile in this respect. When studying immersive projects that preceded VRAS, we observed that the absence of an avatar representing the subject did not seem to pose any problems in mono-immersion, as long as the subject was in a sitting or sitting-standing position. These are cognitively secure positions, given that the subject does not have to use their whole body to move in the virtual space. In the Empty Room project (Webster & Sèdes, 2018), the listener could move around while remaining seated by using a gamepad-style joystick controller. In a project on comparative listening between an acousmonium projected IRL and its VR reproduction (Webster et al., 2020), the comparison was made using a single reference listening point (or “sweet spot”), located at the same place IRL and in VR, with the subject standing with their head upright, without any physical demand beyond listening and making small head movements.

As soon as we move to a standing position and begin to wander around while using spatialization devices, the absence of avatarial visual feedback quickly becomes problematic, especially with regard to maintaining general balance. When standing, as long as the gaze is fixed along a frontal, lateral, and slightly elevated axis, the body’s balance remains stable overall. Visualization and the use of controllers, which rely on hand-brain association, help the body synchronize its movements, so interacting with objects in the space feels quite natural. However, as soon as we move while shifting our gaze below or beyond the controllers (especially when we look down to the spot where we should find our body, yet find instead the absence of an avatar), a perceptual break occurs. It can disrupt the subject’s experience of moving and transform the sensation of being digitally present into an uneasy, even unpleasant one. Brief dizziness may occur, as well as a sort of temporary shock: the subject first freezes, then continues moving, but less confidently. This discord between perception and cognition pulls the subject out of the immersive experience.

Beyond this subjective problem, an intersubjective issue arises in the context of multi-presence immersion, reinforcing the need for an avatar: it is necessary to see (others) and be seen (by others) in order to interact with the VRAS environment in a collaborative way, which includes listening together to the 3D audio scene and taking turns editing or moving a sound source in the VRAS environment.

With our project, a correctly synchronized full-body avatar should allow us to:

  • privilege a standing position in roomscale mode with a fully operational and completely synchronized avatarial representation

  • consolidate the subjective physical presence, avoiding cognitive breaks and unintentional exits from immersion

  • work in a collaborative, intersubjective in-game mode

  • consider the avatar as an extension of the in-game graphical interface by transferring to it some of the editing functions

To achieve some of these goals, we first went through an avatar modeling phase, followed by the model tracking phase, before we moved on to integrating the trackable model into a multiplayer configuration.

To model the VRAS avatar, we used the MakeHuman program. To synchronize humans and avatars, we used the VRIK plugin from RootMotion’s Final IK. For tracking, we used a six-point HTC Vive setup: head, left hand, right hand, pelvis, left foot, and right foot.
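
As an illustration of this six-point setup, the following C# sketch binds six tracked transforms to a VRIK component. The solver field paths follow Final IK’s public API, but the component name and tracker wiring are our own assumptions, not the VRAS code:

```csharp
using UnityEngine;
using RootMotion.FinalIK; // VRIK is part of RootMotion's Final IK package

// Hypothetical six-point binding of HTC Vive tracking to a VRIK solver:
// head, two hands, pelvis, two feet.
public class SixPointAvatarBinding : MonoBehaviour
{
    public VRIK ik;                       // VRIK component on the avatar model
    public Transform head;                // HMD
    public Transform leftHand, rightHand; // controllers
    public Transform pelvis;              // Vive tracker on the hips
    public Transform leftFoot, rightFoot; // Vive trackers on the feet

    void Start()
    {
        ik.solver.spine.headTarget   = head;
        ik.solver.spine.pelvisTarget = pelvis;
        ik.solver.leftArm.target     = leftHand;
        ik.solver.rightArm.target    = rightHand;
        ik.solver.leftLeg.target     = leftFoot;
        ik.solver.rightLeg.target    = rightFoot;
    }
}
```

In practice, each tracker usually drives a calibrated child transform used as the actual target, so that the corresponding bone lands with the right offset; miscalibration at this stage is one plausible source of the position-feedback inconsistencies described below.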

During the synchronization testing phase, we encountered inconsistency problems related to the position feedback sent by VRIK, in which the avatar’s position did not correspond to the human subject’s position. We also found problems of homothety between the human subject’s size and that of their avatarial representation: it was essential to create a perfect match between the operator’s and the avatar’s dimensions. While height is easy to replicate, we had problems with arm length, which in turn led to problems between the hands and the controllers. In very close first-person view, we found perspective distortions in textures, and physical distortions in the avatar’s hands and arms.
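
One common way to approach the height-matching part of this problem is to scale the avatar uniformly by the ratio of the user’s measured eye height to the model’s eye height. The sketch below is ours, under that assumption, not the VRAS solution; note that uniform scaling preserves the model’s proportions, which is precisely why residual mismatches such as arm length survive:

```csharp
using UnityEngine;

// Hypothetical uniform-scale calibration: match avatar height to user height.
public class AvatarScaleCalibration : MonoBehaviour
{
    public Transform avatarRoot;          // avatar model root, scale (1,1,1) at rest
    public Transform hmd;                 // tracked HMD, a child of the play-space root
    public float modelEyeHeight = 1.65f;  // eye height of the model, in metres

    // Call while the user stands upright.
    public void Calibrate()
    {
        float userEyeHeight = hmd.localPosition.y; // HMD height above the floor
        avatarRoot.localScale = Vector3.one * (userEyeHeight / modelEyeHeight);
    }
}
```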

Figure 1

VRAS avatar in Unity 3D. On the left, visualization of the articulation points set up with the VRIK program on our model. On the right, male VRAS model made in MakeHuman. Screenshots.

Credits: Christine Webster & Sophia Kourkoulakou.

For the multiplayer immersion testing phase, we tested two networking solutions compatible with the VRAS project: Unity’s Network Manager, which allowed us to create a local client/server connection system, and Photon, an independent cloud-based client/server multiplayer networking platform compatible with Unity.

After conducting a first series of tests in LAN (local area network) mode between two PCs, we moved on to the testing phase involving PUN (Photon Unity Networking), conducting this two-operator VR immersion test with the free PUN version, which allows a total of 20 players to be instantiated through the Photon cloud.
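
For orientation, here is a minimal PUN 2 connect-and-spawn flow of the kind such a test relies on. The class name, prefab name, and room options are our own placeholders, not the VRAS project’s code:

```csharp
using UnityEngine;
using Photon.Pun;      // PUN 2: Photon Unity Networking
using Photon.Realtime;

// Hypothetical minimal flow: connect to the Photon cloud, join or create
// a room, then spawn a networked avatar prefab for the local player.
public class VrasNetworkLauncher : MonoBehaviourPunCallbacks
{
    [SerializeField] private string avatarPrefabName = "VrasAvatar"; // placeholder prefab in a Resources folder

    void Start()
    {
        // Uses the AppId configured in PhotonServerSettings.
        PhotonNetwork.ConnectUsingSettings();
    }

    public override void OnConnectedToMaster()
    {
        PhotonNetwork.JoinRandomRoom(); // join any open room
    }

    public override void OnJoinRandomFailed(short returnCode, string message)
    {
        // No open room yet: create one, capped at the free-tier ceiling.
        PhotonNetwork.CreateRoom(null, new RoomOptions { MaxPlayers = 20 });
    }

    public override void OnJoinedRoom()
    {
        // PhotonNetwork.Instantiate replicates the avatar on every client.
        PhotonNetwork.Instantiate(avatarPrefabName, Vector3.zero, Quaternion.identity);
    }
}
```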

Testing took place in Paris, between the Place des Fêtes and Belleville, during the first strict Covid-19 lockdown, between March and May of 2020.

To successfully pull off instantiation through the cloud, we went through a number of steps: first steering avatars in VR immersion through a third-person controller, and then steering our VRAS avatars in VR immersion in first-person view.

In third-person view, remote immersion testing with two VRAS avatars immediately changed the immersion experience; the mere presence of avatars in the VRAS work environment changed our relationship with it. We experienced others’ perception of the space and its digital scale, which in turn transformed the scale of the workspace, and we were able to project ourselves into this dynamic. This confirmed for us that full-body VR immersion was absolutely essential.

In 2020, we successfully created a first full-body VR immersion in first-person view with two VRAS avatars, and in 2021 we achieved a triple-avatar immersion. Yet many technical issues remain to be solved. Tracking issues persist, calling for a diverse array of studies and experiments. For the time being, the VR industry favors three-point tracking, using (virtual) hands and controllers, and a partial avatarial representation (upper body only). Even though VRChat and Unreal Engine currently offer easy avatar integration options on their platforms, the modeling and implementation tasks involved present a complex challenge for artistic production companies and independent researchers.

Figure 2

Unity 3D, remote immersion between three avatars, carried out in May 2021. The camera point of view is avatar 1 observing avatars 2 and 3, who are trying to pass an object to each other at a distance. Screenshots.

Credits: Christine Webster & Sophia Kourkoulakou.

An avatar retrospective: lost and found bodies and posthumanism

The challenges concerning avatarial representation that we face with the VRAS project come out of a long process of transformation initiated back in the 19th century, with the rise of image and sound reproduction using mechanical, electronic, and computer technologies. These techniques are based, on the one hand, on the creation of virtual forms that replicate part of the ear (the tympanum, or eardrum) using tympanic machines (microphones, tape recorders, loudspeakers), and on the other hand, on the virtualization of the eye’s (or pupil’s) internal functioning, using photographic and cinematographic techniques.

Reproduction techniques apply to all bodies in any given field of experimentation: the microphone applies to the acoustic field, while the camera concerns the visual field. These technical processes tear something away from reality, as R. Murray Schafer (1979) remarks:

We have dissociated sound from its source; we have torn it from its natural orbit and given it an amplified and independent existence. (p. 134)

With the digital stereo system, sound becomes acousmatic, a word Pierre Schaeffer (1966) defines as “what one hears without seeing the source it comes from.” A virtual auditory space emerges out of a place where an interpreter and an instrument are physically absent, stripped of the here and now, of its original “aura,” as Walter Benjamin (1935) would have called it, endlessly updating itself exactly as it was before, because of its very reproducibility. Eventually, through these processes, reality no longer exists, and only representation is real: “The real can never be represented, representation alone can be represented. For in order to be represented, the real must be known, and knowledge is already a form of representation.” (Altman, 1992)

In the informational realm, digitization’s tearing away from reality is also expressed as a movement of integration: the virtual object (text, sound, image) is integrated into a space of object-oriented programming (OOP) and projected on the computer screen. Avatarial representation is always linked to a certain quality of the projected space to which it belongs. In the same way that sound is directly related to acoustic space, or a photographed subject is connected to its environment, avatars cannot be dissociated from the digital screen space, be it the Web page, the metaverse space, or the VR scene. Depending on the quality of the space concerned, several embodiment strategies are possible. The early Web was characterized by merging with and appearing through text; our exchanges on current social networks prolong this presence through multimedia fusion (text, icon, photo, video, music). Virtual reality adds an additional level of representation by reintroducing the entire human body into the human-machine interaction loop, where it acts both as remote operator and as the entity remotely represented. Through VR immersion, we return to the possibility of experiencing the uniqueness of the here and now.


This paradox highlights virtual reality’s interest in putting the human body at the center of human-machine relationships, and consequently in shaking up radical positions that have been emerging in the techno-sciences since the era of cybernetics. For cyberneticians, information is an immaterial entity that fluctuates according to principles of regulation through feedback, as theorized by Norbert Wiener.[6] In 1950, Wiener theorized that it would be possible to extract the informational model of a human being and connect it to other information machines. In 1954, after the Macy conference in Paris, computer science dissociated itself from the cybernetic trend and entered a new era, driven by an idea that focused on a brain-computer analogy, a principle endorsed by Turing and Von Neumann.[7] This approach views mathematical and statistical logic as universal values, the only ones capable of understanding and transforming the world. Over time, computers would come to replace human decision-making and knowledge formation. Human beings would no longer be the measure of all things.[8] The computer would become a thinking machine, capable of imitating humans (see the Turing test[9]). In Turing and Von Neumann’s vision of computers, the human body has completely disappeared, and the question becomes not who thinks but what thinks.

This question of the human body, and by extension the digital body and its representation, was introduced in the 1990s, at the same time as the cyberpunk trend and Neal Stephenson’s novel Snow Crash (1992, translated into French as Le Samouraï virtuel), which described and prefigured what would become the avatar condition on informational networks. Since then, avatars have become a reality that must be taken into consideration when studying the continuum of information technologies.

Katherine Hayles, in How We Became Posthuman (1999), argues that information has “lost its body,” not because the body is hidden, but because it is embodied differently. In the chapter “Toward Embodied Virtuality,” Hayles makes four arguments about the ways in which humans are already becoming post-human. First, in privileging informational pattern over materiality, corporeality comes to be seen biologically as an “accident of nature” rather than an inevitability of life. Second, post-human consciousness is considered an epiphenomenon: an accessory phenomenon that accompanies an essential phenomenon without playing a role in its appearance. Her third argument stipulates that post-human theory considers the body to be the original prosthesis, which one learns to manipulate, so that extending or replacing it with other synthetic prostheses continues a learning process that began before birth. Her fourth and most important argument is that post-human theory configures the human so that it can be connected to and directly integrated with intelligent machines.

In this post-humanist vision, there would be no essential difference or absolute demarcation between bodily existence and its digital simulation, between cybernetic mechanism and biological organism, between robot technology and human goals, but rather the prospect of a hybrid future:

With posthumanism, we no longer consider a technological interface as taking the form of an instrument’s ergonomics, extending the body while remaining external, but as taking the form of a reciprocal penetration that challenges its separation and centrality by questioning its identity and degree of freedom. (Antonioli, 2011, p. 173‑180)

38This vision is completely disassociated from the techno-determinist trend, and even more so from transhumanism, which embraces technology in a totalitarian way, without truly criticizing its issues. It is rooted rather in the cyborg approach, as defined in the post-feminist writing of Donna Haraway (1984).

A “cyborg” approach also implies a new look at technology, in which the machine can no longer be considered simply “a thing” or “a tool.” Machines are an aspect of our corporeality and our sensibility, an essential element in producing subjectivity. Machines cannot be reduced to a threat of domination, since we are called to be responsible for them, just as we are responsible for the boundaries and limits we set (Antonioli, 2011, p. 173‑180).


Jumping off from Haraway’s cyborg approach, we can observe that what fundamentally characterizes post-humanist thought is, first of all, the act of putting the human being at the center of the human/machine dynamic; this is precisely the case with virtual reality. With virtual reality, the human being once again becomes the measure of all things, and this dynamic results in an intelligent and rational symbiosis between living and non-living ecosystems. In this way, post-humanism differs from transhumanism because, in keeping with its criticism of Enlightenment humanism, it privileges hybridization (making with, creating with), which is categorically opposed to the perspective of man’s complete dissolution within the informational matrix, as the transhumanists view it. Haraway suggests thinking and creating (“think we must”), taking as a starting point the concept of sympoiesis, which she defines as “organizing complex, dynamic, responsive, localized, and historical systems” in a human/machine/living dynamic that does away with the human/non-human binary opposition. With this approach, Haraway says she is moving beyond posthumanism and Anthropocene/Capitalocene issues, towards the “Chthulucene,” referring to the strange deities imagined by fantasy writer H. P. Lovecraft.[10]

Conclusion: virtualities beyond current formats?

If virtual reality is situated at one end of the reality-virtuality continuum, as described by Milgram (1994), and assuming that nothing else is conceivable beyond information theory as it exists, then VR could be considered the final outcome, the place where all types and levels of possible virtualization are synthesized (multimedia, hypermedia, metamedia, etc.). We no longer need to wonder whether virtual elements and virtualization are desirable, since they have already permeated our world and shifted our way of being in it: we have perceived the world through machine eyes and ears since the 19th century, and computerization has only made this process more fluid. However, we may soon reach a technological and structural impasse, because current technologies do not achieve a total VR immersion simulation, even though future devices in the works promise a world made for Web 3.0.

The concept of the “post-human” repositions the boundaries of what it means to lead a human life by encompassing both living beings and machines. However, since the 1950s, information technologies have belonged to an elite group of mostly white males, subordinate to the power of the market, subverting the role of information technologies in what Guattari calls “micro-fascisms” (Genosko, 2017).

Since the 1980s, there have been attempts to represent humans (typically as a white man) in a functioning virtual reality environment. But these technologies have only been partially developed (in the automotive industry and flight simulators), and the promise of moving collectively towards a democratic and fluid virtual reality has been irremediably postponed. In the space of ten years, due to the monopoly of large groups over the VR market (groups such as Facebook, Apple, and recently ByteDance), virtual reality has gone from an experimental stage to an essentially “corporate” stage, involving private brands and companies, which has led to a certain number of problems for artists and researchers alike.

In order to explore virtual reality in depth, without stalling at the gaming stage, the field must open up again; we must be allowed the freedom to conduct our research outside the constraints and control of VR platforms, and not be forced to adhere to an exclusively corporate vision of virtual reality. We are at the gates of a multiple world, a world whose surface we have only scratched and whose potential we have only just begun to make out; this world extends far beyond the current limited model, which resembles a “futuristic” capitalist dystopia.


Bibliography

Altman, R. (1992). Sound theory, sound practice. AFI, Routledge.

Benjamin, W. (2016). L’œuvre d’art à l’époque de sa reproductibilité technique [1935]. Allia.

Breton, P. (1987). Une histoire de l’informatique. La Découverte.

Fuchs, P., Moreau, G., & Berthoz, A. (2000). Le traité de la réalité virtuelle, volume 1 : L’Homme et l’environnement virtuel. Presses des Mines.

Gazzard, A. (2009). The avatar and the player: Understanding the relationship beyond the screen [Conference]. Conference in Games and Virtual Worlds for Serious Applications, University of Hertfordshire, College Lane, Hatfield, United Kingdom.

Genosko, G. (2017). Les trous noirs de la politique: résonances du microfascisme. La Deleuziana, revue en ligne de philosophie, 5.

Haraway, D. (1984). A cyborg manifesto. Socialist Review, 80, 65-108.

Haraway, D. (2020). Vivre avec le trouble. Les éditions des mondes à faire.

Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. The University of Chicago Press.

Lucas, J. F. & Amato, É. A. (2013). Mondes, points de vue, personnages : l’avatar comme enveloppe pilotable. In É. A. Amato & É. Perény (eds.), Les avatars jouables des mondes numériques. Théories, terrains et témoignages de pratiques interactives (p. 109-133). Hermès-Lavoisier.

Schafer, R. M. (1979). Le paysage sonore. Jean-Claude Lattès.

Platon. (1998). Protagoras. Flammarion.

Plessiet, C. (2019). Quand la marionnette coupe ses fils. Recherches sur l’acteur virtuel [Habilitation à diriger des recherches]. Images Numériques et Réalité Virtuelle (INRéV), laboratoire Arts des images et art contemporain (AIAC EA4010), Université Paris 8 Vincennes-Saint-Denis.

Schaeffer, P. (1966). Traité des objets musicaux. Seuil.

Stephenson, N. (1992). Le Samouraï virtuel (Snow Crash). Le livre de poche.

Sterne, J. (2015). Une histoire de la modernité sonore. La Découverte/Philharmonie de Paris-Cité de la musique.

Thalmann, D. (1996). A new generation of synthetic actors: The real-time and interactive perceptive actors (p. 200-219). Computer Graphics Lab. https://pdfs.semanticscholar.org/9043/36c4809b4f191132642f786927a3d00eeeed.pdf.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.

Webster, C. & Sèdes, A. (2018). Empty room, exploring the plasticity of electroacoustic music spatialization in VR with Ambisonic 3D and Binaural techniques [Conference]. Conference on Sound Ecology and Media Culture, October 2018, Darmstadt-Dieburg, Germany. hal-02276901

Webster, C., Raboisson, N., Lamarche, O., Couprie, P., & Genevois, H. (2020). Vers un acousmonium en immersion VR en ambisonie 3D et binaural [Conference]. Journées d’informatique musicale, October 2020, Strasbourg, France. hal-02977660


Notes

1 EUR ArTeC project, “VR Auditory Space.”

2 VR Auditory Space, demo.

3 Developed at Labex Arts-H2H between 2012 and 2015, the HOA Library is a set of C++ and FAUST classes and software implementations in the form of Max, PureData, and VST objects made for higher order ambisonics.

4 Unity 3D is a unified multiplatform software package made for video games and virtual reality.

5 VRAS demo walkthrough.

6 Norbert Wiener (1894-1964), American mathematician, theorist, and researcher, and the founding father of cybernetics.

7 Turing and Von Neumann’s take on the idea of the computer as a brain equivalent is described in Philippe Breton’s book, Une histoire de l’informatique, published by La Découverte in 1987.

8 This maxim (“man is the measure of all things”) is attributed to Protagoras in Plato’s dialogue of the same name; the idea here being that with the rise of information and artificial intelligence, human reason would eventually be replaced by machine reason and consciousness.

9 This refers to a machine’s ability to imitate human conversation. The test was described by Alan Turing in 1950 in his publication Computing Machinery and Intelligence.

10 According to Haraway, Chthulucene (Haraway spells Chthulu her own way) is not a direct reference to H. P. Lovecraft’s racist and misogynistic nightmare, but a metaphor for the sprawling forces at work in living creatures: “Chthulucene is entangled in a myriad of temporalities and spatialities, and in a myriad of entities-within-intra-active-assemblies, which can fall under the category of more-than-human, other-than-human, inhuman, and human-as-human” (Haraway, 2017, p. 223).


How to cite this article

Electronic reference

Christine Webster and Sophia Kourkoulakou, “Composing in virtual immersion: avatar and representation,” Hybrid [Online], 9 | 2022, published online on 30 November 2022, accessed on 3 February 2023. URL: http://journals.openedition.org/hybrid/2968; DOI: https://doi.org/10.4000/hybrid.2968


Authors

Christine Webster

Christine Webster is a composer and sound engineer. Since 2008, she has been experimenting with the spatialization of electroacoustic music in virtual reality installations. She developed the VRAS project as part of her dissertation, which explores and analyzes emerging methods of composition and spatialization in shared digital spaces. This dissertation is in its final stages of development at Université Paris 8, in the Doctoral School of Aesthetic Science and Art Technologies and at the Center for Computer Science Research and Musical Creation (CICM), under the supervision of Anne Sèdes, with François Garnier, head of the EnsadLab Spatial Media group, as co-supervisor.


Sophia Kourkoulakou

Sophia Kourkoulakou is currently at work on her dissertation, (Design pour) l’Art interactif : une techno-anthropologie numérique [(Design for) Interactive Art: A Digital Techno-Anthropology], at Université Paris 8, in the Doctoral School of Aesthetic Science and Art Technologies, in the Visual Art and Contemporary Art (AIAC) lab, with the Digital Images and Virtual Reality team (INRéV), under the supervision of Chu Yin Chen and François Garnier. She is a member of the Spatial Media research group at EnsadLab, École des Arts Décoratifs de Paris/PSL.


Copyright

All rights reserved
