Could you give a brief description of the project?
“The goal of AVATAR is to explore the embodiment of users in virtual environments through avatars for the purposes of interacting with and perceiving them”, explains Ludovic Hoyet, coordinator of AVATAR.
“We know that being able to see your body allows you to perform certain tasks. The overarching ambition of the project is to enable users to perform natural actions by wearing a virtual reality headset, simultaneously improving the acceptability of the avatar in the eyes of its user and the avatar’s capacity to improve the user’s experience of the virtual environment.”
How did your project come about?
“It all started with a discussion between colleagues in Rennes.” The MimeTIC team’s focus is on the modelling of humans, particularly through movement. HYBRID, meanwhile, is concerned with virtual reality. The core problem for AVATAR emerged at the point at which the two converge.
“Following positive initial results, which were achieved through our two teams working together, including on the extent to which users were able to get to grips with a structurally different hand (one with six fingers, for example), we came up with the idea of setting up an Inria Challenge. We built our team by seeking out skillsets that we were lacking: we went to Grenoble for expertise on volumetric capture, to GraphDeco for expertise on rendering and POTIOC and LOKI for expertise on creating dedicated avatar interfaces and user feedback.”
What is it that makes your project innovative?
Despite the growing interest in virtual reality, it is rare for applications to use avatars that are capable of using all of the possibilities enabled by complete representation.
You need to understand how a user becomes embodied in an avatar and how they can interact naturally with the virtual environment through it.
The expertise within AVATAR covers the whole processing chain, from modelling shape and texture to animation, interaction and multisensory feedback, allowing the team “to tackle transdisciplinary questions.”
What are the project’s main areas of focus?
The project has two main areas of focus: one technological, the other fundamental. The technological aspect is concerned with the representation and animation of the avatar. The fundamental aspect is concerned with perception and the psychology of embodiment, that is, the user’s sense and feeling of inhabiting the avatar.
What have the first results been?
“With regard to the fundamental aspect, papers have been published on virtual embodiment. In terms of the technological aspect, there have been results on the extent to which the movements of an individual can be adapted to an avatar with a different body type. For instance, how can you make sure that the contacts a user makes when they move are replicated correctly when the avatar is bigger, slimmer or stockier, while retaining the semantics of the action being performed?”
How will your discoveries be used? Are there any particular fields that you're working towards?
First and foremost, all applications of virtual reality in video games and immersive experiences, but also in entertainment through immersive cinema.
When it comes to cinema, viewers are still passive for the moment, but in the longer term users could live the experience as a minor character, like an extra, giving them a degree of control over the action.
There are further applications in the fields of psychology and medicine, including in motor rehabilitation. One of the partners of the project, Mel Slater (University of Barcelona), who is an expert in the field, is interested in psychological issues such as phobias and behavioural bias.
There is a great deal of potential when it comes to video conferencing applications, exploiting non-verbal language and positioning in space in order to improve communication.
Finally, there is sport, reproducing conditions linked to stress and immersion in a competitive context in order to improve motor skills through the use of virtual reality.
What’s next?
Work is starting on new ways of controlling avatars and on obtaining multisensory feedback on what is happening in the surrounding environment. Progress is also being made on facial animation and deep learning.
The major target for the challenge over the next two years will be to create a common platform that we can share with all other Inria teams working on avatars, helping to accelerate new research.
Coordinator: Ludovic Hoyet, MimeTIC (Inria Rennes)
Partners: GraphDeco (Inria Sophia), Hybrid (Inria Rennes), Loki (Inria Lille), MimeTIC (Inria Rennes), Morpheo (Inria Grenoble), Potioc (Inria Bordeaux), Mel Slater (Uni. Barcelona), InterDigital and Faurecia.