Project-team

MULTISPEECH

Multimodal Speech in Interaction

MULTISPEECH is a joint research team between Université de Lorraine, Inria, and CNRS. It is part of department D4, “Natural language and knowledge processing”, of LORIA.

The Multispeech team considers speech as a multimodal signal with several facets: acoustic, facial, articulatory, gestural, etc. The general objective of Multispeech is to study the analysis and synthesis of these facets and their multimodal coordination, in the context of human-human and human-computer interaction.

The research program is organized around the following three axes:

  • Data-efficient and privacy-preserving learning.
  • Extracting information from speech signals.
  • Multimodal Speech: generation and interaction.

Inria centre
Inria centre at Université de Lorraine

In partnership with
CNRS, Université de Lorraine

Contacts

Team leader
Emmanuelle Deschamps

Team assistants
Sylvie Hilbert
Delphine Hubert
Elsa Maroko
Gallown Nizard