Every month, the joint laboratory invites outside speakers to take part in seminars for its partners.

Robin Algayres “Generative Spoken Language Modelling”

Abstract: Generative Spoken Language Modelling is the task of learning the acoustic and linguistic characteristics of a language from raw audio (no text, no labels). These spoken language models (SLMs) are trained autoregressively on the speech signal by segmenting speech into sequences of (continuous or discrete) tokens. By bypassing the transcription of speech into text, SLMs preserve important prosodic information (emotions, rhythm, intonation, …) that is useful for language modelling. A set of metrics has recently been developed to automatically evaluate the learned representations at the acoustic and linguistic levels, for both encoding and generation.
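
To make the idea concrete, here is a minimal sketch of autoregressive language modelling over discrete speech tokens. It is an illustrative toy, not the speaker's system: the unit sequences below are random stand-ins for tokens that would normally come from quantising self-supervised speech features (e.g. k-means over learned representations), and all names and sizes are assumptions.

```python
# Toy spoken language model: a next-unit predictor trained on sequences of
# discrete speech tokens, standing in for units derived from raw audio.
import torch
import torch.nn as nn

VOCAB = 100   # number of discrete speech units (assumed)
DIM = 256     # embedding / hidden size (assumed)

class SpokenLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                 # tokens: (batch, time)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)                    # logits for the next unit

model = SpokenLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

units = torch.randint(0, VOCAB, (8, 50))       # placeholder unit sequences
opt.zero_grad()
logits = model(units[:, :-1])                  # predict unit t+1 from prefix
loss = loss_fn(logits.reshape(-1, VOCAB), units[:, 1:].reshape(-1))
loss.backward()
opt.step()
```

Because the model only ever sees speech-derived units, whatever prosodic distinctions survive the tokenisation remain available to the language model, which is the point the abstract makes.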

Bio: Robin Algayres is a PhD student under the supervision of Emmanuel Dupoux (ENS/Inria/Meta AI) and Benoit Sagot (Inria). During his PhD he has worked on segmenting speech into words and on spoken language models that operate on segmented speech tokens.


Daniel Stoller “Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages”

(Best Industry Paper Award at IEEE ICASSP 2023)

Abstract: Lyrics alignment has gained considerable attention in recent years. State-of-the-art systems either re-use established speech recognition toolkits, or design end-to-end solutions involving a Connectionist Temporal Classification (CTC) loss. However, both approaches suffer from specific weaknesses: toolkits are known for their complexity, and CTC systems use a loss designed for transcription, which can limit alignment accuracy. In this paper, we instead use a contrastive learning procedure that derives cross-modal embeddings linking the audio and text domains. This way, we obtain a novel system that is simple to train end-to-end, can make use of weakly annotated training data, jointly learns a powerful text model, and is tailored to alignment. The system is not only the first to yield an average absolute error below 0.2 seconds on the standard Jamendo dataset, but it is also robust to other languages, even when trained on English data only. Finally, we release word-level alignments for the JamendoLyrics Multi-Lang dataset.
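
As a rough illustration of the contrastive objective the abstract describes, the sketch below computes a symmetric InfoNCE-style loss between paired audio and text embeddings. The encoders and data are placeholders, not the paper's architecture, and the temperature value is an assumption.

```python
# Contrastive (InfoNCE-style) loss linking audio and text embeddings.
# Matched audio/text pairs sit on the diagonal of the similarity matrix.
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    # audio_emb, text_emb: (batch, dim) embeddings of paired audio
    # excerpts and lyric snippets, produced by some pair of encoders.
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = a @ t.T / temperature          # pairwise cosine similarities
    targets = torch.arange(len(a))          # i-th audio matches i-th text
    # Symmetric loss: audio-to-text and text-to-audio retrieval.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

audio_emb = torch.randn(16, 128)            # placeholder encoder outputs
text_emb = torch.randn(16, 128)
print(contrastive_loss(audio_emb, text_emb))
```

At alignment time, one would typically score every audio frame against every lyric token in this shared embedding space and extract a monotonic path through the resulting similarity matrix to obtain word-level timestamps; the details of that step in the paper are not reproduced here.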

Bio: Daniel Stoller is a researcher working on machine learning for audio. He obtained his PhD from Queen Mary University of London before joining MIQ, the music intelligence team at Spotify. His interests include audio source separation, lyrics alignment, generative modelling and representation learning.