Every month, the joint laboratory invites outside speakers to take part in seminars for its partners.

Matthieu Wyart (EPFL): “Learning Hierarchical Representations with Deep Nets and Large Language Models”

Abstract: Learning generic tasks in high dimension is impossible. Yet, deep networks can classify images, and large models can learn the structure of language and produce meaningful texts. In both cases, building a hierarchical representation of the data is believed to be key to success. How is it achieved? How much data is needed for that, and how does it depend on the structure of the data? I will introduce models of synthetic hierarchical data for which an understanding of these questions is emerging, both for supervised and self-supervised learning. In the latter case, our theoretical framework predicts that for networks trained on next-token prediction, a hierarchical representation is built bottom-up as the training set grows. The analysis leads to predictions that we test on Shakespeare and Wikipedia datasets. In this framework, the scaling laws of training curves are associated with a sequence of emergent phenomena in which deeper and deeper representations are progressively built.
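The synthetic hierarchical data mentioned in the abstract are typically produced by grammar-like generative models, in which a class label is expanded level by level into a sequence of observable symbols. The sketch below is only an illustration of that general idea, not the speaker's exact construction; all parameter names and choices (number of classes, vocabulary size, branching factor, depth, rules per symbol) are assumptions made for the example.

```python
import random

def build_grammar(num_classes=4, vocab=8, branching=2, depth=3,
                  rules_per_symbol=2, seed=0):
    """Draw a fixed random grammar: at each level, every symbol gets a few
    production rules, each a tuple of `branching` child symbols."""
    rng = random.Random(seed)
    grammar = []
    for level in range(depth):
        n_parents = num_classes if level == 0 else vocab
        level_rules = {
            s: [tuple(rng.randrange(vocab) for _ in range(branching))
                for _ in range(rules_per_symbol)]
            for s in range(n_parents)
        }
        grammar.append(level_rules)
    return grammar

def sample(grammar, label, rng):
    """Expand the class label top-down through the hierarchy;
    the leaves form the observed sequence (length = branching**depth)."""
    symbols = [label]
    for level_rules in grammar:
        symbols = [c for s in symbols for c in rng.choice(level_rules[s])]
    return symbols

if __name__ == "__main__":
    grammar = build_grammar()
    rng = random.Random(1)
    for label in range(4):
        print(label, sample(grammar, label, rng))
```

In such toy settings, a supervised learner must recover the class label from the leaf sequence, while a self-supervised learner trained on next-token prediction can, as the abstract suggests, progressively infer the lower levels of the hierarchy before the higher ones as more training data become available.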
Bio: Matthieu Wyart is a French physicist. He is a professor of physics at EPFL (École Polytechnique Fédérale de Lausanne) and the head of the Physics of Complex Systems Laboratory. Wyart's research has encompassed fields such as the architecture of allosteric materials, the theory of deep learning, elasticity and mechanical stability in disordered solids, granular and suspension flows, the glass and rigidity transitions, marginal stability at random close packing and in other glasses, and the yielding transition. More recently his work has focused on machine learning, in particular data structure and generative models. M. Wyart is the recipient of the Simons Investigator Award, the Sloan Fellowship, the G. Carrier Fellowship, and the Dresden Physics Prize, and is a fellow of the American Physical Society.