Planned seminars

Europe/Lisbon —

Florent Krzakala, EPFL

The increasing dimensionality of data in the modern machine learning age presents new challenges and opportunities. The high-dimensional setting allows one to use powerful asymptotic methods from probability theory and statistical physics to obtain precise characterizations and to develop new algorithmic approaches. There is indeed a decades-long tradition in statistical physics of building and solving such simplified models of neural networks.

I will give examples of recent works that build on powerful methods from the physics of disordered systems to analyze different problems in machine learning and neural networks, including overparameterization, kernel methods, and the gradient descent algorithm in a high-dimensional non-convex setting.
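
As a rough illustration of the kind of high-dimensional experiment these asymptotic methods describe (a minimal sketch, not taken from the talk; the dimensions, linear teacher, noise level, and ridge penalty below are all assumed for illustration), the following snippet fits random-features ridge regression on synthetic Gaussian data and reports the test error as the overparameterization ratio p/n varies.

```python
# Minimal sketch (assumed toy setup): random-features ridge regression on
# synthetic Gaussian data.  The test error of exactly this kind of model,
# as n, d, p grow proportionally, is what the asymptotic theory characterizes.
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test = 100, 300, 2000
w_star = rng.normal(size=d) / np.sqrt(d)            # assumed linear "teacher"

def sample(m):
    X = rng.normal(size=(m, d))
    y = X @ w_star + 0.1 * rng.normal(size=m)       # noisy labels
    return X, y

X_train, y_train = sample(n)
X_test, y_test = sample(n_test)

for p in [50, 150, 300, 600, 1200]:                 # number of random features
    F = rng.normal(size=(d, p)) / np.sqrt(d)        # frozen random first layer
    relu = lambda Z: np.maximum(Z, 0.0)
    Z, Z_test = relu(X_train @ F), relu(X_test @ F) # ReLU random features
    lam = 1e-3                                      # small ridge penalty (assumed)
    a = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y_train)
    mse = np.mean((Z_test @ a - y_test) ** 2)
    print(f"p/n = {p / n:4.2f}   test MSE = {mse:.4f}")
```

With a small ridge penalty, sweeping p past n in this kind of experiment typically traces out the double-descent shape of the test error, one of the overparameterization phenomena that the asymptotic analysis pins down exactly.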

Europe/Lisbon —

Joan Bruna, Courant Institute and Center for Data Science, NYU

High-dimensional learning remains an outstanding phenomenon where experimental evidence outpaces our current mathematical understanding, mostly due to the recent empirical successes of Deep Learning algorithms. Neural Networks provide a rich yet intricate class of functions with the statistical ability to break the curse of dimensionality, and into whose architecture physical priors can be tightly integrated to improve sample efficiency. Despite these advantages, an outstanding theoretical challenge in these models is computational, i.e., providing an analysis that explains successful optimization and generalization in the face of existing worst-case computational hardness results.

In this talk, I will focus on the framework that lifts parameter optimization to an appropriate measure space. I will cover existing results that guarantee global convergence of the resulting Wasserstein gradient flows, as well as recent results that study typical fluctuations of the dynamics around their mean-field evolution. We will also discuss extensions of this framework beyond vanilla supervised learning, to account for symmetries in the function as well as for competitive optimization.
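
To make the measure-space lift concrete (a minimal sketch, not material from the talk; the data, width, and step size are assumptions), the snippet below trains a wide two-layer ReLU network written in the mean-field parametrization: the prediction is an average over m "particles", and each particle moves along m times the gradient of the loss with respect to its own parameters, the velocity field whose many-particle limit is the Wasserstein gradient flow referred to above.

```python
# Minimal sketch (assumed toy setup): a two-layer ReLU network in the
# mean-field parametrization, trained by plain gradient descent viewed as
# a system of m interacting particles (one per neuron).
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 5, 500, 200                         # input dim, particles, samples

X = rng.normal(size=(n, d))
y = np.tanh(X @ rng.normal(size=d))           # assumed synthetic target

W = rng.normal(size=(m, d))                   # particle positions: input weights
a = rng.normal(size=m)                        # particle positions: output weights

def predict(X):
    return np.maximum(X @ W.T, 0.0) @ a / m   # 1/m scaling: average over particles

lr = 0.5
for t in range(1000):
    H = np.maximum(X @ W.T, 0.0)              # n x m hidden activations
    r = H @ a / m - y                         # residuals of the squared loss
    # Each particle follows m * (gradient of the loss w.r.t. its own parameters),
    # the velocity field of the limiting Wasserstein gradient flow.
    grad_a = H.T @ r / n
    grad_W = ((r[:, None] * (H > 0)).T @ X) * a[:, None] / n
    a -= lr * grad_a
    W -= lr * grad_W

print("train MSE:", np.mean((predict(X) - y) ** 2))
```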

Europe/Lisbon —

Bin Dong, BICMR, Peking University

Deep learning continues to dominate machine learning and has been successful in computer vision, natural language processing, etc. Its impact has now expanded to many research areas in science and engineering. In this talk, I will mainly focus on some recent impact of deep learning on computational mathematics. I will present our recent work on bridging deep neural networks with numerical differential equations. On the one hand, I will show how to design transparent deep convolutional networks to uncover hidden PDE models from observed dynamical data. On the other hand, I will present our preliminary attempt to establish a framework based on deep reinforcement learning for solving 1D scalar conservation laws, and a meta-learning approach, based on the multigrid method, for solving parameterized linear PDEs.
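
One concrete face of the network/numerical-differential-equation bridge (a minimal sketch under assumed data, not the speaker's method): a residual update u ← u + Δt·conv(u, k) is one forward-Euler step of an evolution equation whose spatial operator is the convolution stencil k, so fitting the stencil to observed snapshots recovers a hidden PDE. The snippet simulates the 1D heat equation and then recovers the Laplacian stencil by least squares; the grid, viscosity, and initial condition are assumptions.

```python
# Minimal sketch (assumed setup): recover a hidden PDE from snapshots by
# fitting a convolution stencil, exploiting that a residual step
# u <- u + dt * conv(u, k) is forward Euler for du/dt = (k * u).
import numpy as np

nx, dx, dt, nu = 128, 1.0 / 128, 2e-5, 0.1
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-100 * (x - 0.5) ** 2)                    # assumed initial condition

def laplacian(u):                                    # periodic second difference
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

# Generate a trajectory of the heat equation u_t = nu * u_xx by forward Euler.
snaps = [u.copy()]
for _ in range(200):
    u = u + dt * nu * laplacian(u)
    snaps.append(u.copy())
snaps = np.array(snaps)

# "Learn" a 3-tap stencil k from the data: (u^{t+1} - u^t)/dt ~ conv(u^t, k).
U, dU = snaps[:-1], (snaps[1:] - snaps[:-1]) / dt
shifts = np.stack([np.roll(U, 1, axis=1), U, np.roll(U, -1, axis=1)], axis=-1)
k = np.linalg.lstsq(shifts.reshape(-1, 3), dU.reshape(-1), rcond=None)[0]
print("learned stencil * dx^2 / nu:", k * dx**2 / nu)  # recovers ~[1, -2, 1]
```

Stacking such residual steps with learnable stencils is one way to read the "transparent" convolutional networks mentioned above: the trained filters can be interpreted directly as discretized differential operators.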

Europe/Lisbon —

Carola-Bibiane Schönlieb, DAMTP, University of Cambridge

Inverse problems in imaging range from tomographic reconstruction (CT, MRI, etc.) to image deconvolution, segmentation, and classification, just to name a few. In this talk I will discuss approaches to inverse imaging problems which have both a mathematical modelling (knowledge-driven) and a machine learning (data-driven) component. Mathematical modelling is crucial in the presence of ill-posedness, making use of information about the imaging data to narrow down the search space. Such an approach results in highly generalizable reconstruction and analysis methods which come with desirable solution guarantees. Machine learning, on the other hand, is a powerful tool for customising methods to individual data sets. Highly parametrised models, such as deep neural networks in particular, are powerful tools for accurately modelling prior information about solutions. The combination of these two paradigms, getting the best from both of these worlds, is the topic of this talk, furnished with examples for image classification under minimal supervision and for tomographic image reconstruction.
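
As a minimal sketch of the knowledge-driven side of this combination (assumed forward operator, noise level, and regularization weight; not from the talk itself), the snippet below solves a classical variational reconstruction for 1D deblurring, minimizing a data-fidelity term plus a hand-crafted smoothness regularizer by gradient descent. In the hybrid approaches discussed in the talk, the regularizer, or the reconstruction update itself, can instead be parametrized by a neural network and learned from data.

```python
# Minimal sketch (assumed setup): variational reconstruction for 1D deblurring,
#   min_x  0.5 * ||A x - y||^2 + lam * 0.5 * ||D x||^2,
# solved by gradient descent; A is a Gaussian blur, D a finite-difference operator.
import numpy as np

rng = np.random.default_rng(2)
n = 200
t = np.linspace(0.0, 1.0, n)
x_true = (t > 0.3).astype(float) - 0.5 * (t > 0.7)     # piecewise-constant signal

kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
kernel /= kernel.sum()

def blur(v):                                           # circular Gaussian blur (A = A^T)
    return np.convolve(np.concatenate([v[-10:], v, v[:10]]), kernel, mode="valid")

D = lambda v: np.diff(v, append=v[:1])                 # periodic forward difference
Dt = lambda v: -np.diff(v, prepend=v[-1:])             # its adjoint

y = blur(x_true) + 0.02 * rng.normal(size=n)           # noisy blurred measurement

lam, step = 0.05, 0.5
x = np.zeros(n)
for _ in range(500):
    grad = blur(blur(x) - y) + lam * Dt(D(x))          # A^T(Ax - y) + lam * D^T D x
    x -= step * grad

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```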

Europe/Lisbon —

Tommaso Dorigo, Italian Institute for Nuclear Physics

I will discuss the impact of nuisance parameters on the effectiveness of supervised classification in high-energy physics problems, and techniques that may mitigate or remove their effect in the search for optimal selection criteria and variable transformations. The approaches discussed include nuisance-parametrized models, modified or adversarial losses, semi-supervised learning approaches, and inference-aware techniques.
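
As a toy illustration of the adversarial-loss idea (a sketch under assumed synthetic data, architectures, and trade-off weight, not the speaker's setup), the snippet below trains a classifier together with an adversary that tries to regress the nuisance parameter from the classifier output; the classifier is rewarded both for accuracy and for keeping the adversary uninformative, in the spirit of "learning to pivot".

```python
# Minimal sketch (assumed toy problem): adversarial decorrelation of a
# classifier from a nuisance parameter z.  The adversary regresses z from the
# classifier output; the classifier maximizes the adversary's error.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 4000
z = torch.rand(n, 1)                             # nuisance parameter
y = (torch.rand(n, 1) < 0.5).float()             # class label (signal vs background)
x = torch.randn(n, 2)
x[:, 0:1] += 1.5 * y                             # class-dependent feature
x[:, 1:2] += 2.0 * z * (1 - y)                   # nuisance shifts the background

clf = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
adv = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_c = torch.optim.Adam(clf.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-2)
bce, mse, lam = nn.BCELoss(), nn.MSELoss(), 5.0  # lam trades accuracy vs robustness

for step in range(2000):
    # adversary step: predict the nuisance from the (detached) classifier output
    opt_a.zero_grad()
    mse(adv(clf(x).detach()), z).backward()
    opt_a.step()
    # classifier step: be accurate while keeping the adversary uninformative
    opt_c.zero_grad()
    p = clf(x)
    (bce(p, y) - lam * mse(adv(p), z)).backward()
    opt_c.step()

print("classification loss:", bce(clf(x), y).item())
print("adversary MSE (higher means less nuisance leakage):", mse(adv(clf(x)), z).item())
```

The weight lam (an assumed value here) controls the trade-off between raw classification accuracy and insensitivity to the nuisance parameter.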

Europe/Lisbon —

Gitta Kutyniok, Institut für Mathematik - TU Berlin
To be announced

Europe/Lisbon —

René Vidal, Mathematical Institute for Data Science, Johns Hopkins University
To be announced

Europe/Lisbon —

Anna C. Gilbert, Yale University
To be announced

Europe/Lisbon —

Xavier Bresson, Nanyang Technological University
To be announced

Europe/Lisbon —

Caroline Uhler, Institute for Data, Systems, and Society, MIT
To be announced

Europe/Lisbon — Online

Thomas Strohmer, University of California, Davis
To be announced