Planned seminars

Europe/Lisbon
Instituto Superior Técnico (https://tecnico.ulisboa.pt)

Inês Hipólito, Humboldt-Universität

Living beings do an extraordinary thing: simply by being alive, they resist the second law of thermodynamics. This law stipulates that, left to themselves, physical systems tend towards dissipation through increasing entropy, or disorder. From minimally cognitive organisms such as plants to more complex organisms equipped with nervous systems, all living systems adjust and adapt to their environments, thereby resisting this tendency. Impressively, while all animals cognitively enact and survive in their local environments, more complex systems do so also by actively constructing those environments, thereby defying not only the second law but also evolutionary selective pressures. Because all living beings defy the second law by adjusting to and engaging with their environment, a prominent question is how living organisms persist while engaging in adaptive exchanges with their complex environments. In this talk I will offer an overview of how the Free Energy Principle (FEP) offers a principled solution to this problem. The FEP prescribes that living systems maintain themselves in non-equilibrium steady states by restricting themselves to a limited number of states; it has been widely applied to explain neurocognitive function and embodied action, to develop artificial intelligence, and to inspire models of psychopathology.
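
In the standard formulation (a sketch added here for orientation, not taken from the abstract), the quantity being minimised is the variational free energy F, an upper bound on surprisal; with observations o, hidden states s, a generative model p(o, s), and an approximate posterior q(s),

\[
F[q, o] \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
\;=\; D_{\mathrm{KL}}\!\big[q(s)\,\big\|\,p(s \mid o)\big] \;-\; \ln p(o)
\;\ge\; -\ln p(o),
\]

so keeping F low keeps the organism within a small set of unsurprising, characteristic states.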

Europe/Lisbon
Instituto Superior Técnico (https://tecnico.ulisboa.pt)

Petar Veličković, DeepMind and University of Cambridge

The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach — such as computer vision, playing Go, or protein folding — are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, whereby adapted, often hierarchical, features capture the appropriate notion of regularity for each task, and second, learning by local gradient-descent type methods, typically implemented as backpropagation.
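
As a hedged illustration of these two principles (not part of the abstract), the NumPy sketch below trains a hidden layer of learned features by gradient descent via backpropagation; the toy task of fitting sin(x), the layer sizes, and the learning rate are all illustrative assumptions.

```python
import numpy as np

# Toy regression task chosen only for illustration: fit y = sin(x).
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 64).reshape(-1, 1)
y = np.sin(X)

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)   # learned feature extractor
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)    # linear read-out

lr = 0.1
for step in range(5000):
    # Forward pass: hidden features h (representation learning), prediction p.
    h = np.tanh(X @ W1 + b1)
    p = h @ W2 + b2
    loss = 0.5 * np.mean((p - y) ** 2)

    # Backward pass (backpropagation): chain rule applied layer by layer.
    grad_p = (p - y) / len(X)
    dW2 = h.T @ grad_p
    db2 = grad_p.sum(axis=0)
    dh = grad_p @ W2.T * (1 - h ** 2)     # derivative through tanh
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Local gradient-descent update of every parameter.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
```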

While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This talk is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications.

Such a 'geometric unification' endeavour, in the spirit of Felix Klein's Erlangen Program, serves a dual purpose: on the one hand, it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers; on the other hand, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provides a principled way to build future architectures yet to be invented.
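
As a hedged illustration of the symmetry viewpoint behind this programme (not part of the abstract), translation equivariance, the prior built into CNNs, can be checked numerically; the circular-convolution setup below is an assumption chosen so that the symmetry holds exactly, without boundary effects.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # a 1-D signal ("image")
w = rng.normal(size=8)   # a convolution filter

def circ_conv(x, w):
    # Circular convolution computed via the FFT convolution theorem.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(w)))

def shift(x, s):
    # Cyclic translation of the signal by s positions.
    return np.roll(x, s)

# Equivariance: convolving a shifted signal equals shifting the convolved signal.
lhs = circ_conv(shift(x, 3), w)
rhs = shift(circ_conv(x, w), 3)
print(np.allclose(lhs, rhs))   # True
```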