Europe/Lisbon
Online
The least-control principle for learning at equilibrium
Many models of interest in both neuroscience and machine learning can be expressed as dynamical systems at equilibrium. This class includes deep neural networks, equilibrium recurrent neural networks, and models arising in meta-learning. In this talk I will present a new principle for learning such equilibria with a temporally and spatially local rule. Our principle casts learning as a least-control problem: we first introduce an optimal controller to lead the system towards a solution state, and then define learning as reducing the amount of control needed to reach such a state. We show that incorporating learning signals within the dynamics as optimal control enables transmitting activity-dependent credit assignment information, avoids storing intermediate states in memory, and does not rely on infinitesimal learning signals. In practice, our principle yields performance matching that of leading gradient-based learning methods on an array of benchmark experiments. Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
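To make the idea concrete, here is a toy sketch of the least-control recipe, not the talk's actual algorithm: a small leaky recurrent network is nudged to a target equilibrium by a simple proportional controller (standing in for the optimal controller of the talk), and the weights are then adjusted with a local, Hebbian-like update that reduces the control the network needs. All names, the network form, the controller gain `k`, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.3, size=(n, n))        # recurrent weights to be learned
x = rng.normal(size=n)                        # fixed input
target = np.tanh(rng.normal(size=n))          # desired equilibrium state (toy task)

def controlled_equilibrium(W, x, target, k=0.5, steps=500, dt=0.1):
    """Relax the controlled leaky dynamics
        ds/dt = -s + tanh(W s + x) + u,
    with a proportional controller u = k (target - s),
    and return the settled state and the control it required."""
    s = np.zeros(n)
    u = np.zeros(n)
    for _ in range(steps):
        u = k * (target - s)
        s = s + dt * (-s + np.tanh(W @ s + x) + u)
    return s, u

# Control needed before any learning
_, u0 = controlled_equilibrium(W, x, target)
u0_norm = np.linalg.norm(u0)

# Learning: a local update (outer product of control and activity)
# that shrinks the residual control, epoch by epoch
for epoch in range(200):
    s, u = controlled_equilibrium(W, x, target)
    W += 0.1 * np.outer(u, s)

_, u_final = controlled_equilibrium(W, x, target)
print(u0_norm, np.linalg.norm(u_final))  # control magnitude shrinks with learning
```

Note the update `W += 0.1 * np.outer(u, s)` uses only locally available quantities (the control arriving at a unit and the presynaptic activity), which is the temporally and spatially local flavour the abstract describes; once the uncontrolled dynamics reach the solution state on their own, the control, and hence the update, vanishes.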