Recent seminars

Europe/Lisbon
Online

Andreas Döpp

Andreas Döpp, Faculty of Physics, Ludwig-Maximilians-Universität München
Machine-learning strategies in laser-plasma physics

The field of laser-plasma physics has experienced significant advancements in the past few decades, owing to the increasing power and accessibility of high-power lasers. Initially, research in this area was limited to single-shot experiments with minimal exploration of parameters. However, recent technological advancements have enabled the collection of a wealth of data through both experimental and simulation-based approaches.

In this seminar talk, I will present a range of machine learning techniques that we have developed for applications in laser-plasma physics [1]. The first part of my talk will focus on Bayesian optimization, where I will showcase our latest findings on multi-objective and multi-fidelity optimization of laser-plasma accelerators and neural networks [2-4].
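
To make the idea concrete, here is a minimal single-objective Bayesian-optimization loop with a Gaussian-process surrogate and the expected-improvement acquisition. The objective `beam_energy` and the two-parameter search space are hypothetical stand-ins for an expensive shot or simulation, not the speaker's actual setup; the multi-objective and multi-fidelity variants replace expected improvement with acquisitions such as expected hypervolume improvement [2].

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def beam_energy(x):
    # Hypothetical stand-in for an expensive experiment or simulation.
    return -np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))     # initial design: 5 shots, 2 laser parameters
y = np.array([beam_energy(x) for x in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for shot in range(20):
    gp.fit(X, y)
    # Expected improvement, evaluated on a dense random candidate set.
    cand = rng.uniform(0, 1, size=(2048, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    z = (mu - y.max()) / np.maximum(sigma, 1e-12)
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]       # most promising setting to try next
    X = np.vstack([X, x_next])
    y = np.append(y, beam_energy(x_next))

print("best setting:", X[np.argmax(y)], "best value:", y.max())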

In the second part of the talk, I will discuss machine learning solutions for tackling complex inverse problems, such as image deblurring or extracting 3D information from 2D sensors [5-6]. Specifically, I will discuss various adaptations of established convolutional network architectures, such as the U-Net, as well as novel physics-informed retrieval methods like deep algorithm unrolling. These techniques have shown promising results in overcoming the challenges posed by these intricate inverse problems.
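
As a schematic of the unrolling idea (a generic ISTA example under simple assumptions, not the speaker's architecture): a classical iterative solver for an inverse problem y = Ax + noise is truncated to a fixed number of iterations, each of which becomes one network layer; in a learned, LISTA-style variant, the per-layer step sizes and thresholds would be trained from data rather than fixed.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unrolled_ista(y, A, n_layers, lam):
    """Fixed-depth ISTA for y = A x + noise with an l1 sparsity prior.
    In a learned (LISTA-style) network, `step` and `lam` would become
    per-layer trainable parameters instead of fixed constants."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                # each iteration = one "layer"
        grad = A.T @ (A @ x - y)             # gradient of 0.5*||A x - y||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy usage: recover a sparse signal from noisy random linear measurements.
rng = np.random.default_rng(1)
A = rng.normal(size=(64, 128))
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.normal(size=64)
x_hat = unrolled_ista(y, A, n_layers=200, lam=0.05)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.5)[0])
```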

References

  1. Data-driven Science and Machine Learning Methods in Laser-Plasma Physics
  2. Expected hypervolume improvement for simultaneous multi-objective and multi-fidelity optimization
  3. Multi-objective and multi-fidelity Bayesian optimization of laser-plasma acceleration
  4. Pareto Optimization of a Laser Wakefield Accelerator
  5. Measuring spatio-temporal couplings using modal spatio-spectral wavefront retrieval
  6. Hyperspectral Compressive Wavefront Sensing

Additional file

Doepp slides.pdf

Europe/Lisbon
Room P3.10, Mathematics Building — Online

Rui Castro

Rui Castro, Mathematics Department, TU Eindhoven
Anomaly detection for a large number of streams: a permutation/rank-based higher criticism approach

Anomaly detection when observing a large number of data streams is essential in a variety of applications, ranging from epidemiological studies to monitoring of complex systems. High-dimensional scenarios are usually tackled with scan-statistics and related methods, requiring stringent distributional assumptions for proper test calibration. In this talk we take a non-parametric stance, and introduce two variants of the higher criticism test that do not require knowledge of the null distribution for proper calibration. In the first variant we calibrate the test by permutation, while in the second variant we use a rank-based approach. Both methodologies result in exact tests in finite samples. Our permutation methodology is applicable when observations within null streams are independent and identically distributed, and we show this methodology is asymptotically optimal in the wide class of exponential models. Our rank-based methodology is more flexible, and only requires observations within null streams to be independent. We provide an asymptotic characterization of the power of the test in terms of the probability of mis-ranking null observations, showing that the asymptotic power loss (relative to an oracle test) is minimal for many common models. As the proposed statistics do not rely on asymptotic approximations, they typically perform better than popular variants of higher criticism relying on such approximations. Finally, we demonstrate the use of these methodologies when monitoring the content uniformity of an active ingredient for a batch-produced drug product, and monitoring the daily number of COVID-19 cases in the Netherlands.
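
As a simplified sketch of the permutation-calibrated approach (not the authors' exact statistics; see the paper linked below): compute a per-stream p-value by permuting the pooled observations, form the higher-criticism statistic from those p-values, and calibrate its rejection threshold with a second layer of permutations.

```python
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    """Higher-criticism statistic computed from per-stream p-values."""
    p = np.sort(np.asarray(pvals))
    p = np.clip(p, 1e-12, 1 - 1e-12)      # guard the p = 1 edge case
    n = len(p)
    i = np.arange(1, n + 1)
    scores = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return scores[: max(1, int(alpha0 * n))].max()

def stream_pvalues(X, n_perm=200, rng=None):
    """Permutation p-value for each stream's mean: pool all observations,
    reshuffle them across streams, and compare the observed means with the
    resulting permutation distribution. Exchangeability holds when null
    observations are i.i.d."""
    if rng is None:
        rng = np.random.default_rng()
    obs = X.mean(axis=1)
    pooled = X.ravel().copy()
    null_means = np.empty((n_perm, X.shape[0]))
    for b in range(n_perm):
        rng.shuffle(pooled)
        null_means[b] = pooled.reshape(X.shape).mean(axis=1)
    return (1 + (null_means >= obs).sum(axis=0)) / (1 + n_perm)

rng = np.random.default_rng(3)
n_streams, m = 200, 25
X = rng.normal(size=(n_streams, m))       # null streams: i.i.d. observations
X[:8] += 0.8                              # a handful of anomalous streams

hc_obs = higher_criticism(stream_pvalues(X, rng=rng))

# Calibrate the rejection threshold of HC itself by permuting the pooled
# data, so no parametric null distribution is ever required.
hc_null = [higher_criticism(stream_pvalues(
    rng.permutation(X.ravel()).reshape(X.shape), rng=rng)) for _ in range(100)]
print(f"HC = {hc_obs:.2f}, permutation 95% threshold = "
      f"{np.quantile(hc_null, 0.95):.2f}")
```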

Based on joint work with Ivo Stoepker, Ery Arias-Castro and Edwin van den Heuvel:
https://arxiv.org/abs/2009.03117

Europe/Lisbon
Online

Harry Desmond

Harry Desmond, University of Portsmouth
Exhaustive Symbolic Regression (or how to find the best function for your data)

Symbolic regression aims to find the optimal functional representation of a dataset, with broad applications across science. This is traditionally done using a “genetic algorithm”, which stochastically searches function space using an evolution-inspired method for generating new trial functions. Motivated by the uncertainties inherent in this approach, and by its failure on seemingly simple test cases, I will describe a new method which exhaustively searches and evaluates function space. Coupled to a model-selection principle based on minimum description length, Exhaustive Symbolic Regression is guaranteed to find the equations that optimally balance simplicity with accuracy on any dataset. I will describe how the method works and showcase it on Hubble rate measurements and dynamical galaxy data.
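
The toy sketch below illustrates the exhaustive-search-plus-description-length idea on a deliberately tiny, hand-picked function space; the actual ESR pipeline instead enumerates every expression tree up to a given complexity and uses a codelength that also accounts for operators and parameter precision. The candidate list and the BIC-like penalty here are illustrative simplifications.

```python
import numpy as np
from scipy.optimize import curve_fit

# Tiny hand-picked candidate space: (label, callable, number of constants).
# The real ESR enumerates all expression trees up to a given size.
candidates = [
    ("a*x",          lambda x, a: a * x,                1),
    ("a*x + b",      lambda x, a, b: a * x + b,         2),
    ("a*x**2 + b",   lambda x, a, b: a * x**2 + b,      2),
    ("a*exp(b*x)",   lambda x, a, b: a * np.exp(b * x), 2),
    ("a*log(x) + b", lambda x, a, b: a * np.log(x) + b, 2),
]

def description_length(f, k, x, y):
    """Crude MDL-style score: Gaussian negative log-likelihood of the
    residuals plus a BIC-like (k/2) log n penalty for the k fitted
    constants. The real ESR codelength is more refined."""
    n = len(x)
    try:
        theta, _ = curve_fit(f, x, y, p0=np.ones(k), maxfev=5000)
    except RuntimeError:
        return np.inf, None
    rss = np.sum((y - f(x, *theta)) ** 2)
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n), theta

rng = np.random.default_rng(4)
x = np.linspace(1, 5, 60)
y = 2.0 * x**2 + 1.0 + rng.normal(scale=0.5, size=60)   # hidden truth

results = sorted((description_length(f, k, x, y)[0], label)
                 for label, f, k in candidates)
for dl, label in results:
    print(f"{label:14s} DL = {dl:8.2f}")
```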

Based on work with Deaglan Bartlett and Pedro G. Ferreira:
https://arxiv.org/abs/2211.11461
https://arxiv.org/abs/2301.04368

Additional file

Desmond_slides.pdf

Europe/Lisbon
Online

Diogo Gomes

Diogo Gomes, KAUST
Mathematics for data science and AI - curriculum design, experiences, and lessons learned

In this talk, we will explore the importance of mathematical foundations for AI and data science and the design of an academic curriculum for graduate students. While traditional mathematics for AI and data science has focused on core techniques like linear algebra, basic probability, and optimization methods (e.g., gradient and stochastic gradient descent), several advanced mathematical techniques are now essential to understanding modern data science. These include ideas from the calculus of variations in spaces of random variables, functional analytic methods, ergodic theory, control-theory methods in reinforcement learning, and metrics on spaces of probability measures. We will discuss the author's experience designing an applied mathematics curriculum for data science and the lessons learned from teaching an advanced course on the mathematical foundations of data science. This talk aims to promote discussion and exchange of ideas on how mathematicians can play an important role in AI and data science and better equip our students to excel in this field.

Additional file

Gomes Diogo slides.pdf

Europe/Lisbon
Online

Paulo Rosa

Paulo Rosa, Deimos
Deep Reinforcement Learning based Integrated Guidance and Control for a Launcher Landing Problem

Deep Reinforcement Learning (Deep-RL) has received considerable attention in recent years due to its ability to make an agent learn optimal control actions from rich observation data by maximizing a reward function. Future space missions will need new on-board autonomy capabilities with increasingly complex requirements at the limits of vehicle performance. This justifies the use of machine-learning-based techniques, in particular reinforcement learning, to explore the edge of the performance trade-off space. The development of guidance and control systems for Reusable Launch Vehicles (RLVs) can take advantage of reinforcement learning for optimal adaptation in the face of multi-objective requirements and uncertain scenarios.

In AI4GNC, a project funded by the European Space Agency (ESA), led by DEIMOS with the participation of INESC-ID, the University of Lund, and TASC, a Deep-RL algorithm was used to train an actor-critic agent to simultaneously control the engine thrust magnitude and the two TVC gimbal angles to land an RLV in 6-DoF simulation. The design followed an incremental approach, progressively increasing the number of degrees of freedom and introducing additional complexity factors such as model nonlinearities. Ultimately, the full 6-DoF problem was addressed using a high-fidelity simulator that includes a nonlinear actuator model and a realistic vehicle aerodynamic model. Starting from an initial vehicle state along a reentry trajectory, the problem consists of precisely landing the RLV while satisfying system requirements, such as saturation and rate limits on the actuation, and aiming for optimal fuel consumption. The Deep Deterministic Policy Gradient (DDPG) algorithm was adopted as the candidate strategy because it supports the design of an integrated guidance and control algorithm over continuous action and observation spaces.
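
For readers unfamiliar with DDPG, the sketch below shows the core of one training update: a deterministic actor maps the observed vehicle state to continuous actions (imagined here as thrust magnitude plus two gimbal angles), a critic regresses Q-values onto bootstrapped targets, and slowly updated target copies of both networks stabilize learning. All dimensions, hyperparameters, and the toy batch are illustrative placeholders, not the AI4GNC design.

```python
import copy
import torch
import torch.nn as nn

obs_dim, act_dim = 13, 3   # hypothetical state size; thrust + two TVC gimbal angles
gamma, tau = 0.99, 0.005   # discount factor; soft target-update rate

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*layers[:-1])   # drop the final ReLU

actor = nn.Sequential(mlp([obs_dim, 256, 256, act_dim]), nn.Tanh())  # actions in [-1, 1]
critic = mlp([obs_dim + act_dim, 256, 256, 1])
actor_targ, critic_targ = copy.deepcopy(actor), copy.deepcopy(critic)

def ddpg_update(batch, actor_opt, critic_opt):
    obs, act, rew, next_obs, done = batch    # one replay-buffer sample
    # Critic: regress Q(s, a) onto the target r + gamma * Q'(s', mu'(s')).
    with torch.no_grad():
        next_q = critic_targ(torch.cat([next_obs, actor_targ(next_obs)], dim=-1))
        target = rew + gamma * (1 - done) * next_q
    q = critic(torch.cat([obs, act], dim=-1))
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's estimate of Q(s, mu(s)).
    actor_loss = -critic(torch.cat([obs, actor(obs)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Polyak (soft) update of the target networks.
    with torch.no_grad():
        for net, net_targ in ((actor, actor_targ), (critic, critic_targ)):
            for p, p_targ in zip(net.parameters(), net_targ.parameters()):
                p_targ.mul_(1 - tau).add_(p, alpha=tau)

# Toy usage with a random batch standing in for replay-buffer data.
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
B = 64
batch = (torch.randn(B, obs_dim), torch.rand(B, act_dim) * 2 - 1,
         torch.randn(B, 1), torch.randn(B, obs_dim), torch.zeros(B, 1))
ddpg_update(batch, actor_opt, critic_opt)
```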

The results obtained are very satisfactory in terms of landing accuracy and fuel consumption. They were also compared to a more classical, industrially used solution, composed of a successive-convexification guidance law and a PID controller tuned independently for the undisturbed nominal scenario, a baseline chosen for its capability to yield satisfactory landing accuracy and fuel consumption. A reachability analysis was also performed to assess the stability and robustness of the closed-loop system composed of the integrated guidance and control neural network, trained for the 1-DoF scenario, and the RLV dynamics.

Taking into account the fidelity of the benchmark adopted and the results obtained, this approach is deemed to have significant potential for further development and, ultimately, for space-industry applications such as In-Orbit Servicing (IOS) and Active Debris Removal (ADR), which also require a high level of autonomy.

Additional file

Rosa slides.pdf