2020 seminars

Europe/Lisbon
Online

Francisco Förster Burón, Universidad de Chile

The ALeRCE astronomical alert broker

A new generation of large-aperture, large-field-of-view telescopes is allowing the exploration of large volumes of the Universe in an unprecedented fashion. In order to take advantage of these new telescopes, notably the Vera C. Rubin Observatory, a new time-domain ecosystem is developing. Among the tools required are fast, machine-learning-aided discovery and classification algorithms, interoperable tools for effective communication with the community and with follow-up telescopes, and new models and tools to extract the most physical knowledge from these observations. In this talk I will review the challenges and progress of building one of these systems: the Automatic Learning for the Rapid Classification of Events (ALeRCE) astronomical alert broker. ALeRCE is an alert annotation and classification system led since 2019 by an interdisciplinary, inter-institutional group of scientists in Chile. ALeRCE focuses on three science cases: transients, variable stars and active galactic nuclei (AGN). Thanks to its state-of-the-art machine learning models, ALeRCE has become the third-largest reporter of transient candidates to the Transient Name Server, and it is enabling new science with different classes of astrophysical objects, e.g. AGN science. I will discuss some of the challenges associated with the problem of alert classification, including the ingestion of multiple alert streams, annotation, database management, training-set building, feature computation and distributed processing, machine-learning classification and visualization, as well as the challenges of working in large interdisciplinary teams. I will also show some results based on the real-time ingestion and classification of the Zwicky Transient Facility (ZTF) alert stream, as well as some of the tools available.
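
To make the feature-computation and classification step concrete, here is a minimal, self-contained Python sketch of that kind of pipeline. It illustrates the general approach (hand-crafted light-curve features fed to a random forest), not ALeRCE's actual code; the feature choices, toy light curves and class labels are invented for the example.

# Illustrative sketch only: a feature-based light-curve classifier in the
# spirit described in the abstract, not ALeRCE's actual pipeline or API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def light_curve_features(mjd, mag):
    """Compute a few simple features from one light curve (times, magnitudes)."""
    dmag = np.diff(mag)
    return np.array([
        np.mean(mag),                  # mean magnitude
        np.std(mag),                   # variability amplitude
        np.ptp(mag),                   # peak-to-peak range
        np.mean(dmag / np.diff(mjd)),  # average rate of change (mag/day)
        len(mag),                      # number of detections
    ])

# Toy data standing in for alert light curves: (times, magnitudes) plus a label.
rng = np.random.default_rng(0)
curves, labels = [], []
for i in range(200):
    t = np.sort(rng.uniform(0, 60, size=30))
    if i % 2 == 0:   # "variable star": periodic signal
        m = 18 + 0.5 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.05, t.size)
        labels.append("variable")
    else:            # "transient": brightness fading over time
        m = 17 + 0.05 * t + rng.normal(0, 0.05, t.size)
        labels.append("transient")
    curves.append((t, m))

X = np.array([light_curve_features(t, m) for t, m in curves])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:4]))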

Europe/Lisbon
Online

Pedro Domingos, University of Washington

Deep Networks Are Kernel Machines

Deep learning's successes are often attributed to its ability to automatically discover new representations of the data, rather than relying on handcrafted features like other learning methods. In this talk, however, I will show that deep networks learned by the standard gradient descent algorithm are in fact mathematically approximately equivalent to kernel machines, a learning method that simply memorizes the data and uses it directly for prediction via a similarity function (the kernel). This greatly enhances the interpretability of deep network weights, by elucidating that they are effectively a superposition of the training examples. The network architecture incorporates knowledge of the target function into the kernel. The talk will include a discussion of both the main ideas behind this result and some of its more startling consequences for deep learning, kernel machines, and machine learning at large.
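
For reference, the result announced in the abstract can be written down compactly. The following is a sketch under gradient flow (the continuous-time limit of gradient descent); the notation is chosen here for this summary rather than taken from the slides. For a model $y = f_w(x)$ trained on examples $(x_i, y_i^*)$ with loss $L = \sum_i L(y_i^*, f_w(x_i))$, gradient flow gives
$$\frac{d}{dt} f_{w(t)}(x) \;=\; \nabla_w f_w(x)\cdot \frac{dw}{dt} \;=\; -\sum_i \frac{\partial L}{\partial y_i}\, \nabla_w f_w(x)\cdot \nabla_w f_w(x_i).$$
Integrating over the training trajectory $c = \{w(t)\}$ yields the kernel-machine form
$$f_{w(T)}(x) \;=\; \sum_i a_i\, K^{p}(x, x_i) + b, \qquad K^{p}(x, x') \;=\; \int_{c} \nabla_w f_w(x)\cdot \nabla_w f_w(x')\, dt,$$
where $K^{p}$ is the "path kernel" (the tangent kernel averaged along the optimization path), the coefficients $a_i$ are path-weighted averages of the loss derivatives $-\partial L/\partial y_i$ (with weights that, strictly speaking, depend on $x$ as well), and $b = f_{w(0)}(x)$ is the output of the network at initialization. In this form the prediction is a similarity-weighted superposition of the training examples, with the architecture entering only through the kernel.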

Additional file

Domingos_P.pdf

Europe/Lisbon
Online

Kathryn Hess, EPFL

Of mice and men

Motivated by the desire to automate classification of neuron morphologies, we designed a topological signature, the Topological Morphology Descriptor (TMD), that assigns a "barcode" to any finite binary tree embedded in ${\mathbb R}^3$. Using the TMD, we performed an objective, stable classification of pyramidal cells in the rat neocortex, based only on the shape of their dendrites.
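
As a rough illustration of how such a barcode can be extracted from a tree, here is a small, self-contained Python sketch of the underlying "elder rule" bookkeeping: at each branch point the child branch carrying the largest filtration value survives, and every other child contributes a bar. The tree encoding, function names and toy example are invented here for illustration; this is not the Blue Brain Project's TMD implementation.

# Toy sketch of a TMD-style barcode for a rooted tree (not the reference code).
def tmd_barcode(children, f, root=0):
    """children: dict mapping a node to its list of children;
    f: dict mapping a node to its filtration value (e.g. radial
    distance from the soma). Returns a list of (birth, death) bars."""
    bars = []

    def process(node):
        # Return the largest filtration value propagated up from this subtree.
        kids = children.get(node, [])
        if not kids:                       # leaf: starts its own component
            return f[node]
        values = sorted(process(c) for c in kids)
        survivor = values.pop()            # branch with the largest value survives
        for v in values:                   # the others die at this branch point
            bars.append((v, f[node]))
        return survivor

    top = process(root)
    bars.append((top, f[root]))            # the surviving component dies at the root
    return bars

# Toy neuron-like binary tree: node 0 is the root (soma); f gives radial distances.
children = {0: [1, 2], 2: [3, 4]}
f = {0: 0.0, 1: 5.0, 2: 2.0, 3: 7.0, 4: 3.0}
print(tmd_barcode(children, f))            # [(3.0, 2.0), (5.0, 0.0), (7.0, 0.0)]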

In this talk, I will introduce the TMD, then focus on a very recent application to comparing mouse and human cortical neurons and characterizing the differences between them. I'll also briefly discuss the role of machine learning in our work.

This talk is based on collaborations led by Lida Kanari of the Blue Brain Project.

Additional file

Hess slides.pdf