Rongjie Lai, Rensselaer Polytechnic Institute
Learning Manifold-Structured Data using Deep Neural Networks: Theory and Applications
Deep artificial neural networks have achieved great success on many problems in science and engineering. In this talk, I will discuss our recent efforts to develop DNNs capable of learning non-trivial geometric information hidden in data. In the first part, I will discuss our work advocating the use of a multi-chart latent space for better data representation. Inspired by differential geometry, we propose a Chart Auto-Encoder (CAE) and prove a universal approximation theorem on its representation capability. CAE admits desirable manifold properties that conventional auto-encoders with a flat latent space fail to capture. We further establish statistical guarantees on the generalization error for trained CAE models and show their robustness to noise. Our numerical experiments also demonstrate satisfactory performance on data with complicated geometry and topology. If time permits, I will discuss our work on defining convolution on manifolds via parallel transport. This geometric construction, parallel transport convolution (PTC), provides a natural combination of modeling and learning on manifolds. PTC allows for the construction of compactly supported filters and is also robust to manifold deformations. I will demonstrate its applications to shape analysis and point cloud processing using PTC-nets. This talk is based on joint work with my students and collaborators.
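To make the multi-chart idea concrete, the following toy sketch (not the trained CAE from the talk; all function names here are illustrative) shows why a single flat latent coordinate fails on the circle S^1 but two overlapping charts cover it smoothly: any one angle parametrization is discontinuous at its branch cut, so the sketch assigns each point to whichever of two charts keeps it away from a cut.

```python
import numpy as np

# Toy multi-chart latent representation of the circle S^1.
# Chart 0 parametrizes S^1 minus the point (-1, 0) via atan2 in (-pi, pi);
# chart 1 shifts the branch cut to (1, 0). The two charts overlap, so every
# point has a chart in which its local coordinate varies continuously.

def encode(points):
    """Return (chart_id, local_coord) for points on the unit circle."""
    x, y = points[:, 0], points[:, 1]
    theta0 = np.arctan2(y, x)        # branch cut at (-1, 0)
    theta1 = np.arctan2(-y, -x)      # branch cut at (1, 0)
    # use chart 0 only where its branch cut is far away (overlap width 0.2)
    use0 = np.abs(theta0) < np.pi / 2 + 0.1
    chart = np.where(use0, 0, 1)
    coord = np.where(use0, theta0, theta1)
    return chart, coord

def decode(chart, coord):
    """Map (chart_id, local_coord) back to points on S^1."""
    theta = np.where(chart == 0, coord, coord + np.pi)
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)

angles = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = np.stack([np.cos(angles), np.sin(angles)], axis=1)
chart, coord = encode(pts)
recon = decode(chart, coord)
err = np.max(np.linalg.norm(pts - recon, axis=1))
print(f"max reconstruction error over both charts: {err:.2e}")
```

In the actual CAE, the hand-written `encode`/`decode` maps are replaced by learned encoder/decoder networks per chart together with a learned chart-membership predictor; the toy version only illustrates the geometric obstruction that motivates multiple charts.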
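The parallel transport convolution idea can likewise be sketched in a setting where the transport is trivial. On the circle, tangent spaces are one-dimensional and parallel transport along the manifold amounts to sliding a filter in arc length, so a compactly supported filter applied at every point reduces to a geodesic-distance-weighted average; on a general surface the same construction would additionally rotate the filter between tangent planes. This is a hedged toy illustration, not the PTC-net implementation from the talk.

```python
import numpy as np

# Toy parallel-transport-style convolution of a noisy signal on S^1.
# The filter is compactly supported in geodesic distance and is
# "transported" to each point; on the circle this is just a shift.

n = 256
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
signal = np.sin(3 * theta) + 0.3 * rng.standard_normal(n)

support = np.pi / 8  # filter vanishes beyond this geodesic radius

def kernel(dist):
    """Truncated-Gaussian filter in geodesic distance, normalized to sum 1."""
    k = np.exp(-(dist / (support / 2)) ** 2) * (dist <= support)
    return k / k.sum()

out = np.empty(n)
for i in range(n):
    # geodesic distance on the circle from point i to all sample points
    d = np.abs((theta - theta[i] + np.pi) % (2 * np.pi) - np.pi)
    out[i] = kernel(d) @ signal  # transported filter applied at point i

print("variance before/after smoothing:", signal.var(), out.var())
```

Because the filter's support is defined intrinsically (in geodesic distance), the same construction is insensitive to how the manifold is embedded or deformed, which is the robustness property the abstract refers to.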