Learning Tasks in the Wasserstein Space

Abstract

Detecting differences between distributions and building classifiers on families of distributions, given only finite samples, are important tasks in a number of scientific fields. Optimal transport (OT) has emerged as a natural way to measure the distance between distributions and has gained significant importance in machine learning in recent years. OT has some drawbacks, however: it can be slow to compute, and it often fails to exploit the reduced complexity that arises when the family of distributions is generated by simple group actions.

If we make no assumptions on the family of distributions, these drawbacks are difficult to overcome. However, when the measures are generated as push-forwards under elementary transformations, and hence form a low-dimensional submanifold of the Wasserstein manifold, we can address both issues at a theoretical and at a computational level. In this talk, we'll show how to embed the space of distributions into a Hilbert space via linearized optimal transport (LOT), and how linear techniques can then be used to classify different families of distributions generated by elementary transformations and perturbations. The proposed framework significantly reduces both the computational effort and the amount of training data required in supervised learning settings. We demonstrate the algorithms on pattern recognition tasks in imaging and present some medical applications.
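For readers unfamiliar with LOT: in the standard construction, one fixes a reference measure sigma and represents each distribution mu by the optimal transport map from sigma to mu, regarded as an element of the Hilbert space L^2(sigma); distances between these embeddings then approximate Wasserstein-2 distances, so linear methods become available. The sketch below is illustrative only, not the speakers' implementation: it assumes the POT library (package "ot") and scikit-learn, approximates the LOT map by the barycentric projection of a discrete OT plan, and fits an ordinary linear classifier in the embedded coordinates. The toy data, the reference measure, and all parameter choices are hypothetical.

    import numpy as np
    import ot                                   # POT: Python Optimal Transport
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def lot_embed(samples, ref):
        # Solve discrete OT from the reference to the target samples, then take
        # the barycentric projection of the plan: an approximate Monge map
        # evaluated at the reference points, flattened into a feature vector.
        n, m = len(ref), len(samples)
        a = np.full(n, 1.0 / n)                 # uniform weights on reference
        b = np.full(m, 1.0 / m)                 # uniform weights on target
        M = ot.dist(ref, samples)               # squared Euclidean cost matrix
        plan = ot.emd(a, b, M)                  # optimal coupling (n x m)
        T = (plan @ samples) / a[:, None]       # T[j] ~ where mass at ref[j] goes
        return T.ravel()

    # Toy family generated by elementary transformations of a template:
    # class 0 is the template shifted horizontally, class 1 vertically.
    ref = rng.normal(size=(200, 2))             # reference distribution sigma
    def make_cloud(label):
        direction = np.array([1.0, 0.0]) if label == 0 else np.array([0.0, 1.0])
        return rng.normal(size=(150, 2)) + rng.uniform(1.0, 4.0) * direction

    labels = np.arange(60) % 2
    X = np.array([lot_embed(make_cloud(l), ref) for l in labels])

    # In LOT coordinates the two families are (nearly) linearly separable,
    # so a plain linear classifier suffices; no OT solve is needed per pair.
    clf = LogisticRegression(max_iter=2000).fit(X[:40], labels[:40])
    print("held-out accuracy:", clf.score(X[40:], labels[40:]))

Note the computational point made in the abstract: each distribution is embedded once against the fixed reference, so pairwise comparisons and classification afterwards cost only linear-algebra operations rather than one OT solve per pair.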

This is joint work with Alex Cloninger, Keaton Hamm, Harish Kannan, Varun Khurana, and Jinjie Zhang.

Date
Oct 27, 2022, 10:00 AM PDT
Event
KI Seminar
Location
Online (Zoom)
Registration
Sign up for the mailing list to receive the connection details
