Kernel Approximation of Wasserstein and Fisher-Rao Gradient Flows

Abstract

Gradient flows have emerged as a powerful framework for analyzing machine learning and statistical inference algorithms. Motivated by applications in statistical inference, generative modeling, and the generalization and robustness of learning algorithms, I will present a few new results on the kernel approximation of gradient flows, including a hidden link between the gradient flows of the kernel maximum mean discrepancy (MMD) and of relative entropies. These findings not only advance our theoretical understanding but also provide practical tools for improving machine learning algorithms. I will showcase inference and sampling algorithms based on a new kernel approximation of Wasserstein-Fisher-Rao (a.k.a. Hellinger-Kantorovich) gradient flows, which enjoy sharper convergence characterizations and improved computational performance.
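As background for the objects named in the abstract, the display below recalls standard textbook definitions: the squared kernel MMD, the relative entropy (KL divergence), and one common dynamic formulation of the Wasserstein-Fisher-Rao distance. The normalization constants (e.g. the 1/4 weight on the reaction term) vary across the literature and are a convention chosen here, not taken from the talk itself.

% Squared maximum mean discrepancy (MMD) for a positive-definite kernel k
\mathrm{MMD}_k^2(\mu,\nu) = \mathbb{E}_{x,x'\sim\mu}[k(x,x')] - 2\,\mathbb{E}_{x\sim\mu,\,y\sim\nu}[k(x,y)] + \mathbb{E}_{y,y'\sim\nu}[k(y,y')]

% Relative entropy (Kullback-Leibler divergence), for \mu absolutely continuous w.r.t. \nu
\mathrm{KL}(\mu \,\|\, \nu) = \int \log\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\, \mathrm{d}\mu

% Dynamic (Benamou-Brenier-type) formulation of the Wasserstein-Fisher-Rao
% (Hellinger-Kantorovich) distance: transport velocity v_t plus reaction rate \alpha_t;
% the 1/4 weight is one common convention, not necessarily the talk's.
\mathrm{WFR}^2(\mu_0,\mu_1) = \inf_{(\mu_t,\,v_t,\,\alpha_t)} \int_0^1 \!\int \Big( \|v_t\|^2 + \tfrac{1}{4}\,\alpha_t^2 \Big)\, \mathrm{d}\mu_t\, \mathrm{d}t
\quad \text{subject to} \quad \partial_t \mu_t + \nabla\cdot(\mu_t v_t) = \alpha_t\,\mu_t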

Date
Dec 16, 2024, 10:45 AM PST
Event
KI Seminar
Location
Online (Zoom)
Registration
Sign up for the mailing list to receive the connection details

Speaker Bio:

Jia-Jie Zhu is a machine learner, applied mathematician, and research group leader at the Weierstrass Institute, Berlin. Previously, he worked as a postdoctoral researcher in machine learning at the Max Planck Institute for Intelligent Systems, Tübingen, and received his Ph.D. training in optimization at the University of Florida, USA. He is interested in the intersection of machine learning, analysis, and optimization, working on topics such as gradient flows of probability measures, optimal transport, and the robustness of learning and optimization algorithms.