Understanding adversarial robustness via optimal transport.

Abstract

Deep learning-based approaches have been surprisingly successful across many fields of science. One of their first and most prominent achievements is image classification, where deep learning-based algorithms now perform even better than humans. However, the machine learning community has observed that the performance of neural networks can be severely degraded by adding a carefully designed small perturbation, a so-called 'adversarial attack': although humans still classify the perturbed image correctly, the machine fails completely. Since this issue is serious in practice, for example in security or self-driving cars, practitioners want to develop machines that are more robust against such attacks, which motivates the 'adversarial training problem'. Until very recently, however, there was little theoretical understanding of it. In this talk, I will present recent progress on understanding the adversarial training problem. The key idea connecting the two areas originates from the (Wasserstein) barycenter problem, one of the well-known applications of optimal transport theory. I will introduce the 'generalized barycenter problem', an extension of the classical barycenter problem, together with its multimarginal optimal transport formulations. Through the lens of these tools, one can understand the geometric structure of adversarial training problems. One crucial advantage of this result is that it allows one to utilize many computational optimal transport tools. Lastly, time permitting, I will present a result on the existence of optimal robust classifiers, which not only extends the binary setting to the multiclass one but also provides a clean interpretation via duality.
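
For readers unfamiliar with the two objects referenced above, the following is a minimal sketch, in standard notation that is assumed here rather than taken from the talk, of the adversarial training problem and the classical Wasserstein barycenter problem; the generalized barycenter and its multimarginal formulation discussed in the talk build on these.

% Sketch of standard formulations (notation assumed, not from the abstract).
% Adversarial training: minimize the worst-case loss over perturbations of size at most \varepsilon.
\[
  \min_{f}\; \mathbb{E}_{(x,y)\sim \mu}
  \Big[ \sup_{\|x' - x\| \le \varepsilon} \ell\big(f(x'), y\big) \Big]
\]
% Classical Wasserstein barycenter: given measures \mu_1,\dots,\mu_K and weights \lambda_i \ge 0,
% find the measure \nu minimizing the weighted sum of squared Wasserstein-2 distances.
\[
  \min_{\nu}\; \sum_{i=1}^{K} \lambda_i \, W_2^2(\mu_i, \nu)
\]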

Date
Sep 28, 2023, 10:00 AM PDT
Event
KI Seminar
Location
Online (Zoom)