Bandit learning of Nash equilibria in monotone games

Abstract

Game theory is a powerful framework for addressing optimization and learning problems involving multiple interacting agents, referred to as players. In a multi-agent setting, the notion of Nash equilibrium captures a desirable solution because it exhibits stability: no player has an incentive to unilaterally deviate from it. From the viewpoint of learning, the question is whether players can learn their Nash equilibrium strategies with only limited information about the game. In this talk, I present our work on designing distributed algorithms that allow players to learn the Nash equilibrium based only on information about their experienced payoffs. I discuss the convergence of the algorithm and its applicability to a large class of monotone games.
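To illustrate the kind of payoff-based learning the abstract describes, here is a minimal sketch (not the speaker's algorithm): two players in a simple strongly monotone quadratic game update their strategies using only observed cost values, via a two-point payoff-based gradient estimate. The game, step sizes, and perturbation scheme below are illustrative assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-player quadratic game (strongly monotone):
#   player 1 minimizes c1(x) = x1^2 + x1*x2
#   player 2 minimizes c2(x) = x2^2 + x1*x2
# Its unique Nash equilibrium is (0, 0).
def cost(i, x):
    return x[i] ** 2 + x[0] * x[1]

x = np.array([1.0, -1.0])   # arbitrary initial strategies
delta, eta = 0.05, 0.05     # perturbation radius and step size (assumed values)

for t in range(2000):
    for i in range(2):
        u = rng.choice([-1.0, 1.0])          # random perturbation direction
        xp, xm = x.copy(), x.copy()
        xp[i] += delta * u
        xm[i] -= delta * u
        # payoff-based (bandit-type) gradient estimate:
        # only observed cost values are used, not the cost functions' gradients
        g = (cost(i, xp) - cost(i, xm)) / (2 * delta) * u
        x[i] -= eta * g

print(x)  # strategies near the Nash equilibrium (0, 0)
```

In this sketch each player queries its own cost at two perturbed strategies; bandit algorithms of the kind discussed in the talk typically work with a single payoff observation per round and correspondingly noisier estimates, which is what makes the convergence analysis nontrivial.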

Date
January 29, 2021, 2:00 PM PST
Speaker
Maryam Kamgarpour (University of British Columbia (ECE))
Location
Online (Zoom)
Pacific Institute for the Mathematical Sciences

This event is part of the Pacific Interdisciplinary Hub on Optimal Transport (PIHOT) which is a collaborative research group (CRG) of the Pacific Institute for the Mathematical Sciences (PIMS).