Independent learning in stochastic games
Asuman Ozdaglar
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
Muhammed O. Sayin
Electrical and Electronics Engineering Department, Bilkent University, Ankara, Turkey
Kaiqing Zhang
LIDS and CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
Abstract
Reinforcement learning (RL) has recently achieved tremendous successes in many artificial intelligence applications. Many of the forefront applications of RL involve multiple agents, e.g., playing chess and Go, autonomous driving, and robotics. Unfortunately, the framework upon which classical RL builds is inappropriate for multiagent learning, as it assumes that an agent's environment is stationary and does not take into account the adaptivity of other agents. In this review paper, we present the model of stochastic games due to Shapley (1953) for multiagent learning in dynamic environments. We focus on the development of simple and independent learning dynamics for stochastic games: each agent is myopic and chooses best-response-type actions to the other agents' strategies, without any coordination with her opponents. There has been limited progress on developing convergent best-response-type independent learning dynamics for stochastic games. We present our recently proposed simple and independent learning dynamics that guarantee convergence in zero-sum stochastic games, together with a review of other contemporaneous algorithms for dynamic multiagent learning in this setting. Along the way, we also reexamine some classical results from both the game theory and RL literature, to situate both the conceptual contributions of our independent learning dynamics and the mathematical novelties of our analysis. We hope this review paper serves as an impetus for the resurgence of studying independent and natural learning dynamics in game theory, for the more challenging settings with dynamic environments.
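To make the flavor of such dynamics concrete, the following is a minimal, illustrative sketch of independent best-response-type learning in a randomly generated two-player zero-sum stochastic game. It is not the chapter's exact algorithm: the game instance, the exploration rate, and the two step-size schedules are all illustrative assumptions. Each agent independently keeps its own Q estimates and an empirical belief about its opponent's play, myopically best-responds to that belief, and updates its Q values on a slower timescale than its belief.

```python
import numpy as np

# Illustrative sketch (assumed constants throughout), not the chapter's exact algorithm.
rng = np.random.default_rng(0)

n_states, n_actions = 2, 2
# Zero-sum stage payoffs r[s, a1, a2] for player 1 (player 2 receives -r).
r = rng.uniform(-1, 1, size=(n_states, n_actions, n_actions))
# Transition kernel P[s, a1, a2, s'], normalized to a probability distribution.
P = rng.uniform(size=(n_states, n_actions, n_actions, n_states))
P /= P.sum(axis=-1, keepdims=True)
gamma = 0.9

# Each agent keeps a local Q estimate over joint actions and an empirical
# belief about the opponent's play in each state -- no coordination.
Q1 = np.zeros((n_states, n_actions, n_actions))        # P1's value of (a1, a2)
Q2 = np.zeros((n_states, n_actions, n_actions))        # P2's value of (a2, a1)
belief1 = np.ones((n_states, n_actions)) / n_actions   # P1's belief over a2
belief2 = np.ones((n_states, n_actions)) / n_actions   # P2's belief over a1

s = 0
for t in range(1, 50_001):
    # Myopic best-response-type action choices against current beliefs,
    # with a small amount of exploration (assumed 5%) so all pairs are tried.
    if rng.random() < 0.05:
        a1, a2 = rng.integers(n_actions), rng.integers(n_actions)
    else:
        a1 = int(np.argmax(Q1[s] @ belief1[s]))
        a2 = int(np.argmax(Q2[s] @ belief2[s]))

    s_next = rng.choice(n_states, p=P[s, a1, a2])

    # Fast timescale: update empirical beliefs about the opponent's actions.
    alpha = 1.0 / (t ** 0.6)
    belief1[s] *= (1 - alpha); belief1[s, a2] += alpha
    belief2[s] *= (1 - alpha); belief2[s, a1] += alpha

    # Slow timescale: update Q values using each agent's own continuation value.
    beta = 1.0 / t
    v1 = np.max(Q1[s_next] @ belief1[s_next])
    v2 = np.max(Q2[s_next] @ belief2[s_next])
    Q1[s, a1, a2] += beta * (r[s, a1, a2] + gamma * v1 - Q1[s, a1, a2])
    Q2[s, a2, a1] += beta * (-r[s, a1, a2] + gamma * v2 - Q2[s, a2, a1])

    s = s_next

print("P1 state values under current beliefs:",
      np.max(Q1 @ belief1[:, :, None], axis=1).ravel())
```

The separation of timescales here (beliefs adapt faster than value estimates) reflects a design choice common in this line of work: each agent effectively faces a slowly varying environment while tracking its opponent, which is what makes a convergence analysis tractable despite the nonstationarity that independent learners induce on one another.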