Optimal control with learning on the fly: a toy problem

  • Charles Fefferman, Princeton University, USA
  • Bernat Guillén Pegueroles, Princeton University, USA
  • Clarence W. Rowley, Princeton University, USA
  • Melanie Weber, Princeton University, USA

This article is published open access under our Subscribe to Open model.

Abstract

We exhibit optimal control strategies for a simple toy problem in which the underlying dynamics depend on a parameter that is initially unknown and must be learned. We consider a cost function posed over a finite time interval, in contrast to much previous work that considers asymptotics as the time horizon tends to infinity. We study several different versions of the problem, including Bayesian control, in which we assume a prior distribution on the unknown parameter; and “agnostic” control, in which we assume nothing about the unknown parameter. For the agnostic problems, we compare our performance with that of an opponent who knows the value of the parameter. This comparison gives rise to several notions of “regret”, and we obtain strategies that minimize the “worst-case regret” arising from the most unfavorable choice of the unknown parameter. In every case, the optimal strategy turns out to be a Bayesian strategy or a limit of Bayesian strategies.
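As a rough formalization of the regret comparison described above (the notation here is illustrative and is not taken from the paper): if $J_a(\sigma)$ denotes the cost incurred over the finite time interval by a strategy $\sigma$ when the unknown parameter takes the value $a$, and $J_a^{*}$ denotes the optimal cost attainable by an opponent who knows $a$ in advance, then one natural notion of regret and its worst case are

$$
\operatorname{Regret}_a(\sigma) \;=\; J_a(\sigma) - J_a^{*},
\qquad
\operatorname{Regret}_{\mathrm{wc}}(\sigma) \;=\; \sup_{a} \bigl( J_a(\sigma) - J_a^{*} \bigr),
$$

and the agnostic problem then asks for a strategy attaining $\inf_{\sigma} \sup_{a} \bigl( J_a(\sigma) - J_a^{*} \bigr)$. The paper considers several such notions of regret; this display is only meant to fix the general shape of the worst-case criterion.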

Cite this article

Charles Fefferman, Bernat Guillén Pegueroles, Clarence W. Rowley, Melanie Weber, Optimal control with learning on the fly: a toy problem. Rev. Mat. Iberoam. 38 (2022), no. 1, pp. 175–187

DOI 10.4171/RMI/1275