Gradient descent on infinitely wide neural networks: global convergence and generalization

  • Francis Bach

    Inria & Ecole Normale Supérieure, PSL Research University, Paris, France
  • Lénaïc Chizat

    Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland


Abstract

Many supervised machine learning methods are naturally cast as optimization problems. For prediction models which are linear in their parameters, this often leads to convex problems for which many mathematical guarantees exist. Models which are nonlinear in their parameters, such as neural networks, lead to nonconvex optimization problems for which guarantees are harder to obtain. In this paper, we consider two-layer neural networks with homogeneous activation functions where the number of hidden neurons tends to infinity, and show how qualitative convergence guarantees may be derived.
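To make the setting concrete, the sketch below (not the authors' code) trains a two-layer network with the positively homogeneous ReLU activation by full-batch gradient descent, with the output averaged over the hidden neurons so that the width can be taken large. The width, step size, and toy data are illustrative assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only).
n, d = 200, 5
X = rng.standard_normal((n, d))
y = np.maximum(X[:, 0], 0.0) - 0.5 * np.maximum(-X[:, 1], 0.0)

# Two-layer network, f(x) = (1/m) * sum_j b_j * relu(a_j . x),
# with width m taken large to mimic the infinite-width regime.
m = 1000
A = rng.standard_normal((m, d))   # input weights a_j
b = rng.standard_normal(m)        # output weights b_j

lr = 1.0
for step in range(2000):
    H = np.maximum(X @ A.T, 0.0)       # hidden activations, shape (n, m)
    residual = H @ b / m - y           # prediction errors, shape (n,)
    loss = 0.5 * np.mean(residual ** 2)

    # Gradients of the empirical squared loss w.r.t. both layers.
    grad_b = H.T @ residual / (n * m)
    grad_A = ((H > 0) * residual[:, None] * b).T @ X / (n * m)

    b -= lr * grad_b
    A -= lr * grad_A
    if step % 500 == 0:
        print(f"step {step:4d}  loss {loss:.5f}")
```

With a large width such as the one above, repeated runs from different random initializations give very similar training curves, which is the kind of qualitative behavior the infinite-width analysis is meant to explain.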