On Efficient Algorithms for Computing Near-Best Polynomial Approximations to High-Dimensional, Hilbert-Valued Functions from Limited Samples
Ben Adcock
Simon Fraser University, Burnaby, Canada
Simone Brugiapaglia
Concordia University, Montreal, Canada
Nick Dexter
Simon Fraser University, Burnaby, Canada
Sebastian Moraga
Simon Fraser University, Burnaby, Canada
Sparse polynomial approximation is an important tool for approximating high-dimensional functions from limited samples – a task commonly arising in computational science and engineering. Yet, it lacks a complete theory. There is a well-developed theory of best $s$-term polynomial approximation, which asserts exponential or algebraic rates of convergence for holomorphic functions. There are also increasingly mature methods such as (weighted) $\ell^1$-minimization for practically computing such approximations. However, whether these methods achieve the rates of the best $s$-term approximation is not fully understood. Moreover, these methods are not algorithms per se, since they involve exact minimizers of nonlinear optimization problems. This paper closes these gaps by affirmatively answering the following question: are there robust, efficient algorithms for computing sparse polynomial approximations to finite- or infinite-dimensional, holomorphic and Hilbert-valued functions from limited samples that achieve the same rates as the best $s$-term approximation? We do so by introducing algorithms with exponential or algebraic convergence rates that are also robust to sampling, algorithmic and physical discretization errors. Our results involve several developments of existing techniques, including a new restarted primal-dual iteration for solving weighted $\ell^1$-minimization problems in Hilbert spaces. Our theory is supplemented by numerical experiments demonstrating the efficacy of these algorithms.
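To give a concrete sense of the kind of solver the abstract refers to, the following is a minimal sketch, not the authors' exact scheme, of a restarted primal-dual (Chambolle-Pock) iteration applied to a weighted square-root LASSO formulation of weighted $\ell^1$-minimization. It is written in Python/NumPy for a scalar-valued, finite-dimensional problem; the parameter choices, the simple warm-started restart loop, and the random test problem are all illustrative assumptions rather than the scheme analyzed in the paper.

```python
# Sketch of a restarted primal-dual iteration for the weighted square-root LASSO
#     min_z  lam * ||w . z||_1 + ||A z - b||_2,
# a standard formulation of weighted l1-minimization. Parameters are illustrative.
import numpy as np

def primal_dual_sqrt_lasso(A, b, w, lam, z0, n_iter):
    """Chambolle-Pock iteration for min_z lam*||w.z||_1 + ||A z - b||_2."""
    L = np.linalg.norm(A, 2)        # spectral norm of A
    tau = sigma = 1.0 / L           # step sizes satisfying tau*sigma*L^2 <= 1
    z, z_bar = z0.copy(), z0.copy()
    xi = np.zeros(A.shape[0])       # dual variable
    for _ in range(n_iter):
        # dual step: projection onto the unit l2-ball
        xi = xi + sigma * (A @ z_bar - b)
        xi = xi / max(1.0, np.linalg.norm(xi))
        # primal step: weighted soft-thresholding (prox of tau*lam*||w.z||_1)
        z_new = z - tau * (A.T @ xi)
        z_new = np.sign(z_new) * np.maximum(np.abs(z_new) - tau * lam * w, 0.0)
        # extrapolation
        z_bar = 2.0 * z_new - z
        z = z_new
    return z

def restarted_primal_dual(A, b, w, lam, n_restarts=10, inner_iter=100):
    """Simplified restart: rerun the inner iteration, warm-started at its output."""
    z = np.zeros(A.shape[1])
    for _ in range(n_restarts):
        z = primal_dual_sqrt_lasso(A, b, w, lam, z, inner_iter)
    return z

# Illustrative test: recover a sparse vector from Gaussian measurements.
rng = np.random.default_rng(0)
m, N, s = 60, 200, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)
z_true = np.zeros(N)
z_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
b = A @ z_true
z_hat = restarted_primal_dual(A, b, w=np.ones(N), lam=0.05)
print("relative error:", np.linalg.norm(z_hat - z_true) / np.linalg.norm(z_true))
```

The restart loop shown here simply warm-starts each cycle at the previous output; the scheme developed in the paper uses a more refined restart strategy, and the Hilbert-valued setting replaces the scalar coefficient vector by block coefficients.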