Numerical stability of algorithms at extreme scale and low precisions
Nicholas J. Higham
Department of Mathematics, University of Manchester, Manchester, M13 9PL, UK
Abstract
The largest dense linear systems that are being solved today are of order $n = 10^7$. Single-precision arithmetic, which has a unit roundoff $u \approx 6 \times 10^{-8}$, is widely used in scientific computing, and half-precision arithmetic, with $u \approx 5 \times 10^{-4}$, is increasingly being exploited as it becomes more readily available in hardware. Standard rounding error bounds for numerical linear algebra algorithms are proportional to $p(n)u$, with $p(n)$ growing at least linearly with $n$. We are therefore at the stage where these rounding error bounds are not able to guarantee any accuracy or stability in the computed results for some extreme-scale or low-accuracy computations. We explain how rounding error bounds with much smaller constants can be obtained. Blocked algorithms, which break the data into blocks of size $b$, lead to a reduction in the error constants by a factor $b$ or more. Two architectural features also reduce the error constants: extended precision registers and fused multiply–add operations, either at the scalar level or in mixed precision block form. We also discuss a new probabilistic approach to rounding error analysis that provides error constants that are the square roots of those of the worst-case bounds. Combining these different considerations provides new understanding of the numerical stability of extreme-scale and low-precision computations in numerical linear algebra.
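To give a rough sense of the magnitudes involved, the following Python sketch (not part of the chapter) evaluates simplified forms of the error constants mentioned above: the worst-case constant $nu$, a blocked-algorithm constant of the form $(n/b + b)u$, and the probabilistic constant $\sqrt{n}\,u$, for a problem of order $n = 10^7$ at the IEEE single- and half-precision unit roundoffs. The block size $b = 256$ and the exact forms of the constants are illustrative assumptions; the chapter derives the precise bounds.

```python
# Rough comparison of rounding error bound constants for a problem of size n,
# evaluated at the IEEE single (binary32) and half (binary16) unit roundoffs.
# The formulas n*u (worst case), (n/b + b)*u (blocked, block size b), and
# sqrt(n)*u (probabilistic) are simplified forms used purely for illustration.

from math import sqrt

n = 10**7          # problem size (order of the linear system)
b = 256            # block size for the blocked algorithm (illustrative choice)
precisions = {
    "fp32 (single)": 2.0**-24,   # u ~ 6.0e-8
    "fp16 (half)":   2.0**-11,   # u ~ 4.9e-4
}

for name, u in precisions.items():
    worst = n * u                # standard worst-case constant
    blocked = (n / b + b) * u    # blocked-algorithm constant
    prob = sqrt(n) * u           # probabilistic (square-root) constant
    print(f"{name}: n*u = {worst:.2e}, (n/b + b)*u = {blocked:.2e}, "
          f"sqrt(n)*u = {prob:.2e}")
```

For single precision the worst-case constant is about $0.6$, so the standard bound guarantees essentially nothing at this scale, whereas the blocked and probabilistic constants are of order $10^{-3}$ and $10^{-4}$, illustrating the reductions discussed in the abstract.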