# Lanczos Error Bounds

Tyler Chen

This is a companion piece to the publication:

```bibtex
@article{lanczos_error_bounds,
    title = {Error bounds for the Lanczos algorithm for function approximation},
    author = {Tyler Chen and Anne Greenbaum and Cameron Musco and Christopher Musco},
    year = {},
}
```

## Why should I care?

Computing the product of a matrix function \(f(\mathbf{A})\) with a vector \(\mathbf{b}\), where \(\mathbf{A}\) is a symmetric matrix and \(f : \mathbb{R} \to \mathbb{R}\) is a scalar function, is a fundamental task in many disciplines, including scientific computing and data science.
Perhaps the most well known example is when \(f(x) = 1/x\) in which case \(f(\mathbf{A}) \mathbf{b} = \mathbf{A}^{-1} \mathbf{b}\) corresponds to solving a linear system of equations.
Other commonly used functions include the exponential, logarithm, square root, and sign function, which have applications in
differential equations,
Gaussian sampling,
principal component projection/regression,
quantum chromodynamics,
eigenvalue counting, and more.
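To make the task concrete, here is a minimal numpy sketch (the function and variable names are my own, not from the paper) of how \(f(\mathbf{A}) \mathbf{b}\) is defined for a symmetric \(\mathbf{A}\): apply \(f\) to the eigenvalues in the eigendecomposition \(\mathbf{A} = \mathbf{V} \operatorname{diag}(\boldsymbol{\lambda}) \mathbf{V}^\mathsf{T}\). With \(f(x) = 1/x\), the result solves the linear system \(\mathbf{A}\mathbf{x} = \mathbf{b}\).

```python
import numpy as np

def matfunc_apply(A, b, f):
    """Compute f(A) @ b for symmetric A via its eigendecomposition.

    For symmetric A = V diag(lam) V^T, we have f(A) = V diag(f(lam)) V^T.
    """
    lam, V = np.linalg.eigh(A)  # A is symmetric, so eigh applies
    return V @ (f(lam) * (V.T @ b))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 50))
A = X @ X.T + 50 * np.eye(50)   # symmetric positive definite test matrix
b = rng.standard_normal(50)

# With f(x) = 1/x, f(A) b is the solution of A x = b.
x = matfunc_apply(A, b, lambda t: 1.0 / t)
print(np.linalg.norm(A @ x - b))  # residual should be near machine precision
```

Of course, a full eigendecomposition costs \(O(n^3)\) and is exactly what Krylov methods like Lanczos let us avoid; this direct computation is useful mainly as a reference for measuring error.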

## Introduction

In practice, the error of the Lanczos method for matrix function approximation (Lanczos-FA) applied to \(f(\mathbf{A}) \mathbf{b}\) is often close to the error of the best polynomial approximation to \(f\) of degree \(< k\), where \(k\) is the number of Lanczos iterations.
If \(f(x) = 1/x\) and the error is measured in the \(\mathbf{A}\)-norm for some positive definite \(\mathbf{A}\), then Lanczos-FA is indeed optimal.
However, despite decades of research, similar optimality or near-optimality guarantees remain limited to a handful of specific functions and norms, and they often involve constants that are overly pessimistic in practice.
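For readers who want to experiment with this behavior, here is a minimal sketch of Lanczos-FA (my own illustrative implementation, with full reorthogonalization for numerical robustness; not the paper's code): run \(k\) Lanczos steps to build an orthonormal basis \(\mathbf{Q}\) and tridiagonal matrix \(\mathbf{T} = \mathbf{Q}^\mathsf{T} \mathbf{A} \mathbf{Q}\), then approximate \(f(\mathbf{A})\mathbf{b} \approx \|\mathbf{b}\| \, \mathbf{Q} f(\mathbf{T}) \mathbf{e}_1\).

```python
import numpy as np

def lanczos_fa(A, b, f, k):
    """Approximate f(A) @ b with k Lanczos steps (full reorthogonalization).

    Returns ||b|| * Q @ f(T) @ e1, where T = Q^T A Q is tridiagonal.
    """
    n = len(b)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)       # diagonal of T
    beta = np.zeros(k - 1)    # off-diagonal of T
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        # Orthogonalize against all previous basis vectors (full reorth.)
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)        # T is symmetric tridiagonal
    fT_e1 = S @ (f(theta) * S[0, :])    # f(T) @ e1 via T's eigendecomposition
    return np.linalg.norm(b) * (Q @ fT_e1)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 100))
A = X @ X.T + 100 * np.eye(100)   # symmetric positive definite
b = rng.standard_normal(100)

# With f(x) = 1/x on a well-conditioned SPD matrix, a modest k already
# gives a very accurate solution of A x = b.
approx = lanczos_fa(A, b, lambda t: 1.0 / t, 30)
exact = np.linalg.solve(A, b)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

Note that the approximation depends on \(\mathbf{A}\) only through matrix-vector products, which is what makes Lanczos-FA attractive for large sparse or structured matrices, and its error tracks that of the best degree-\(< k\) polynomial approximation to \(f\) on the spectrum of \(\mathbf{A}\).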