Approximation of functions (matenum1, 1/12)

We know:
- the function in full (at any point, by a slow method)
- values (sometimes also derivatives) at discrete points

Quality of data:
- arbitrary precision
- approximate (experiment, simulation)

Methods:
- Taylor (MacLaurin) / Padé (rational function), Thiele
- interpolation
- splines
- orthogonal systems of functions
- Chebyshev (best) approximation
- least-squares method – fitting, correlation, regression

Thiele theorem (5/12)

The Thiele theorem for continued fractions is analogous to the Taylor theorem for polynomials. We want a formula

  f(x) = f(0) + x/(r0(0) + x/(r1(0) + x/(r2(0) + ···)))

where, recursively,

  R−1(x) = 0,  R0(x) = f(x),
  rj(x) = (j + 1)/Rj′(x),  Rj(x) = Rj−2(x) + rj−1(x)

Example: ln(1 + x)

  R−1 = 0,  R0 = ln(1 + x)
  r0 = 1/(1/(1 + x)) = 1 + x,        r0(0) = 1
  R1 = R−1 + r0 = 1 + x
  r1 = 2/(1 + x)′ = 2
  R2 = R0 + r1 = ln(1 + x) + 2
  r2 = 3/(1/(1 + x)) = 3(1 + x),     r2(0) = 3
  ...
  in general r2n(0) = 2n + 1,  r2n+1(0) = 2/(n + 1)

so that (note f(0) = ln 1 = 0):

  ln(1 + x) = x/(1 + x/(2/1 + x/(3 + x/(2/2 + x/(5 + x/(2/3 + ···))))))
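The Thiele fraction for ln(1 + x) can be checked numerically. Below is a minimal Python sketch (the function name `thiele_ln1p` and the truncation depth are my choices, not from the slides) that builds the coefficients r2n(0) = 2n + 1, r2n+1(0) = 2/(n + 1) and evaluates the truncated fraction from the bottom up:

```python
import math

def thiele_ln1p(x, depth=20):
    """Evaluate the truncated Thiele continued fraction for ln(1 + x)
    with coefficients r_{2n}(0) = 2n+1 and r_{2n+1}(0) = 2/(n+1)."""
    coeffs = []
    for j in range(depth):
        n = j // 2
        coeffs.append(2*n + 1 if j % 2 == 0 else 2/(n + 1))
    value = coeffs[-1]                 # innermost denominator
    for c in reversed(coeffs[:-1]):
        value = c + x/value            # wrap one more level of the fraction
    return x/value

print(thiele_ln1p(1.0), math.log(2.0))
```

Already at a modest depth the truncated fraction agrees with `math.log` to near machine precision for moderate x.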
Taylor (MacLaurin) (2/12)

All derivatives must be known (in R, C, ...):

  f(x) = Σ_{n=0}^{∞} f^(n)(0)/n! · x^n

- accurate close to x = 0; the larger x, the less accurate
- MacLaurin (shifted x = 0 → x = x0: Taylor)
- convergence not guaranteed, e.g.

    f(x) = exp(−1/x) for x > 0,  f(x) = 0 for x ≤ 0

  is smooth (all derivatives at x = 0 are zero), but not analytic (zero radius of convergence of the Taylor series)

Example. Study the convergence (partial sums) of the Taylor series of the function sin(x) at x = 0. (credit: Wikipedia)

Padé (3/12)

The Padé approximation of a function f(x) at x = 0 is the rational function

  f(x) ≈ Pk(x)/Pn−k(x),  where Pl(x) = Σ_{i=0}^{l} ai x^i,

which has the same derivatives up to f^(n)(0) (i.e., the same Taylor expansion).

- accurate close to x = 0, inaccurate for large x
- often (but not always) more accurate than the Taylor expansion of the same order

Calculation:
- Taylor-expand both sides of the equation and compare the coefficients
- use the Thiele theorem for the continued fraction

Application:
- speed up the convergence, e.g., of the virial equation of state

Approximation by orthogonal functions (6/12)

Let bn(x) be a complete system of real orthonormal functions (a basis) on the interval [a, b]. Then (in some sense of convergence...)

  f(x) = Σ_{i=0}^{∞} ai bi(x),  where  an = ∫_a^b f(x) bn(x) dx

Let us denote the partial sum fn(x) = Σ_{i=0}^{n} ai bi(x). Then the coefficients ai minimize the following “error” of the approximation (the integral of the squares of the deviations):

  ∫_a^b |fn(x) − f(x)|^2 dx    (1)

Notes:
- To approximate certain classes of decaying functions, an unbounded interval can be considered, cf. mat-lin2.mw.
- A scalar product with a weight can be used (see, e.g., Chebyshev).

Example – Fourier series (7/12)

Fourier series for functions f(x), x ∈ [0, 2π], such that ∫_0^{2π} |f|^2 dx exists:

  basis = {1, sin x, cos x, sin 2x, cos 2x, ...}

- good for “smooth enough” periodic functions
- limited advantage for numerical purposes (slow sin, cos)

[Figure: partial sums Σ_{i=1}^{k} sin(ix)/i for k = 2, 4, 8, 16, 32, 64, compared with the exact function.]
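The partial sums in the figure are easy to reproduce; the series Σ sin(ix)/i is the classical Fourier expansion of the sawtooth, summing to (π − x)/2 for 0 < x < 2π. A short Python sketch (the function name is mine):

```python
import math

def partial_sum(x, k):
    """Partial Fourier sum sum_{i=1}^{k} sin(i*x)/i."""
    return sum(math.sin(i*x)/i for i in range(1, k + 1))

x = 1.0
exact = (math.pi - x)/2   # sum of the full series for 0 < x < 2*pi
for k in (2, 4, 8, 16, 32, 64):
    print(k, partial_sum(x, k), exact)
```

The convergence is slow (the deviation decays only like 1/k, since the periodic extension is discontinuous), which illustrates why a truncated Fourier series pays off only for smooth periodic functions.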
Continued fraction (4/12)

An equivalent expression for a rational function, e.g.:

  a0 + x/(a1 + x/(a2 + x/a3))

Infinite continued fraction (example):

  arctan x = x/(1 + 1^2 x^2/(3 + 2^2 x^2/(5 + 3^2 x^2/(7 + ···))))

which converges for x ∈ R. The Taylor expansion

  arctan x = x − x^3/3 + x^5/5 − ···

converges only for |x| ≤ 1.
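To see the difference between the two convergence domains, both truncated expansions of arctan can be compared directly (a Python sketch; the function names and truncation depths are my choices):

```python
import math

def arctan_cf(x, depth=30):
    """Truncated continued fraction x/(1 + (1x)^2/(3 + (2x)^2/(5 + ...)))."""
    value = 2*depth - 1                      # innermost denominator
    for j in range(depth - 1, 0, -1):
        value = (2*j - 1) + (j*x)**2/value   # wrap one level of the fraction
    return x/value

def arctan_taylor(x, terms=30):
    """Truncated Taylor series x - x^3/3 + x^5/5 - ..."""
    return sum((-1)**n * x**(2*n + 1)/(2*n + 1) for n in range(terms))

x = 2.0
print(arctan_cf(x), arctan_taylor(x), math.atan(x))
```

For x = 2 the continued fraction matches `math.atan` closely, while the Taylor partial sum blows up, in line with the convergence domains stated above.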
Approximation by orthogonal functions – first guess (8/12)

Let us orthogonalize the functions {1, x, x^2, ...} on the interval [−1, 1] by the Gram-Schmidt method. The resulting set of orthogonal polynomials is called the Legendre polynomials (see matenum1.mw).

- least-square-type approximation (minimizes (1))
- larger deviations near the interval ends
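The Gram-Schmidt construction can be carried out exactly in rational arithmetic, because the monomial inner products are ∫_{−1}^{1} x^m dx = 2/(m + 1) for even m and 0 for odd m. A Python sketch (all function names are mine; polynomials are coefficient lists from the constant term up):

```python
from fractions import Fraction

def mono_ip(i, j):
    """<x^i, x^j> = integral of x^(i+j) over [-1, 1]."""
    n = i + j
    return Fraction(2, n + 1) if n % 2 == 0 else Fraction(0)

def inner(p, q):
    """Inner product of two polynomials given as coefficient lists."""
    return sum(a*b*mono_ip(i, j) for i, a in enumerate(p) for j, b in enumerate(q))

def gram_schmidt(k):
    """Orthogonalize 1, x, ..., x^k on [-1, 1]; returns coefficient lists."""
    basis = []
    for deg in range(k + 1):
        p = [Fraction(0)]*deg + [Fraction(1)]        # monomial x^deg
        for q in basis:
            c = inner(p, q)/inner(q, q)              # projection coefficient
            qq = q + [Fraction(0)]*(len(p) - len(q)) # pad q to the same length
            p = [a - c*b for a, b in zip(p, qq)]
        basis.append(p)
    return basis

P = gram_schmidt(3)
print([str(c) for c in P[2]])
```

P[2] comes out as [−1/3, 0, 1], i.e., proportional to the Legendre polynomial P2(x) = (3x^2 − 1)/2, as the slide asserts.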
Chebyshev polynomials – better (9/12)

  Tn(x) = cos(n arccos x)

are polynomials orthogonal on [−1, 1] with the weight 1/√(1 − x^2):

  ∫_{−1}^{1} Tn(x) Tm(x)/√(1 − x^2) dx =  0    for n ≠ m
                                          π    for n = m = 0
                                          π/2  for n = m ≠ 0

The expansion:

  f(x) ≈ c0/2 + Σ_{i=1}^{n} ci Ti(x),   ci = (2/π) ∫_{−1}^{1} f(x) Ti(x)/√(1 − x^2) dx

Named after Pafnuty Lvovich Chebyshev (Russian: Пафнутий Львович Чебышёв; Czech: Pafnutij Lvovič Čebyšev).

- close to the minimax approximation
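The coefficients ci do not need a general-purpose integrator: substituting x = cos θ turns the weighted integral into a plain cosine integral, which the midpoint rule in θ (Gauss-Chebyshev quadrature) evaluates very accurately. A Python sketch for f = exp (the function names and node count N are my choices):

```python
import math

def cheb_coeffs(f, n, N=64):
    """Approximate c_i = (2/pi) * integral of f(x) T_i(x)/sqrt(1-x^2)
    over [-1, 1], using N Gauss-Chebyshev nodes x_k = cos(theta_k)."""
    thetas = [math.pi*(k + 0.5)/N for k in range(N)]
    fx = [f(math.cos(t)) for t in thetas]
    return [2.0/N * sum(fx[k]*math.cos(i*thetas[k]) for k in range(N))
            for i in range(n + 1)]

def cheb_eval(c, x):
    """Evaluate c_0/2 + sum_{i>=1} c_i T_i(x), using T_i(x) = cos(i*arccos x)."""
    t = math.acos(x)
    return c[0]/2 + sum(c[i]*math.cos(i*t) for i in range(1, len(c)))

c = cheb_coeffs(math.exp, 8)
print(cheb_eval(c, 0.3), math.exp(0.3))
```

Because the Chebyshev coefficients of exp decay very fast, even degree 8 reproduces the function accurately across the whole interval, not just near one point.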
The Chebyshev best (minimax) approximation (10/12)

= the approximation which minimizes the maximum deviation:

  min max |f(x) − Pn(x)|,  where Pn(x) = Σ_{i=0}^{n} ai x^i is a polynomial with n + 1 coefficients

- The best approximation always exists.
- The deviation f(x) − Pn(x) has:
  – n + 2 extremes, with alternating maxima and minima
  – n + 1 zero points
  – the expansion in Chebyshev polynomials is usually quite close to it
- A similar statement holds for a rational function (of n + 1 independent parameters).

Example: ln(x) on the interval [2, 4] approximated by the Chebyshev polynomials and by the best approximation; the deviation is shown. (credit: Wikipedia)
Interpolation (11/12)

A function is known at discrete points (xi, yi), i = 1..n. We want a polynomial coinciding with these points. There is a unique such polynomial of order n − 1 (up to x^{n−1}), the Lagrange interpolation polynomial:

  y(x) = y1 · [(x2 − x)(x3 − x) ··· (xn − x)] / [(x2 − x1)(x3 − x1) ··· (xn − x1)]
       + y2 · [(x1 − x)(x3 − x) ··· (xn − x)] / [(x1 − x2)(x3 − x2) ··· (xn − x2)]
       + ···
       + yn · [(x1 − x)(x2 − x) ··· (xn−1 − x)] / [(x1 − xn)(x2 − xn) ··· (xn−1 − xn)]

(in the i-th term the factors (xi − x) and (xi − xi) are omitted)

- can be simplified for equidistant x's
- may be inaccurate close to the endpoints for equidistant subintervals
- can be extended if derivatives are also known
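The formula above transcribes directly to code (a sketch; the function name is mine), keeping the same (xj − x)/(xj − xi) factors, whose sign flips cancel pairwise:

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolation polynomial through
    `points` = [(x1, y1), ...] (distinct xi) at the point x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (xj - x)/(xj - xi)   # skip the (xi - x)/(xi - xi) factor
        total += term
    return total

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]   # samples of y = x^2
print(lagrange(pts, 3.0))                    # reproduces x^2, so this gives 9.0
```

With three samples of y = x^2 the interpolant is x^2 itself, so the extrapolated value at x = 3 equals 9 exactly.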
Splines (12/12)

A function is known at discrete points (xi, yi), i = 0..n. We want a set of polynomials defined piecewise on the intervals [xi, xi+1]. The most common are cubic splines (4n constants in total):

- coincide with the points (xi, yi) (2n conditions)
- continuous 1st derivatives (n − 1 conditions)
- continuous 2nd derivatives (n − 1 conditions)

Two conditions for the coefficients are left. We may demand zero 2nd derivatives at the end points (the “natural” spline), or minimize the squared deviation or the maximum error.

If we know the 1st derivatives, we may want to reproduce them; then we must give up continuous 2nd derivatives (or increase the order).

Useful for approximating a complex function on a computer (e.g., the interaction of Gaussian charges):
- pros: simple to obtain; only a few floating-point operations per evaluation
- cons: the calculation of an integer interval index i is required (may be slow); the tables need not fit into a cache
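The conditions listed above lead to a tridiagonal linear system for the 2nd derivatives Mi at the knots; with the natural end conditions M0 = Mn = 0 it is solved by the Thomas algorithm. A Python sketch (all names are mine; not tuned for the cache concerns mentioned above):

```python
import bisect

def natural_spline(xs, ys):
    """Natural cubic spline (zero 2nd derivatives at the endpoints).
    Returns a callable that evaluates the spline."""
    n = len(xs) - 1
    h = [xs[i+1] - xs[i] for i in range(n)]
    # Tridiagonal system for the knot 2nd derivatives M[i]; M[0] = M[n] = 0.
    a = [0.0]*(n+1); b = [1.0]*(n+1); c = [0.0]*(n+1); d = [0.0]*(n+1)
    for i in range(1, n):
        a[i] = h[i-1]
        b[i] = 2.0*(h[i-1] + h[i])
        c[i] = h[i]
        d[i] = 6.0*((ys[i+1]-ys[i])/h[i] - (ys[i]-ys[i-1])/h[i-1])
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n+1):
        m = a[i]/b[i-1]
        b[i] -= m*c[i-1]
        d[i] -= m*d[i-1]
    M = [0.0]*(n+1)
    for i in range(n-1, 0, -1):
        M[i] = (d[i] - c[i]*M[i+1])/b[i]

    def s(t):
        i = min(max(bisect.bisect_right(xs, t) - 1, 0), n-1)  # interval index
        u, v = t - xs[i], xs[i+1] - t
        return (M[i]*v**3 + M[i+1]*u**3)/(6*h[i]) \
             + (ys[i]/h[i] - M[i]*h[i]/6)*v + (ys[i+1]/h[i] - M[i+1]*h[i]/6)*u
    return s
```

For data lying on a straight line all Mi vanish and the spline reduces to linear interpolation, which makes a convenient sanity check; note also that evaluation needs exactly the integer index search `bisect_right` flagged as a cost above.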