Computational math: Assignment 4

Based on Ting Gao
1. (Exercise 12.2)
Solution: (a) Derive a formula for matrix A.
Denote the n-vector of data at the points x_j by z_j, i.e.,
\[
Z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix}
  = \begin{pmatrix}
      1 & x_1 & \cdots & x_1^{n-1} \\
      \vdots & \vdots & \ddots & \vdots \\
      1 & x_n & \cdots & x_n^{n-1}
    \end{pmatrix}
    \begin{pmatrix} c_0 \\ \vdots \\ c_{n-1} \end{pmatrix}
  = XC,
\]
where c_j (j = 0, …, n − 1) are the coefficients of the degree n − 1 interpolating polynomial.
Denote
\[
Y = \begin{pmatrix}
      1 & y_1 & \cdots & y_1^{n-1} \\
      \vdots & \vdots & \ddots & \vdots \\
      1 & y_m & \cdots & y_m^{n-1}
    \end{pmatrix}.
\]
Hence, for every Z ∈ R^n,
\[
AZ = A(XC) = YC,
\]
i.e.,
\[
(AX - Y)C = 0 \quad \forall C \in \mathbb{R}^n
\]
(since X is invertible, C ranges over all of R^n as Z does). Therefore,
\[
A = YX^{-1}.
\]
(b) Write a program to calculate A and plot ‖A‖∞.
See code Gaot4a1.m and Fig.1.
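The referenced MATLAB script Gaot4a1.m is not included; the following pure-Python sketch computes ‖A‖∞ under the assumption of n equispaced interpolation points x_j and a fine grid of evaluation points y_j on [−1, 1]. It uses the fact that row j of A = YX⁻¹ consists of the Lagrange basis values l_k(y_j), so the ∞-norm of A is the largest absolute row sum, i.e. the (discrete) Lebesgue constant. The function and grid names are illustrative, not from the original code.

```python
import math

def lagrange_row(xs, y):
    # values of the Lagrange basis polynomials l_k(y) for nodes xs
    return [
        math.prod((y - xj) / (xk - xj) for j, xj in enumerate(xs) if j != k)
        for k, xk in enumerate(xs)
    ]

def inf_norm_A(n, m=201):
    # n equispaced nodes and m evaluation points on [-1, 1] (assumed grids)
    xs = [-1 + 2 * j / (n - 1) for j in range(n)]
    ys = [-1 + 2 * j / (m - 1) for j in range(m)]
    # row of A at y is [l_0(y), ..., l_{n-1}(y)]; the inf-norm of A is
    # the largest absolute row sum over the evaluation grid
    return max(sum(abs(v) for v in lagrange_row(xs, y)) for y in ys)

print(inf_norm_A(11))  # grows rapidly with n, matching Fig. 1
```

For n = 2 the basis values are nonnegative and sum to 1 on [−1, 1], so ‖A‖∞ = 1; by n = 11 the norm is already in the tens, consistent with the exponential growth shown in Figure 1.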
Figure 1: ‖A‖∞ and the Lebesgue constants versus n (n = 0, …, 30; vertical axis 10⁰ to 10⁷, log scale).
(c) What is the ∞-norm condition number κ of the problem of interpolating the constant function 1?
Consider as input the n-vector of data for the constant function 1 at the points x_j; we denote it Z = [1, 1, …, 1]^T. The output is the vector of values p(y_j); since the interpolant of constant data is p ≡ 1, we have f(Z) = AZ = [1, 1, …, 1]^T ∈ R^m, where A = YX^{-1} as above. The map is linear, so J = A, and for n = 1, 2, …, 30 the condition number is
\[
\kappa = \frac{\|J\|_\infty}{\|f(Z)\|_\infty / \|Z\|_\infty} = \frac{\|A\|_\infty}{1/1} = \|A\|_\infty.
\]
We see that as n increases, the problem becomes severely ill-conditioned.
(d) How close is your result for n = 11 to the bound implicit in Fig. 11.1?
See Fig. 2.
Figure 2: result for n = 11 (horizontal axis −1 to 1, vertical axis −1 to 5).
2. Find the closest Marc-32 machine number to x = −9.8 and the round-off error?
Solution: On a Marc-32 machine, we write
\[
(-9.8)_{10} = (-1)^1 \cdot (1.225)_{10} \cdot 2^3.
\]
The two neighboring machine numbers are
\[
(9.8)^+ = (1.00111001100110011001101)_2 \cdot 2^3 = (9.800000190734863)_{10}
\]
and
\[
(9.8)^- = (1.00111001100110011001100)_2 \cdot 2^3 = (9.799999237060547)_{10}.
\]
Since |9.8 − (9.8)^+| < |9.8 − (9.8)^−|, the closest machine number to −9.8 is −9.800000190734863, and the relative round-off error is 0.000000019462741 = 1.9462741 × 10^{-8}.
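This can be checked numerically; the sketch below assumes Marc-32 rounding behaves like IEEE single precision (a 24-bit significand, matching the 23 fraction bits used above), which is an assumption about the machine model, not something stated in the problem.

```python
import struct

x = -9.8
# round x to the nearest 32-bit float by packing and unpacking it
x32 = struct.unpack('f', struct.pack('f', x))[0]
print(x32)                    # -9.800000190734863
print(abs(x32 - x) / abs(x))  # ~1.9462741e-08
```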
3. Suggest ways to avoid loss of significance in these calculations:
(a) √(x² + 1) − 1.
We can rationalize, since the cancellation occurs for |x| ≪ 1:
\[
\sqrt{x^2+1} - 1 = \frac{x^2}{\sqrt{x^2+1} + 1}.
\]
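A quick numerical comparison of the two forms (a sketch; the test point 1e-8 is an arbitrary choice of a value with |x| ≪ 1):

```python
import math

def f_naive(x):
    return math.sqrt(x * x + 1.0) - 1.0

def f_stable(x):
    # rationalized form: no subtraction of nearly equal quantities
    return x * x / (math.sqrt(x * x + 1.0) + 1.0)

x = 1e-8
print(f_naive(x))   # 0.0 -- all significance lost in double precision
print(f_stable(x))  # ~5e-17, correct to full precision
```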
(b) sin(x) − tan(x).
We can use a trigonometric identity, since the cancellation occurs near x = kπ (k ∈ Z):
\[
\sin(x) - \tan(x) = \sin(x)\,\frac{\cos(x) - 1}{\cos(x)} = -\sin(x)\,\frac{2\sin^2(x/2)}{\cos(x)}.
\]
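The rewritten identity can be compared against the naive form near x = 0, where sin(x) − tan(x) ≈ −x³/2 (a sketch; the test point 1e-5 is an arbitrary small value):

```python
import math

def g_naive(x):
    return math.sin(x) - math.tan(x)

def g_stable(x):
    # sin(x) - tan(x) = -sin(x) * 2*sin(x/2)**2 / cos(x): no cancellation
    return -math.sin(x) * 2.0 * math.sin(0.5 * x) ** 2 / math.cos(x)

x = 1e-5   # true value is about -x**3/2 = -5e-16
print(g_naive(x))   # several trailing digits contaminated by cancellation
print(g_stable(x))  # accurate to full precision
```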
4. If at most 2 bits of precision can be lost in the computation of y = √(x² + 1) − 1, what restriction must be placed on x, assuming y is computed exactly as written?
Solution: By the loss-of-precision theorem, at most 2 bits are lost in the subtraction provided
\[
1 - \frac{1}{\sqrt{x^2+1}} \ge \frac{1}{2^2}.
\]
Hence 1/√(x² + 1) ≤ 3/4, i.e. x² + 1 ≥ 16/9, which gives
\[
|x| \ge \frac{\sqrt{7}}{3}.
\]
Therefore we should compute
\[
y = \begin{cases}
\sqrt{x^2+1} - 1, & \text{if } |x| \ge \sqrt{7}/3, \\[4pt]
\dfrac{x^2}{\sqrt{x^2+1} + 1}, & \text{otherwise.}
\end{cases}
\]
14.1. True or False?
(b) sin x = O(1) as x → 0.
(c) log x = O(x^{1/100}) as x → ∞.
(f) fl(π) − π = O(ε_machine). (We do not mention that the limit is ε_machine → 0, since that is implicit for all expressions O(ε_machine) in this book.)
(g) fl(nπ) − nπ = O(ε_machine), uniformly for all integers n. (Here nπ represents the exact mathematical quantity, not the result of a floating point calculation.)
(b) True, because |sin(x)| ≤ 1 as x → 0.
(c) True, because log(x)/x^{1/100} → 0 as x → ∞.
(f) True: fl(π) = π(1 + ε) for some |ε| ≤ ε_machine, so |fl(π) − π| ≤ |π| ε_machine = O(ε_machine).
(g) False: the relative error is O(ε_machine), but the absolute error is fl(nπ) − nπ = nπ · O(ε_machine), which grows with n, so the bound does not hold uniformly for all n ∈ Z.
14.2. (a) Show that (1 + O(ε_machine))(1 + O(ε_machine)) = 1 + O(ε_machine). The precise meaning of this statement is that if f is a function satisfying f(ε_machine) = (1 + O(ε_machine))(1 + O(ε_machine)) as ε_machine → 0, then f also satisfies f(ε_machine) = 1 + O(ε_machine) as ε_machine → 0.
Solution:
\[
(1 + O(\epsilon_{machine}))(1 + O(\epsilon_{machine}))
= 1 + 2\,O(\epsilon_{machine}) + O(\epsilon_{machine})\,O(\epsilon_{machine})
= 1 + O(\epsilon_{machine}) + O(\epsilon_{machine}^2)
= 1 + O(\epsilon_{machine}).
\]