Homework 2
Matthew Jin
April 10, 2014
1a)
The relative error is given by $\frac{|\hat{y}-y|}{|y|}$, where $\hat{y}$ represents the observed output value and $y$ represents the theoretical output value. In this case, the observed output value is 0, as shown in the following code:
format long
a=sqrt(1+10^(-16));
b=a-1 % b represents the actual function output
Output:
b =
0
Therefore, $\frac{|\hat{y}-y|}{|y|} = \frac{|0-y|}{|y|} = \frac{|-y|}{|y|} = 1$. So the relative error of a naive implementation of $f(x)$ is 1.
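As a quick numerical check, the following minimal sketch (not part of the original submission) takes $x = 10^{-8}$ so that $x^2 = 10^{-16}$ and uses the leading Taylor term $x^2/2$ as the reference value for $y$; the variable names are illustrative.
format long
x = 1e-8;                          % chosen so that x^2 = 1e-16
naive = sqrt(1 + x^2) - 1          % the naive implementation returns 0
yref = x^2/2;                      % leading Taylor term, used as the reference value
relerr = abs(naive - yref)/yref    % evaluates to 1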
1b)
The first four terms of the Taylor series for $f$ about zero may be determined by writing out the first 5 terms of the binomial expansion of $(1+x^2)^{1/2}$, and then subtracting 1:
$$(1+x^2)^{1/2} = 1 + \binom{n}{1}x^2 + \binom{n}{2}x^4 + \binom{n}{3}x^6 + \binom{n}{4}x^8 + \cdots, \quad \text{where } n = 1/2.$$
Note that although the binomial coefficient has no combinatorial meaning for fractional $n$, the usual formula for it still applies as a shorthand for the coefficients of the binomial series. After plugging in $n$ appropriately, the expression
comes out to
$$f(x) = 1 + \frac{x^2}{2} - \frac{x^4}{8} + \frac{x^6}{16} - \frac{5x^8}{128} - 1 = \frac{x^2}{2} - \frac{x^4}{8} + \frac{x^6}{16} - \frac{5x^8}{128}.$$
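As a sketch of how one might check these coefficients numerically, the generalized binomial coefficients $\binom{1/2}{k}$ can be evaluated with the gamma function; this snippet and its variable names are illustrative, not part of the original solution.
n = 1/2;
k = 1:4;
coeffs = gamma(n+1) ./ (gamma(k+1) .* gamma(n-k+1))
% returns approximately 0.5, -0.125, 0.0625, -0.0391,
% i.e. 1/2, -1/8, 1/16, -5/128, matching the series above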
1c)
For this part, focus on which of the operations could result in catastrophic cancellation, which would in turn result in a magnified relative error. Consider separately the operation $\sqrt{1+x^2}$; in the computer, $\sqrt{1+x^2}$ is evaluated as
$$\sqrt{1+x^2}\,(1+\epsilon_1) = \sqrt{1+x^2} + \epsilon_1\sqrt{1+x^2} \approx \sqrt{1+x^2} + \epsilon_1,$$
because $x^2$ is very small. Overall, the machine would evaluate the expression $\sqrt{1+x^2} - 1$ and output (excluding the other, less significant errors present in the entire operation) $\sqrt{1+x^2} + \epsilon_1 - 1$. Letting the first term of the Taylor series of $f(x)$ centered at 0, namely $x^2/2$, be the theoretical value of $\sqrt{1+x^2} - 1$, the relative error of the machine is overall
$$\frac{\left|\sqrt{1+x^2} + \epsilon_1 - 1 - \left(\sqrt{1+x^2} - 1\right)\right|}{x^2/2} = \frac{2\epsilon_1}{x^2} \le \frac{2\epsilon_{\mathrm{mach}}}{x^2}.$$
To find the $x_c$ for which the relative error is about $10^{-12}$ for all $x > x_c$, evaluate $10^{-12} = \frac{2 \cdot 10^{-16}}{x_c^2}$, which yields an $x_c$ of about 0.0141. $x_c$ is on the order of $10^{-2}$.
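A one-line sketch of this computation (taking $\epsilon_{\mathrm{mach}} = 10^{-16}$, as above):
x_c = sqrt(2*1e-16 / 1e-12)   % about 0.0141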
1d)
One would need the first three terms of the expansion. One allows the absolute error to be dominated by the first omitted Taylor series term and takes $x^2/2$ as the approximation of the theoretical answer. The fourth Taylor series term is the one whose relative size most nearly corresponds to 12-digit accuracy:
$$\text{relative error} = \frac{\left|-\frac{5 \cdot 0.0141^8}{128}\right|}{\frac{0.0141^2}{2}} \approx 6.139 \cdot 10^{-13},$$
which indeed implies roughly 12 digits of accuracy if one uses a three-term Taylor series approximation.
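A short sketch of this check at $x = x_c$ (variable names are illustrative):
x = 0.0141;
omitted = abs(-5*x^8/128);    % first omitted (x^8) term
leading = x^2/2;              % approximation of the theoretical value
ratio = omitted/leading       % about 6.14e-13, i.e. roughly 12 digits of accuracy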
1e)
Letting xc = .0141, one implements the following MATLAB code:
format long
x=.0141; %this value represents the calculated x_c
a=(x^2)/2-(x^4)/8+(x^6)/16 % a represents the three-term taylor
% approximation of the function at x_c
b=sqrt(1+x^2)-1 % b represents the naive implementation’s
%approximation of the function at x_c
reldiff=abs((a-b))/b %reldiff represents the relative
%difference between the taylor approximation and the naive implementation
Output:
a =
9.940005981411550e-05
b =
9.940005981401434e-05
reldiff =
1.017666014307927e-12
One can see that, indeed, the outputs of the two methods differ by a relative amount on the order of $10^{-12}$.
2a)
Using the function goodnewton developed by Dan Cianci, I run the following
script:
f = @(x)x^3-1;
Df=@(x) 3*x^2;
goodnewton(f,Df,1i,1*10^(-8))
Output:
x0 =
-0.333333333333333 + 0.666666666666667i
x0 =
-0.582222222222222 + 0.924444444444444i
x0 =
-0.508790803289319 + 0.868165511887349i
x0 =
-0.500068739067393 + 0.865982218692540i
x0 =
-0.499999996289030 + 0.866025398338587i
ans =
-0.500000000000000 + 0.866025403784439i
So one can see that the Newton iteration converges on $-\frac{1}{2} + \frac{\sqrt{3}}{2}i$. The third root is $-\frac{1}{2} - \frac{\sqrt{3}}{2}i$. The three roots appear as follows:
The angle between the three roots is $\frac{2\pi}{3}$. To calculate this, let a root be $re^{i\theta}$. Then to determine the roots of $f(z) = z^3 - 1$, evaluate $z^3 = 1$: for some $r$ and $\theta$, $r^3 e^{3i\theta} = 1$. In order for this to be true, $3\theta$ must be equal to some integer multiple of $2\pi$ and $r^3$ must be equal to 1, because $re^{i\theta} = r(\cos\theta + i\sin\theta)$. Thus $r = 1$ and $\theta \in \{0, \frac{2\pi}{3}, \frac{4\pi}{3}\}$, giving three solutions with angles $0$, $\frac{2\pi}{3}$, and $\frac{4\pi}{3}$. Note that the angle between consecutive roots is $\frac{2\pi}{3}$. This is because $\theta$ is equal to $n \cdot \frac{2\pi}{3}$ for integer $n$; as one increments $n$ by one, the angle of the root increases by $\frac{2\pi}{3}$. Finally, one knows that there are only three unique roots because increments of $\frac{2\pi}{3}$ beyond $\theta = \frac{4\pi}{3}$ just cause the roots to cycle back to the ones already noted.
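The following sketch, not part of the original script, lists the three roots and the gaps between their angles:
theta = 2*pi*(0:2)/3;        % 0, 2*pi/3, 4*pi/3
z = exp(1i*theta)            % 1, -0.5+0.8660i, -0.5-0.8660i
gaps = diff(theta)           % each gap is 2*pi/3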
2b)
The problem requires one to find all $z$ of the form $re^{i\theta}$ such that $z^5 = r^5 e^{5i\theta} = 2$. As in part (a), this implies that $r = 2^{1/5}$ and $\theta \in \{0, \frac{2\pi}{5}, \frac{4\pi}{5}, \frac{6\pi}{5}, \frac{8\pi}{5}\}$. Overall, the solutions are $2^{1/5}e^{0i}$, $2^{1/5}e^{2\pi i/5}$, $2^{1/5}e^{4\pi i/5}$, $2^{1/5}e^{6\pi i/5}$, and $2^{1/5}e^{8\pi i/5}$.
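A brief sketch (illustrative, not from the original solution) that lists the five roots and verifies them:
k = 0:4;
z = 2^(1/5) * exp(1i*2*pi*k/5)   % the five fifth roots of 2
z.^5                             % each entry is 2, up to rounding error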
2c)
The following code was used to determine in which of the three basins each
point in the complex plane lies:
a=-2:.01:2;
[X,Y]=meshgrid(a,a); %set up the grid of values over the range and increment of a
f=@(x) x.^3-1;
Df=@(x) 3*x.^2;
tol=.01;
Z=goodnewton2(f,Df,X+Y*1i,tol); %call goodnewton2, a modified version of Dan Cianci’s
% goodnewton, to output the value on which newton’s iteration converges
Z=sign(angle(Z)); %if the angle is 2pi/3, return "1",
%if angle is 0, return "0"
%if angle is -2pi/3, return "-1"
surf(X,Y,Z)
imagesc(a,a,Z);
The code for goodnewton2 is as follows:
function [ r ] = goodnewton2( f,Df,x0,tol )
%goodnewton2 uses Newton iteration to find a root of f with initial
%guess x0 to the specified tolerance tol
%
%
%Inputs:
%f,Df are the function and derivative of the function respectively
%x0 the initial starting point of the iteration
%tol is accepted for interface compatibility but is not used in this modified
%version, which simply runs a fixed number of iterations
%
%
%Outputs:
%r- the computed root
%
%Cianci 4/1/2014, edited by Matthew Jin 4/9/2014
x1 = x0 - f(x0)./Df(x0);
%implement the iteration 100 times
for k=1:100
x0=x1;
x1 = x0 - f(x0)./Df(x0);
end
r=x1;
end
The resulting plot appears as follows:
If one decreases the range of the meshgrid parameter, one can zoom in on
smaller portions of the plot:
Decreasing the range of the meshgrid parameter by 10x and then by 100x,
one can see that the shape of the original graph is still retained, indeed confirming that this figure is a fractal:
3a) The condition number is $\kappa(x) = \left|\frac{f'(x)\,x}{f(x)}\right|$, so for the function $\sin^{-1}(x)$, $\kappa(x) = \left|\frac{x}{\sqrt{1-x^2}\,\sin^{-1}(x)}\right|$, which evaluated at $x = 0.999999$ yields $\kappa(0.999999) = \left|\frac{0.999999}{\sqrt{1-0.999999^2}\,\sin^{-1}(0.999999)}\right| \approx 450.563$.
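A one-line MATLAB sketch of this evaluation (not part of the original solution):
x = 0.999999;
kappa = abs( x / (sqrt(1 - x^2) * asin(x)) )   % about 450.56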
3b) Applying the same formula for $\kappa$ as above to $\ln(x)$, one finds $\kappa(x) = \left|\frac{x \cdot \frac{1}{x}}{\ln(x)}\right| = \left|\frac{1}{\ln(x)}\right|$. This expression is large when $|\ln(x)|$ is small, which occurs in the neighborhood of $x = 1$.
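A quick sketch illustrating the growth of $\kappa(x) = |1/\ln(x)|$ near $x = 1$; the sample points are chosen only for illustration.
x = [0.5 0.9 0.99 0.999];
kappa = abs(1 ./ log(x))   % roughly 1.44, 9.5, 99.5, 1000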
3c) The roots of the function $f(x) = x^2 - 2x + c$ are given by the quadratic formula: $r = \frac{2 \pm \sqrt{4-4c}}{2} = 1 \pm \sqrt{1-c}$. Again applying the formula for $\kappa$ to $r$, now viewed as a function of $c$, one finds
$$\kappa(c) = \left|\frac{\pm c \cdot \frac{1}{2}(1-c)^{-1/2}}{1 \pm \sqrt{1-c}}\right| = \left|\frac{c}{2\sqrt{1-c}\left(1 \pm \sqrt{1-c}\right)}\right| = \left|\frac{c}{2\left(\sqrt{1-c} \pm (1-c)\right)}\right|.$$
If $c = 1 - 10^{-16}$, then
$$\kappa(1 - 10^{-16}) = \left|\frac{1 - 10^{-16}}{2\left(\sqrt{10^{-16}} \pm 10^{-16}\right)}\right| = \left|\frac{1 - 10^{-16}}{2\left(10^{-8} \pm 10^{-16}\right)}\right|.$$
This yields $\kappa(1 - 10^{-16}) = \frac{99999999}{2}$ or $\frac{100000001}{2}$. The roots are a distance of $2 \cdot 10^{-8}$ apart.
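A sketch of these numbers, working with $\delta = 1 - c$ directly so that $1 - 10^{-16}$ is not rounded away in double precision (this reparameterization is an assumption of the sketch, not part of the original solution):
delta = 1e-16;                                            % c = 1 - delta
r = 1 + [1 -1]*sqrt(delta)                                % roots 1 +/- 1e-8, a distance 2e-8 apart
kappa = (1 - delta) ./ (2*(sqrt(delta) + [1 -1]*delta))   % about 5e7 for each root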
3d) A backwards stable algorithm will evaluate $\sin(10^{10})$ as
$$\sin(\widetilde{10^{10}}) = \sin\!\left(10^{10}(1+\epsilon)\right) = \sin\!\left(10^{10} + 10^{10}\epsilon\right) = \sin(10^{10})\cos(10^{10}\epsilon) + \cos(10^{10})\sin(10^{10}\epsilon).$$
Let $\epsilon = 10^{-16}$. Then $\sin(\widetilde{10^{10}}) = \sin(10^{10})\cos(10^{-6}) + \cos(10^{10})\sin(10^{-6})$.
To further simplify this problem, consider the Taylor series expansions of $\cos(x)$ and $\sin(x)$. For small $x$, $\sin(x) \approx x$; on the other hand, for small $x$, $\cos(x) \approx 1 - \frac{x^2}{2}$. Therefore, $\sin(10^{-6}) \approx 10^{-6}$ and $\cos(10^{-6}) \approx 1$.
From here, one can see that $\sin(\widetilde{10^{10}}) \approx \sin(10^{10}) + \cos(10^{10})\sin(10^{-6})$. The latter term clearly indicates the absolute error; to find the relative error, we evaluate $\frac{|\cos(10^{10})\sin(10^{-6})|}{|\sin(10^{10})|} \approx 1.79099 \cdot 10^{-6}$. The backwards-stable algorithm would therefore probably be accurate to about 6 digits when computing $\sin(10^{10})$.
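A one-line sketch of the relative error estimate above (not part of the original solution):
relerr = abs( cos(1e10)*sin(1e-6) / sin(1e10) )   % about 1.79e-6, consistent with the estimate above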
4) First, use Taylor’s theorem to expand f (x + h) and f (x − h) around x.
$$f(x+h) = f(x) + f'(x)(x+h-x) + \frac{f''(x)}{2}(x+h-x)^2 + \frac{f'''(x)}{3!}(x+h-x)^3 + \frac{f^{(4)}(q)}{4!}(x+h-x)^4$$
$$= f(x) + f'(x)h + \frac{f''(x)}{2}h^2 + \frac{f'''(x)}{3!}h^3 + \frac{f^{(4)}(q)}{4!}h^4,$$
for some $q$ between $x$ and $x+h$.
Similarly, substituting $x-h$ instead of $x+h$ into the Taylor series, one gets
$$f(x-h) = f(x) + f'(x)(-h) + \frac{f''(x)}{2}(-h)^2 + \frac{f'''(x)}{3!}(-h)^3 + \frac{f^{(4)}(p)}{4!}(-h)^4,$$
for some $p$ between $x$ and $x-h$.
Thus, $\frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$ evaluates to (after the odd-powered terms cancel, since they have equal magnitude and opposite sign)
$$\frac{2f(x) + f''(x)h^2 + \frac{f^{(4)}(q)}{4!}h^4 + \frac{f^{(4)}(p)}{4!}h^4 - 2f(x)}{h^2} = f''(x) + \left(\frac{f^{(4)}(q)}{4!} + \frac{f^{(4)}(p)}{4!}\right)h^2.$$
Notice that the latter term represents the difference between $f''(x)$ and the exact value of the finite difference formula; it is the truncation error. Also, notice that $\frac{f^{(4)}(q)}{4!} + \frac{f^{(4)}(p)}{4!}$ is bounded for small $h$. Therefore, it immediately follows that the error of the finite difference formula for $f''(x)$ is $O(h^2)$.
To use this formula to approximate the second derivative of sin(5x) at x=1,
I created the following function:
function result=h2p4(h)
% This function accepts an input of h, and implements the finite difference
% formula for f''(x) for sin(5x) at x=1. It then computes the relative error of
% this approximation against the theoretical value.
% result=h2p4(h)
% Matthew Jin April 08, 2014
approx=(sin(5+5*h)-2*sin(5)+sin(5-5*h))./(h.^2); %implement the finite difference formula for f''(x)
result=abs((approx-(-25*sin(5))))/(-25*sin(5)); %compute the relative error
end
I then implemented this function in a separate test script as follows:
format long
x=10.^[-16:.05:0];
myanswer=arrayfun(@(x) h2p4(x),x); %apply function h2p4 to each element in array x
loglog(x,myanswer)
The graph of the relative error with respect to h, on logarithmic scales for both axes, appears as follows:
This graph is reasonable. The error due to rounding is approximately on the order of $O(\epsilon_{\mathrm{mach}}/h^2)$. This is because the computational errors in the numerator of the finite difference approximation may, in the worst case, add machine error constructively, and that error is then divided by $h^2$. Furthermore, as proved just above, the error caused by the finite difference formula itself is on the order of $O(h^2)$. To find an approximation for the $h$ at which the error is minimal, one might thus set $h^2 = \epsilon_{\mathrm{mach}}/h^2$. This implies that $h \approx 10^{-4}$ will result in the minimal error, which is indeed what we see on the graph, after allowing for the effect of constant multipliers on the relative significance of the formula error vs. the rounding error: the bottom of the "V" occurs in the neighborhood of $10^{-4}$. One can see that for $h$ larger than approximately $10^{-4}$, the Taylor series error takes over, with a slope on the log-log graph of approximately 2 (in that region, the relative error increases approximately 10 orders of magnitude over an increase of approximately 5 orders of magnitude in $h$). This corroborates the result proven just above, in which the formula error was shown to be $O(h^2)$. For $h$ smaller than roughly $10^{-4}$, the rounding error takes over, with a slope of roughly $-2$ as $h$ shrinks (again, the error increases about 10 orders of magnitude over a decrease of about 5 orders of magnitude in $h$). This corroborates the idea that the rounding error occurs on the order $O(\epsilon_{\mathrm{mach}}/h^2)$. For $h$ smaller than $10^{-10}$, one can see that the graph becomes somewhat unpredictable, yet still appears to maintain the slope of roughly $-2$.
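One way to locate the bottom of the "V" numerically is to reuse h2p4 and take the minimum of the measured error; this is a sketch, and h_opt is an illustrative name rather than part of the original script.
h = 10.^(-16:.05:0);
err = arrayfun(@(hh) h2p4(hh), h);
[~, idx] = min(err);
h_opt = h(idx)        % expected to be in the neighborhood of 1e-4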
5a)
This MATLAB code would ideally run 10 times and then terminate. In reality, it is an infinite loop. This is because decimal fractions, such as 0.1, cannot
be expressed precisely in the floating point system. The computer therefore
introduces some rounding error with each addition of 0.1. By the 10th addition,
the error is large enough that the value that the computer should compute to
be 1 is not, in fact, precisely equal to 1. This can be shown by running through
the loop ten times in debug mode and then checking the difference between the
output that should be 1 and 1. Below is the slightly modified code:
format long
x=0;
while x~=1
keyboard
x=x+.1
end
Here is the output and the check:
x =
0.100000000000000
x =
0.200000000000000
x =
0.300000000000000
x =
0.400000000000000
x =
0.500000000000000
x =
0.600000000000000
x =
0.700000000000000
x =
0.800000000000000
x =
0.900000000000000
x =
1.000000000000000
K>> x-1
ans =
-1.110223024625157e-16
Clearly, the computer perceives some difference between what is displayed as 1 and the exact value 1, due to the error compounded with each addition of 0.1. To make this code run a precise number of times, one might use integer values rather than decimals in the code. Because small integers are represented exactly in the floating point system, the rounding error would be avoided.
format long
x=0;
while x~=10
x=x+1
end
b)
The relative error can be computed by evaluating $\frac{\left|1782^{12} + 1841^{12} - 1922^{12}\right|}{1782^{12} + 1841^{12}}$. Plugging this directly into MATLAB, one finds a relative error $\approx 2.7554 \cdot 10^{-10}$. Because $\log_2(1922^{12})$ is about 130.9, it immediately follows that one would need about 131 binary digits to precisely handle integers of this size. I evaluate $1922^{12}$ because it is slightly larger than $1782^{12} + 1841^{12}$.
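A sketch of the computation described above (all in double precision, so the large powers are themselves rounded):
a = 1782^12 + 1841^12;
b = 1922^12;
relerr = abs(a - b)/a     % about 2.76e-10
bits = log2(b)            % about 130.9, so roughly 131 bits are needed for exactness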
c)
Matrix multiplication of two n-by-n matrices involves n multiplications and n-1 additions for each element of the product (for a total of 2n-1 operations per element), due to the nature of the dot product. The product matrix has $n^2$ elements, so to perform the entire matrix multiplication, $n^2(2n-1)$ operations must be done. By the same argument, to multiply an n-by-n matrix with a column vector of length n, one must perform $n(2n-1)$ operations (n elements in the product, 2n-1 operations to compute each element). To compute the product ABx, one can either compute (AB)x or A(Bx). Computing (AB)x would require $n^2(2n-1) + n(2n-1)$ operations, whereas computing A(Bx) would require $2n(2n-1)$ operations. For any $n > 1$, A(Bx) requires fewer operations, thus reducing the chance for error and reducing the amount of time the program takes to perform the multiplication. Therefore, it is the better way of computing ABx.
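A small sketch of the two operation counts for a representative size; the formulas are the ones derived above, and n = 1000 is an arbitrary illustrative choice.
n = 1000;
ops_AB_then_x = n^2*(2*n-1) + n*(2*n-1)   % (A*B)*x: about 2.0e9 operations
ops_Bx_then_A = 2*n*(2*n-1)               % A*(B*x): about 4.0e6 operations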
Attribution: I worked with Michael Downs and Matt Marcus to reach some
of the solutions in this homework. I also adapted Dan Cianci’s Newton’s method
code for problem 2.