Lecture 4: Wrap-up of floating point and error analysis

2/6/2014
Computational Methods
CMSC/AMSC 460
Ramani Duraiswami,
Dept. of Computer Science
IEEE-754 (double precision)
Bit layout (64 bits, numbered from the most significant bit):

  bit 0       : sign (s)
  bits 1-11   : exponent
  bits 12-63  : mantissa (significand)

Value = (-1)^S * 1.f * 2^(E-1023)

where (-1)^S gives the sign, the leading 1 is understood (not stored), f is the mantissa without the leading 1, 2 is the base, and E-1023 is the actual exponent.
IEEE-754 (single precision)
Bit layout (32 bits, numbered from the most significant bit):

  bit 0      : sign (s)
  bits 1-8   : exponent
  bits 9-31  : mantissa (significand)

Value = (-1)^S * 1.f * 2^(E-127)

where, as before, the leading 1 is understood (not stored), f is the mantissa without the leading 1, 2 is the base, and E-127 is the actual exponent.
Biased Exponent
• The stored exponent value is offset from the actual exponent by the exponent bias.
• Many math operations require comparisons; a 2's-complement exponent would make comparison harder.
• The bias puts the exponent within an unsigned range suitable for comparison.
• Bits are arranged so that the sign bit is most significant, the biased exponent comes next, and the mantissa sits in the least significant bits.
• This allows high-speed comparison of floating-point numbers using fixed-point hardware.
• Bias value = 2^(k-1) - 1, where k is the number of exponent bits (1023 for double, 127 for single).
• The bias must be subtracted to get the actual exponent (see the sketch below).
• Single precision: the exponent runs from -126 to +127, biased by adding 127 to get a value in 1..254 (0 and 255 are special).
• Double precision: the exponent runs from -1022 to +1023, biased by adding 1023 to get a value in 1..2046 (0 and 2047 are special).
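A small Matlab sketch of unpacking these fields (the variable names are ours; note that Matlab's bit functions number bits from the least significant end, so the sign sits in the top bit here rather than "bit 0" of the layout above):

  x = -6.25;
  bits = typecast(x, 'uint64');                    % raw 64-bit pattern
  s = double(bitshift(bits, -63));                 % sign bit
  Ebiased = double(bitand(bitshift(bits, -52), uint64(2047)));  % 11 exponent bits
  f = double(bitand(bits, uint64(2^52 - 1))) / 2^52;            % 52-bit fraction
  E = Ebiased - 1023;                              % subtract the bias
  value = (-1)^s * (1 + f) * 2^E                   % reconstructs -6.25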
Special Numbers
Non-normalized numbers and other special values are signalled by the two reserved exponent patterns.

Double precision (value (-1)^S * 2^E * 1.f, stored exponent E+1023):

                       f == 0           f ~= 0
  E+1023 == 0          0                non-normalized, typically underflow
  0 < E+1023 < 2047    powers of two    ordinary floating-point numbers
  E+1023 == 2047       infinity         Not a Number (NaN)

Single precision (value (-1)^S * 2^(E-127) * 1.M, stored exponent E):

                       M == 0           M ~= 0
  E == 0               0                non-normalized, typically underflow
  0 < E < 255          powers of two    ordinary numbers
  E == 255             infinity         Not a Number (NaN)
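These patterns can be poked at directly in Matlab (our illustration; the variable names are arbitrary):

  x = 1/0;                  % Inf: exponent field all ones, fraction zero
  y = 0/0;                  % NaN: exponent field all ones, fraction nonzero
  d = realmin / 2^10;       % denormal: exponent field all zeros, fraction nonzero
  [isinf(x), isnan(y), d > 0]       % all true: d underflows gradually, not to 0
  y == y                            % false: NaN compares unequal even to itself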
Effects of floating point
• eps is the distance from 1 to the next larger floating-point number.
• Equivalently, it is roughly the smallest number which, when added to 1, gives a result different from 1.
• eps = 2^(-52)
• In Matlab:

             Binary            Decimal
  eps        2^(-52)           2.2204e-16
  realmin    2^(-1022)         2.2251e-308
  realmax    (2-eps)*2^1023    1.7977e+308
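These limits can be verified at the Matlab prompt (a minimal sketch; the printed values assume IEEE-754 double precision):

  eps                    % 2.2204e-16
  (1 + eps)   > 1        % true: eps is the gap between 1 and the next float
  (1 + eps/4) > 1        % false: the sum rounds back to exactly 1
  realmin / 2            % still positive, thanks to gradual (denormal) underflow
  realmax * 2            % overflows to Inf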
Two issues
• Can we design algorithms to minimize errors?
• Can we estimate errors based on knowledge of the algorithm?
  – Error analysis: forward and backward
Error Analysis
• Computations should be as accurate and as error-free as
possible
• Sources of error:
– Poor models of a physical situation
– Ill-posed problems
– Errors due to representation of numbers on a computer and
successive operations with these
• How do we analyze an algorithm for correctness?
– Forward error analysis
– Backward error analysis
• Well-posedness
Errors
• Scientific computing involves lots of errors:
  1. Measurement errors
  2. Errors in properties
  3. Inexact mathematical models
  4. Discretization errors: something continuous is represented discretely
  5. Errors in the solution to discrete representations of numbers
Errors are inevitable
• Everybody did the best they could
• No one made any mistakes, yet the answer could be wrong
• The goal of error analysis is to
  – determine when the answer can be relied upon
  – determine which algorithms can be trusted for which data
• Last class we focused on errors due to the finite representation of numbers
Error Analysis
• Two primary techniques of error analysis
– Forward Error Analysis
• Floating-point representation of the error is subjected to the same mathematical
operations as the data itself.
– Equation for the error itself
– Backward Error Analysis
• Attempt to regenerate the original mathematical problem from previously
computed solutions
– Minimizes error generation and propagation
Error Analysis
• Forward and Backward error analysis
• Forward error analysis
– Assume that the problem we are solving is exactly specified
– Produce an approximate answer using the algorithm considered
[Diagram: the true problem (*) has an unknown true solution and a known computed solution (#).]
– The goal of forward error analysis is to produce a region guaranteed to contain the true solution (see the sketch below)
– Report the region and the computed solution
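A minimal forward-error computation in Matlab (a sketch of the idea; the summation problem and variable names are our own choices, with double precision standing in for the exactly specified "true" solution):

  n = 1e6;
  x = rand(n, 1);                        % the exactly specified problem data
  sTrue   = sum(x);                      % reference solution in double
  sSingle = sum(single(x));              % computed solution in single
  fwdErr  = abs(double(sSingle) - sTrue) / abs(sTrue)   % relative forward error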
Backward error analysis
• We know that our problem specification itself has error (“error in
initial data”)
• So while we think we are solving one problem we are actually
solving another
[Diagram: a region of problem space contains both the true problem (*, known) and the unknown problem actually solved (#); the computed solution is exact for the latter, while the true solution remains unknown.]
• Given an answer, determine how close the problem actually solved is to the given problem (see the sketch below).
• Report the solution and the input region.
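For a linear system Ax = b, the backward error has a standard computable form, sketched here in Matlab (our illustration, not from the slides; xhat and bwdErr are our names). It measures how much the data must be perturbed for the computed solution to be exact.

  A = rand(100);  b = rand(100, 1);
  xhat = A \ b;                          % computed solution
  r = b - A * xhat;                      % residual
  bwdErr = norm(r) / (norm(A) * norm(xhat) + norm(b))
  % a value near eps says: xhat exactly solves a problem very close to ours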
Testing for Error Propagation
• Use the computed solution in the original problem
• Use double or extended precision rather than single precision
• Rerun the problem with slightly modified (incorrect) data and look at the results (see the sketch below)
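The third test might look like this in Matlab (the matrix and the perturbation size are illustrative choices):

  A = rand(50) + 50 * eye(50);           % a reasonably well-conditioned matrix
  b = rand(50, 1);
  x1 = A \ b;
  x2 = (A + 1e-10 * randn(50)) \ b;      % slightly modified (incorrect) data
  norm(x2 - x1) / norm(x1)               % small: errors in the data stay small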
Well posed problems
• Hadamard postulated that for a problem to be "well-posed":
1. Solution must exist
2. It must be unique
3. Small changes to input data should cause small changes to
solution
• Many problems in science result in "ill-posed" problems.
  – Numerically it is common to have condition 3 violated (see the sketch below).
• Converting an ill-posed problem to a well-posed one is called regularization.
• We will discuss this later in the course when talking about optimization and the Singular Value Decomposition.
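A classic Matlab illustration of condition 3 failing (our example): the Hilbert matrix is so ill-conditioned that rounding errors in the data alone can destroy the solution.

  A = hilb(12);                % notoriously ill-conditioned
  b = A * ones(12, 1);         % constructed so the exact solution is all ones
  x = A \ b;
  cond(A)                      % on the order of 1e16
  max(abs(x - 1))              % can be large: every digit of x may be wrong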
Matrix-vector product
• Matrix-vector multiplication
        [ M11  M12  M13 ] [ vx ]
  M v = [ M21  M22  M23 ] [ vy ]
        [ M31  M32  M33 ] [ vz ]
• Recall how to do matrix multiplication
• How many operations does this matrix vector product
take?
• How many operations does a general matrix vector product
take?
Ways to implement a matrix vector product
• Access the matrix:
  – Element-by-element along rows
  – Element-by-element along columns
  – As column vectors
  – As row vectors
• Discuss advantages (see the sketch below)
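Two of these orderings, sketched in Matlab (the loop bodies are ours; either way an n-by-n product costs about 2n^2 flops). Row-wise access computes one dot product per row, while column-wise access accumulates a linear combination of the columns; since Matlab stores matrices column by column, the column version walks memory contiguously.

  n = 2000;  M = rand(n);  v = rand(n, 1);

  y1 = zeros(n, 1);                % row-oriented: y(i) = M(i,:) * v
  for i = 1:n
      y1(i) = M(i, :) * v;
  end

  y2 = zeros(n, 1);                % column-oriented: y = sum_j M(:,j) * v(j)
  for j = 1:n
      y2 = y2 + M(:, j) * v(j);
  end

  norm(y1 - M*v) + norm(y2 - M*v)  % both agree with the built-in product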
Memory Hierarchy
• Run the Matlab script matvec_class.m
• Report on results