Review III - Dec 5

ECIV 301
Programming & Graphics
Numerical Methods for Engineers
Topics
• Regression Analysis
– Linear Regression
– Linearized Regression
– Polynomial Regression
• Numerical Integration
– Newton-Cotes Formulas
– Trapezoidal Rule
– Simpson's Rules
– Gaussian Quadrature
• Numerical Differentiation
– Finite Difference Forms
• ODE – Initial Value Problems
– Runge-Kutta Methods
• ODE – Boundary Value Problems
– Finite Difference Method
Regression
Often we are faced with the problem…
    x         y
    0.924    -0.00388
    0.928    -0.00743
    0.93283   0.00569
    0.93875   0.00188
    0.94      0.01278

Question 1: What value of y corresponds to x = 0.935?
Curve Fitting

Question 2: Is it possible to find a simple and convenient formula that represents the data approximately, e.g. a best fit (approximation)?

[Plot: the data points above with a candidate best-fit curve]
[Plot: experimental stress-strain measurements]

BEST FIT CRITERIA

Fit a line to the stress (y) vs. strain (x) data:

    l(x) = a_0 + a_1 x

Error at each point:

    e_i = y_i - l(x_i) = y_i - a_0 - a_1 x_i
Best Fit => Minimize Error

Best strategy: minimize the sum of the squared errors,

    \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} ( y_{i,measured} - y_{i,model} )^2 = \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i )^2

Objective: what are the values of a_0 and a_1 that minimize \sum_{i=1}^{n} e_i^2 ?
Least Square Approximation

In our case,

    \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i )^2 = S_r(a_0, a_1)

Since the x_i and y_i are known from the given data, set the partial derivatives to zero:

    \partial S_r / \partial a_0 = -2 \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i ) = 0

    \partial S_r / \partial a_1 = -2 \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i ) x_i = 0
Least Square Approximation

Expanding the sums:

    0 = \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} a_0 - \sum_{i=1}^{n} a_1 x_i

    0 = \sum_{i=1}^{n} y_i x_i - \sum_{i=1}^{n} a_0 x_i - \sum_{i=1}^{n} a_1 x_i^2
Least Square Approximation

    n a_0 + a_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i

    a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i

2 Eqtns, 2 Unknowns
Least Square Approximation

    a_1 = ( n \sum x_i y_i - \sum x_i \sum y_i ) / ( n \sum x_i^2 - ( \sum x_i )^2 )

    a_0 = \bar{y} - a_1 \bar{x}

where

    \bar{x} = ( \sum_{i=1}^{n} x_i ) / n ,   \bar{y} = ( \sum_{i=1}^{n} y_i ) / n
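The closed-form normal-equation solution above can be sketched directly in Python. This is a minimal implementation; the sample data are assumed to be the classic seven-point data set behind the example that follows (it reproduces the reported fit y = 0.8393x + 0.0714).

```python
# Least-squares fit of a straight line y = a0 + a1*x using the
# closed-form solution of the two normal equations.
def linfit(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    a0 = sy / n - a1 * sx / n          # a0 = ybar - a1 * xbar
    return a0, a1

# Assumed data consistent with the example fit y = 0.8393x + 0.0714
x = [1, 2, 3, 4, 5, 6, 7]
y = [0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5]
a0, a1 = linfit(x, y)
```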
Example

[Plot: data (Series1) with fitted line y = 0.8393x + 0.0714]
Quantification of Error

Mean of the measured values:

    \bar{y} = ( \sum_{i=1}^{n} y_i ) / n = 24 / 7 \approx 3.43

[Plot: Exper 1 data with the average line]
Quantification of Error

Total sum of the squares of the residuals around the mean:

    S_t = \sum_{i=1}^{n} ( y_i - \bar{y} )^2

Standard deviation:

    s_y = \sqrt{ S_t / (n - 1) }

The standard deviation shows the spread around the mean value.
Quantification of Error

Sum of the squares of the residuals around the regression line:

    S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i )^2

"Standard Deviation" for linear regression (the standard error of the estimate):

    s_{y/x} = \sqrt{ S_r / (n - 2) }

[Plot: Exper 1 data with the fitted line y = 0.8393x + 0.0714]
Quantification of Error

Comparing the two measures of spread: S_t = \sum ( y_i - \bar{y} )^2 measures the spread of the data around the mean, while S_r = \sum ( y_i - a_0 - a_1 x_i )^2 measures the spread around the regression line y = 0.8393x + 0.0714. Here S_r < S_t, so the line is a better representation of the data (less spread).
Quantification of Error

Coefficient of Determination:

    r^2 = ( S_t - S_r ) / S_t

Correlation Coefficient:

    r = \sqrt{ ( S_t - S_r ) / S_t }
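The error measures above can be computed in a few lines. A sketch, assuming the same seven-point example data and the fitted coefficients a0 = 0.0714, a1 = 0.8393:

```python
# Goodness-of-fit measures for a linear fit:
# St = spread around the mean, Sr = spread around the line,
# sy = standard deviation, syx = standard error, r2 = coeff. of determination.
def fit_quality(x, y, a0, a1):
    n = len(y)
    ybar = sum(y) / n
    St = sum((yi - ybar) ** 2 for yi in y)
    Sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))
    sy = (St / (n - 1)) ** 0.5
    syx = (Sr / (n - 2)) ** 0.5
    r2 = (St - Sr) / St
    return St, Sr, sy, syx, r2

x = [1, 2, 3, 4, 5, 6, 7]
y = [0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5]
St, Sr, sy, syx, r2 = fit_quality(x, y, 0.07142857, 0.83928571)
```

Since r2 is close to 1 the line explains most of the variability of the data.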
Linearized Regression

The Exponential Equation:

    y = a_1 e^{b_1 x}

Taking the natural logarithm,

    ln y = ln( a_1 e^{b_1 x} ) = ln a_1 + b_1 x

which is linear in x:  ln y = A + B x,  with  A = ln a_1 ,  B = b_1.
Linearized Regression

The Power Equation:

    y = a_2 x^{b_2}

Taking log base 10,

    log_{10} y = log_{10}( a_2 x^{b_2} ) = log_{10} a_2 + b_2 log_{10} x

which is linear in log_{10} x:  log_{10} y = A + B (log_{10} x),  with  A = log_{10} a_2 ,  B = b_2.
Linearized Regression

The Saturation-Growth-Rate Equation:

    y = a_3 x / ( b_3 + x )

Inverting,

    1/y = ( b_3 / a_3 ) (1/x) + 1/a_3

which is linear in 1/x:  1/y = A + B (1/x),  with  A = 1/a_3 ,  B = b_3 / a_3.
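The linearization recipe above can be sketched for the exponential case: fit a straight line to (x, ln y), then recover a_1 = e^A and b_1 = B. The data here are made up for illustration (exact samples of y = 2 e^{0.7x}), so the fit recovers the coefficients exactly.

```python
import math

# Fit y = a1 * exp(b1 * x) by linear regression on (x, ln y).
def linfit(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    B = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    A = sy / n - B * sx / n
    return A, B

x = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [2.0 * math.exp(0.7 * xi) for xi in x]      # illustrative exponential data
A, B = linfit(x, [math.log(yi) for yi in y])    # regress ln y on x
a1, b1 = math.exp(A), B                         # back-transform: a1 = e^A, b1 = B
```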
Polynomial Regression

When a parabola is preferable:

    y = a_0 + a_1 x + a_2 x^2 + e

Minimize

    S_r(a_0, a_1, a_2) = \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i - a_2 x_i^2 )^2
Polynomial Regression

    \partial S_r / \partial a_0 = -2 \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i - a_2 x_i^2 ) = 0

    \partial S_r / \partial a_1 = -2 \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i - a_2 x_i^2 ) x_i = 0

    \partial S_r / \partial a_2 = -2 \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i - a_2 x_i^2 ) x_i^2 = 0
Polynomial Regression

    n a_0 + ( \sum x_i ) a_1 + ( \sum x_i^2 ) a_2 = \sum y_i

    ( \sum x_i ) a_0 + ( \sum x_i^2 ) a_1 + ( \sum x_i^3 ) a_2 = \sum x_i y_i

    ( \sum x_i^2 ) a_0 + ( \sum x_i^3 ) a_1 + ( \sum x_i^4 ) a_2 = \sum x_i^2 y_i

3 Eqtns, 3 Unknowns
Polynomial Regression

In matrix form:

    [ n            \sum x_i     \sum x_i^2 ] [ a_0 ]   [ \sum y_i       ]
    [ \sum x_i     \sum x_i^2   \sum x_i^3 ] [ a_1 ] = [ \sum x_i y_i   ]
    [ \sum x_i^2   \sum x_i^3   \sum x_i^4 ] [ a_2 ]   [ \sum x_i^2 y_i ]

Use any of the methods we learned to solve this system.
Polynomial Regression

With a_0, a_1, a_2 known, the total error is

    S_r(a_0, a_1, a_2) = \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i - a_2 x_i^2 )^2

Standard Error:

    s_{y/x} = \sqrt{ S_r / (n - 3) }

Coefficient of Determination:

    r^2 = ( S_t - S_r ) / S_t
Polynomial Regression

For a polynomial of order m:

    S_r(a_0, a_1, ..., a_m) = \sum_{i=1}^{n} ( y_i - a_0 - a_1 x_i - ... - a_m x_i^m )^2

Standard Error:

    s_{y/x} = \sqrt{ S_r / ( n - (m + 1) ) }

Coefficient of Determination:

    r^2 = ( S_t - S_r ) / S_t
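The 3x3 normal-equation system above can be assembled and solved with Gaussian elimination, one of the methods we learned. A sketch with made-up data (exact samples of y = 1 + 2x + 3x^2, so the fit should recover the coefficients):

```python
# Quadratic least-squares fit via the 3x3 normal equations,
# solved with Gaussian elimination with partial pivoting.
def polyfit2(x, y):
    s = [sum(xi ** k for xi in x) for k in range(5)]       # sums of x^0 .. x^4
    b = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    A = [[s[r + c] for c in range(3)] for r in range(3)]   # A[r][c] = sum x^(r+c)
    for k in range(3):                                     # forward elimination
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    a = [0.0] * 3                                          # back substitution
    for k in range(2, -1, -1):
        a[k] = (b[k] - sum(A[k][c] * a[c] for c in range(k + 1, 3))) / A[k][k]
    return a   # [a0, a1, a2]

x = [0, 1, 2, 3, 4, 5]
y = [1 + 2 * xi + 3 * xi ** 2 for xi in x]   # illustrative parabola data
a0, a1, a2 = polyfit2(x, y)
```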
Numerical Integration & Differentiation

Motivation

The slope over a finite step:

    \Delta y / \Delta x = [ f(x_i + \Delta x) - f(x_i) ] / \Delta x

In the limit this becomes the derivative:

    dy/dx = \lim_{\Delta x \to 0} [ f(x_i + \Delta x) - f(x_i) ] / \Delta x

The definite integral is the area between a and b:

    I = \int_a^b f(x) dx

For example, velocity is the derivative of position,

    v(t) = d y(t) / dt

and position is the integral of velocity,

    y(t) = \int_a^b v(t) dt

Given tabulated data, we often need to calculate the derivative, or to calculate the integral. Think as Engineers!
In Summary

INTERPOLATE

Newton-Cotes Formulas

Replace a complicated function or tabulated data with an approximating function that is easy to integrate:

    I = \int_a^b f(x) dx \approx \int_a^b f_n(x) dx

    f_n(x) = a_0 + a_1 x + ... + a_{n-1} x^{n-1} + a_n x^n

Also by piecewise approximation:

    I = \int_a^b f(x) dx = \sum_i \int_{x_i}^{x_{i+1}} f(x) dx

Closed/Open Forms: closed formulas use data points at both ends of the integration interval; open formulas have integration limits that extend beyond the range of the data.
Trapezoidal Rule

Linear interpolation; the local truncation error per segment is -(h^3 / 12) f''(\xi), i.e. O(h^3).

Multiple Application: for equally spaced points a = x_0, x_1, x_2, ..., x_{n-1}, x_n = b with tabulated values f(x_0), f(x_1), f(x_2), ..., f(x_{n-1}), f(x_n):

    I \approx (b - a) [ f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n) ] / (2n)
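The multiple-application formula above translates directly into code:

```python
# Composite trapezoidal rule: (b-a) * [f(x0) + 2*sum(f(xi)) + f(xn)] / (2n)
def trapz(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += 2 * f(a + i * h)
    return (b - a) * s / (2 * n)

# Example: integrate x^2 on [0, 1]; exact value is 1/3.
approx = trapz(lambda x: x * x, 0.0, 1.0, 100)
```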
Simpson's 1/3 Rule

Quadratic interpolation:

    f_2(x) = a_0 + a_1 x + a_2 x^2

Local truncation error is -(h^5 / 90) f^{(4)}(\xi), i.e. O(h^5).

Simpson's 3/8 Rule

Cubic interpolation:

    f_3(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3

Local truncation error is -(3 h^5 / 80) f^{(4)}(\xi), i.e. O(h^5).
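A sketch of the composite Simpson's 1/3 rule, using the familiar 1, 4, 2, 4, ..., 4, 1 weighting:

```python
# Composite Simpson's 1/3 rule; n (number of segments) must be even.
def simpson13(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)   # 4 at odd nodes, 2 at even
    return h * s / 3

# Example: integrate x^3 on [0, 2]; exact value is 4.
# Since the error involves f^(4), Simpson's rule is exact for cubics.
approx = simpson13(lambda x: x ** 3, 0.0, 2.0, 4)
```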
Gauss Quadrature

    I \approx w_1 f(x_1) + w_2 f(x_2)

General case: the Gauss method calculates pairs of w_i, x_i for the integration limits [-1, 1]:

    I = \int_{-1}^{1} f(x) dx \approx w_1 f(x_1) + w_2 f(x_2)

For other integration limits, use a transformation:

    x = a_0 + a_1 x_G

For x_G = -1, x = a:   a = a_0 - a_1
For x_G = 1, x = b:    b = a_0 + a_1

so

    a_0 = (b + a) / 2 ,   a_1 = (b - a) / 2
Gauss Quadrature

With

    x = (b + a)/2 + (b - a) x_G / 2 ,   dx = [ (b - a)/2 ] dx_G

the integral becomes

    I = \int_a^b f(x) dx = [ (b - a)/2 ] \int_{-1}^{1} f(x_G) dx_G \approx [ (b - a)/2 ] \sum_{i=1}^{n} w_i f(x_i)
Gaussian Points

    Points | Weighting Factors w_i | Function Arguments x_i | Error
    2      | w_0 = 1.0000000       | x_0 = -0.577350269     | f^(4)(x)
           | w_1 = 1.0000000       | x_1 =  0.577350269     |
    3      | w_0 = 0.5555556       | x_0 = -0.774596669     | f^(6)(x)
           | w_1 = 0.8888888       | x_1 =  0.0             |
           | w_2 = 0.5555556       | x_2 =  0.774596669     |
    4      | w_0 = 0.3478548       | x_0 = -0.861136312     | f^(8)(x)
           | w_1 = 0.6521452       | x_1 = -0.339981044     |
           | w_2 = 0.6521452       | x_2 =  0.339981044     |
           | w_3 = 0.3478548       | x_3 =  0.861136312     |
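The two-point rule from the table, combined with the interval transformation above, gives a compact integrator:

```python
# Two-point Gauss-Legendre quadrature on [a, b]:
# x = (b+a)/2 + (b-a)/2 * xG with tabulated points xG = +/- 0.577350269.
def gauss2(f, a, b):
    xg = 0.577350269189626                 # 1/sqrt(3)
    mid, half = (b + a) / 2, (b - a) / 2
    return half * (f(mid - half * xg) + f(mid + half * xg))

# Example: integrate x^3 + x on [0, 2]; exact value is 6.
# The error involves f^(4), so the 2-point rule is exact up to cubics.
approx = gauss2(lambda x: x ** 3 + x, 0.0, 2.0)
```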
Gaussian Quadrature

Not a good method if the function is not available (e.g. only tabulated data).

FORWARD FINITE DIFFERENCE (Fig 23.1)
BACKWARD FINITE DIFFERENCE (Fig 23.2)
CENTERED FINITE DIFFERENCE (Fig 23.3)
Data with Errors
ODE
IVP, BVP

Pendulum (weight W = mg):

    m (d^2\theta/dt^2) + m g (\sin\theta) / l = 0

Dividing by m gives an Ordinary Differential Equation:

    d^2\theta/dt^2 + (g/l) \sin\theta = 0

ODEs

    d^2\theta/dt^2 + (g/l) \sin\theta = 0        Non Linear

Linearization: assume \theta is small, so \sin\theta \approx \theta:

    d^2\theta/dt^2 + (g/l) \theta = 0
2
ODEs
d  g



0


2
dt
l
2
Second Order
Systems of ODEs
d
y
dt
dy  g 
    0
dt  l 
ODE

    y = -0.5 x^4 + 4 x^3 - 10 x^2 + 8.5 x + 1

    dy/dx = -2 x^3 + 12 x^2 - 20 x + 8.5

ODE - OBJECTIVES

Given dy/dx, integrating gives

    y = \int ( -2 x^3 + 12 x^2 - 20 x + 8.5 ) dx = -0.5 x^4 + 4 x^3 - 10 x^2 + 8.5 x + C

with C undetermined. The Initial Condition y(0) = 1 fixes C = 1:

    y = -0.5 x^4 + 4 x^3 - 10 x^2 + 8.5 x + 1

ODE-Objectives

Given

    dy/dx = f(x, y) ,   f(0, y) known (I.C.)

Calculate y(x).
Runge-Kutta Methods

New Value = Old Value + Slope x Step Size

    y_{i+1} = y_i + \phi h

The definition of the slope \phi yields the different Runge-Kutta methods.

Euler's Method

    dy/dx = f(x, y)

    y_{i+1} = y_i + \phi h   with   \phi = f(x_i, y_i)
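Euler's method is a one-line update. A sketch applied to the slide's example, dy/dx = -2x^3 + 12x^2 - 20x + 8.5 with y(0) = 1, marching from x = 0 to x = 4 with h = 0.5:

```python
# Euler's method: y_{i+1} = y_i + f(x_i, y_i) * h
def euler(f, x0, y0, h, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        y += f(x, y) * h
        x += h
    return y

f = lambda x, y: -2 * x ** 3 + 12 * x ** 2 - 20 * x + 8.5
y_at_4 = euler(f, 0.0, 1.0, 0.5, 8)   # exact solution gives y(4) = 3
```

With this large step size the Euler result overshoots the exact value considerably, illustrating the O(h) accuracy.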
Sources of Error

• Roundoff: limited number of significant digits
• Truncation: caused by discretization
– Local Truncation
– Propagated Truncation

[Plot: local and propagated truncation error in Euler's Method]
Heun's Method

A predictor-corrector method; the solution takes 2 steps.

Predictor: let \phi = f(x_i, y_i) and predict

    y^0_{i+1} = y_i + f(x_i, y_i) h

Corrector: estimate the slope at the end of the interval, f( x_{i+1}, y^0_{i+1} ), let

    \phi = [ f(x_i, y_i) + f( x_{i+1}, y^0_{i+1} ) ] / 2

and correct:

    y_{i+1} = y_i + \phi h
Error in Heun’s Method
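The predictor-corrector steps above can be sketched as follows, using the same example ODE as before:

```python
# Heun's method: Euler predictor, then correct with the average slope.
def heun(f, x0, y0, h, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        slope1 = f(x, y)
        y_pred = y + slope1 * h           # predictor (Euler step)
        slope2 = f(x + h, y_pred)         # slope at the end of the interval
        y += h * (slope1 + slope2) / 2    # corrector: average slope
        x += h
    return y

f = lambda x, y: -2 * x ** 3 + 12 * x ** 2 - 20 * x + 8.5
y_at_4 = heun(f, 0.0, 1.0, 0.5, 8)   # exact solution gives y(4) = 3
```

For this right-hand side (which depends only on x) Heun's method reduces to the trapezoidal rule applied per step.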
The Mid-Point Method

    y_{i+1} = y_i + \phi h

Remember: the definition of \phi yields the different Runge-Kutta methods. The mid-point method is also a 2-step predictor-corrector.

Predictor: with slope f(x_i, y_i), predict the value at the midpoint of the interval,

    y_{i+1/2} = y_i + f(x_i, y_i) h/2

Corrector: estimate the slope at the midpoint, let

    \phi = f( x_{i+1/2}, y_{i+1/2} )

and correct:

    y_{i+1} = y_i + \phi h
Runge Kutta – 2nd Order

    dy/dx = f(x, y) ,   f(0, y) known (I.C.)

    y_{i+1} = y_i + \phi h

with

    \phi = (1/3) k_1 + (2/3) k_2

    k_1 = f( x_i, y_i )

    k_2 = f( x_i + (3/4) h , y_i + (3/4) k_1 h )

(these coefficients correspond to Ralston's version of the 2nd-order method).
Runge Kutta – 3rd Order

    dy/dx = f(x, y) ,   f(0, y) known (I.C.)

    y_{i+1} = y_i + \phi h

    \phi = (1/6) ( k_1 + 4 k_2 + k_3 )

    k_1 = f( x_i, y_i )

    k_2 = f( x_i + (1/2) h , y_i + (1/2) k_1 h )

    k_3 = f( x_i + h , y_i - k_1 h + 2 k_2 h )
Runge Kutta – 4th Order

    dy/dx = f(x, y) ,   f(0, y) known (I.C.)

    y_{i+1} = y_i + \phi h

    \phi = (1/6) ( k_1 + 2 k_2 + 2 k_3 + k_4 )

    k_1 = f( x_i, y_i )

    k_2 = f( x_i + (1/2) h , y_i + (1/2) k_1 h )

    k_3 = f( x_i + (1/2) h , y_i + (1/2) k_2 h )

    k_4 = f( x_i + h , y_i + k_3 h )
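The classical 4th-order formulas translate directly into code; again the same example ODE is used:

```python
# Classical 4th-order Runge-Kutta, implementing the k1..k4 formulas above.
def rk4(f, x0, y0, h, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + k1 * h / 2)
        k3 = f(x + h / 2, y + k2 * h / 2)
        k4 = f(x + h, y + k3 * h)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# dy/dx = -2x^3 + 12x^2 - 20x + 8.5, y(0) = 1; exact solution gives y(4) = 3.
# For an f that depends only on x, RK4 reduces to Simpson's rule per step,
# which is exact for this cubic right-hand side.
f = lambda x, y: -2 * x ** 3 + 12 * x ** 2 - 20 * x + 8.5
y_at_4 = rk4(f, 0.0, 1.0, 0.5, 8)
```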
Boundary Value Problems

CENTERED FINITE DIFFERENCE (Fig 23.3)

Discretize the domain into nodes x_0, x_1, x_2, x_3, ..., x_{n-1}, x_n and write the centered difference equation at each interior node:

    y_2 - 2 y_1 + y_0 = h^2 f( x_1, y_1 )

    y_3 - 2 y_2 + y_1 = h^2 f( x_2, y_2 )

    y_4 - 2 y_3 + y_2 = h^2 f( x_3, y_3 )

    ...

    y_n - 2 y_{n-1} + y_{n-2} = h^2 f( x_{n-1}, y_{n-1} )

Collect the equations; y_0 and y_n are known from the BOUNDARY CONDITIONS.
Example

A rod with interior nodes x_1, x_2, x_3, x_4 and known boundary temperatures T_0 and T_5:

    d^2 T / dx^2 + c ( T_a - T ) = 0

Applying the centered difference at each interior node:

    ( T_2 - 2 T_1 + T_0 ) / h^2 + c ( T_a - T_1 ) = 0   =>   -T_2 + T_1 ( 2 + c h^2 ) = T_0 + c h^2 T_a

    ( T_3 - 2 T_2 + T_1 ) / h^2 + c ( T_a - T_2 ) = 0   =>   -T_3 + T_2 ( 2 + c h^2 ) - T_1 = c h^2 T_a

    ( T_4 - 2 T_3 + T_2 ) / h^2 + c ( T_a - T_3 ) = 0   =>   -T_4 + T_3 ( 2 + c h^2 ) - T_2 = c h^2 T_a

    ( T_5 - 2 T_4 + T_3 ) / h^2 + c ( T_a - T_4 ) = 0   =>   T_4 ( 2 + c h^2 ) - T_3 = T_5 + c h^2 T_a

In the first and last equations the known boundary values T_0 and T_5 move to the right-hand side.
Example

Collecting the four equations in matrix form:

    [ 2 + c h^2   -1            0            0         ] [ T_1 ]   [ c h^2 T_a + T_0 ]
    [ -1           2 + c h^2   -1            0         ] [ T_2 ] = [ c h^2 T_a       ]
    [ 0           -1            2 + c h^2   -1         ] [ T_3 ]   [ c h^2 T_a       ]
    [ 0            0           -1            2 + c h^2 ] [ T_4 ]   [ c h^2 T_a + T_5 ]
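The tridiagonal system above can be solved efficiently with the Thomas algorithm. A sketch, with parameter values (c = 0.01, T_a = 20, T_0 = 40, T_5 = 200, h = 2) chosen for illustration; the slides do not specify numbers, so treat these as assumptions:

```python
# Solve the finite-difference heat equations above with the Thomas
# algorithm: sub/super-diagonals are -1, diagonal is 2 + c*h^2.
def solve_rod(c, Ta, T0, T5, h, n=4):
    d = 2 + c * h * h                  # diagonal entry 2 + c h^2
    r = [c * h * h * Ta] * n           # right-hand side c h^2 Ta
    r[0] += T0                         # boundary values fold into the RHS
    r[-1] += T5
    cp, dp = [0.0] * n, [0.0] * n      # forward sweep
    cp[0], dp[0] = -1.0 / d, r[0] / d
    for i in range(1, n):
        m = d - (-1.0) * cp[i - 1]
        cp[i] = -1.0 / m
        dp[i] = (r[i] - (-1.0) * dp[i - 1]) / m
    T = [0.0] * n                      # back substitution
    T[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        T[i] = dp[i] - cp[i] * T[i + 1]
    return T

T1, T2, T3, T4 = solve_rod(0.01, 20.0, 40.0, 200.0, 2.0)
```

The resulting interior temperatures increase monotonically between the two boundary values, as expected physically.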