
LECTURE NOTES
LECTURE NOTE 1
INTRODUCTION:
Numerical Analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, the Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite-precision arithmetic. Examples include Gaussian elimination. In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has been found. Even using infinite-precision arithmetic these methods would not reach the solution within a finite number of steps. Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Numerical methods are methods for solving problems numerically on a computer or calculator.
In decimal notation, every real number is represented by a finite or infinite sequence of decimal digits. For machine computation the number must be replaced by a number of finitely many digits.
LECTURE NOTE 2
Most digital computers have two ways of representing numbers, called fixed point and floating point.
In a fixed-point system all numbers are given with a fixed number of decimal places, e.g. 56.358, 0.008, 2.000.
In a floating-point system the number of significant digits is kept fixed but the decimal point is floating, e.g. 0.62398x10^4, 0.2417x10^-12.
A significant digit of a number c is any given digit of c, except possibly for zeros to the left of the first nonzero digit that serve only to fix the position of the decimal point.
Error is the difference between the true value and the approximate value. There are 3 types of errors:
i) Absolute error: if a is an approximate value of a number x, then error = x - a is known as the absolute error.
ii) Relative error = absolute error / true value.
iii) Percent error = relative error x 100.
Rounding-off rule: discard the (k+1)th and all subsequent decimals.
a) If the number thus discarded is less than half a unit (5) in the kth place, leave the kth decimal unchanged (rounding down).
b) If it is greater than half a unit (5) in the kth place, add 1 to the kth decimal (rounding up).
c) If it is exactly half a unit (5), round off to the nearest even decimal.
Absolute and Relative Errors:
Absolute Error: Suppose that x_T and x_A denote the true and approximate values of a datum. Then the error incurred in approximating x_T by x_A is
    e = x_T - x_A,
and the absolute error, i.e. the magnitude of the error, is given by
    e_a = |x_T - x_A|.
Relative Error: The relative error (or normalized error) e_r in representing a true datum x_T by an approximate value x_A is defined by
    e_r = (x_T - x_A) / x_T.
Sometimes e_r is defined by
    e_r = |x_T - x_A| / |x_A|.
LECTURE NOTE 3
LECTURE NOTE 4
Numerical Errors:
Numerical errors arise during computations due to round-off errors and
truncation errors.
Round-off Errors:
Round-off error occurs because computers use a fixed number of bits, and hence a fixed number of binary digits, to represent numbers. In a numerical computation round-off errors are introduced at every stage of the computation. Hence, though an individual round-off error at a given step may be small, the cumulative effect can be significant.
When the number of bits available for representing a number is too small, the number is usually rounded to fit the available number of bits. This is done either by chopping or by symmetric rounding.
Chopping: Rounding a number by chopping amounts to dropping the extra digits; the given number is simply truncated. For instance, on a computer with a fixed word length of four digits, all digits after the fourth are dropped. To evaluate the error due to chopping, consider the normalized representation of the true value x with d digits retained:
    x = f_x * 10^E + g_x * 10^(E-d),   0.1 <= f_x < 1,  0 <= g_x < 1.
The chopped (machine) number is f_x * 10^E, so the chopping error in representing x is
    g_x * 10^(E-d).
Since 0 <= g_x < 1, the chopping error is less than 10^(E-d).
Symmetric Round-off Error:
In the symmetric round-off method the last retained significant digit is rounded up by 1 if the first discarded digit is greater than or equal to 5. In other words, if g_x in the normalized representation above satisfies g_x >= 0.5, then 1 is added to the last retained digit before chopping; otherwise the number is simply chopped. In either case the error satisfies
    |error| <= (1/2) * 10^(E-d),
i.e. symmetric rounding commits at most half the worst-case chopping error.
Truncation Errors:
Often an approximation is used in place of an exact mathematical procedure. For instance, consider the Taylor series expansion of sin x:
    sin x = x - x^3/3! + x^5/5! - x^7/7! + ...
Practically we cannot use all of the infinite number of terms in the series for computing the sine of angle x. We usually terminate the process after a certain number of terms. The error that results from such a termination or truncation is called 'truncation error'.
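The effect of truncation can be seen numerically. Here is a minimal Python sketch (the function name `sin_taylor` and the sample point are our own choices) that keeps a chosen number of series terms and reports the truncation error:

```python
import math

def sin_taylor(x, n_terms):
    """Approximate sin(x) by the first n_terms of its Taylor series."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

# The truncation error shrinks as more terms are retained.
x = 1.0
for n in (1, 2, 3, 5):
    approx = sin_taylor(x, n)
    print(n, approx, abs(math.sin(x) - approx))
```

With five terms the error at x = 1 is already below 10^-6, illustrating how fast the truncated series converges for small arguments.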
LECTURE NOTE 5
ITERATION METHOD:
To solve the equation f(x) = 0 when there is no formula for the exact solution, we can use an approximation method, in particular an iteration method, i.e. a method in which we start from an initial approximation x0 and compute step-by-step approximations x1, x2, ... of an unknown solution of f(x) = 0.
In general, iteration methods are easy to program because the computational operations are the same in each step; just the data change from step to step. There are various methods of solving an equation by iteration.
BISECTION METHOD: If a function f(x) is continuous in the closed interval [a,b] and f(a) and f(b) are of opposite signs, i.e. f(a)f(b) < 0, then there exists at least one root of f(x) = 0 between a and b. Let f(a)f(b) < 0 and let h = (a+b)/2 be the middle point of [a,b]. If f(h) = 0 then h is a root of f(x) = 0. If f(h) != 0 then either f(a)f(h) < 0 or f(b)f(h) < 0.
If f(a)f(h) < 0 then the root lies in (a,h), and if f(b)f(h) < 0 then the root lies in (h,b). In this way the interval [a,b] is reduced to (a,h) or (h,b). The process is repeated until the root is obtained to the desired accuracy.
LECTURE NOTE 6
EXAMPLE: Find a root of x3-4x-9=0 by Bisection Method correct upto
3 decimal places.
Solution: Now f(1) = -12 < 0, f(2) = -9 < 0, f(3) = 6 > 0.
Since f(2)f(3) < 0, a root lies between 2 and 3.
We take x0 = (2+3)/2 = 2.5.
f(2.5) = -3.375 < 0, so a root lies between 2.5 and 3.
x1 = (2.5+3)/2 = 2.75.
In this way we proceed to find the root to the desired accuracy.
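The repeated halving described above can be sketched in Python (the function name `bisection` and the tolerance are our own choices):

```python
def bisection(f, a, b, tol=1e-6):
    """Repeatedly halve [a, b], keeping the half where f changes sign."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        h = (a + b) / 2          # midpoint of the current bracket
        if f(h) == 0:
            return h
        if f(a) * f(h) < 0:      # root lies in (a, h)
            b = h
        else:                    # root lies in (h, b)
            a = h
    return (a + b) / 2

# Root of x^3 - 4x - 9 = 0 between 2 and 3, as in the example above.
root = bisection(lambda x: x**3 - 4*x - 9, 2, 3)
print(round(root, 3))
```

Each pass halves the bracket, so about 20 iterations suffice for six decimal places on an interval of length 1.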
LECTURE NOTE 7
FIXED POINT ITERATION METHOD:
In this method we start with a number x0 and a continuous function g(x) and compute x1 = g(x0), x2 = g(x1), ...
In general, xn+1 = g(xn) ............(1)
The numbers x1, x2, x3, ..., xn+1 are called iterates and the function g is called an iteration function.
If the sequence x1, x2, x3, ..., xn+1 converges to a, then
    a = lim(n->inf) g(xn-1) = g(lim(n->inf) xn-1) = g(a), since g is continuous.
Therefore if the sequence {x1, x2, x3, ..., xn+1} is convergent then it converges to a fixed point of the iteration function.
The method of generating successive iterates with the help of an arbitrary number x0 and a function g is known as fixed point iteration.
Problem:  Find a root of x4-x-10 = 0
Consider g1(x) = 10 / (x3-1) and the fixed point iterative scheme xi+1=10 / (xi3 1), i = 0, 1, 2, . . .let the initial guess x0 be 2.0
i 0
1
2
3
4
5
6
7
8
xi 2 1.429 5.214 0.071 -10.004 -9.978E-3 -10 -9.99E-3 -10
So the iterative process with g1 gone into an infinite loop without converging.
Consider another function g2(x) = (x + 10)^(1/4) and the fixed point iterative scheme
    xi+1 = (xi + 10)^(1/4), i = 0, 1, 2, ...
Let the initial guess x0 be 1.0, 2.0 and 4.0.

i    0    1        2        3        4        5        6
xi   1.0  1.82116  1.85424  1.85553  1.85558  1.85558
xi   2.0  1.861    1.8558   1.85559  1.85558  1.85558
xi   4.0  1.93434  1.85866  1.8557   1.85559  1.85558  1.85558

That is, for g2 the iterative process converges to 1.85558 with any of these initial guesses.
Consider g3(x) = (x + 10)^(1/2)/x and the fixed point iterative scheme
    xi+1 = (xi + 10)^(1/2)/xi, i = 0, 1, 2, ...
Let the initial guess x0 be 1.8.

i    0    1       2        3        4        5        6        ...  98
xi   1.8  1.9084  1.80825  1.90035  1.81529  1.89355  1.82129  ...  1.8555

Here the iterates oscillate and the process converges only very slowly, after about 98 iterations.
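The scheme xn+1 = g(xn) can be sketched in a few lines of Python (the function name `fixed_point` and the stopping rule are our own choices); with g2 it reproduces the convergent behaviour in the tables above:

```python
def fixed_point(g, x0, tol=1e-6, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x                      # may not have converged (e.g. with g1)

# g2(x) = (x + 10)^(1/4) converges from x0 = 1, 2 or 4.
root = fixed_point(lambda x: (x + 10) ** 0.25, 2.0)
print(root)
```

The choice of iteration function matters: g2 is a contraction near the root (|g2'| is small there), while g1 is not, which is why g1 oscillates without converging.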
LECTURE NOTE 8
NEWTON RAPHSON METHOD:
To find a simple (non-repeated) real root of the equation f(x) = 0 we may use the N-R method.
Let x0 be an approximate value of the desired root and h a small correction to it, so that x1 = x0 + h is a root of the given equation:
    f(x0 + h) = 0.
By Taylor's series expansion,
    f(x0 + h) = f(x0) + h f'(x0) + h^2 f''(x0)/2 + ...
Since h is very small, neglecting higher powers of h we get
    f(x0) + h f'(x0) = 0,  so  h = -f(x0)/f'(x0).
Now x1 = x0 - f(x0)/f'(x0).
Taking successive approximations x2, x3, ..., xn+1 we get
    xn+1 = xn - f(xn)/f'(xn).
Examples
Square root of a number
Consider the problem of finding the square root of a number. Newton's method is one of many methods of computing square roots.
For example, if one wishes to find the square root of 612, this is equivalent to finding the solution to
    x^2 = 612.
The function to use in Newton's method is then f(x) = x^2 - 612, with derivative f'(x) = 2x.
With an initial guess of 10, the sequence given by Newton's method is approximately
    x0 = 10, x1 = 35.6, x2 = 26.3955, x3 = 24.7906, x4 = 24.7386, x5 = 24.7386.
With only a few iterations one can obtain a solution accurate to many decimal places (sqrt(612) = 24.7386...).
Solution of cos(x) = x^3
Consider the problem of finding the positive number x with cos(x) = x^3. We can rephrase that as finding the zero of f(x) = cos(x) - x^3. We have f'(x) = -sin(x) - 3x^2. Since cos(x) <= 1 for all x and x^3 > 1 for x > 1, we know that our solution lies between 0 and 1. We try a starting value of x0 = 0.5. (Note that a starting value of 0 will lead to an undefined result, showing the importance of using a starting point that is close to the solution.) The iterates are approximately
    x0 = 0.5, x1 = 1.11214, x2 = 0.90967, x3 = 0.86726, x4 = 0.86548, x5 = 0.86547, x6 = 0.86547,
converging to 0.865474. In particular, x6 is correct to the decimal places shown. The number of correct digits after the decimal point roughly doubles at each step, illustrating the quadratic convergence.
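Both examples above can be run with the same short Python sketch of the N-R iteration (the function name `newton` and the tolerance are our own choices):

```python
import math

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Square root of 612: solve x^2 - 612 = 0 starting from 10.
print(newton(lambda x: x**2 - 612, lambda x: 2 * x, 10.0))

# Positive solution of cos(x) = x^3, starting from 0.5.
print(newton(lambda x: math.cos(x) - x**3,
             lambda x: -math.sin(x) - 3 * x**2, 0.5))
```

Note the starting value 0 for the second problem would give f'(0) = 0 and a division by zero, matching the caution in the text.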
LECTURE NOTE 9
CONVERGENCE OF N-R METHOD:
Expanding f(x) about the nth iterate xn in Taylor's series,
    f(x) = f(xn) + (x - xn) f'(xn) + (x - xn)^2 f''(c)/2 ............(1)
where c lies between x and xn, and [a,b] is the interval containing the root a and the iterates xn, n = 0, 1, 2, ...
For x = a, eq. (1) reduces to
    0 = f(xn) + (a - xn) f'(xn) + (a - xn)^2 f''(c)/2,
i.e., with en = a - xn,
    0 = f(xn) + en f'(xn) + en^2 f''(c)/2,
so
    en + f(xn)/f'(xn) = -en^2 f''(c)/2f'(xn) ..........(2)
But
    en+1 = a - xn+1 = a - [xn - f(xn)/f'(xn)] = (a - xn) + f(xn)/f'(xn) = en + f(xn)/f'(xn) ..........(3)
From eqs (2) and (3),
    en+1 = -en^2 f''(c)/2f'(xn),
i.e. the error at each step is proportional to the square of the previous error, so the N-R method converges quadratically.
LECTURE NOTE 10
REGULA FALSI METHOD:
This method requires knowing the approximate location of the root in the given interval (a,b). Let the values of the function at the points a = x0 and b = x1, namely f(x0) and f(x1), be of opposite signs, so that one root must lie between these two points. Now we replace the curve y = f(x) by the equation of the chord joining [x0, f(x0)] and [x1, f(x1)]:
    y - f(x0) = [f(x1) - f(x0)]/(x1 - x0) * (x - x0) .........(1)
The point where the chord (1) cuts the x-axis is obtained by putting y = 0 in (1). So from (1),
    -f(x0) = [f(x1) - f(x0)]/(x1 - x0) * (x - x0)
    [f(x1) - f(x0)] (x - x0) = -f(x0)(x1 - x0)
    x = x0 - f(x0)(x1 - x0)/[f(x1) - f(x0)] .........(2)
This value is taken as the second approximation:
    x2 = x0 - f(x0)(x1 - x0)/[f(x1) - f(x0)].
The iteration formula is
    xn+1 = xn-1 - f(xn-1)(xn - xn-1)/[f(xn) - f(xn-1)].
The convergence process in the bisection method is very slow. It depends only on the choice of end points of the interval [a,b]. The function f(x) does not have any role in finding the point c (which is just the mid-point of a and b); it is used only to decide the next smaller interval [a,c] or [c,b]. A better approximation to c can be obtained by taking the straight line L joining the points (a, f(a)) and (b, f(b)) and intersecting it with the x-axis. To obtain the value of c we can equate the two expressions for the slope m of the line L.
    m = [f(b) - f(a)]/(b - a) = [0 - f(b)]/(c - b)
    => (c - b) * (f(b) - f(a)) = -(b - a) * f(b)
    => c = b - f(b)(b - a)/[f(b) - f(a)]
Now the next smaller interval which brackets the root can be obtained by checking
    f(a) * f(c) < 0  then b = c,
                > 0  then a = c,
                = 0  then c is the root.
Selecting c by the above expression is called the Regula-Falsi method or false position method.
Find the root of tan(x) - x - 1 = 0.
The graph of this equation is given in the figure. Let a = 0.5 and b = 1.5.

Iteration No.   a       b     c       f(a) * f(c)
1               0.5     1.5   0.576   0.8836   (+ve)
2               0.576   1.5   0.644   0.8274   (+ve)
3               0.644   1.5   0.705   0.762    (+ve)
4               0.705   1.5   0.76    0.692    (+ve)
5               0.76    1.5   0.808   0.616    (+ve)
6               0.808   1.5   0.851   0.541    (+ve)
...
33              1.128   1.5   1.129   1.859E-4 (+ve)
34              1.129   1.5   1.129   2.947E-6 (+ve)

So after 34 iterations an approximate root of tan(x) - x - 1 = 0 is 1.129. Note that the endpoint b = 1.5 never moves: the one-sided iteration is still creeping slowly toward the root (approximately 1.132), which illustrates how slow false position can be when one endpoint stays fixed.
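The whole procedure can be sketched in Python (the function name `regula_falsi`, tolerance and iteration cap are our own choices); run on tan(x) - x - 1 = 0 it reproduces the slow one-sided convergence tabulated above:

```python
import math

def regula_falsi(f, a, b, tol=1e-6, max_iter=500):
    """False position: intersect the chord through (a,f(a)), (b,f(b)) with the x-axis."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # chord-x-axis intersection
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                    # root in (a, c)
            b, fb = c, fc
        else:                              # root in (c, b)
            a, fa = c, fc
    return c

root = regula_falsi(lambda x: math.tan(x) - x - 1, 0.5, 1.5)
print(root)
```

Because f is steep near b = 1.5 here, the right endpoint never updates and convergence is only linear; many more iterations are needed than for Newton's method on the same problem.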
LECTURE NOTE 11
MULLER METHOD:
This method takes an approach similar to the secant method but projects a parabola through 3 points. It consists of deriving the coefficients of the parabola that goes through the 3 points. These coefficients can then be substituted into the quadratic formula to obtain the point where the parabola intercepts the x-axis, i.e. the root estimate. The approach is facilitated by writing the parabolic equation in a convenient form:
    f2(x) = a(x - x2)^2 + b(x - x2) + c
    f(x0) = a(x0 - x2)^2 + b(x0 - x2) + c
    f(x1) = a(x1 - x2)^2 + b(x1 - x2) + c
    f(x2) = a(x2 - x2)^2 + b(x2 - x2) + c
So
    c = f(x2).
To solve for the remaining coefficients a and b, algebraic manipulation can be used. Let us define a number of differences:
    h0 = x1 - x0,  h1 = x2 - x1,
    d0 = [f(x1) - f(x0)]/(x1 - x0),  d1 = [f(x2) - f(x1)]/(x2 - x1).
Then a = (d1 - d0)/(h1 + h0), b = a*h1 + d1, c = f(x2).
To find the root we use the formula
    x3 = x2 + (-2c)/(b +- sqrt(b^2 - 4ac)),
where the sign is chosen so that the denominator is larger in magnitude (giving the root of the parabola closer to x2).
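A minimal Python sketch of the iteration follows (the function name `muller` is ours; complex arithmetic is used so the square root is always defined):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=50):
    """Fit a parabola through the last three iterates and step to its nearest root."""
    for _ in range(max_iter):
        h0, h1 = x1 - x0, x2 - x1
        d0 = (f(x1) - f(x0)) / h0
        d1 = (f(x2) - f(x1)) / h1
        a = (d1 - d0) / (h1 + h0)
        b = a * h1 + d1
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        # Pick the sign that maximizes the denominator (smallest step).
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# Same cubic as in the bisection example: x^3 - 4x - 9 = 0.
print(muller(lambda x: x**3 - 4*x - 9, 2.0, 2.5, 3.0))
```

Because the discriminant may be negative, Muller's method can leave the real axis, which lets it locate complex roots as well.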
LECTURE NOTE 12
LU DECOMPOSITION METHOD:
In this method, the coefficient matrix A of the system of equations is decomposed or factorized into the product of a lower triangular matrix L and an upper triangular matrix U.
We write the matrix A as A = LU.
Using the matrix multiplication rule to multiply the matrices L and U and comparing the elements of the resulting matrix with those of A, we obtain
    li1 u1j + li2 u2j + ... + lin unj = aij,
where lij = 0 for j > i and uij = 0 for i > j.
This system of equations involves an n-parameter family of solutions. To produce a unique solution it is convenient to choose either uii = 1 or lii = 1 for all i:
(i) when we choose lii = 1 the method is called Doolittle's method;
(ii) when we choose uii = 1 the method is called Crout's method.
Let A be a square matrix. An LU factorization refers to the factorization of A,
with proper row and/or column orderings or permutations, into two factors, a
lower triangular matrix L and an upper triangular matrix U,
In the lower triangular matrix all elements above the diagonal are zero, in the
upper triangular matrix, all the elements below the diagonal are zero. For
example, for a 3-by-3 matrix A, its LU decomposition looks like this:

    [a11 a12 a13]   [l11  0   0 ] [u11 u12 u13]
    [a21 a22 a23] = [l21 l22  0 ] [ 0  u22 u23]
    [a31 a32 a33]   [l31 l32 l33] [ 0   0  u33]

Without a proper ordering or permutations in the matrix, the factorization may fail to materialize. For example, it is easy to verify (by expanding the matrix multiplication) that a11 = l11 u11. If a11 = 0, then at least one of l11 and u11 has to be zero, which implies either L or U is singular. This is impossible if A is nonsingular. This is a procedural problem. It can be removed by simply reordering the rows of A so that the first element of the permuted matrix is nonzero. The same problem in subsequent factorization steps can be removed the same way; see the basic procedure below.
It turns out that a proper permutation in rows (or columns) is sufficient for the LU factorization. LU factorization with partial pivoting usually refers to LU factorization with row permutations only:
    PA = LU,
where L and U are again lower and upper triangular matrices, and P is a permutation matrix which, when left-multiplied to A, reorders the rows of A. It turns out that all square matrices can be factorized in this form,[3] and the factorization is numerically stable in practice. This makes LUP decomposition a useful technique in practice.
An LU factorization with full pivoting involves both row and column permutations:
    PAQ = LU,
where L, U and P are defined as before, and Q is a permutation matrix that reorders the columns of A.
An LDU decomposition is a decomposition of the form
    A = LDU,
where D is a diagonal matrix and L and U are unit triangular matrices, meaning that all the entries on the diagonals of L and U are one.
Above we required that A be a square matrix, but these decompositions can all be
generalized to rectangular matrices as well. In that case, L and D are square
matrices both of which have the same number of rows as A, and U has exactly
the same dimensions as A. Upper triangular should be interpreted as having only
zero entries below the main diagonal, which starts at the upper left corner.
Example
We factorize a 2-by-2 matrix A as A = LU. One way to find the LU decomposition of such a small matrix is to solve the linear equations by inspection. Expanding the matrix multiplication gives
    a11 = l11 u11,        a12 = l11 u12,
    a21 = l21 u11,        a22 = l21 u12 + l22 u22.
This system of equations is underdetermined. In this case any two non-zero elements of the L and U matrices are parameters of the solution and can be set arbitrarily to any non-zero value. Therefore, to find the unique LU decomposition, it is necessary to put some restriction on the L and U matrices. For example, we can conveniently require the lower triangular matrix L to be a unit triangular matrix (i.e. set all the entries of its main diagonal to ones). Then the system of equations can be solved for the remaining entries, and substituting these values into A = LU yields the factorization.
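Doolittle's choice (lii = 1) can be sketched in Python as follows. The function name and the 2-by-2 test matrix are our own illustrative choices (the notes' numeric example did not survive extraction), and the sketch assumes no zero pivots arise, i.e. no pivoting is performed:

```python
def lu_doolittle(A):
    """Doolittle factorization A = L U with ones on the diagonal of L (no pivoting)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_doolittle(A)
print(L)   # unit lower triangular
print(U)   # upper triangular
```

Once L and U are known, Ax = b is solved by forward substitution (Ly = b) followed by back substitution (Ux = y).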
LECTURE NOTE 13
INTERPOLATION:
In the mathematical field of numerical analysis, interpolation is a method of
constructing new data points within the range of a discrete set of known data
points.
In engineering and science, one often has a number of data points, obtained by
sampling or experimentation, which represent the values of a function for a
limited number of values of the independent variable. It is often required to
interpolate (i.e. estimate) the value of that function for an intermediate value of
the independent variable. This may be achieved by curve fitting or regression
analysis.
A different problem which is closely related to interpolation is the approximation
of a complicated function by a simple function. Suppose the formula for some
given function is known, but too complex to evaluate efficiently. A few known
data points from the original function can be used to create an interpolation based
on a simpler function. Of course, when a simple function is used to estimate data
points from the original, interpolation errors are usually present; however,
depending on the problem domain and the interpolation method used, the gain in
simplicity may be of greater value than the resultant loss in accuracy.
LECTURE NOTE 14
LAGRANGE INTERPOLATION:
We seek a polynomial Pn(x) satisfying the conditions Pn(xi) = f(xi), where Pn is the interpolating polynomial.
Let P(x) = a1 x + a0 (where a0, a1 are constants) satisfy the interpolating conditions
    f(x0) = P(x0) = a1 x0 + a0,
    f(x1) = P(x1) = a1 x1 + a0.
Eliminating a0, a1 from the above equations we get
    P(x) = (x - x1) f(x0)/(x0 - x1) + (x - x0) f(x1)/(x1 - x0)
         = L0(x) f0 + L1(x) f1,
where L0(x), L1(x) are called Lagrange fundamental polynomials, satisfying the conditions
    Li(xj) = 1 for i = j,  0 for i != j.
Therefore the linear Lagrange polynomial is P1(x) = L0(x) f0 + L1(x) f1.
The quadratic interpolation is given by
    P2(x) = L0(x) f0 + L1(x) f1 + L2(x) f2.
In general, Pn(x) = L0(x) f0 + L1(x) f1 + L2(x) f2 + ... + Ln(x) fn.
Compute f(0.3) for the data

x   0   1   3    4     7
f   1   3   49   129   813

using Lagrange's interpolation formula (analytic value is 1.831).

f(x) = [(x - x1)(x - x2)(x - x3)(x - x4)] / [(x0 - x1)(x0 - x2)(x0 - x3)(x0 - x4)] * f0 + ...
     + [(x - x0)(x - x1)(x - x2)(x - x3)] / [(x4 - x0)(x4 - x1)(x4 - x2)(x4 - x3)] * f4

f(0.3) = (0.3 - 1)(0.3 - 3)(0.3 - 4)(0.3 - 7) / [(-1)(-3)(-4)(-7)] * 1
       + (0.3 - 0)(0.3 - 3)(0.3 - 4)(0.3 - 7) / [1 * (-2)(-3)(-6)] * 3
       + (0.3 - 0)(0.3 - 1)(0.3 - 4)(0.3 - 7) / [3 * 2 * (-1)(-4)] * 49
       + (0.3 - 0)(0.3 - 1)(0.3 - 3)(0.3 - 7) / [4 * 3 * 1 * (-3)] * 129
       + (0.3 - 0)(0.3 - 1)(0.3 - 3)(0.3 - 4) / [7 * 6 * 4 * 3] * 813
     = 1.831
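The sum above is exactly what a direct Python implementation of the Lagrange formula computes (the function name is our own choice):

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], fs[i]) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        Li = 1.0
        for j in range(n):
            if j != i:
                Li *= (x - xs[j]) / (xs[i] - xs[j])   # fundamental polynomial L_i(x)
        total += Li * fs[i]
    return total

# Data from the worked example above.
print(round(lagrange_interpolate([0, 1, 3, 4, 7], [1, 3, 49, 129, 813], 0.3), 3))  # 1.831
```

By construction Li(xj) is 1 when i = j and 0 otherwise, so the polynomial reproduces every tabulated value exactly.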
LECTURE NOTE 15
NEWTON DIVIDED DIFFERENCE:
Let us assume that the function f(x) is linear. Then the ratio
    [f(xi) - f(xj)] / (xi - xj),
where xi and xj are any two tabular points, is independent of xi and xj. This ratio is called the first divided difference of f(x) relative to xi and xj and is denoted by f[xi, xj]. That is,
    f[xi, xj] = [f(xi) - f(xj)] / (xi - xj) = f[xj, xi].
Since the ratio is independent of xi and xj, we can write f[x0, x] = f[x0, x1], i.e.
    [f(x) - f(x0)] / (x - x0) = f[x0, x1],
    f(x) = f(x0) + (x - x0) f[x0, x1]
         = [(f1 - f0) x + (f0 x1 - f1 x0)] / (x1 - x0).
So if f(x) is approximated with a linear polynomial, then the function value at any point x can be calculated by using
    f(x) ~ P1(x) = f(x0) + (x - x0) f[x0, x1],
where f[x0, x1] is the first divided difference of f relative to x0 and x1.
Similarly, if f(x) is a second degree polynomial then the secant slope defined above is not constant but a linear function of x. Hence the ratio
    [f[x1, x2] - f[x0, x1]] / (x2 - x0)
is independent of x0, x1 and x2. This ratio is defined as the second divided difference of f relative to x0, x1 and x2. The second divided difference is denoted
    f[x0, x1, x2] = [f[x1, x2] - f[x0, x1]] / (x2 - x0).
Now again, since f[x0, x1, x2] is independent of x0, x1 and x2, we have f[x1, x0, x] = f[x0, x1, x2], i.e.
    [f[x0, x] - f[x1, x0]] / (x - x1) = f[x0, x1, x2],
    f[x0, x] = f[x0, x1] + (x - x1) f[x0, x1, x2],
    [f[x] - f[x0]] / (x - x0) = f[x0, x1] + (x - x1) f[x0, x1, x2],
    f(x) = f[x0] + (x - x0) f[x0, x1] + (x - x0)(x - x1) f[x0, x1, x2].
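The divided-difference coefficients and the resulting Newton-form polynomial can be sketched in Python (function names are our own; the data are the same as in the Lagrange example, so the result must agree):

```python
def divided_differences(xs, fs):
    """Return the coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    coef = list(fs)
    n = len(xs)
    for j in range(1, n):
        # Work bottom-up so lower-order entries are still available.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate f(x) = f[x0] + (x-x0) f[x0,x1] + (x-x0)(x-x1) f[x0,x1,x2] + ..."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):   # Horner-like nested evaluation
        result = result * (x - xs[i]) + coef[i]
    return result

xs, fs = [0, 1, 3, 4, 7], [1, 3, 49, 129, 813]
coef = divided_differences(xs, fs)
print(round(newton_eval(xs, coef, 0.3), 3))  # agrees with Lagrange: 1.831
```

Unlike the Lagrange form, adding a new data point only appends one coefficient; the earlier ones are unchanged.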
LECTURE NOTE 16
NEWTON FORWARD INTERPOLATION:
Consider the equation of the linear interpolation obtained in the earlier section:
    f(x) ~ P1(x) = [(f1 - f0) x + (f0 x1 - f1 x0)] / (x1 - x0)
                 = [(x1 - x) f0 + (x - x0) f1] / (x1 - x0)
                 = f0 + [(x - x0)/(x1 - x0)] (f1 - f0)
                 = f0 + r Df0,
where r = (x - x0)/(x1 - x0) and Df0 = f1 - f0 (D denotes the forward difference).
Since x1 - x0 is the step length h, r can be written as (x - x0)/h and lies between 0 and 1.
Consider the function values (xi, fi), i = 0, 1, 2, ..., 5. Then the forward difference table is

xi   fi    Dfi            D2fi              D3fi               D4fi               D5fi
x0   f0
           Df0 = f1 - f0
x1   f1                   D2f0 = Df1 - Df0
           Df1 = f2 - f1                    D3f0 = D2f1 - D2f0
x2   f2                   D2f1 = Df2 - Df1                     D4f0 = D3f1 - D3f0
           Df2 = f3 - f2                    D3f1 = D2f2 - D2f1                    D5f0 = D4f1 - D4f0
x3   f3                   D2f2 = Df3 - Df2                     D4f1 = D3f2 - D3f1
           Df3 = f4 - f3                    D3f2 = D2f3 - D2f2
x4   f4                   D2f3 = Df4 - Df3
           Df4 = f5 - f4
x5   f5
If f(x) is known at the following data points

xi   0   1   2    3    4
fi   1   7   23   55   109

then find f(0.5) and f(1.5) using Newton's forward difference formula.
Solution:
Forward difference table

xi   fi    Dfi   D2fi   D3fi   D4fi
0    1
           6
1    7           10
           16            6
2    23          16             0
           32            6
3    55          22
           54
4    109
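A short Python sketch (function names are our own) builds the table above and evaluates the forward formula P(x) = f0 + rDf0 + r(r-1)/2! D2f0 + ...:

```python
def forward_difference_table(fs):
    """Build the columns f, Df, D2f, ... of the forward difference table."""
    table = [list(fs)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(xs, fs, x):
    """P(x) = f0 + r Df0 + r(r-1)/2! D2f0 + ...  with r = (x - x0)/h."""
    h = xs[1] - xs[0]
    r = (x - xs[0]) / h
    table = forward_difference_table(fs)
    term, total = 1.0, 0.0
    for k in range(len(fs)):
        total += term * table[k][0]        # leading entry of the kth column
        term *= (r - k) / (k + 1)          # next binomial factor r(r-1).../(k+1)!
    return total

xs, fs = [0, 1, 2, 3, 4], [1, 7, 23, 55, 109]
print(newton_forward(xs, fs, 0.5))  # 3.125
print(newton_forward(xs, fs, 1.5))  # 13.375
```

Since the third differences are constant (6) and the fourth is 0, the data come from a cubic, and the formula reproduces it exactly: f(0.5) = 3.125 and f(1.5) = 13.375.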
LECTURE NOTE 17
NEWTON BACKWARD DIFFERENCE TABLE:
As a particular case, let us again consider the linear approximation to f(x):
    f(x) ~ P1(x) = [(f1 - f0) x + (f0 x1 - f1 x0)] / (x1 - x0)
                 = [(x1 - x)/(x1 - x0)] f0 + [(x - x0)/(x1 - x0)] f1
                 = f1 + [(x - x1)/(x1 - x0)] (f1 - f0)
                 = f1 + s Nf1,
where s = (x - x1)/(x1 - x0) and Nf1 = f1 - f0 is the backward difference of f at x1.
Find f(0.15) using a Newton backward difference table from the data

x     f(x)       Nf        N2f        N3f        N4f
0.1   0.09983
                 0.09884
0.2   0.19867              -0.00199
                 0.09685               -0.00096
0.3   0.29552              -0.00295                0.00002
                 0.09390               -0.00094
0.4   0.38942              -0.00389
                 0.09001
0.5   0.47943

s = (x - xn)/h = (0.15 - 0.5)/0.1 = -3.5

f(0.15) ~ fn + s Nfn + s(s + 1)/2! N2fn + s(s + 1)(s + 2)/3! N3fn + s(s + 1)(s + 2)(s + 3)/4! N4fn
        = 0.47943 + (-3.5)(0.09001) + [(-3.5)(-2.5)/2](-0.00389)
          + [(-3.5)(-2.5)(-1.5)/6](-0.00094) + [(-3.5)(-2.5)(-1.5)(-0.5)/24](0.00002)
        = 0.47943 - 0.31504 - 0.01702 + 0.00206 + 0.00001
        = 0.14944

(The tabulated data are values of sin x, and indeed sin 0.15 = 0.14944 to five decimal places.)
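The backward formula can be sketched in Python as well (the function name is our own choice); it rebuilds the differences from the raw data, so it serves as a check on the hand computation above:

```python
def newton_backward(xs, fs, x):
    """P(x) = fn + s Nfn + s(s+1)/2! N2fn + ...  with s = (x - xn)/h."""
    n = len(fs)
    h = xs[1] - xs[0]
    s = (x - xs[-1]) / h
    diffs = list(fs)
    total = diffs[-1]
    term = 1.0
    for k in range(1, n):
        # Next column of backward differences; its last entry is N^k f_n.
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        term *= (s + k - 1) / k
        total += term * diffs[-1]
    return total

xs = [0.1, 0.2, 0.3, 0.4, 0.5]
fs = [0.09983, 0.19867, 0.29552, 0.38942, 0.47943]   # values of sin(x)
print(round(newton_backward(xs, fs, 0.15), 5))
```

The backward form is the natural choice when the interpolation point lies near the end of the table, as here.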
LECTURE NOTE 18
GAUSS-SEIDEL METHOD:
Example 1: Solving a system of equations by the Gauss-Seidel method.
Use the Gauss-Seidel method to solve the system

    4x1 +  x2 -   x3 =  3        <=>   x1 = -1/4 x2  + 1/4 x3 + 3/4
    2x1 + 7x2 +   x3 = 19        <=>   x2 = -2/7 x1  - 1/7 x3 + 19/7
     x1 - 3x2 + 12x3 = 31        <=>   x3 = -1/12 x1 + 1/4 x2 + 31/12

Solution:
We have x = Gx + c with

        [  0    -1/4   1/4 ]        [ 3/4   ]
    G = [ -2/7    0   -1/7 ],   c = [ 19/7  ]
        [ -1/12  1/4    0  ]        [ 31/12 ]

The iteration formulas are

    x1(k+1) = -1/4 x2(k)    + 1/4 x3(k)    + 3/4
    x2(k+1) = -2/7 x1(k+1)  - 1/7 x3(k)    + 19/7
    x3(k+1) = -1/12 x1(k+1) + 1/4 x2(k+1)  + 31/12

The difference between the Gauss-Seidel method and the Jacobi method is that here we use the coordinates x1(k+1), ..., xi-1(k+1) of x(k+1) already known to compute its ith coordinate xi(k+1).
If we start from x1(0) = x2(0) = x3(0) = 0 and apply the iteration formulas, we obtain

k   x1(k)   x2(k)   x3(k)
0   0       0       0
1   0.75    2.50    3.15
2   0.91    2.00    3.01
3   1.00    2.00    3.00
4   1.00    2.00    3.00

The exact solution is: x1 = 1, x2 = 2, x3 = 3.
For instance, when k = 2 we have x2(2) = 2.00:
    x2(2) = -2/7 x1(2) - 1/7 x3(1) + 19/7 = -2/7 * 0.91 - 1/7 * 3.15 + 19/7 = 2.00
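A sketch of the sweep in Python (function name and iteration count are our own choices); note that each component update immediately uses the freshly computed components, which is exactly the Gauss-Seidel idea:

```python
def gauss_seidel(A, b, x0, iterations=25):
    """Sweep through the equations, using updated components immediately."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0, -1.0],
     [2.0, 7.0,  1.0],
     [1.0, -3.0, 12.0]]
b = [3.0, 19.0, 31.0]
print([round(v, 4) for v in gauss_seidel(A, b, [0.0, 0.0, 0.0])])  # [1.0, 2.0, 3.0]
```

Replacing `x[j]` by the previous sweep's values throughout would give the Jacobi method instead. The matrix here is strictly diagonally dominant, which guarantees convergence.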
LECTURE NOTE 19
TAYLOR'S SERIES:
Consider the one-dimensional initial value problem
    y' = f(x, y),  y(x0) = y0,
where f is a function of two variables x and y, and (x0, y0) is a known point on the solution curve.
If the existence of all higher order partial derivatives is assumed for y at x = x0, then by Taylor series the value of y at any neighbouring point x0 + h can be written as
    y(x0 + h) = y(x0) + h y'(x0) + h^2/2! y''(x0) + h^3/3! y'''(x0) + ...
where ' represents the derivative with respect to x. Since y0 at x0 is known, y' at x0 can be found by computing f(x0, y0). Similarly, higher derivatives of y at x0 can also be computed by making use of the relation y' = f(x, y):
    y'' = fx + fy y'
    y''' = fxx + 2fxy y' + fyy y'^2 + fy y''
and so on. Then
    y(x0 + h) = y(x0) + h f + h^2 (fx + fy y')/2! + h^3 (fxx + 2fxy y' + fyy y'^2 + fy y'')/3! + O(h^4).

Q: Using the Taylor series method, find y(0.1) for y' = x - y^2, y(0) = 1, correct up to four decimal places.
Given y' = f(x, y) = x - y^2,
    y''  = 1 - 2yy',
    y''' = -2yy'' - 2y'^2,
    yiv  = -2yy''' - 6y'y'',
    yv   = -2yyiv - 8y'y''' - 6y''^2.
Since at x = 0, y = 1:
    y' = -1,  y'' = 3,  y''' = -8,  yiv = 34  and  yv = -186.
The Taylor formula through the fifth-power term is
    y(x) = y(x0) + (x - x0) y'(x0) + (x - x0)^2 y''(x0)/2! + (x - x0)^3 y'''(x0)/3!
           + (x - x0)^4 yiv(x0)/4! + (x - x0)^5 yv(x0)/5! + ...
         = 1 - x + 3x^2/2! - 8x^3/3! + 34x^4/4! - 186x^5/5!   (since x0 = 0)
         = 1 - x + 3x^2/2 - 4x^3/3 + 17x^4/12 - 31x^5/20.
Now
    y(0.1) = 1 - 0.1 + 3(0.1)^2/2 - 4(0.1)^3/3 + 17(0.1)^4/12 - 31(0.1)^5/20
           = 0.915 - 0.00133 + 0.00014 - 0.0000155
           = 0.9138.
Since the value of the last term does not contribute to the first four decimal places, a Taylor series formula of this order is sufficient to find y(0.1) accurate up to four decimal places.
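The resulting polynomial can be evaluated directly in Python (the function name is ours; the coefficients are exactly those derived above):

```python
def taylor_y(x):
    """Fifth-order Taylor polynomial for y' = x - y^2, y(0) = 1, about x = 0."""
    # y(0)=1, y'(0)=-1, y''(0)=3, y'''(0)=-8, yiv(0)=34, yv(0)=-186
    return 1 - x + 3*x**2/2 - 4*x**3/3 + 17*x**4/12 - 31*x**5/20

print(round(taylor_y(0.1), 4))  # 0.9138
```

Note the polynomial is only accurate near x = 0; for larger x one would re-expand about a new point, which is the idea behind Taylor-series ODE steppers.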
LECTURE NOTE 20
TRAPEZOIDAL RULE:
In numerical analysis, the trapezoidal rule (also known as the trapezoid rule or trapezium rule) is a technique for approximating the definite integral
    I = Int_a^b f(x) dx.
The trapezoidal rule works by approximating the region under the graph of the function f(x) as a trapezoid and calculating its area. It follows that
    Int_a^b f(x) dx ~ (b - a) [f(a) + f(b)]/2.
For a domain discretized into N equally spaced panels, or N+1 grid points a = x1 < x2 < ... < xN+1 = b, where the grid spacing is h = (b - a)/N, the approximation to the integral becomes
    Int_a^b f(x) dx ~ (h/2) [f(x1) + 2f(x2) + 2f(x3) + ... + 2f(xN) + f(xN+1)].
Error analysis
The error of the composite trapezoidal rule is the difference between the value of the integral and the numerical result:
    E = Int_a^b f(x) dx - (h/2) [f(x1) + 2f(x2) + ... + 2f(xN) + f(xN+1)].
There exists a number c between a and b such that
    E = -(b - a) h^2 f''(c)/12.
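The composite rule is a few lines of Python (the function name and test integral are our own choices):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal panels: h/2 (f0 + 2f1 + ... + 2f_{n-1} + fn)."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))      # endpoint weights 1/2
    for i in range(1, n):
        total += f(a + i * h)        # interior weights 1
    return h * total

# Int_0^pi sin(x) dx = 2; the error shrinks like h^2.
print(trapezoid(math.sin, 0.0, math.pi, 100))
```

Doubling n roughly quarters the error, consistent with the h^2 term in the error formula above.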
LECTURE NOTE 21
SIMPSON'S 1/3RD RULE:
In numerical analysis, Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the following approximation:
    Int_a^b f(x) dx ~ (b - a)/6 [f(a) + 4f((a + b)/2) + f(b)].
If the interval of integration [a, b] is in some sense "small", then Simpson's rule will provide an adequate approximation to the exact integral. By small, what we really mean is that the function being integrated is relatively smooth over the interval [a, b]. For such a function, a smooth quadratic interpolant like the one used in Simpson's rule will give good results.
However, it is often the case that the function we are trying to integrate is not smooth over the interval. Typically, this means that either the function is highly oscillatory, or it lacks derivatives at certain points. In these cases, Simpson's rule may give very poor results. One common way of handling this problem is by breaking up the interval [a, b] into a number of small subintervals. Simpson's rule is then applied to each subinterval, with the results being summed to produce an approximation for the integral over the entire interval. This sort of approach is termed the composite Simpson's rule.
Suppose that the interval [a, b] is split up into n subintervals, with n an even number. Then the composite Simpson's rule is given by
    Int_a^b f(x) dx ~ (h/3) [f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + ... + 2f(xn-2) + 4f(xn-1) + f(xn)],
where xj = a + jh for j = 0, 1, ..., n with h = (b - a)/n; in particular, x0 = a and xn = b. The above formula can also be written as
    Int_a^b f(x) dx ~ (h/3) [f(x0) + 4 * (sum of f at odd-index points) + 2 * (sum of f at interior even-index points) + f(xn)].
The error committed by the composite Simpson's rule is bounded (in absolute value) by
    (h^4/180) (b - a) max |f''''(x)| over [a, b],
where h is the "step length", given by h = (b - a)/n.
Error
The error in approximating an integral by Simpson's rule on a single interval [a, b] is
    -(1/90) [(b - a)/2]^5 f''''(c),
where c is some number between a and b.
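The composite rule differs from the trapezoidal sketch only in the 4-2-4 weight pattern (the function name is our own choice):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even: h/3 (f0 + 4*odd + 2*even + fn)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)   # odd points weight 4, even weight 2
    return h * total / 3

# Int_0^pi sin(x) dx = 2; already accurate to about 1e-4 with n = 10.
print(simpson(math.sin, 0.0, math.pi, 10))
```

The fourth-order error means halving h cuts the error by roughly a factor of 16, far faster than the trapezoidal rule's factor of 4.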
LECTURE NOTE 22
GAUSS QUADRATURE FORMULA:
In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration. (See numerical integration for more on quadrature rules.) An n-point Gaussian quadrature rule, named after Carl Friedrich Gauss, is a quadrature rule constructed to yield an exact result for polynomials of degree 2n - 1 or less by a suitable choice of the points xi and weights wi for i = 1, ..., n. The domain of integration for such a rule is conventionally taken as [-1, 1], so the rule is stated as
    Int_{-1}^{1} f(x) dx ~ sum_{i=1}^{n} wi f(xi).
Gaussian quadrature as above will only produce accurate results if the function f(x) is well approximated by a polynomial function within the range [-1, 1]. The method is not, for example, suitable for functions with singularities. However, if the integrated function can be written as f(x) = w(x) g(x), where g(x) is approximately polynomial and w(x) is known, then alternative weights wi' and points xi' that depend on the weighting function w(x) may give better results:
    Int_{-1}^{1} f(x) dx = Int_{-1}^{1} w(x) g(x) dx ~ sum_{i=1}^{n} wi' g(xi').
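As a concrete sketch, here are the standard 2- and 3-point Gauss-Legendre nodes and weights hard-coded in Python (function and table names are our own choices):

```python
import math

# Nodes and weights for 2- and 3-point Gauss-Legendre rules on [-1, 1].
GAUSS = {
    2: ([-1 / math.sqrt(3), 1 / math.sqrt(3)], [1.0, 1.0]),
    3: ([-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)], [5 / 9, 8 / 9, 5 / 9]),
}

def gauss_quad(f, n):
    """Approximate Int_{-1}^{1} f(x) dx by the n-point Gaussian rule."""
    xs, ws = GAUSS[n]
    return sum(w * f(x) for x, w in zip(xs, ws))

# A 3-point rule is exact for polynomials up to degree 2n - 1 = 5:
print(gauss_quad(lambda x: x**4, 3))  # exact value is 2/5
```

Note that only three function evaluations integrate a quartic exactly, whereas Simpson's rule would need the subinterval width to shrink to remove its h^4 error.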
LECTURE NOTE 23
EULER METHOD:
In mathematics and computational science, the Euler method is a SN-order
numerical procedure for solving ordinary differential equations (ODEs) with a
given initial value. It is the most basic explicit method for numerical integration
of ordinary differential equations and is the simplest Runge–Kutta method. The
Euler method is named after Leonhard Euler,
Formulation of the method
Suppose that we want to approximate the solution of the initial value problem
    y'(t) = f(t, y(t)),  y(t0) = y0.
Choose a value h for the size of every step and set tn = t0 + nh. Now, one step of the Euler method from tn to tn+1 = tn + h is
    yn+1 = yn + h f(tn, yn).
The value of yn is an approximation of the solution to the ODE at time tn: yn ~ y(tn). The Euler method is explicit, i.e. the solution yn+1 is an explicit function of yi for i <= n.
While the Euler method integrates a first-order ODE, any ODE of order N can be represented as a first-order ODE: to treat the equation
    y^(N)(t) = f(t, y(t), y'(t), ..., y^(N-1)(t)),
we introduce auxiliary variables z1(t) = y(t), z2(t) = y'(t), ..., zN(t) = y^(N-1)(t) and obtain the equivalent equation
    z'(t) = (z2(t), ..., zN(t), f(t, z1(t), ..., zN(t))).
This is a first-order system in the variable z(t) and can be handled by Euler's method or, in fact, by any other scheme for first-order systems.[4]
Example
Given the initial value problem
we would like to use the Euler method to approximate
.
Illustration of numerical integration for the equation
Blue is
the Euler method; green, the midpoint method; red, the exact solution,
The step size is h = 1.0.
The Euler method is

y_{n+1} = y_n + h·f(t_n, y_n),

so first we must compute f(t_0, y_0). In this simple differential equation, the
function is defined by f(t, y) = y. We have

f(t_0, y_0) = f(0, 1) = 1.

By doing the above step, we have found the slope of the line that is tangent to the
solution curve at the point (0, 1). Recall that the slope is defined as the change in
y divided by the change in t, that is, Δy/Δt.
The next step is to multiply the above value by the step size h, which we take
equal to one here:

h·f(t_0, y_0) = 1·1 = 1.

Since the step size is the change in t, when we multiply the step size and the
slope of the tangent, we get a change in y value. This value is then added to the
initial y value to obtain the next value to be used for computations:

y_1 = y_0 + h·f(t_0, y_0) = 1 + 1·1 = 2.

The above steps should be repeated to find y_2, y_3 and y_4.
Due to the repetitive nature of this algorithm, it can be helpful to organize
computations in a chart form, as seen below, to avoid making errors.
t_n   y_n   f(t_n, y_n)   h·f(t_n, y_n)   y_{n+1} = y_n + h·f(t_n, y_n)
 0     1        1              1                      2
 1     2        2              2                      4
 2     4        4              4                      8
 3     8        8              8                     16
The conclusion of this computation is that y_4 = 16. The exact solution of the
differential equation is y(t) = e^t, so y(4) = e^4 ≈ 54.598.
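The iteration above is easy to mechanise. Below is a minimal sketch in Python (not part of the original notes; the function name euler is illustrative) that reproduces the chart for y′ = y, y(0) = 1 with h = 1:

```python
# Euler's method: y_{n+1} = y_n + h * f(t_n, y_n)
def euler(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)   # advance along the tangent line
        t = t + h
    return y

# Example from the notes: y' = y, y(0) = 1, step size h = 1, four steps
approx = euler(lambda t, y: y, t0=0.0, y0=1.0, h=1.0, n_steps=4)
print(approx)  # 16.0, while the exact value is e^4 ≈ 54.598
```

Note how coarse the step size makes the answer: halving h repeatedly drives the approximation toward the exact value, at a rate proportional to h (first-order convergence).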
LECTURE NOTE 24
IMPROVED EULER METHOD:
In mathematics and computational science, Heun's method may refer to the
improved[1] or modified Euler's method (that is, the explicit trapezoidal
rule[2]), or a similar two-stage Runge–Kutta method. It is named after Karl Heun
and is a numerical procedure for solving ordinary differential equations (ODEs)
with a given initial value. Both variants can be seen as extensions of the Euler
method into two-stage second-order Runge–Kutta methods.
The procedure for calculating the numerical solution to the initial value problem

y′(t) = f(t, y(t)),   y(t_0) = y_0,

by way of Heun's method is to first calculate the intermediate (predictor) value

ỹ_{i+1} = y_i + h·f(t_i, y_i),

and then the final approximation at the next integration point,

y_{i+1} = y_i + (h/2)·[ f(t_i, y_i) + f(t_{i+1}, ỹ_{i+1}) ],

where h is the step size and t_{i+1} = t_i + h.
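The predictor-corrector pair above can be sketched in a few lines of Python (an illustrative sketch, not part of the original notes):

```python
# Heun's (improved Euler) method: Euler predictor + trapezoidal corrector
def heun(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y_pred = y + h * f(t, y)                           # predictor step
        y = y + (h / 2.0) * (f(t, y) + f(t + h, y_pred))   # corrector step
        t = t + h
    return y

# Test problem y' = y, y(0) = 1 on [0, 1] with h = 0.1;
# the exact answer is e ≈ 2.71828, and Heun lands much closer than Euler would
print(heun(lambda t, y: y, 0.0, 1.0, 0.1, 10))  # ≈ 2.714
```

Because it averages the slopes at both ends of the step, Heun's method is second-order accurate, versus first-order for the plain Euler method.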
LECTURE NOTE 25
RUNGE KUTTA METHOD:
One member of the family of Runge–Kutta methods is often referred to as
"RK4", the "classical Runge–Kutta method" or simply as "the Runge–Kutta
method".
Let an initial value problem be specified as follows:

y′ = f(t, y),   y(t_0) = y_0.

Here, y is an unknown function (scalar or vector) of time t which we would like
to approximate; we are told that y′, the rate at which y changes, is a function of t
and of y itself. At the initial time t_0 the corresponding y-value is y_0. The function
f and the data t_0, y_0 are given.
Now pick a step size h > 0 and define, for n = 0, 1, 2, 3, . . . ,

y_{n+1} = y_n + (h/6)·(k_1 + 2k_2 + 2k_3 + k_4),
t_{n+1} = t_n + h,

where

k_1 = f(t_n, y_n),
k_2 = f(t_n + h/2, y_n + (h/2)·k_1),
k_3 = f(t_n + h/2, y_n + (h/2)·k_2),
k_4 = f(t_n + h, y_n + h·k_3).[1]
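The four-stage update can be sketched directly from the formulas above (an illustrative Python sketch, not part of the original notes):

```python
import math

# Classical fourth-order Runge-Kutta (RK4)
def rk4(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2.0, y + h / 2.0 * k1)
        k3 = f(t + h / 2.0, y + h / 2.0 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)  # weighted slope average
        t = t + h
    return y

# y' = y, y(0) = 1 on [0, 1] with h = 0.1: RK4's error vs e is tiny
print(abs(rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10) - math.e))  # on the order of 1e-6
```

The weights 1, 2, 2, 1 make the local error O(h^5) and the global error O(h^4), which is why RK4 is the workhorse among the explicit methods in these notes.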
LECTURE NOTE 26
Solve
with step size h = 0.025 by using the Runge–Kutta method.
The method proceeds as follows:
The numerical solutions correspond to the underlined values
LECTURE NOTE 27
Matrix inverse:
The general system of linear equations in n variables can be written in matrix
form Ax = b, where a vector x is sought which satisfies this equation. We will
now use the inverse matrix A⁻¹ to find this vector.
1. The inverse matrix
In Step 11, we observed that if the determinant of A is non-zero, it has an
inverse matrix A⁻¹, in which case the solution of the linear system can be written as
x = A⁻¹b, i.e., the solution of the system of linear equations can be
obtained by first finding the inverse of the coefficient matrix A, and then
forming the product A⁻¹b.
2. As a 2 × 2 example, consider

A = | 2  1 |
    | 4  5 |

and seek the inverse matrix

A⁻¹ = | u1  u2 |
      | v1  v2 |

such that A·A⁻¹ = I. This is equivalent to solving the two systems

| 2  1 | |u1|   |1|          | 2  1 | |u2|   |0|
| 4  5 | |v1| = |0|   and    | 4  5 | |v2| = |1| .
The method proceeds as follows:
1. Form the augmented matrix:

| 2  1 | 1  0 |
| 4  5 | 0  1 |

2. Apply elementary row operations to the augmented matrix such
that A is transformed into an upper triangular matrix (here R2 → R2 − 2R1):

| 2  1 |  1  0 |
| 0  3 | −2  1 |

3. Solve the two systems

| 2  1 | |u1|   | 1 |          | 2  1 | |u2|   |0|
| 0  3 | |v1| = |−2 |   and    | 0  3 | |v2| = |1|

using back-substitution. Note how the systems have been
constructed, using the reduced A and the columns of the reduced
identity block. From the first system, 3v1 = −2, v1 = −2/3, and 2u1 + v1 = 1, whence 2u1 = 1 +
2/3, u1 = 5/6. From the second system, 3v2 = 1, v2 = 1/3, and 2u2 + v2
= 0, whence 2u2 = −1/3, u2 = −1/6. Thus the required inverse matrix
is:

A⁻¹ = |  5/6  −1/6 |
      | −2/3   1/3 |

4. Check: A·A⁻¹ should be equal to I. Multiplication yields:

| 2  1 | |  5/6  −1/6 |   | 1  0 |
| 4  5 | | −2/3   1/3 | = | 0  1 |

so that A⁻¹ is correct.
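The same elimination can be carried out programmatically. This is a sketch (not part of the original notes) using exact rational arithmetic; the 2 × 2 matrix is the one whose back-substitution arithmetic appears in the worked example above:

```python
from fractions import Fraction

def invert_2x2_gauss(A):
    """Invert a 2x2 matrix by row-reducing the augmented matrix [A | I]."""
    n = 2
    # Build [A | I] with exact rational entries (assumes non-zero pivots)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(i == j) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        pivot = M[col][col]
        M[col] = [x / pivot for x in M[col]]              # scale pivot row to 1
        for row in range(n):
            if row != col:
                factor = M[row][col]
                M[row] = [a - factor * b for a, b in zip(M[row], M[col])]
    return [row[n:] for row in M]                         # right half is A^-1

A = [[2, 1], [4, 5]]
inv = invert_2x2_gauss(A)
print(inv)  # entries 5/6, -1/6, -2/3, 1/3, matching the hand computation
```

Using Fraction avoids the rounding error a floating-point elimination would introduce, so the check A·A⁻¹ = I holds exactly.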
LECTURE NOTE 28
Definition: An experiment is a situation involving chance or probability that
leads to results called outcomes.
Example: In the problem above, the experiment is spinning the spinner.

Definition: An outcome is the result of a single trial of an experiment.
Example: The possible outcomes are landing on yellow, blue, green or red.

Definition: An event is one or more outcomes of an experiment.
Example: One event of this experiment is landing on blue.

Definition: Probability is the measure of how likely an event is.
Example: The probability of landing on blue is one fourth.
In order to measure probabilities, mathematicians have devised the following formula for
finding the probability of an event.

Probability of an Event:
P(A) = (number of ways event A can occur) / (total number of possible outcomes)

The probability of event A is the number of ways event A can
occur divided by the total number of possible outcomes. Let's
take a look at a slight modification of the problem from the top of the
page.
Experiment 1:
A spinner has 4 equal sectors colored yellow,
blue, green and red. After spinning the
spinner, what is the probability of landing on
each color?
Outcomes:
The possible outcomes of this experiment are
yellow, blue, green, and red.
Probabilities:
P(yellow) = (# of ways to land on yellow) / (total # of colors) = 1/4
P(blue)   = (# of ways to land on blue)   / (total # of colors) = 1/4
P(green)  = (# of ways to land on green)  / (total # of colors) = 1/4
P(red)    = (# of ways to land on red)    / (total # of colors) = 1/4
Experiment 2:
A single 6-sided die is rolled. What is the
probability of each outcome? What is the
probability of rolling an even number? of rolling
an odd number?
Outcomes:
The possible outcomes of this experiment are
1, 2, 3, 4, 5 and 6.
Probabilities:
P(1) = (# of ways to roll a 1) / (total # of sides) = 1/6
P(2) = (# of ways to roll a 2) / (total # of sides) = 1/6
P(3) = (# of ways to roll a 3) / (total # of sides) = 1/6
P(4) = (# of ways to roll a 4) / (total # of sides) = 1/6
P(5) = (# of ways to roll a 5) / (total # of sides) = 1/6
P(6) = (# of ways to roll a 6) / (total # of sides) = 1/6
P(even) = (# of ways to roll an even number) / (total # of sides) = 3/6 = 1/2
P(odd)  = (# of ways to roll an odd number)  / (total # of sides) = 3/6 = 1/2
Experiment 2 illustrates the difference between an outcome and an
event. A single outcome of this experiment is rolling a 1, or rolling a 2,
or rolling a 3, etc. Rolling an even number (2, 4 or 6) is an event, and
rolling an odd number (1, 3 or 5) is also an event.
In Experiment 1 the probability of each outcome is always the same.
The probability of landing on each color of the spinner is always one
fourth. In Experiment 2, the probability of rolling each number on the
die is always one sixth. In both of these experiments, the outcomes
are equally likely to occur. Let's look at an experiment in which the
outcomes are not equally likely.
Experiment 3:
A glass jar contains 6 red, 5 green, 8 blue and
3 yellow marbles. If a single marble is chosen
at random from the jar, what is the probability
of choosing a red marble? a green marble? a
blue marble? a yellow marble?
Outcomes:
The possible outcomes of this experiment are
red, green, blue and yellow.
Probabilities:
P(red)    = (# of ways to choose red)    / (total # of marbles) = 6/22 = 3/11
P(green)  = (# of ways to choose green)  / (total # of marbles) = 5/22
P(blue)   = (# of ways to choose blue)   / (total # of marbles) = 8/22 = 4/11
P(yellow) = (# of ways to choose yellow) / (total # of marbles) = 3/22
The outcomes in this experiment are not equally likely to occur. You
are more likely to choose a blue marble than any other color. You are
least likely to choose a yellow marble.
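The single-formula computation P(A) = (favourable outcomes) / (total outcomes) is easy to check by machine. This sketch (not part of the original notes) works through the marble jar of Experiment 3 with exact fractions:

```python
from fractions import Fraction

# Experiment 3: 6 red, 5 green, 8 blue and 3 yellow marbles in a jar
jar = {"red": 6, "green": 5, "blue": 8, "yellow": 3}
total = sum(jar.values())  # 22 marbles in all

# P(color) = (# of ways to choose that color) / (total # of marbles)
probs = {color: Fraction(count, total) for color, count in jar.items()}
for color, p in probs.items():
    print(f"P({color}) = {p}")   # e.g. P(blue) = 4/11, P(yellow) = 3/22
```

The four probabilities sum to 1, as they must for any complete set of mutually exclusive outcomes.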
Experiment 4:
Choose a number at random from 1 to 5. What is
the probability of each outcome? What is the
probability that the number chosen is even? What
is the probability that the number chosen is odd?
Outcomes:
The possible outcomes of this experiment are 1, 2,
3, 4 and 5.
Probabilities:
P(1) = (# of ways to choose a 1) / (total # of numbers) = 1/5
P(2) = (# of ways to choose a 2) / (total # of numbers) = 1/5
P(3) = (# of ways to choose a 3) / (total # of numbers) = 1/5
P(4) = (# of ways to choose a 4) / (total # of numbers) = 1/5
P(5) = (# of ways to choose a 5) / (total # of numbers) = 1/5
P(even) = (# of ways to choose an even number) / (total # of numbers) = 2/5
P(odd)  = (# of ways to choose an odd number)  / (total # of numbers) = 3/5
The outcomes 1, 2, 3, 4 and 5 are equally likely to occur as a result of
this experiment. However, the events even and odd are not equally
likely to occur, since there are 3 odd numbers and only 2 even
numbers from 1 to 5.
Summary:
The probability of an event is the measure of the
chance that the event will occur as a result of an
experiment. The probability of an event A is the
number of ways event A can occur divided by the
total number of possible outcomes. The probability of
an event A, symbolized by P(A), is a number between
0 and 1, inclusive, that measures the likelihood of an
event in the following way:
- If P(A) > P(B), then event A is more likely to occur than event B.
- If P(A) = P(B), then events A and B are equally likely to occur.
LECTURE NOTE 29
RANDOM VARIABLES:
The outcome of an experiment need not be a number, for example, the
outcome when a coin is tossed can be 'heads' or 'tails'. However, we
often want to represent outcomes as numbers. A random variable is a
function that associates a unique numerical value with every outcome of
an experiment. The value of the random variable will vary from trial to trial
as the experiment is repeated.
There are two types of random variable - discrete and continuous.
A random variable has either an associated probability distribution
(discrete random variable) or probability density function (continuous
random variable).
Examples
1. A coin is tossed ten times. The random variable X is the number of
tails that are noted. X can only take the values 0, 1, ..., 10, so X is a
discrete random variable.
2. A light bulb is burned until it burns out. The random variable Y is its
lifetime in hours. Y can take any positive real value, so Y is a
continuous random variable.
The expected value (or population mean) of a random variable indicates
its average or central value. It is a useful summary value (a number) of
the variable's distribution.
Stating the expected value gives a general impression of the behaviour of
some random variable without giving full details of its probability
distribution (if it is discrete) or its probability density function (if it is
continuous).
Two random variables with the same expected value can have very
different distributions. There are other useful descriptive measures which
affect the shape of the distribution, for example variance.
The expected value of a random variable X is symbolised by E(X) or µ.
If X is a discrete random variable with possible values x1, x2, x3, ..., xn,
and p(xi) denotes P(X = xi), then the expected value of X is defined by:

E(X) = Σ xi p(xi),

where the elements are summed over all values of the random variable
X.
If X is a continuous random variable with probability density function f(x),
then the expected value of X is defined by:

E(X) = ∫ x f(x) dx.
Example
Discrete case : When a die is thrown, each of the possible faces 1, 2, 3,
4, 5, 6 (the xi's) has a probability of 1/6 (the p(xi)'s) of showing. The
expected value of the face showing is therefore:
µ = E(X) = (1 × 1/6) + (2 × 1/6) + (3 × 1/6) + (4 × 1/6) + (5 × 1/6) + (6 × 1/6) = 3.5

Notice that, in this case, E(X) is 3.5, which is not a possible value of X.
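The die example can be checked with a one-line weighted sum. This is an illustrative sketch (not part of the original notes), using exact fractions so E(X) comes out as 7/2 rather than a rounded float:

```python
from fractions import Fraction

# Expected value of a discrete random variable: E(X) = sum of x_i * p(x_i)
def expected_value(values, probs):
    return sum(x * p for x, p in zip(values, probs))

# Fair die: faces 1..6, each with probability 1/6
faces = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6
print(expected_value(faces, probs))  # 7/2, i.e. 3.5
```

As the notes observe, the result 3.5 is not itself a possible value of X; the expected value is a long-run average, not a guaranteed outcome.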
LECTURE NOTE 30
Frequency Distributions
Recall also that in our general notation, we have a data set with n points arranged
in a frequency distribution with k classes. The class mark of the i'th class is
denoted xi; the frequency of the i'th class is denoted fi; and the relative frequency
of the i'th class is denoted pi = fi / n.
Mean
The mean of a data set is simply the arithmetic average of the values in the set,
obtained by summing the values and dividing by the number of values. Recall
that when we summarize a data set in a frequency distribution, we are
approximating the data set by "rounding" each value in a given class to the class
mark. With this in mind, it is natural to define the mean of a frequency
distribution by

µ = (1/n) Σ fi xi = Σ pi xi,

where the sum runs over the k classes.
The mean is a measure of the center of the distribution. As you can see from the
algebraic formula, the mean is a weighted average of the class marks, with the
relative frequencies as the weight factors. We can compare the distribution to a
mass distribution, by thinking of the class marks as point masses on a wire (the
x-axis) and the relative frequencies as the masses of these points. In this analogy,
the mean is literally the center of mass--the balance point of the wire.
Recall also that we can think of the relative frequency distribution as the
probability distribution of a random variable X that gives the mark of the class
containing a randomly chosen value from the data set. With this interpretation,
the mean of the frequency distribution is the same as the mean (or expected
value) of X.
Variance and Standard Deviation
The variance of a data set is the arithmetic average of the squared differences
between the values and the mean. Again, when we summarize a data set in a
frequency distribution, we are approximating the data set by "rounding" each
value in a given class to the class mark. Thus, the variance of a frequency
distribution is given by

σ² = (1/n) Σ fi (xi − µ)² = Σ pi (xi − µ)².

The standard deviation is the square root of the variance:

σ = √( Σ pi (xi − µ)² ).
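The weighted-average formulas above translate directly into code. This is an illustrative sketch (not part of the original notes); the class marks and frequencies are hypothetical:

```python
# Mean and variance of a frequency distribution with class marks x_i,
# frequencies f_i and relative frequencies p_i = f_i / n
def freq_mean(marks, freqs):
    n = sum(freqs)
    return sum(f * x for x, f in zip(marks, freqs)) / n

def freq_variance(marks, freqs):
    n = sum(freqs)
    m = freq_mean(marks, freqs)
    return sum(f * (x - m) ** 2 for x, f in zip(marks, freqs)) / n

# Hypothetical distribution: class marks 5, 15, 25 with frequencies 2, 5, 3
marks, freqs = [5, 15, 25], [2, 5, 3]
print(freq_mean(marks, freqs))            # 16.0
print(freq_variance(marks, freqs) ** 0.5) # standard deviation 7.0
```

In the centre-of-mass analogy from the notes, freq_mean is the balance point of point masses f_i placed at the class marks x_i.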
LECTURE NOTE 31
In probability theory and statistics, the binomial distribution with parameters n
and p is the discrete probability distribution of the number of successes in a
sequence of n independent yes/no experiments, each of which yields success
with probability p. A success/failure experiment is also called a Bernoulli
experiment or Bernoulli trial; when n = 1, the binomial distribution is a Bernoulli
distribution. The binomial distribution is the basis for the popular binomial test
of statistical significance.
The binomial distribution is frequently used to model the number of successes in
a sample of size n drawn with replacement from a population of size N. If the
sampling is carried out without replacement, the draws are not independent and
so the resulting distribution is a hypergeometric distribution, not a binomial one.
However, for N much larger than n, the binomial distribution is a good
approximation, and widely used.
Probability mass function
In general, if the random variable X follows the binomial distribution with
parameters n and p, we write X ~ B(n, p). The probability of getting exactly k
successes in n trials is given by the probability mass function:

P(X = k) = C(n, k) p^k (1 − p)^(n−k)

for k = 0, 1, 2, ..., n, where

C(n, k) = n! / (k!(n − k)!)

is the binomial coefficient, hence the name of the distribution. The formula can
be understood as follows: we want exactly k successes (probability p^k) and
n − k failures (probability (1 − p)^(n−k)). However, the k successes can occur
anywhere among the n trials, and there are C(n, k) different ways of distributing
k successes in a sequence of n trials.
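The pmf is a one-liner given a binomial coefficient. An illustrative sketch (not part of the original notes):

```python
from math import comb

# Binomial pmf: P(X = k) = C(n, k) * p**k * (1 - p)**(n - k)
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 3 heads in 5 tosses of a fair coin
print(binom_pmf(3, 5, 0.5))  # 0.3125
```

Summing binom_pmf over k = 0, ..., n gives 1, since the n + 1 outcomes exhaust all possibilities.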
LECTURE NOTE 32
Definition
A discrete random variable X is said to have a Poisson distribution with
parameter λ > 0, if, for k = 0, 1, 2, …, the probability mass function of X is given
by:[6]

P(X = k) = λ^k e^(−λ) / k!

where
- e is Euler's number (e = 2.71828...)
- k! is the factorial of k.
The positive real number λ is equal to the expected value of X and also to its
variance.
The Poisson distribution can be applied to systems with a large number of
possible events, each of which is rare. How many such events will occur during a
fixed time interval? Under the right circumstances, this is a random number with
a Poisson distribution
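The pmf above is straightforward to evaluate. An illustrative sketch (not part of the original notes; the arrival-rate scenario is hypothetical):

```python
from math import exp, factorial

# Poisson pmf: P(X = k) = lam**k * exp(-lam) / k!
def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

# If events occur at an average rate of lam = 2 per interval,
# the probability of seeing exactly 3 events in one interval is:
print(poisson_pmf(3, 2.0))  # ≈ 0.1804
```

As a sanity check, the probabilities over k = 0, 1, 2, ... sum to 1 (numerically, truncating the tail).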
LECTURE NOTE 33
The hypergeometric distribution applies to sampling without replacement from a
finite population whose elements can be classified into two mutually exclusive
categories like Pass/Fail, Female/Male or Employed/Unemployed. As random
selections are made from the population, each subsequent draw decreases the
population causing the probability of success to change with each draw.
The following conditions characterize the hypergeometric distribution:
- The result of each draw can be classified into one of two categories.
- The probability of a success changes on each draw.
A random variable X follows the hypergeometric distribution if its probability
mass function (pmf) is given by:[1]

P(X = k) = C(K, k) C(N − K, n − k) / C(N, n)

where:
- N is the population size
- K is the number of success states in the population
- n is the number of draws
- k is the number of successes
- C(a, b) is a binomial coefficient
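A short sketch evaluating the pmf (illustrative, not part of the original notes); the card-drawing scenario is a standard example of sampling without replacement:

```python
from math import comb

# Hypergeometric pmf: P(X = k) = C(K, k) * C(N - K, n - k) / C(N, n)
def hypergeom_pmf(k, N, K, n):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Drawing n = 5 cards without replacement from a deck of N = 52
# containing K = 4 aces: probability of exactly k = 2 aces
print(hypergeom_pmf(2, 52, 4, 5))  # ≈ 0.0399
```

Contrast this with the binomial model: sampling with replacement would keep the success probability fixed at 4/52 on every draw, whereas here each draw shrinks the population.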
LECTURE NOTE 34
The simplest case of a normal distribution is known as the standard normal
distribution. This is a special case where μ = 0 and σ = 1, and it is described by this
probability density function:

ϕ(x) = (1/√(2π)) e^(−x²/2).

The factor 1/√(2π) in this expression ensures that the total area under the curve
ϕ(x) is equal to one.[6] The 1/2 in the exponent ensures that the distribution has
unit variance (and therefore also unit standard deviation). This function is
symmetric around x = 0, where it attains its maximum value 1/√(2π), and has
inflection points at x = +1 and x = −1.
Authors may differ also on which normal distribution should be called the
"standard" one. Gauss himself defined the standard normal as having variance σ²
= 1/2, that is

ϕ(x) = (1/√π) e^(−x²).

Stigler[7] goes even further, defining the standard normal with variance σ² =
1/(2π):

ϕ(x) = e^(−πx²).
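The properties claimed for ϕ(x) are easy to verify numerically. An illustrative sketch (not part of the original notes):

```python
from math import exp, pi, sqrt

# Standard normal density: phi(x) = exp(-x**2 / 2) / sqrt(2 * pi)
def phi(x):
    return exp(-x**2 / 2) / sqrt(2 * pi)

print(phi(0))             # maximum value 1/sqrt(2*pi) ≈ 0.3989
print(phi(1) == phi(-1))  # True: the density is symmetric about x = 0
```

A crude Riemann sum of phi over a wide interval (say [−8, 8]) confirms that the total area under the curve is 1 to high accuracy.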
LECTURE NOTE 35
DISTRIBUTION OF SEVERAL RANDOM VARIABLES:
DISCRETE 2 DIMENSIONAL DISTRIBUTION:
CONTINUOUS 2 DIMENSIONAL DISTRIBUTION:
LECTURE NOTE 36
RANDOM SAMPLING:
In statistics, a simple random sample is a subset of individuals (a sample)
chosen from a larger set (a population). Each individual is chosen randomly and
entirely by chance, such that each individual has the same probability of being
chosen at any stage during the sampling process, and each subset of k
individuals has the same probability of being chosen for the sample as any other
subset of k individuals.[1] This process and technique is known as simple random
sampling, and should not be confused with systematic random sampling. A
simple random sample is an unbiased surveying technique.
The sample mean is given by

x̄ = (1/n) Σ xi,

where n is the sample size.
The sample variance is given by

s² = (1/(n − 1)) Σ (xi − x̄)².
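Both statistics can be computed in a few lines. An illustrative sketch (not part of the original notes; the data set is hypothetical), using the usual n − 1 divisor for the sample variance:

```python
# Sample mean and sample variance (with the n - 1 divisor)
def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sample_mean(data))      # 5.0
print(sample_variance(data))  # 32/7 ≈ 4.571
```

Dividing by n − 1 rather than n makes s² an unbiased estimator of the population variance, which matters precisely because a sample is only a subset of the population.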
LECTURE NOTE 37
Estimation of parameters:
MAXIMUM LIKELIHOOD METHOD
Consider a discrete (or continuous) random variable X whose probability function f(x) depends
on a parameter θ. Let us consider n independent sample values x1, ..., xn and form the
likelihood function

L(θ) = f(x1; θ) f(x2; θ) ··· f(xn; θ).

The maximum likelihood estimate of θ is a solution of the equation

∂L/∂θ = 0 (or, equivalently, ∂(ln L)/∂θ = 0),

so that the likelihood of the observed sample is maximised.
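As a concrete illustration (not taken from the notes): for a Poisson distribution with parameter λ, setting d(ln L)/dλ = 0 gives the closed-form maximum likelihood estimate λ̂ = sample mean. The sketch below checks this numerically on a hypothetical sample:

```python
from math import exp, factorial, log

# Log-likelihood of a Poisson sample: ln L(lam) = sum of ln f(x_i; lam)
def log_likelihood(lam, xs):
    return sum(log(lam**x * exp(-lam) / factorial(x)) for x in xs)

xs = [2, 3, 1, 4, 2]          # hypothetical observed counts
lam_hat = sum(xs) / len(xs)   # closed-form MLE: the sample mean, 2.4

# The log-likelihood at lam_hat exceeds that at nearby values, as theory predicts
assert log_likelihood(lam_hat, xs) > log_likelihood(2.0, xs)
assert log_likelihood(lam_hat, xs) > log_likelihood(3.0, xs)
print(lam_hat)  # 2.4
```

Working with ln L instead of L is the standard trick: the product of densities becomes a sum, which is easier to differentiate and numerically better behaved.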
LECTURE NOTE 38
CONFIDENCE INTERVALS
LECTURE NOTE 39
TESTING OF HYPOTHESIS:
Steps for testing
LECTURE NOTE 40
ACCEPTANCE SAMPLING:
LECTURE NOTE 41
CHI-SQUARE METHOD:
STEPS FOR CHI-SQUARE TEST
LECTURE NOTE 42
REGRESSION ANALYSIS:
Steps to find confidence interval for regression coefficient
LECTURE NOTE 43
CORRELATION ANALYSIS:
The correlation coefficient is given by

r = Σ (xi − x̄)(yi − ȳ) / √( Σ (xi − x̄)² · Σ (yi − ȳ)² ).

Steps for test of correlation coefficient
LECTURE NOTE 44
UNIVERSITY QUESTIONS
LECTURE NOTE 45
UNIVERSITY QUESTIONS