
Power method
•It's a single-vector iteration technique.
•This method generates only the dominant eigenpair $(\lambda_1, v_1)$.
•It generates a sequence of vectors $A^k v_0$, where $v_0$ is some non-zero initial vector. When normalized properly, this sequence converges, under reasonably mild conditions, to a dominant eigenvector, i.e. an eigenvector associated with the eigenvalue of largest modulus.
Methodology
Start: Choose a nonzero initial vector $v_0$.
Iterate: for $k = 1, 2, \ldots$ until convergence, compute
$$v_k = \frac{1}{\alpha_k} A v_{k-1},$$
where $\alpha_k$ is the component of the vector $A v_{k-1}$ which has the maximum modulus.
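As a concrete illustration, here is a minimal NumPy sketch of this iteration; the test matrix, starting vector, and tolerance are illustrative choices, not from the slides.

```python
import numpy as np

def power_method(A, v0, tol=1e-10, max_iter=500):
    """Single-vector power iteration, scaling each step by the
    component of A v_{k-1} with the largest modulus (alpha_k)."""
    v = v0.astype(float)
    for _ in range(max_iter):
        w = A @ v
        alpha = w[np.argmax(np.abs(w))]   # component of maximum modulus
        v_new = w / alpha
        if np.linalg.norm(v_new - v) < tol:
            break
        v = v_new
    return alpha, v_new                   # alpha -> lambda_1, v_new -> v_1

A = np.array([[4.0, 1.0], [2.0, 3.0]])         # eigenvalues 5 and 2
print(power_method(A, np.array([1.0, 0.0])))   # ~ (5.0, array([1., 1.]))
```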
General form of the equations
The general form of the equations:
$$Ax = \lambda x$$
$$Ax - \lambda I x = 0$$
$$(A - \lambda I)x = 0$$
$$\left| A - \lambda I \right| = 0$$
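As a worked instance of the last equation, take the illustrative $2 \times 2$ matrix from the sketch above:
$$\left| A - \lambda I \right| = \begin{vmatrix} 4-\lambda & 1 \\ 2 & 3-\lambda \end{vmatrix} = (4-\lambda)(3-\lambda) - 2 = \lambda^2 - 7\lambda + 10 = (\lambda - 5)(\lambda - 2) = 0,$$
so $\lambda_1 = 5$ and $\lambda_2 = 2$, the values the power-method sketches converge to.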
Power Method
The basic computation of the power method is summarized as
$$u_k = \frac{A u_{k-1}}{\left\| A u_{k-1} \right\|}, \qquad \lim_{k \to \infty} u_k = u_1.$$
The equation can be written as:
$$A u_{k-1} \approx \lambda_1 u_{k-1} \quad\Longrightarrow\quad \lambda_1 \approx \frac{\left\| A u_{k-1} \right\|}{\left\| u_{k-1} \right\|}.$$
Shift method
It is possible to obtain another eigenvalue from the set of equations by using a technique known as shifting the matrix. Starting from
$$Ax = \lambda x,$$
subtract $s x$ from each side, thereby changing the maximum eigenvalue:
$$(A - sI)x = (\lambda - s)x.$$
Shift method
The shift $s$ is taken to be the maximum eigenvalue of the matrix $A$, already found by the power method. The matrix is rewritten in the form
$$B = A - \lambda_{\max} I.$$
Use the power method to obtain the largest eigenvalue of $[B]$.
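A minimal sketch of this shifting step, reusing the illustrative `power_method` helper and matrix `A` from the earlier sketch:

```python
import numpy as np
# Reuses power_method and A from the power-method sketch above.
lam_max, _ = power_method(A, np.array([1.0, 0.0]))   # lambda_max = 5
B = A - lam_max * np.eye(A.shape[0])                 # B = A - lambda_max * I
mu, v2 = power_method(B, np.array([1.0, 0.0]))       # dominant eigenvalue of B
print(mu + lam_max, v2)   # undoing the shift recovers the other eigenvalue, 2
```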
Inverse Power Method
The inverse power method is similar to the power method, except that it finds the smallest eigenvalue. It uses the following technique:
$$Ax = \lambda x$$
$$A^{-1} A x = \lambda A^{-1} x$$
$$x = \lambda A^{-1} x$$
$$\frac{1}{\lambda} x = A^{-1} x = B x$$
Inverse Power Method
The algorithm is the same as the power method, and the eigenvector it produces is the eigenvector of the smallest eigenvalue of $A$; the eigenvalue it produces, however, is the largest eigenvalue of $B = A^{-1}$, not the smallest eigenvalue of $A$ itself. To obtain the smallest eigenvalue from the power method, take the reciprocal:
$$\lambda_B = \frac{1}{\lambda} \quad\Longrightarrow\quad \lambda = \frac{1}{\lambda_B}.$$
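A minimal sketch, solving a linear system at each step instead of forming $A^{-1}$ explicitly (a standard refinement, not spelled out on the slide):

```python
import numpy as np

def inverse_power_method(A, v0, tol=1e-12, max_iter=500):
    """Power method applied to B = A^{-1}: each step solves A w = v
    rather than computing the inverse."""
    v = v0 / np.linalg.norm(v0)
    lam_B = 0.0
    for _ in range(max_iter):
        w = np.linalg.solve(A, v)      # w = A^{-1} v = B v
        lam_B_new = v @ w              # estimate of lambda_B = 1/lambda
        v = w / np.linalg.norm(w)
        if abs(lam_B_new - lam_B) < tol:
            break
        lam_B = lam_B_new
    return 1.0 / lam_B_new, v          # reciprocal gives the smallest eigenvalue

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(inverse_power_method(A, np.array([1.0, 0.0]))[0])   # ~ 2.0
```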
Accelerated Power Method
The power method can be accelerated by using the Rayleigh quotient instead of the largest $w_k$ value. With
$$w_1 = A z_1,$$
the Rayleigh quotient is defined as:
$$\lambda_1 = \frac{z' w}{z' z}.$$
Accelerated Power Method
The value of the next $z$ term is defined as:
$$z_2 = \frac{w_1}{\lambda_1}.$$
The power method is adapted to use the new value.
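A minimal sketch of this Rayleigh-quotient acceleration (the convergence test and example matrix are illustrative choices):

```python
import numpy as np

def accelerated_power_method(A, z0, tol=1e-12, max_iter=500):
    """Power iteration whose eigenvalue estimate is the Rayleigh
    quotient z'w / z'z rather than the largest component of w."""
    z = z0.astype(float)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ z                     # w_k = A z_k
        lam_new = (z @ w) / (z @ z)   # Rayleigh quotient estimate
        z = w / lam_new               # z_{k+1} = w_k / lambda_k
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, z / np.linalg.norm(z)

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(accelerated_power_method(A, np.array([1.0, 0.0]))[0])   # ~ 5.0
```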
Why and what's happening in the power method
To apply the power method, our assumptions are:
Assumption 1: $|\lambda_1|$ is strictly greater than $|\lambda_i|$ for $i = 2, 3, \ldots, n$.
Assumption 2: $A$ has $n$ eigenvectors $v_1, v_2, \ldots, v_n$ (where $A v_i = \lambda_i v_i$) which form a basis for $n$-space.
Iterate: $v_k = A^k v_0$ for $k = 1, 2, 3, \ldots$
In view of Assumption 2, $v_0$ can be expressed as
$$v_0 = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n.$$
But $A v_i = \lambda_i v_i$, hence $A^k v_i = \lambda_i^k v_i$ for $k \geq 1$, so in view of Assumption 1,
$$A^k v_0 = \alpha_1 \lambda_1^k v_1 + \alpha_2 \lambda_2^k v_2 + \cdots + \alpha_n \lambda_n^k v_n = \lambda_1^k \left[ \alpha_1 v_1 + \alpha_2 \left( \frac{\lambda_2}{\lambda_1} \right)^k v_2 + \cdots + \alpha_n \left( \frac{\lambda_n}{\lambda_1} \right)^k v_n \right] \approx \alpha_1 \lambda_1^k v_1$$
for large values of $k$, provided that $\alpha_1 \neq 0$.
Since $\alpha_1 \lambda_1^k v_1$ is a scalar multiple of $v_1$, $v_k = A^k v_0$ will approach an eigenvector for the dominant eigenvalue $\lambda_1$ (i.e., $A v_k \approx \lambda_1 v_k$).
So if $v_k$ is scaled so that its dominant component is 1, then
$$(\text{dominant component of } A v_k) \approx \lambda_1 \cdot (\text{dominant component of } v_k) = \lambda_1.$$
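A small numeric check of this argument: the scaled iterates approach $v_1$ while the contributions of the other eigenvectors die out like $(\lambda_2/\lambda_1)^k$ (the matrix and starting vector are illustrative).

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # lambda_1 = 5, lambda_2 = 2
v = np.array([1.0, 0.0])                 # alpha_1 != 0 for this choice of v_0
for k in range(1, 9):
    v = A @ v                            # v_k = A^k v_0
    u = v / v[np.argmax(np.abs(v))]      # scale the dominant component to 1
    print(k, u)                          # approaches v_1 = (1, 1) at rate (2/5)^k
```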
Shifted Power method
Instead of iterating with $A$, iterate with $B = A - \sigma I$, where the scalar $\sigma$ may be any value.
The scalars $\sigma$ are called shifts of origin.
$(\lambda, v)$ is an eigenpair for $A$ $\iff$ $(\lambda - \sigma, v)$ is an eigenpair for $A - \sigma I$.
Advantages:
1. Fewer iterations are needed for convergence.
2. Shifting does not alter the eigenvectors, but the eigenvalues are shifted by $\sigma$.

Eigenvalues of $A$: $\quad \lambda_1, \; \lambda_2, \; \lambda_3, \; \lambda_4, \; \lambda_5, \; \ldots, \; \lambda_n$
Eigenvalues of $A - \sigma I$ ($\sigma \neq 0$): $\quad (\lambda_1 - \sigma), \; (\lambda_2 - \sigma), \; (\lambda_3 - \sigma), \; (\lambda_4 - \sigma), \; (\lambda_5 - \sigma), \; \ldots, \; (\lambda_n - \sigma)$
Inverse Power Method and Shifted Inverse Power Method
The basic idea is that
$(\lambda, v)$ is an eigenpair of $A$ $\iff$ $\left( \frac{1}{\lambda}, v \right)$ is an eigenpair of $A^{-1}$.
So the dominant eigenvalue of $A^{-1}$ is the reciprocal of the least dominant eigenvalue of $A$.
If $\lambda_n \neq 0$ and $|\lambda_n| < |\lambda_i|$ for $i \neq n$, we can find $\frac{1}{\lambda_n}$ and an associated $v_n$ by applying the same iterative method.
If $A$ is singular, we shall be unable to find $A^{-1}$, indicating $\lambda_n = 0$.
Advantages:
1. It yields the least dominant eigenpair of $A$.
2. Faster convergence rate.
Shifted inverse power method:
The same mechanism as in the shifted power method is followed; the only difference is that we achieve faster convergence rates in comparison.
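A minimal sketch of the shifted inverse iteration: applying the inverse power step to $A - \sigma I$ makes the eigenvalue of $A$ nearest the shift $\sigma$ dominate (the shift value and matrix are illustrative choices).

```python
import numpy as np

def shifted_inverse_power(A, sigma, v0, tol=1e-12, max_iter=200):
    """Inverse power method on (A - sigma*I): converges to the
    eigenpair of A whose eigenvalue lies closest to sigma."""
    M = A - sigma * np.eye(A.shape[0])
    v = v0 / np.linalg.norm(v0)
    mu = 0.0
    for _ in range(max_iter):
        w = np.linalg.solve(M, v)     # w = (A - sigma*I)^{-1} v
        mu_new = v @ w                # estimates 1 / (lambda - sigma)
        v = w / np.linalg.norm(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return sigma + 1.0 / mu_new, v    # recover lambda from the shift

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(shifted_inverse_power(A, 1.8, np.array([1.0, 0.0]))[0])   # ~ 2.0
```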
Rayleigh Quotient
The basic idea is to keep changing the shifts so that faster convergence occurs.
Start: Choose an initial vector $v_0$ such that $\|v_0\|_2 = 1$.
Iterate: for $k = 1, 2, \ldots$, until convergence, compute
$$\lambda_k = \langle A v_{k-1}, v_{k-1} \rangle, \qquad v_k = \frac{1}{\alpha_k} (A - \lambda_k I)^{-1} v_{k-1},$$
where $\alpha_k$ is chosen so that the 2-norm of the vector $v_k$ is one.
According to Rayleigh, if we know any eigenvector in the system, we can calculate its eigenvalue. Suppose $v_j$ is an eigenvector we know in the system. The basic formulation is $A v_j = \lambda_j v_j$; pre-multiplying with $v_j^T$ gives $v_j^T A v_j = \lambda_j v_j^T v_j$, from which we can find the corresponding eigenvalue.
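A minimal sketch of this shift-updating iteration; the guard against an exactly singular solve is an added practical safeguard, not from the slides. Which eigenpair it locks onto depends on the initial vector.

```python
import numpy as np

def rayleigh_quotient_iteration(A, v0, tol=1e-12, max_iter=50):
    """Inverse iteration whose shift is recomputed each step from the
    Rayleigh quotient; convergence is typically very fast."""
    n = A.shape[0]
    v = v0 / np.linalg.norm(v0)
    lam = v @ (A @ v)                        # lambda_k = <A v, v>
    for _ in range(max_iter):
        try:
            w = np.linalg.solve(A - lam * np.eye(n), v)
        except np.linalg.LinAlgError:        # shift hit an eigenvalue exactly
            break
        v = w / np.linalg.norm(w)            # alpha_k keeps ||v_k||_2 = 1
        lam_new = v @ (A @ v)
        if abs(lam_new - lam) < tol:
            return lam_new, v
        lam = lam_new
    return lam, v

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(rayleigh_quotient_iteration(A, np.array([1.0, 0.0]))[0])
```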
Deflation Techniques
Definition: Manipulate the system.
After finding the largest eigenvalue in the system, displace it in such a way that the next-largest value becomes the largest value in the system, and apply the power method.
Wielandt deflation technique
It's a single-vector deflation technique.
According to Wielandt, the deflated matrix is of the form
$$A_1 = A - \sigma u_1 v^H,$$
where $v$ is an arbitrary vector such that $v^H u_1 = 1$ and $\sigma$ is an appropriate shift. The eigenvalues of $A_1$ are those of $A$, except that the eigenvalue $\lambda_1$ is transformed to $\lambda_1 - \sigma$. Basically, the spectrum of $A_1$ would be
$$\sigma(A_1) = \{\lambda_1 - \sigma, \lambda_2, \lambda_3, \ldots, \lambda_p\}.$$
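A minimal sketch of one Wielandt deflation step; the choices $v = u_1$ (normalized so $v^H u_1 = 1$) and $\sigma = \lambda_1$, which sends the dominant eigenvalue to zero, are illustrative, as is the use of `numpy.linalg.eig` in place of the power method for brevity.

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2

# Dominant eigenpair of A (numpy used here for brevity).
lams, vecs = np.linalg.eig(A)
i = np.argmax(np.abs(lams))
lam1, u1 = lams[i].real, vecs[:, i].real
u1 /= np.linalg.norm(u1)                 # now u1^H u1 = 1

# Wielandt deflation A1 = A - sigma * u1 v^H with v = u1, sigma = lam1.
A1 = A - lam1 * np.outer(u1, u1)
print(np.linalg.eigvals(A1))             # spectrum {lam1 - sigma, lam2} = {0, 2}
```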
Deflation with several vectors:
It uses the Schur decomposition.
The basic idea is that if we know one vector of 2-norm one, it can be completed by $(n-1)$ additional vectors to form an orthonormal basis of $\mathbb{C}^n$. That is achieved by writing the matrix in Schur form.
Let $q_1, q_2, q_3, \ldots, q_j$ be a set of Schur vectors associated with the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_j$, such that $Q_j = [q_1, q_2, q_3, \ldots, q_j]$ is an orthonormal matrix whose columns form a basis of the eigenspace associated with the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_j$.
So the generalization would be:
Let $\Sigma_j = \mathrm{Diag}(\sigma_1, \sigma_2, \ldots, \sigma_j)$. Then the eigenvalues of the matrix
$$A_j = A - Q_j \Sigma_j Q_j^H$$
are $\tilde{\lambda}_i = \lambda_i - \sigma_i$ for $i \leq j$ and $\tilde{\lambda}_i = \lambda_i$ for $i > j$.
Schur-Wielandt Deflation
For $i = 0, 1, 2, \ldots, j-1$:
1. Define $A_i = A_{i-1} - \sigma_{i-1} q_{i-1} q_{i-1}^H$ (initially define $A_0 = A$) and compute the dominant eigenvalue $\tilde{\lambda}_i$ of $A_i$ and the corresponding eigenvector $u_i$.
2. Orthonormalize $u_i$ against $q_1, q_2, \ldots, q_{i-1}$ to get the vector $q_i$.
Practical deflation procedure:
The more general ways to compute $\lambda_2, u_2$ are:
1. $v = w_1$, the left eigenvector. The disadvantage is that this requires both the left and right eigenvectors, but on the other hand the right and left eigenvectors of $A_1$ are preserved.
2. $v = u_1$, which is often nearly optimal and preserves the Schur vectors.
3. Using a block of Schur vectors instead of a single vector.
Usually the steps to be followed are:
1) get the vector $y = Ax$;
2) get the scalar $t = v^H x$;
3) compute $y = y - \sigma t u_1$.
The above procedure requires only that the vectors $u_1$ and $v$ be kept in memory along with the matrix $A$. We deflate $A_1$ again into $A_2$ and then into $A_3$, etc. At each step of the process we have $A_i = A_{i-1} - \sigma_i u_i v_i^H$.
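A minimal sketch of this matrix-free product with the deflated matrix (the function name is illustrative):

```python
import numpy as np

def deflated_matvec(A, u1, v, sigma, x):
    """Apply A1 = A - sigma * u1 v^H to x without ever forming A1."""
    y = A @ x                   # 1) y = A x
    t = np.vdot(v, x)           # 2) t = v^H x
    return y - sigma * t * u1   # 3) y = y - sigma * t * u1
```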
Projection methods
Suppose the matrix $A$ is real but its dominant eigenvalues are complex, and consider the power method in that situation. Although the usual sequence
$$x_{j+1} = \beta_j A x_j,$$
where $\beta_j$ is a normalizing factor, does not converge, a simple analysis shows that the subspace spanned by the last two iterates $x_{j+1}, x_j$ will contain converging approximations to the complex pair of eigenvectors. A simple projection technique onto those vectors will extract the eigenvalues and eigenvectors.
Approximate the exact eigenvector $u$ by a vector $\tilde{u}$ belonging to some subspace $K$ of approximants, by imposing the Petrov-Galerkin condition that the residual vector of $\tilde{u}$ be orthogonal to some subspace $L$.
In an orthogonal projection technique, the subspace $L$ is the same as $K$. In an oblique projection technique there is no such relation.
Orthogonal projection methods
Let $A$ be $n \times n$ and let $K$ be a subspace of dimension $m$.
The eigenvalue problem: find $u \in \mathbb{C}^n$ and $\lambda \in \mathbb{C}$ such that $Au = \lambda u$.
In an orthogonal projection technique onto the subspace $K$, we take approximations $\tilde{u}$ and $\tilde{\lambda}$, with $\tilde{\lambda} \in \mathbb{C}$ and $\tilde{u} \in K$, such that $A\tilde{u} - \tilde{\lambda}\tilde{u} \perp K$, that is,
$$\langle A\tilde{u} - \tilde{\lambda}\tilde{u}, v \rangle = 0 \quad \forall v \in K.$$
Assume some orthonormal basis $\{v_1, v_2, \ldots, v_m\}$ of $K$ is available, and denote by $V$ the matrix with column vectors $v_1, v_2, \ldots, v_m$. Then we can solve the approximate problem numerically by translating it into this basis. Letting $\tilde{u} = Vy$, our equation becomes
$$\langle AVy - \tilde{\lambda}Vy, v_j \rangle = 0, \quad j = 1, 2, \ldots, m.$$
Therefore $y$ and $\tilde{\lambda}$ must satisfy
$$B_m y = \tilde{\lambda} y \quad \text{with} \quad B_m = V^H A V.$$
If we denote by $A_m$ the rank-$m$ linear transformation $A_m = P_K A P_K$, then we observe that the restriction of this operator to the subspace $K$ is represented by the matrix $B_m$ with respect to the basis $V$.
What exactly is happening in orthogonal projection
Suppose $v$ is the guess vector, and take two successive iterates of the power method.
Form $X = [v \mid Av]$: $v$ is $n \times 1$ and $Av$ is of course $n \times 1$, so $X$ is an $n \times 2$ matrix.
Apply the Gram-Schmidt process to QR-factorize $X$, giving $Q = [q_1 \mid q_2]$, where $q_1 = v / \|v\|$ and $q_2$ is $Av$ orthogonalized against $q_1$ and normalized; now $Q$ is perfectly orthonormal.
$q_1, q_2$ form an orthonormal basis spanning the last two iterates, and these approach the corresponding eigenvectors as the iteration converges.
Since the columns of $Q_k$ are orthonormal, the projection of a vector onto the subspace spanned by the columns of $Q_k$ is given by $Q_k Q_k^T$. Since the space spanned by the columns of $V_{k+1} = A Q_k$ is nearly equal to the space spanned by $Q_k$, it follows that
$$Q_k Q_k^T V_{k+1} \approx V_{k+1} = A Q_k.$$
Replacing $Q_k^T V_{k+1}$ by its diagonalization $P \Lambda P^{-1}$, we have $Q_k P \Lambda P^{-1} \approx A Q_k$; post-multiplying with $P$ yields
$$Q_k P \Lambda \approx A Q_k P.$$
The diagonal elements of $\Lambda$ approximate the dominant eigenvalues, and $Q_k P$ approximates the eigenvectors.
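A minimal sketch of this two-vector projection, extracting a complex-conjugate eigenpair of a real matrix; the block-diagonal test matrix, with dominant eigenvalues $1 \pm 2i$, is an illustrative choice.

```python
import numpy as np

# Real matrix whose dominant eigenvalues are the complex pair 1 +/- 2i.
A = np.array([[ 1.0, 2.0, 0.0],
              [-2.0, 1.0, 0.0],
              [ 0.0, 0.0, 0.5]])

v = np.array([1.0, 0.3, 1.0])
for _ in range(30):                  # power iterations: they never settle,
    v = A @ v                        # but the last two iterates span the
    v /= np.linalg.norm(v)           # dominant complex eigenplane

X = np.column_stack([v, A @ v])      # X = [v | Av], an n x 2 matrix
Q, _ = np.linalg.qr(X)               # Gram-Schmidt / QR: Q = [q1 | q2]
B = Q.T @ (A @ Q)                    # 2 x 2 projected matrix Q^T A Q
lam, P = np.linalg.eig(B)            # eigenvalues approximate 1 +/- 2i
print(lam)
print(Q @ P)                         # Q P approximates the eigenvectors
```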
Rayleigh-Ritz procedure
Step 1: Compute an orthonormal basis $\{v_i\}_{i=1,\ldots,m}$ of the subspace $K$. Let $V = [v_1, v_2, \ldots, v_m]$.
Step 2: Compute $B_m = V^H A V$.
Step 3: Compute the eigenvalues of $B_m$ and select the $k$ desired ones $\tilde{\lambda}_i$, $i = 1, 2, \ldots, k$, where $k \leq m$.
Step 4: Compute the eigenvectors $y_i$, $i = 1, 2, \ldots, k$, of $B_m$ associated with $\tilde{\lambda}_i$, and the corresponding approximate eigenvectors of $A$, $\tilde{u}_i = V y_i$, $i = 1, 2, \ldots, k$.
The Rayleigh-Ritz procedure rests on the following fact:
If $K$ is invariant under $A$, then every approximate eigenvalue/eigenvector pair obtained from the orthogonal projection method onto $K$ is exact.
$K$ is invariant under $A$ $\iff$ there exists an orthogonal basis $Q$ of $K$ and an $m \times m$ matrix $C$ such that $AQ = QC$. Every eigenpair $(\lambda, y)$ of $C$ is such that $(\lambda, Qy)$ is an eigenpair of $A$, since $AQy = QCy = \lambda Qy$. So as long as $Q$ is fixed, $\lambda$ and $y$ give an eigenpair.
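A minimal sketch of the four steps; the subspace built from a few power iterates, the random test matrix, and the largest-modulus selection rule are illustrative choices.

```python
import numpy as np

def rayleigh_ritz(A, X, k):
    """Steps 1-4: orthonormalize X, project A, take k Ritz pairs of
    largest modulus, and lift the Ritz vectors back to n-space."""
    V, _ = np.linalg.qr(X)                  # step 1: orthonormal basis of K
    Bm = V.conj().T @ (A @ V)               # step 2: B_m = V^H A V
    lams, Y = np.linalg.eig(Bm)             # step 3: eigenpairs of B_m...
    idx = np.argsort(-np.abs(lams))[:k]     #   ...select the k desired ones
    return lams[idx], V @ Y[:, idx]         # step 4: u_i = V y_i

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
x = rng.standard_normal(6)
X = np.column_stack([x, A @ x, A @ (A @ x)])    # m = 3 power/Krylov vectors
ritz_vals, ritz_vecs = rayleigh_ritz(A, X, k=2)
print(ritz_vals)    # approximations to the dominant eigenvalues of A
```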
Any questions???