
LU Decomposition
1.Introduction
We have seen how to construct the Y-bus
used in the matrix equation
I = Y V        (1)
If we are given the bus voltages, we can
construct Y and then very easily find I.
Unfortunately, this is not generally what we
know. Rather, we typically know I (because
we know generation and load), and then we
must find V. This is easy enough, using
matrix inverses, i.e.,
1
V Y I
(2)
However, there is a very practical problem
with this. A model of the eastern US interconnection can have 50,000 buses, which means that the Y-bus for this model is a 50,000×50,000 matrix. Direct matrix inversion at this dimensionality is computational suicide.
Fortunately, there is an alternative to direct
matrix inversion. We will never actually get
the inverse, but we will solve for V given I
in eq. (1).
The method that allows us to do this is
called LU decomposition. It is actually a
very widely known and used method in
many different disciplines.
In fact, using it to solve eq. (1) is not the
most common application in power systems.
A much more common application of LU
decomposition is in the numerical, iterative
algorithm used to solve the power flow
problem. But we will come to this later. For
now, let’s learn LU-decomposition on the
generic problem A x=b, motivated by the
specific application Y V=I.
If you have taken a linear algebra course
(from Math), this is familiar to you. If you
have not taken such a course, you should.
2.Forward-backward substitution
Consider that you are able to obtain A (or Y)
as the product of two special matrices, i.e.,
A  LU
(3)
that satisfy the following:
• Both L and U are square matrices of the same dimension as A.
• U is an upper-triangular matrix, meaning that all elements below the diagonal are 0.
• L is a lower-triangular matrix, meaning that all elements above the diagonal are 0.
• All diagonal elements of U are 1.
So, for a 3×3 case, we would have:

    [ a11  a12  a13 ]   [ l11   0    0  ] [ 1   u12  u13 ]
A = [ a21  a22  a23 ] = [ l21  l22   0  ] [ 0    1   u23 ] = L U        (4)
    [ a31  a32  a33 ]   [ l31  l32  l33 ] [ 0    0    1  ]
For the moment, let’s not worry about how
to obtain L and U. Rather, let’s think about
what we can do if we get them.
What we know at this point is that A = L U and A x = b. Therefore,

L U x = b        (5)

Define:

w = U x        (6)

Substitution of (6) into (5) results in

L w = b        (7)
The situation is the following. We want to
find x. It appears that (6) and (7) are not
very helpful, because solving them for x and
w, respectively, will require an inverse. But
let’s take a closer look at eqs. (6) and (7), in
terms of the fully expressed matrix relations
and see if we can get x and w without matrix
inversion. If so, then our procedure will be
to use (7) to find w and then (6) to find x.
    [ w1 ]   [ 1   u12  u13 ] [ x1 ]
w = [ w2 ] = [ 0    1   u23 ] [ x2 ] = U x        (8)
    [ w3 ]   [ 0    0    1  ] [ x3 ]

      [ l11   0    0  ] [ w1 ]   [ b1 ]
L w = [ l21  l22   0  ] [ w2 ] = [ b2 ] = b        (9)
      [ l31  l32  l33 ] [ w3 ]   [ b3 ]
Observe that
• Equation (9) can be solved for w without inverting L, and
• Equation (8) could then be solved for x without inverting U.
Let’s start with (9). We see that we can
begin with the row 1 equation and proceed
as follows:
l11 w1 = b1  =>  w1 = b1 / l11        (10)

l21 w1 + l22 w2 = b2  =>  w2 = (b2 - l21 w1) / l22        (11)

l31 w1 + l32 w2 + l33 w3 = b3  =>  w3 = (b3 - l31 w1 - l32 w2) / l33        (12)
This procedure is called forward substitution. Consideration of the pattern of calculation introduced by eqs. (10)-(12) suggests a generalized formula for forward substitution, useful for computer programming, as follows:

w_k = ( b_k - Σ_{j=1}^{k-1} l_kj w_j ) / l_kk        (13)
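As an illustration of eq. (13), here is a minimal Python sketch of forward substitution. It is not part of the notes; the function name forward_substitution and the use of NumPy arrays (with 0-based indexing) are my own choices.

import numpy as np

def forward_substitution(L, b):
    """Solve L w = b for w, with L lower-triangular (eq. 13)."""
    n = len(b)
    w = np.zeros(n)
    for k in range(n):
        # subtract the contributions of the already-computed w_j, j < k, then divide by l_kk
        w[k] = (b[k] - L[k, :k] @ w[:k]) / L[k, k]
    return w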
Now that we have w, we can use (8) to find
x. We see that we can begin with the row 3
equation and proceed as follows:
x3 = w3        (14)

x2 + u23 x3 = w2  =>  x2 = w2 - u23 x3        (15)

x1 + u12 x2 + u13 x3 = w1  =>  x1 = w1 - u12 x2 - u13 x3        (16)
This procedure is called backward substitution. Consideration of the pattern of calculation introduced by eqs. (14)-(16) suggests a generalized formula for backward substitution, useful for computer programming, as follows:

x_k = w_k - Σ_{j=k+1}^{n} u_kj x_j        (17)

where n is the dimension of the matrix.
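Similarly, here is a minimal Python sketch of eq. (17); note that no division is needed because U has a unit diagonal in this convention. The name backward_substitution is my own.

import numpy as np

def backward_substitution(U, w):
    """Solve U x = w for x, with U upper-triangular and unit diagonal (eq. 17)."""
    n = len(w)
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        # subtract the contributions of the already-computed x_j, j > k
        x[k] = w[k] - U[k, k+1:] @ x[k+1:]
    return x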
3.Factorization using Crout algorithm
So we see that if we have L and U, we can solve A x = b for x. The natural question at this point is: how do we find L and U?
The method of finding L and U from A is called the LU factorization of A, otherwise known as the LU decomposition of A. You will enjoy factorization.
To motivate it, let's first look back at eq. (4).

    [ a11  a12  a13 ]   [ l11   0    0  ] [ 1   u12  u13 ]
A = [ a21  a22  a23 ] = [ l21  l22   0  ] [ 0    1   u23 ] = L U        (4)
    [ a31  a32  a33 ]   [ l31  l32  l33 ] [ 0    0    1  ]
Based on eq. (4), we see that the following sequence of calculations may be performed.
• Row 1 of A and L, across columns of U:
a11 = l11
a12 = l11 u12  =>  u12 = a12 / l11
a13 = l11 u13  =>  u13 = a13 / l11
• Row 2 of A and L, across columns of U:
a21 = l21
a22 = l21 u12 + l22  =>  l22 = a22 - l21 u12
a23 = l21 u13 + l22 u23  =>  u23 = (a23 - l21 u13) / l22
• Row 3 of A and L, across columns of U:
a31 = l31
a32 = l31 u12 + l32  =>  l32 = a32 - l31 u12
a33 = l31 u13 + l32 u23 + l33  =>  l33 = a33 - l31 u13 - l32 u23
The above is convincing evidence that we
will be able to perform the desired
factorization. Although we have done so for
only a 3×3 case, the procedure would also
work for a matrix of any dimension.
If you study closely the pattern of
calculation, you can convince yourself that
the following generalized formula can be
used in computer programming [1].
j 1
l ij  a ij   l is usj
s 1
(18)
i 1
uij 
a ij   l is usj
s 1
(19)
l ii
Use of eqs. (18,19) comprise what is known
as the Crout algorithm.
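Here is a sketch of how eqs. (18)-(19) might be coded, following the row-by-row pattern above. This is an illustrative implementation only (no pivoting, and it assumes every l_ii turns out nonzero); it is not taken from [1], and the function name crout_lu is my own.

import numpy as np

def crout_lu(A):
    """Crout factorization A = L U with unit diagonal on U (eqs. 18-19)."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for i in range(n):
        # eq. (18): row i of L, columns j <= i
        for j in range(i + 1):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # eq. (19): row i of U, columns j > i
        for j in range(i + 1, n):
            U[i, j] = (A[i, j] - L[i, :i] @ U[:i, j]) / L[i, i]
    return L, U

Used together with the forward and backward substitution sketches above, this solves A x = b without ever forming an inverse.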
4.Method of Bergen & Vittal (Doolittle)
Your text uses a factorization where the L matrix diagonal elements are 1's and the U matrix diagonal elements are not necessarily 1, as indicated in eq. (4a).

    [ a11  a12  a13 ]   [  1    0    0 ] [ u11  u12  u13 ]
A = [ a21  a22  a23 ] = [ l21   1    0 ] [  0   u22  u23 ] = L U        (4a)
    [ a31  a32  a33 ]   [ l31  l32   1 ] [  0    0   u33 ]
We can show how to obtain L and U in a fashion similar to that of Section 3.0.
• Row 1 of A and L, across columns of U:
a11 = u11
a12 = u12
a13 = u13
• Row 2 of A and L, across columns of U:
a21 = l21 u11  =>  l21 = a21 / u11
a22 = l21 u12 + u22  =>  u22 = a22 - l21 u12
a23 = l21 u13 + u23  =>  u23 = a23 - l21 u13
• Row 3 of A and L, across columns of U:
a31 = l31 u11  =>  l31 = a31 / u11
a32 = l31 u12 + l32 u22  =>  l32 = (a32 - l31 u12) / u22
a33 = l31 u13 + l32 u23 + u33  =>  u33 = a33 - l31 u13 - l32 u23
5.Factorization with Gaussian elimination
Trick: Augment the A matrix before you begin the algorithm below by adding the vector b as the (n+1)th column. Then, when you finish the algorithm, you will have the vector w in the (n+1)th column.
The algorithm is as follows:
1. Perform Gaussian elimination on A. Let i=1. In each repetition below, row i is the pivot row and a_ii is the pivot.
   a. l_ji = a_ji for j = i, ..., n.
   b. Divide row i by a_ii.
   c. If [i = n, go to 2] else [go to d].
   d. Eliminate all a_ji, j = i+1, ..., n. This means to make all elements directly beneath the pivot equal to 0 by adding an appropriate multiple of the pivot row to each row beneath the pivot.
   e. i = i+1, go to a.
2. The matrix U is what remains.
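Here is a sketch of steps 1-2 applied to the augmented matrix [A | b]; it returns L, U, and w, after which only backward substitution is needed. The function and variable names are mine, and no row exchanges are performed, so all pivots are assumed nonzero.

import numpy as np

def lu_by_gaussian_elimination(A, b):
    """Reduce [A | b] to [U | w] by Gaussian elimination, recording L along the way."""
    n = A.shape[0]
    Ab = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    L = np.zeros((n, n))
    for i in range(n):
        # step a: column i of L is the current pivot column, rows i..n
        L[i:, i] = Ab[i:, i]
        # step b: divide the pivot row by the pivot
        Ab[i, :] /= Ab[i, i]
        # step d: zero the entries directly beneath the pivot
        for j in range(i + 1, n):
            Ab[j, :] -= Ab[j, i] * Ab[i, :]
    return L, Ab[:, :n], Ab[:, n]   # L, U (unit diagonal), w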
Example: Use LU decomposition to solve
for x in the below system of equations.
3 x1 + 3 x2 + 6 x3 + 9 x4 = 1
  x1 + 3 x2 + 4 x3 +   x4 = 3
  x1 + 2 x2 + 5 x3 + 6 x4 = 2
  x1        -   x3 + 3 x4 = 1
In matrix form, the above is:
3
1

1

1
9  x1  1 
x   

3 4 1   2   3

2 5 6   x 3   2
   
0  1 3  x 4   1 
3
6
12
Form the augmented matrix.
3
1

1

1
9 1
3 4 1 3
2 5 6 2

0  1 3 1
3
6
Now perform the algorithm.
3
1

1

1
9 1
3
1
3 4 1 3

1
2 5 6 2
 ; L= 
0  1 3 1
1
3
6
0
0
_
_
0
_
_
_
0
0 
0

_
Divide first row by 3 and then add multiples
of it to remaining rows so that first element
in remaining rows gets zeroed.
[ 1   1   2   3 | 1/3 ]        [ 3   0   0   0 ]
[ 0   2   2  -2 | 8/3 ]        [ 1   2   0   0 ]
[ 0   1   3   3 | 5/3 ]  ; L = [ 1   1   _   0 ]
[ 0  -1  -3   0 | 2/3 ]        [ 1  -1   _   _ ]
Divide second row by 2; then add multiples
of it to remaining rows so that second
element in remaining rows gets zeroed.
1
0

0

0
1 / 3
1 1  1 4 / 3
0 2
4 1 / 3 ;

0  2  1 6 / 3
1
2
3
L=
0
3 0
1 2
0

1 1
2

1  1  2
0
0 
0

_
Divide third row by 2 and then add multiples
of it to remaining rows so that third element
in remaining rows gets zeroed.
1
0

0

0
1 / 3
1 1  1 4 / 3
0 1 2 1 / 6 ;

0 0 3 7 / 3
1 2
3
L=
0
3 0
1 2
0

1 1
2

1  1  2
0
0
0

3
Divide the fourth row by 3.
1
0

0

0
1/ 3 
1 1  1 4 / 3
 U

0 1 2 1/ 6

0 0 1 7 / 9
1 2
3
| w
 w Ux
Use backward substitution to get x. This method eliminates the forward substitution step.
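As a check on the example, here is a small self-contained NumPy snippet that performs the backward substitution on the [U | w] obtained above and verifies that the result indeed solves A x = b. It is only a numerical sanity check, not part of the original notes.

import numpy as np

A = np.array([[3., 3., 6., 9.],
              [1., 3., 4., 1.],
              [1., 2., 5., 6.],
              [1., 0., -1., 3.]])
b = np.array([1., 3., 2., 1.])

# [U | w] as obtained at the end of the elimination above
U = np.array([[1., 1., 2., 3.],
              [0., 1., 1., -1.],
              [0., 0., 1., 2.],
              [0., 0., 0., 1.]])
w = np.array([1/3, 4/3, 1/6, 7/9])

# backward substitution on U x = w
x = np.zeros(4)
for k in range(3, -1, -1):
    x[k] = w[k] - U[k, k+1:] @ x[k+1:]

print(x)                       # the solution of the example system
print(np.allclose(A @ x, b))   # should print True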
[1] Vlach and Singhal, “Computer Methods for Circuit Analysis and
Design,” 2nd edition, 1994.