Introduction to Krylov Subspace Methods

We want to solve the linear system

    $Ax = b$,   $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^{n}$.

DEF: The Krylov sequence generated by $A$ and $b$ is

    $b,\ Ab,\ A^2 b,\ A^3 b,\ \dots$

Example: For

    $A = \begin{pmatrix} 10 & -1 & 2 & 0 \\ -1 & 11 & -1 & 3 \\ 2 & -1 & 10 & -1 \\ 0 & 3 & -1 & 8 \end{pmatrix}$,
    $b = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}$,

the Krylov sequence $b,\ Ab,\ A^2 b,\ A^3 b,\ A^4 b$ is

    $\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix},\
     \begin{pmatrix} 11 \\ 12 \\ 10 \\ 10 \end{pmatrix},\
     \begin{pmatrix} 118 \\ 141 \\ 100 \\ 106 \end{pmatrix},\
     \begin{pmatrix} 1239 \\ 1651 \\ 989 \\ 1171 \end{pmatrix},\
     \begin{pmatrix} 12717 \\ 19446 \\ 9546 \\ 13332 \end{pmatrix}$.
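A minimal sketch in Python/NumPy (not part of the original slides) that reproduces the Krylov sequence above by repeated matrix-vector products, using the matrix $A$ and starting vector $b$ from the example:

    import numpy as np

    A = np.array([[10., -1.,  2.,  0.],
                  [-1., 11., -1.,  3.],
                  [ 2., -1., 10., -1.],
                  [ 0.,  3., -1.,  8.]])
    b = np.ones(4)

    # print b, Ab, A^2 b, A^3 b, A^4 b using repeated matrix-vector products
    v = b.copy()
    for k in range(5):
        print(f"A^{k} b =", v)
        v = A @ v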
Introduction to Krylov Subspace Methods

DEF: The Krylov subspace of dimension $m$ generated by $A$ and $b$ is

    $\mathcal{K}_m(A, b) = \operatorname{span}\{ b,\ Ab,\ \dots,\ A^{m-1}b \}$.

Example: For the matrix $A$ and vector $b$ above,

    $\mathcal{K}_3(A, b) = \operatorname{span}\left\{
       \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix},\
       \begin{pmatrix} 11 \\ 12 \\ 10 \\ 10 \end{pmatrix},\
       \begin{pmatrix} 118 \\ 141 \\ 100 \\ 106 \end{pmatrix} \right\}$.
Introduction to Krylov Subspace Methods

DEF: The Krylov matrix is the matrix whose columns are the Krylov vectors,

    $K_m(A, b) = \big[\ b,\ Ab,\ \dots,\ A^{m-1}b\ \big]$.

Remark: $A^3 b = A(A(Ab))$; each new Krylov vector is obtained from the previous one by a single matrix-vector product.

Example: For the matrix $A$ and vector $b$ above,

    $K_3(A, b) = \begin{pmatrix} 1 & 11 & 118 \\ 1 & 12 & 141 \\ 1 & 10 & 100 \\ 1 & 10 & 106 \end{pmatrix}$.
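A small Python/NumPy helper (a sketch added here, not from the slides) that builds the Krylov matrix $K_m(A,b)$ column by column, using only matrix-vector products as in the remark:

    import numpy as np

    def krylov_matrix(A, b, m):
        # K_m(A, b) = [b, Ab, ..., A^(m-1) b]; each column comes from the
        # previous one by one matrix-vector product (A^3 b = A(A(Ab)), etc.)
        K = np.empty((b.shape[0], m))
        K[:, 0] = b
        for j in range(1, m):
            K[:, j] = A @ K[:, j - 1]
        return K

    A = np.array([[10., -1.,  2.,  0.],
                  [-1., 11., -1.,  3.],
                  [ 2., -1., 10., -1.],
                  [ 0.,  3., -1.,  8.]])
    print(krylov_matrix(A, np.ones(4), 3))   # columns: b, Ab, A^2 b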
Conjugate Gradient Method

We want to solve the linear system

    $Ax = b$,   $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^{n}$,

where $A$ is SPD (symmetric positive definite):

    $x^T A x > 0$ for all $x \neq 0$.

CG algorithm:

    $r_0 = b - A x_0$
    $p_0 = r_0$
    for $k = 0, 1, 2, \dots$
        $\alpha_k = r_k^T r_k \,/\, p_k^T A p_k$
        $x_{k+1} = x_k + \alpha_k p_k$
        $r_{k+1} = r_k - \alpha_k A p_k$
        $\beta_{k+1} = r_{k+1}^T r_{k+1} \,/\, r_k^T r_k$
        $p_{k+1} = r_{k+1} + \beta_{k+1} p_k$
    end
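The loop above translates directly into code. The following Python/NumPy sketch (the function name, tolerance, and iteration cap are choices made here, not part of the slides) implements exactly these recurrences and stops once the residual norm is small:

    import numpy as np

    def conjugate_gradient(A, b, x0=None, tol=1e-10, maxit=1000):
        # CG for Ax = b with A symmetric positive definite, following the
        # recurrences above; stops when ||r_k|| < tol or after maxit steps
        x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float).copy()
        r = b - A @ x
        p = r.copy()
        rs_old = r @ r
        for k in range(maxit):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)     # alpha_k = r_k^T r_k / p_k^T A p_k
            x = x + alpha * p             # x_{k+1} = x_k + alpha_k p_k
            r = r - alpha * Ap            # r_{k+1} = r_k - alpha_k A p_k
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            beta = rs_new / rs_old        # beta_{k+1} = r_{k+1}^T r_{k+1} / r_k^T r_k
            p = r + beta * p              # p_{k+1} = r_{k+1} + beta_{k+1} p_k
            rs_old = rs_new
        return x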
Conjugate Gradient Method

Example: Solve

    $\begin{pmatrix} 10 & -1 & 2 & 0 \\ -1 & 11 & -1 & 3 \\ 2 & -1 & 10 & -1 \\ 0 & 3 & -1 & 8 \end{pmatrix}
     \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} =
     \begin{pmatrix} 6 \\ 25 \\ -11 \\ 15 \end{pmatrix}$

with the CG algorithm above, starting from $x_0 = 0$:

                   k=0      k=1       k=2       k=3       k=4
    $x_1$          0        0.4716    0.9964    1.0015    1.0000
    $x_2$          0        1.9651    1.9766    1.9833    2.0000
    $x_3$          0       -0.8646   -0.9098   -1.0099   -1.0000
    $x_4$          0        1.1791    1.0976    1.0197    1.0000
    $\|r^{(k)}\|$  31.7     5.1503    1.0433    0.1929    0.0000

The iterates converge to the exact solution $x^* = (1,\ 2,\ -1,\ 1)^T$.
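Using the conjugate_gradient sketch given after the algorithm above, the example can be reproduced in a few lines; the result should match the last column of the table and the direct solve:

    import numpy as np
    # assumes the conjugate_gradient sketch from above is in scope

    A = np.array([[10., -1.,  2.,  0.],
                  [-1., 11., -1.,  3.],
                  [ 2., -1., 10., -1.],
                  [ 0.,  3., -1.,  8.]])
    b = np.array([6., 25., -11., 15.])

    x = conjugate_gradient(A, b, x0=np.zeros(4))
    print(x)                      # ~ [1., 2., -1., 1.], as in the table
    print(np.linalg.solve(A, b))  # direct solve, for comparison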
Conjugate Gradient Method

The CG loop generates

    vectors:   the residuals $r_0, r_1, r_2, \dots$,
               the search directions $p_0, p_1, p_2, \dots$,
               the iterates $x_0, x_1, x_2, \dots$,

    constants: the step lengths $\alpha_0, \alpha_1, \alpha_2, \dots$
               and the coefficients $\beta_1, \beta_2, \beta_3, \dots$
Conjugate Gradient Method

We want to solve the linear system $Ax = b$, $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^{n}$, with $A$ SPD: $x^T A x > 0$ for all $x \neq 0$.

Define the quadratic function

    $f(x) = \tfrac{1}{2} x^T A x - x^T b$.

Example: For

    $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$,   $b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$,

the quadratic function is

    $f(x_1, x_2) = x_1^2 + x_1 x_2 + x_2^2 - x_1 - x_2$.
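A quick Python/NumPy check (illustration only; the test point is an arbitrary choice made here) that the matrix form $\tfrac{1}{2}x^T A x - x^T b$ and the expanded expression agree for this $A$ and $b$:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])
    b = np.array([1., 1.])

    def f_matrix(x):
        # f(x) = 1/2 x^T A x - x^T b
        return 0.5 * x @ A @ x - x @ b

    def f_expanded(x1, x2):
        # the same function written out componentwise
        return x1**2 + x1*x2 + x2**2 - x1 - x2

    x = np.array([0.3, -0.7])
    print(f_matrix(x), f_expanded(*x))   # both ~ 0.77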
Conjugate Gradient Method

Example (continued): With $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ and $b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$,

    $f(x_1, x_2) = x_1^2 + x_1 x_2 + x_2^2 - x_1 - x_2$,

    $\nabla_x f = \begin{pmatrix} 2x_1 + x_2 - 1 \\ x_1 + 2x_2 - 1 \end{pmatrix} = Ax - b = -(b - Ax) = -r$.

Remark: For $f(x) = \tfrac{1}{2} x^T A x - x^T b$ we have $\nabla f = -r$, so
    $f(x)$ has a minimum when $r = 0$,
    i.e. $f(x)$ has a minimum when $Ax = b$.

Why a minimum and not a maximum? Because $A$ is SPD, $f$ is strictly convex, so its only stationary point is the (unique) minimizer.
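The identity $\nabla f = Ax - b = -r$ can be checked numerically; the sketch below compares a central finite-difference gradient with $Ax - b$ at an arbitrarily chosen point (the point and step size are illustrative choices made here):

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])
    b = np.array([1., 1.])

    f = lambda x: 0.5 * x @ A @ x - x @ b
    grad_f = lambda x: A @ x - b              # = -r, with r = b - A x

    x = np.array([0.5, -1.0])                 # arbitrary test point
    h = 1e-6
    fd_grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(2)])
    print(np.allclose(fd_grad, grad_f(x)))    # True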
Conjugate Gradient Method

Remark: Since $\nabla f = -r$ for $f(x) = \tfrac{1}{2} x^T A x - x^T b$, $f(x)$ has its minimum exactly when $r = 0$, i.e. when $Ax = b$. The following two problems are therefore equivalent:

    Problem (1): Find $x$ such that $Ax = b$.
    Problem (2): Find $x$ such that $f(x)$ is minimal.

IDEA: Search for the minimum of $f(x) = \tfrac{1}{2} x^T A x - x^T b$.
Conjugate Gradient Method

Example: For $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ and $b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, the plot of $f(x_1, x_2) = x_1^2 + x_1 x_2 + x_2^2 - x_1 - x_2$ shows a single minimum, attained at the solution of $Ax = b$.
Conjugate Gradient Method

Method: Given a starting point $x_0$, choose search directions $p_0, p_1, p_2, \dots$ ("search direction") and scalars $\alpha_0, \alpha_1, \alpha_2, \dots$ ("step length"), and iterate

    $x_{k+1} = x_k + \alpha_k p_k$,

so that $x_0 \to x_1 \to x_2 \to \dots \to x^*$, each step moving from $x_k$ a distance $\alpha_k$ along $p_k$.

Two questions remain:
    1. Given a direction $p$, how do we find $\alpha$?
    2. How do we pick $p$?
Conjugate Gradient Method

Method (step length): Given $x_k$ and a direction $p_k$, set $x_{k+1} = x_k + \alpha p_k$ and find $\alpha$ so that $f(x_{k+1})$ is minimized along that line:

    $\dfrac{d}{d\alpha} f(x_{k+1}) = \nabla f(x_{k+1})^T \dfrac{d x_{k+1}}{d\alpha} = -r_{k+1}^T p_k$.

Setting the derivative to zero gives $r_{k+1}^T p_k = 0$, that is,

    $(b - A x_{k+1})^T p_k = 0$
    $(b - A x_k - \alpha A p_k)^T p_k = 0$
    $(r_k - \alpha A p_k)^T p_k = 0$,

hence

    $\alpha = \dfrac{r_k^T p_k}{p_k^T A p_k}$.
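A short numerical illustration of this formula, using the 4x4 example system with $x_k = 0$ and $p_k = r_k$: the formula gives $\alpha \approx 0.0786$ (the value $\alpha_0$ listed later), and it agrees with the minimizer of $\varphi(\alpha) = f(x_k + \alpha p_k)$ found by sampling (the grid of $\alpha$ values is an arbitrary illustrative choice):

    import numpy as np

    A = np.array([[10., -1.,  2.,  0.],
                  [-1., 11., -1.,  3.],
                  [ 2., -1., 10., -1.],
                  [ 0.,  3., -1.,  8.]])
    b = np.array([6., 25., -11., 15.])
    f = lambda x: 0.5 * x @ A @ x - x @ b

    xk = np.zeros(4)
    rk = b - A @ xk
    pk = rk.copy()                             # first direction p_0 = r_0
    alpha_star = (rk @ pk) / (pk @ A @ pk)     # exact line-search formula

    # phi(alpha) = f(xk + alpha * pk) is a parabola in alpha; sample it and
    # compare its minimizer with alpha_star
    alphas = np.linspace(0.0, 0.2, 2001)
    phi = np.array([f(xk + a * pk) for a in alphas])
    print(alpha_star, alphas[np.argmin(phi)])  # both approximately 0.0786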
Conjugate Gradient Method

Step length, summarized: given $x_k$ and a direction $p_k$, the exact line search for $x_{k+1} = x_k + \alpha p_k$ gives

    $\alpha = \dfrac{r_k^T p_k}{p_k^T A p_k}$.

This is the $\alpha_k$ used in the CG loop (there $r_k^T p_k = r_k^T r_k$, because $r_k$ is orthogonal to the previous direction; see the lemma below). The remaining question, how we pick $p$, requires the notion of an INNER PRODUCT.
Inner Product

DEF: We say that $\langle \cdot,\cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is an inner product if

    $\langle \alpha x, y \rangle = \alpha \langle x, y \rangle$   for all $x, y \in \mathbb{R}^n$, $\alpha \in \mathbb{R}$,
    $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$   for all $x, y, z \in \mathbb{R}^n$,
    $\langle x, y \rangle = \langle y, x \rangle$   for all $x, y \in \mathbb{R}^n$,
    $\langle x, x \rangle \ge 0$ for all $x$, and $\langle x, x \rangle = 0$ iff $x = 0$.

Example: $\langle x, y \rangle = y^T x$ (the standard inner product).

Example: In $\mathbb{R}^2$,

    $\left\langle \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \right\rangle = 2(x_1 y_1 + x_2 y_2) + x_1 y_2 + x_2 y_1$.
Inner Product

Example: In $\mathbb{R}^n$,

    $\langle x, y \rangle_H = y^T H x$,   where $H$ is SPD,

defines an inner product. We define the associated norm

    $\|x\|_H = \sqrt{\langle x, x \rangle_H}$.
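A small Python/NumPy sketch of the $H$-inner product and $H$-norm; it also checks that the $\mathbb{R}^2$ example from the previous slide can be written as $\langle x, y \rangle_H = y^T H x$ with $H = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ (this particular $H$ is an observation made here, suggested by the matrix used in the later example):

    import numpy as np

    def inner_H(x, y, H):
        # <x, y>_H = y^T H x (an inner product when H is SPD)
        return y @ H @ x

    def norm_H(x, H):
        # ||x||_H = sqrt(<x, x>_H)
        return np.sqrt(inner_H(x, x, H))

    H = np.array([[2., 1.],
                  [1., 2.]])
    x = np.array([1., 3.])
    y = np.array([2., -1.])
    # the R^2 example above: 2(x1 y1 + x2 y2) + x1 y2 + x2 y1
    print(inner_H(x, y, H),
          2 * (x[0]*y[0] + x[1]*y[1]) + x[0]*y[1] + x[1]*y[0])   # both 3.0
    print(norm_H(x, H))                                          # = sqrt(<x, x>_H)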
Inner Product

DEF: We say that $\langle \cdot,\cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is a symmetric bilinear form if

    $\langle \alpha x, y \rangle = \alpha \langle x, y \rangle$   for all $x, y \in \mathbb{R}^n$, $\alpha \in \mathbb{R}$,
    $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$   for all $x, y, z \in \mathbb{R}^n$,
    $\langle x, y \rangle = \langle y, x \rangle$   for all $x, y \in \mathbb{R}^n$

(positivity is not required).

Example: In $\mathbb{R}^n$,

    $\langle x, y \rangle_H = y^T H x$,   where $H$ is symmetric.
Inner Product

DEF: $u$ and $v$ are orthogonal if $\langle u, v \rangle = 0$ (i.e. $v^T u = 0$).

DEF: $u$ and $v$ are $H$-orthogonal if $\langle u, v \rangle_H = 0$ (i.e. $v^T H u = 0$), where $H$ is SPD. $H$-orthogonal vectors are also called $H$-conjugate.

Example: With

    $u = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$,   $v = \begin{pmatrix} 5 \\ -4 \end{pmatrix}$,   $H = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$,

we have $Hu = \begin{pmatrix} 4 \\ 5 \end{pmatrix}$ and $v^T H u = 5 \cdot 4 - 4 \cdot 5 = 0$, so $u$ and $v$ are $H$-conjugate (even though $v^T u = -3 \neq 0$).

This notion of conjugacy is behind the name Conjugate Gradient.
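A two-line Python/NumPy check of the example above (illustration only): with these $u$, $v$, and $H$, the vectors fail the standard orthogonality test but pass the $H$-orthogonality ($H$-conjugacy) test.

    import numpy as np

    H = np.array([[2., 1.],
                  [1., 2.]])
    u = np.array([1., 2.])
    v = np.array([5., -4.])

    print(v @ u)        # -3.0 : not orthogonal in the usual sense
    print(v @ H @ u)    #  0.0 : H-orthogonal, i.e. H-conjugate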
Conjugate Gradient Method

Recap: given $x_k$ and a direction $p_k$, the update $x_{k+1} = x_k + \alpha p_k$ with

    $\alpha = \dfrac{r_k^T p_k}{p_k^T A p_k}$

minimizes $f$ along the line. The remaining question, how we pick $p$, can now be answered using $A$-conjugacy.
Conjugate Gradient Method

Method (search direction): Given $x_0$, note that $-\nabla f = r$ (the residual points in the direction of steepest descent of $f$), so take

    $p_0 = r_0$   (a good first direction).

At each later step, build the new direction from the new residual and the previous direction,

    $p_{k+1} = r_{k+1} + \beta_{k+1} p_k$,

and choose $\beta_{k+1}$ so that $p_{k+1}$ and $p_k$ are $A$-conjugate:

    $0 = \langle p_k, p_{k+1} \rangle_A = p_{k+1}^T A p_k = (r_{k+1} + \beta_{k+1} p_k)^T A p_k$,

which gives

    $\beta_{k+1} = -\dfrac{r_{k+1}^T A p_k}{p_k^T A p_k}$.
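A numerical check (illustrative, using the 4x4 example system) that the $A$-conjugacy formula for $\beta$ derived here agrees with the ratio $r_{k+1}^T r_{k+1} / r_k^T r_k$ used in the CG loop, and that the resulting direction is indeed $A$-conjugate to the previous one:

    import numpy as np

    A = np.array([[10., -1.,  2.,  0.],
                  [-1., 11., -1.,  3.],
                  [ 2., -1., 10., -1.],
                  [ 0.,  3., -1.,  8.]])
    b = np.array([6., 25., -11., 15.])

    # one CG step from x0 = 0, then build the next direction
    r0 = b.copy()                              # r0 = b - A*0
    p0 = r0.copy()
    alpha0 = (r0 @ r0) / (p0 @ A @ p0)
    r1 = r0 - alpha0 * (A @ p0)

    beta_conjugacy = -(r1 @ A @ p0) / (p0 @ A @ p0)   # from the derivation above
    beta_ratio     = (r1 @ r1) / (r0 @ r0)            # form used in the CG loop
    p1 = r1 + beta_conjugacy * p0

    print(np.isclose(beta_conjugacy, beta_ratio))     # True: the two agree
    print(np.isclose(p1 @ A @ p0, 0.0))               # True: p1, p0 are A-conjugate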
Conjugate Gradient Method

Summary: given $x_k$ and the direction $p_k$, the step $x_{k+1} = x_k + \alpha_k p_k$ uses the exact line-search value

    $\alpha_k = \dfrac{r_k^T p_k}{p_k^T A p_k}$,

and the next direction $p_{k+1} = r_{k+1} + \beta_{k+1} p_k$ uses the $A$-conjugacy value

    $\beta_{k+1} = -\dfrac{r_{k+1}^T A p_k}{p_k^T A p_k}$.

Using the orthogonality relations stated in the lemma below, these equal the cheaper expressions $\alpha_k = r_k^T r_k / p_k^T A p_k$ and $\beta_{k+1} = r_{k+1}^T r_{k+1} / r_k^T r_k$ that appear in the algorithm.
Conjugate Gradient Method

Putting the two choices together gives the CG algorithm:

    $r_0 = b - A x_0$
    $p_0 = r_0$
    for $k = 0, 1, 2, \dots$
        $\alpha_k = r_k^T r_k \,/\, p_k^T A p_k$
        $x_{k+1} = x_k + \alpha_k p_k$
        $r_{k+1} = r_k - \alpha_k A p_k$
        $\beta_{k+1} = r_{k+1}^T r_{k+1} \,/\, r_k^T r_k$
        $p_{k+1} = r_{k+1} + \beta_{k+1} p_k$
    end
Conjugate Gradient Method

Lemma [Elman, Silvester & Wathen]: For any $k$ such that $x^{(k)} \neq x^*$, the vectors generated by the CG method satisfy

    (i)   $\langle r^{(k)}, p^{(j)} \rangle = \langle r^{(k)}, r^{(j)} \rangle = 0$,   $j < k$,
    (ii)  $\langle A p^{(k)}, p^{(j)} \rangle = 0$,   $j < k$,
    (iii) $\operatorname{span}\{ r^{(0)}, \dots, r^{(k-1)} \} = \operatorname{span}\{ p^{(0)}, \dots, p^{(k-1)} \} = \mathcal{K}_k(A, r^{(0)})$.
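The lemma can be verified numerically on the 4x4 example: the Python/NumPy sketch below runs the CG recurrences, stores all residuals and directions, and asserts the orthogonality and $A$-conjugacy relations (i) and (ii). The tolerance is an arbitrary illustrative choice.

    import numpy as np

    A = np.array([[10., -1.,  2.,  0.],
                  [-1., 11., -1.,  3.],
                  [ 2., -1., 10., -1.],
                  [ 0.,  3., -1.,  8.]])
    b = np.array([6., 25., -11., 15.])

    # run the CG recurrences from x0 = 0, storing residuals and directions
    x = np.zeros(4)
    r = b - A @ x
    p = r.copy()
    R, P = [r.copy()], [p.copy()]
    for k in range(4):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        R.append(r.copy())
        P.append(p.copy())

    # (i) r^(k) is orthogonal to earlier residuals and directions,
    # (ii) p^(k) is A-conjugate to earlier directions
    for k in range(1, 4):
        for j in range(k):
            assert abs(R[k] @ R[j]) < 1e-8
            assert abs(R[k] @ P[j]) < 1e-8
            assert abs(P[k] @ A @ P[j]) < 1e-8
    print("lemma properties verified numerically")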
Conjugate Gradient Method

Full iteration history for the example, $k = 0, 1, 2, 3, 4$:

    iterates $x_k$:
         k=0       k=1       k=2       k=3       k=4
         0.0000    0.4716    0.9964    1.0015    1.0000
         0.0000    1.9651    1.9766    1.9833    2.0000
         0.0000   -0.8646   -0.9098   -1.0099   -1.0000
         0.0000    1.1791    1.0976    1.0197    1.0000

    residuals $r_k$:
         6.0000    4.9781   -0.1681   -0.0123    0.0000
        25.0000   -0.5464    0.0516    0.1166   -0.0000
       -11.0000   -0.1526   -0.8202    0.0985    0.0000
        15.0000   -1.1925   -0.6203   -0.1172   -0.0000

    search directions $p_k$ ($k = 0, \dots, 3$):
         6.0000    5.1362    0.0427   -0.0108
        25.0000    0.1121    0.0562    0.1185
       -11.0000   -0.4424   -0.8384    0.0698
        15.0000   -0.7974   -0.6530   -0.1395

    step lengths $\alpha_k$ ($k = 0, \dots, 3$):     0.0786   0.1022   0.1193   0.1411
    coefficients $\beta_{k+1}$ ($k = 0, \dots, 3$):  0.0263   0.0410   0.0342   0.0713