Advanced Computer Graphics
Spring 2016
K. H. Ko
School of Mechatronics
Gwangju Institute of Science and Technology
Today’s Topics
Linear Algebra
Vector
Spaces
Determinants
Eigenvalues and Eigenvectors
SVD
2
Linear Combinations, Spans and
Subspaces
Linear Combination of Vectors
Let xk ∈ V and ck ∈ R for k=1,…,n. A linear combination of x1,…,xn is
$c_1 x_1 + \cdots + c_n x_n = \sum_{k=1}^{n} c_k x_k$
Let V be a vector space and let A ⊂ V. The span of A is the set
$\mathrm{Span}[A] = \left\{ \sum_{k=1}^{n} c_k x_k : n \ge 0,\ c_k \in \mathbb{R},\ x_k \in A \right\}$
i.e., the set of all finite linear combinations of vectors in A.
3
Linear Combinations, Spans and
Subspaces
If a nonempty subset S of a vector space V
is itself a vector space, S is said to be a
subspace of V.
4
Linear Independence and Bases
Let (V,+,·) be a vector space and let x1,…,xn ∈ V. The vectors are linearly
dependent if
$\sum_{k=1}^{n} c_k x_k = c_1 x_1 + \cdots + c_n x_n = 0$
for some set of scalars c1,…,cn that are not all zero.
If there is no such set of scalars, the vectors
are linearly independent.
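As an illustration (not from the slides), linear independence of a finite set of vectors in R^n can be tested numerically by stacking them as columns of a matrix and checking its rank; a minimal NumPy sketch:

```python
import numpy as np

def are_linearly_independent(vectors, tol=1e-10):
    """Return True if the given vectors are linearly independent.

    The vectors are stacked as columns of a matrix; they are independent
    exactly when the matrix rank equals the number of vectors.
    """
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M, tol=tol) == M.shape[1]

# Example: the standard basis of R^3 is independent; adding a combination is not.
e1, e2, e3 = np.eye(3)
print(are_linearly_independent([e1, e2, e3]))            # True
print(are_linearly_independent([e1, e2, e1 + 2 * e2]))   # False
```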
5
Linear Independence and Bases
Let V be a vector space with subset A ⊂ V.
The set A is said to be a basis for V if A is
linearly independent and Span[A]=V.
The number of vectors in A is the “smallest” possible, i.e.,
the cardinality of the set A, |A|, is the smallest.
Suppose a vector space V has a basis B of cardinality |B| = n. Then:
If
|A| > n, then A is linearly dependent.
If Span[A] = V, then |A| ≥ n
Every basis of V has cardinality n. The vector
space is said to have dimension dim(V) = n.
6
Inner Products, Length,
Orthogonality and Projection
Inner product: <x,y> = x^T y
<cx+y,z> = c<x,z> + <y,z>
<x,y> = |x||y| cos θ
The length of x: |x| = <x,x>^{1/2}
Orthogonality of vectors
Two vectors x and y are orthogonal if and only if <x,y> = 0.
Projection: the inner product is closely related to projection.
The projection of v onto a unit vector u is proj(v,u) = <v,u>u.
7
Inner Products, Length,
Orthogonality and Projection
Projection
If d is not a unit vector, the projection is still well-defined:
$\mathrm{proj}(v,d) = \left\langle v, \frac{d}{|d|} \right\rangle \frac{d}{|d|} = \frac{\langle v, d\rangle}{\langle d, d\rangle}\, d$
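A minimal NumPy sketch of this formula (illustrative, not from the slides):

```python
import numpy as np

def proj(v, d):
    """Project v onto the line spanned by d; d need not be a unit vector."""
    v = np.asarray(v, dtype=float)
    d = np.asarray(d, dtype=float)
    return (np.dot(v, d) / np.dot(d, d)) * d

v = np.array([3.0, 4.0])
d = np.array([2.0, 0.0])          # not a unit vector
p = proj(v, d)
print(p)                          # [3. 0.]
print(np.dot(v - p, d))           # ~0: the residual is orthogonal to d
```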
8
Inner Products, Length,
Orthogonality and Projection
A set of nonzero vectors consisting of mutually
orthogonal vectors (each pair is orthogonal) must
be a linearly independent set.
If all the vectors in the set are unit length, the set
is said to be an orthonormal set of vectors.
It is always possible to construct an orthonormal set of
vectors from any linearly independent set of vectors.
Gram-Schmidt orthonormalization.
9
Inner Products, Length,
Orthogonality and Projection
Gram-Schmidt Orthonormalization
Iterative process:
$u_1 = \frac{v_1}{|v_1|}, \qquad u_j = \frac{v_j - \sum_{i=1}^{j-1} \langle v_j, u_i \rangle u_i}{\left| v_j - \sum_{i=1}^{j-1} \langle v_j, u_i \rangle u_i \right|} \quad (2 \le j \le n)$
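A compact NumPy implementation of this iteration (an illustrative sketch; the function name is my own):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float)
        # Subtract the projections onto the already-built orthonormal vectors.
        for u in basis:
            w = w - np.dot(w, u) * u
        basis.append(w / np.linalg.norm(w))
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
us = gram_schmidt(vs)
# The Gram matrix of the result should be the identity.
print(np.round([[np.dot(a, b) for b in us] for a in us], 10))
```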
10
Cross Product, Triple Products
Cross product of the vectors u and v: u × v
The cross product is not commutative but anticommutative.
Triple scalar product of the vectors u, v, w: <u, v × w>
Triple vector product of the vectors u, v, w:
u × (v × w) = sv + tw (It lies in the v-w plane.)
Determination of s and t: s = <u,w> and t = −<u,v>, i.e., u × (v × w) = <u,w>v − <u,v>w.
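A quick NumPy check of these identities (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))

# Anticommutativity: u x v = -(v x u)
print(np.allclose(np.cross(u, v), -np.cross(v, u)))    # True

# Triple vector product: u x (v x w) = <u,w> v - <u,v> w
lhs = np.cross(u, np.cross(v, w))
rhs = np.dot(u, w) * v - np.dot(u, v) * w
print(np.allclose(lhs, rhs))                           # True
```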
11
Orthogonal Subspaces
Let U and V be subspaces of Rn. The
subspaces are said to be orthogonal
subspaces if <x,y> = 0 for every x∈U and
every y∈V.
Orthogonal complement of U
U^⊥ = the largest-dimension subspace V of R^n that is orthogonal to a specified subspace U:
$U^{\perp} = \{ x \in \mathbb{R}^n : \langle x, y \rangle = 0 \text{ for all } y \in U \}$
12
Rank
The rank of A is the number of pivots,
which is denoted as r.
The rank measures the true size of a matrix: identical rows, or a row that is a
combination of other rows (e.g., if row 3 is a combination of rows 1 and 2), do
not contribute to the rank.
A matrix A has full row rank if every row has a pivot (no zero rows).
A matrix A has full column rank if every column
has a pivot.
13
Kernel of A
Define the kernel or nullspace of A to be the set
kernel(A) = {x ∈ R^m : Ax = 0}
A basis for kernel(A) is constructed by solving the system Ax = 0, as sketched below.
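One numerically robust way to get such a basis is from the SVD: the right singular vectors whose singular values are (near) zero span the nullspace. A minimal NumPy sketch (illustrative, not the slides' method):

```python
import numpy as np

def nullspace_basis(A, tol=1e-10):
    """Return columns forming an orthonormal basis for kernel(A), via the SVD."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    _, s, Vt = np.linalg.svd(A)
    # Right singular vectors with vanishing singular values span the nullspace.
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1, so the kernel is 2-dimensional
N = nullspace_basis(A)
print(N.shape)                         # (3, 2)
print(np.allclose(A @ N, 0.0))         # True
```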
14
Range of A
A can be written as a block matrix of n×1 column vectors A = [a1 | … | am].
The expression Ax can be given as a linear combination of these column vectors:
$Ax = x_1 a_1 + \cdots + x_m a_m$
Treating A as a function A: R^m -> R^n, the range of the function is
$\mathrm{range}(A) = \{ y \in \mathbb{R}^n : y = Ax \text{ for some } x \in \mathbb{R}^m \} = \mathrm{Span}\{a_1, \ldots, a_m\}$
15
Fundamental Theorem of Linear
Algebra
If A is an n×m matrix with kernel(A) and range(A), and if A^T is the m×n transpose
of A with kernel(A^T) and range(A^T), then
kernel(A) = range(A^T)^⊥
kernel(A)^⊥ = range(A^T)
kernel(A^T) = range(A)^⊥
kernel(A^T)^⊥ = range(A)
16
Projection and Least Squares
The projection p of a vector b ∈ R^n onto a line through the origin with direction a:
p = a(a^T a)^{-1} a^T b
The line is a one-dimensional subspace.
Projection of b onto a subspace S is equivalent to finding the point in S that is
closest to b.
17
Projection and Least Squares
The construction of a projection onto a
subspace is motivated by attempting to
solve the system of linear equations Ax=b
A solution exists if and only if b ∈ range(A).
If b is not in the range of A, an application might be satisfied with a vector x
that is “close enough”.
Find an x so that Ax − b is as close to the zero vector as possible, i.e., find x
that minimizes the squared length |Ax − b|^2.
The least squares problem
18
Projection and Least Squares
If the squared distance has a minimum of
zero, any such x must be a solution to the
linear system.
Geometrically, the minimizing process
amounts to finding the point p∈range(A)
that is closest to b.
This point can be obtained through projection.
19
Projection and Least Squares
There is also always a point q ∈ range(A)^⊥ = kernel(A^T) at which the distance
from b to kernel(A^T) is a minimum.
The quantity |Ax − b|^2 is minimized if and only if Ax − b ∈ kernel(A^T), that is,
|Ax − b|^2 is a minimum exactly when A^T(Ax − b) = 0.
A^T A x = A^T b : the normal equations corresponding to the linear system Ax = b.
The projection of b onto range(A) is p = Ax = A(A^T A)^{-1} A^T b (see the sketch below).
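A small NumPy sketch of the least-squares solution via the normal equations (illustrative; in practice np.linalg.lstsq or a QR/SVD-based solver is preferred for conditioning):

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns, b not in range(A).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Normal equations: A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)

# Projection of b onto range(A): p = A x
p = A @ x
print(x)                                      # least-squares solution
print(p)                                      # closest point to b in range(A)
print(np.allclose(A.T @ (A @ x - b), 0.0))    # True: residual lies in kernel(A^T)
```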
20
Linear Transformations
Let V and W be vector spaces. A function L:
V -> W is said to be a linear transformation
whenever
L(x+y) = L(x) + L(y) for all x, y ∈ V
L(cx) = cL(x) for all c ∈ R and for all x ∈ V
21
Determinants
A determinant is a scalar quantity
associated with a square matrix.
Geometric Interpretation
2×2 matrix: the area of the parallelogram formed by the column vectors.
3×3 matrix: the volume of the parallelepiped formed by the column vectors.
22
Determinants
A determinant is a scalar quantity
associated with a square matrix.
Geometric Interpretation
2×2 matrix: the area of the parallelogram formed by the column vectors.
3×3 matrix: the volume of the parallelepiped formed by the column vectors.
How to compute the determinant of a matrix A?
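A brief NumPy illustration of the geometric interpretation (not from the slides): the absolute value of the 2×2 determinant equals the parallelogram area spanned by the columns.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# For a 2x2 matrix [[a, c], [b, d]] the determinant is a*d - b*c.
det_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
print(det_formula, np.linalg.det(A))     # both 5.0

# |det A| is the area of the parallelogram spanned by the column vectors.
u, v = A[:, 0], A[:, 1]
area = abs(np.cross(u, v))               # the 2-D cross product returns a scalar
print(area)                              # 5.0
```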
23
Eigenvalues and Eigenvectors
Let A be an n×n matrix of complex-valued
entries.
The scalar λ∈C is said to be an eigenvalue
of A if there is a nonzero vector x such that
Ax= λx.
In
this case, x is said to be an eigenvector
corresponding to λ.
Geometrically, an eigenvector is a vector
that when transformed does not change
direction.
24
Eigenvalues and Eigenvectors
Let λ be an eigenvalue of a matrix A. The
eigenspace of λ is the set
S_λ = {x ∈ C^n : Ax = λx}
To find eigenvalues and eigenvectors:
Ax − λIx = 0 <-> (A − λI)x = 0.
Nonzero solutions x.
Solve the characteristic equation det(A − λI) = 0.
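A short NumPy example of this procedure (illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# The characteristic polynomial det(A - lambda*I) = lambda^2 - 4*lambda + 3 = 0
# has roots 3 and 1; numpy computes the same eigenpairs directly.
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                            # e.g. [3. 1.] (order not guaranteed)
for lam, x in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ x, lam * x))    # True: A x = lambda x
```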
25
Eigendecomposition for
Symmetric Matrices
An n×n symmetric matrix with real-valued entries arises most frequently in
applications.
26
Eigendecomposition for
Symmetric Matrices
The eigenvalues of a real-valued symmetric
matrix must be real-valued and the
corresponding eigenvectors are naturally
real-valued.
If λ1 and λ2 are distinct eigenvalues for A,
then the corresponding eigenvectors x1 and
x2 are orthogonal.
27
Eigendecomposition for
Symmetric Matrices
If A is a square matrix, there always exists
an orthogonal matrix Q (eigenvector matrix)
such that QTAQ = U, where U is an upper
triangular matrix. The diagonal entries of U
are necessarily the eigenvalues of A.
If A is symmetric and QTAQ = U, then U
must be a diagonal matrix.
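An illustrative NumPy check of the symmetric case, where Q^T A Q is diagonal:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])           # symmetric

# eigh is specialized for symmetric/Hermitian matrices and returns real
# eigenvalues together with an orthogonal eigenvector matrix Q.
eigvals, Q = np.linalg.eigh(A)
D = Q.T @ A @ Q
print(np.allclose(D, np.diag(eigvals)))   # True: Q^T A Q is diagonal
print(np.allclose(Q.T @ Q, np.eye(3)))    # True: Q is orthogonal
```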
28
Eigendecomposition for
Symmetric Matrices
The symmetric matrix A is positive definite, nonnegative (positive semidefinite),
negative definite, or nonpositive (negative semidefinite) if and only if its
eigenvalues are all positive, nonnegative, negative, or nonpositive, respectively.
The product of the n eigenvalues equals the determinant of A.
The sum of the n eigenvalues equals the
sum of the n diagonal entries of A.
29
Eigendecomposition for
Symmetric Matrices
Eigenvectors x1,…,xj that correspond to
distinct eigenvalues are linearly independent.
An n by n matrix that has n different
eigenvalues must be diagonalizable.
30
Eigendecomposition for
Symmetric Matrices
Let M be any invertible matrix. Then B = M^{-1}AM is similar to A.
No matter which M we choose, A and B have the same eigenvalues.
If x is an eigenvector of A, then M^{-1}x is an eigenvector of B.
31
Singular Value Decomposition
The SVD is a highlight of linear algebra.
Typical Applications of SVD
Solving a system of linear equations
Compression of a signal, an image, etc.
The SVD approach can give an optimal low-rank approximation of a given matrix A.
Ex) Replace a 256 by 512 pixel matrix by a matrix of rank one: a column times a row
(see the sketch below).
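A hedged NumPy sketch of rank-k approximation via the truncated SVD (by the Eckart-Young theorem this is optimal in the 2-norm and Frobenius norm; the "image" here is random placeholder data):

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A (truncated SVD)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
image = rng.random((256, 512))          # stand-in for a 256-by-512 pixel image

A1 = low_rank_approx(image, 1)          # a column times a row
print(np.linalg.matrix_rank(A1))        # 1
print(np.linalg.norm(image - A1) / np.linalg.norm(image))  # relative error
```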
32
Singular Value Decomposition
Overview of SVD
A is any m by n matrix, square or rectangular. We will diagonalize it.
Its row and column spaces are both r-dimensional, where r is the rank.
We choose special orthonormal bases v1,…,vr for the row space and u1,…,ur for the
column space.
For those bases, we want each Av_i to be in the direction of u_i.
In matrix form, these equations Av_i = σ_i u_i become AV = UΣ, or A = UΣV^T.
This is the SVD.
33
Singular Value Decomposition
The Bases and the SVD
Start with a 2 by 2 matrix.
Its rank is 2, so this matrix A is invertible.
Its row space is the plane R^2.
We want v1 and v2 to be perpendicular unit vectors, an orthonormal basis.
We also want Av1 and Av2 to be perpendicular.
Then the unit vectors u1 and u2 in the directions of Av1 and Av2 are orthonormal.
34
Singular Value Decomposition
The Bases and the SVD
We are aiming for orthonormal bases that diagonalize A.
When the inputs are v1 and v2, the outputs are Av1 and Av2. We want those to line
up with u1 and u2.
The basis vectors have to give Av1 = σ1 u1 and also Av2 = σ2 u2.
The singular values σ1 and σ2 are the lengths |Av1| and |Av2|.
35
Singular Value Decomposition
The Bases and the SVD
With v1 and v2 as columns of V,
$A \begin{bmatrix} v_1 & v_2 \end{bmatrix} = \begin{bmatrix} \sigma_1 u_1 & \sigma_2 u_2 \end{bmatrix} = \begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix}$
In matrix notation, that is AV = UΣ, or U^{-1}AV = Σ, or U^T AV = Σ.
Σ contains the singular values, which are different from the eigenvalues.
36
Singular Value Decomposition
In the SVD, U and V must be orthogonal matrices.
Orthonormal basis: V^T V = I, so V^T = V^{-1}; likewise U^T = U^{-1}.
This is the new factorization of A:
orthogonal times diagonal times orthogonal.
37
Singular Value Decomposition
There is a way to remove U and see V by itself: multiply A^T times A.
A^T A = (UΣV^T)^T (UΣV^T) = VΣ^TΣV^T
This becomes an ordinary diagonalization of the crucial symmetric matrix A^T A,
whose eigenvalues are σ1^2 and σ2^2:
$A^T A = V \begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix} V^T$
The columns of V are the eigenvectors of A^T A. <- This is how we find V.
38
Singular Value Decomposition
Working Example
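The original worked example is not reproduced here; as a stand-in, a brief NumPy sketch that computes the SVD of a 2 by 2 matrix and checks the relations above:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, s, Vt = np.linalg.svd(A)
Sigma = np.diag(s)

print(s)                                     # singular values sigma_1 >= sigma_2
print(np.allclose(A, U @ Sigma @ Vt))        # True: A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(2)))       # True: U is orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(2)))     # True: V is orthogonal

# The eigenvalues of A^T A are the squared singular values.
eigvals = np.linalg.eigvalsh(A.T @ A)
print(np.allclose(np.sort(eigvals), np.sort(s**2)))   # True
```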
39
Q & A?
40