Linear Least Squares Problem
Consider the equation for a stretched beam:
Y = x1 + x2·T
where x1 is the original length, T is the applied force, and x2 is the inverse coefficient of stiffness.
Suppose that the following measurements were taken:
T    10      15      20
Y    11.60   11.85   12.25
corresponding to the overdetermined system
11.60 = x1 + x2·10
11.85 = x1 + x2·15
12.25 = x1 + x2·20
which cannot be satisfied exactly.
Linear Least Squares Problem
Problem:
Given A (m×n), m ≥ n, and b (m×1), find x (n×1) to minimize ||Ax − b||₂.
• If m > n, there are more equations than unknowns, and in general no x satisfies Ax = b exactly.
• Such a system is called overdetermined.
Linear Least Squares
There are three standard algorithms for computing the least squares solution:
1. Normal equations (cheap, less accurate).
2. QR decomposition.
3. SVD (expensive, more reliable).
The first algorithm is the fastest and the least accurate of the three; the SVD is the slowest and most accurate.
Normal Equations 1
Minimize the squared Euclidean norm of the residual vector:
min_x ||r||₂² = min_x ||Ax − b||₂²
To minimize, we take the derivative with respect to x and set it to zero:
d/dx (xᵀAᵀAx − 2xᵀAᵀb + bᵀb) = 2AᵀAx − 2Aᵀb = 0
which reduces to an (n×n) linear system, commonly known as the NORMAL EQUATIONS:
AᵀAx = Aᵀb
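A minimal Matlab sketch of this approach (variable names are illustrative; A and b are assumed given):

x = (A'*A) \ (A'*b);   % solve the normal equations  A'A x = A'b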
Normal Equations 2
11.60 = x1 + x2·10
11.85 = x1 + x2·15
12.25 = x1 + x2·20

In matrix form, min ||Ax − b||₂ with

A = [1 10; 1 15; 1 20],   x = [x1; x2],   b = [11.60; 11.85; 12.25]
Normal Equations 3
We must solve the system AᵀAx = Aᵀb for the following values:

A = [1 10; 1 15; 1 20],   Aᵀ = [1 1 1; 10 15 20],   b = [11.60; 11.85; 12.25]

AᵀA = [3 45; 45 725]

x = (AᵀA)⁻¹Aᵀb = [10.925; 0.065]

(AᵀA)⁻¹Aᵀ is called the pseudo-inverse of A.
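The same computation in Matlab (a sketch; pinv is Matlab's built-in pseudo-inverse):

A = [1 10; 1 15; 1 20];
b = [11.60; 11.85; 12.25];
x = (A'*A) \ (A'*b)    % returns [10.925; 0.065]
x2 = pinv(A) * b;      % the same result via the pseudo-inverse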
QR factorization 1
• A matrix Q is said to be orthogonal if its columns are orthonormal, i.e. QᵀQ = I.
• Orthogonal transformations preserve the Euclidean norm, since
||Qx||₂² = (Qx)ᵀ(Qx) = xᵀQᵀQx = xᵀx = ||x||₂²
• Orthogonal matrices can transform vectors in various ways, such as rotations or reflections, but they do not change the Euclidean length of a vector. Hence, they preserve the solution to a linear least squares problem.
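A quick numerical check in Matlab (a sketch with a random example, not from the original slides):

[Q, ~] = qr(rand(4));    % a random 4-by-4 orthogonal matrix
x = rand(4, 1);
norm(Q*x) - norm(x)      % ~0: the Euclidean length is preserved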
QR factorization 2
Any matrix A (m×n) can be represented as
A = Q·R,
where Q (m×n) is orthonormal and R (n×n) is upper triangular:

[a1 | … | an] = [q1 | … | qn] · [r11 r12 … r1n; 0 r22 … r2n; … ; 0 0 … rnn]
QR factorization 3
• Given A, let its QR decomposition be A = Q·R, where Q is an (m×n) orthonormal matrix and R is upper triangular.
• QR factorization transforms the linear least squares problem into a triangular one:
Q·R·x = b
R·x = Qᵀ·b
x = R⁻¹·Qᵀ·b
Matlab Code:
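A minimal sketch of the missing listing (assuming A has full column rank and A, b are defined):

[Q, R] = qr(A, 0);    % economy-size QR: Q is m-by-n, R is n-by-n
x = R \ (Q' * b);     % solve the triangular system R x = Q' b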
Singular Value Decomposition
• Normal equations and QR decomposition only work for full-rank matrices (i.e. rank(A) = n). If A is rank-deficient, there are infinitely many solutions to the least squares problem, and we can use algorithms based on the SVD.
• Given the SVD
A = U·Σ·Vᵀ,
where U (m×m) and V (n×n) are orthogonal and Σ is an (m×n) diagonal matrix (the singular values of A), the minimal-norm solution corresponds to
x = V·Σ⁺·Uᵀ·b,
where Σ⁺ inverts the nonzero singular values and zeros the rest.
Singular Value Decomposition
Matlab Code:
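The original listing is missing; a minimal sketch (assuming A and b are defined; the tolerance choice is illustrative):

[U, S, V] = svd(A, 0);               % economy-size SVD
s   = diag(S);
tol = max(size(A)) * eps(max(s));    % threshold for "zero" singular values
r   = sum(s > tol);                  % numerical rank
x = V(:,1:r) * ((U(:,1:r)' * b) ./ s(1:r));   % minimum-norm solution
% equivalently: x = pinv(A) * b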
Linear algebra review - SVD
Fact: for every A (m×n) with rank(A) = p, there exist U (m×p), V (n×p), Σ (p×p) such that
UᵀU = I,   VᵀV = I,   Σ = diag(σ1, σ2, …, σp),   σi > 0
A = U·Σ·Vᵀ = σ1·u1·v1ᵀ + σ2·u2·v2ᵀ + … + σp·up·vpᵀ
AᵀA = V·Σ²·Vᵀ,   A·Aᵀ = U·Σ²·Uᵀ
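These identities can be checked numerically in Matlab (a sketch with random data):

A = rand(5, 3);
[U, S, V] = svd(A, 0);
norm(A - U*S*V')         % ~0
norm(A'*A - V*S^2*V')    % ~0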
Approximation by a low-rank matrix
Fact II: let A (m×n) have rank p, and let
A = U·Σ·Vᵀ = σ1·u1·v1ᵀ + σ2·u2·v2ᵀ + … + σp·up·vpᵀ,   σ1 ≥ σ2 ≥ … ≥ σp
be the SVD of A. Then for r ≤ p,
Ã = σ1·u1·v1ᵀ + … + σr·ur·vrᵀ = U·Σ̃·Vᵀ,   Σ̃ = diag(σ1, σ2, …, σr)
is the best rank-r approximation to A in the 2-norm:
min_{X: rank(X) ≤ r} ||A − X||₂ = ||A − Ã||₂ = σ_{r+1}
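A sketch of this bound in Matlab (random data; r = 2 chosen for illustration):

A = rand(6, 4);
[U, S, V] = svd(A);
r  = 2;
Ar = U(:,1:r) * S(1:r,1:r) * V(:,1:r)';   % best rank-r approximation
norm(A - Ar, 2) - S(r+1, r+1)             % ~0: the error equals sigma_{r+1}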
Geometric Interpretation of the SVD
Consider the map x ↦ b = A_{m×n}·x.
The image of the unit sphere under any m×n matrix is a hyperellipse.
[Figure: unit vectors v1, v2 on the sphere map to the hyperellipse semiaxes σ·v1, σ·v2.]
Left and Right singular vectors
We can define the properties of A in terms of the shape of AS, the image of the unit sphere S under A.
[Figure: the unit sphere S with vectors v1, v2 maps to the hyperellipse AS = A_{m×n}·S with semiaxes σ·u1, σ·u2.]
The singular values of A are the lengths of the principal semiaxes of AS, usually written in non-increasing order σ1 ≥ σ2 ≥ … ≥ σn.
The n left singular vectors of A are the unit vectors {u1, …, un}, oriented in the directions of the principal semiaxes of AS, numbered in correspondence with {σi}.
The n right singular vectors of A are the unit vectors {v1, …, vn} of S, which are the preimages of the principal semiaxes of AS:
A·vi = σi·ui
Singular Value Decomposition
A·vi = σi·ui,   1 ≤ i ≤ n
In matrix form: A·[v1 v2 … vn] = [u1 u2 … un]·diag(σ1, σ2, …, σn), where A is (m×n), V is (n×n), U is (m×n), and Σ is (n×n). Hence
A·V = U·Σ   ⇒   A = U·Σ·V*
- the Singular Value Decomposition.
Matrices U,V are orthogonal and Σ is diagonal
Matrices in the Diagonal Form
Every matrix is diagonal in an appropriate basis. Consider b = A_{m×n}·x:
Any vector b (m×1) can be expanded in the basis of left singular vectors of A, {ui};
any vector x (n×1) can be expanded in the basis of right singular vectors of A, {vi}.
Their coordinates in these new expansions are
b′ = U*·b,   x′ = V*·x
Then the relation b = A·x can be expressed in terms of b′ and x′:
b = A·x  ⇔  U*·b = U*·A·x = U*·U·Σ·V*·x  ⇔  b′ = Σ·x′
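A numerical check of b′ = Σ·x′ in Matlab (a sketch with random data):

A = rand(4, 3);  x = rand(3, 1);  b = A * x;
[U, S, V] = svd(A);
bp = U' * b;  xp = V' * x;    % coordinates in the singular bases
norm(bp - S * xp)             % ~0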
Rank of A
Let p = min{m, n}, and let r ≤ p denote the number of nonzero singular values of A. Then:
The rank of A equals r, the number of nonzero singular values.
Proof:
The rank of a diagonal matrix equals the number of its nonzero entries, and in the decomposition A = U·Σ·V*, U and V are of full rank.
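A sketch of this fact in Matlab (the rank-1 example matrix is illustrative):

A = [1 2; 2 4; 3 6];                   % rank 1: each column is a multiple of the first
s = svd(A);
sum(s > max(size(A)) * eps(max(s)))    % 1, matching rank(A)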
Determinant of A
For A (m×m),
|det(A)| = σ1·σ2·…·σm
Proof:
The determinant of a product of square matrices is the product of their determinants. The determinant of a unitary matrix is 1 in absolute value, since U*·U = I. Therefore,
|det(A)| = |det(U·Σ·V*)| = |det(U)|·|det(Σ)|·|det(V*)| = |det(Σ)| = σ1·σ2·…·σm
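A quick numerical check (a sketch with a random square matrix):

A = rand(4);
abs(det(A)) - prod(svd(A))    % ~0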
A in terms of singular vectors
Any A (m×n) can be written as a sum of r rank-one matrices:
A = σ1·u1·v1* + σ2·u2·v2* + … + σr·ur·vr*      (1)
Proof:
If we write Σ as a sum of Σj, where Σj = diag(0, …, σj, …, 0), then (1) follows from
A = U·Σ·V*      (2)
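A sketch of the rank-one expansion in Matlab (random data):

A = rand(4, 3);
[U, S, V] = svd(A, 0);
B = zeros(size(A));
for j = 1:3
    B = B + S(j,j) * U(:,j) * V(:,j)';   % accumulate sigma_j * u_j * v_j'
end
norm(A - B)    % ~0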
Norm of the matrix
The L2 norm of a vector is defined as:
||x||₂ = (xᵀx)^{1/2} = (x1² + x2² + … + xn²)^{1/2}      (1)
The L2 norm of a matrix is defined as:
||A||₂ = sup_{x≠0} ||A·x||₂ / ||x||₂
Therefore
||A||₂ = σ_max(A),
the largest singular value of A. (For a symmetric matrix A, with eigenpairs A·xi = λi·xi, this equals maxᵢ |λi|.)
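A quick check that the 2-norm equals the largest singular value (random data):

A = rand(4, 3);
norm(A, 2) - max(svd(A))    % ~0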
Matrix Approximation in SVD basis
For any ν with 0 ≤ ν ≤ r, define
A_ν = σ1·u1·v1* + σ2·u2·v2* + … + σν·uν·vν*      (1)
If ν = p = min{m, n}, define σ_{ν+1} = 0. Then
||A − A_ν||₂ = inf_{B ∈ ℂ^{m×n}, rank(B) ≤ ν} ||A − B||₂ = σ_{ν+1}