Linear Algebra and its Applications

Inner product, real Euclidean spaces
Definition. Given a linear space L over ℝ, we call an inner product on L a function ⟨·, ·⟩ : L × L → ℝ, (x, y) ↦ ⟨x, y⟩, satisfying:
1) ⟨x, y⟩ = ⟨y, x⟩ for every x, y ∈ L
2) ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩ for every x, y, z ∈ L
3) c⟨x, y⟩ = ⟨cx, y⟩ for every x, y ∈ L and every c ∈ ℝ
4) ⟨x, x⟩ ≥ 0 for every x ∈ L, and ⟨x, x⟩ = 0 if and only if x = o
A linear space over ℝ equipped with an inner product is called a real Euclidean space.
Example. On L = ℝn, given x = (x1, . . . , xn), y = (y1, . . . , yn), we define the standard inner product ⟨x, y⟩ = Σ_{i=1}^{n} xi yi = x1 y1 + · · · + xn yn.
Example. Verify that, on L = ℝ2, an inner product can be defined as ⟨x, y⟩ = 2x1 y1 + x1 y2 + x2 y1 + x2 y2.
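A numerical sanity check of the four axioms on random vectors can accompany the algebraic verification (a sketch, not a proof; the helper name ip is illustrative). Positive definiteness follows from the identity ⟨x, x⟩ = 2x1² + 2x1x2 + x2² = x1² + (x1 + x2)², which the code exploits in a comment.

```python
import random

def ip(x, y):
    """Candidate inner product on R^2: <x, y> = 2*x1*y1 + x1*y2 + x2*y1 + x2*y2."""
    return 2*x[0]*y[0] + x[0]*y[1] + x[1]*y[0] + x[1]*y[1]

random.seed(0)
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(2)]
    y = [random.uniform(-5, 5) for _ in range(2)]
    z = [random.uniform(-5, 5) for _ in range(2)]
    c = random.uniform(-5, 5)
    assert abs(ip(x, y) - ip(y, x)) < 1e-9                                   # 1) symmetry
    assert abs(ip(x, [y[0]+z[0], y[1]+z[1]]) - ip(x, y) - ip(x, z)) < 1e-9   # 2) additivity
    assert abs(c*ip(x, y) - ip([c*x[0], c*x[1]], y)) < 1e-9                  # 3) homogeneity
    assert ip(x, x) >= 0   # 4) <x, x> = x1^2 + (x1 + x2)^2 >= 0, zero only for x = o
```

A random check cannot replace the algebraic argument, but it catches sign errors quickly.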
Example. On C[a, b], the vector space of continuous functions defined on [a, b], ⟨f, g⟩ = ∫_a^b f(t) g(t) dt is an inner product.
Definition. Given a real Euclidean space V, for every x ∈ V we define the norm of x as the nonnegative number ||x|| = ⟨x, x⟩^{1/2} = √⟨x, x⟩.
Definition. A vector u ∈ V such that ||u|| = 1 is called a unit vector. Given any nonzero vector v ∈ V, we define its normalization as u = v/||v||.
Theorem. Cauchy–Schwarz inequality. In a Euclidean space V it holds that |⟨x, y⟩| ≤ ||x|| · ||y|| for every x, y ∈ V.
Proof.
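The inequality can be sanity-checked numerically on random vectors; a minimal Python sketch under the standard inner product on ℝ⁴ (an illustration, not the proof):

```python
import math
import random

def dot(x, y):
    """Standard inner product on R^n."""
    return sum(a*b for a, b in zip(x, y))

def norm(x):
    """||x|| = sqrt(<x, x>)."""
    return math.sqrt(dot(x, x))

random.seed(1)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(4)]
    y = [random.uniform(-10, 10) for _ in range(4)]
    # Cauchy-Schwarz: |<x, y>| <= ||x|| * ||y|| (small tolerance for rounding)
    assert abs(dot(x, y)) <= norm(x)*norm(y) + 1e-9
```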
Theorem. In a Euclidean space the following properties hold.
1) ||x|| = 0 if and only if x = o
2) ||x|| > 0 if x ≠ o
3) ||cx|| = |c| ||x|| for every x ∈ V and every c ∈ ℝ
4) ||x + y|| ≤ ||x|| + ||y|| for every x, y ∈ V (triangle inequality).
Observation. From the Cauchy–Schwarz inequality it follows that −1 ≤ ⟨x, y⟩ / (||x|| · ||y||) ≤ 1 for every nonzero x, y.
Definition. In a real Euclidean space we define the angle between two nonzero vectors x, y as the angle 0 ≤ θ ≤ π such that cos θ = ⟨x, y⟩ / (||x|| · ||y||).
Definition. Two vectors x, y in a Euclidean space V are orthogonal if ⟨x, y⟩ = 0.
Example. In ℝ3 verify that the vectors x = (2, 3, 5) and y = (1, −4, 3) are not orthogonal but form an acute angle.
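The check reduces to computing ⟨x, y⟩ = 2 − 12 + 15 = 5 ≠ 0 (so x and y are not orthogonal) and noting that a positive inner product gives a cosine in (0, 1], hence an acute angle. A short Python sketch:

```python
import math

def dot(u, v):
    """Standard inner product on R^3."""
    return sum(a*b for a, b in zip(u, v))

x = (2, 3, 5)
y = (1, -4, 3)
d = dot(x, y)                                   # 2 - 12 + 15 = 5
cos_theta = d / math.sqrt(dot(x, x)*dot(y, y))  # cos(theta) = <x,y>/(||x||*||y||)
theta = math.acos(cos_theta)
assert d != 0                        # not orthogonal
assert 0 < theta < math.pi/2         # positive inner product -> acute angle
```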
Example. In ℝ3, given u = (1, 1, 1), v = (1, 2, −3), w = (1, −4, 3), verify that u ⊥ v (u is orthogonal to v) and u ⊥ w, but v is not orthogonal to w.
Example. In C[0, 1] consider f(t) = t − 3, g(t) = −(15/7)t + 1, h(t) = 3t − 2; verify that f is orthogonal to g and that h has norm 1.
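The integrals here can be evaluated numerically: composite Simpson's rule is exact (up to rounding) for the quadratic integrands involved. A Python sketch (the helper names simpson and ip are illustrative):

```python
def simpson(func, a, b, n=100):
    """Composite Simpson rule on [a, b] with n (even) subintervals.
    Exact for polynomials of degree <= 3, hence for the products below."""
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i*h)
    return s * h / 3

f = lambda t: t - 3
g = lambda t: -15/7*t + 1
h_ = lambda t: 3*t - 2

# <u, v> = integral of u(t)*v(t) over [0, 1]
ip = lambda u, v: simpson(lambda t: u(t)*v(t), 0.0, 1.0)

assert abs(ip(f, g)) < 1e-9          # f is orthogonal to g
assert abs(ip(h_, h_) - 1.0) < 1e-9  # ||h||^2 = 1, so ||h|| = 1
```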
Definition. A set of vectors S in a Euclidean space V is an orthogonal set if x ⊥ y for every pair of distinct vectors x, y ∈ S. S is orthonormal if it is orthogonal and ||x|| = 1 for every x ∈ S.
Example. In C[0, 2π], prove that the set S = {1, sin t, cos t, sin 2t, cos 2t, . . . , sin kt, cos kt, . . . } is an orthogonal set; then normalize each vector of S to create an orthonormal set.
Theorem. In a Euclidean space V :
1) Given a vector u ∈ V, u ≠ o, the set u⊥ defined by u⊥ = {v ∈ V : u ⊥ v} is a subspace of V.
2) Let S be a subset of V ; then S⊥ = {v ∈ V : v ⊥ u for every u ∈ S} is a subspace of V.
3) Let S be a subspace of V ; then V = S ⊕ S⊥ [V is the direct sum of S and S⊥, i.e. every v ∈ V can be written in one and only one way as v = s + s⊥, where s ∈ S and s⊥ ∈ S⊥].
Proof.
Theorem. An orthogonal set of nonzero vectors is linearly independent.
Proof.
Theorem. Pythagoras. If {v1 , . . . , vn } is an orthogonal set of vectors, then ||v1 + v2 + · · · + vn ||2 =
||v1 ||2 + ||v2 ||2 + · · · + ||vn ||2 .
Proof.
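A quick numerical illustration (not the proof) with a pairwise orthogonal triple in ℝ³:

```python
def dot(u, v):
    """Standard inner product on R^3."""
    return sum(a*b for a, b in zip(u, v))

# a pairwise orthogonal triple in R^3
v1, v2, v3 = (1, 2, 1), (2, 1, -4), (3, -2, 1)
assert dot(v1, v2) == dot(v1, v3) == dot(v2, v3) == 0

s = tuple(a + b + c for a, b, c in zip(v1, v2, v3))   # s = v1 + v2 + v3
# Pythagoras: ||v1 + v2 + v3||^2 = ||v1||^2 + ||v2||^2 + ||v3||^2
assert dot(s, s) == dot(v1, v1) + dot(v2, v2) + dot(v3, v3)
```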
Theorem. Given a Euclidean space V and an orthogonal basis B = {v1, . . . , vn}, every vector x ∈ V can be written as x = Σ_{i=1}^{n} ci vi, where ci = ⟨x, vi⟩ / ⟨vi, vi⟩.
Definition. The coefficient ci = ⟨x, vi⟩ / ⟨vi, vi⟩ is called the Fourier coefficient of x with respect to vi.
Observation. If the basis is orthonormal, then ci = ⟨x, vi⟩.
Example. In ℝ3 verify that B = {u1 = (1, 2, 1), u2 = (2, 1, −4), u3 = (3, −2, 1)} is an orthogonal basis, and find [v]B, the coordinates of v = (7, 1, 9) with respect to B.
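Applying the theorem, ci = ⟨v, ui⟩/⟨ui, ui⟩ gives c1 = 18/6 = 3, c2 = −21/21 = −1, c3 = 28/14 = 2, so [v]B = (3, −1, 2). A Python sketch using exact rational arithmetic:

```python
from fractions import Fraction as F

def dot(u, v):
    """Standard inner product on R^3."""
    return sum(a*b for a, b in zip(u, v))

B = [(1, 2, 1), (2, 1, -4), (3, -2, 1)]
# pairwise orthogonality of nonzero vectors -> B is an orthogonal basis of R^3
assert all(dot(B[i], B[j]) == 0 for i in range(3) for j in range(i + 1, 3))

v = (7, 1, 9)
# Fourier coefficients c_i = <v, u_i> / <u_i, u_i>
coords = [F(dot(v, u), dot(u, u)) for u in B]
assert coords == [3, -1, 2]

# reconstruction check: v = 3*u1 - u2 + 2*u3
assert all(sum(c*u[k] for c, u in zip(coords, B)) == v[k] for k in range(3))
```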
Example. In C[0, 2π], consider the orthogonal set Sk = {1, cos t, sin t, cos 2t, sin 2t, . . . , cos kt, sin kt} and the space Vk = span Sk of trigonometric polynomials; then every f ∈ Vk can be expressed as
f(t) = c0 + a1 cos t + b1 sin t + · · · + ak cos kt + bk sin kt,
and evaluating the Fourier coefficients we get
c0 = (1/(2π)) ∫_0^{2π} f(t) dt,  an = (1/π) ∫_0^{2π} f(t) cos nt dt,  bn = (1/π) ∫_0^{2π} f(t) sin nt dt,  with 1 ≤ n ≤ k.
Observation. For a periodic function f(x) that is integrable on [0, 2π], the numbers
an = (1/π) ∫_0^{2π} f(x) cos(nx) dx, n ≥ 0, and bn = (1/π) ∫_0^{2π} f(x) sin(nx) dx, n ≥ 1,
are called the Fourier coefficients of f.
The infinite sum a0/2 + Σ_{n=1}^{∞} [an cos(nx) + bn sin(nx)] is called the Fourier series of f.
We can introduce the partial sums of the Fourier series for f,
(SN f)(x) = a0/2 + Σ_{n=1}^{N} [an cos(nx) + bn sin(nx)], N ≥ 0.
The partial sums for f are trigonometric polynomials. The functions SN f approximate the function f , and
the approximation improves as N tends to infinity. The Fourier series does not always converge, and even
when it does converge for a specific value x0 of x, the sum of the series at x0 may differ from the value f (x0 )
of the function. It is one of the main questions in harmonic analysis to decide when Fourier series converge,
and when the sum is equal to the original function.
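The coefficient formulas above can be evaluated numerically for a concrete f. A Python sketch using the sawtooth f(x) = x on [0, 2π), whose classical Fourier coefficients are a0 = 2π, an = 0 for n ≥ 1, and bn = −2/n (the helper names simpson and fourier_coeffs are illustrative):

```python
import math

def simpson(func, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i*h)
    return s * h / 3

def fourier_coeffs(f, k):
    """a_n for 0 <= n <= k and b_n for 1 <= n <= k of f on [0, 2*pi],
    per the integral formulas above."""
    two_pi = 2*math.pi
    a = [simpson(lambda x, n=n: f(x)*math.cos(n*x), 0, two_pi)/math.pi
         for n in range(k + 1)]
    b = [simpson(lambda x, n=n: f(x)*math.sin(n*x), 0, two_pi)/math.pi
         for n in range(1, k + 1)]
    return a, b

# sawtooth f(x) = x: known coefficients a_0 = 2*pi, a_n = 0, b_n = -2/n
a, b = fourier_coeffs(lambda x: x, 3)
assert abs(a[0] - 2*math.pi) < 1e-6
assert all(abs(an) < 1e-6 for an in a[1:])
assert all(abs(b[n-1] + 2/n) < 1e-6 for n in (1, 2, 3))
```

With these coefficients, (SN f)(x) = π − 2(sin x + sin 2x/2 + · · · + sin Nx/N), a standard example of a series converging to f away from the jump.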
Theorem. Let V be a Euclidean space of finite dimension; then V has an orthogonal basis (and thus also an orthonormal basis).
Proof. Gram–Schmidt orthogonalization process. Given any basis B = {v1, . . . , vn} of V, create a new basis S = {y1, . . . , yn} using the following process:
y1 = v1
y2 = v2 − (⟨v2, y1⟩/⟨y1, y1⟩) y1
y3 = v3 − (⟨v3, y2⟩/⟨y2, y2⟩) y2 − (⟨v3, y1⟩/⟨y1, y1⟩) y1
. . .
yn = vn − (⟨vn, yn−1⟩/⟨yn−1, yn−1⟩) yn−1 − · · · − (⟨vn, y2⟩/⟨y2, y2⟩) y2 − (⟨vn, y1⟩/⟨y1, y1⟩) y1
Example. Given W = span{v1, v2, v3} ⊂ ℝ4, where v1 = (1, 1, 1, 1), v2 = (1, 2, 4, 5), v3 = (1, −3, −4, −2), find an orthogonal basis of W, then an orthonormal basis of W.
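The Gram–Schmidt process can be sketched in Python with exact rational arithmetic. This version subtracts each projection as it goes (the "modified" form), which agrees with the classical formulas above in exact arithmetic since the earlier yj are already pairwise orthogonal:

```python
from fractions import Fraction as F

def dot(u, v):
    """Standard inner product on R^n."""
    return sum(a*b for a, b in zip(u, v))

def gram_schmidt(vs):
    """Orthogonalize a basis: y_i = v_i - sum_j (<v_i, y_j>/<y_j, y_j>) y_j."""
    ys = []
    for v in vs:
        y = [F(c) for c in v]
        for w in ys:
            coef = dot(y, w) / dot(w, w)      # projection coefficient onto w
            y = [yc - coef*wc for yc, wc in zip(y, w)]
        ys.append(y)
    return ys

v1, v2, v3 = (1, 1, 1, 1), (1, 2, 4, 5), (1, -3, -4, -2)
y1, y2, y3 = gram_schmidt([v1, v2, v3])

# y2 = v2 - 3*y1 = (-2, -1, 1, 2), and the new basis is pairwise orthogonal
assert y2 == [-2, -1, 1, 2]
assert dot(y1, y2) == dot(y1, y3) == dot(y2, y3) == 0
```

Normalizing each yi by its norm then yields the requested orthonormal basis of W.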
Definition. In a Euclidean space V we define the distance between two vectors x and y as d(x, y) = ||x − y||.
Definition. A linear transformation l : V → V, with V a Euclidean space, is symmetric if ⟨l(v), w⟩ = ⟨v, l(w)⟩ for every v, w ∈ V.
Theorem. Consider ℝn with the standard inner product; a linear transformation l : ℝn → ℝn is symmetric if and only if the matrix A(l) associated to l with respect to the standard basis of ℝn is symmetric.
Theorem. Given a symmetric matrix A, the roots of the characteristic polynomial are all real; thus all eigenvalues and eigenvectors of a symmetric matrix are real.
Theorem. Let V be a Euclidean space with dim V < ∞, and let l : V → V be a symmetric linear transformation. Then there exists an orthonormal basis B = {v1, . . . , vn} of V which consists of eigenvectors of l. The matrix D associated to l with respect to this orthonormal basis is diagonal. The matrix P whose columns are the coordinates of the vi satisfies P⁻¹ = Pᵀ and Pᵀ A P = D, where A is the matrix associated to l with respect to the original basis.
Example. Given l : ℝ2 → ℝ2, l(x, y) = (2x + 2y, 2x + 5y), show that l is diagonalizable and find a basis of ℝ2 made of orthonormal eigenvectors of l.
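The matrix of l in the standard basis is A = [[2, 2], [2, 5]], which is symmetric, so the spectral theorem above applies. Its characteristic polynomial is t² − 7t + 6, with roots 1 and 6. A Python sketch carrying out the 2×2 computation:

```python
import math

# matrix of l(x, y) = (2x + 2y, 2x + 5y) w.r.t. the standard basis (symmetric)
A = [[2.0, 2.0], [2.0, 5.0]]

# eigenvalues from the characteristic polynomial t^2 - tr(A)*t + det(A)
tr = A[0][0] + A[1][1]                      # 7
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]     # 6
disc = math.sqrt(tr*tr - 4*det)
lams = [(tr - disc)/2, (tr + disc)/2]       # 1 and 6
assert abs(lams[0] - 1) < 1e-9 and abs(lams[1] - 6) < 1e-9

# eigenvectors: (A - lam*I) v = 0 has solution v = (A[0][1], lam - A[0][0]);
# normalizing gives unit eigenvectors (the columns of the matrix P in the theorem)
P = []
for lam in lams:
    v = (A[0][1], lam - A[0][0])
    n = math.hypot(*v)
    P.append((v[0]/n, v[1]/n))

# distinct eigenvalues of a symmetric matrix -> orthogonal eigenvectors
assert abs(P[0][0]*P[1][0] + P[0][1]*P[1][1]) < 1e-9
# each v satisfies A v = lam * v
for lam, v in zip(lams, P):
    Av = (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])
    assert abs(Av[0] - lam*v[0]) < 1e-9 and abs(Av[1] - lam*v[1]) < 1e-9
```

In exact form the orthonormal eigenvectors are (2, −1)/√5 for λ = 1 and (1, 2)/√5 for λ = 6.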