Chapter 5
Euclidean Space R^n

5.1 Euclidean space R^n
Definition 5.1.1. Let n be a positive integer. Let

R^n = {(a1, a2, ..., an) : ai ∈ R}

be the set of all n-tuples of real numbers. R^n is called the n-space. An element (a1, a2, ..., an) is called a vector, and ai is called the i-th component of the vector.
Example 5.1.2. (1) For n = 1, 2, 3 we have R^1 = {(a) : a ∈ R}, R^2 = {(a1, a2) : ai ∈ R}, R^3 = {(a1, a2, a3) : ai ∈ R}.
(2) Sampling functions. For any function f : [0, 1] → R,

(f(0), f(1/4), f(2/4), f(3/4), f(1)) ∈ R^5.
Remark 5.1.3. An element (a1, a2, ..., an) in R^n can be regarded either as a generalized point or as a vector. We write v = (a1, a2, ..., an) when we treat it as a vector. O = (0, 0, ..., 0) ∈ R^n is called the zero vector.
Operations on R^n
For R^2 and R^3 we have defined addition, difference and scalar multiplication. Similar operations can be introduced for any R^n.
Definition 5.1.4. Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be vectors in R^n and let k be a scalar. We define:

Sum: u + v = (u1 + v1, ..., un + vn).
Scalar multiplication: ku = (ku1, ku2, ..., kun).
Negative: −u = (−u1, −u2, ..., −un).
Difference: u − v = u + (−v) = (u1 − v1, ..., un − vn).
Example 5.1.5. (1) 2(3, 6, 0, 1) − (1, 7, 4, 0) = (6, 12, 0, 2) + (−1, −7, −4, 0) = (5, 5, −4, 2).
(2) Let u = (1, 0, 4, 2), v = (4, 8, 1, 0) ∈ R^4. Find w ∈ R^4 such that 2u − 2w = v.
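These componentwise operations are easy to experiment with numerically. The following is a minimal sketch in plain Python (vectors represented as tuples) that checks part (1) and solves part (2): from 2u − 2w = v we get w = u − (1/2)v.

```python
def add(u, v):
    """Componentwise sum u + v."""
    return tuple(a + b for a, b in zip(u, v))

def scale(k, u):
    """Scalar multiple ku."""
    return tuple(k * a for a in u)

def sub(u, v):
    """Difference u - v = u + (-v)."""
    return add(u, scale(-1, v))

# Example 5.1.5 (1): 2(3,6,0,1) - (1,7,4,0)
print(sub(scale(2, (3, 6, 0, 1)), (1, 7, 4, 0)))  # (5, 5, -4, 2)

# Example 5.1.5 (2): 2u - 2w = v, so w = u - (1/2)v
u, v = (1, 0, 4, 2), (4, 8, 1, 0)
w = sub(u, scale(0.5, v))
print(w)  # (-1.0, -4.0, 3.5, 2.0)
```

One can check the answer by computing 2u − 2w and comparing it with v.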
Example 5.1.6. Let e1 = (1, 0, 0, 0), e2 = (0, 1, 0, 0), e3 = (0, 0, 1, 0), e4 = (0, 0, 0, 1). Show that for any u = (a1, a2, a3, a4) ∈ R^4 we have

u = a1 e1 + a2 e2 + a3 e3 + a4 e4.
Proposition 5.1.7. (Basic properties) Let u, v, w be vectors in R^n, and k, l be real numbers. Then:
(a) u + v = v + u
(b) u + (v + w) = (u + v) + w
(c) u + 0 = u
(d) u + (−u) = 0
(e) k(lu) = (kl)u
(f) k(u + v) = ku + kv
(g) (k + l)u = ku + lu.
(h) 1u = u.
5.2 Inner product
Definition 5.2.1. Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be two vectors in R^n. Then the inner product u · v is defined by

u · v = u1 v1 + u2 v2 + ... + un vn.

The inner product is also called the dot product.
Remark 5.2.2. The set R^n with the above operations is called the Euclidean n-space.
Example 5.2.3. (1) If u = (1, 2, −1, 0), v = (3, 7, 0, 2) ∈ R^4, then u · v = (1)(3) + (2)(7) + (−1)(0) + (0)(2) = 17, and u · u = 6.
(2) ei · ej = 0 if i ≠ j, and ei · ej = 1 if i = j.
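The inner product is straightforward to compute; here is a short sketch in plain Python (vectors as tuples) reproducing both parts of Example 5.2.3.

```python
def dot(u, v):
    """Inner (dot) product of two vectors of the same length."""
    return sum(a * b for a, b in zip(u, v))

# Example 5.2.3 (1)
u, v = (1, 2, -1, 0), (3, 7, 0, 2)
print(dot(u, v))  # 17
print(dot(u, u))  # 6

# Example 5.2.3 (2): e_i . e_j = 1 if i = j and 0 otherwise
e = [tuple(1 if j == i else 0 for j in range(4)) for i in range(4)]
print(all(dot(e[i], e[j]) == (1 if i == j else 0)
          for i in range(4) for j in range(4)))  # True
```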
Theorem 5.2.4. (Properties of inner product) If u, v, w are vectors in R^n and k, l are scalars, then
(a) u · v = v · u
(b) (u + v) · w = u · w + v · w
(c) (ku) · v = k(u · v)
(d) u · u ≥ 0, and u · u = 0 iff u = 0.
Example 5.2.5. (1) (3u + 2v) · (4u + v) = (3u) · (4u + v) + (2v) · (4u + v)
= (3u) · (4u) + (3u) · (v) + (2v) · (4u) + (2v) · (v)
= (12)(u · u) + (3)(u · v) + (8)(v · u) + (2)(v · v)
= 12(u · u) + 11(u · v) + 2(v · v).
(2) (u + v) · (u − v) = u · u − v · v.
Definition 5.2.6. The norm of u = (u1, u2, ..., un) ∈ R^n is defined as

‖u‖ = (u · u)^(1/2) = √(u1^2 + u2^2 + ... + un^2).
For any two vectors u and v in 2-space, |u · v| = ‖u‖‖v‖|cos θ| ≤ ‖u‖‖v‖. Does this inequality hold in every R^n?
Theorem 5.2.7. (Cauchy–Schwarz Inequality) If u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) are vectors in R^n, then

|u · v| ≤ ‖u‖‖v‖.
Example 5.2.8. (1) Verify the Cauchy–Schwarz inequality for u = (1, −1, 2), v = (−1, 2, 3).
(2) For any numbers a, b and angle θ,

(a sin θ + b cos θ)^2 ≤ a^2 + b^2.

(3) For any nonzero a, b, c,

(a^2 + b^2 + c^2)(1/a^2 + 1/b^2 + 1/c^2) ≥ 9.
Theorem 5.2.9. (Triangle Inequality) For any two vectors u and v in R^n,

‖u + v‖ ≤ ‖u‖ + ‖v‖.
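Both the Cauchy–Schwarz inequality and Theorem 5.2.9 can be checked numerically. A minimal sketch in plain Python, using the vectors of Example 5.2.8 (1):

```python
import math

def dot(u, v):
    """Inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Euclidean norm ||u|| = (u . u)^(1/2)."""
    return math.sqrt(dot(u, u))

u, v = (1, -1, 2), (-1, 2, 3)

# Cauchy-Schwarz: |u . v| <= ||u|| ||v||
print(abs(dot(u, v)))     # 3
print(norm(u) * norm(v))  # sqrt(6)*sqrt(14), about 9.17

# Triangle inequality: ||u + v|| <= ||u|| + ||v||
s = tuple(a + b for a, b in zip(u, v))
print(norm(s) <= norm(u) + norm(v))  # True
```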
Orthogonality

In R^2 and R^3, two vectors are orthogonal to each other if their dot product is zero. Using the inner product we can define orthogonality in any R^n. This notion has been found to be very useful.
Definition 5.2.10. Two vectors u and v in R^n are called orthogonal if

u · v = 0.
Example 5.2.11. In R^4, u = (0, 1, 0, 0) and v = (0, 0, 0, 1) are orthogonal since u · v = (0)(0) + (1)(0) + (0)(0) + (0)(1) = 0.
Theorem 5.2.12. (Pythagoras Theorem) If u and v are orthogonal vectors in R^n, then

‖u + v‖^2 = ‖u‖^2 + ‖v‖^2.
Example 5.2.13. Prove that if ‖u + v‖ = ‖u − v‖ then u is orthogonal to v.

Proof. ‖u + v‖ = ‖u − v‖ implies ((u + v) · (u + v))^(1/2) = ((u − v) · (u − v))^(1/2), and so

(u + v) · (u + v) = (u − v) · (u − v).

Hence (u · u) + 2(u · v) + (v · v) = (u · u) − 2(u · v) + (v · v), which implies 4(u · v) = 0, i.e. u · v = 0. So u and v are orthogonal.
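The Pythagoras theorem can likewise be verified directly for the orthogonal pair of Example 5.2.11; a short sketch in plain Python:

```python
def dot(u, v):
    """Inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

u, v = (0, 1, 0, 0), (0, 0, 0, 1)
s = tuple(a + b for a, b in zip(u, v))  # u + v

assert dot(u, v) == 0                      # u and v are orthogonal
assert dot(s, s) == dot(u, u) + dot(v, v)  # ||u+v||^2 = ||u||^2 + ||v||^2
print("Pythagoras verified")
```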
5.3 Linear transformations from R^n to R^m
Definition 5.3.1. (Linear transformation) A function f : R^n → R^m is called a linear transformation if for each (x1, x2, ..., xn) ∈ R^n, f(x1, ..., xn) = (w1, w2, ..., wm) ∈ R^m, where the wi are given by the following linear equations:

w1 = a11 x1 + a12 x2 + ... + a1n xn
w2 = a21 x1 + a22 x2 + ... + a2n xn
...
wm = am1 x1 + am2 x2 + ... + amn xn

where the aij are fixed scalars determined by f.
Example 5.3.2. (1) f : R^2 → R^2, f(x, y) = (x + y, 2x − y) is a linear transformation from R^2 to R^2.
(2) T : R^3 → R^3, T(x1, x2, x3) = (2x1 − x3, x2 + 4x3, x1 + x2 + x3) is a linear transformation.
(3) g : R^2 → R^2, g(x, y) = (xy, x) is not a linear transformation.
Example 5.3.3. (Matrix form) Given a linear transformation T : R^n → R^m, by the definition there are scalars aij such that T(x1, ..., xn) = (w1, ..., wm) where wi = ai1 x1 + ... + ain xn. Let A = (aij) be the m × n matrix with entries aij; then we see that

T(x) = Ax for any x = (x1, ..., xn).

The m × n matrix A determined by T is called the standard matrix of the linear transformation T. Obviously any linear transformation is uniquely determined by its standard matrix.
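The identity T(x) = Ax can be made concrete with a small sketch in plain Python (matrices as lists of rows), applied here to the transformation of Example 5.3.2 (2):

```python
def apply(A, x):
    """Apply a standard matrix A to a vector x, i.e. compute Ax."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

# T(x1, x2, x3) = (2x1 - x3, x2 + 4x3, x1 + x2 + x3)
A = [[2, 0, -1],
     [0, 1,  4],
     [1, 1,  1]]
print(apply(A, (1, 2, 3)))  # (-1, 14, 6)
```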
Example 5.3.4. (1) The standard matrix of the rotation

T(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ)

is

[T] = [ cos θ  −sin θ ]
      [ sin θ   cos θ ]

(2) Let f : R^3 → R^2 be given by f(x, y, z) = (x, y). Find the standard matrix of f.
(3) Suppose that T is a linear transformation whose standard matrix is

[T] = [ 1   1 ]
      [ 0  −1 ]
      [ 0   3 ]

Find T(a, b).
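For part (1), the rotation matrix can be built and tested numerically; a minimal sketch in plain Python (the 90-degree test vector is our own choice, not from the notes):

```python
import math

def rotation_matrix(theta):
    """Standard matrix of T(x, y) = (x cos t - y sin t, x sin t + y cos t)."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def apply(A, x):
    """Compute Ax for a matrix A stored as a list of rows."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

# Rotating (1, 0) by 90 degrees should give (0, 1), up to rounding
A = rotation_matrix(math.pi / 2)
x, y = apply(A, (1, 0))
print(round(x, 10), round(y, 10))  # 0.0 1.0
```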
Remark 5.3.5. (1) For any linear transformation T : R^n → R^m, if A is the standard matrix of T, then we write T = TA. We also use [T] to denote the standard matrix of T.
(2) For any m × n matrix A, there is a linear transformation

TA : R^n → R^m

defined by TA(x) = Ax for all x ∈ R^n. Hence there is a one-to-one correspondence between linear transformations from R^n to R^m and m × n matrices.
Example 5.3.6. (1) Identity transformation: If I is the identity n × n matrix, then TI(x) = Ix = x. So TI is the identity map on R^n.
(2) Reflection operators: T : R^2 → R^2 sends each vector to its symmetric image about the y-axis. Let T(x, y) = (w1, w2). Then w1 = −x, w2 = y. So T is linear and the standard matrix of T is

[T] = [ −1  0 ]
      [  0  1 ]
Composition of linear transformations

Question: Is the composition T1 ∘ T2 of two linear transformations still a linear transformation? If yes, what is the relation between [T1 ∘ T2] and [T1], [T2]?
Theorem 5.3.7. If T1 : R^n → R^s and T2 : R^s → R^m are both linear transformations, then T2 ∘ T1 : R^n → R^m is a linear transformation, and [T2 ∘ T1] = [T2][T1].
Example 5.3.8. Let T1 : R^2 → R^2, T1(x, y) = (x − y, x + y), and T2 : R^2 → R^2, T2(x, y) = (−y, −x). Then (T2 ∘ T1)(x, y) = T2(x − y, x + y) = (−x − y, −x + y). Hence

[T2 ∘ T1] = [ −1  −1 ] = [T2][T1].
            [ −1   1 ]
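The identity [T2 ∘ T1] = [T2][T1] from Theorem 5.3.7 can be verified for this example with a small sketch in plain Python (matrices as lists of rows):

```python
def matmul(A, B):
    """Product of two matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T1 = [[1, -1],
      [1,  1]]   # T1(x, y) = (x - y, x + y)
T2 = [[0, -1],
      [-1, 0]]   # T2(x, y) = (-y, -x), consistent with the composite above

print(matmul(T2, T1))  # [[-1, -1], [-1, 1]]
```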
5.4 Properties of linear transformations
Injective linear transformations
Example 5.4.1. The orthogonal projection F : R^2 → R^2 defined by F(x, y) = (x, 0) is not a one-to-one function. For example, F(1, 4) = F(1, 5) = (1, 0), but (1, 4) ≠ (1, 5). However, the linear transformation T : R^2 → R^2 that rotates each vector by an angle θ is one-to-one.
Notice that for the above F and T,

[F] = [ 1  0 ]        [T] = [ cos θ  −sin θ ]
      [ 0  0 ]              [ sin θ   cos θ ]
[F ] is not invertible, and [T ] is invertible. So whether a transformation is injective is
closely related to the invertibility of its standard matrix.
Theorem 5.4.2. Let T : Rn −→ Rn be a linear transformation. Then the following
statements are equivalent:
(a) T is injective.
(b) The standard matrix [T ] of T is invertible.
(c) T is surjective.
Remark 5.4.3. This theorem only applies to linear transformations from R^n to itself, not to arbitrary linear transformations.
Example 5.4.4. Let T : R^2 → R^2, T(x, y) = (xy, x − y). Determine whether T is injective (surjective).
Example 5.4.5. Show that the linear transformation T : R^2 → R^2 defined by the equations

w1 = 2x1 + x2
w2 = 3x1 + 4x2

is one-to-one, and find T^(−1)(w1, w2).
Solution: The standard matrix of T is

A = [ 2  1 ],    A^(−1) = [  4/5  −1/5 ]
    [ 3  4 ]              [ −3/5   2/5 ]

So T^(−1) = T_{A^(−1)} and

T^(−1)(w1, w2) = A^(−1) [ w1 ] = [ (4/5)w1 − (1/5)w2  ]
                        [ w2 ]   [ (−3/5)w1 + (2/5)w2 ]

Hence T^(−1)(w1, w2) = ((4/5)w1 − (1/5)w2, (−3/5)w1 + (2/5)w2).
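The computation above follows from the standard 2 × 2 inverse formula, which is easy to sketch in plain Python (the test values w1 = 5, w2 = 10 are our own choice):

```python
def inverse_2x2(A):
    """Inverse of [[a, b], [c, d]] via the adjugate formula, assuming det != 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "matrix is not invertible"
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[2, 1],
     [3, 4]]
Ainv = inverse_2x2(A)
print(Ainv)  # [[0.8, -0.2], [-0.6, 0.4]]

# T^{-1}(w1, w2) = A^{-1}(w1, w2)
w1, w2 = 5, 10
print((Ainv[0][0] * w1 + Ainv[0][1] * w2,
       Ainv[1][0] * w1 + Ainv[1][1] * w2))  # (2.0, 1.0)
```

Indeed T(2, 1) = (2·2 + 1, 3·2 + 4·1) = (5, 10), so T^(−1)(5, 10) = (2, 1) as expected.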
Characteristics of linear transformations
Theorem 5.4.6. A map T : R^n → R^m is a linear transformation if and only if it satisfies the following two conditions:

(a) T(u + v) = T(u) + T(v),
(b) T(cu) = cT(u)

for all vectors u, v in R^n and any scalar c.
Corollary 5.4.7. If T : R^n → R^m is linear, then for any vectors u1, u2, ..., uk and scalars l1, l2, ..., lk,

T(l1 u1 + l2 u2 + ... + lk uk) = l1 T(u1) + l2 T(u2) + ... + lk T(uk).
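The two conditions of Theorem 5.4.6 give a quick numerical test for linearity. A sketch in plain Python, using the maps from Example 5.3.2 (the sample vectors and scalar are our own choice):

```python
def T(p):
    """Linear map of Example 5.3.2 (1): T(x, y) = (x + y, 2x - y)."""
    x, y = p
    return (x + y, 2 * x - y)

def g(p):
    """Non-linear map of Example 5.3.2 (3): g(x, y) = (xy, x)."""
    x, y = p
    return (x * y, x)

u, v, c = (1, 2), (3, -1), 5

# T satisfies additivity and homogeneity ...
assert T((u[0] + v[0], u[1] + v[1])) == tuple(a + b for a, b in zip(T(u), T(v)))
assert T((c * u[0], c * u[1])) == tuple(c * a for a in T(u))

# ... while g fails homogeneity: g(cu) != c g(u)
print(g((c * u[0], c * u[1])), tuple(c * a for a in g(u)))  # (50, 5) (10, 5)
```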
Example 5.4.8. Let T : R^2 → R^2 be a linear transformation such that T(1, 1) = (0, 1) and T(−1, 1) = (1, 1).
(1) Find T (x, y).
(2) What is the standard matrix of T ?
(3) Determine if T is injective.
5.5 Eigenvalues and eigenvectors of linear operators [Optional]
Definition 5.5.1. Let T : R^n → R^n be a linear operator. A scalar λ is called an eigenvalue of T if there is a non-zero vector x in R^n such that T(x) = λx.
Remark 5.5.2. (1) If TA : R^n → R^n is multiplication by A, then T(x) = λx if and only if Ax = λx. So λ is an eigenvalue of T if and only if it is an eigenvalue of A.
(2) If λ is an eigenvalue of A and x is an eigenvector corresponding to λ, then (λI − A)x = 0. Since x ≠ 0, we get det(λI − A) = 0; that is, each eigenvalue of A satisfies this equation. Conversely, if λ satisfies this equation then it is an eigenvalue of A.
Example 5.5.3. Let T : R^2 → R^2 be the linear transformation defined by T(x, y) = (x − y, x − y). The standard matrix of T is

A = [ 1  −1 ]
    [ 1  −1 ]

and

det(λI − A) = | λ − 1    1    | = (λ − 1)(λ + 1) + 1 = λ^2.
              |  −1    λ + 1  |

So λ = 0 is the only eigenvalue of T.
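For a 2 × 2 matrix the eigenvalues can be read off from the characteristic polynomial via the quadratic formula; a minimal sketch in plain Python, applied to a matrix whose characteristic polynomial is λ^2, as in the computation above:

```python
import math

def eigenvalues_2x2(A):
    """Real eigenvalues of a 2x2 matrix, from det(lambda*I - A) = 0."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det           # discriminant of lambda^2 - tr*lambda + det
    if disc < 0:
        return []                      # no real eigenvalues
    r = math.sqrt(disc)
    return sorted({(tr - r) / 2, (tr + r) / 2})

A = [[1, -1],
     [1, -1]]
print(eigenvalues_2x2(A))  # [0.0]
```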