Lecture 12
Today we will prove the existence and uniqueness of the determinant.
Theorem 0.1. Let V be an F-vector space and {e1 , . . . , en } a basis. There exists a unique
n-linear alternating function f on V such that f (e1 , . . . , en ) = 1.
Proof. We will first prove uniqueness, so assume that f is an n-linear alternating function
on V such that f (e1 , . . . , en ) = 1. We will show that f must have a certain form. Let
v1 , . . . , vn ∈ V and write them as
vk = a1,k e1 + · · · + an,k en .
We can then expand using n-linearity:
f (v1 , . . . , vn ) = f (a1,1 e1 + · · · + an,1 en , v2 , . . . , vn )
= ∑_{i1=1}^{n} ai1,1 f (ei1 , v2 , . . . , vn )
= ∑_{i1=1}^{n} · · · ∑_{in=1}^{n} ai1,1 · · · ain,n f (ei1 , . . . , ein )
= ∑_{i1,...,in} ai1,1 · · · ain,n f (ei1 , . . . , ein ) .
Since f is alternating, all choices of i1 , . . . , in that are not distinct have f (ei1 , . . . , ein ) = 0.
So we can write this as
∑_{i1,...,in distinct} ai1,1 · · · ain,n f (ei1 , . . . , ein ) .
The choices of distinct i1 , . . . , in can be made using permutations: each permutation π ∈ Sn
gives exactly one such choice, namely ik = π(k). So we can rewrite the sum as
∑_{π∈Sn} aπ(1),1 · · · aπ(n),n f (eπ(1) , . . . , eπ(n) ) .
Using the lemma from last time, f (eπ(1) , . . . , eπ(n) ) = sgn(π)f (e1 , . . . , en ) = sgn(π), so
f (v1 , . . . , vn ) = ∑_{π∈Sn} sgn(π) aπ(1),1 · · · aπ(n),n .
If g is any other n-linear alternating function with g(e1 , . . . , en ) = 1 then the same computation as above gives the same formula for g(v1 , . . . , vn ), so f = g. This shows uniqueness.
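The formula just derived is exactly the permutation-sum (Leibniz) expansion of the determinant, and it can be transcribed into code essentially verbatim. Below is a minimal Python sketch (the function names and the column-list convention are ours, chosen for illustration, not part of the lecture):

```python
# Direct transcription of the permutation-sum formula derived above:
# f(v1, ..., vn) = sum over pi in S_n of sgn(pi) * a_{pi(1),1} ... a_{pi(n),n}.
# A matrix is given as a list of columns: cols[k][i] = a_{i,k}.
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based images,
    computed by counting inversions."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(cols):
    """Permutation-sum determinant; cols[k][i] is the entry a_{i,k}."""
    n = len(cols)
    total = 0
    for pi in permutations(range(n)):
        term = sign(pi)
        for k in range(n):
            term *= cols[k][pi[k]]   # the factor a_{pi(k), k}
        total += term
    return total

# Sanity checks: the standard basis gives 1, and a 2x2 example
# reproduces the familiar ad - bc.
print(det_leibniz([[1, 0], [0, 1]]))   # 1
print(det_leibniz([[1, 2], [3, 4]]))   # 1*4 - 2*3 = -2
```

Note that the sum has n! terms, so this is a faithful rendering of the formula rather than an efficient algorithm.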
For existence, we need to show that the formula above actually gives an n-linear alternating function with f (e1 , . . . , en ) = 1.
1. We first show f (e1 , . . . , en ) = 1. To do this, we write
ek = a1,k e1 + · · · + an,k en ,
where ak,j = 0 unless j = k, in which case it is 1. If π ∈ Sn is not the identity,
we can find k ≠ j such that π(k) = j. This means that aπ(k),k = aj,k = 0 and so
sgn(π)aπ(1),1 · · · aπ(n),n = 0. Therefore
f (e1 , . . . , en ) = ∑_{π∈Sn} sgn(π) aπ(1),1 · · · aπ(n),n = sgn(id) a1,1 · · · an,n = 1 .
2. Next we show alternating. Suppose that vi = vj for some i ≠ j and let τi,j be the
transposition (ij). Split all permutations into A, those which invert i and j, and Sn \ A,
those which do not. Then if vk = a1,k e1 + · · · + an,k en , we can write
f (v1 , . . . , vn ) = ∑_{π∈Sn} sgn(π) aπ(1),1 · · · aπ(n),n
= ∑_{π∈A} sgn(π) aπ(1),1 · · · aπ(n),n + ∑_{π∈A} sgn(πτi,j ) aπτi,j(1),1 · · · aπτi,j(n),n
= ∑_{π∈A} sgn(π) [aπ(1),1 · · · aπ(n),n − aπτi,j(1),1 · · · aπτi,j(n),n ] ,
where we have used that π ↦ πτi,j is a bijection from A to Sn \ A and that sgn(πτi,j ) = −sgn(π).
Note however that aπτi,j (i),i = aπ(j),i = aπ(j),j , since vi = vj . Similarly aπτi,j (j),j = aπ(i),i .
Therefore
aπ(1),1 · · · aπ(n),n = aπτi,j (1),1 · · · aπτi,j (n),n .
So the above sum is zero and we are done.
3. For n-linearity, we will just show it in the first coordinate. So let v, v1 , . . . , vn ∈ V and
c ∈ F. Writing
vk = a1,k e1 + · · · + an,k en and v = a1 e1 + · · · + an en ,
then cv + v1 = (ca1 + a1,1 )e1 + · · · + (can + an,1 )en , so
f (cv + v1 , v2 , . . . , vn ) = ∑_{π∈Sn} sgn(π)(c aπ(1) + aπ(1),1 ) aπ(2),2 · · · aπ(n),n
= c ∑_{π∈Sn} sgn(π) aπ(1) aπ(2),2 · · · aπ(n),n + ∑_{π∈Sn} sgn(π) aπ(1),1 · · · aπ(n),n
= c f (v, v2 , . . . , vn ) + f (v1 , . . . , vn ) .
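The three parts of the existence proof can be spot-checked numerically. The Python sketch below (a rendering of the permutation-sum formula with our own helper names, not from the lecture) verifies normalization, the alternating property, and linearity in the first coordinate on random integer vectors, where the identities hold exactly:

```python
# Spot-check of the three properties proved above for the
# permutation-sum formula, with n = 3 and integer vectors.
from itertools import permutations
from math import prod
from random import randint, seed

def sign(perm):
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def f(cols):
    """sum over pi of sgn(pi) * a_{pi(1),1} ... a_{pi(n),n}; cols[k][i] = a_{i,k}."""
    n = len(cols)
    return sum(sign(pi) * prod(cols[k][pi[k]] for k in range(n))
               for pi in permutations(range(n)))

seed(0)
n = 3
def rand_vec():
    return [randint(-5, 5) for _ in range(n)]

# 1. f(e1, ..., en) = 1 on the standard basis.
basis = [[1 if i == k else 0 for i in range(n)] for k in range(n)]
assert f(basis) == 1

# 2. Alternating: two equal arguments force the value 0.
v, w = rand_vec(), rand_vec()
assert f([v, v, w]) == 0

# 3. Linearity in the first coordinate: f(cu + v1, v2, v3) = c f(u, v2, v3) + f(v1, v2, v3).
u, v1, v2, v3 = rand_vec(), rand_vec(), rand_vec(), rand_vec()
c = 4
combo = [c * u[i] + v1[i] for i in range(n)]
assert f([combo, v2, v3]) == c * f([u, v2, v3]) + f([v1, v2, v3])
print("all three properties verified")
```

Because the arithmetic is exact, these assertions hold for every choice of integer vectors, not just the sampled ones.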
One nice property of n-linear alternating functions is that they can determine when
vectors are linearly independent.
Theorem 0.2. Let V be an n-dimensional F-vector space and f a nonzero n-linear alternating function on V . Then {v1 , . . . , vn } is linearly independent if and only if f (v1 , . . . , vn ) ≠ 0.
Proof. If n = 1 then the proof is an exercise, so take n ≥ 2 and first assume that the vectors
are linearly dependent. Then we can write one as a linear combination of the others. Suppose
for example that v1 = b2 v2 + · · · + bn vn . Then
f (v1 , . . . , vn ) = b2 f (v2 , v2 , . . . , vn ) + · · · + bn f (vn , v2 , . . . , vn ) = 0 .
Here we have used that f is alternating.
Conversely suppose that {v1 , . . . , vn } is linearly independent. Then it must be a basis.
Suppose for contradiction that f (v1 , . . . , vn ) = 0. Proceeding exactly along the development
given above, if u1 , . . . , un are vectors written as
uk = a1,k v1 + · · · + an,k vn ,
then we find
f (u1 , . . . , un ) = ∑_{π∈Sn} sgn(π) aπ(1),1 · · · aπ(n),n f (v1 , . . . , vn ) = 0 .
Therefore f is zero. This is a contradiction, so f (v1 , . . . , vn ) ≠ 0.
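Theorem 0.2 can be seen in action with a small concrete example (the vectors below and the column-list convention are ours, chosen for illustration): a family with a linear dependence gives 0, and replacing the dependent vector makes the value nonzero.

```python
# det vanishes exactly on linearly dependent families (Theorem 0.2).
from itertools import permutations
from math import prod

def sign(perm):
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def det(cols):
    """Permutation-sum determinant; cols[k][i] = a_{i,k}."""
    n = len(cols)
    return sum(sign(pi) * prod(cols[k][pi[k]] for k in range(n))
               for pi in permutations(range(n)))

v2, v3 = [1, 0, 2], [0, 1, 1]
v1 = [2 * a + 3 * b for a, b in zip(v2, v3)]  # v1 = 2*v2 + 3*v3: dependent
assert det([v1, v2, v3]) == 0

w1 = [1, 1, 0]  # replacing v1 by w1 makes the family independent
assert det([w1, v2, v3]) != 0
print("det detects linear (in)dependence")
```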
Definition 0.3. Choosing V = Fn and e1 , . . . , en the standard basis, we write det (the
determinant) for the unique n-linear alternating function such that det(e1 , . . . , en ) = 1. If
A ∈ Mn,n (F) we define det(A) = det(~a1 , . . . , ~an ), where ~ai is the i-th column of A.
Corollary 0.4. Let A ∈ Mn,n (F). Then det(A) ≠ 0 if and only if A is invertible.
Proof. By the previous theorem, det(A) ≠ 0 if and only if the columns of A are linearly
independent. This is equivalent to saying that the column rank of A is n, or that A is
invertible.
One of the most important properties of the determinant is that it is multiplicative: it turns matrix products (compositions of linear maps) into products of scalars.
Theorem 0.5. Let A, B ∈ Mn,n (F). We have the following factorization:
det AB = det A · det B .
Proof. First if det A = 0 the matrix A cannot be invertible and therefore neither is AB, so
det AB = 0, proving the formula in that case. Otherwise we have det A ≠ 0. In this case we
will use a method of proof that is very common when dealing with determinants. We will
define a function on matrices that is n-linear and alternating as a function of the columns,
mapping the identity to 1, and use the uniqueness of the determinant to see that it is just
the determinant function. So define f : Mn,n (F) → F by
f (B) = (det AB)/ det A .
First note that if In is the n × n identity matrix, f (In ) = (det AIn )/ det A = 1. Next, if
B has two equal columns, its column rank is strictly less than n and so is the column rank
of AB, meaning that AB is non-invertible. This gives f (B) = 0/ det A = 0.
Last to show n-linearity of f as a function of the columns of B, write B in terms of its
columns as (~b1 , . . . , ~bn ). Note that if ei is the i-th standard basis vector, then we can write
~bi = Bei . Therefore the i-th column of AB is (AB)ei = A~bi and AB = (A~b1 , . . . , A~bn ). Thus
if ~b1 , ~b′1 are column vectors and c ∈ F,
det [A(c~b1 + ~b′1 , ~b2 , . . . , ~bn )] = det(A(c~b1 + ~b′1 ), A~b2 , . . . , A~bn )
= det(cA~b1 + A~b′1 , A~b2 , . . . , A~bn )
= c det(A~b1 , A~b2 , . . . , A~bn ) + det(A~b′1 , A~b2 , . . . , A~bn ) .
This means that det AB is n-linear (at least in the first column – the same argument works
for all columns), and so is f .
There is exactly one n-linear alternating function of the columns taking the value 1 at In , so f (B) = det B for all B; that is, det AB = det A · det B.
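Theorem 0.5 can also be confirmed in exact arithmetic on random integer matrices. The sketch below (identifiers are ours; matrices are stored as lists of rows) computes both sides of det AB = det A · det B with the permutation-sum formula:

```python
# Checking det(AB) = det(A) * det(B) on random 3x3 integer matrices.
from itertools import permutations
from math import prod
from random import randint, seed

def sign(perm):
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def det(M):
    """Permutation-sum determinant; M is a list of rows, M[i][k] = a_{i,k}."""
    n = len(M)
    return sum(sign(pi) * prod(M[pi[k]][k] for k in range(n))
               for pi in permutations(range(n)))

def matmul(X, Y):
    """Ordinary matrix product of two square matrices given as rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

seed(1)
n = 3
A = [[randint(-3, 3) for _ in range(n)] for _ in range(n)]
B = [[randint(-3, 3) for _ in range(n)] for _ in range(n)]
assert det(matmul(A, B)) == det(A) * det(B)
print("det(AB) == det(A) * det(B)")
```

Since the entries are integers, the two sides agree exactly, with no floating-point tolerance needed.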
Here are some consequences.
• Similar matrices have the same determinant.
Proof.
det(P⁻¹AP ) = det(P⁻¹) det A det P = det A det(P⁻¹) det P = det A det In = det A .
• If A is invertible then det(A⁻¹) = 1/ det A.
Proof. det A · det(A⁻¹) = det(AA⁻¹) = det In = 1.
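Both consequences can be checked on a concrete example. In the sketch below, P and its exact integer inverse are chosen by hand (our example, not from the lecture) so that det(P⁻¹AP) = det A can be verified without any division:

```python
# Similar matrices share a determinant: a concrete integer check.
from itertools import permutations
from math import prod

def sign(perm):
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def det(M):
    """Permutation-sum determinant; M is a list of rows."""
    n = len(M)
    return sum(sign(pi) * prod(M[pi[k]][k] for k in range(n))
               for pi in permutations(range(n)))

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 3], [5, 7]]
P = [[1, 1], [0, 1]]
P_inv = [[1, -1], [0, 1]]          # exact inverse: P * P_inv = I2
assert matmul(P, P_inv) == [[1, 0], [0, 1]]
assert det(matmul(matmul(P_inv, A), P)) == det(A)
print("similar matrices share a determinant")
```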