
A Bridge between Algebra and Topology:
Swan’s Theorem
Daniel Hudson
Contents
1 Vector Bundles
2 Sections of Vector Bundles
3 Projective Modules
4 Swan’s Theorem
Introduction
Swan’s Theorem is a beautiful theorem which connects algebra and topology by asserting that,
for compact Hausdorff spaces X, there is a 1-1 correspondence between vector bundles over
X and finitely generated projective C(X)-modules. Perhaps it is not surprising that vector bundles give
rise to algebraic structures, since they are families of vector spaces with additional structure,
but it is certainly striking that projective modules, which are defined in a purely algebraic
setting, have a rich geometric interpretation.
In the first section we develop the necessary background for vector bundles, and in the
second section we introduce sections of vector bundles. In the third section we develop
projective modules. We conclude with the statement and proof of Swan’s Theorem.
These (brief) notes are to accompany a short talk I gave at my graduate student seminar
at UVic in the fall of 2016. Comments and corrections are most welcome at [email protected].
1 Vector Bundles
Definition 1.1. Let X be a topological space. A (real) family of vector spaces over X is
a topological space E and a continuous surjection π : E → X such that, for all x ∈ X,
each Ex := π −1 ({x}) is a real vector space such that addition and scalar multiplication are
continuous. We call the map π the projection, and Ex the fiber over x ∈ X.
One can equally well talk about complex families of vector spaces over X. In this case,
the fibers are complex vector spaces. For simplicity we will restrict ourselves to the real case,
but the main result holds for complex vector bundles too.
Example 1.1. If X is a topological space, then X × Rn , with projection onto the first factor, is a family of vector spaces over X
for each n ∈ N. It is called the trivial bundle of rank n.
Definition 1.2. If πE : E → X and πF : F → X are families of vector spaces over a topological space X and ϕ : E → F is a continuous map, then we say that ϕ is a homomorphism
of families (or just homomorphism, for short) if
1. the diagram formed by ϕ, πE , and πF commutes, i.e. πF ◦ ϕ = πE ;
2. for all x ∈ X, the map ϕx : Ex → Fx is a linear map of vector spaces.
If ϕ is a homeomorphism, then we call ϕ an isomorphism, and we write E ≅ F.
Note in particular that if we consider points of E as ordered pairs (x, e), where e ∈ Ex ,
then condition 1 says that ϕ fixes x. Thus condition 2 makes sense. Furthermore, observe that if ϕ is
an isomorphism, then each ϕx is an isomorphism of vector spaces.
We are now ready to define a vector bundle.
Definition 1.3. If a family π : E → X is isomorphic to X × Rn for some n, then we call E
trivial. We call E locally trivial if, for all x ∈ X, there exists an open neighbourhood U ⊂ X
containing x such that E|U := π −1 (U ) ≅ U × Rn for some n (which depends on U ). A locally
trivial family π : E → X is called a vector bundle.
Example 1.2. We will define a canonical line bundle (i.e. all the fibers are one dimensional)
on CPn , the complex projective space. Define
H ∗ := {(L, z) ∈ CPn × Cn+1 : z is a point in the line L}.
We give H ∗ the topology inherited as a subspace of CPn × Cn+1 and define the projection
π : H ∗ → CPn to be projection onto the first coordinate. Then the inverse image of each
point L ∈ CPn is that line as a subspace of Cn+1 , which carries an obvious vector space
structure. In particular, the fibers are one dimensional. Thus, to see that H ∗ is locally trivial
it suffices to find a locally non-vanishing section (cf. Proposition 2.1 below). Consider the sets
Ui := {[z0 , . . . , zn ] ∈ CPn : zi ≠ 0}.
The Ui cover CPn , and on each Ui we define the map
s([z0 , . . . , zn ]) = ([z0 , . . . , zn ], (z0 /zi , . . . , zn /zi )) ∈ H ∗[z0 ,...,zn ] .
Since the i-th coordinate of (z0 /zi , . . . , zn /zi ) is 1, we see that s is non-vanishing on Ui , hence is the
appropriate section. Thus, H ∗ is a vector bundle.
In general, most operations that can be done with vector spaces can be done with vector
bundles too. The main example of this, for us at least, is the ability to form the direct sum
of vector bundles. Given two vector bundles πE : E → X and πF : F → X, we define their
direct sum, or Whitney sum, as
E ⊕ F := {(e, f ) ∈ E × F : πE (e) = πF (f )},
with the projection map πE⊕F : (e, f ) ↦ πE (e), and given the subspace topology. Fibrewise
we have that (E ⊕ F )x = Ex ⊕ Fx .
2 Sections of Vector Bundles
A critical notion for the proof of Swan’s Theorem will be that of a section, which we shall now
discuss.
Definition 2.1. A section of a family π : E → X is a continuous map s : X → E such that
π ◦ s = idX . The collection of all sections of E is denoted Γ(E).
Using sections we can come up with an equivalent characterization of vector bundles.
Proposition 2.1. A family π : E → X is a vector bundle if and only if for each x ∈ X
there exists an open neighbourhood U containing x and sections {s1 , . . . , sn } on U such that
{s1 (y), . . . , sn (y)} is a basis of Ey for each y ∈ U .
Proof. Suppose that ϕ : E|U → U × Rn is an isomorphism and let e1 , . . . , en be the standard
basis of Rn . Then we define sections {s1 , . . . , sn } on U by si (y) = ϕ−1 (y, ei ), and these clearly
have the desired properties.
Conversely, if we have such a collection of sections {s1 , . . . , sn } on U with the desired
properties, then we can define an isomorphism φ : U × Rn → E|U by
φ(u, (v1 , . . . , vn )) = v1 s1 (u) + · · · + vn sn (u).
In particular, a vector bundle E is trivial if and only if we can find such sections defined
globally. This observation is critical for the main theorem.
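Since the proof of Proposition 2.1 is constructive, here is a minimal numerical sketch of its second half in Python (a toy example with an assumed frame, not taken from these notes): a frame of sections over U determines the trivialization φ fibre by fibre.

import numpy as np

# A minimal numerical sketch of Proposition 2.1 (assumed toy example, not from the notes):
# over U = (0, 1), take the frame s1(y) = (1, y), s2(y) = (0, 1) in the fibre Ey = R^2 of the
# trivial rank-2 bundle.  The map phi(u, v) = v1*s1(u) + v2*s2(u) from the proof is then a
# fibrewise linear isomorphism U x R^2 -> E|U, because {s1(y), s2(y)} is a basis of each fibre.

def frame(y):
    """Matrix whose columns are the frame vectors s1(y), s2(y)."""
    return np.array([[1.0, 0.0],
                     [y,   1.0]])

def phi(u, v):
    """The trivialization U x R^2 -> E|U built from the frame."""
    return frame(u) @ np.asarray(v, dtype=float)

# The frame matrix is invertible for every y, so phi(u, .) is a linear isomorphism onto the fibre.
for y in (0.1, 0.5, 0.9):
    assert abs(np.linalg.det(frame(y))) > 1e-12

print(phi(0.5, [2.0, 3.0]))   # = 2*s1(0.5) + 3*s2(0.5) = (2.0, 4.0)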
Definition 2.2. A Hermitian structure on a vector bundle π : E → X is a continuous
choice of inner product ⟨·, ·⟩x on each Ex . Here continuous means that for any two sections
s, t : X → E the map x ↦ ⟨s(x), t(x)⟩x is a continuous function on X.
Proposition 2.2. Any vector bundle E over a compact Hausdorff space X can be given a
Hermitian structure.
Proof. First, suppose that E = X × Rn . If ⟨·, ·⟩ denotes the standard inner product on Rn ,
then we define ⟨(x, v), (x, w)⟩x := ⟨v, w⟩. To see that this is continuous, let {s1 , . . . , sn } be a
collection of sections which form a basis of Ex for each x ∈ X. Then for any i, j, the map
x ↦ ⟨si (x), sj (x)⟩ factors as the composition
X → E × E → Rn × Rn → R,
x ↦ (si (x), sj (x)) ↦ (pr2 si (x), pr2 sj (x)) ↦ ⟨pr2 si (x), pr2 sj (x)⟩,
which shows that it is continuous.
For an arbitrary vector bundle E over X, use compactness to choose a finite open cover {Ui } of X
over which E is trivial. By the previous part, each E|Ui can be equipped with a Hermitian structure, say ⟨·, ·⟩Ui .
Let {ρi } be a partition of unity on X subordinate to {Ui }. Define
gUi (v, w) = ρi (u)⟨v, w⟩Ui if v, w ∈ Eu with u ∈ Ui , and gUi (v, w) = 0 otherwise.
Then
⟨·, ·⟩X := Σi gUi (·, ·)
defines a Hermitian structure on E: it is symmetric and bilinear on each fibre, and positive definite
because the ρi are nonnegative, sum to 1, and each ⟨·, ·⟩Ui is positive definite.
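The gluing step above can be made concrete numerically. The following Python sketch (an assumed example, not from these notes) patches two inner products on the trivial rank-2 bundle over [0, 1] with a partition of unity; a convex combination of inner products is again an inner product, which is why the glued form is positive definite everywhere.

import numpy as np

# A sketch of the gluing step in Proposition 2.2 (assumed example, not from the notes):
# over X = [0, 1], two inner products on the trivial rank-2 bundle are patched with a
# partition of unity rho1 + rho2 = 1 subordinate to U1 = [0, 0.7) and U2 = (0.3, 1].

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # a second positive definite form, <v, w>_2 = v^T A w

def rho1(x):
    """Piecewise-linear bump: 1 on [0, 0.3], 0 on [0.7, 1]."""
    return float(np.clip((0.7 - x) / 0.4, 0.0, 1.0))

def glued(x, v, w):
    """<v, w>_x = rho1(x) <v, w>_std + rho2(x) v^T A w, with rho2 = 1 - rho1."""
    v, w = np.asarray(v, float), np.asarray(w, float)
    return rho1(x) * (v @ w) + (1.0 - rho1(x)) * (v @ A @ w)

# Spot-check positive definiteness at a few points and directions.
rng = np.random.default_rng(0)
for x in np.linspace(0.0, 1.0, 5):
    v = rng.normal(size=2)
    assert glued(x, v, v) > 0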
3 Projective Modules
Modules over a ring R provide a generalization of vector spaces by allowing your scalars to
come from a general ring, rather than a field.
Definition 3.1. Let R be a ring. An R-module is an abelian group A equipped with a
multiplication R × A → A which distributes over the addition in R and in A and satisfies a
certain associativity condition; that is
1. (r + s)a = ra + sa for all r, s ∈ R, a ∈ A,
2. r(a + b) = ra + rb for all r ∈ R, a, b ∈ A,
3. r(sa) = (rs)a for all r, s ∈ R, a ∈ A.
If R has an identity then we say that A is unitary if 1a = a for all a ∈ A.
If A and B are R-modules, then a map ϕ : A → B is called R-linear if ϕ(ra + b) =
rϕ(a) + ϕ(b) for all r ∈ R, a, b ∈ A. If ϕ is bijective then we say that A and B are isomorphic
(as R-modules) and write A ≅ B.
Just like with vector spaces, if {Ai }i∈I is a collection of R-modules we define their direct
sum ⊕i∈I Ai to be the collection of all tuples (ai )i∈I where ai ∈ Ai and ai = 0 for all but
finitely many i.
Definition 3.2. If R is a ring with unity, then a (unitary) R-module A is free if A ≅ ⊕i∈I R
for some index set I. We say that A is projective if there is some R-module B such that A ⊕ B
is free. We say that A is finitely generated if there is a surjective homomorphism Rn → A for some n.
Example 3.1. The most relevant example to us is that the sections of a vector bundle E
over X form a (unitary) module over C(X), where
C(X) := {f : X → R : f is continuous}.
If s ∈ Γ(E) and f ∈ C(X) then we define (f s)(x) := f (x)s(x). Proposition 2.1 says that
a vector bundle E is trivial if and only if Γ(E) is a free and finitely generated C(X)-module.
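For readers who like to see the module structure concretely, here is a small Python sketch of the pointwise action (an assumed toy example with the trivial bundle, not from these notes).

import numpy as np

# A small sketch of the C(X)-module structure on Gamma(E) (assumed toy example, not from the
# notes): for the trivial bundle E = X x R^2 over X = [0, 1], a section is just a continuous
# map X -> R^2 and a function f in C(X) acts pointwise, (f s)(x) = f(x) s(x).

def module_action(f, s):
    """Return the section f*s defined by pointwise scalar multiplication."""
    return lambda x: f(x) * np.asarray(s(x), dtype=float)

s = lambda x: np.array([np.cos(x), np.sin(x)])   # a section of the trivial rank-2 bundle
f = lambda x: 1.0 + x ** 2                        # an element of C([0, 1])

fs = module_action(f, s)
print(fs(0.5))    # = (1 + 0.25) * (cos(0.5), sin(0.5))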
Proposition 3.1. A finitely generated R-module A is projective if and only if there exists an
idempotent P ∈ Mn (R) such that A ≅ P (Rn ) for some n.
Proof. If A is finitely generated and projective, then there is a split short exact sequence
0 → ker(ϕ) ↪ Rn → A → 0,
where the surjection ϕ : Rn → A comes from finite generation; the sequence splits because every
surjection onto a projective module splits. Hence there is an isomorphism φ : A ⊕ B → Rn , where
B = ker(ϕ). We obtain the required idempotent as P := φ ◦ pr1 ◦ φ−1 , where pr1 is the projection
of A ⊕ B onto its first summand; then P is idempotent and P (Rn ) = φ(A) ≅ A.
Conversely, if A ≅ P (Rn ) for some idempotent P ∈ Mn (R), then A ⊕ (1 − P )(Rn ) ≅ Rn ,
which shows that A is projective.
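The proof is easy to see in coordinates. The following Python sketch (an assumed example with R = R and n = 2, not from these notes) checks the idempotent identity and the resulting splitting of R2 into P (R2 ) and (1 − P )(R2 ) numerically.

import numpy as np

# A numerical sketch of Proposition 3.1 for R = R and n = 2 (assumed example, not from the
# notes): P is an idempotent 2x2 matrix, so R^2 splits as P(R^2) (+) (1 - P)(R^2),
# exhibiting A = P(R^2) as a direct summand of a free module, hence projective.

P = np.array([[0.5, 0.5],
              [0.5, 0.5]])             # orthogonal projection onto the line spanned by (1, 1)
I = np.eye(2)

assert np.allclose(P @ P, P)           # P is idempotent

v = np.array([3.0, -1.0])
# Every vector splits as v = Pv + (1 - P)v, and the two pieces live in complementary summands.
assert np.allclose(P @ v + (I - P) @ v, v)
assert np.allclose(P @ ((I - P) @ v), np.zeros(2))
print(P @ v, (I - P) @ v)              # the components of v in P(R^2) and (1 - P)(R^2)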
4 Swan’s Theorem
We are now ready to state and prove Swan’s theorem.
Theorem 4.1 (Swan). Let X be a compact Hausdorff space. Then there is a 1-1 correspondence between isomorphism classes of vector bundles over X and isomorphism classes of finitely generated projective C(X)-modules.
Moreover, the correspondence is given by
E ↦ Γ(E).
We break the proof up over a series of lemmas.
Lemma 4.1. If E is a vector bundle over X, then Γ(E) is finitely generated.
Proof. Since X is compact, let {Ui } be a finite open cover of X over which E is trivial, and let {ρi } be a partition of
unity subordinate to {Ui }. By Proposition 2.1, each Γ(E|Ui ) is free and finitely
generated. If s ∈ Γ(E|Ui ) is a generator, then we extend s to a global section by
s̃(x) = ρi (x)s(x) for x ∈ Ui , and s̃(x) = 0 otherwise.
Since each Γ(E|Ui ) is generated by finitely many sections and there are finitely many such
Ui , this shows that Γ(E) is finitely generated.
Lemma 4.2. If E is any vector bundle over X, then there exists a vector bundle E ⊥ such
that E ⊕ E ⊥ ≅ X × Rm for some m.
Proof. We will show that we can embed E in a trivial bundle, then we use a Hermitian
structure to define a projection onto E, which will in turn define E ⊥ .
Let {U1 , . . . , Un } be a finite open cover of X with trivializations {ϕ1 , . . . , ϕn } and let
{ρ1 , . . . , ρn } be a partition of unity subordinate to {U1 , . . . , Un }. If E has rank k (i.e. dim Ex =
k), then define
Φ : E → X × Rnk ,
(x, e) ↦ (x, ρ1 (x)1/2 ϕ1 (x, e), . . . , ρn (x)1/2 ϕn (x, e)),
where ϕi (x, e) denotes the Rk -component of the trivialization ϕi , extended by 0 outside Ui .
If ⟨·, ·⟩ is the standard Hermitian structure on X × Rnk , then we observe that
‖Φ(x, e)‖2 = ρ1 (x)‖ϕ1 (x, e)‖2 + · · · + ρn (x)‖ϕn (x, e)‖2 ≥ 0,
with equality if and only if (x, e) is the zero vector in Ex ; hence Φ is injective and E embeds
in X × Rnk . In particular, E is isomorphic to a sub-bundle of X × Rnk .
Now, thinking of E as a sub-bundle of X × Rnk , we use the Hermitian structure to define
the orthogonal projection Px : {x} × Rnk → Ex . Using the continuity of the Hermitian
form, we see that P is continuous, hence defining E ⊥ := (1 − P )(X × Rnk ) yields the desired
complementary bundle.
Theorem 4.2. If E is any vector bundle over X, then Γ(E) is a finitely generated and
projective C(X)-module.
Proof. By Lemma 4.2 there is a vector bundle E ⊥ with E ⊕ E ⊥ trivial, so by Proposition 2.1 (and
Example 3.1) the module Γ(E ⊕ E ⊥ ) is free and finitely generated. Since Γ(E) ⊕ Γ(E ⊥ ) ≅ Γ(E ⊕ E ⊥ ),
the module Γ(E) is a direct summand of a free module, hence projective; it is finitely generated by Lemma 4.1.
This proves the first half of Swan’s theorem. Before we prove the second part, we recall Proposition 3.1. In terms of C(X)-modules, this says that finitely generated projective
modules are in 1-1 correspondence with idempotents P ∈ Mn (C(X)). Since elements of
Mn (C(X)) correspond to continuous functions X → Mn (R), we see that finitely generated
projective C(X)-modules are in 1-1 correspondence with idempotent-valued continuous functions P : X → Mn (R).
Thus, to conclude the proof of Swan’s theorem it is sufficient to prove the following.
Lemma 4.3. If P : X → Mn (R) is a continuous idempotent-valued function, then
Im(P ) := {(x, v) ∈ X × Rn : v ∈ Range(P (x))}
is a vector bundle over X when equipped with the subspace topology and with projection given
by mapping onto the first coordinate. Moreover, Γ(Im(P )) = P (C(X)n ).
Proof. The fibrewise vector space structure is clear, so we must prove local triviality.
Fix x0 ∈ X. Then Range(P (x0 )) is a k-dimensional subspace of Rn . Let v1 , . . . , vk be a
basis for Range(P (x0 )) and extend it to a basis v1 , . . . , vn of Rn . The matrix-valued function
P̃ (x) = [P (x)v1 |P (x)v2 | · · · |P (x)vk |vk+1 | · · · |vn ] ∈ Mn (R)
is such that P̃ (x0 ) is invertible. Indeed, since v1 , . . . , vk ∈ Range(P (x0 )) and P (x0 ) acts as the
identity on its range, we have P (x0 )vi = vi , so the columns of P̃ (x0 ) are the basis v1 , . . . , vn of Rn .
Since GLn (R) is open in Mn (R) and P̃ is continuous, there exists an open set U ⊂ X containing x0
such that P̃ (x) is invertible for all x ∈ U . In particular, for all x ∈ U , the vectors P (x)v1 , . . . , P (x)vk
are linearly independent. Shrinking U if necessary, we may also assume that rank P (x) = k for all
x ∈ U , since the rank of an idempotent equals its trace, which is continuous and integer-valued. Thus, setting
si (x) = P (x)vi , i = 1, . . . , k, x ∈ U,
gives sections of Im(P )|U which form a basis of each fibre, hence a trivialization of Im(P )|U . Furthermore, a section of Im(P ) is, by definition, a continuous function s on X such that
s(x) ∈ Range(P (x)), from which we see that s ∈ P (C(X)n ). Thus, Γ(Im(P )) = P (C(X)n ),
which completes the proof.
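A concrete instance of Lemma 4.3 is the Möbius line bundle over the circle; the following Python sketch (the standard example, assumed here rather than taken from these notes) verifies numerically that the relevant P is idempotent-valued, well defined on the circle, and of constant rank one.

import numpy as np

# A numerical sketch of Lemma 4.3 (standard Mobius-band example, assumed rather than taken
# from the notes): over X = S^1, parametrized by theta, let P(theta) be orthogonal projection
# onto the line spanned by (cos(theta/2), sin(theta/2)).  Each P(theta) is an idempotent in
# M_2(R), and Im(P) is then a rank-1 vector bundle over S^1 -- the Mobius line bundle.

def P(theta):
    v = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
    return np.outer(v, v)               # projection onto span{v}; v is a unit vector

for theta in np.linspace(0.0, 2.0 * np.pi, 7):
    M = P(theta)
    assert np.allclose(M @ M, M)                     # idempotent-valued
    assert np.isclose(np.trace(M), 1.0)              # rank 1, so each fibre is a line

# Well defined on S^1: the half-angle sign ambiguity cancels in the projection.
assert np.allclose(P(0.3), P(0.3 + 2.0 * np.pi))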