COVECTORS AND FORMS – ELEMENTARY
INTRODUCTION TO DIFFERENTIAL GEOMETRY IN
EUCLIDEAN SPACE
NIKO MAROLA AND WILLIAM P. ZIEMER
Preface
The purpose of this note is to provide an elementary and more detailed treatment of the algebra of differential forms in Rn, and in R3
in particular. The content of the note is a much watered-down version of the beginning of Geometric Integration Theory by Hassler
Whitney [7]. There is also a more recent treatise on this subject
by Krantz–Parks [5]. It is assumed that the reader has
not received any formal instruction in differential geometry, advanced
analysis, or linear algebra. This deficiency creates three drawbacks:
(a) The development of the Grassmann algebra of three space requires more than half the note.
(b) Several of the proofs are unduly computational, because to give
simple proofs would have involved introducing even more structure. See, for instance, the duality theorem
(R32)∗ ≅ the dual space of R32

in §7.1–7.2.
(c) There is no coordinate-free treatment, as is done in Whitney [7].
Should you have any comments, please email [email protected].
Contents
1. Vectors - Brief review of Euclidean 3-space
2. The space of covectors
3. Differential 1-forms
4. The space of 2-vectors
5. The space of 3-vectors
6. The space of p-vectors in Rn
7. The space of 2-covectors
7.1. Definition I
7.2. Definition II
8. The space of 3-covectors
9. The space of p-covectors in Rn
10. Applications to area theory
10.1. Case 1
10.2. Case 2
10.3. Case 3
10.4. General case
11. Differential 2-forms
12. Differential 3-forms
13. The exterior algebra of R3
13.1. Exterior products of k-covectors (k = 0, 1, 2, 3)
14. The algebra of differential forms
14.1. Exterior derivative of differential forms
15. Effects of a transformation on differential forms
16. The Gauss–Green–Stokes theorems
17. A glance at currents in Rn
References
1. Vectors - Brief review of Euclidean 3-space
We denote the basis vectors in R3 by e1, e2, e3, and by e∗1, e∗2, e∗3 the
dual basis for the dual space (R3)∗ = {f : R3 → R1 : f linear},
with e∗i(ej) = δij for each pair of indices i, j. The standard inner product and
Euclidean norm on R3 are denoted by (·, ·) and | · |, respectively.
Addition of vectors, multiplication of a vector by a real number, and
the dot product of two vectors in R3 satisfy the following fundamental
rules:
For all vectors u, v, w ∈ R3, and all scalars α, β ∈ R1:

1. (u + v) + w = u + (v + w)
2. u + v = v + u
3. there is a zero vector 0 so that 0 + v = v
4. for each v there is a negative of v so that v + (−v) = 0
5. α(u + v) = αu + αv
6. (α + β)u = αu + βu
7. (αβ)u = α(βu)
8. 1u = u
9. u · v = v · u
10. u · (v + w) = u · v + u · w
11. (αu) · v = α(u · v)
12. u · u ≥ 0, and u · u = 0 iff u = 0
From these 12 basic properties, the following additional results can
be deduced:
13. 0v = 0
14. α0 = 0
15. 0 · u = 0
16. α(−u) = (−α)u = −(αu)
The norm of a vector, |u| = √(u · u), satisfies:
17. |u| = 0 iff u = 0
18. |αu| = |α||u|
19. |u + v| ≤ |u| + |v|
20. Schwarz inequality: |u · v| ≤ |u||v|
Properties 17–20 may be deduced from the twelve basic properties and
definition of norm.
2. The space of covectors
A covector is a linear map f : R3 → R1 . That is to say, it is a
function f satisfying
f (α1 v1 + α2 v2 ) = α1 f (v1 ) + α2 f (v2 ).
The set of all covectors is denoted
(R31 )∗ .
If f and g are covectors, and α ∈ R1 , we define:
(1) (f + g)(v) = f (v) + g(v),
(2) (αf )(v) = α(f (v)).
Proposition 2.1. The sum of covectors is a covector. The product of
a real number and a covector is a covector.
Proof. The proof is left to the reader.
Proposition 2.2. Addition and scalar multiplication of covectors satisfy the fundamental rules 1–8 of the preceding section.
Proof. The proof is left to the reader.
Proposition 2.3. A covector f ∈ (R31 )∗ is completely determined by
its value on the three basis vectors e1 , e2 , e3 of R3 .
Proof. If v ∈ R3 , then v can be uniquely expressed as v = α1 e1 +
α2 e2 + α3 e3 . Now using the linearity of the covector f , we have f (v) =
α1 f (e1 ) + α2 f (e2 ) + α3 f (e3 ). Since f (e1 ), f (e2 ), f (e3 ) are known, so
is f (v).
Proposition 2.4. Conversely, if a function f : R3 → R1 is defined by
selecting 3 numbers f (e1 ), f (e2 ), f (e3 ) arbitrarily, and extending the
definition of f to all vectors v ∈ R3 by the linearity
f (α1 e1 + α2 e2 + α3 e3 ) = α1 f (e1 ) + α2 f (e2 ) + α3 f (e3 ),
then the function f so defined is a covector.
Proof. The proof is left to the reader.
We now wish to define a basis for the space of covectors. We define
these covectors f1, f2, f3 by the formula

fi(ej) = 0 if i ≠ j,   fi(ej) = 1 if i = j.
By the preceding two propositions, f1 , f2 , f3 are well-defined covectors.
Note also that fi (a1 , a2 , a3 ) = ai , i = 1, 2, 3.
Exercise 2.5. Prove that the three covectors f1 , f2 , f3 just defined form
a basis for the space of covectors.
It is possible to define a dot product between covectors in such a way
that fundamental properties 9–12 of the preceding section are satisfied,
and thus obtain a complete analogy between the space of vectors (R3 )
and the space of covectors (R31 )∗ . Note also that all concepts of the
preceding sections can be generalized to n-space. In particular, an
n-dimensional covector would be a linear function f : Rn → R1 .
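As an illustration of Propositions 2.3 and 2.4 (a sketch in code, not part of the text; the helper name `covector` is ours), a covector can be modeled by the triple of its values on e1, e2, e3:

```python
# A covector on R^3 is determined by its values on e1, e2, e3
# (Propositions 2.3 and 2.4).  We model it as the triple of those values.

def covector(c1, c2, c3):
    """Return the linear map f with f(e_i) = c_i."""
    return lambda v: c1 * v[0] + c2 * v[1] + c3 * v[2]

# The basis covectors f1, f2, f3 with f_i(e_j) = delta_ij:
f1 = covector(1, 0, 0)
f2 = covector(0, 1, 0)
f3 = covector(0, 0, 1)

# f_i picks out the i-th coordinate: f2(a1, a2, a3) = a2.
assert f2((7, -4, 9)) == -4

# Linearity check for an arbitrary covector f = 2 f1 - f2 + 5 f3:
f = covector(2, -1, 5)
u, v, a, b = (1, 2, 3), (-1, 0, 4), 2.0, -3.0
lhs = f(tuple(a * ui + b * vi for ui, vi in zip(u, v)))
assert abs(lhs - (a * f(u) + b * f(v))) < 1e-12
```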
3. Differential 1-forms
Let Ω be an open set in R3 . A differential 1-form on Ω is a function
w : Ω → (R31 )∗ .
Thus to each point p ∈ Ω the differential 1-form w associates a linear
function w(p) : R3 → R1 . In other words, a differential 1-form is a
covector-valued function.
Example 3.1. Suppose that f : Ω → R1 , f ∈ C 1 (Ω). Then we define
a function
df : Ω → (R31 )∗
by the rule: for each point p ∈ Ω, df associates the covector df (p), i.e.,
the differential of the function f at the point p. We already know that
df (p) is a linear function for each p, so the function df just defined is
indeed a differential 1-form.
Thus we see that the class of all differential 1-forms on open sets Ω
includes among its members all the "differentials", i.e., special 1-forms
which can be obtained from a function f in the manner just illustrated.
However, the notion of differential 1-form is more general than the
notion of the differential of a function; that is, there are 1-forms w which
cannot be expressed as df for any function f. We shall prove this
presently. We will drop the word "differential" from the name for the
sake of brevity.
Proposition 3.2. Let w be a 1-form on Ω. Then w can be represented
by
w(p) = A1 (p)f1 + A2 (p)f2 + A3 (p)f3
for all p, where Ai , i = 1, 2, 3, is a real-valued function defined on Ω,
and fi , i = 1, 2, 3, is the standard basis covector defined in §2. This
representation is unique.
Proof. Fix a point p ∈ Ω. Then w(p) ∈ (R31 )∗ . Hence the covector
w(p) can be uniquely expressed in terms of the three basis covectors
f1 , f2 , f3 , by w(p) = A1 f1 + A2 f2 + A3 f3 . The coefficients A1 , A2 , A3 are
determined by the covector w(p), which in turn depends on the choice
of the point p ∈ Ω. Hence the Ai ’s are in fact functions of p.
Let w be a 1-form on Ω. Then the three functions A1 , A2 , A3 on
Ω are called the coordinate functions of w. A 1-form w is said to be
continuous (differentiable, in class C 1 , etc.) if and only if its coordinate
functions are continuous (differentiable, in class C 1 , etc.).
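Example 3.1 can be made concrete numerically: the coordinate functions of df are the partial derivatives of f, and df(p)(v) is the directional derivative of f at p in the direction v. A finite-difference sketch (the helper name `d` and the sample function are ours, chosen only for illustration):

```python
import math

def d(f, p, h=1e-6):
    """Coordinate functions (A1, A2, A3) of the 1-form df at p, i.e. the
    partial derivatives of f, approximated by central differences."""
    def partial(i):
        hi = [h if j == i else 0.0 for j in range(3)]
        return (f([p[j] + hi[j] for j in range(3)])
                - f([p[j] - hi[j] for j in range(3)])) / (2 * h)
    return tuple(partial(i) for i in range(3))

# A concrete C^1 function on R^3:
f = lambda p: p[0] ** 2 * p[1] + math.sin(p[2])
p = [1.0, 2.0, 0.5]
A1, A2, A3 = d(f, p)

# df(p)(v) = A1 f1(v) + A2 f2(v) + A3 f3(v) should agree with the
# directional derivative of f at p in the direction v.
v = (0.3, -1.0, 2.0)
df_p_v = A1 * v[0] + A2 * v[1] + A3 * v[2]
t = 1e-6
directional = (f([p[i] + t * v[i] for i in range(3)]) - f(p)) / t
assert abs(df_p_v - directional) < 1e-4
```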
We now turn to the notion of integration of 1-forms over curves. Let
w be a 1-form on an open set Ω. Let γ : [a, b] → R3 be a smooth curve
such that trace(γ) ⊂ Ω. Then the integral of w over γ is given by

∫_γ w = ∫_a^b w(γ(t))(γ′(t)) dt.
In order that the left-hand side should always exist, we shall assume
that the 1-form w is continuous. Let us write out the definition of
∫_γ w in coordinate form. Suppose w(p) = ∑_{i=1}^{3} Ai(p) fi, and γ(t) =
(x(t), y(t), z(t)). Then
w(γ(t)) = ∑_{i=1}^{3} Ai(γ(t)) fi.
Thus by using the rules of addition and scalar multiplication of covectors,
we have that

w(γ(t))(γ′(t)) = [(A1 ◦ γ)x′ + (A2 ◦ γ)y′ + (A3 ◦ γ)z′](t).
Now using the definition of ∫_γ w,

∫_γ w = ∫_a^b [(A1 ◦ γ)x′ + (A2 ◦ γ)y′ + (A3 ◦ γ)z′](t) dt.
Example 3.3. For the sake of simplicity, and to show how the notion of
1-form can be generalized to n-space, where n ≠ 3, let us integrate a 1-form in the plane over a curve in the plane. Let γ(t) = (5 cos t, 5 sin t),
0 ≤ t ≤ 2π, and w(x, y) = x²y f1 + xy f2. Here of course f1, f2 are the basis
covectors for (R21)∗, defined by

fi(ej) = 0 if i ≠ j,   fi(ej) = 1 if i = j.
Then γ′(t) = (−5 sin t, 5 cos t), and w(γ(t)) = 125 cos² t sin t f1 + 25 cos t sin t f2,
so that

∫_γ w = ∫_0^{2π} (−625 cos² t sin² t + 125 cos² t sin t) dt.
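This integral can be checked numerically straight from the definition of ∫_γ w; since the cos² t sin t term integrates to zero over a full period, the exact value is −625π/4. A midpoint Riemann-sum sketch (the helper name `integrate_1form` is ours):

```python
import math

def integrate_1form(A, gamma, dgamma, a, b, n=20000):
    """Riemann-sum approximation of the integral of the planar 1-form
    w = A1 f1 + A2 f2 over gamma, directly from the definition:
    the sum of w(gamma(t))(gamma'(t)) dt."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h            # midpoint rule
        A1, A2 = A(gamma(t))
        xp, yp = dgamma(t)
        total += (A1 * xp + A2 * yp) * h
    return total

# Example 3.3: w(x, y) = x^2 y f1 + x y f2, gamma(t) = (5 cos t, 5 sin t).
A = lambda p: (p[0] ** 2 * p[1], p[0] * p[1])
gamma = lambda t: (5 * math.cos(t), 5 * math.sin(t))
dgamma = lambda t: (-5 * math.sin(t), 5 * math.cos(t))

value = integrate_1form(A, gamma, dgamma, 0.0, 2 * math.pi)
# The cos^2 t sin t term vanishes over a full period, leaving
# -625 times the integral of cos^2 t sin^2 t, i.e. -625 pi / 4.
assert abs(value - (-625 * math.pi / 4)) < 1e-6
```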
As a second example, let w : Ω → (R31)∗, where Ω = {(x, y, z) ∈ R3 :
xy > 0}, be given by w(x, y, z) = log(xy) f2. Notice that the coordinate
functions A1, A3 are zero. Let γ(t) = (2t, e^t, 1), 0 ≤ t ≤ 1. Then

∫_γ w = ∫_0^1 (log 2 + log t + t) e^t dt.
Theorem 3.4. Let w be a continuous 1-form on Ω (into (R31)∗) and
γ : [a, b] → Ω, µ : [c, d] → Ω be smoothly equivalent curves. Then

∫_γ w = ∫_µ w.
Proof. The proof is left to the reader.
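The theorem can be checked numerically for the circle of Example 3.3 and a smoothly equivalent reparametrization of our own choosing, µ(s) = γ(πs(s + 1)), s ∈ [0, 1] (the parameter change πs(s + 1) is smooth, onto [0, 2π], and has positive derivative):

```python
import math

def line_integral(A, gamma, a, b, n=40000, eps=1e-6):
    """Midpoint-rule approximation of the integral of the planar 1-form
    w = A1 f1 + A2 f2 over gamma, with gamma' by central differences."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        x0, y0 = gamma(t - eps)
        x1, y1 = gamma(t + eps)
        xp, yp = (x1 - x0) / (2 * eps), (y1 - y0) / (2 * eps)
        A1, A2 = A(gamma(t))
        total += (A1 * xp + A2 * yp) * h
    return total

A = lambda p: (p[0] ** 2 * p[1], p[0] * p[1])
gamma = lambda t: (5 * math.cos(t), 5 * math.sin(t))      # t in [0, 2 pi]
mu = lambda s: gamma(math.pi * s * (s + 1))               # s in [0, 1]

I1 = line_integral(A, gamma, 0.0, 2 * math.pi)
I2 = line_integral(A, mu, 0.0, 1.0)
assert abs(I1 - I2) < 1e-2      # the two integrals agree, as Theorem 3.4 asserts
```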
4. The space of 2-vectors
Let u and v be any vectors in R3 . We consider expressions of the
form
u∧v
(read "u wedge v"). This object is called the wedge product of the
vectors u and v. In general, we shall consider expressions of the form
ξ = α1 (u1 ∧ v1 ) + . . . + αk (uk ∧ vk ),
where the ui ’s and vi ’s are vectors in R3 , and the αi ’s are real numbers.
Such expressions are called 2-vectors in R3 . A 2-vector, then, is simply
a linear combination of wedge products. The set of all 2-vectors is
denoted R32. Now, if ξ = α1 (u1 ∧ v1 ) + . . . + αk (uk ∧ vk ) and η =
β1 (w1 ∧ x1 ) + . . . + βk (wk ∧ xk ) are 2-vectors, we may define the sum
by
ξ + η = α1 (u1 ∧ v1 ) + . . . + αk (uk ∧ vk ) + β1 (w1 ∧ x1 ) + . . . + βk (wk ∧ xk ).
That is, we merely string the two expressions together in one linear
combination. Similarly, if α is a real number, and ξ is as above, then we
define αξ by αα1 (u1 ∧ v1 ) + . . . + ααk (uk ∧ vk ).
We want the set R32 of 2-vectors, with the rules for addition and
scalar multiplication just defined, to satisfy fundamental rules 1–8 of
§1. Accordingly we make the following assumptions:
(1) α1 (u1 ∧ v1 ) + α2 (u2 ∧ v2 ) = α2 (u2 ∧ v2 ) + α1 (u1 ∧ v1 ),
(2) α(u ∧ v) + β(u ∧ v) = (α + β)(u ∧ v),
(3) 1(u ∧ v) = (u ∧ v).
Under these assumptions, the rules 1–8 of §1 are satisfied. In particular, the zero 2-vector will be denoted 0, and may be thought of as a
linear combination in which the coefficients are all zeros. We also make
the following four additional assumptions:
(4) (u1 + u2 ) ∧ v = (u1 ∧ v) + (u2 ∧ v),
(5) u ∧ (v1 + v2 ) = (u ∧ v1 ) + (u ∧ v2 ),
(6) (αu) ∧ v = u ∧ (αv) = α(u ∧ v),
(7) u ∧ u = 0.
This completes our definition. Observe that the definition is axiomatic, rather than constructive.
Proposition 4.1. u ∧ v = −(v ∧ u).
Proof.
(u ∧ v) + (v ∧ u) = (u ∧ u) + (u ∧ v) + (v ∧ u) + (v ∧ v)
= u ∧ (u + v) + v ∧ (u + v)
= (u + v) ∧ (u + v) = 0.
A 2-vector of the form u ∧ v, i.e., a single wedge product, is called
a simple 2-vector.
Assumptions 4–7 of the definition of the space of 2-vectors given
above give us hope of reducing more complicated 2-vectors to simple
2-vectors. For example, assumptions 4 and 5 enable us to reduce certain 2-vectors
involving two wedge products to simple 2-vectors. Thus the 2-vector
(u1 ∧ v) + (u2 ∧ v) is simple, because it can be written as (u1 + u2 ) ∧ v
by 4.
Exercise 4.2. Prove that the following 2-vectors are simple:
a) 2(u ∧ v) + (u ∧ w) + ((v + w) ∧ u),
b) (u ∧ v) + (v ∧ w) + (w ∧ u).
Remark 4.3. It may seem that perhaps every 2-vector is simple. However, this is not the case.
We next wish to investigate the problem of finding basis 2-vectors.
Consider the 2-vectors e1 ∧ e2 , e1 ∧ e3 , and e2 ∧ e3 . It is easy to see
that every 2-vector can be written as a linear combination of these
three. These three 2-vectors are also linearly independent, hence we
have found a basis.
Using the linear independence of these basis 2-vectors, you should
be able to answer the question about whether all 2-vectors are simple.
Thus far we have obtained a good analogy between the space of 2-vectors and the space R3, at least so far as fundamental rules 1–8 and
the existence of a basis are concerned. For the sake of comparison, it is
convenient to think of R3 as the space of 1-vectors, and to denote it by
R31 . The analogy becomes even clearer if we establish a correspondence
between 2-vectors and 1-vectors by
e1 ∧ e2 ←→ e3
e1 ∧ e3 ←→ −e2
e2 ∧ e3 ←→ e1 .
Under this correspondence, a general 2-vector ξ = α1 (e1 ∧ e2 ) + α2 (e1 ∧
e3 ) + α3 (e2 ∧ e3 ) corresponds to α1 e3 − α2 e2 + α3 e1 . This correspondence can be used to identify the wedge product with the cross product
of classical vector analysis. The geometric interpretation is that u ∧ v
represents the oriented parallelogram spanned by the vectors u and v,
and |u ∧ v| gives its area.
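The correspondence with the cross product is easy to test in code. The following sketch (the helper names `wedge`, `to_1vector`, `cross` are ours) computes the coefficients of u ∧ v in the basis e1 ∧ e2, e1 ∧ e3, e2 ∧ e3 and compares with the classical cross product:

```python
# Coefficients of u ^ v in the basis e1^e2, e1^e3, e2^e3, obtained by
# expanding with the bilinearity rules (4)-(6) and u ^ u = 0:
def wedge(u, v):
    a12 = u[0] * v[1] - u[1] * v[0]   # coefficient of e1 ^ e2
    a13 = u[0] * v[2] - u[2] * v[0]   # coefficient of e1 ^ e3
    a23 = u[1] * v[2] - u[2] * v[1]   # coefficient of e2 ^ e3
    return (a12, a13, a23)

def to_1vector(xi):
    """Apply the correspondence e1^e2 <-> e3, e1^e3 <-> -e2, e2^e3 <-> e1."""
    a12, a13, a23 = xi
    return (a23, -a13, a12)

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1, 2, 3), (4, -1, 2)
assert to_1vector(wedge(u, v)) == cross(u, v)
assert wedge(u, u) == (0, 0, 0)                        # assumption (7)
assert wedge(u, v) == tuple(-c for c in wedge(v, u))   # Proposition 4.1
```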
To complete the analogy between R32 and R31 we introduce the notions of dot product and norm of 2-vectors. Let ξ and η be 2-vectors,
say, ξ = α1 (e1 ∧ e2 ) + α2 (e1 ∧ e3 ) + α3 (e2 ∧ e3 ) and η = β1 (e1 ∧ e2 ) +
β2 (e1 ∧ e3 ) + β3 (e2 ∧ e3 ). Then define

ξ · η = ∑_{i=1}^{3} αi βi.
It readily follows that the dot product of 2-vectors satisfies fundamental
rules 9–12 of §1. Moreover, for simple 2-vectors, the definition given
above is equivalent to the following:

(u1 ∧ v1 ) · (u2 ∧ v2 ) = | u1 · u2   u1 · v2 |
                          | v1 · u2   v1 · v2 |.

For any 2-vector ξ, we put |ξ| = √(ξ · ξ). The norm of a 2-vector is
well-defined, and it satisfies fundamental rules 17–20 of §1.
5. The space of 3-vectors
The definition and theorems of this section parallel those of the preceding section completely; accordingly, we shall not go into as much
detail here.
Let u, v, and w be any vectors in R3 . The object u ∧ v ∧ w is called
the wedge product of the vectors u, v, and w. In general, a 3-vector in
R3 is an expression of the form
ρ = α1 (u1 ∧ v1 ∧ w1 ) + . . . + αr (ur ∧ vr ∧ wr ).
The set of 3-vectors is denoted R33 . If σ = β1 (x1 ∧y1 ∧z1 )+. . .+βs (xs ∧
ys ∧ zs ), and ρ is as written above, then we define
ρ + σ = α1 (u1 ∧ v1 ∧ w1 ) + . . . + αr (ur ∧ vr ∧ wr )
+ β1 (x1 ∧ y1 ∧ z1 ) + . . . + βs (xs ∧ ys ∧ zs ),
and
αρ = αα1 (u1 ∧ v1 ∧ w1 ) + . . . + ααr (ur ∧ vr ∧ wr ),
for any real number α.
In order that fundamental rules 1–8 of §1 be satisfied, we assume
(1) [α1 (u1 ∧ v1 ∧ w1 ) + α2 (u2 ∧ v2 ∧ w2 )] + α3 (u3 ∧ v3 ∧ w3 )
= α1 (u1 ∧ v1 ∧ w1 ) + [α2 (u2 ∧ v2 ∧ w2 ) + α3 (u3 ∧ v3 ∧ w3 )],
(2) α1 (u1 ∧ v1 ∧ w1 ) + α2 (u2 ∧ v2 ∧ w2 ) = α2 (u2 ∧ v2 ∧ w2 ) + α1 (u1 ∧
v1 ∧ w1 ),
(3) α(u ∧ v ∧ w) + β(u ∧ v ∧ w) = (α + β)(u ∧ v ∧ w),
(4) 1(u ∧ v ∧ w) = u ∧ v ∧ w.
Then we also make the following five additional assumptions:
(5) (u1 + u2 ) ∧ v ∧ w = (u1 ∧ v ∧ w) + (u2 ∧ v ∧ w),
(6) u ∧ (v1 + v2 ) ∧ w = (u ∧ v1 ∧ w) + (u ∧ v2 ∧ w),
(7) u ∧ v ∧ (w1 + w2 ) = (u ∧ v ∧ w1 ) + (u ∧ v ∧ w2 ),
(8) (αu) ∧ v ∧ w = u ∧ (αv) ∧ w = u ∧ v ∧ (αw) = α(u ∧ v ∧ w),
(9) u ∧ v ∧ w = 0 whenever at least two of the vectors u, v, w are
equal.
Again note that the definition is axiomatic rather than constructive.
Axioms 1–4 are more or less natural to insure that fundamental rules
1–8 of §1 are satisfied. Axioms 5–9 carry the special information about
the space of 3-vectors.
Exercise 5.1. Show that u ∧ v ∧ w = −v ∧ u ∧ w.
Proposition 5.2. For any vectors u, v, and w in R3 , the following are
equal
u∧v∧w =w∧u∧v =v∧w∧u
= −v ∧ u ∧ w = −u ∧ w ∧ v = −w ∧ v ∧ u.
Proof. The proof is left to the reader.
Proposition 5.3. The single 3-vector e1 ∧ e2 ∧ e3 forms a basis for the
space R33 of 3-vectors.
Proof. Consider a 3-vector of the form u ∧ v ∧ w with u, v, w ∈ R3 .
Then u, v, w can be written as
u = α1 e1 + α2 e2 + α3 e3
v = β1 e1 + β2 e2 + β3 e3
w = γ1 e1 + γ2 e2 + γ3 e3 .
When we form u ∧ v ∧ w, and apply the assumptions 5–8, we obtain an
expression

u ∧ v ∧ w = ∑_{i,j,k=1}^{3} δijk (ei ∧ ej ∧ ek ),
in which there are 27 terms in the sum on the right. In all but 6 of the
27 terms, we have ei ∧ ej ∧ ek , where two (at least) of the three factors
are equal. By assumption 9, these terms are all zero. The remaining
6 terms are of the form ei ∧ ej ∧ ek , where i, j, k are all different; i.e.
i, j, k are 1,2,3 in some order. But the previous proposition shows us
that these 6 vectors are then equal to ±e1 ∧ e2 ∧ e3 , where the sign
depends upon the ordering of the subscripts. Combining these terms,
we have
u ∧ v ∧ w = α(e1 ∧ e2 ∧ e3 ).
The reader should verify that

α = (α1 β2 γ3 ) + (α2 β3 γ1 ) + (α3 β1 γ2 )
  − (α1 β3 γ2 ) − (α2 β1 γ3 ) − (α3 β2 γ1 )

  = | α1  α2  α3 |
    | β1  β2  β3 |
    | γ1  γ2  γ3 |.

It follows, since every 3-vector is a linear combination of simple
3-vectors, i.e. 3-vectors of the type u ∧ v ∧ w, that every 3-vector can
be written as a linear combination of e1 ∧ e2 ∧ e3 . Hence every 3-vector
is simple!
It is also true that the 3-vector e1 ∧ e2 ∧ e3 is linearly independent,
i.e. not zero, though we shall not give the proof. Lest the reader think
this is obvious, see the remark at the end of this section.
Hence it follows that {e1 ∧ e2 ∧ e3 } is a basis, and every 3-vector can
be uniquely expressed as ρ = α(e1 ∧ e2 ∧ e3 ).
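The determinant formula for the coefficient α in the proof of Proposition 5.3 can be sketched as follows; the helper name `det3` is ours:

```python
def det3(u, v, w):
    """3x3 determinant with rows u, v, w: the coefficient alpha with
    u ^ v ^ w = alpha (e1 ^ e2 ^ e3), as in the proof of Prop. 5.3."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

u, v, w = (1, 0, 2), (0, 3, 1), (2, 1, 0)
alpha = det3(u, v, w)
# Swapping two factors flips the sign (Exercise 5.1 / Proposition 5.2):
assert det3(v, u, w) == -alpha
# A repeated factor gives the zero 3-vector (assumption 9):
assert det3(u, u, w) == 0
```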
Note in particular that in this manner there is a 1-1 correspondence
between R33 , the space of 3-vectors in R3 , and R1 .
Remark 5.4. We could, of course, continue, and try to define a space
R34 of 4-vectors in R3. But observe that when we turn to the problem of expressing 4-vectors in terms of e1, e2, e3, we obtain an
expression of the form

u ∧ v ∧ w ∧ x = ∑_{i,j,k,l=1}^{3} δijkl (ei ∧ ej ∧ ek ∧ el ).

However, at least 2 of the 4 indices must be equal, so the 4-vector
ei ∧ ej ∧ ek ∧ el = 0. This would hold for all 81 terms of the sum,
so we would have u ∧ v ∧ w ∧ x = 0; as a consequence we have R34 = 0.
The table below summarizes the various spaces of p-vectors in R3,
where p is any non-negative integer:

0-vectors:  R30,  basis vectors: {1}
1-vectors:  R31,  basis vectors: {e1, e2, e3}
2-vectors:  R32,  basis vectors: {e1 ∧ e2, e1 ∧ e3, e2 ∧ e3}
3-vectors:  R33,  basis vectors: {e1 ∧ e2 ∧ e3}
p-vectors:  R3p = 0,  p ≥ 4.
6. The space of p-vectors in Rn
We have tried to give the definitions of §§4 and 5 so that they can
be easily generalized. A p-vector in Rn , then, is an expression
θ = ∑_{i=1}^{t} αi (u1i ∧ . . . ∧ upi ),
where the αi ’s are real numbers, and the uji ’s are vectors in Rn . The
space of p-vectors in Rn is denoted Rnp . The four basic axioms that
make Rnp into a vector space are
1) [α1 (u11 ∧ . . . ∧ up1 ) + α2 (u12 ∧ . . . ∧ up2 )] + α3 (u13 ∧ . . . ∧ up3 )
= α1 (u11 ∧ . . . ∧ up1 ) + [α2 (u12 ∧ . . . ∧ up2 ) + α3 (u13 ∧ . . . ∧ up3 )],
2) α1 (u11 ∧ . . . ∧ up1 ) + α2 (u12 ∧ . . . ∧ up2 )
= α2 (u12 ∧ . . . ∧ up2 ) + α1 (u11 ∧ . . . ∧ up1 ),
3) α(u1 ∧ . . . ∧ up ) + β(u1 ∧ . . . ∧ up ) = (α + β)(u1 ∧ . . . ∧ up ),
4) 1(u1 ∧ . . . ∧ up ) = u1 ∧ . . . ∧ up .
In addition, the axioms that give Rnp its special properties are
5) For k = 1, 2, . . . , p,
u1 ∧ . . . ∧ (uk + vk ) ∧ . . . ∧ up
= (u1 ∧ . . . ∧ uk ∧ . . . ∧ up ) + (u1 ∧ . . . ∧ vk ∧ . . . ∧ up ),
6) For k = 1, 2, . . . , p,
α(u1 ∧ . . . ∧ up ) = u1 ∧ . . . ∧ (αuk ) ∧ . . . ∧ up ,
7) u1 ∧ . . . ∧ up = 0 if 2 or more uj ’s are equal.
One then proves that if v1 ∧ . . . ∧ vp is obtained from u1 ∧ . . . ∧ up
by rearrangement of factors, then v1 ∧ . . . ∧ vp = ±u1 ∧ . . . ∧ up , with
the sign + or - according to whether the number of interchanges used
in making the rearrangement is even or odd.
Finally, a basis for Rnp is obtained by taking p-vectors ei1 ∧ . . . ∧ eip ,
where e1 , . . . , en is the usual basis for Rn, and the subscripts are
arranged in increasing order: i1 < i2 < . . . < ip . Thus Rnp has
C(n, p) = n!/(p!(n − p)!) basis vectors; i.e., it is C(n, p)-dimensional,
where C(n, p) denotes the binomial coefficient.
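The count of basis p-vectors can be illustrated with a short enumeration; `basis_p_vectors` below is a helper name of our own:

```python
from itertools import combinations
from math import comb

def basis_p_vectors(n, p):
    """Index tuples i1 < ... < ip of the basis p-vectors
    e_{i1} ^ ... ^ e_{ip} in R^n_p."""
    return list(combinations(range(1, n + 1), p))

assert len(basis_p_vectors(3, 2)) == comb(3, 2) == 3   # e1^e2, e1^e3, e2^e3
assert len(basis_p_vectors(3, 3)) == 1                 # e1^e2^e3
assert len(basis_p_vectors(4, 2)) == comb(4, 2) == 6
assert basis_p_vectors(3, 4) == []                     # R^3_4 = 0
```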
7. The space of 2-covectors
7.1. Definition I. We define 2-covectors from covectors exactly as we
defined 2-vectors from vectors:
An object of the form f ∧ g, where f and g are covectors in (R31 )∗
is called the wedge product of f and g; in general, a 2-covector is an
expression of the form
F = α1 (f1 ∧ g1 ) + . . . + αk (fk ∧ gk ).
The set of 2-covectors in R3 is denoted
(R32 )∗ .
If G = β1 (f1′ ∧ g1′ ) + . . . + βm (fm′ ∧ gm′ ), then

F + G = α1 (f1 ∧ g1 ) + . . . + αk (fk ∧ gk ) + β1 (f1′ ∧ g1′ ) + . . . + βm (fm′ ∧ gm′ )
and
αF = αα1 (f1 ∧ g1 ) + . . . + ααk (fk ∧ gk ),
for any real number α.
So that the fundamental rules 1–8 of §1 are satisfied, we first assume:

(1) [α1 (f1 ∧ g1 ) + α2 (f2 ∧ g2 )] + α3 (f3 ∧ g3 ) = α1 (f1 ∧ g1 ) + [α2 (f2 ∧ g2 ) + α3 (f3 ∧ g3 )],
(2) α1 (f1 ∧ g1 ) + α2 (f2 ∧ g2 ) = α2 (f2 ∧ g2 ) + α1 (f1 ∧ g1 ),
(3) α(f ∧ g) + β(f ∧ g) = (α + β)(f ∧ g),
(4) 1(f ∧ g) = f ∧ g.
Then to give (R32 )∗ its special properties, we assume:
(5) (f1 + f2 ) ∧ g = (f1 ∧ g) + (f2 ∧ g),
(6) f ∧ (g1 + g2 ) = (f ∧ g1 ) + (f ∧ g2 ),
(7) α(f ∧ g) = (αf ) ∧ g = f ∧ (αg),
(8) f ∧ f = 0, the zero 2-covector.
One then proves that f ∧ g = −g ∧ f . Furthermore, suppose that
f1 , f2 , f3 are the basis covectors defined in §2. Then we obtain, as in
§4, the result that f1 ∧ f2 , f1 ∧ f3 , and f2 ∧ f3 are basis 2-covectors.
There is an alternative way of obtaining the space (R32 )∗ , which we
now consider.
7.2. Definition II. A co-2-vector is a linear map H : R32 → R1 . If H
and K are co-2-vectors, and α ∈ R1 , we define
1) (H + K)(ξ) = H(ξ) + K(ξ),
2) (αH)(ξ) = α(H(ξ)).
We see that the definition of co-2-vector as a linear function on the
space of 2-vectors is exactly parallel to the definition of a covector as a
linear function on the space of vectors given in §2. We shall not repeat
the proofs of propositions which are exactly the same as in §2.
Proposition 7.1.
a) The sum of co-2-vectors is a co-2-vector.
b) The product of a real number and a co-2-vector is a co-2-vector.
Proposition 7.2. Addition and scalar multiplication of co-2-vectors
satisfy the fundamental rules 1–8 of §1.
Proposition 7.3. If γ1 , γ2 , γ3 are any three given real numbers, then
there is one and only one co-2-vector H such that
H(e1 ∧ e2 ) = γ1 ,
H(e1 ∧ e3 ) = γ2 ,
H(e2 ∧ e3 ) = γ3 .
Now using this last proposition, we can obtain a basis for the space
of co-2-vectors. We define these co-2-vectors H1 , H2 , H3 by:

H1 (e1 ∧ e2 ) = 1   H2 (e1 ∧ e2 ) = 0   H3 (e1 ∧ e2 ) = 0
H1 (e1 ∧ e3 ) = 0   H2 (e1 ∧ e3 ) = 1   H3 (e1 ∧ e3 ) = 0
H1 (e2 ∧ e3 ) = 0   H2 (e2 ∧ e3 ) = 0   H3 (e2 ∧ e3 ) = 1.
H1 , H2 , H3 are well-defined co-2-vectors, and they form a basis for
the space of co-2-vectors. The following theorem is of critical importance.
Theorem 7.4. There is a canonical one-to-one correspondence between
the space of 2-covectors and the space of co-2-vectors. Under this correspondence addition and scalar multiplication are preserved.
Remark 7.5. In the language of abstract algebra, this theorem says
that the space of 2-covectors and the space of co-2-vectors are canonically isomorphic.
Proof. We first recall that if F is any 2-covector whatever, then F can
be written in one and only one way as F = α1 (f1 ∧ f2 ) + α2 (f1 ∧
f3 ) + α3 (f2 ∧ f3 ). Similarly, if H is any co-2-vector whatever, H can
be uniquely expressed as a linear combination of the basis co-2-vectors
H1 , H2 , H3 .
Now let F and G be any 2-covectors. Then F = α1 (f1 ∧ f2 ) +
α2 (f1 ∧ f3 ) + α3 (f2 ∧ f3 ) and G = β1 (f1 ∧ f2 ) + β2 (f1 ∧ f3 ) + β3 (f2 ∧
f3 ). By our rule of correspondence, F corresponds to the co-2-vector
H = α1 H1 + α2 H2 + α3 H3 , and G corresponds to the co-2-vector K =
β1 H1 + β2 H2 + β3 H3 . Now by the rules for addition of 2-covectors,
F + G = (α1 + β1 )(f1 ∧ f2 ) + (α2 + β2 )(f1 ∧ f3 ) + (α3 + β3 )(f2 ∧ f3 ).
Under the correspondence, F + G corresponds to (α1 + β1 )H1 + (α2 +
β2 )H2 + (α3 + β3 )H3 . But under the rules for addition of co-2-vectors,
this is precisely the sum H + K. Thus F + G corresponds to H + K.
Similarly, we obtain that αF corresponds to αH for any real number
α.
To complete the proof, we must justify the use of the word "canonical". Consider a 2-covector g ∧ h, where g and h are covectors; say
g = α1 f1 + α2 f2 + α3 f3 , and h = β1 f1 + β2 f2 + β3 f3 . Then we
know that g ∧ h = (α1 β2 − α2 β1 )(f1 ∧ f2 ) + (α1 β3 − α3 β1 )(f1 ∧ f3 ) +
(α2 β3 − α3 β2 )(f2 ∧ f3 ). Hence g ∧ h corresponds to the co-2-vector
H = (α1 β2 − α2 β1 )H1 + (α1 β3 − α3 β1 )H2 + (α2 β3 − α3 β2 )H3 . Now let
u, v be any vectors, and consider H(u ∧ v). We have
H(u ∧ v) = (α1 β2 − α2 β1 )H1 (u ∧ v) + (α1 β3 − α3 β1 )H2 (u ∧ v)
+ (α2 β3 − α3 β2 )H3 (u ∧ v).
Suppose u = α1∗ e1 +α2∗ e2 +α3∗ e3 , v = β1∗ e1 +β2∗ e2 +β3∗ e3 . Then we know
that u ∧ v = (α1∗ β2∗ − α2∗ β1∗ )(e1 ∧ e2 ) + (α1∗ β3∗ − α3∗ β1∗ )(e1 ∧ e3 ) + (α2∗ β3∗ −
α3∗ β2∗ )(e2 ∧ e3 ). By definition of H1 , H2 , H3 , H1 (u ∧ v) = α1∗ β2∗ − α2∗ β1∗ ,
H2 (u ∧ v) = α1∗ β3∗ − α3∗ β1∗ , and H3 (u ∧ v) = α2∗ β3∗ − α3∗ β2∗ . Therefore,
H(u ∧ v) = (α1 β2 − α2 β1 )(α1∗ β2∗ − α2∗ β1∗ )
+ (α1 β3 − α3 β1 )(α1∗ β3∗ − α3∗ β1∗ ) + (α2 β3 − α3 β2 )(α2∗ β3∗ − α3∗ β2∗ ).
Expanding and canceling,
H(u ∧ v) = α1 α1∗ β2 β2∗ + α2 α2∗ β1 β1∗
− α2 α1∗ β1 β2∗ − α1 α2∗ β2 β1∗
+ α1 α1∗ β3 β3∗ + α3 α3∗ β1 β1∗
− α1 α3∗ β3 β1∗ − α3 α1∗ β1 β3∗
+ α2 α2∗ β3 β3∗ + α3 α3∗ β2 β2∗
− α3 α2∗ β2 β3∗ − α2 α3∗ β3 β2∗ .
By knowing the answer in advance, we are able to tell that this mess
is:

H(u ∧ v) = | α1 α1∗ + α2 α2∗ + α3 α3∗    α1 β1∗ + α2 β2∗ + α3 β3∗ |
           | β1 α1∗ + β2 α2∗ + β3 α3∗    β1 β1∗ + β2 β2∗ + β3 β3∗ |.

Now observe:
α1 α1∗ + α2 α2∗ + α3 α3∗ = g(u)
α1 β1∗ + α2 β2∗ + α3 β3∗ = g(v)
β1 α1∗ + β2 α2∗ + β3 α3∗ = h(u)
β1 β1∗ + β2 β2∗ + β3 β3∗ = h(v).
Therefore we have shown that if g and h are any covectors, then the
co-2-vector H which corresponds to g ∧ h satisfies

H(u ∧ v) = | g(u)  g(v) |
           | h(u)  h(v) |.

Hence the correspondence given in this theorem is independent of bases.
The importance of the preceding theorem is now easily established.
The theorem tells us that the two algebraic structures, namely what we
have called the space of 2-covectors and the space of co-2-vectors, are
completely equivalent. This equivalence enables us to identify the two
spaces. We shall only use the term 2-covector, and if g ∧ h is such a
2-covector, we shall think of it equally well as the wedge product of the
covectors g and h, or as that linear function g ∧ h : R32 → R1 specified
by the rule

(g ∧ h)(u ∧ v) = | g(u)  g(v) |
                 | h(u)  h(v) |.

Notice also that this formula determines g ∧ h completely; for if
η = α1 (u1 ∧ v1 ) + . . . + αk (uk ∧ vk ) is any 2-vector, then because the
function g ∧ h is linear, we have

(g ∧ h)(η) = (g ∧ h)(α1 (u1 ∧ v1 ) + . . . + αk (uk ∧ vk ))
           = α1 (g ∧ h)(u1 ∧ v1 ) + . . . + αk (g ∧ h)(uk ∧ vk )

           = α1 | g(u1)  g(v1) | + . . . + αk | g(uk)  g(vk) |.
                | h(u1)  h(v1) |              | h(uk)  h(vk) |
Furthermore, because the correspondence of the theorem preserves
sums and scalar products, we may extend the basic formula above
by linearity to arbitrary 2-covectors: if φ = β1 (g1 ∧ h1 ) + . . . + βs (gs ∧ hs )
is any 2-covector, then, considering φ as a linear function R32 →
R1, we have

φ(η) = (β1 (g1 ∧ h1 ) + . . . + βs (gs ∧ hs ))(η)
     = β1 (g1 ∧ h1 )(η) + . . . + βs (gs ∧ hs )(η).
In conclusion, then, if φ is any 2-covector and η any 2-vector, written
as above, then φ(η) is the real number given by

φ(η) = ∑_{i=1}^{s} ∑_{j=1}^{k} βi αj | gi(uj)  gi(vj) |.
                                     | hi(uj)  hi(vj) |
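This double-sum formula can be implemented directly. The sketch below (with sample data and helper names of our own) also exhibits the antisymmetry f ∧ f = 0 and the linearity in η:

```python
def apply_covector(g, v):
    """g and v are coefficient triples; g(v) = sum of g_i v_i."""
    return sum(gi * vi for gi, vi in zip(g, v))

def eval_2covector(phi, eta):
    """phi: list of (beta_i, g_i, h_i); eta: list of (alpha_j, u_j, v_j).
    Returns the double sum of beta_i alpha_j times the 2x2 determinant
    det [[g_i(u_j), g_i(v_j)], [h_i(u_j), h_i(v_j)]]."""
    total = 0.0
    for beta, g, h in phi:
        for alpha, u, v in eta:
            det = (apply_covector(g, u) * apply_covector(h, v)
                   - apply_covector(g, v) * apply_covector(h, u))
            total += beta * alpha * det
    return total

# Arbitrary sample data: phi = 3 (g ^ h), eta = (u1 ^ v1) + 2 (u2 ^ v2).
g, h = (1, 0, 2), (0, 1, -1)
phi = [(3.0, g, h)]
eta = [(1.0, (1, 2, 0), (0, 1, 1)), (2.0, (2, 0, 1), (1, 1, 1))]

# Antisymmetry f ^ f = 0 shows up as a zero evaluation:
assert eval_2covector([(1.0, g, g)], eta) == 0.0
# Linearity in eta:
assert abs(eval_2covector(phi, eta)
           - (eval_2covector(phi, eta[:1]) + eval_2covector(phi, eta[1:]))) < 1e-12
```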
Exercise 7.6. Let the covectors g1 , g2 , h1 , h2 be given by g1 = 2f1 +f2 ,
g2 = f1 + f2 − f3 , h1 = f2 − 3f3 , h2 = −f1 + 2f2 , where f1 , f2 , f3
are the standard basis covectors. Let the vectors u and v be given by
u = (1, 1, 0), v = (2, −1, 3). Consider the 2-covector φ = 2(g1 ∧ h1 ) −
(g2 ∧ 2h2 ), and compute φ(u ∧ v).
8. The space of 3-covectors
A 3-covector is an expression
θ = α1 (g1 ∧ h1 ∧ k1 ) + . . . + αs (gs ∧ hs ∧ ks ),
where the αi ’s are real numbers, and the gi ’s, hi ’s, and ki ’s are covectors
in R3 . Sums and scalar products are defined in the obvious way, and
9 basic assumptions are made. These 9 assumptions are exactly like
those in §5 given for 3-vectors, and we shall not copy them down again.
The following propositions are true:
Proposition 8.1. For any covectors f, g, and h ∈ (R31 )∗ , the following
are equal:
f ∧g∧h=g∧h∧f =h∧f ∧g
= −g ∧ f ∧ h = −f ∧ h ∧ g = −h ∧ g ∧ f.
Proposition 8.2. The single 3-covector f1 ∧ f2 ∧ f3 forms a basis for
the space (R33 )∗ of 3-covectors.
For the moment, let us use the term co-3-vector to describe a linear
function R33 → R1 . Sums and scalar multiples of co-3-vectors are
defined in the (by now) usual way, and the set of co-3-vectors forms a
space satisfying fundamental rules 1–8 (cf. §7).
Proposition 8.3. If α is any given real number, there is one and only
one co-3-vector H such that H(e1 ∧ e2 ∧ e3 ) = α.
Using these propositions, it follows that the unique co-3-vector H1
specified by H1 (e1 ∧ e2 ∧ e3 ) = 1 forms a basis for the space of co-3-vectors.
Theorem 8.4. The space of 3-covectors and the space of co-3-vectors
are canonically isomorphic.
The proof makes the single basis 3-covector f1 ∧ f2 ∧ f3 correspond
to the single basis co-3-vector H1 .
If f, g, h are covectors and u, v, w are vectors, then the co-3-vector
H corresponding to f ∧ g ∧ h satisfies

H(u ∧ v ∧ w) = (f ∧ g ∧ h)(u ∧ v ∧ w) = | f(u)  f(v)  f(w) |
                                        | g(u)  g(v)  g(w) |,
                                        | h(u)  h(v)  h(w) |

which shows that the isomorphism is canonical.
This theorem allows us to identify the concepts of 3-covector and co-3-vector, and we choose to use the term 3-covector only. Thus we shall
think of f ∧ g ∧ h equally well as the wedge product of the covectors
f, g, and h, or as that linear function specified by the rule above.
There is no need to extend this rule by linearity, since every 3-vector
is simple, and every 3-covector is simple.
9. The space of p-covectors in Rn
A p-covector in Rn would be an expression

φ = ∑_{i=1}^{t} αi (g1i ∧ . . . ∧ gpi ),
where the αi ’s are real numbers, and the gji ’s are covectors in Rn ; i.e.
real-valued linear functions on Rn . The basic axioms for p-covectors
are completely parallel to those given in §6. A basis for the space
(Rnp )∗ is obtained by taking p-covectors fi1 ∧ . . . ∧ fip , where f1 , . . . , fn
are the natural basis covectors on (Rn1 )∗ , and i1 < i2 < . . . < ip .
The fundamental theorem, of course, is that the p-covectors may be
identified with the linear functions on the space (Rnp ) of p-vectors. The
proof makes the basis p-covectors fi1 ∧ . . . ∧ fip correspond to the linear
function whose value at ei1 ∧ . . . ∧ eip (same subscripts as on the f ’s)
is 1, and whose value at the other basis p-vectors is 0. The formula
which expresses the action of an arbitrary p-covector on an arbitrary
p-vector is
(g1 ∧ . . . ∧ gp )(u1 ∧ . . . ∧ up ) = | g1(u1)  g1(u2)  . . .  g1(up) |
                                      | g2(u1)  g2(u2)  . . .  g2(up) |
                                      |   ..      ..    . .      ..   |
                                      | gp(u1)  gp(u2)  . . .  gp(up) |,

which is extended by linearity in both directions.
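The determinant pairing of a p-covector with a p-vector can be sketched for general p; `det` and `pair` below are helper names of our own, with the determinant computed by the Leibniz permutation formula:

```python
from itertools import permutations

def det(M):
    """Determinant of a square matrix by the Leibniz permutation formula."""
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        # sign of the permutation = parity of its inversion count
        inv = sum(1 for a in range(n) for b in range(a + 1, n)
                  if perm[a] > perm[b])
        prod = 1.0
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += (-1) ** inv * prod
    return total

def pair(gs, us):
    """(g1 ^ ... ^ gp)(u1 ^ ... ^ up) = det [ g_i(u_j) ]."""
    dot = lambda g, u: sum(a * b for a, b in zip(g, u))
    return det([[dot(g, u) for u in us] for g in gs])

# In R^4 with p = 3: swapping two of the u's flips the sign,
# and a repeated u gives zero.
gs = [(1, 0, 0, 2), (0, 1, -1, 0), (3, 0, 1, 1)]
us = [(1, 2, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1)]
val = pair(gs, us)
assert abs(pair(gs, [us[1], us[0], us[2]]) + val) < 1e-9
assert abs(pair(gs, [us[0], us[0], us[2]])) < 1e-9
```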
10. Applications to area theory
10.1. Case 1. Let D ⊂ R2 be open, and consider Σ : D → R3, Σ
smooth. We define ∂Σ/∂u (p) and ∂Σ/∂v (p), for p ∈ D, as follows. Say

Σ :  x = φ(u, v),  y = ψ(u, v),  z = θ(u, v).

Then ∂Σ/∂u (p) is a vector:

∂Σ/∂u (p) = ( ∂φ/∂u (p), ∂ψ/∂u (p), ∂θ/∂u (p) )

(also written ( ∂x/∂u (p), ∂y/∂u (p), ∂z/∂u (p) )). Also,

∂Σ/∂v (p) = ( ∂φ/∂v (p), ∂ψ/∂v (p), ∂θ/∂v (p) ),

or ( ∂x/∂v (p), ∂y/∂v (p), ∂z/∂v (p) ).

The Jacobian 2-vector of Σ at p is given by

J̃Σ(p) = ∂Σ/∂u (p) ∧ ∂Σ/∂v (p).
Proposition 10.1. Under the correspondence
e1 ∧ e2 ←→ e3
e1 ∧ e3 ←→ −e2
e2 ∧ e3 ←→ e1
as given in §4, J̃Σ(p) corresponds to n(p), the normal to Σ at p.

Proof. Let J̃Σ(p) = α1 (e1 ∧ e2 ) + α2 (e1 ∧ e3 ) + α3 (e2 ∧ e3 ). We must
determine the coefficients α1, α2, α3. It is convenient to utilize the
theory of 2-covectors. For α1 = (f1 ∧ f2 )(J̃Σ(p)), and by the rule
presented in §7, we get

α1 = (f1 ∧ f2 )(J̃Σ(p)) = | f1(∂Σ/∂u (p))   f1(∂Σ/∂v (p)) |
                          | f2(∂Σ/∂u (p))   f2(∂Σ/∂v (p)) |

   = | ∂φ/∂u (p)   ∂φ/∂v (p) | = ∂(φ, ψ)/∂(u, v) (p) = ∂(x, y)/∂(u, v) (p).
     | ∂ψ/∂u (p)   ∂ψ/∂v (p) |

Similarly

α2 = ∂(x, z)/∂(u, v) (p),   α3 = ∂(y, z)/∂(u, v) (p).

Hence the components of J̃Σ(p) do correspond to the components of
n(p) (see, e.g., [1, p. 836] or [3, p. 272]).
Corollary 10.2.

    A(Σ) = ∫∫_D |J̃Σ(p)|.

Proof. This result is immediate from the definition of A(Σ), for instance, see [1, p. 836] or [3, p. 275], and the definition of norm for vectors and 2-vectors.
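Corollary 10.2 can be checked numerically on a surface whose area is exact. The sketch below (helper name mine) takes the plane patch Σ(u, v) = (u, v, u + v) over the unit square: here ∂Σ/∂u = (1, 0, 1) and ∂Σ/∂v = (0, 1, 1) are constant, so A(Σ) = |J̃Σ| = √3; it also checks the correspondence of Proposition 10.1 against the cross product, n(p) = (α3, −α2, α1).

```python
import numpy as np

def jacobian_2vector(du, dv):
    """Components of du ∧ dv with respect to e1∧e2, e1∧e3, e2∧e3:
    the three 2x2 minors of the 3x2 Jacobian matrix."""
    a1 = du[0]*dv[1] - du[1]*dv[0]   # ∂(x,y)/∂(u,v)
    a2 = du[0]*dv[2] - du[2]*dv[0]   # ∂(x,z)/∂(u,v)
    a3 = du[1]*dv[2] - du[2]*dv[1]   # ∂(y,z)/∂(u,v)
    return np.array([a1, a2, a3])

# Σ(u, v) = (u, v, u + v) on [0,1]^2: ∂Σ/∂u = (1,0,1), ∂Σ/∂v = (0,1,1).
J = jacobian_2vector((1, 0, 1), (0, 1, 1))
area = np.linalg.norm(J)     # |J̃Σ| is constant, so A(Σ) = |J̃Σ| · 1
print(area)                  # √3 ≈ 1.7320508
# Proposition 10.1: (α3, −α2, α1) is the normal, i.e. the cross product.
print(np.cross((1, 0, 1), (0, 1, 1)), (J[2], -J[1], J[0]))
```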
10.2. Case 2. Let D ⊂ R3 be open, and consider T : D → R3, T smooth; that is, T ∈ C1, JT(p) ≠ 0 for all p ∈ D. Suppose

    T : x = φ(u, v, w), y = ψ(u, v, w), z = θ(u, v, w).

We define

    ∂T/∂u(p) = (∂φ/∂u(p), ∂ψ/∂u(p), ∂θ/∂u(p)) = (∂x/∂u(p), ∂y/∂u(p), ∂z/∂u(p)).

Also, ∂T/∂v(p), ∂T/∂w(p) are defined similarly.
The Jacobian 3-vector of T at p is given by

    J̃T(p) = ∂T/∂u(p) ∧ ∂T/∂v(p) ∧ ∂T/∂w(p).
Remark 10.3. As we know, every 3-vector is simple; in fact, by the methods in §5, we see that J̃T(p) = α(e1 ∧ e2 ∧ e3), where by the rule on p. 9,

    α = det | ∂φ/∂u(p)  ∂φ/∂v(p)  ∂φ/∂w(p) |
            | ∂ψ/∂u(p)  ∂ψ/∂v(p)  ∂ψ/∂w(p) |
            | ∂θ/∂u(p)  ∂θ/∂v(p)  ∂θ/∂w(p) |
      = ∂(φ, ψ, θ)/∂(u, v, w)(p) = ∂(x, y, z)/∂(u, v, w)(p) = JT(p),

the ordinary Jacobian. This computation justifies the use of the word "Jacobian" in the terms "Jacobian 2-vector" and "Jacobian 3-vector".
Corollary 10.4.

    V(T(D)) = ∫∫∫_D |J̃T(p)|,

where |J̃T(p)| is the norm of the 3-vector (for a 3-vector u ∧ v ∧ w = α(e1 ∧ e2 ∧ e3) we define |u ∧ v ∧ w| = |α|).

Proof. This is immediate from [1, p. 788] or [3, Theorem 3, p. 239].
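Corollary 10.4 is easiest to test on a linear map, where everything is exact: for T(u, v, w) = A·(u, v, w) the partials ∂T/∂u, ∂T/∂v, ∂T/∂w are the columns of A, so α = det A is constant and the image of the unit cube has volume |det A|. A minimal sketch (helper name mine):

```python
import numpy as np

def jacobian_3vector_coeff(tu, tv, tw):
    """Coefficient α in ∂T/∂u ∧ ∂T/∂v ∧ ∂T/∂w = α (e1 ∧ e2 ∧ e3):
    the 3x3 determinant with these vectors as columns (Remark 10.3)."""
    return np.linalg.det(np.column_stack([tu, tv, tw]))

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
# For T(u,v,w) = A(u,v,w), the columns of A are ∂T/∂u, ∂T/∂v, ∂T/∂w.
alpha = jacobian_3vector_coeff(A[:, 0], A[:, 1], A[:, 2])
print(abs(alpha))   # volume of T([0,1]^3) = |det A|, here ≈ 7.0
```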
10.3. Case 3. Let D ⊂ R1 be open, and consider γ : D → R3, γ smooth (D will be a union of non-overlapping open intervals). We define dγ/dt(t0) as usual: Say

    γ : x = φ(t), y = ψ(t), z = θ(t).

Then for t0 ∈ D,

    dγ/dt(t0) = γ′(t0) = (φ′(t0), ψ′(t0), θ′(t0)) = (dx/dt(t0), dy/dt(t0), dz/dt(t0)).

The Jacobian 1-vector of γ at t0 is given by

    J̃γ(t0) = dγ/dt(t0) ∈ R31

(i.e. in this case the Jacobian vector is simply the derivative vector dγ/dt).

Remark 10.5.

    l(γ) = ∫_D |J̃γ(t0)|.

This is immediate from definition.
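Remark 10.5 can likewise be checked numerically: for the circle γ(t) = (cos t, sin t, 0) on D = (0, 2π) we have |J̃γ(t)| = |γ′(t)| ≡ 1, so l(γ) = 2π. A sketch using a midpoint Riemann sum (helper name and step count mine):

```python
import numpy as np

def curve_length(dgamma, a, b, n=100_000):
    """Approximate l(γ) = ∫_a^b |γ'(t)| dt by a midpoint Riemann sum."""
    t = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    speeds = np.linalg.norm(dgamma(t), axis=0)
    return speeds.sum() * (b - a) / n

# γ(t) = (cos t, sin t, 0) has γ'(t) = (−sin t, cos t, 0), so |γ'| ≡ 1.
dgamma = lambda t: np.array([-np.sin(t), np.cos(t), np.zeros_like(t)])
print(curve_length(dgamma, 0.0, 2 * np.pi))   # ≈ 2π
```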
10.4. General case. These three illustrations above provide ample motivation for the general case, which we now study.
Let D ⊂ Rk be open, and consider T : D → Rn. We suppose T ∈ C1. Let us use the term k-dimensional measure. We want a formula then for the k-dimensional measure of the set T(D) ⊂ Rn. Suppose

    T : y1 = φ1(x1, x2, . . . , xk),
        y2 = φ2(x1, x2, . . . , xk),
        . . .
        yn = φn(x1, x2, . . . , xk).

Let p ∈ D. Then we define

    ∂T/∂xj(p) = (∂φ1/∂xj(p), ∂φ2/∂xj(p), . . . , ∂φn/∂xj(p))
              = (∂y1/∂xj(p), ∂y2/∂xj(p), . . . , ∂yn/∂xj(p)),

where j = 1, 2, . . . , k. For j = 1, 2, . . . , k, ∂T/∂xj(p) is a vector in n-space Rn.
The Jacobian k-vector of T at p is given by

    J̃T(p) = ∂T/∂x1(p) ∧ ∂T/∂x2(p) ∧ . . . ∧ ∂T/∂xk(p).

It is a k-vector in n-space (see §6). If we expand J̃T(p) in terms of the basis k-vectors (ei1 ∧ ei2 ∧ . . . ∧ eik), with i1 < i2 < . . . < ik, we have

    J̃T(p) = Σ αi1···ik (ei1 ∧ ei2 ∧ . . . ∧ eik).

The norm of J̃T(p) is then defined in the obvious way – it is the square root of the sum of the squares of the components of J̃T(p) with respect to the basis k-vectors.
The k-dimensional measure of the set T(D) ⊂ Rn is given by the formula

    µk(T(D)) = ∫ · · · ∫_D |J̃T(p)|   (a k-fold integral).

The formula is valid under the condition |J̃T(p)| ≠ 0 for all p ∈ D (this is the general requirement of smoothness).
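The norm |J̃T(p)| has a convenient matrix form: the components of J̃T(p) are the k × k minors of the n × k Jacobian matrix J, and by the Cauchy–Binet formula the sum of their squares equals det(JᵀJ). A quick numerical check of this identity (helper name mine; the minor-based computation is exactly the definition above):

```python
import itertools
import numpy as np

def jacobian_kvector_norm(J):
    """|J̃T(p)| for an n x k Jacobian matrix J: the square root of the
    sum of the squares of the components with respect to the basis
    k-vectors, i.e. of all k x k minors (rows i1 < ... < ik)."""
    n, k = J.shape
    minors = [np.linalg.det(J[list(rows), :])
              for rows in itertools.combinations(range(n), k)]
    return np.sqrt(sum(m * m for m in minors))

rng = np.random.default_rng(0)
J = rng.standard_normal((4, 2))          # a surface (k = 2) in R^4
norm1 = jacobian_kvector_norm(J)
norm2 = np.sqrt(np.linalg.det(J.T @ J))  # Cauchy–Binet
print(np.isclose(norm1, norm2))          # True
```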
Remark 10.6. Thus the theory of k-vectors in n-space has enabled us to give a unified treatment of length, area, and volume. Briefly stated: "to find the k-dimensional measure of a k-dimensional surface in n-space, integrate the Jacobian k-vector."
This implication alone gives a good justification for the theory of k-vectors in Rn.
11. Differential 2-forms
Let Ω be an open set in R3. A differential 2-form on Ω is a function

    w : Ω → (R32)∗;

thus w is a 2-covector-valued function.
Proposition 11.1. Every 2-form w on Ω can be uniquely represented
in the form
w(p) = A1 (p)(f1 ∧ f2 ) + A2 (p)(f1 ∧ f3 ) + A3 (p)(f2 ∧ f3 ),
for all p ∈ Ω.
Proof. The proof is left to the reader.
These three functions A1 , A2 , A3 are called the coordinate functions
of the 2-form w. The 2-form w is said to be continuous (differentiable)
if the Ai ’s are continuous (differentiable).
Let w be a continuous 2-form on an open set Ω. Let Σ : D → R3 be a smooth surface such that trace(Σ)¹ ⊂ Ω. Then the integral of w over Σ is given by

    ∫∫_Σ w = ∫∫_D [w ◦ Σ(u, v)](J̃Σ(u, v)) du dv,

where J̃Σ(u, v) is the Jacobian 2-vector as defined in §10, and where the symbol [w ◦ Σ(u, v)](J̃Σ(u, v)) is interpreted as follows: for each point (u, v) ∈ D, J̃Σ(u, v) is a 2-vector, Σ(u, v) a point in Ω, w ◦ Σ(u, v) is a 2-covector, and the entire expression denotes the real number obtained by letting the 2-covector w ◦ Σ(u, v), considered as a linear function on R32, operate on the 2-vector J̃Σ(u, v).
Remark 11.2. This definition is a straightforward generalization of the definition of the integral of a 1-form over a smooth curve (p. 4). For if w : Ω → (R31)∗ is a continuous 1-form, and γ : [a, b] → R3 a smooth curve so that trace(γ) ⊂ Ω, then

    ∫_γ w = ∫_a^b [w ◦ γ(t)](γ′(t)) dt = ∫_a^b [w ◦ γ(t)](J̃γ(t)) dt,

where J̃γ(t) is the Jacobian 1-vector of γ at t as defined in §10.3.
Example 11.3. Let w(x, y, z) = xy dydz + x dzdx + 3zx dxdy, and

    Σ : x = u + v, y = u − v, z = uv,

where 0 ≤ u, v ≤ 1. Here Ω, the domain of definition of w, is all of R3, and w is smooth. Now

    ∂Σ/∂u(u0, v0) = (∂x/∂u(u0, v0), ∂y/∂u(u0, v0), ∂z/∂u(u0, v0)) = (1, 1, v0),

and similarly ∂Σ/∂v(u0, v0) = (1, −1, u0). Therefore

    J̃Σ(u0, v0) = (1, 1, v0) ∧ (1, −1, u0).

Next, w(x, y, z) = 3zx(f1 ∧ f2) − x(f1 ∧ f3) + xy(f2 ∧ f3). So that

    w ◦ Σ(u0, v0) = 3(u0v0)(u0 + v0)(f1 ∧ f2) − (u0 + v0)(f1 ∧ f3) + (u0 + v0)(u0 − v0)(f2 ∧ f3).

¹The set of all points that lie on Σ is called the trace or graph of the surface Σ.
Finally,

    [w ◦ Σ(u0, v0)](J̃Σ(u0, v0)) = . . . (left to the reader)
        = 3u0v0(u0 + v0)(−2) − (u0 + v0)(u0 − v0) + (u0² − v0²)(u0 + v0)
        = u0³ − u0² − 5u0²v0 − 7u0v0² + v0² − v0³.

Thus

    ∫∫_Σ w = ∫_0^1 du ∫_0^1 (u³ − u² − 5u²v − 7uv² + v² − v³) dv = −2.
Lemma 11.4. Let Σ : D → R3 and Σ∗ : D∗ → R3 be smoothly equivalent surfaces. Recall that this means there is a transformation h : D → D∗ such that h is 1-1 and onto, h ∈ C1, Jh(p) > 0 for all p ∈ D, and Σ∗ ◦ h = Σ. Then

    J̃Σ(p) = J̃Σ∗(h(p)) Jh(p)

for all p ∈ D.

Proof. Define three projection functions π1, π2, π3 : R3 → R2, by the formulae

    π1(x, y, z) = (x, y)
    π2(x, y, z) = (x, z)
    π3(x, y, z) = (y, z).

Since Σ∗ ◦ h = Σ, we also have

    πi ◦ Σ∗ ◦ h = πi ◦ Σ

for i = 1, 2, 3; in other words, each map πi ◦ Σ : D → R2 factors through D∗ as (πi ◦ Σ∗) ◦ h. These are transformations from the plane into the plane. By the Chain Rule, for all p ∈ D,

    d(π1 ◦ Σ)|p = d(π1 ◦ Σ∗ ◦ h)|p = d(π1 ◦ Σ∗)|h(p) dh|p.

Taking determinants on both sides,

    J(π1 ◦ Σ)(p) = J(π1 ◦ Σ∗)(h(p)) Jh(p).
Similarly,

    J(π2 ◦ Σ)(p) = J(π2 ◦ Σ∗)(h(p)) Jh(p)
    J(π3 ◦ Σ)(p) = J(π3 ◦ Σ∗)(h(p)) Jh(p).

Now consider the 2-vector J̃Σ(p). If Σ(u, v) = (x, y, z), then by Proposition 10.1,

    J̃Σ(p) = ∂(x, y)/∂(u, v)|p (e1 ∧ e2) + ∂(x, z)/∂(u, v)|p (e1 ∧ e3) + ∂(y, z)/∂(u, v)|p (e2 ∧ e3)
           = J(π1 ◦ Σ)(p)(e1 ∧ e2) + J(π2 ◦ Σ)(p)(e1 ∧ e3) + J(π3 ◦ Σ)(p)(e2 ∧ e3)
           = J(π1 ◦ Σ∗)(h(p))Jh(p)(e1 ∧ e2) + J(π2 ◦ Σ∗)(h(p))Jh(p)(e1 ∧ e3)
             + J(π3 ◦ Σ∗)(h(p))Jh(p)(e2 ∧ e3)
           = Jh(p) [J(π1 ◦ Σ∗)(h(p))(e1 ∧ e2) + J(π2 ◦ Σ∗)(h(p))(e1 ∧ e3)
             + J(π3 ◦ Σ∗)(h(p))(e2 ∧ e3)].

Again by Proposition 10.1, the bracketed 2-vector is simply J̃Σ∗(h(p)). Hence

    J̃Σ(p) = Jh(p) J̃Σ∗(h(p)).
Theorem 11.5. Let w be a continuous 2-form on an open set Ω ⊂ R3. Let Σ : D → R3 and Σ∗ : D∗ → R3 be smoothly equivalent surfaces such that trace(Σ) = trace(Σ∗) ⊂ Ω. Then

    ∫∫_Σ w = ∫∫_{Σ∗} w.

Proof. Let h : D → D∗ be as in the preceding lemma. Say h(u, v) = (x, y). Now

    ∫∫_{Σ∗} w = ∫∫_{D∗} [w ◦ Σ∗(x, y)](J̃Σ∗(x, y)) dx dy.

We make a change of variable in the integral using the transformation h:

    ∫∫_{Σ∗} w = ∫∫_D [w ◦ Σ∗(h(u, v))](J̃Σ∗(h(u, v))) |Jh(u, v)| du dv.

Now we use the fact that Jh > 0, and the linearity of the 2-covector w ◦ Σ∗(h(u, v)):

    ∫∫_{Σ∗} w = ∫∫_D [w ◦ Σ∗ ◦ h(u, v)](J̃Σ∗(h(u, v)) Jh(u, v)) du dv.
Since Σ∗ ◦ h = Σ, and J̃Σ∗(h(u, v)) Jh(u, v) = J̃Σ(u, v) by the preceding lemma, we have

    ∫∫_{Σ∗} w = ∫∫_D [w ◦ Σ(u, v)](J̃Σ(u, v)) du dv = ∫∫_Σ w,

by definition.
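Theorem 11.5 can be sanity-checked symbolically: reparametrize the surface of Example 11.3 by h(s, t) = (s², t), so that Jh = 2s > 0 on the interior of the parameter square, and integrate again. The sketch below (function name mine, assuming SymPy) computes both integrals by the defining formula and gets −2 each time:

```python
import sympy as sp

s, t = sp.symbols('s t')

def integral_of_w(x, y, z, a, b):
    """∫∫ w over the surface (x,y,z)(a,b) on (0,1)^2, for the 2-form
    w = 3zx (f1∧f2) − x (f1∧f3) + xy (f2∧f3) of Example 11.3."""
    Sa = [sp.diff(c, a) for c in (x, y, z)]
    Sb = [sp.diff(c, b) for c in (x, y, z)]
    m12 = Sa[0]*Sb[1] - Sa[1]*Sb[0]
    m13 = Sa[0]*Sb[2] - Sa[2]*Sb[0]
    m23 = Sa[1]*Sb[2] - Sa[2]*Sb[1]
    integrand = 3*z*x*m12 - x*m13 + x*y*m23
    return sp.integrate(sp.expand(integrand), (a, 0, 1), (b, 0, 1))

# Σ(u, v) = (u+v, u−v, uv) and its reparametrization Σ ◦ h, h(s,t) = (s², t)
I1 = integral_of_w(s + t, s - t, s*t, s, t)
u2, v2 = s**2, t
I2 = integral_of_w(u2 + v2, u2 - v2, u2*v2, s, t)
print(I1, I2)   # -2 -2
```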
12. Differential 3-forms
Let Ω ⊂ R3 be an open set. A differential 3-form on Ω is a function
w : Ω → (R33 )∗ ;
every 3-form can be written
w(p) = A(p)(f1 ∧ f2 ∧ f3 )
for all p ∈ Ω; A(p) is called the coordinate function of w; w is continuous
(differentiable) if and only if A is continuous (differentiable).
Provisional definition. If w is a 1-form, we integrate w over a curve (i.e., a function γ : R1 → R3); if w is a 2-form, we integrate w over a surface (i.e., a function Σ : R2 → R3); therefore, to be consistent, we should integrate a 3-form over a function T : R3 → R3.
Let w be a continuous 3-form on an open set Ω ⊂ R3. Let T : D → R3 be a smooth transformation (JT(p) ≠ 0 in D). Also suppose that T(D) ⊂ Ω. Then the integral of w over T is given by

    ∫∫∫_T w = ∫∫∫_D [w ◦ T(u, v, w)](J̃T(u, v, w)) du dv dw.
Proposition 12.1. Let w be a continuous 3-form on an open set Ω; let T : D → R3 be a smooth transformation such that T(D) ⊂ Ω. Let A be the coordinate function of w. Then if T is 1-1 on D and JT(p) > 0 for all p ∈ D,

    ∫∫∫_T w = ∫∫∫_{T(D)} A.

Proof. In the defining formula for ∫∫∫_T w, we note:
1) w ◦ T(u, v, w) = A(T(u, v, w))(f1 ∧ f2 ∧ f3),
2) J̃T(u, v, w) = JT(u, v, w)(e1 ∧ e2 ∧ e3).
Now (f1 ∧ f2 ∧ f3)(e1 ∧ e2 ∧ e3) = 1, so

    ∫∫∫_T w = ∫∫∫_D A(T(u, v, w)) |JT(u, v, w)| du dv dw,
and the hypothesis that T is 1-1 enables us to apply the change of variable theorem, so

    ∫∫∫_D A(T(u, v, w)) |JT(u, v, w)| du dv dw = ∫∫∫_{T(D)} A(x, y, z) dx dy dz.
Remark 12.2. If the additional hypotheses on T are not satisfied, then
the proposition is false, in general.
For the purposes of this note, we shall only be interested in integrating a 3-form w over a transformation T which satisfies the hypotheses
of the preceding proposition. Accordingly, we abandon our provisional
definition, and substitute in its place the following definition.
Definition 12.3. Let w be a continuous 3-form on Ω; say w(x, y, z) = A(x, y, z)(f1 ∧ f2 ∧ f3); let D be a subset of Ω having volume. Then the integral of w over D is given by

    ∫∫∫_D w = ∫∫∫_D A(x, y, z) dx dy dz.

We can, of course, consider this definition as a special case of the provisional definition simply by taking T : D → R3 to be the identity transformation on D.
Lemma 12.4. Let D, D∗ ⊂ R3 be open, T : D → R3, T∗ : D∗ → R3, where T, T∗ ∈ C1. Suppose h : D → D∗, h ∈ C1, and T = T∗ ◦ h on D; that is, T factors through D∗ as T∗ ◦ h. Then

    J̃T(p) = J̃T∗(h(p)) Jh(p)

for every p ∈ D.

Proof. The proof is left to the reader.
Theorem 12.5. Let w : Ω → (R33)∗ be a continuous 3-form. Let T : D → R3, T∗ : D∗ → R3 be smoothly equivalent transformations so that T(D) = T∗(D∗) ⊂ Ω. Then

    ∫∫∫_T w = ∫∫∫_{T∗} w.

Proof. The proof is left to the reader.
13. The exterior algebra of R3
13.1. Exterior products of k-covectors (k = 0, 1, 2, 3). Thus far, we have considered the spaces (R31)∗, (R32)∗, (R33)∗ of 1-covectors, 2-covectors, and 3-covectors, respectively, as separate entities; the spaces are disjoint. Each space is, moreover, endowed with a vector space structure. We now wish to show that these spaces are naturally interrelated by a multiplication. Define (R30)∗ = R1; this is merely a convention.
By the same reasoning as in Remark 5.4, we note that (R34)∗ = 0, so there is nothing to be gained by considering k-covectors for k > 3. Now let ξ be a k-covector, and η an l-covector, where k and
l are integers between 0 and 3. Our goal is to define a product of ξ
and η. This product will be a (k + l)-covector. We shall write the
product as ξ ∧ η, purposely confusing it with the wedge product (which is not really a true "multiplication" as defined). We shall want our
product to satisfy the distributive laws with respect to addition and
scalar multiplication.
Every k-covector can be uniquely expressed as a linear combination
of the basis k-covectors. (If k = 0, the basis 0-covector is the real
number 1.) Similarly, every l-covector can be uniquely expressed as a
linear combination of the basis l-covectors.
Since we want the distributive laws to be satisfied, it is sufficient
to define the products of the various basis k-covectors and extend the
definition by linearity. Let us list these basis k-covectors:
    Space      Basis
    (R30)∗     {1}
    (R31)∗     {f1, f2, f3}
    (R32)∗     {f1 ∧ f2, f1 ∧ f3, f2 ∧ f3}
    (R33)∗     {f1 ∧ f2 ∧ f3}
Definition 13.1. The product of a basis k-covector and a basis l-covector is given by the table below. The product of an arbitrary k-covector and an arbitrary l-covector is obtained by linearity.
(1) ∧ (1) = 1
(1) ∧ (fi ) = fi
(1) ∧ (fi ∧ fj ) = fi ∧ fj
(1) ∧ (fi ∧ fj ∧ fk ) = fi ∧ fj ∧ fk
(fi ) ∧ (1) = fi
(fi ) ∧ (fj ) = fi ∧ fj
(fi ) ∧ (fj ∧ fk ) = fi ∧ fj ∧ fk
(fi ) ∧ (fj ∧ fk ∧ fl ) = fi ∧ fj ∧ fk ∧ fl = 0
(fi ∧ fj ) ∧ (1) = fi ∧ fj
(fi ∧ fj ) ∧ (fk ) = fi ∧ fj ∧ fk
(fi ∧ fj ) ∧ (fk ∧ fl ) = fi ∧ fj ∧ fk ∧ fl = 0
(fi ∧ fj ) ∧ (fk ∧ fl ∧ fm ) = fi ∧ fj ∧ fk ∧ fl ∧ fm = 0
(fi ∧ fj ∧ fk ) ∧ (1) = fi ∧ fj ∧ fk
(fi ∧ fj ∧ fk ) ∧ (fl ) = fi ∧ fj ∧ fk ∧ fl = 0
(fi ∧ fj ∧ fk ) ∧ (fl ∧ fm ) = fi ∧ fj ∧ fk ∧ fl ∧ fm = 0
(fi ∧ fj ∧ fk ) ∧ (fl ∧ fm ∧ fn ) = fi ∧ fj ∧ fk ∧ fl ∧ fm ∧ fn = 0.
To make a long story short, in multiplying the basis k-covectors,
one simply strings them together. The reader should feel that this
definition is quite natural.
Example 13.2.
1)

    (2f1 + πf2) ∧ (3(f1 ∧ f3) + √2(f2 ∧ f3))
        = (2f1) ∧ (3(f1 ∧ f3)) + (2f1) ∧ (√2(f2 ∧ f3))
          + (πf2) ∧ (3(f1 ∧ f3)) + (πf2) ∧ (√2(f2 ∧ f3))
        = 6(f1 ∧ f1 ∧ f3) + 2√2(f1 ∧ f2 ∧ f3) + 3π(f2 ∧ f1 ∧ f3) + √2π(f2 ∧ f2 ∧ f3)
        = (2√2 − 3π)(f1 ∧ f2 ∧ f3),

since f1 ∧ f1 ∧ f3 = 0 = f2 ∧ f2 ∧ f3 and f2 ∧ f1 ∧ f3 = −(f1 ∧ f2 ∧ f3).
2)

    (6) ∧ (f2 − 3f3) ∧ (f1 − f2 + 10f3)
        = ((6) ∧ (f2) + (6) ∧ (−3f3)) ∧ (f1 − f2 + 10f3)
        = (6f2 − 18f3) ∧ (f1 − f2 + 10f3)
        = 6(f2 ∧ f1) − 6(f2 ∧ f2) + 60(f2 ∧ f3) − 18(f3 ∧ f1) + 18(f3 ∧ f2) − 180(f3 ∧ f3)
        = −6(f1 ∧ f2) + 18(f1 ∧ f3) + 42(f2 ∧ f3).

3) (2(f1 ∧ f2) + π(f2 ∧ f3)) ∧ ((f1 ∧ f3) − 4(f2 ∧ f3)) = 0, because it is a 4-covector.
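The multiplication table amounts to a simple rule: string the indices together and sort them, flipping the sign once per transposition, with any repeated index giving 0. A sketch of this rule (the dictionary representation and the function name are my own), checked against Example 13.2, 1):

```python
import math
from collections import defaultdict

def wedge(xi, eta):
    """Exterior product of covectors given as {index-tuple: coeff} dicts,
    e.g. f1 is {(1,): 1}, f1 ∧ f3 is {(1, 3): 1}, and () is the 0-covector 1."""
    out = defaultdict(float)
    for I, a in xi.items():
        for J, b in eta.items():
            idx = list(I + J)
            if len(set(idx)) < len(idx):
                continue                       # repeated factor: product is 0
            sign = 1
            for i in range(len(idx)):          # bubble sort, counting swaps
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            out[tuple(idx)] += sign * a * b
    return dict(out)

# Example 13.2, 1): (2f1 + πf2) ∧ (3 f1∧f3 + √2 f2∧f3) = (2√2 − 3π) f1∧f2∧f3
xi = {(1,): 2.0, (2,): math.pi}
eta = {(1, 3): 3.0, (2, 3): math.sqrt(2)}
print(wedge(xi, eta))   # {(1, 2, 3): 2√2 − 3π ≈ −6.596}
```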
Exercise 13.3. Evaluate the following:
a) (f1 − √2 f2 + f3) ∧ (f1 + 3f2 − 3f3).
b) (f1 + f2) ∧ (2(f1 ∧ f2) − 4(f2 ∧ f3)) ∧ (3).
c) (f2 ∧ f3) ∧ (7) ∧ (−f1 + 4f3).
d) (−6) ∧ {(f2 + f3) ∧ (f1 ∧ f2)} ∧ (−f2 + πf3).
Remark 13.4. We have seen that multiplication of k-covectors is distributive with respect to addition and scalar multiplication. It is also
associative (i.e., (ξ ∧ η) ∧ θ = ξ ∧ (η ∧ θ)), as the reader may verify.
But it is not commutative. For example, (f1) ∧ (f2) ≠ (f2) ∧ (f1).
In a completely analogous manner, one may define the product of a
k-vector and an l-vector, for k, l = 0, 1, 2, 3. The definitions and properties of this multiplication are exactly the same as for the multiplication
of k-covectors, so we shall not go through the details.
If ξ is a k-covector, and η is an l-covector, ξ ∧ η is called the exterior
product of ξ and η.
Associated with the vector space R3 , we have 8 vector spaces: 4
spaces of k-vectors, R30 , R31 , R32 , R33 , and 4 spaces of k-covectors, (R30 )∗ ,
(R31 )∗ , (R32 )∗ , (R33 )∗ . These are all vector spaces, and in addition satisfy
extra axioms concerning wedge products. Each (R3k )∗ may be regarded
as the space of linear functions on the corresponding (R3k ). In addition,
there is defined an exterior product of k-vectors and l-vectors, and an
exterior product of k-covectors and l-covectors.
This entire structure, as described above, may be called the exterior
algebra of R3 or the Grassmann algebra of R3 (Hermann Grassmann,
1809–1877).
The reader should have no difficulty in describing the structure of the exterior algebra of Rn.
Notice that we did not include the notions of differential k-forms
among the various parts of the exterior algebra of R3 ; the differential forms are the link between the differential and integral calculus of
Euclidean spaces and the exterior algebra. Specifically, the concept of
differential forms enables us to express certain geometric problems of
the calculus of Euclidean spaces in such a manner that the algebraic
tools of the exterior algebra can be applied to their solution.
In the next section we develop some manipulative techniques for differential forms; following that we shall proceed to illustrate the comments made above, by attacking Stokes’, Green’s, and Gauss’ Theorems.
14. The algebra of differential forms
Thus far we have defined differential k-forms for k = 1, 2, 3. Let us
complete the spectrum for k = 0:
A differential 0-form is a mapping
w : Ω → (R30 )∗ = R1 .
Hence a 0-form is simply a real-valued function on Ω ⊂ R3 . It is continuous or differentiable in the usual sense of continuity or differentiability
of a real-valued function.
Let w1 and w2 both be differential k-forms, say with domains Ω1 , Ω2 ⊂
R3 , where 0 ≤ k ≤ 3. We define w1 + w2 to be a differential k-form
with domain Ω1 ∩ Ω2 by
(w1 + w2 )(p) = w1 (p) + w2 (p),
for all p ∈ Ω1 ∩ Ω2 . Note that the addition on the right-hand side is in
the space of k-covectors. Similarly, if α is a real number, then αw1 is
a differential k-form with domain Ω1 given by
(αw1 )(p) = α[w1 (p)]
for all p ∈ Ω1 . The interested reader may verify that these definitions
turn the set of differential k-forms into a vector space, although we
shall not need that fact.
Next we define the exterior product w1 ∧ w2 of a k-form w1 : Ω1 → (R3k)∗ and an l-form w2 : Ω2 → (R3l)∗ to be a (k + l)-form (w1 ∧ w2) : Ω1 ∩ Ω2 → (R3k+l)∗ by

    (w1 ∧ w2)(p) = w1(p) ∧ w2(p).
Remark 14.1. This multiplication of differential forms is associative
and distributive with respect to addition and scalar multiplication. That
is, if w1 , w2 , and w3 are forms, and α1 , α2 , α3 are real numbers, then
a) (w1 ∧ w2 ) ∧ w3 = w1 ∧ (w2 ∧ w3 ),
b) (α1 w1 + α2 w2 ) ∧ w3 = α1 (w1 ∧ w3 ) + α2 (w2 ∧ w3 ) (here assume
w1 and w2 have the same dimension, i.e., both are k-forms),
c) w1 ∧ (α2 w2 + α3 w3) = α2(w1 ∧ w2) + α3(w1 ∧ w3) (here assume
w2 and w3 have the same dimension).
Exercise 14.2. Prove the preceding remark.
Exercise 14.3. Is multiplication of forms commutative? Prove it or
give a counterexample.
Notice that the algebraic structure on differential forms is simply transported from the structure of k-covectors. The situation is completely analogous to the determination of the structure of the space of real-valued functions from the algebraic rules for real numbers.
14.1. Exterior derivative of differential forms. Let f : Ω → R1, f ∈ C1. In other words, f is a differentiable differential 0-form. We have seen earlier how the differential of f, df, may be regarded as a 1-form df : Ω → (R31)∗. In fact, we note that the coordinate functions of df are simply the partials of f; i.e.,

    df(p) = ∂f/∂x(p) f1 + ∂f/∂y(p) f2 + ∂f/∂z(p) f3.

(Here f1, f2, f3 are the basis covectors, not to be confused with the partials of f which appear as coordinate functions.)
In this manner, starting with a differentiable differential 0-form f, we have obtained a differential 1-form, df. It is this procedure which we want to generalize; i.e., starting with a differentiable differential k-form w, we wish to define a differential (k + 1)-form dw. We proceed in stages, using the definition of df for a 0-form f as a starting point.
Definition 14.4.
(1) Let f : Ω → R1 be a 0-form, f ∈ C1. Then df : Ω → (R31)∗ is a 1-form, given by

    df(p) = ∂f/∂x(p) f1 + ∂f/∂y(p) f2 + ∂f/∂z(p) f3.

(2) Let w : Ω → (R31)∗ be a 1-form, w ∈ C1. Say

    w(p) = A1(p)f1 + A2(p)f2 + A3(p)f3.

Then dw : Ω → (R32)∗ is a 2-form, given by

    dw(p) = (dA1(p) ∧ f1) + (dA2(p) ∧ f2) + (dA3(p) ∧ f3).

In this formula, dA1, dA2, dA3 are the 1-forms which are defined in (1) above.
(3) Let w : Ω → (R32 )∗ be a 2-form, w ∈ C 1 . Say
w(p) = A1 (p)(f1 ∧ f2 ) + A2 (p)(f1 ∧ f3 ) + A3 (p)(f2 ∧ f3 ).
Then dw : Ω → (R33 )∗ is a 3-form, given by
dw(p) = (dA1 (p) ∧ (f1 ∧ f2 )) + (dA2 (p) ∧ (f1 ∧ f3 )) + (dA3 (p) ∧ (f2 ∧ f3 )).
Again, dA1 , dA2 , dA3 are as in (1) above.
Example 14.5. Let w = x²y dydz − xz dxdy. In our language, w(x, y, z) = (−xz)(f1 ∧ f2) + (x²y)(f2 ∧ f3). Using the terminology of (3) above, we have A1(x, y, z) = −xz, A2(x, y, z) = 0, A3(x, y, z) = x²y. Using (1) above,

    dA1(x, y, z) = ∂A1/∂x(x, y, z) f1 + ∂A1/∂y(x, y, z) f2 + ∂A1/∂z(x, y, z) f3
                 = −z f1 − x f3,

and similarly dA3(x, y, z) = 2xy f1 + x² f2. Thus

    dw(x, y, z) = (dA1(x, y, z) ∧ (f1 ∧ f2)) + (dA3(x, y, z) ∧ (f2 ∧ f3))
                = ((−z f1 − x f3) ∧ (f1 ∧ f2)) + ((2xy f1 + x² f2) ∧ (f2 ∧ f3))
                = (2xy − x)(f1 ∧ f2 ∧ f3).
Exercise 14.6.
1) Let f, g be 0-forms on Ω, with f, g ∈ C 1 , and
let α, β be real numbers. Prove: d(αf + βg) = αdf + βdg.
2) Let f, g be 0-forms on Ω, with f, g ∈ C 1 . Prove: d(f ∧ g) =
(df ∧ g) + (f ∧ dg).
Proposition 14.7. Let w1 , w2 be k-forms, in C 1 , and let α, β be real
numbers. Then d(αw1 + βw2 ) = αdw1 + βdw2 .
Proof. For k = 0, the proposition is true by Exercise 14.6 1). Let k = 1. Suppose A1, A2, A3 are the coordinate functions of w1. For simplicity of notation we shall write w1 = A1f1 + A2f2 + A3f3. Similarly, w2 = B1f1 + B2f2 + B3f3. Then

    d(αw1 + βw2) = d[(αA1 + βB1)f1 + (αA2 + βB2)f2 + (αA3 + βB3)f3]
        = d(αA1 + βB1) ∧ f1 + d(αA2 + βB2) ∧ f2 + d(αA3 + βB3) ∧ f3
        = (αdA1 + βdB1) ∧ f1 + (αdA2 + βdB2) ∧ f2 + (αdA3 + βdB3) ∧ f3
        = α(dA1 ∧ f1) + β(dB1 ∧ f1) + α(dA2 ∧ f2) + β(dB2 ∧ f2) + α(dA3 ∧ f3) + β(dB3 ∧ f3)
        = α[(dA1 ∧ f1) + (dA2 ∧ f2) + (dA3 ∧ f3)] + β[(dB1 ∧ f1) + (dB2 ∧ f2) + (dB3 ∧ f3)]
        = αdw1 + βdw2.
The proof for k = 2 is exactly as above.
This proposition tells us that the exterior derivative may be thought
of as a linear transformation from the space of k-forms to the space of
(k + 1)-forms.
Theorem 14.8. Let w1 be a k-form, w2 an l-form, with w1 , w2 ∈ C 1 .
Then
d(w1 ∧ w2 ) = (dw1 ∧ w2 ) + (−1)k (w1 ∧ dw2 ).
Proof. The case k = l = 0 was proved in Exercise 14.6 2). We consider
the case k = l = 1. As in the preceding proposition, let w1 = A1 f1 +
A2 f2 + A3 f3 , w2 = B1 f1 + B2 f2 + B3 f3 . Then
    (dw1 ∧ w2) − (w1 ∧ dw2)
      = [(dA1 ∧ f1) + (dA2 ∧ f2) + (dA3 ∧ f3)] ∧ [B1f1 + B2f2 + B3f3]
        − [A1f1 + A2f2 + A3f3] ∧ [(dB1 ∧ f1) + (dB2 ∧ f2) + (dB3 ∧ f3)]
      = (dA1 ∧ f1) ∧ (B2f2) + (dA2 ∧ f2) ∧ (B1f1) + (dA1 ∧ f1) ∧ (B3f3)
        + (dA3 ∧ f3) ∧ (B1f1) + (dA2 ∧ f2) ∧ (B3f3) + (dA3 ∧ f3) ∧ (B2f2)
        − [(A1f1) ∧ (dB2 ∧ f2) + (A2f2) ∧ (dB1 ∧ f1) + (A1f1) ∧ (dB3 ∧ f3)
           + (A3f3) ∧ (dB1 ∧ f1) + (A2f2) ∧ (dB3 ∧ f3) + (A3f3) ∧ (dB2 ∧ f2)]
      = B2(dA1 ∧ f1 ∧ f2) + B1(dA2 ∧ f2 ∧ f1) − A1(f1 ∧ dB2 ∧ f2) − A2(f2 ∧ dB1 ∧ f1)
        + B3(dA1 ∧ f1 ∧ f3) + B1(dA3 ∧ f3 ∧ f1) − A1(f1 ∧ dB3 ∧ f3) − A3(f3 ∧ dB1 ∧ f1)
        + B3(dA2 ∧ f2 ∧ f3) + B2(dA3 ∧ f3 ∧ f2) − A2(f2 ∧ dB3 ∧ f3) − A3(f3 ∧ dB2 ∧ f2)
      = [A1dB2 + B2dA1 − A2dB1 − B1dA2] ∧ (f1 ∧ f2)
        + [A1dB3 + B3dA1 − A3dB1 − B1dA3] ∧ (f1 ∧ f3)
        + [A2dB3 + B3dA2 − A3dB2 − B2dA3] ∧ (f2 ∧ f3)
      = d(A1B2 − A2B1) ∧ (f1 ∧ f2) + d(A1B3 − A3B1) ∧ (f1 ∧ f3) + d(A2B3 − A3B2) ∧ (f2 ∧ f3).

The last step follows by Exercise 14.6. Now (A1B2 − A2B1), (A1B3 − A3B1), (A2B3 − A3B2) are the coordinate functions of w1 ∧ w2. Hence this final expression is d(w1 ∧ w2).
The cases k = 0, l = 1, and k = 1, l = 0, are similar to, but simpler
than the case just considered, and we leave the proofs to the reader.
Since d(w1 ∧ w2 ) is a (k + l + 1)-form, we see that in all the remaining
cases (k = 1, l = 2; . . .; k = 3, l = 3) both sides are zero and there is
nothing to prove.
Lemma 14.9. Let A be a 0-form on Ω, A ∈ C 2 . Then
d(dA) = 0.
Proof. Since dA = Ax f1 + Ay f2 + Az f3 , we obtain
d(dA) = (dAx ∧ f1 ) + (dAy ∧ f2 ) + (dAz ∧ f3 )
= (Axx f1 + Axy f2 + Axz f3 ) ∧ f1
+ (Ayx f1 + Ayy f2 + Ayz f3 ) ∧ f2
+ (Azx f1 + Azy f2 + Azz f3 ) ∧ f3
= (Ayx − Axy )(f1 ∧ f2 ) + (Azx − Axz )(f1 ∧ f3 )
+ (Azy − Ayz )(f2 ∧ f3 ).
Since A ∈ C2, the mixed partials are equal (Axy = Ayx, etc.), so each coefficient in the final expression vanishes. Hence d(dA) = 0.
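The lemma can be checked symbolically on a concrete f: compute the 1-form df, then apply the k = 1 rule from Definition 14.4, which in coordinates reads d(A1f1 + A2f2 + A3f3) = (∂A2/∂x − ∂A1/∂y)(f1∧f2) + (∂A3/∂x − ∂A1/∂z)(f1∧f3) + (∂A3/∂y − ∂A2/∂z)(f2∧f3). A SymPy sketch (function name mine):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def d_of_1form(A1, A2, A3):
    """d(A1 f1 + A2 f2 + A3 f3), returned as the coefficient triple
    with respect to (f1∧f2, f1∧f3, f2∧f3)."""
    return (sp.diff(A2, x) - sp.diff(A1, y),
            sp.diff(A3, x) - sp.diff(A1, z),
            sp.diff(A3, y) - sp.diff(A2, z))

A = x**2 * y + sp.sin(y * z)                          # a concrete C² 0-form
dA = (sp.diff(A, x), sp.diff(A, y), sp.diff(A, z))    # the 1-form dA
print(d_of_1form(*dA))                                # (0, 0, 0): d(dA) = 0
```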
Theorem 14.10. Let w be a k-form on Ω, w ∈ C 2 . Then
d(dw) = 0.
Proof. The case k = 0 is covered by the preceding lemma. The case
k = 1 is given as an exercise (see below). Since d(dw) is always a
(k + 2)-form, we find that for k = 2, 3, d(dw) is a 4 or 5-form which is
automatically 0.
Exercise 14.11. Prove the above theorem for k = 1. Do not use
the partial derivatives directly. The proof one should give uses: 1) the
preceding lemma, 2) the preceding theorem on exterior derivative of a
wedge product.
15. Effects of a transformation on differential forms
In this section, we develop the concepts of "change of variable" for differential forms. We shall present a unified treatment.
Definition 15.1. Suppose T : D → Rm, T ∈ C1, where D ⊂ Rn is open. For each point p ∈ D, there is defined a linear mapping dT(p) : Rn → Rm, called the differential of T. We now claim that dT(p) induces, in a natural way, linear transformations dkT(p) from the space of k-vectors in Rn into the space of k-vectors in Rm,

    dkT(p) : Rnk → Rmk,
by the formula
dk T (p) (ei1 ∧ . . . ∧ eik ) = dT (p, ei1 ) ∧ . . . ∧ dT (p, eik ).
There are several remarks to be made concerning this definition. In
our applications, we shall have m, n = 2 or 3, so the formulas will
become somewhat simpler. However, we shall have to deal with several
specific cases of this definition, namely n = m = 3, n = 2, m = 3, and
n = m = 2. It is for this reason that we have presented one unifying
definition.
Suppose for example, n = m = 3. Then we have three linear transformations:
d1 T (p) :R31 → R31
d2 T (p) :R32 → R32
d3 T (p) :R33 → R33 .
They are specified by the following rules: d1 T (p)(e1 ) = dT (p, e1 );
d1 T (p)(e2 ) = dT (p, e2 ); d1 T (p)(e3 ) = dT (p, e3 ). Also
d2 T (p)(e1 ∧ e2 ) = dT (p, e1 ) ∧ dT (p, e2 )
d2 T (p)(e1 ∧ e3 ) = dT (p, e1 ) ∧ dT (p, e3 )
d2 T (p)(e2 ∧ e3 ) = dT (p, e2 ) ∧ dT (p, e3 ),
and d3 T (p)(e1 ∧ e2 ∧ e3 ) = dT (p, e1 ) ∧ dT (p, e2 ) ∧ dT (p, e3 ).
To return to the definition, we notice that we have only defined
the function dk T (p) on the basis k-vectors; two questions immediately
arise:
(1) How do we know that there is a linear transformation from Rnk to Rmk whose values on the basis k-vectors are those given?
(2) Even if there is such a linear transformation, how do we know
there is only one? That is, what right do we have simply to
specify the values of the transformation on the few basis vectors?
Let us state the underlying abstract principle that is involved:
Let V be an abstract vector space, and let {v1 , . . . , vs } be a basis for
V . If W is any vector space, and {w1 , . . . , ws } are any given vectors of
W , then there is one and only one linear transformation T : V → W
such that T (v1 ) = w1 , . . . , T (vs ) = ws .
Using this principle, the mappings dk T (p) are all linear transformations, and they are well-defined, i.e. unambiguous.
Notice that for each choice of n and m, there are only a finite number
of non-trivial mappings to be defined; namely we only need to define
dkT(p) for k ≤ min(n, m). For when k > min(n, m), one of the two spaces Rnk, Rmk reduces to 0. For example, if n = 4, m = 6, we would
only define d1 T (p), d2 T (p), d3 T (p), d4 T (p).
Finally, to make the definition complete, let us define d0 T (p). Recall
that for any n, Rn0 , the space of 0-vectors in n-space, is merely defined
to be R1 . As a matter of convention, then, we define
d0 T (p) : R1 → R1
to be the identity mapping: d0 T (p) (α) = α.
Remark 15.2. Suppose T : D → Rm, D ⊂ Rn, T ∈ C1. The reader may verify that for all u1, . . . , uk ∈ Rn, [dkT(p)](u1 ∧ . . . ∧ uk) = dT(p, u1) ∧ . . . ∧ dT(p, uk). The purpose for introducing these linear
transformations dk T (p) is to enable us to define what is meant by the
transform of a k-form.
Definition 15.3. Let T : D → Rm, where D is an open set in Rn, and T ∈ C1. Let w : Ω → (Rmk)∗ be a differential k-form in m-space. We suppose that T(D) ⊂ Ω. We define the transform of w by T to be a differential k-form in n-space, defined on D:

    T∗w : D → (Rnk)∗.

It is given by the following formula: For each p ∈ D, T∗w(p) is a k-covector in n-space, whose value at a k-vector ξ is

    [T∗w(p)](ξ) = [w(T(p))][dkT(p)(ξ)].
Again there are many remarks to be made. Let us study this formula
carefully. The left hand side is what we are defining. We want to know
what T ∗ w is, so we must define what the k-covector T ∗ w(p) is for any
p ∈ D. Then, to know what a k-covector is, it is sufficient to know
what its value is for an arbitrary k-vector ξ.
On the right hand side, T (p) is a point in m-space; in fact, T (p) ∈ Ω,
since by assumption T (D) ⊂ Ω. The k-form w is therefore defined at
T (p), and w(T (p)) is a k-covector in m-space. Also, ξ is a k-vector
in n-space, so by Definition 15.1, [dk T (p)](ξ) is a k-vector in m-space.
The right hand side therefore stands for the effect of the operation of a
k-covector in m-space on a k-vector in m-space, which is a real number,
as desired. So at least everything makes sense. However, there is one
important detail to be checked. We have claimed in our definition
that for each p ∈ D, T ∗ w(p) is a k-covector. We must verify that the
function T∗w(p) as defined is indeed linear, i.e.

    [T∗w(p)](α1ξ1 + α2ξ2) = α1[T∗w(p)](ξ1) + α2[T∗w(p)](ξ2).
Proof. By definition

    [T∗w(p)](α1ξ1 + α2ξ2) = [w(T(p))][dkT(p)(α1ξ1 + α2ξ2)].

But dkT(p) and w(T(p)) are linear transformations, so we obtain

    [w(T(p))][α1(dkT(p))(ξ1) + α2(dkT(p))(ξ2)]
        = α1[w(T(p))][(dkT(p))(ξ1)] + α2[w(T(p))][(dkT(p))(ξ2)]
        = α1[T∗w(p)](ξ1) + α2[T∗w(p)](ξ2),

the last step by definition.
Example 15.4. Let T : R2 → R2 be given by T(u, v) = (u² + v, v). Clearly T ∈ C1. We note that

    dT(u, v) = [ 2u  1 ]
               [  0  1 ].

Now we must transform a k-form in the plane. Therefore we consider w(x, y) = xy dx; i.e. w(x, y) = xyf1. w is a 1-form, and T∗w will also be a 1-form, again in the plane. The usual way of representing such a 1-form is by its coordinate functions. For notational convenience, w is a 1-form in the (x, y)-plane, T is a transformation from the (u, v)-plane to the (x, y)-plane, so T∗w is a 1-form in the (u, v)-plane. It will be given by T∗w(u, v) = A1(u, v)f1 + A2(u, v)f2. We compute these coordinate functions as follows:

    A1(u, v) = [A1(u, v)f1 + A2(u, v)f2](e1)
             = [T∗w(u, v)](e1)
             = [w(T(u, v))][d1T(u, v)(e1)]
             = [w(T(u, v))][dT((u, v), e1)].

Now dT((u, v), e1) = (2u, 0), and w(T(u, v)) = (u²v + v²)f1. Thus

    A1(u, v) = [(u²v + v²)f1](2u, 0) = (u²v + v²)(2u) = 2u³v + 2uv².

In a similar manner, A2(u, v) = [T∗w(u, v)](e2) = [(u²v + v²)f1](1, 1) = u²v + v². Thus T∗w(u, v) = (2u³v + 2uv²)f1 + (u²v + v²)f2.
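For 1-forms the transform T∗w is exactly what classical notation computes by substitution: substitute x = u² + v, y = v into xy dx and expand dx = 2u du + dv. A SymPy sketch (variable names mine) rechecking Example 15.4:

```python
import sympy as sp

u, v = sp.symbols('u v')
x, y = u**2 + v, v                     # T(u, v) = (u² + v, v)

# Pull back w = xy dx: the du-coefficient is (xy)·∂x/∂u,
# the dv-coefficient is (xy)·∂x/∂v.
coeff = x * y
A1 = sp.expand(coeff * sp.diff(x, u))  # coordinate function of f1 (du)
A2 = sp.expand(coeff * sp.diff(x, v))  # coordinate function of f2 (dv)
print(A1, A2)   # 2u³v + 2uv² and u²v + v²
```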
Example 15.5. Let T : R2 → R3 be given by T (x, y, z) = (u −
v, uv, v 2 ), T ∈ C 1 , and


1 −1
u .
dT (u, v) =  v
0 2v
T will transform k-forms in 3-space into k-forms in 2-spaces. In particular, T will transform k-forms in (x, y, z)-space into k-forms in (u, v)space. Let us put k = 2. So let w(x, y, z) = x(f1 ∧ f2 ) + yz(f2 ∧ f3 ).
COVECTORS AND FORMS
39
Now T ∗ w will be a 2-form in the (u, v)-plane, so
T ∗ w(u, v) = A(u, v)(f1 ∧ f2 ).
Note that f1 , f2 in this formula stand for the basis covectors in R2 ,
while f1 , f2 , f3 in the formula for w stand for the basis covectors in R3 .
Now A(u, v) may be determined by
A(u, v) = [A(u, v)(f1 ∧ f2)](e1 ∧ e2)
= [T*w(u, v)](e1 ∧ e2)
= [w(T(u, v))][d2 T(u, v)(e1 ∧ e2)]
= [w(u − v, uv, v²)][(dT(u, v)(e1)) ∧ (dT(u, v)(e2))].
Now dT(u, v)(e1) = (1, v, 0) and dT(u, v)(e2) = (−1, u, 2v). Hence
A(u, v) = [w(u − v, uv, v²)][(1, v, 0) ∧ (−1, u, 2v)]
= (u − v) (f1 ∧ f2)[(1, v, 0) ∧ (−1, u, 2v)] + (uv³) (f2 ∧ f3)[(1, v, 0) ∧ (−1, u, 2v)]

          | f1(1, v, 0)  f1(−1, u, 2v) |          | f2(1, v, 0)  f2(−1, u, 2v) |
= (u − v) |                            | + (uv³)  |                            |
          | f2(1, v, 0)  f2(−1, u, 2v) |          | f3(1, v, 0)  f3(−1, u, 2v) |

          | 1  −1 |          | v   u  |
= (u − v) |       | + (uv³)  |        |
          | v   u |          | 0  2v |

= (u − v)(u + v) + (uv³)(2v²) = u² − v² + 2uv⁵.
Therefore T ∗ w(u, v) = (u2 − v 2 + 2uv 5 )(f1 ∧ f2 ).
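The determinant computation of Example 15.5 can likewise be checked numerically; in the sketch below, `wedge2_eval` and `A` are ad hoc names for this illustration, and the evaluation rule for (f_i ∧ f_j) on a simple 2-vector is the 2 × 2 determinant used in the text.

```python
# Numerical spot-check of the determinant computation in Example 15.5.

def wedge2_eval(i, j, x1, x2):
    # (f_{i+1} ∧ f_{j+1})(x1 ∧ x2) = det [[x1[i], x2[i]], [x1[j], x2[j]]]
    return x1[i] * x2[j] - x2[i] * x1[j]

def A(u, v):
    # coordinate function of T*w, computed from Definition 15.3
    x, y, z = (u - v, u * v, v * v)        # T(u, v)
    col1 = (1.0, v, 0.0)                   # dT(u, v)(e1)
    col2 = (-1.0, u, 2.0 * v)              # dT(u, v)(e2)
    # w = x (f1∧f2) + yz (f2∧f3), evaluated on col1 ∧ col2
    return x * wedge2_eval(0, 1, col1, col2) + y * z * wedge2_eval(1, 2, col1, col2)

# agreement with the closed form A(u, v) = u^2 - v^2 + 2 u v^5
for (u, v) in [(1.0, 2.0), (0.5, -1.0), (3.0, 0.25)]:
    assert abs(A(u, v) - (u**2 - v**2 + 2 * u * v**5)) < 1e-9
```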
Exercise 15.6.
a) Let T : R³ → R² be given by
T : u = x² + z, v = x + y,
and w(u, v) = uv f1 − v f2. Compute T*w.
b) Let T : R³ → R³ be given by
T : x = r + s, y = s²t, z = 2t,
and w(x, y, z) = (xy − z²) f1 ∧ f2 ∧ f3. Compute T*w.
Let us consider what happens to our Definition 15.3 in case k = 0. By definition, a 0-form is simply a real-valued function. Suppose then that D ⊂ R^n is open, and T : D → R^m, T ∈ C¹. Let w : Ω → (R^m_0)^* be a 0-form in m-space, and let T(D) ⊂ Ω. Then T*w should be a 0-form in n-space, T*w : D → (R^n_0)^*, specified by
[T*w(p)](ξ) = [w(T(p))][d0 T(p)(ξ)].
Here ξ is a 0-vector in R^n_0 = R¹. By Definition 15.1, d0 T(p)(ξ) = ξ. Hence we have
[T*w(p)](ξ) = [w(T(p))](ξ).
Now to make sense of this equation, we must know how a 0-covector
operates on a 0-vector. We have not defined this notion before, as
we should have done, so we give the definition now. If α ∈ (R^n_0)^* and β ∈ R^n_0, then α(β) = αβ. In particular, [T*w(p)](ξ) and [w(T(p))](ξ) stand for ordinary real-number multiplication. Now put ξ = 1, and we have T*w(p) = w(T(p)). Since this holds for every
p ∈ D, T ∗ w = w ◦ T . Thus we see that for the case of 0-forms, the
notion of transform by T corresponds to the notion of inducing a change
of coordinates by T .
We wish to develop a formula concerning the interrelation between
the exterior derivative and the transform by T . This requires a number
of preliminary results which have independent interest as well. For the
next several pages we shall always consider T : D → Rm , D ⊂ Rn
open, T ∈ C 1 .
Proposition 15.7. T* may be thought of as a linear transformation from the space of k-forms in m-space to the space of k-forms in n-space.
Proof. The proof is left to the reader.
The next result applies to the interaction of T ∗ with the exterior
product. First we need some lemmas.
Lemma 15.8. If w is a k-form in m-space, then the transform T ∗ w
may also be described by the formula
T ∗ w(p) = w(T (p)) ◦ dk T (p).
Proof. This is obvious from the definition. The diagram below illustrates this lemma.
(Diagram: T*w(p) : R^n_k → R¹ factors through R^m_k as the composition of dk T(p) : R^n_k → R^m_k with w(T(p)) : R^m_k → R¹.)
Lemma 15.9. Let ξ be a simple k-covector in m-space; say ξ = g1 ∧ … ∧ gk ∈ (R^m_k)^*, where each gi ∈ (R^m_1)^*. Then ξ ∘ dk T(p) is a simple k-covector in n-space; in fact,
ξ ∘ dk T(p) = (g1 ∘ dT(p)) ∧ … ∧ (gk ∘ dT(p)).
Proof. We must show that both sides of the equation stand for the same function in (R^n_k)^*; so let u1 ∧ … ∧ uk ∈ R^n_k. Then, using the formula on p. 17,

[ξ ∘ dk T(p)](u1 ∧ … ∧ uk) = ξ[dT(p)(u1) ∧ … ∧ dT(p)(uk)]

    | g1(dT(p)(u1))  …  g1(dT(p)(uk)) |
  = |       ⋮                 ⋮        |
    | gk(dT(p)(u1))  …  gk(dT(p)(uk)) |.

Now gi(dT(p)(uj)) = [gi ∘ dT(p)](uj). So we get

    | (g1 ∘ dT(p))(u1)  …  (g1 ∘ dT(p))(uk) |
    |         ⋮                    ⋮         |
    | (gk ∘ dT(p))(u1)  …  (gk ∘ dT(p))(uk) |
  = [(g1 ∘ dT(p)) ∧ … ∧ (gk ∘ dT(p))](u1 ∧ … ∧ uk).

Thus the functions ξ ∘ dk T(p) and (g1 ∘ dT(p)) ∧ … ∧ (gk ∘ dT(p)) have the same values on simple k-vectors, and so by linearity they are the same function.
Lemma 15.10. Let φ1 be a k-covector in Rm , and φ2 an l-covector in
Rm . Then
(φ1 ∧ φ2 ) ◦ dk+l T (p) = φ1 ◦ dk T (p) ∧ φ2 ◦ dl T (p) .
Proof. First consider the case where φ1 and φ2 are simple; say φ1 = g1 ∧ … ∧ gk ∈ (R^m_k)^*, and φ2 = h1 ∧ … ∧ hl ∈ (R^m_l)^*. Then, using Lemma 15.9,
(φ1 ∧ φ2) ∘ d_{k+l} T(p) = (g1 ∧ … ∧ gk ∧ h1 ∧ … ∧ hl) ∘ d_{k+l} T(p)
= (g1 ∘ dT(p)) ∧ … ∧ (gk ∘ dT(p)) ∧ (h1 ∘ dT(p)) ∧ … ∧ (hl ∘ dT(p))
= (φ1 ∘ dk T(p)) ∧ (φ2 ∘ dl T(p)).
In the general case, we may write φ1 = Σ_{i=1}^s αi ξi and φ2 = Σ_{j=1}^t βj ηj, where the ξi's are simple k-covectors and the ηj's are simple l-covectors. By the linearity of the exterior product,
φ1 ∧ φ2 = Σ_{i=1}^s Σ_{j=1}^t αi βj (ξi ∧ ηj).
Therefore, using the first case, we obtain
(φ1 ∧ φ2) ∘ d_{k+l} T(p) = Σ_{i=1}^s Σ_{j=1}^t αi βj (ξi ∧ ηj) ∘ d_{k+l} T(p)
= Σ_{i=1}^s Σ_{j=1}^t αi βj (ξi ∘ dk T(p)) ∧ (ηj ∘ dl T(p))
= [Σ_{i=1}^s αi (ξi ∘ dk T(p))] ∧ [Σ_{j=1}^t βj (ηj ∘ dl T(p))]
= [(Σ_{i=1}^s αi ξi) ∘ dk T(p)] ∧ [(Σ_{j=1}^t βj ηj) ∘ dl T(p)]
= (φ1 ∘ dk T(p)) ∧ (φ2 ∘ dl T(p)).
Theorem 15.11. If w1 is a k-form in Rm , and w2 is an l-form in Rm ,
then
T ∗ (w1 ∧ w2 ) = T ∗ w1 ∧ T ∗ w2 .
Proof. Let p ∈ D. Using Lemma 15.8, Lemma 15.10, and again Lemma 15.8,
we obtain
T*(w1 ∧ w2)(p) = [(w1 ∧ w2)(T(p))] ∘ d_{k+l} T(p)
= [w1(T(p)) ∧ w2(T(p))] ∘ d_{k+l} T(p)
= [w1(T(p)) ∘ dk T(p)] ∧ [w2(T(p)) ∘ dl T(p)]
= (T*w1(p)) ∧ (T*w2(p)) = (T*w1 ∧ T*w2)(p).
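Theorem 15.11 can be spot-checked numerically for two 1-forms in the plane, pulled back under the map of Example 15.4. In the sketch below, the forms `w1`, `w2` are sample choices (not from the text), and the pullback of a 2-form c f1∧f2 under a plane map is computed as c(T(u,v)) times det dT(u,v), which follows from d2 T(e1∧e2) = dT(e1) ∧ dT(e2).

```python
# Numerical spot-check of Theorem 15.11: T*(w1 ∧ w2) = T*w1 ∧ T*w2.
# All helper names and the sample forms are ad hoc illustrations.

def T(u, v):
    return (u * u + v, v)                 # the map of Example 15.4

def dT(u, v):
    return ((2 * u, 1.0), (0.0, 1.0))

def pull1(coeffs, u, v):
    # pull back the 1-form a1 f1 + a2 f2 given by coeffs(x, y) = (a1, a2)
    a1, a2 = coeffs(*T(u, v))
    J = dT(u, v)
    return (a1 * J[0][0] + a2 * J[1][0], a1 * J[0][1] + a2 * J[1][1])

def w1(x, y):
    return (x, y)                         # w1 = x f1 + y f2 (sample)

def w2(x, y):
    return (y, x * x)                     # w2 = y f1 + x^2 f2 (sample)

for (u, v) in [(1.0, 2.0), (-0.5, 0.7)]:
    a1, a2 = w1(*T(u, v))
    b1, b2 = w2(*T(u, v))
    detJ = 2 * u                          # det dT(u, v)
    # pullback of the 2-form w1 ∧ w2 = (a1 b2 - a2 b1) f1∧f2
    lhs = (a1 * b2 - a2 * b1) * detJ
    # wedge of the two pulled-back 1-forms
    A = pull1(w1, u, v)
    B = pull1(w2, u, v)
    rhs = A[0] * B[1] - A[1] * B[0]
    assert abs(lhs - rhs) < 1e-9
```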
Theorem 15.12. If w is a k-form in Rm , and T : D → Rm as above,
then if w ∈ C 1 ,
T ∗ (dw) = d(T ∗ w).
Proof. The method of proof is similar to the proof of the preceding
theorem.
Exercise 15.13. Let k = 0, and n = m = 3, and prove the theorem in
this special case.
The general case now follows quickly from what we have already
established. First we make two remarks. Let f̄1, …, f̄m be the basis 1-forms in
R^m, and suppose T : D → R^m is as above, with coordinate functions
T : y1 = φ1(x1, …, xn), …, ym = φm(x1, …, xn).
Then we observe that T* f̄i = dφi. For indeed, in the proof of Lemma 15.10 we showed T* f̄i = (φi)_{x1} f1 + … + (φi)_{xn} fn = dφi.
As our second remark, we note df̄i = 0. For indeed, if we let Fi : R^m → R¹ be given by Fi(x1, …, xm) = xi, then we have seen that dFi = f̄i. Thus df̄i = d(dFi) = 0.
To prove the theorem, we proceed in steps; let k = 1, and let w be
a 1-form in Rm . Then w = A1 ∧ f¯1 + . . . + Am ∧ f¯m expresses w as a
sum of wedge products of 0-forms and 1-forms. Now
d[T*(Ai ∧ f̄i)] = d[(T*Ai) ∧ (T* f̄i)]
= [d(T*Ai) ∧ (T* f̄i)] + [(T*Ai) ∧ d(T* f̄i)].
As we have noted, T* f̄i = dφi, so d(T* f̄i) = d(dφi) = 0; the second term vanishes, and we are left with d(T*Ai) ∧ (T* f̄i), which, by Exercise 15.13 for k = 0, is (T* dAi) ∧ (T* f̄i) = T*(dAi ∧ f̄i). Finally, d(Ai ∧ f̄i) = (dAi ∧ f̄i) + (Ai ∧ df̄i) = dAi ∧ f̄i by the second remark. Thus T*(dAi ∧ f̄i) = T* d(Ai ∧ f̄i). Hence d[T*(Ai ∧ f̄i)] = T* d(Ai ∧ f̄i).
Thus the desired formula holds in each of the terms of the sum
representing w. We now make the observation that since T ∗ and d are
both linear, it follows that if T ∗ d(wi ) = dT ∗ (wi ) for i = 1, 2, . . . , s, then
the formula holds for their sum
T* d(Σ_{i=1}^s wi) = T*(Σ_{i=1}^s d(wi)) = Σ_{i=1}^s T*(d(wi))
= Σ_{i=1}^s d(T*(wi)) = d(Σ_{i=1}^s T*(wi)) = d(T*(Σ_{i=1}^s wi)).
Since the formula does hold for each of the terms Ai ∧ f¯i which add up
to w, it holds for w.
To complete the theorem, we must establish the formula for any k. We note that if the formula "T*d = dT*" holds for forms w1 and w2 (of arbitrary dimension), then it holds for their wedge product. Let w1 be a k-form. Then
T*(d(w1 ∧ w2)) = T*((dw1 ∧ w2) + (−1)^k (w1 ∧ dw2))
= T*(dw1 ∧ w2) + (−1)^k T*(w1 ∧ dw2)
= T*(dw1) ∧ T*(w2) + (−1)^k T*(w1) ∧ T*(dw2)
= d(T*w1) ∧ T*w2 + (−1)^k T*w1 ∧ d(T*w2)
= d(T*w1 ∧ T*w2) = d(T*(w1 ∧ w2)).
This said, we now note that if w is a k-form, k ≥ 2, then w can be built up from forms of dimension less than k by sums and wedge products; hence if the formula "T*d = dT*" holds for forms of dimension less than k, it holds for forms of dimension k. Thus the proof is complete.
To illustrate this final assertion, consider a 2-form w in R3 . As we
have seen, w has a coordinate representation: w(p) = A1 (p)(f1 ∧ f2 ) +
A2 (p)(f1 ∧ f3 ) + A3 (p)(f2 ∧ f3 ). But this very formula means that
w = A1 ∧ f¯1 ∧ f¯2 + A2 ∧ f¯1 ∧ f¯3 + A3 ∧ f¯2 ∧ f¯3 . The Ai ’s are 0-forms and
the f¯i ’s are 1-forms. Since we have proved the formula in case k = 0, 1,
it follows for the 2-form w by our remarks.
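Theorem 15.12 can also be spot-checked numerically for the data of Example 15.4, T(u, v) = (u² + v, v) and w = xy f1. The exterior derivative dw = −x f1∧f2 and the finite-difference scheme in the sketch below are additions of this illustration, not formulas from the text.

```python
# Numerical spot-check of T*(dw) = d(T*w) for the data of Example 15.4.

def pullback_1form(u, v):
    # coordinates (A1, A2) of T*w at (u, v), from Definition 15.3
    a = (u * u + v) * v                   # w at T(u, v) is the covector a f1
    return (a * 2 * u, a * 1.0)           # a applied to the columns of dT

def d_pullback(u, v, h=1e-5):
    # coefficient of f1∧f2 in d(T*w): dA2/du - dA1/dv, central differences
    dA2_du = (pullback_1form(u + h, v)[1] - pullback_1form(u - h, v)[1]) / (2 * h)
    dA1_dv = (pullback_1form(u, v + h)[0] - pullback_1form(u, v - h)[0]) / (2 * h)
    return dA2_du - dA1_dv

def pullback_of_dw(u, v):
    # dw = -x f1∧f2; pulling a 2-form back multiplies it by det dT(u, v) = 2u
    return -(u * u + v) * (2 * u)

for (u, v) in [(1.0, 2.0), (-0.7, 0.3), (2.0, -1.0)]:
    assert abs(d_pullback(u, v) - pullback_of_dw(u, v)) < 1e-6
```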
We now consider the effects of two transformations in composition
on differential forms. For the next few pages we shall deal with the
following situation: U ⊂ Rq open, V ⊂ Rn open, S : U → Rn ,
T : V → R^m, S, T ∈ C¹, and S(U) ⊂ V, so that the composite T ∘ S makes good sense, as does the chain rule d(TS)(p) = dT(S(p)) ∘ dS(p).
(Diagrams: TS : U → R^m factors through V as T ∘ S, and d(TS)(p) : R^q → R^m factors through R^n as dT(S(p)) ∘ dS(p).)
Proposition 15.14. For all k ≥ 0, we have
dk (T S)(p) = dk T (S(p)) ◦ dk S(p)
for any point p ∈ U .
(Diagram: dk(TS)(p) : R^q_k → R^m_k factors through R^n_k as dk T(S(p)) ∘ dk S(p).)
Proof. Case 1: k = 0. In this case, R^q_0 = R^n_0 = R^m_0 = R¹, and d0(TS)(p) = d0 T(S(p)) = d0 S(p) = identity. Hence the conclusion is immediate.
Case 2: k > 0. Using Definition 15.1 and the preceding reminder,
[dk(TS)(p)](e_{i1} ∧ … ∧ e_{ik}) = [d(TS)(p)](e_{i1}) ∧ … ∧ [d(TS)(p)](e_{ik})
= [dT(S(p)) ∘ dS(p)](e_{i1}) ∧ … ∧ [dT(S(p)) ∘ dS(p)](e_{ik})
= dT(S(p))[dS(p)(e_{i1})] ∧ … ∧ dT(S(p))[dS(p)(e_{ik})]
= dk T(S(p))[dS(p)(e_{i1}) ∧ … ∧ dS(p)(e_{ik})]
= dk T(S(p))[dk S(p)(e_{i1} ∧ … ∧ e_{ik})]
= [dk T(S(p)) ∘ dk S(p)](e_{i1} ∧ … ∧ e_{ik}).
We have proved that the two linear transformations dk(TS)(p) and dk T(S(p)) ∘ dk S(p) agree on the basis vectors e_{i1} ∧ … ∧ e_{ik}. Thus they agree on all k-vectors, as shown earlier.
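Proposition 15.14 can be illustrated numerically for k = 2 with q = 2, n = m = 3. In the sketch below, the Jacobians `dS` and `dT` are arbitrary sample matrices standing in for dS(p) and dT(S(p)); 2-vectors in R³ are stored by their components on the basis e1∧e2, e1∧e3, e2∧e3.

```python
# Numerical check of d2(TS)(p) = d2 T(S(p)) ∘ d2 S(p) for sample Jacobians.

def matvec(M, x):
    return tuple(sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M)))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(len(B)))
                       for j in range(len(B[0]))) for i in range(len(A)))

def wedge(a, b):
    # components of a ∧ b in R^3 on the basis e1∧e2, e1∧e3, e2∧e3
    return (a[0]*b[1] - a[1]*b[0], a[0]*b[2] - a[2]*b[0], a[1]*b[2] - a[2]*b[1])

dS = ((1.0, 2.0), (0.0, 1.0), (3.0, -1.0))                 # sample 3x2 Jacobian of S
dT = ((2.0, 0.0, 1.0), (1.0, 1.0, 0.0), (0.0, 2.0, 1.0))   # sample 3x3 Jacobian of T

e1, e2, e3 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

def d2T(xi):
    # extend d2 T linearly from its values on the basis 2-vectors of R^3
    basis_images = (wedge(matvec(dT, e1), matvec(dT, e2)),
                    wedge(matvec(dT, e1), matvec(dT, e3)),
                    wedge(matvec(dT, e2), matvec(dT, e3)))
    return tuple(sum(xi[i] * basis_images[i][c] for i in range(3)) for c in range(3))

def lhs():
    # d2(TS)(e1∧e2), computed from the chain rule d(TS) = dT * dS
    dTS = matmul(dT, dS)
    return wedge(matvec(dTS, (1.0, 0.0)), matvec(dTS, (0.0, 1.0)))

def rhs():
    # d2 T applied to d2 S(e1∧e2) = dS(e1) ∧ dS(e2)
    xi = wedge(matvec(dS, (1.0, 0.0)), matvec(dS, (0.0, 1.0)))
    return d2T(xi)

assert all(abs(a - b) < 1e-9 for a, b in zip(lhs(), rhs()))
```

The assertion is, in matrix language, the Cauchy–Binet relation between the 2 × 2 minors of dT·dS and those of dT and dS.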
Proposition 15.15. Let us use the symbols F^q_k, F^n_k, F^m_k to denote the spaces of differential k-forms in q-space, n-space, m-space, respectively.
Then for all k ≥ 0, we have
(TS)*w = S*(T*w),
where w is any k-form in m-space such that T(V) ⊂ D(w), the domain of the k-form w.
(Diagram: (TS)* : F^m_k → F^q_k factors through F^n_k as S* ∘ T*.)
Proof. Note first that both sides stand for k-forms in q-space defined
on U . Let p ∈ U and ξ ∈ Rqk . Then, using the preceding proposition
and Definition 15.3, we obtain
[(TS)*w(p)](ξ) = [w(TS(p))][dk(TS)(p)(ξ)]
= [w(T(S(p)))][(dk T(S(p)) ∘ dk S(p))(ξ)]
= [w(T(S(p)))][dk T(S(p))(dk S(p)(ξ))]
= [T*w(S(p))][dk S(p)(ξ)]
= [S*(T*w)(p)](ξ).
Since this equation holds for all p ∈ U and ξ ∈ Rqk , we have (T S)∗ w =
S ∗ (T ∗ w).
Lemma 15.16. Let D ⊂ R^k be open, T : D → R^n, T ∈ C¹. Then for all p ∈ D,
J̃T(p) = [dk T(p)](e1 ∧ … ∧ ek).
Proof. Let T have coordinate functions y1 = φ1(x1, …, xk), …, yn = φn(x1, …, xk); then dT(p) is the Jacobian matrix of T at p. Note that
dT(p)(ej) = (∂φ1/∂xj (p), …, ∂φn/∂xj (p)) = ∂T/∂xj (p).
Hence
J̃T(p) = ∂T/∂x1 (p) ∧ … ∧ ∂T/∂xk (p)
= dT(p)(e1) ∧ … ∧ dT(p)(ek)
= [dk T(p)](e1 ∧ … ∧ ek).
We continue to use the basic hypotheses as given preceding Proposition 15.14. We suppose also that S and T are smooth.
Proposition 15.17. Let w be a continuous q-form in m-space. Then
T ∗ w is a continuous q-form in n-space, and
∫_S T*w = ∫_{TS} w.
Proof. To show that T*w is continuous, we must show that its coordinate functions are continuous. Now
T*w(p) = Σ A_{i1,…,iq}(p)(f_{i1} ∧ … ∧ f_{iq}),
where i1, …, iq is an increasing sequence of integers from among {1, …, n}, and f1, …, fn are the basis covectors in n-space. We know the procedure for obtaining the coordinate functions:
A_{i1,…,iq}(p) = [T*w(p)](e_{i1} ∧ … ∧ e_{iq})
= [w(T(p))][dq T(p)(e_{i1} ∧ … ∧ e_{iq})]
= [w(T(p))][dT(p)(e_{i1}) ∧ … ∧ dT(p)(e_{iq})].
We use the fact that w is continuous; this means
w(x) = Σ B_{j1,…,jq}(x)(g_{j1} ∧ … ∧ g_{jq}),
where j1, …, jq is an increasing sequence of integers from among {1, …, m}, and where g1, …, gm are the basis covectors in m-space. Now
[w(T(p))][dT(p)(e_{i1}) ∧ … ∧ dT(p)(e_{iq})]
= Σ B_{j1,…,jq}(T(p)) [(g_{j1} ∧ … ∧ g_{jq})(dT(p)(e_{i1}) ∧ … ∧ dT(p)(e_{iq}))].
The bracketed quantity is given by a certain determinant of a q × q matrix (see p. 17). Note that g_j(dT(p)(e_i)) stands for the action of a covector in m-space on a vector in m-space; its value is the j-th component of the vector dT(p)(e_i), that is to say, the partial derivative ∂φ_j/∂x_i (p). Thus the entries in the determinant are simply the various partials of the coordinate functions of T, all of which are continuous, since T ∈ C¹. Hence the entire determinant is a continuous function of p. Also, the coefficients B_{j1,…,jq} ∘ T are composites of continuous functions. Thus T*w is continuous.
We may write ∫_S T*w as follows:
∫_S T*w = ∫_U [T*w(S(p))] J̃S(p)
= ∫_U [T*w(S(p))][dq S(p)(e1 ∧ … ∧ eq)]
= ∫_U [w(T(S(p)))][dq T(S(p))(dq S(p)(e1 ∧ … ∧ eq))]
= ∫_U [w(TS(p))][dq(TS)(p)(e1 ∧ … ∧ eq)]
= ∫_U [w(TS(p))] J̃(TS)(p)
= ∫_{TS} w.
Corollary 15.18. Let w be a continuous 2-form in R³. Let Σ : D → R³ be a smooth surface, where D ⊂ R² is open. Then
∫_Σ w = ∫_D Σ*w.
Proof. Put T = Σ in the preceding proposition. Also, put S : D → D equal to the identity transformation; then the assertion of the preceding proposition says
∫_S Σ*w = ∫_{Σ∘S} w = ∫_Σ w.
On the other hand, by the remark after Definition 12.3, restated for 2-forms in the plane instead of 3-forms in space, we have
∫_S Σ*w = ∫_D Σ*w.
16. The Gauss–Green–Stokes theorems
Let γ1 : [a, b] → R3 , γ2 : [b, c] → R3 be two curves. Suppose that
γ1(b) = γ2(b). Then, as usual, we define γ1 + γ2 : [a, c] → R³ by
(γ1 + γ2)(t) = γ1(t) for a ≤ t ≤ b, and (γ1 + γ2)(t) = γ2(t) for b ≤ t ≤ c.
Suppose that γ1 and γ2 are smooth. Then γ1 + γ2 will be a piecewise smooth curve, and we shall define
∫_{γ1+γ2} w = ∫_{γ1} w + ∫_{γ2} w
for any 1-form w which is continuous and whose domain contains trace(γ1) ∪ trace(γ2).
Lemma 16.1. Let w1, w2 : Ω → (R^3_k)^* be continuous k-forms (k = 1, 2, or 3). Let α1, α2 be real numbers; let D ⊂ R^k be open, and let T : D → R³ be a smooth k-surface, with T(D) ⊂ Ω. Then
∫_T (α1 w1 + α2 w2) = α1 ∫_T w1 + α2 ∫_T w2.
Proof.
∫_T (α1 w1 + α2 w2) = ∫_D [(α1 w1 + α2 w2)(T(p))] J̃T(p)
= α1 ∫_D [w1(T(p))] J̃T(p) + α2 ∫_D [w2(T(p))] J̃T(p)
= α1 ∫_T w1 + α2 ∫_T w2.
By an admissible region D in the plane we shall mean the following:
D shall be a subset of a rectangle R = {(x, y) ∈ R2 : a ≤ x ≤ b, c ≤
y ≤ d}, and if the projection of D onto the horizontal axis is the closed
interval [α, β], then D is the set of all points (x, y) such that
α ≤ x ≤ β,  f(x) ≤ y ≤ g(x),
where f and g are the smooth (e.g. C 1 ) functions whose graphs form
the top and bottom pieces of the boundary of D. Likewise, if [α0 , β 0 ] is
the projection of D upon the vertical axis, D is the set of points (x, y)
COVECTORS AND FORMS
49
such that
α′ ≤ y ≤ β′,  F(y) ≤ x ≤ G(y).
We are now ready to state the first form of Green’s theorem.
Theorem 16.2 (Green). Let D be an admissible region, and let w : Ω → (R^2_1)^* be a C¹ 1-form, where Ω ⊂ R² is open and D ⊂ Ω. Then
∫_{∂D} w = ∫_D dw.
Proof. Let w(x, y) = A(x, y)f1 + B(x, y)f2, and set w1(x, y) = A(x, y)f1, w2(x, y) = B(x, y)f2. Since ∫_{∂D} w = ∫_{∂D} w1 + ∫_{∂D} w2, we may treat each part separately. Consider ∫_{∂D} w1. On γ1, the lower part of ∂D, y = f(x) (or x = F(y)), α ≤ x ≤ β (or α′ ≤ y ≤ β′), and w1(x, f(x)) = A(x, f(x))f1. Thus
∫_{γ1} w1 = ∫_α^β A(x, f(x)) dx.
On γ2, the upper part of ∂D, y = g(x) (or x = G(y)) and x goes from β to α. Thus w1(x, g(x)) = A(x, g(x))f1 and
∫_{γ2} w1 = ∫_β^α A(x, g(x)) dx = −∫_α^β A(x, g(x)) dx.
On the vertical parts of ∂D, if any, w1 = 0. Adding, we obtain
∫_{∂D} w1 = ∫_{∂D} A dx = ∫_α^β [A(x, f(x)) − A(x, g(x))] dx.
On the other hand, dw1 = d(A(x, y)f1) = dA(x, y) ∧ f1 = −A_y(x, y) f1 ∧ f2, so that
∫_D dw1 = −∫_D A_y(x, y) dxdy
= −∫_α^β ( ∫_{f(x)}^{g(x)} A_y(x, y) dy ) dx
= −∫_α^β [A(x, g(x)) − A(x, f(x))] dx
= ∫_α^β [A(x, f(x)) − A(x, g(x))] dx = ∫_{∂D} w1.
A similar computation shows that the same relation holds for w2 = B(x, y)f2, and adding these, we obtain the formula for a general 1-form.
We wish to extend Green's theorem to a wider class of plane regions than the class of admissible regions. We can of course extend the theorem without further delay to a finite union of admissible regions D1, …, Dn, and we get
∫_{D1∪···∪Dn} dw = ∫_{∂D1∪···∪∂Dn} w.
Now we would like to write ∂(D1 ∪ ··· ∪ Dn) for ∂D1 ∪ ··· ∪ ∂Dn. As we know, this equality is generally false; however, it may happen that
∫_{∂D1∪···∪∂Dn} w = ∫_{∂(D1∪···∪Dn)} w
for all differential forms w, as the following example shows:
Consider the crescent-shaped region D bounded by the curves y =
x2 + 1 and y = 2x2 . Let us cut D into two pieces by the y-axis: D1 and
D2 are both admissible regions. We observe that ∂D1 and ∂D2 both
contain the segment of the y-axis between 0 and 1 but the orientation
of this segment is reversed. Accordingly, the integrals over this segment
cancel, and we find that
∫_{∂D} w = ∫_{∂D1∪∂D2} w = ∫_{D1∪D2} dw.
Now D1 ∪ D2 ≠ D, because D1 and D2 do not contain the segment on the y-axis. However, this segment is a set of zero area, so
∫_{D1∪D2} dw = ∫_D dw.
Thus the theorem holds for D.
In this manner, Green’s theorem holds also for a wider class of regions.
Theorem 16.3 (Green revisited). Suppose Green’s theorem holds for
a domain D. Let D ⊂ U ⊂ R2 , where U is open, and suppose T : U →
R2 , T ∈ C 2 on U , T is 1-1 on U and JT (p) > 0 on U . Then Green’s
theorem holds for T (D).
Proof. Let I_D denote the identity transformation on D. Then we obtain
∫_{∂(T(D))} w = ∫_{T∘∂D} w = ∫_{∂D} T*w = ∫_D d(T*w)   (by Thm. 16.2)
= ∫_D T*(dw)   (by Thm. 15.12)
= ∫_{I_D} T*(dw) = ∫_{T∘I_D} dw = ∫_{T(D)} dw.
If we now consider the region in the first and fourth quadrants
bounded by the lines x = 0, x = 1, y = 1 and the curve y = x3 sin(π/x),
it can be shown that this region is the image of the unit square under
a transformation T having all the properties of the preceding theorem.
But clearly this region cannot be chopped up into a finite number of
admissible regions, hence this theorem gives us additional power in
applying Green’s theorem.
Theorem 16.4 (Stokes). Let D be a region in the plane for which
Green’s theorem holds, and let Σ : D → R3 be a smooth surface. We
view ∂D as one or more closed curves in the plane, and we define
∂Σ = Σ ◦ ∂D. Thus ∂Σ consists of one or more closed curves in R3 .
If w : Ω → (R^3_1)^* is a 1-form of class C¹, with Ω ⊃ Σ(D), then
∫_{∂Σ} w = ∫_Σ dw.
Proof. Let I_D denote the identity transformation on D. Then we obtain
∫_{∂Σ} w = ∫_{Σ∘∂D} w = ∫_{∂D} Σ*w = ∫_D d(Σ*w) = ∫_D Σ*(dw)
= ∫_{I_D} Σ*(dw) = ∫_{Σ∘I_D} dw = ∫_Σ dw.
Theorem 16.5 (Gauss). Let R be the unit cube in R³, and regard ∂R as six smooth surfaces. Then if w : Ω → (R^3_2)^* is a C¹ 2-form, with Ω ⊃ R,
∫_{∂R} w = ∫_R dw.
Theorem 16.6 (Gauss revisited). Suppose Gauss’ theorem holds for a
region R. Let R ⊂ U ⊂ R3 , where U is open, and suppose T : U → R3 ,
T ∈ C 2 on U , T is 1-1 on U and JT (p) > 0 on U . Then Gauss’ theorem
holds for T (R).
The proofs of these theorems are quite similar to those given for
Green’s theorem, and we shall not repeat them.
Using the material contained in these notes the reader should be
able to formulate and prove analogues of the Gauss, Green, and Stokes
theorems for higher dimensions.
17. A glance at currents in Rn
To close this note, we very briefly introduce the notion of currents in R^n, as it would be a natural continuation of this treatise. The interested reader should consult, e.g., Lang [6] and the references therein.
As a historical sidenote, currents in the sense of geometric measure
theory were introduced by de Rham in 1955 (for use in the theory of
harmonic forms). Later, in the fundamental paper from 1960 Federer
and Fleming developed the class of rectifiable currents, and thereby
provided a solution to the Plateau problem for surfaces of arbitrary
dimension and codimension in Euclidean spaces. Roughly speaking,
Plateau's problem is as follows: given an (m − 1)-dimensional boundary Γ, find an m-dimensional surface S such that ∂S = Γ and S has minimal m-dimensional area. The theory of currents then developed into a powerful tool in the calculus of variations. Federer's
monograph [4] gives a comprehensive account of the state of the subject prior to 1970. Since then, the theory has been extended in various
directions and has found numerous applications in geometric analysis
and Riemannian geometry. A breakthrough was achieved by Ambrosio
and Kirchheim [2] in 2000. In [2] the authors extended the theory of currents to the setting of metric spaces. Their approach employs (m + 1)-tuples of
real-valued Lipschitz functions in place of m-forms and provides new
insight to the theory even in Euclidean spaces. For a nice exposition
on the theory of currents in metric spaces, we refer to [6].
An m-dimensional current in R^n is a continuous linear mapping
T : w ↦ T(w) ∈ R¹,
where w ranges over m-forms. The support supp(T) of a current T is defined to be the smallest closed set C ⊂ R^n with the property that T(w) = 0 for all m-forms w with supp(w) ∩ C = ∅. The boundary of an m-current T is the (m − 1)-current ∂T defined by
∂T(w) := T(dw),
where w is an (m − 1)-form and dw is the exterior derivative of w (i.e. an m-form). Clearly, ∂ ∘ ∂ = 0 since d(dw) = 0, and supp(∂T) ⊂ supp(T).
Example 17.1.
(1) Measures are 0-currents; they act on 0-forms, i.e. functions.
(2) T(w1 dx1 + w2 dx2) = ∫_0^1 ∫_0^1 w1(x, y) dxdy is a 1-current in R². Then
∂T(f) = T(df) = T( ∂f/∂x1 dx1 + ∂f/∂x2 dx2 )
= ∫_0^1 ∫_0^1 ∂f/∂x1 dx1 dx2 = ∫_0^1 (f(1, x2) − f(0, x2)) dx2.
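Example 17.1 (2) can be spot-checked numerically. The test function f below is a sample choice of this illustration, not from the text; both sides of the boundary formula are approximated by midpoint sums.

```python
# Numerical check of Example 17.1 (2): for the 1-current
# T(w1 dx1 + w2 dx2) = double integral of w1 over the unit square,
# ∂T(f) = T(df) should equal the integral of f(1, x2) - f(0, x2) in x2.

def f(x1, x2):
    return x1 * x1 * x2 + x2              # sample test function

def df_dx1(x1, x2, h=1e-5):
    # central finite difference for the dx1-coefficient of df
    return (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)

N = 400
pts = [(i + 0.5) / N for i in range(N)]

# lhs = T(df): double midpoint sum of df/dx1 over the unit square
lhs = sum(df_dx1(x1, x2) for x1 in pts for x2 in pts) / (N * N)

# rhs = boundary formula
rhs = sum(f(1.0, x2) - f(0.0, x2) for x2 in pts) / N

assert abs(lhs - rhs) < 1e-3
```

For this f both sides equal 1/2, so the two sums agree to quadrature accuracy.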
Suppose Σ is a smooth oriented m-dimensional submanifold of R^n with boundary, and Σ is a closed subset of R^n. Let the orientation of Σ be given as a continuous function τ : Σ → R^n_m such that for every x ∈ Σ, τ(x) is a simple m-vector which represents the tangent space T_xΣ, and |τ(x)| = 1. Then
T_Σ(w) := ∫_Σ w = ∫_Σ (w(x), τ(x)) dH^m(x),
where H^m denotes the m-dimensional Hausdorff measure. T_Σ is an m-dimensional current attached to Σ. Moreover, suppose that ∂Σ is equipped with the induced orientation τ′ : ∂Σ → R^n_{m−1}, i.e., τ = η ∧ τ′ for the exterior unit normal η; then formally
∂T_Σ(w) = T_Σ(dw) = ∫_Σ (dw(x), τ(x)) dH^m(x)
= ∫_{∂Σ} (w(x), τ′(x)) dH^{m−1}(x) = T_{∂Σ}(w)
for all (m − 1)-forms w, where the Stokes theorem 16.4 was used.
References
[1] (Or suchlike) R. A. Adams: Calculus – A Complete Course, Pearson Addison
Wesley.
[2] L. Ambrosio and B. Kirchheim: Currents in metric spaces, Acta Math. 185
(2000), 1–80.
[3] (Or suchlike) R. Creighton Buck: Advanced Calculus, McGraw-Hill Book Company, Inc., New York-Toronto-London, 1956.
[4] H. Federer: Geometric Measure Theory, Die Grundlehren der mathematischen
Wissenschaften, Band 153, New York 1969.
[5] S. G. Krantz and H. R. Parks: Geometric Integration Theory, Cornerstones,
Birkhäuser Boston, Inc., Boston, MA, 2008.
[6] U. Lang: Local currents in metric spaces, http://www.math.ethz.ch/~lang/.
[7] H. Whitney: Geometric Integration Theory, Princeton University Press,
Princeton, N. J., 1957.
54
MAROLA AND ZIEMER
(N.M.) Department of Mathematics and Statistics, P.O. Box 68, FI-00014 University of Helsinki, Finland
E-mail address: [email protected]
(W.P. Z.) Mathematics Department, Indiana University, Bloomington, Indiana 47405, USA
E-mail address: [email protected]