
GEOMETRIC VALUATIONS
CORDELIA E. CSAR, RYAN K. JOHNSON, AND RONALD Z. LAMBERTY
Work begun on June 12, 2006. Ended on July 28, 2006.
Introduction
This paper is the result of a National Science Foundation-funded Research Experience for
Undergraduates (REU) at the University of Notre Dame during the summer of 2006. The
REU was directed by Professor Frank Connolly, and the research project was supervised by
Professor Liviu Nicolaescu.
The topic of our independent research project for this REU was Geometric Probability.
Consequently, the first half of this paper is a study of the book Introduction to Geometric
Probability by Daniel Klain and Gian-Carlo Rota. While closely following the text, we
have attempted to clarify and streamline the presentation and ideas contained therein. In
particular, we highlight and emphasize the key role played by the Radon transform in the
classification of valuations. In the second part we take a closer look at the special case of
valuations on polyhedra.
Our primary focus in this project is a type of function called a “valuation”. A valuation associates a number to each “reasonable” subset of Rn so that the inclusion-exclusion principle
is satisfied.
Examples of such functions include the Euclidean volume and the Euler characteristic.
Since the objects we are most interested in lie in a Euclidean space, and moreover we are
interested in properties which are independent of the location of the objects in space, we are
motivated to study “invariant” valuations on certain subsets of Rn . The goal of the first half
of this paper, therefore, is to characterize all such valuations for certain natural collections
of subsets in Rn .
This undertaking turned out to be quite complex. We must first spend time introducing
the abstract machinery of valuations (chapter 1), and then applying this machinery to the
simpler case of pixelations (chapter 2). Chapter 3 then sets up the language of polyconvex sets, and explains how to use the Radon transform to generate many new examples of
valuations. These new valuations have probabilistic interpretations. In chapter 4 we finally
nail down the characterization of invariant valuations on polyconvex sets. Namely, we show that
all valuations are obtainable by the Radon transform technique from a unique valuation, the
Euler characteristic. We celebrate this achievement in chapter 5 by exploring applications of
the theory.
With the valuations having been completely characterized, we turn our attention toward
special polyconvex sets: polyhedra, that is, finite unions of convex polyhedra. These polyhedra can be triangulated, and in chapter 6 we investigate the combinatorial features of a
triangulation, or simplicial complex.
In Chapter 7 we prove a global version of the inclusion-exclusion principle for simplicial
complexes known as the Möbius inversion formula. Armed with this result, we then explain
how to compute the valuations of a polyhedron using data coming from a triangulation.
In Chapter 8 we use the technique of integration with respect to the Euler characteristic
to produce combinatorial versions of Morse theory and of the Gauss-Bonnet formula. In the end,
we arrive at an explicit formula expressing the Euler characteristic of a polyhedron in terms of
measurements taken at vertices. These measurements can be interpreted either as curvatures,
or as certain averages of Morse indices.
The preceding list says little about why one would be interested in these topics in the first
place. Consider a coffee cup and a donut. Now, a topologist would tell you that these
two items are “more or less the same.” But that’s ridiculous! How many times have you eaten
a coffee cup? Seriously, even aside from the fact that they are made of different materials,
you have to admit that there is a geometric difference between a coffee cup and a donut. But
what? More generally, consider the shapes around you. What is it that distinguishes them
from each other, geometrically?
We know certain functions, such as the Euler characteristic and the volume, tell part of the
story. These functions share a number of extremely useful properties such as the inclusion-exclusion principle, invariance, and “continuity”. This motivates us to consider all such
functions. But what are all such functions? In order to apply these tools to study polyconvex
sets, we must first understand the tools at our disposal. Our attempts described in the previous
paragraphs result in a full characterization of valuations on polyconvex sets, and even lead
us to a number of useful and interesting formulae for computing these numbers.
We hope that our efforts to these ends adequately communicate this subject’s richness,
which has been revealed to us by our research advisor Liviu Nicolaescu. We would like to
thank him for his enthusiasm in working with us. We would also like to thank Professor
Connolly for his dedication in directing the Notre Dame REU and the National Science
Foundation for supporting undergraduate research.
CONTENTS
Introduction
1. Valuations on lattices of sets
§1.1. Valuations
§1.2. Extending Valuations
2. Valuations on pixelations
§2.1. Pixelations
§2.2. Extending Valuations from Par to Pix
§2.3. Continuous Invariant Valuations on Pix
§2.4. Classifying the Continuous Invariant Valuations on Pix
3. Valuations on polyconvex sets
§3.1. Convex and Polyconvex Sets
§3.2. Groemer’s Extension Theorem
§3.3. The Euler Characteristic
§3.4. Linear Grassmannians
§3.5. Affine Grassmannians
§3.6. The Radon Transform
§3.7. The Cauchy Projection Formula
4. The Characterization Theorem
§4.1. The Characterization of Simple Valuations
§4.2. The Volume Theorem
§4.3. Intrinsic Valuations
5. Applications
§5.1. The Tube Formula
§5.2. Universal Normalization and Crofton’s Formulæ
§5.3. The Mean Projection Formula Revisited
§5.4. Product Formulæ
§5.5. Computing Intrinsic Volumes
6. Simplicial complexes and polytopes
§6.1. Combinatorial Simplicial Complexes
§6.2. The Nerve of a Family of Sets
§6.3. Geometric Realizations of Combinatorial Simplicial Complexes
7. The Möbius inversion formula
§7.1. A Linear Algebra Interlude
§7.2. The Möbius Function of a Simplicial Complex
§7.3. The Local Euler Characteristic
8. Morse theory on polytopes in R3
§8.1. Linear Morse Functions on Polytopes
§8.2. The Morse Index
§8.3. Combinatorial Curvature
References
1. Valuations on lattices of sets
§1.1. Valuations.
Definition 1.1. (a) For every set S we denote by P (S) the collection of subsets of S and by
Map(S, Z) the set of functions f : S → Z. The indicator or characteristic function of a
subset A ⊂ S is the function IA ∈ Map(S, Z) defined by
IA (s) = 1 if s ∈ A, and IA (s) = 0 if s ∉ A.
(b) If S is a set then an S-lattice (or a lattice of sets) is a collection L ⊂ P (S) such that
∅ ∈ L, A, B ∈ L =⇒ A ∩ B, A ∪ B ∈ L.
(c) If L is an S-lattice then a subset G ⊂ L is called generating if
∅ ∈ G and A, B ∈ G =⇒ A ∩ B ∈ G,
and every A ∈ L is a finite union of sets in G.
□
Definition 1.2. Let G be an Abelian group and S a set.
(a) A G-valuation on an S-lattice L is a function µ : L → G satisfying the following
conditions:
(a1) µ(∅) = 0
(a2) µ(A ∪ B) = µ(A) + µ(B) − µ(A ∩ B). (Inclusion-Exclusion)
(b) If G is a generating set of the S-lattice L, then a G-valuation on G is a function µ : G → G
satisfying the following conditions:
(b1) µ(∅) = 0
(b2) µ(A ∪ B) = µ(A) + µ(B) − µ(A ∩ B), for every A, B ∈ G such that A ∪ B ∈ G. □
The inclusion-exclusion identity in Definition 1.2 implies the generalized inclusion-exclusion
identity
µ(A1 ∪ A2 ∪ · · · ∪ An ) = Σ_{i} µ(Ai ) − Σ_{i<j} µ(Ai ∩ Aj ) + Σ_{i<j<k} µ(Ai ∩ Aj ∩ Ak ) + · · ·   (1.1)
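Identity (1.1) can be checked numerically for the cardinality valuation (which appears as Example 1.3(b) below). The following Python sketch, with function names of our own choosing, evaluates both sides of (1.1) for a small family of finite sets.

```python
from itertools import combinations

def mu(A):
    # the cardinality valuation: mu(A) = #A
    return len(A)

def inclusion_exclusion(sets):
    # right-hand side of (1.1): alternating sum of mu over the
    # intersections of all nonempty subfamilies
    total = 0
    for r in range(1, len(sets) + 1):
        for idx in combinations(range(len(sets)), r):
            inter = set.intersection(*(sets[i] for i in idx))
            total += (-1) ** (r + 1) * mu(inter)
    return total

A1, A2, A3 = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}
lhs = mu(A1 | A2 | A3)
rhs = inclusion_exclusion([A1, A2, A3])
```

Here lhs and rhs both equal 7, as (1.1) predicts: 11 from the single sets, minus 5 from the pairwise intersections, plus 1 from the triple intersection.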
Example 1.3. (a) (The universal valuation) Suppose S is a set. Observe that Map(S, Z) is a
commutative ring with 1. The map
I• : P (S) → Map(S, Z)
given by
P (S) ∋ A ↦ IA ∈ Map(S, Z)
is a valuation. This follows from the fact that the indicator functions satisfy the following
identities.
IA∩B = IA IB   (1.2a)
IA∪B = IA + IB − IA∩B = IA + IB − IA IB = 1 − (1 − IA )(1 − IB ).   (1.2b)
(b) Suppose S is a finite set. Then the cardinality map
# : P (S) → Z, A ↦ #A := the cardinality of A
is a valuation.
(c) Suppose S = R2 and L consists of the measurable bounded subsets of the Euclidean
plane. The map which associates to each A ∈ L its Euclidean area, area(A), is a real-valued
valuation. Note that the lattice point count map
λ : L → Z, A ↦ #(A ∩ Z2 )
is a Z-valuation.
(d) Let S and L be as above. Then the Euler characteristic defines a valuation
χ : L → Z, A ↦ χ(A). □
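The identities (1.2a) and (1.2b) behind the universal valuation are easy to verify pointwise on a finite ambient set; here is a minimal Python sketch (the names are ours, not the paper's) that tabulates the indicator functions and checks both identities.

```python
S = range(8)                     # a finite ambient set

def indicator(A):
    # the indicator function I_A, tabulated on S
    return tuple(1 if s in A else 0 for s in S)

A, B = {0, 1, 2, 3}, {2, 3, 4, 5}
IA, IB = indicator(A), indicator(B)

# (1.2a): the indicator of the intersection is the pointwise product
I_cap = tuple(a * b for a, b in zip(IA, IB))
# (1.2b): the indicator of the union via inclusion-exclusion
I_cup = tuple(a + b - a * b for a, b in zip(IA, IB))
```

Comparing I_cap with indicator(A & B) and I_cup with indicator(A | B) confirms the identities on this example.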
If (G, +) is an Abelian group and R is a commutative ring with 1 then we denote by
HomZ (G, R) the set of group morphisms G → R, i.e. the set of maps
ϕ:G→R
such that
ϕ(g1 + g2 ) = ϕ(g1 ) + ϕ(g2 ), ∀g1 , g2 ∈ G.
We will refer to the maps in HomZ (G, R) as Z-linear maps from G to R.
Suppose L is an S-lattice. We denote by S(L) the (additive) subgroup of Map(S, Z)
generated by the functions IA , A ∈ L. We will refer to the functions in S(L) as L-simple
functions, or simple functions if the lattice L is clear from the context.
Definition 1.4. Suppose L is an S-lattice, and G is an Abelian group. A G-valued integral
on L is a Z-linear map
∫ : S(L) → G, S(L) ∋ f ↦ ∫ f ∈ G. □
Observe that any G-valued integral on an S-lattice L defines a valuation µ : L → G by
setting
µ(A) := ∫ IA .
The inclusion-exclusion formula for µ follows from (1.2a) and (1.2b). We say that µ is the
valuation induced by the integral. Conversely, when a valuation µ arises in this way from an
integral, we say that µ induces an integral.
In general, a generating set of a lattice has a much simpler structure and it is possible to
construct many valuations on it. A natural question arises: Is it possible to extend to the
entire lattice a valuation defined on a generating set? The next result describes necessary and
sufficient conditions under which this happens.
§1.2. Extending Valuations.
Theorem 1.5 (Groemer’s Integral Theorem). Let G be a generating set for a lattice L and
let µ : G → H be a valuation on G, where H is an Abelian group. The following statements
are equivalent.
(1) µ extends uniquely to a valuation on L.
(2) µ satisfies the inclusion-exclusion identities
µ(B1 ∪ B2 ∪ · · · ∪ Bn ) = Σ_{i} µ(Bi ) − Σ_{i<j} µ(Bi ∩ Bj ) + · · ·
for every n ≥ 2 and any Bi ∈ G such that B1 ∪ B2 ∪ · · · ∪ Bn ∈ G.
(3) µ induces an integral on the space of simple functions S(L).
Proof. We follow closely the presentation in [KR, Chap.2].
• (1) =⇒ (2). Note that the second statement is not trivial because B1 ∪ · · · ∪ Bn−1 is
not necessarily in G. Suppose the valuation µ extends uniquely to a valuation on L. Then
µ satisfies the inclusion-exclusion identity µ(A ∪ B) = µ(A) + µ(B) − µ(A ∩ B) for all
A, B ∈ L. Since B1 ∪ · · · ∪ Bn−1 ∈ L even if it is not in G, we can apply the inclusion-exclusion identity repeatedly to obtain the result.
• (2) =⇒ (3). We wish to construct a linear map ∫ : S(L) → H. To do this, we first note
that by (1.2b) any function f in S(L) can be written as
f = Σ_{i=1}^{m} αi IKi ,
where Ki ∈ G and αi ∈ Z. We thus define an integral as follows:
∫ ( Σ_{i=1}^{m} αi IKi ) dµ := Σ_{i=1}^{m} αi µ(Ki ).
This map might not be well-defined since f could be represented in different ways as a linear
combination of indicator functions of generating sets. We thus need to show that the above
map is independent of such a representation. We argue by contradiction and we assume that
f has two distinct representations
f = Σ_{i} γi IAi = Σ_{i} βi IBi ,
yet
Σ_{i} γi µ(Ai ) ≠ Σ_{i} βi µ(Bi ).
Thus, subtracting these equations and renaming the terms appropriately, we are left with the
situation
Σ_{i=1}^{m} αi IKi = 0 and Σ_{i=1}^{m} αi µ(Ki ) ≠ 0.   (1.3)
Now we label the intersections
L1 = K1 , . . . , Lm = Km , Lm+1 = K1 ∩ K2 , Lm+2 = K1 ∩ K3 , . . .
such that Li ⊂ Lj ⇒ j < i. This can be done because we have a finite number of sets. We
note that all the Li ’s are in G, since the Ki ’s and their intersections are in G. We then rewrite
(1.3) in terms of the Li ’s as
Σ_{i=1}^{p} ai ILi = 0 and Σ_{i=1}^{p} ai µ(Li ) ≠ 0.   (1.4)
Now take q maximal such that
Σ_{i=q}^{p} ai ILi = 0 and Σ_{i=q}^{p} ai µ(Li ) ≠ 0.   (1.5)
Note that 1 ≤ q < p, and that aq ≠ 0, since otherwise q would not be maximal.
Let us now observe that
Lq ⊂ ∪_{i=q+1}^{p} Li .
Indeed, if x ∈ Lq \ ∪_{j=q+1}^{p} Lj , then
ILi (x) = 0 for all i ≠ q, and aq = Σ_{i=q}^{p} ai ILi (x) = 0.
This is impossible since aq ≠ 0. Hence
Lq = Lq ∩ ( ∪_{i=q+1}^{p} Li ) = ∪_{i=q+1}^{p} (Lq ∩ Li ).
Let us write Lq ∩ Li = Lji . Then, since i > q and by construction Li ⊂ Lj =⇒ j < i, we
have that ji > q. Thus:
Lq = ∪_{i=q+1}^{p} (Lq ∩ Li ) = ∪_{i=q+1}^{p} Lji .
Then we have
0 ≠ Σ_{i=q}^{p} ai µ(Li ) = aq µ(Lq ) + Σ_{i=q+1}^{p} ai µ(Li )
= aq µ( ∪_{i=q+1}^{p} Lji ) + Σ_{i=q+1}^{p} ai µ(Li ) = Σ_{i=q+1}^{p} bi µ(Li ),
where the last equality is attained by applying the assumed inclusion-exclusion principle to
the union and regrouping the terms.
We now repeat exactly the same process with the expression involving the indicator functions. Then,
Σ_{i=q}^{p} ai ILi = aq I_{∪_{i=q+1}^{p} (Lq ∩ Li )} + Σ_{i=q+1}^{p} ai ILi = Σ_{i=q+1}^{p} bi ILi .
Thus,
Σ_{i=q+1}^{p} bi ILi = 0.
However, this contradicts the maximality of q. Hence, the integral map is well defined, and
we are done.
• (3) =⇒ (1). Suppose µ defines an integral on the space of L-simple functions. Then for
A ∈ G, ∫ IA dµ = µ(A). This motivates us to define an extension µ̃ of µ to L by
µ̃(A) := ∫ IA dµ, A ∈ L.
This definition is certainly unambiguous and its restriction to G is just µ, so we need only
check that it is a valuation. Let A, B ∈ L. Then,
µ̃(A ∪ B) = ∫ IA∪B dµ = ∫ (IA + IB − IA∩B ) dµ
= ∫ IA dµ + ∫ IB dµ − ∫ IA∩B dµ = µ̃(A) + µ̃(B) − µ̃(A ∩ B).
Thus, µ̃ is an extension of µ to L. Moreover, it is unique.
Suppose ν is another extension of µ. Then, given any A ∈ L we can write A = K1 ∪ . . . ∪
Kr for Ki ∈ G. Since µ̃ and ν are both valuations, both satisfy the generalized inclusion-exclusion principle. Furthermore, since both are extensions of µ, both agree on G, so
µ̃(A) = µ̃(K1 ∪ . . . ∪ Kr )
= Σ_{i} µ(Ki ) − Σ_{i<j} µ(Ki ∩ Kj ) + Σ_{i<j<k} µ(Ki ∩ Kj ∩ Kk ) + · · ·
= ν(K1 ∪ . . . ∪ Kr ) = ν(A).
Hence, the extension is unique. □
2. Valuations on pixelations
§2.1. Pixelations. We now study valuations on a small class of easy-to-understand subsets
of Rn . We will then use this knowledge to study valuations on much more general subsets of
Rn . One of our main aims is to classify all the “nice” valuations on such subsets. It will turn
out that the ideas and results of this section are analogous to results discussed later.
Definition 2.1. (a) An affine k-plane in Rn is the translate of a k-dimensional vector subspace
of Rn . A hyperplane in Rn is an affine (n − 1)-plane.
(b) An affine transformation of Rn is a bijection T : Rn → Rn such that
T (λ~u + (1 − λ)~v) = λT ~u + (1 − λ)T~v, ∀λ ∈ R, ~u, ~v ∈ Rn .
The set Aff(Rn ) of affine transformations of Rn is a group with respect to the composition
of maps.
□
Definition 2.2. An (orthogonal) parallelotope in Rn is a compact set P of the form
P = [a1 , b1 ] × · · · × [an , bn ], ai ≤ bi , i = 1, · · · , n. □
We will often refer to an orthogonal parallelotope as a parallelotope or even just a box.
Remark 2.3. It is entirely possible for any number of the intervals defining an orthogonal
parallelotope to have length zero. Note also that the intersection of two parallelotopes is
again a parallelotope. □
We denote by Par(n) the collection of parallelotopes in Rn and we define a pixelation to
be a finite union of parallelotopes. Observe that the collection Pix(n) of pixelations in Rn is
a lattice, i.e. it is stable under finite intersections and unions.
Definition 2.4. (a) A pixelation P ∈ Pix(n) is said to have dimension n (or full dimension)
if P is not contained in a finite union of hyperplanes.
(b) A pixelation P ∈ Pix(n) is said to have dimension k (k ≤ n) if P is contained in a finite
union of k-planes but not in a finite union of (k − 1)-planes.
□
The top part of Figure 1 depicts possible boxes in R2 , while the bottom part depicts a
possible pixelation in R2 .
§2.2. Extending Valuations from Par to Pix. By definition, Par(n) is a generating set of
Pix(n). Consequently, we would like to know if we can extend valuations from Par(n) to
Pix(n). The following theorem shows that we can do so whenever the valuations map into a
commutative ring with 1.
Theorem 2.5. Let R be a commutative ring with 1. Then any valuation µ : Par(n) → R
extends uniquely to a valuation on Pix(n).
Proof. Due to Groemer’s Integral Theorem, all we need to show is that µ gives rise to an
integral on the vector space of functions generated by the indicator functions of boxes. Thus
FIGURE 1. Planar pixelations.
it suffices to show that
Σ_{i=1}^{m} αi IPi = 0 =⇒ Σ_{i=1}^{m} αi µ(Pi ) = 0,
where the Pi ’s are boxes.
We proceed by induction on the dimension n. If the dimension is zero, the space only has
one point, and the above claim is true.
Now suppose that the theorem holds in dimension n − 1. For the sake of contradiction,
we suppose the theorem does not hold for dimension n. That is, suppose there exist distinct
boxes P1 , . . . , Pm such that
Σ_{i=1}^{m} αi IPi = 0 and Σ_{i=1}^{m} αi µ(Pi ) = r ≠ 0.   (2.1)
Let k be the number of the boxes Pi of full dimension. Among all relations of type (2.1),
we choose one for which k is minimal. We distinguish three cases.
Case 1. k = 0. Since none of the boxes is of full dimension, each is contained in a hyperplane. Of all the relations of type (2.1) we choose one so that the boxes involved are
contained in the smallest possible number ℓ of hyperplanes.
Assume first that ℓ = 1. Then all the Pi are contained in a single hyperplane. By the
induction hypothesis, the integral is then well defined, so we have a contradiction.
Thus, we can assume that ℓ > 1. So, there exist ℓ hyperplanes orthogonal to the coordinate
axes, H1 , . . . , Hℓ , such that each Pi is contained in one of them. Without loss of generality,
we may renumber the indices so that P1 ⊂ H1 .
The restriction to H1 of the first sum in (2.1) is zero, so that
Σ_{i=1}^{m} αi IPi ∩H1 = 0.
But Pi ∩ H1 is a subset of the hyperplane H1 and we can apply the induction hypothesis
to conclude that
Σ_{i=1}^{m} αi µ(Pi ∩ H1 ) = 0.
Subtracting the above two equations from (2.1) we see that
Σ_{i=1}^{m} αi (IPi − IPi ∩H1 ) = 0 and Σ_{i=1}^{m} αi (µ(Pi ) − µ(Pi ∩ H1 )) = r ≠ 0.
The above sums have the same form as (2.1), but the boxes Pj ⊂ H1 disappear since
Pj = Pj ∩ H1 . Thus we obtain new equalities of the type (2.1) in which the boxes involved are
contained in fewer hyperplanes, H2 , · · · , Hℓ , contradicting the minimality of ℓ.
Case 2. k = 1. We may assume the top dimensional box is P1 . Then P2 ∪ · · · ∪ Pm is
contained in a finite union of hyperplanes H1 , · · · , Hν perpendicular to the coordinate axes.
Observe that
(H1 ∪ · · · ∪ Hν ) ∩ P1 ⊊ P1 .
Indeed,
vol((H1 ∪ · · · ∪ Hν ) ∩ P1 ) ≤ Σ_{j=1}^{ν} vol(Hj ∩ P1 ) = 0 < vol(P1 ),
so that
∃x0 ∈ P1 \ (H1 ∪ · · · ∪ Hν ).
Using the identity Σ_{i} αi IPi = 0 at the point x0 found above we deduce α1 = 0, which
contradicts the minimality of k.
Case 3. k > 1. We can assume that the top dimensional boxes are P1 , · · · , Pk .
Choose a hyperplane H such that P1 ∩ H is a facet of P1 , i.e. a face of P1 of highest
dimension which is not all of P1 . H has two associated closed half-spaces H + and H − ;
H + is singled out by the requirement P1 ⊂ H + . Recall that
Σ_{i=1}^{m} αi IPi = 0.
Restricting to H + we deduce
Σ_{i=1}^{m} αi IPi ∩H + = 0.
Likewise,
Σ_{i=1}^{m} αi IPi ∩H = 0 and Σ_{i=1}^{m} αi IPi ∩H − = 0.   (2.2)
Note that Pi = (Pi ∩ H + ) ∪ (Pi ∩ H − ) and (Pi ∩ H + ) ∩ (Pi ∩ H − ) = Pi ∩ H. Then, since
µ is a valuation, it obeys the inclusion-exclusion rule so
Σ_{i=1}^{m} αi µ(Pi ) = Σ_{i=1}^{m} αi µ(Pi ∩ H + ) + Σ_{i=1}^{m} αi µ(Pi ∩ H − ) − Σ_{i=1}^{m} αi µ(Pi ∩ H ).   (2.3)
Since the sets Pi ∩ H are in a space of dimension n − 1, and
Σ_{i=1}^{m} αi IPi ∩H = 0,
we deduce from the induction assumption that
Σ_{i=1}^{m} αi µ(Pi ∩ H ) = 0.
On the other hand, P1 ∩ H − = P1 ∩ H, so P1 ∩ H − has dimension n − 1. We now have a
new collection of boxes Pi ∩ H − , i = 1, · · · , m, of which at most k − 1 are top dimensional
and which satisfy
Σ_{i} αi IPi ∩H − = 0.
The minimality of k now implies
Σ_{i=1}^{m} αi µ(Pi ∩ H − ) = 0.
Therefore, the equality (2.3) implies
Σ_{i=1}^{m} αi µ(Pi ) = Σ_{i=1}^{m} αi µ(Pi ∩ H + ) = r.   (2.4)
P1 has dimension n, so there exist 2n hyperplanes H1 , . . . , H2n such that
P1 = ∩_{i=1}^{2n} Hi+ .
Replacing Pi with Pi ∩ H1+ and iterating the above argument we get
Σ_{i=1}^{m} αi µ(Pi ∩ H1+ ∩ H2+ ∩ · · · ∩ H2n+ ) = Σ_{i=1}^{m} αi µ(Pi ∩ P1 ) = r   (2.5)
and
Σ_{i=1}^{m} αi IPi ∩P1 = 0.   (2.6)
We repeat this argument with the remaining top dimensional boxes P2 , . . . , Pk and if we set
P0 := P1 ∩ · · · ∩ Pk , A := α1 + · · · + αk
we conclude
Σ_{i=1}^{m} αi IPi ∩P0 = A IP0 + Σ_{i>k} αi IPi ∩P0 = 0,   (2.7a)
Σ_{i=1}^{m} αi µ(Pi ∩ P0 ) = A µ(P0 ) + Σ_{i>k} αi µ(Pi ∩ P0 ) = r.   (2.7b)
In the above sums, at most one of the boxes is top dimensional, which contradicts the minimality of k > 1. □
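Theorem 2.5 says that a valuation on boxes determines one on pixelations, and the proof of Groemer's Integral Theorem shows its value on a finite union of boxes is computed by inclusion-exclusion from its values on boxes and their intersections. The following Python sketch, with function names of our own choosing, does this for the area valuation on planar boxes.

```python
from itertools import combinations

def box_area(b):
    # a box b = (x0, x1, y0, y1) with x0 <= x1 and y0 <= y1;
    # degenerate boxes have area 0
    return (b[1] - b[0]) * (b[3] - b[2])

def box_intersection(b, c):
    # the intersection of two boxes is again a box, or empty
    x0, x1 = max(b[0], c[0]), min(b[1], c[1])
    y0, y1 = max(b[2], c[2]), min(b[3], c[3])
    return (x0, x1, y0, y1) if x0 <= x1 and y0 <= y1 else None

def pixelation_area(boxes):
    # value of the extended valuation on a union of boxes, computed
    # by inclusion-exclusion over all nonempty subfamilies
    total = 0.0
    for r in range(1, len(boxes) + 1):
        for idx in combinations(range(len(boxes)), r):
            inter = boxes[idx[0]]
            for i in idx[1:]:
                if inter is None:
                    break
                inter = box_intersection(inter, boxes[i])
            if inter is not None:
                total += (-1) ** (r + 1) * box_area(inter)
    return total

# two overlapping 2 x 2 squares: area 4 + 4 - 1 = 7
P = [(0, 2, 0, 2), (1, 3, 1, 3)]
```

The exponential number of subfamilies makes this impractical for large pixelations, but it illustrates exactly the mechanism of the extension.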
§2.3. Continuous Invariant Valuations on Pix.
Notation 2.6. We shall denote by Ten the subgroup of Aff(Rn ) generated by the translations and
the permutations of coordinates in Rn . □
Definition 2.7. A valuation µ on Pix(n) is called invariant if
µ(gP ) = µ(P ), ∀P ∈ Pix(n), g ∈ Ten ,
and is called translation invariant if the same condition holds for all translations g. □
We aim to find all the invariant valuations on Pix(n). To avoid unnecessary complications,
we will impose a further condition on the valuations.
The final condition we would like on our valuations on Pix(n) is that of continuity. Our
valuations are functions on Pix(n), which is a collection of compact subsets of Rn . So, in
order for continuity to make any sense, we would like some concept of open sets for this
collection of compact sets. A good way of achieving this goal is to make them into a metric
space by defining a reasonable notion of distance.
Definition 2.8. (a) Let A ⊂ Rn and x ∈ Rn . The distance from x to A is the
nonnegative real number d(x, A) defined by
d(x, A) = inf_{a∈A} d(x, a),
where d(x, a) is the Euclidean distance from x to a.
(b) Let K and E be subsets of Rn . Then the Hausdorff distance δ(K, E) is defined by
δ(K, E) = max( sup_{a∈K} d(a, E), sup_{b∈E} d(b, K) ).
(c) A sequence of compact sets Kn in Rn converges to a set K if δ(Kn , K) −→ 0 as n −→
∞. If this is the case, then we write Kn −→ K. □
Remark 2.9. If K and E are compact, then δ(K, E) = 0 if and only if K = E. That is, the
Hausdorff distance is positive definite on the set of compact sets in Rn .
□
Let Bn be the unit ball in Rn . For K ⊂ Rn and ǫ > 0, set
K + ǫBn := {x + ǫu | x ∈ K and u ∈ Bn }.
The following lemma (whose proof is clear) gives a hands-on understanding of how the
Hausdorff distance behaves.
Lemma 2.10. Let K and E be compact subsets of Rn . Then
δ(K, E) ≤ ǫ ⇐⇒ K ⊂ E + ǫBn and E ⊂ K + ǫBn .
□
The above result implies that the Hausdorff distance is actually a metric. The next result
summarizes this.
Proposition 2.11. The collection of compact subsets of Rn together with the Hausdorff distance forms a metric space.
□
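For finite point sets the infima and suprema in Definition 2.8 are minima and maxima, so the Hausdorff distance can be computed by a direct double loop. A minimal Python sketch (the function name is ours):

```python
def hausdorff(K, E):
    # Hausdorff distance between two finite subsets of R^n (Definition 2.8)
    def dist(x, A):
        # d(x, A) = min over a in A of the Euclidean distance from x to a
        return min(sum((xi - ai) ** 2 for xi, ai in zip(x, a)) ** 0.5
                   for a in A)
    return max(max(dist(a, E) for a in K), max(dist(b, K) for b in E))

K = [(0.0, 0.0), (1.0, 0.0)]
E = [(0.0, 0.0), (1.0, 1.0)]
d = hausdorff(K, E)
```

Here d = 1.0: every point of K is within distance 1 of E and vice versa, and the point (1, 1) of E realizes the maximum. The function is symmetric in its arguments and vanishes exactly when the two sets coincide, in line with Remark 2.9.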
16
Csar-Johnson-Lamberty
Definition 2.12. Let µ : Pix(n) → R be a valuation on Pix(n). Then µ is called (box)
continuous if µ is continuous on boxes, that is, for any sequence of boxes Pi converging in the
Hausdorff metric to a box P we have
µ(Pi ) −→ µ(P ). □
We want to classify the continuous invariant valuations on Pix(n). We start by considering
the problem in R1 . An element of Pix(1) is a finite union of closed intervals. For A ∈ Pix(1),
set
µ10 (A) = the number of connected components of A,
µ11 (A) = the length of A.
Both are continuous invariant valuations on Pix(1): they are invariant under Ten , which is,
in this case, just the group of translations, and both are clearly continuous.
Proposition 2.13. Every continuous invariant valuation µ : Pix(1) → R is a linear combination of µ10 and µ11 .
Proof. Let c = µ(A), where A is a singleton set {x}, x ∈ R. Now let µ′ = µ − cµ10 . µ′
vanishes on points by construction. Now, define a continuous function f : [0, ∞)−→R by
f (x) = µ′ ([0, x]). µ′ is invariant because µ and µ10 are invariant. Then, if A is a closed
interval of length x, µ′ (A) = f (x) since we can simply translate A to the origin.
Now observe that
f (x + y) = µ′ ([0, x + y]) = µ′ ([0, x] ∪ [x, x + y]) = µ′ ([0, x]) + µ′ ([x, x + y]) − µ′ ({x})
= µ′ ([0, x]) + µ′ ([x, x + y]) = f (x) + f (y).
Since f is continuous and additive, there exists a constant r such that f (x) = rx
for all x ≥ 0. Therefore, µ′ = rµ11 and our assertion follows from the equality
µ = µ′ + cµ10 = rµ11 + cµ10 . □
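The two basic valuations on Pix(1) are directly computable from a list of closed intervals: merge overlapping intervals into connected components, then read off the count and the total length. A Python sketch (the function names are ours):

```python
def components(intervals):
    # merge a finite union of closed intervals [a, b] into its
    # connected components
    comps = []
    for a, b in sorted(intervals):
        if comps and a <= comps[-1][1]:
            comps[-1][1] = max(comps[-1][1], b)
        else:
            comps.append([a, b])
    return comps

def mu0(intervals):
    # the valuation counting connected components
    return len(components(intervals))

def mu1(intervals):
    # the valuation measuring total length
    return sum(b - a for a, b in components(intervals))

# [0,1] and [0.5,2] merge into one component; the point {3} is a
# component of length 0
A = [(0.0, 1.0), (0.5, 2.0), (3.0, 3.0)]
```

For this A the sketch gives two components of total length 2, matching the picture of A as [0, 2] ∪ {3}.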
We now move on to Rn . Let µn (P ) be the volume of a pixelation P of dimension n.
Definition 2.14. The k-th elementary symmetric polynomial in the variables x1 , . . . , xn is the
polynomial ek (x1 , . . . , xn ) such that e0 = 1 and
ek (x1 , . . . , xn ) = Σ_{1≤i1 <i2 <···<ik ≤n} xi1 · · · xik , 1 ≤ k ≤ n. □
Observe that we have the identity
Π_{j=1}^{n} (1 + txj ) = Σ_{k=0}^{n} ek (x1 , · · · , xn )t^k .   (2.8)
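Identity (2.8) also gives an efficient way to compute all the ek at once: multiply out the product on the left as a polynomial in t and read off the coefficients. A Python sketch (the function name is ours):

```python
def elementary_symmetric(xs):
    # returns [e_0, e_1, ..., e_n], computed as the coefficients of
    # prod_j (1 + t * x_j), i.e. via identity (2.8)
    coeffs = [1]                      # the constant polynomial 1
    for x in xs:
        new = coeffs + [0]            # multiply by (1 + t * x)
        for k in range(len(coeffs)):
            new[k + 1] += coeffs[k] * x
        coeffs = new
    return coeffs

e = elementary_symmetric([1, 2, 3])
```

For the variables 1, 2, 3 this yields [1, 6, 11, 6], the coefficients of (1 + t)(1 + 2t)(1 + 3t), and it uses O(n^2) multiplications rather than summing over all subsets.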
Theorem 2.15. For 0 ≤ k ≤ n, there exists a unique continuous valuation µk on Pix(n)
invariant under Ten , such that µk (P ) = ek (x1 , . . . , xn ) whenever P ∈ Par(n) with sides of
length x1 , . . . , xn .
Proof. Let µ10 , µ11 : Pix(1) → R be the valuations described above. We set µ1t = µ10 + tµ11 ,
where t is a variable. Then,
µnt = µ1t × µ1t × · · · × µ1t : Par(n) → R[t]
is an invariant valuation on parallelotopes with values in the ring of polynomials with real
coefficients. By Groemer's Extension Theorem (2.5), µnt extends to a valuation on Pix(n)
which must also be invariant. Using (2.8) we deduce
µnt ([0, x1 ] × · · · × [0, xn ]) = Π_{j=1}^{n} (1 + txj ) = Σ_{k=0}^{n} ek (x1 , · · · , xn )t^k .
For any parallelotope P we can write
µnt (P ) = Σ_{k=0}^{n} µnk (P )t^k .
The coefficients µnk (P ) define continuous invariant valuations on Par(n) which extend to
continuous invariant valuations on Pix(n) such that
µnt (S) = Σ_{k=0}^{n} µnk (S)t^k , ∀S ∈ Pix(n), and µnk ([0, x1 ] × · · · × [0, xn ]) = ek (x1 , · · · , xn ). □
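Theorem 2.15 identifies µk on a box with the k-th elementary symmetric function of its side lengths, which makes the intrinsic volumes of a box directly computable. In the sketch below (mu_box is our name) the values for a 2 × 3 × 4 box illustrate the geometric content: e3 is the volume, e2 is half the surface area, e1 is a quarter of the total edge length, and e0 = 1.

```python
from itertools import combinations
from math import prod

def mu_box(k, sides):
    # mu_k of a box equals e_k of its side lengths (Theorem 2.15)
    return sum(prod(sides[i] for i in idx)
               for idx in combinations(range(len(sides)), k))

sides = (2.0, 3.0, 4.0)
volume = mu_box(3, sides)         # e_3 = 2*3*4 = 24, the volume
half_surface = mu_box(2, sides)   # e_2 = 6+8+12 = 26, half the surface area
quarter_edges = mu_box(1, sides)  # e_1 = 2+3+4 = 9, a quarter of the edge length
euler = mu_box(0, sides)          # e_0 = 1, the Euler characteristic
```

(math.prod requires Python 3.8 or later; the empty product for k = 0 correctly yields 1.)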
Theorem 2.16. The valuations µi on Pix(n) are normalized independently of the dimension
n, i.e. µmi (P ) = µni (P ) for all P ∈ Pix(n) and all m ≥ n, where µmi denotes µi computed
in Rm .
Proof. This follows from the preceding theorem and the definition of an elementary symmetric function. If P ∈ Par(n) and we consider the same P ∈ Par(k), where k > n, P
remains a cartesian product of the same intervals, except there are some additional intervals
of length 0, which do not affect µi . □
Since the valuation µk (P ) is independent of the ambient space, µk is called the k-th intrinsic volume. µ0 is the Euler characteristic: µ0 (Q) = 1 for all non-empty boxes Q.
Theorem 2.17. Let H1 and H2 be complementary orthogonal subspaces of Rn spanned by
subsets of the given coordinate system with dimensions h and n − h, respectively. Let Pi be
a parallelotope in Hi and let P = P1 × P2 . Then
µi (P1 × P2 ) = Σ_{r+s=i} µr (P1 )µs (P2 ).   (2.9)
The identity remains valid when P1 and P2 are pixelations, since both sides of the above
equality define valuations on Par(n) which extend uniquely to valuations on Pix(n).
Proof. Suppose P1 has sides of length x1 , . . . , xh and P2 has sides of length y1 , . . . , yn−h .
Then,
Σ_{r+s=i} µr (P1 )µs (P2 ) = Σ_{r+s=i} ( Σ_{1≤j1 <···<jr ≤h} xj1 · · · xjr ) ( Σ_{1≤k1 <···<ks ≤n−h} yk1 · · · yks ).
Let jr+1 = k1 + h, . . . , ji = jr+s = ks + h and let xh+1 = y1 , . . . , xn = yn−h , simply
relabelling. Then,
Σ_{r+s=i} µr (P1 )µs (P2 ) = Σ_{r+s=i} ( Σ_{1≤j1 <···<jr ≤h} xj1 · · · xjr ) ( Σ_{h+1≤jr+1 <···<ji ≤n} xjr+1 · · · xji )
= Σ_{1≤j1 <···<jr <jr+1 <···<ji ≤n} xj1 · · · xji = µi (P1 × P2 ),
since P1 × P2 is simply a parallelotope of which we know how to compute µi . □
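The product formula (2.9) can be sanity-checked numerically for boxes, where each µk is the corresponding elementary symmetric function of the side lengths. A Python sketch (the function name is ours):

```python
from itertools import combinations
from math import prod

def mu(k, sides):
    # mu_k of a box equals e_k of its side lengths (Theorem 2.15)
    return sum(prod(sides[i] for i in idx)
               for idx in combinations(range(len(sides)), k))

P1 = (2.0, 3.0)          # a box in R^2
P2 = (4.0,)              # a box (segment) in R^1
product_box = P1 + P2    # the box P1 x P2, living in R^3

i = 2
lhs = mu(i, product_box)
rhs = sum(mu(r, P1) * mu(i - r, P2) for r in range(i + 1))
```

Both sides come out to 26: on the right, the terms are 1·0 (r = 0), 5·4 (r = 1), and 6·1 (r = 2), matching e2(2, 3, 4) on the left.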
§2.4. Classifying the Continuous Invariant Valuations on Pix. At this point we are very
close to a full description of the continuous invariant valuations on Pix(n).
Definition 2.18. A valuation µ on Pix(n) is said to be simple if µ(P ) = 0 for all P of
dimension less than n.
□
Theorem 2.19 (Volume Theorem for Pix(n)). Let µ be a translation invariant, simple valuation defined on Par(n) and suppose that µ is either continuous or monotone. Then there exists
c ∈ R such that µ(P ) = cµn (P ) for all P ∈ Pix(n), that is, µ is equal to the volume up to a
constant factor.
Proof. Let [0, 1]n denote the unit cube in Rn and let c = µ([0, 1]n ). Then µ([0, 1/k]n ) = c/k^n
for all integers k > 0. Therefore, µ(C) = cµn (C) for every box C with rational side lengths
and sides parallel to the coordinate axes, since C can be built from [0, 1/k]n cubes for some k.
Since µ is either continuous or monotone and Q is dense in R, µ(C) = cµn (C) also holds for
boxes C with arbitrary real side lengths, since we can find a sequence of rational boxes Cn
converging to C. Then, by inclusion-exclusion, µ(P ) = cµn (P ) for all P ∈ Pix(n) (since the
equality holds for parallelotopes, it extends to pixelations). □
Theorem 2.20. The valuations µ0 , µ1 , . . . , µn form a basis for the vector space of all continuous invariant valuations defined on Pix(n).
Proof. Let µ be a continuous invariant valuation on Pix(n). Denote by x1 , . . . , xn the
standard Euclidean coordinates on Rn and let Hj denote the hyperplane defined by the equation xj = 0. The restriction of µ to Hj is an invariant valuation on pixelations in Hj .
Proceeding by induction (taking n = 1 as a base case, which was proven in Proposition 2.13),
assume
µ(A) = Σ_{i=0}^{n−1} ci µi (A), ∀A ∈ Pix(n) such that A ⊆ Hj .   (2.10)
The ci are the same for all Hj since µ0 , . . . , µn−1 are invariant under permutations of the
coordinates. Then, µ − Σ_{i=0}^{n−1} ci µi vanishes on all lower dimensional pixelations in
Pix(n), since any such pixelation is contained in hyperplanes parallel to the Hj ’s (and the
µi ’s are translation invariant). Then, by Theorem 2.19 we deduce
µ − Σ_{i=0}^{n−1} ci µi = cn µn ,
which proves our claim. □
If we can find a continuous invariant valuation on Pix(n), then we know that it is a linear
combination of the µi ’s. However, we would like a better description, if at all possible. The
following corollary yields one.
Definition 2.21. A valuation µ is said to be homogeneous of degree k ≥ 0 if µ(αP ) =
α^k µ(P ) for all P ∈ Pix(n) and all α ≥ 0.
Corollary 2.22. Let µ be a continuous invariant valuation defined on Pix(n) that is homogeneous of degree k for some 0 ≤ k ≤ n. Then there exists c ∈ R such that µ(P ) = cµk (P ) for
all P ∈ Pix(n).
Proof. There exist c0 , . . . , cn ∈ R such that µ = Σ_{i=0}^{n} ci µi . If P = [0, 1]n , then
µi (P ) = ei (1, . . . , 1) = \binom{n}{i}, so for all α > 0,
µ(αP ) = Σ_{i=0}^{n} ci µi (αP ) = Σ_{i=0}^{n} ci α^i µi (P ) = Σ_{i=0}^{n} ci \binom{n}{i} α^i .
Meanwhile,
µ(αP ) = α^k µ(P ) = α^k Σ_{i=0}^{n} ci µi (P ) = ( Σ_{i=0}^{n} ci \binom{n}{i} ) α^k ,
so
Σ_{i=0}^{n} ci \binom{n}{i} α^i = ( Σ_{i=0}^{n} ci \binom{n}{i} ) α^k for all α > 0,
meaning that ci = 0 for i ≠ k. □
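On boxes, the homogeneity in Corollary 2.22 is visible directly: scaling every side by α multiplies ek of the side lengths by α^k. A quick numerical check (the function name is ours):

```python
from itertools import combinations
from math import prod, isclose

def mu(k, sides):
    # mu_k of a box equals e_k of its side lengths
    return sum(prod(sides[i] for i in idx)
               for idx in combinations(range(len(sides)), k))

sides = (1.0, 2.0, 5.0)
alpha = 3.0
scaled = tuple(alpha * s for s in sides)

# mu_k(alpha * P) = alpha^k * mu_k(P) for k = 0, 1, 2, 3
checks = [isclose(mu(k, scaled), alpha ** k * mu(k, sides))
          for k in range(4)]
```

All four checks succeed, since each monomial in ek is a product of exactly k scaled sides.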
3. Valuations on polyconvex sets
Now that we understand continuous invariant valuations on a very specific collection of subsets of Rn , we recognize the limitations of this viewpoint. Notably, we have said nothing of
valuations on exciting shapes such as triangles and disks. To include these, we dramatically
expand our collection of subsets and again try to classify the continuous invariant valuations.
This effort will turn out to be a far greater undertaking.
§3.1. Convex and Polyconvex Sets.
Definition 3.1. (a) K ⊂ Rn is convex if for any two points x and y in K, the line segment
between x and y lies in K. We denote by Kn the set of all compact convex subsets of Rn .
(b) A polyconvex set is a finite union of compact convex sets. We denote by Polycon(n) the
set of all polyconvex sets in Rn .
⊓⊔
Example 3.2. A single point is a compact convex set. If
T := {(x, y) ∈ R2 | x, y ≥ 0, x + y ≤ 1},
then T is a filled-in triangle in R2 , so that T ∈ K2 . Also, ∂T is a polyconvex set, but not a convex set. ⊓⊔
One of the most important properties of convex sets is the separation property.
Proposition 3.3. Suppose C ⊂ Rn is a closed convex set and x ∈ Rn \ C. Then there exists a hyperplane which separates x from C, i.e., x and C lie in different half-spaces determined by the hyperplane.

Proof. We outline only the main geometric idea of the construction of such a hyperplane. Let
d := dist(x, C).
Then we can find a unique point y ∈ C such that d = |y − x|. The hyperplane perpendicular to the segment [x, y] and passing through its midpoint will do the trick. ⊓⊔
Definition 3.4. If A is a polyconvex set in Rn , then we say that A is of dimension n or has
full dimension if A is not contained in a finite union of hyperplanes. Otherwise, we say that
A has lower dimension.
⊓⊔
Remark 3.5. Polycon(n) is a distributive lattice under union and intersection. Furthermore,
Kn is a generating set of Polycon(n).
⊓⊔
We must now explore some tools we can use to understand these compact, convex sets.
The most critical of these is the support function. Let ⟨−, −⟩ denote the standard inner product on Rn and | − | the associated norm. We set
Sn−1 := {u ∈ Rn ; |u| = 1}.
Definition 3.6. Let K ∈ Kn be nonempty. Its support function hK : Sn−1 → R is given by
hK (u) := max_{x∈K} ⟨u, x⟩. ⊓⊔
Example 3.7. If K = {x} is just a single point, then hK (u) = ⟨u, x⟩ for all u ∈ Sn−1 .
Remark 3.8. (a) We can characterize support functions in terms of functions on Rn as follows. Let h̃ : Rn → R be such that h̃(tu) = t h̃(u) for all u ∈ Sn−1 and t ≥ 0, and let h be the restriction of h̃ to Sn−1 . Then h is the support function of a compact convex set in Rn if and only if
h̃(x + y) ≤ h̃(x) + h̃(y)
for all x, y ∈ Rn .
(b) Consider the hyperplane H(K, u) = {x ∈ Rn | ⟨x, u⟩ = hK (u)} and the closed half-space H(K, u)− = {x ∈ Rn | ⟨x, u⟩ ≤ hK (u)}. Then it is easy to see that H(K, u) is "tangent" to ∂K and that K lies wholly in H(K, u)− for every u ∈ Sn−1 . The separation property described in Proposition 3.3 implies
K = ∩_{u∈Sn−1} H(K, u)− .
In other words, K is uniquely determined by its support function. ⊓⊔
Definition 3.9. Let K, L be in Kn . Then we define
K + L := {x + y | x ∈ K, y ∈ L}
and call K + L the Minkowski sum of K and L.
Remark 3.10. We want to point out that for every K, L ∈ Kn we have
K ⊂ L ⇐⇒ hK (u) ≤ hL (u), ∀u ∈ Sn−1 ,
and
hK+L (u) = max_{x∈K, y∈L} ⟨x + y, u⟩ = max_{x∈K, y∈L} ( ⟨x, u⟩ + ⟨y, u⟩ ) = max_{x∈K} ⟨x, u⟩ + max_{y∈L} ⟨y, u⟩ = hK (u) + hL (u). ⊓⊔
Remark 3.11. Recall that for compact sets K and L in Rn , the Hausdorff metric satisfies δ(K, L) ≤ ε if and only if K ⊂ L + εB and L ⊂ K + εB. In light of this fact and the preceding comments, one can show that
δ(K, L) = sup_{u∈Sn−1} |hK (u) − hL (u)|.
That is, the Hausdorff metric on compact convex subsets of Rn is given by the uniform metric on the set of support functions of compact convex sets. ⊓⊔
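Remarks 3.10 and 3.11 can be sketched numerically for polygons, whose support functions are maxima over finitely many vertices. A small Python/NumPy illustration (the sampling grid over S1 is, of course, only an approximation to the sup):

```python
import numpy as np

def support(points, u):
    """Support function h_K(u) = max_{x in K} <u, x> of the convex hull
    of a finite point set (a linear functional attains its max at a vertex)."""
    return (points @ u).max()

rng = np.random.default_rng(1)
K = rng.standard_normal((6, 2))        # vertex candidates of a polygon
L = rng.standard_normal((4, 2))
KL = (K[:, None, :] + L[None, :, :]).reshape(-1, 2)  # pairwise vertex sums

dirs = [np.array([np.cos(t), np.sin(t)])
        for t in np.linspace(0, 2 * np.pi, 360, endpoint=False)]

# Remark 3.10: h_{K+L} = h_K + h_L, checked on a grid of directions in S^1
for u in dirs:
    assert abs(support(KL, u) - (support(K, u) + support(L, u))) < 1e-9

# Remark 3.11: Hausdorff distance = sup_u |h_K(u) - h_L(u)|; the grid
# gives an approximation (a lower bound) of the sup
dH = max(abs(support(K, u) - support(L, u)) for u in dirs)
```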
Now that we have some tools to understand these compact convex sets, we will soon
wish to consider valuations on them. But we don’t want just any valuations. We would like
our valuations to be somehow tied to the shape of the convex set alone, rather than where
the convex set is in the space. So, we would like some sort of invariance under types of
transformations. Furthermore, to make things nicer we would like to restrict our attention to
those valuations which are somehow continuous. We formalize these notions.
It is easiest to begin with continuity. Recall that the Hausdorff distance turns the set of
compact subsets of Rn into a metric space. Since Kn is a subset of these, continuity is a well
defined concept for elements of Kn . (We restrict our attention to these elements and not all
of Polycon(n) since dimensionality problems arise when considering limits of polyconvex
sets.)
Definition 3.12. A valuation µ : Polycon(n) → R is said to be convex continuous (or simply
continuous where no confusion is possible) if
µ(An )−→µ(A)
whenever An , A are compact, convex sets and An −→A.
⊓⊔
Notation 3.13. Let En be the Euclidean group of Rn , which is the subgroup of affine transformations of Rn generated by translations and rotations. For any g ∈ En there exist
T ∈ SO(n) and v ∈ Rn such that
g(x) = T (x) + v, ∀x ∈ Rn .
The elements of En are also known as rigid motions.
Definition 3.14. Let µ : Polycon(n) → R be a valuation. Then µ is said to be rigid motion
invariant (or invariant when no confusion is possible) if
µ(A) = µ(gA)
for all A ∈ Polycon(n) and g ∈ En . If the same holds only for translations g then µ is said to be translation invariant. ⊓⊔
Our aim is to understand the set of convex-continuous invariant valuations on Polycon(n),
which we will denote by Val(n). Note that for every m ≤ n we have an inclusion
Polycon(m) ⊂ Polycon(n)
given by the natural inclusion Rm ֒→ Rn . In particular, any continuous invariant valuation
µ : Polycon(n) → R induces by restriction a valuation on Polycon(m). In this way we
obtain for every m ≤ n a restriction map
Sm,n : Val(n) → Val(m)
such that Sn,n is the identity map and for every k ≤ m ≤ n we have Sk,n = Sk,m ◦ Sm,n , i.e.
the diagram below commutes.
Val(n) −−Sm,n−→ Val(m) −−Sk,m−→ Val(k), with Sk,n : Val(n) → Val(k) the composite arrow.
Definition 3.15. An intrinsic valuation is a sequence of convex-continuous, invariant valuations µn ∈ Val(n) such that for every m ≤ n we have
µm = Sm,n µn .
⊓⊔
Remark 3.16. (a) To put the above definition in some perspective we need to recall a classical
notion. A projective sequence of Abelian groups is a sequence of Abelian groups (Gn )n≥1
together with a family of group morphisms
Sm,n : Gn → Gm ,
m≤n
satisfying
Sn,n = 1Gn , Sk,n = Sk,m ◦ Sm,n , ∀k ≤ m ≤ n.
The projective limit of a projective sequence {Gn ; Sm,n } is the subgroup
limprojn Gn ⊂ Πn Gn
consisting of sequences (gn )n≥1 satisfying
gn ∈ Gn , gm = Sm,n gn , ∀m ≤ n.
The sequence (Val(n)) together with the maps Sm,n defines a projective sequence, and we set
Val(∞) := limprojn Val(n) ⊂ Πn≥0 Val(n).
An intrinsic valuation is then an element of Val(∞).
Similarly, if we denote by ValPix (n) the space of continuous, invariant valuations on Pix(n), then we obtain again a projective sequence of vector spaces, and an element of the corresponding projective limit is an intrinsic valuation in the sense defined in the previous section.
(b) Observe that since Pix(n) ⊂ Polycon(n), restriction gives a natural map
Φn : Val(n) → ValPix (n).
A priori this linear map need be neither injective nor surjective. However, in a later section we will show that it is a linear isomorphism. ⊓⊔
§3.2. Groemer’s Extension Theorem. We now show that any convex-continuous valuation on Kn can be extended to Polycon(n). Thus, we can confine our studies to continuous
valuations on compact, convex sets.
Theorem 3.17. A convex-continuous valuation µ on Kn can be extended (uniquely) to Polycon(n). Moreover, if µ is also invariant, then so is its extension.
Proof. Suppose that µ is a convex-continuous valuation on Kn . In light of Groemer’s integral
theorem, we need only show that the integral defined by µ on the space of indicator functions
is well defined.
We proceed by induction on dimension. In dimension zero this proposition is trivial, and in dimension one K1 coincides with Par(1) and Polycon(1) with Pix(1). Hence,
we have already done this dimension as well. So, suppose the theorem holds for dimension
n − 1.
Suppose for the sake of contradiction that the integral defined by µ is not well defined. Using the same technique as in Theorem 2.5, Groemer’s Extension Theorem for parallelotopes,
suppose that there exist K1 , . . . , Km ∈ Kn and real numbers α1 , . . . , αm such that

Σ_{i=1}^{m} αi IKi = 0    (3.1)

while

Σ_{i=1}^{m} αi µ(Ki ) = r ≠ 0.    (3.2)

Take m to be the least positive integer for which (3.1) and (3.2) can hold.
Choose a hyperplane H with associated closed half-spaces H + and H − such that K1 ⊂ Int(H + ). Recall that IA∩B = IA IB . Thus, in light of equation (3.1), we can multiply and get
Σ_{i=1}^{m} αi IKi ∩H + = 0
as well as
Σ_{i=1}^{m} αi IKi ∩H − = 0 and Σ_{i=1}^{m} αi IKi ∩H = 0.
Now note that Ki = (Ki ∩ H + ) ∪ (Ki ∩ H − ) and that H + ∩ H − = H. Thus, since µ is a valuation, we may apply this decomposition and see that
Σ_{i=1}^{m} αi µ(Ki ) = Σ_{i=1}^{m} αi µ(Ki ∩ H + ) + Σ_{i=1}^{m} αi µ(Ki ∩ H − ) − Σ_{i=1}^{m} αi µ(Ki ∩ H).
Since each Ki ∩ H lies inside H, a space of dimension n − 1, we deduce from the induction assumption that
Σ_{i=1}^{m} αi µ(Ki ∩ H) = 0.
Moreover, since we took K1 ⊂ Int(H + ), we have
IK1 ∩H − = 0,
and the sum
Σ_{i=1}^{m} αi µ(Ki ∩ H − ) = Σ_{i=2}^{m} αi µ(Ki ∩ H − )
must be zero due to the minimality of m. Thus, from (3.2), we have
0 ≠ r = Σ_{i=1}^{m} αi µ(Ki ) = Σ_{i=1}^{m} αi µ(Ki ∩ H + ).
Thus we have replaced Ki by Ki ∩ H + in (3.1) and (3.2). We now repeat this process. By taking a countable dense subset, choose a sequence of hyperplanes H1 , H2 , . . . such that K1 ⊂ Int(Hi+ ) for each i and
K1 = ∩i Hi+ .
Thus, iterating the preceding argument, we have that
Σ_{i=1}^{m} αi µ(Ki ∩ H1+ ∩ · · · ∩ Hq+ ) = r ≠ 0
for all q ≥ 1. Since µ is continuous, we can take the limit as q → ∞, giving
Σ_{i=1}^{m} αi µ(Ki ∩ K1 ) = r ≠ 0,
while applying the same type of argument to (3.1) yields
Σ_{i=1}^{m} αi IKi ∩K1 = 0.
Thus, we find ourselves in exactly the same position we were in with equations (3.1) and (3.2); therefore, we may repeat the entire argument for K2 , . . . , Km , giving

Σ_{i=1}^{m} αi µ(Ki ∩ K1 ∩ · · · ∩ Km ) = Σ_{i=1}^{m} αi µ(K1 ∩ · · · ∩ Km ) = ( Σ_{i=1}^{m} αi ) · µ(K1 ∩ · · · ∩ Km ) = r ≠ 0    (3.3)

and

Σ_{i=1}^{m} αi IK1 ∩···∩Km = ( Σ_{i=1}^{m} αi ) IK1 ∩···∩Km = 0.    (3.4)

The equalities (3.3) and (3.4) contradict each other. The first implies that α1 + · · · + αm ≠ 0 and K1 ∩ · · · ∩ Km ≠ ∅, while the second implies that α1 + · · · + αm = 0 or IK1 ∩···∩Km = 0. Thus, the integral must be well defined, so there exists a unique extension of µ to Polycon(n). ⊓⊔
§3.3. The Euler Characteristic.
Theorem 3.18. (a) There exists an intrinsic valuation µ0 = (µn0 )n≥0 uniquely determined by
µn0 (C) = 1, ∀C ∈ Kn .
(b) Suppose C ∈ Polycon(n) and u ∈ Sn−1 . Define ℓu : Rn → R by ℓu (x) = ⟨u, x⟩ and set
Ht := ℓu^{−1} (t), Ct := C ∩ Ht , Fu,C (t) := µ0 (Ct ).
Then Fu,C (t) is a simple function, i.e. a linear combination with integer coefficients of characteristic functions IS , S ∈ K1 . Moreover

µ0 (C) = ∫ IC dµ0 = ∫ Fu,C (t) dµ0 (t) = Σt ( Fu,C (t) − Fu,C (t + 0) ).    (Fubini)
Proof. Part (a) follows immediately from Theorem 3.17. To establish the second part we use Theorem 3.17 again. For every C ∈ Polycon(n) define
λu (C) := ∫ Fu,C (t) dµ0 (t).
Observe that for every t ∈ R and every C1 , C2 ∈ Polycon(n) we have
Fu,C1 ∪C2 (t) = µ0 ( Ht ∩ (C1 ∪ C2 ) ) = µ0 ( (Ht ∩ C1 ) ∪ (Ht ∩ C2 ) )
= µ0 (Ht ∩ C1 ) + µ0 (Ht ∩ C2 ) − µ0 ( (Ht ∩ C1 ) ∩ (Ht ∩ C2 ) )
= Fu,C1 (t) + Fu,C2 (t) − Fu,C1 ∩C2 (t).
Observe also that if C ∈ Kn then Fu,C is the characteristic function of a compact interval in R1 . This shows that Fu,C is a simple function for every C ∈ Polycon(n). Moreover
∫ Fu,C1 ∪C2 dµ0 = ∫ Fu,C1 dµ0 + ∫ Fu,C2 dµ0 − ∫ Fu,C1 ∩C2 dµ0 ,
so that the correspondence
Polycon(n) ∋ C ↦ λu (C)
is a valuation such that λu (C) = 1, ∀C ∈ Kn . From part (a) we deduce
∫ Fu,C dµ0 = λu (C) = µ0 (C).
We only have to prove that for every simple function h(t) on R1 we have
∫ h(t) dµ0 (t) = L1 (h) := Σt ( h(t) − h(t + 0) ).
To achieve this, observe that L1 (h) is linear and convex continuous in h, and thus defines an integral on the space of simple functions. Moreover, if h is the characteristic function of a compact interval then L1 (h) = 1. Thus by Theorem 3.17 we deduce
L1 = ∫ dµ0 . ⊓⊔
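The one-dimensional Euler integral h ↦ Σt (h(t) − h(t + 0)) is easy to implement for simple functions supported on finitely many closed intervals. A sketch in Python (the small `eps` stands in for the right-hand limit h(t + 0), which is adequate when interval lengths are much larger than `eps`):

```python
def h(t, terms):
    """Evaluate a 1-D simple function  sum_i a_i * I_[l_i, r_i]  at t."""
    return sum(a for (l, r), a in terms if l <= t <= r)

def euler_integral(terms, eps=1e-9):
    """L1(h) = sum_t (h(t) - h(t+0)): only right endpoints of the
    intervals can contribute, since h is right-continuous elsewhere."""
    pts = {r for (l, r), _ in terms}
    return sum(h(t, terms) - h(t + eps, terms) for t in pts)

# Indicator of [0,1] U [2,3] U {5}: three connected components
f = [((0, 1), 1), ((2, 3), 1), ((5, 5), 1)]
assert euler_integral(f) == 3          # mu_0 counts components in R^1

# Inclusion-exclusion: I_[0,2] + I_[1,3] - I_[1,2] is the indicator of [0,3]
g = [((0, 2), 1), ((1, 3), 1), ((1, 2), -1)]
assert euler_integral(g) == 1
```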
Remark 3.19. Denote by S(Rn ) the Abelian subgroup of Map(Rn , Z) generated by indicator functions of compact convex subsets of Rn . We will refer to the elements of S(Rn ) as simple functions. Thus any f ∈ S(Rn ) can be written, non-uniquely, as a sum
f = Σi αi ICi , αi ∈ Z, Ci ∈ Kn .
The Euler characteristic then defines an integral
∫ dµ0 : S(Rn ) → Z, ∫ ( Σi αi ICi ) dµ0 = Σi αi .
The convolution of two simple functions f, g is the function f ∗ g defined by
(f ∗ g)(x) := ∫ f̄x (y) · g(y) dµ0 (y), where f̄x (y) := f (x − y).
Observe that if A, B ∈ Kn and A + B is their Minkowski sum then
IA ∗ IB = IA+B ,
so that f ∗ g is a simple function for any f, g ∈ S(Rn ). Moreover,
f ∗ g = g ∗ f, ∀f, g ∈ S(Rn ).
Finally, f ∗ I{0} = f for all f ∈ S(Rn ), so that (S(Rn ), +, ∗) is a commutative ring with 1. ⊓⊔
Definition 3.20. A convex polyhedron is an intersection of a finite collection of closed halfspaces. A convex polytope is a compact convex polyhedron. A polytope is a finite union of
convex polytopes. The polytopes form a distributive sublattice of Polycon(n).
The dimension of a convex polyhedron P is the dimension of the affine subspace Aff(P ) generated by P . We denote by relint(P ) the relative interior of P , that is, the interior of P relative to the topology of Aff(P ).
Remark 3.21. Given a convex polytope P , the boundary ∂P is also a polytope. Therefore, µ0 (∂P ) is defined.
Theorem 3.22. If P ⊂ Rn is a convex polytope of dimension n > 0, then µ0 (∂P ) =
1 − (−1)n .
Proof. Let u ∈ Sn−1 be a unit vector and define ℓu : Rn → R as before. Using the previous
notation, note that Ht ∩∂P = ∂(Ht ∩P ) if t is not a boundary point of the interval ℓu (P ) ⊂ R.
Let F : R → R be defined by
F (t) = µ0 (Ht ∩ ∂P ).
We proceed by induction. For n = 1, we have µ0 (∂P ) = 2 = 1 − (−1) since ∂P consists of
two distinct points (since P is an interval).
For n > 1, it follows from the induction hypothesis that
µ0 (Ht ∩ ∂P ) = µ0 (∂(Ht ∩ P )) = 1 − (−1)^{n−1}    (*)
if t ∈ ℓu (P ) is not a boundary point of the interval ℓu (P ). If t ∈ ∂ℓu (P ), we have
µ0 (Ht ∩ ∂P ) = 1    (**)
since Ht ∩ P is a face of P and is thus in Kn . Finally,
µ0 (Ht ∩ ∂P ) = 0    (***)
when Ht ∩ ∂P = ∅.
We can now compute
∫ F (t) dµ0 (t) = Σt ( F (t) − F (t + 0) ),
where the summand vanishes except at the two points a and b (a < b) with [a, b] = ℓu (P ). Then
Σt ( F (t) − F (t + 0) ) = F (a) − F (a + 0) + F (b) − F (b + 0).
Now observe that F (b + 0) = 0 by (***), F (b) = F (a) = 1 by (**), and F (a + 0) = 1 − (−1)^{n−1} by (*). Then
∫ F (t) dµ0 (t) = 1 − ( 1 − (−1)^{n−1} ) + 1 − 0 = 1 + (−1)^{n−1} = 1 − (−1)^n . ⊓⊔
Theorem 3.23. Let P be a compact convex polytope of dimension k in Rn . Then
µ0 (relint(P )) = (−1)k
(3.5)
Proof. Since µ0 is normalized independently of n, we can consider P inside the k-dimensional plane in Rn in which it is contained. Then relint(P ) = P \ ∂P , so
µ0 (relint(P )) = µ0 (P ) − µ0 (∂P ) = 1 − ( 1 − (−1)^k ) = (−1)^k . ⊓⊔
Definition 3.24. A system of faces of a polytope P is a family F of convex polytopes such that the following hold.
(a) ∪_{Q∈F} relint(Q) = P .
(b) If Q, Q′ ∈ F and Q ≠ Q′ , then relint(Q) ∩ relint(Q′ ) = ∅. ⊓⊔
Theorem 3.25 (Euler-Schläfli-Poincaré). Let F be a system of faces of the polytope P , and let fi be the number of elements of F of dimension i. Then µ0 (P ) = f0 − f1 + f2 − · · · .

Proof. We have
IP = Σ_{Q∈F} Irelint(Q) ,
so that
µ0 (P ) = Σ_{Q∈F} µ0 (relint(Q)) = Σ_{Q∈F} (−1)^{dim Q} = Σ_{k≥0} (−1)^k fk . ⊓⊔
§3.4. Linear Grassmannians. We denote by Gr(n, k) the set of all k-dimensional subspaces of Rn . Observe that the orthogonal group O(n) acts transitively on Gr(n, k), and the stabilizer of a k-dimensional coordinate plane can be identified with the Cartesian product O(k) × O(n − k). Thus Gr(n, k) can also be identified with the space of left cosets O(n)/( O(k) × O(n − k) ).
Example 3.26. Gr(n, 1) is the set of lines through the origin in Rn , a.k.a. the projective space RPn−1 . Denote by Sn−1 the unit sphere in Rn . We have a natural map
ℓ : Sn−1 → Gr(n, 1), Sn−1 ∋ x ↦ ℓ(x) = the line through 0 and x.
The map ℓ is 2-to-1 since ℓ(x) = ℓ(−x). ⊓⊔
Gr(n, k) can also be viewed as a subset of the vector space of symmetric linear operators
Rn → Rn via the map
Gr(n, k) ∋ V ↦ PV = the orthogonal projection onto V .
As such it is closed and bounded and thus compact. To proceed further we need some
notations and a classical fact.
Denote by ωn the volume of the unit ball in Rn and by σn−1 the (n − 1)-dimensional "surface area" of Sn−1 . Then
σn−1 = nωn
and
ωn = π^{n/2} / Γ(n/2 + 1) = { π^k /k! if n = 2k; 2^{2k+1} π^k k!/(2k + 1)! if n = 2k + 1 },
where Γ(x) is the gamma function. We list below the values of ωn for small n.

n  : 0  1  2  3     4
ωn : 1  2  π  4π/3  π²/2
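The closed forms for ωn are easy to check in Python against the Γ-function formula:

```python
from math import gamma, pi, factorial, isclose

def omega(n):
    """Volume of the unit ball in R^n."""
    return pi ** (n / 2) / gamma(n / 2 + 1)

# Agreement with the even/odd closed forms
for k in range(6):
    assert isclose(omega(2 * k), pi ** k / factorial(k))
    assert isclose(omega(2 * k + 1),
                   2 ** (2 * k + 1) * pi ** k * factorial(k)
                   / factorial(2 * k + 1))

# sigma_{n-1} = n * omega_n: e.g. the area of S^2 is 3 * (4*pi/3) = 4*pi
assert isclose(3 * omega(3), 4 * pi)

# The table: omega_0 .. omega_4 = 1, 2, pi, 4*pi/3, pi^2/2
expected = [1, 2, pi, 4 * pi / 3, pi ** 2 / 2]
assert all(isclose(omega(n), e) for n, e in enumerate(expected))
```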
Theorem 3.27. For every positive constant c there exists a unique measure µ = µc on
Gr(n, k) which is invariant, i.e.
µ(g · S) = µ(S), ∀g ∈ O(n), S ⊂ Gr(n, k) open subset
and has finite volume, i.e. µ(Gr(n, k)) = c.
⊓⊔
For a proof we refer to [San].
Example 3.28. Here is how we construct such a measure on Gr(n, 1). Given an open set U ⊂ Gr(n, 1) we obtain a subset
Ũ := ℓ−1 (U) ⊂ Sn−1 ,
which consists of two diametrically opposed subsets of the unit sphere. Define
µ(U) := (1/2) area(Ũ ).
Observe that for this measure
µ(Gr(n, 1)) = (1/2) area(Sn−1 ) = σn−1 /2 = nωn /2. ⊓⊔
Observe that a constant multiple of an invariant measure is an invariant measure. In particular
µc = c · µ1 .
Define
[n] := nωn /(2ωn−1 ), [n]! := [1] · [2] · · · [n] = (n!/2^n ) ωn ,
and the bracket analogue of the binomial coefficient,
[n choose k] := [n]! / ( [k]! [n − k]! ),
and denote by νkn the invariant measure on Gr(n, k) such that
νkn ( Gr(n, k) ) = [n choose k].
§3.5. Affine Grassmannians. We denote by Graff(n, k) the space of k-dimensional affine
subspaces (planes) of Rn . For every affine k-plane V we denote by Π(V ) the linear subspace parallel to V , and by V ⊥ the orthogonal complement of Π(V ). We obtain in this fashion a
surjection
Π : Graff(n, k) → Gr(n, k), V ↦ Π(V ).
Observe that an affine k-plane V is uniquely determined by V ⊥ and the point p = V ⊥ ∩ V .
The fiber of the map Π : Graff(n, k) → Gr(n, k) over a point L ∈ Gr(n, k) is the set Π−1 (L)
consisting of all affine k-planes parallel to L. This set can be canonically identified with L⊥
via the map
Π−1 (L) ∋ V ↦ V ∩ L⊥ ∈ L⊥ .
The map Π : Graff(n, k) → Gr(n, k) is an example of a vector bundle.
We now describe how to integrate functions f : Graff(n, k) → R. Define
∫_{Graff(n,k)} f (V ) dλnk := ∫_{Gr(n,k)} ( ∫_{L⊥} f (L + p) dL⊥ p ) dνkn (L),
where dL⊥ p denotes the Euclidean volume element on L⊥ .
Example 3.29. Let
f : Graff(3, 1) → R, f (V ) := { 1 if dist(V, 0) ≤ 1; 0 if dist(V, 0) > 1 }.

FIGURE 2. The orthogonal projection of an element in Graff(3, 2).

Observe that f is none other than the indicator function of the set Graff(3, 1; B3 ) of affine lines in R3 which intersect B3 , the closed unit ball centered at the origin. Then
∫_{Graff(3,1)} f (V ) dλ31 = ∫_{Gr(3,1)} ( ∫_{L⊥} f (L + p) dp ) dν13 (L) = ω2 · ν13 ( Gr(3, 1) ) = [3 choose 1] ω2 = ( 3ω3 /(2ω2 ) ) ω2 = 3ω3 /2 = 2π,
since the inner integral equals the area ω2 of the unit disk in L⊥ . In particular
λ31 ( Graff(3, 1; B3 ) ) = 2π = σ2 /2 = (1/2) × surface area of B3 . ⊓⊔
§3.6. The Radon Transform. At this point, we further our goal of understanding Val(n)
by constructing some of its elements.
Recall that Kn is the collection of compact convex subsets of Rn and Polycon(n) is the
collection of polyconvex subsets of Rn . We denote by Val(n) the vector space of convex
continuous, rigid motion invariant valuations
µ : Polycon(n) → R.
Via the embedding Rk ֒→ Rn , k ≤ n, we can regard Polycon(k) as a subcollection of
Polycon(n) and thus we obtain a natural map
Sk,n : Val(n) → Val(k)
We denote by Valw (n) the vector subspace of Val(n) consisting of valuations µ homogeneous of degree (or weight) w, i.e.
µ(tP ) = t^w µ(P ), ∀t > 0, P ∈ Polycon(n).
For every k ≤ n and every µ ∈ Val(k) we define the Radon transform of µ to be
Rn,k µ : Polycon(n) → R, (Rn,k µ)(P ) := ∫_{Graff(n,k)} µ(P ∩ V ) dλnk (V ).
Proposition 3.30. If µ ∈ Val(k) then Rn,k µ is a convex continuous, invariant valuation on
Polycon(n).
Proof. For every µ ∈ Val(k), V ∈ Graff(n, k) and any S1 , S2 ∈ Polycon(n) we have
V ∩ (S1 ∪ S2 ) = (V ∩ S1 ) ∪ (V ∩ S2 ), (V ∩ S1 ) ∩ (V ∩ S2 ) = V ∩ (S1 ∩ S2 ),
so that
µ( V ∩ (S1 ∪ S2 ) ) = µ(V ∩ S1 ) + µ(V ∩ S2 ) − µ( V ∩ (S1 ∩ S2 ) ).
Integrating this equality with respect to V ∈ Graff(n, k) we deduce that
(Rn,k µ)(S1 ∪ S2 ) = (Rn,k µ)(S1 ) + (Rn,k µ)(S2 ) − (Rn,k µ)(S1 ∩ S2 ),
so that Rn,k µ is a valuation on Polycon(n). The invariance of Rn,k µ follows from the invariance of µ and of the measure λnk on Graff(n, k).
Observe that if Cν is a sequence of compact convex sets in Rn such that Cν → C ∈ Kn , then
limν→∞ µ(Cν ∩ V ) = µ(C ∩ V ), ∀V ∈ Graff(n, k).
We want to show that
limν→∞ ∫_{Graff(n,k)} µ(Cν ∩ V ) dλnk (V ) = ∫_{Graff(n,k)} µ(C ∩ V ) dλnk (V )
by invoking the Dominated Convergence Theorem. We will produce an integrable function f : Graff(n, k) → R such that
µ(Cν ∩ V ) ≤ f (V ), ∀ν > 0, ∀V ∈ Graff(n, k).    (3.6)
To this aim, observe first that since Cν → C there exists R > 0 such that all the sets Cν and the set C are contained in the ball Bn (R) of radius R centered at the origin. Define
Graff(n, k; R) := { V ∈ Graff(n, k) | V ∩ Bn (R) ≠ ∅ }.
Graff(n, k; R) is a compact subset of Graff(n, k) and thus it has finite volume. Now define
N := {0} ∪ {1/ν | ν ∈ Z>0 } ⊂ [0, 1]
and
F : N × Graff(n, k; R) → R, F (r, V ) := µ(C1/r ∩ V ),
where for uniformity we set C1/0 := C. Observe that N × Graff(n, k; R) is a compact space and F is continuous. Set
M := sup{ F (r, V ) | (r, V ) ∈ N × Graff(n, k; R) }.
Then M < ∞. The function
f : Graff(n, k) → R, f (V ) := { M if V ∈ Graff(n, k; R); 0 if V ∉ Graff(n, k; R) }
satisfies the requirements of (3.6). ⊓⊔
The resulting map Rn,k : Val(k) → Val(n) is linear and it is called the Radon transform.
Proposition 3.31. If µ ∈ Val(k) is homogeneous of weight w, then Rk+j,k µ is homogeneous of weight w + j, that is,
Rk+j,k ( Valw (k) ) ⊂ Valw+j (k + j).

Proof. If C ∈ Polycon(k + j) and t is a positive real number then
(Rk+j,k µ)(tC) = ∫_{Gr(k+j,k)} ( ∫_{L⊥} µ( tC ∩ (L + p) ) dL⊥ p ) dνkk+j (L).
We use the equality tC ∩ (L + p) = t( C ∩ (L + t^{−1} p) ) to get
(Rk+j,k µ)(tC) = t^w ∫_{Gr(k+j,k)} ( ∫_{L⊥} µ( C ∩ (L + t^{−1} p) ) dL⊥ p ) dνkk+j (L).
We make the change of variables q = t^{−1} p in the inner integral, so that dL⊥ p = t^j dL⊥ q and
(Rk+j,k µ)(tC) = t^{w+j} ∫_{Gr(k+j,k)} ( ∫_{L⊥} µ( C ∩ (L + q) ) dL⊥ q ) dνkk+j (L) = t^{w+j} (Rk+j,k µ)(C). ⊓⊔
So far we only know two valuations in Val(n), the Euler characteristic, and the Euclidean
volume voln . We can now apply the Radon transform to these valuations and hopefully
obtain new ones.
Example 3.32. Let volk ∈ Val(k) denote the Euclidean k-dimensional volume in Rk . Then for every n > k and every compact convex subset C ⊂ Rn we have
(Rn,k volk )(C) = ∫_{Graff(n,k)} volk (V ∩ C) dλnk (V ) = ∫_{Gr(n,k)} ( ∫_{L⊥} volk ( C ∩ (L + p) ) dL⊥ p ) dνkn (L),
where dL⊥ p denotes the Euclidean volume element on the (n − k)-dimensional space L⊥ . The usual Fubini theorem applied to the decomposition Rn = L⊥ × L implies that
voln (C) = µn (C) = ∫_{C} dRn q = ∫_{L⊥} volk ( C ∩ (L + p) ) dL⊥ p.
Hence
(Rn,k volk )(C) = ∫_{Gr(n,k)} voln (C) dνkn (L) = [n choose k] voln (C).
In other words,
Rn,k volk = [n choose k] voln .
Thus the Radon transform of the Euclidean volume produces the Euclidean volume in a higher dimensional space, rescaled by a universal multiplicative constant. ⊓⊔
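As a numerical sanity check of this computation in the smallest case n = 2, k = 1: parametrize lines in R2 by an angle θ ∈ [0, π) and a signed offset p, and, in the spirit of Example 3.28, assume the invariant measure dλ = (1/2) dθ dp, whose angular mass is then π/2 = [2 choose 1]. This parametrization is our own bookkeeping, not spelled out in the text:

```python
from math import sqrt, pi, isclose

def chord(p):
    """Length of the chord cut from the unit disk by a line at signed
    distance p from the origin."""
    return 2 * sqrt(1 - p * p) if abs(p) < 1 else 0.0

# Midpoint rule over offsets; the integrand is independent of the angle.
N = 200_000
dp = 2.0 / N
inner = sum(chord(-1 + (j + 0.5) * dp) * dp for j in range(N))  # = area = pi
integral = (pi / 2) * inner            # multiply by the angular mass pi/2

# Example 3.32 with n = 2, k = 1:
#   R_{2,1} vol_1 (disk) = [2 choose 1] * vol_2(disk) = (pi/2) * pi
assert isclose(inner, pi, rel_tol=1e-4)
assert isclose(integral, pi ** 2 / 2, rel_tol=1e-4)
```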
The above example is a bit disappointing since we have not produced an essentially new
type of valuation by applying the Radon transform to the Euclidean volume. The situation is
dramatically different when we play the same game with the Euler characteristic.
We are now ready to create elements of Val(n). For any integers 0 ≤ k ≤ n define
µnk := Rn,n−k µ0 ∈ Val(n),
where µ0 ∈ Val(n − k) denotes the Euler characteristic.
Observe that if C is a compact convex subset of Rk+j , then for any V ∈ Graff(k + j, k) we have
µ0 (V ∩ C) = { 1 if C ∩ V ≠ ∅; 0 if C ∩ V = ∅ }.
Thus the function
Graff(k + j, k) ∋ V ↦ µ0 (V ∩ C)
is the indicator function of the set
Graff(C, k) := { V ∈ Graff(k + j, k) ; V ∩ C ≠ ∅ }.
We conclude that
µjk+j (C) = (Rk+j,k µ0 )(C) = ∫_{Graff(k+j,k)} IGraff(C,k) dλkk+j = λkk+j ( Graff(C, k) ).
Thus the quantity µjk+j (C) "counts" how many k-planes intersect C. From this interpretation we obtain immediately the following result.
Theorem 3.33 (Sylvester). Suppose K ⊆ L ⊂ Rk+j are two compact convex subsets. Then the conditional probability that a k-plane which meets L also intersects K is equal to
µjk+j (K) / µjk+j (L). ⊓⊔
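Sylvester's theorem is easy to test by Monte Carlo for two concentric disks of radii r ≤ R in the plane: by rotation invariance only the offset of a line matters, so lines meeting the big disk correspond to offsets uniform in [−R, R] (a convention we assume, in the spirit of Example 3.28):

```python
import random

random.seed(0)
R, r = 2.0, 0.5
N = 200_000

# A random invariant line meeting the disk of radius R has a signed
# offset uniform in [-R, R]; it meets the concentric disk of radius r
# iff the offset is at most r in absolute value.
hits = sum(abs(random.uniform(-R, R)) <= r for _ in range(N))

# Sylvester: P(meets small disk | meets big disk) = mu(K)/mu(L) = r/R
assert abs(hits / N - r / R) < 0.01
```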
We can give another interpretation of µnj . Observe that if C ∈ Kn then

µnj (C) = ∫_{Graff(n,n−j)} µ0 (C ∩ V ) dλnn−j (V ) = ∫_{Gr(n,n−j)} ( ∫_{L⊥} µ0 ( C ∩ (L + p) ) dL⊥ p ) dνn−jn (L).

Now observe that if (C|L⊥ ) denotes the orthogonal projection of C onto L⊥ then

µ0 ( C ∩ (L + p) ) ≠ 0 ⇐⇒ C ∩ (L + p) ≠ ∅ ⇐⇒ p ∈ (C|L⊥ ).

An example of this fact in R2 can be seen in Figure 3. Hence

∫_{L⊥} µ0 ( C ∩ (L + p) ) dL⊥ p = volj (C|L⊥ )

and thus

µnj (C) = ∫_{Gr(n,n−j)} volj (C|L⊥ ) dνn−jn (L).    (3.7)

Thus µnj is the "average value" of the volumes of the projections of C onto the j-dimensional subspaces of Rn .

FIGURE 3. C|L⊥ for V in Graff(2, 1).
A priori, some or maybe all of the valuations µnj ∈ Valj (n) ⊂ Val(n), 0 ≤ j ≤ n could
be trivial. Note, however, that µnn is voln . Furthermore, volume is intrinsic, so µnn is in fact
µn . We show that in fact all of the µnj are nontrivial.
Proposition 3.34. The valuations
µ0 = µn0 , µn1 , · · · , µnj , · · · µn ∈ Val(n)
are linearly independent.
Proof. Let Bn (r) denote the closed ball of radius r centered at the origin of Rn . We set Bn := Bn (1). Since µnj is homogeneous of degree j we have, by (3.7),

µnj ( Bn (r) ) = r^j µnj (Bn ) = r^j ∫_{Gr(n,n−j)} volj (Bn |L⊥ ) dνn−jn (L) = r^j ∫_{Gr(n,n−j)} volj (Bj ) dνn−jn (L).

Hence

µnj ( Bn (r) ) = ωj r^j νn−jn ( Gr(n, n − j) ) = ωj [n choose n − j] r^j = ωj [n choose j] r^j .    (3.8)

Observe that the above equality is also valid for j = 0.
Suppose that for some real constants c0 , c1 , . . . , cn we have
Σ_{j=0}^{n} cj µnj = 0.
Then
Σ_{j=0}^{n} cj µnj ( Bn (r) ) = 0, ∀r > 0,
so that
Σ_{j=0}^{n} cj ωj [n choose j] r^j = 0, ∀r > 0.
This implies cj = 0, ∀j. ⊓⊔
The valuation µnj is homogeneous of degree j and it induces a continuous, invariant, homogeneous valuation of degree j on the lattice Pix(n) ⊂ Polycon(n). Observe that if C1 ⊂ C2 are two compact convex sets then
µnj (C1 ) ≤ µnj (C2 ).
Since every n-dimensional parallelotope contains a ball of positive radius, we deduce from (3.8) that µnj (P ) ≠ 0 for any n-dimensional parallelotope P . Corollary 2.22 implies that there exists a constant C = Cjn such that
Cjn µnj (P ) = µj (P ), ∀P ∈ Pix(n).    (3.9)
We denote by µ̂nj the valuation
µ̂nj := Cjn µnj ∈ Valj (n).    (3.10)
Since µnn is voln , as is µ̂nn , we know that Cnn = 1. Thus µ̂nn = µ̂n = µn = voln . We will prove in later sections that the sequence (µ̂nj ) defines an intrinsic valuation and that in fact all the constants Cjn are equal to 1.
Remark 3.35. Observe that

µnn−1 (Bn ) = ωn−1 [n choose n − 1] = [n] ωn−1 = ( nωn /(2ωn−1 ) ) ωn−1 = nωn /2 = σn−1 /2 = (1/2) × surface area of Bn .

This is a special case of a more general fact we discuss in the next subsection, which will imply that the constants Cn−1n in (3.9) are equal to 1. ⊓⊔
§3.7. The Cauchy Projection Formula. We want to investigate in some detail the valuations µjj+1 ∈ Val(j + 1). Suppose C ∈ Kj+1 and ℓ ∈ Gr(j + 1, 1). If we denote by (C|ℓ⊥ ) the orthogonal projection of C onto the hyperplane ℓ⊥ , then we obtain from (3.7)

µjj+1 (C) = ∫_{Gr(j+1,1)} µj (C|ℓ⊥ ) dν1j+1 (ℓ), ∀C ∈ Kj+1 .    (3.11)

Loosely speaking, µjj+1 (C) is the average value of the "areas" of the shadows of C on the hyperplanes of Rj+1 . To clarify the meaning of this average we need the following elementary fact.
Lemma 3.36. For any v ∈ Sj ⊂ Rj+1 ,

∫_{Sj} |u · v| du = 2ωj .    (3.12)
Proof. We recall from elementary calculus that an integral can be treated as a Riemann sum, i.e.,
∫_{Sj} |u · v| du ≈ Σi |ui · v| S(Ai ),
where the sphere Sj has been partitioned into many small portions (here denoted Ai ), each containing the point ui and possessing area S(Ai ).
Let Ãi denote the orthogonal projection of Ai onto the hyperplane tangent to Sj at the point ui , and denote by (Ãi |v ⊥ ) the orthogonal projection of Ãi onto the hyperplane v ⊥ . Since Ai ⊂ Sj , this projection lands entirely inside the unit disk Bj lying inside v ⊥ . Since Ãi lies in a hyperplane with unit normal ui , its projected area is the product of its area with the cosine of the angle between the two hyperplanes, i.e. S(Ãi |v ⊥ ) = |ui · v| S(Ãi ).
For a fine enough partition of Sj we have S(Ai ) ≈ S(Ãi ). Then |ui · v| S(Ai ) ≈ |ui · v| S(Ãi ) = S(Ãi |v ⊥ ) ≈ S(Ai |v ⊥ ). Therefore,
∫_{Sj} |u · v| du ≈ Σi S(Ai |v ⊥ ).
Because the collection of sets {Ai |v ⊥ } contains matching portions above and below the hyperplane v ⊥ , it covers the unit j-dimensional ball Bj once from above and once from below, so
∫_{Sj} |u · v| du ≈ 2ωj .
The approximations in this proof all become equalities in the limit as the mesh of the partition (Ai ) goes to zero. ⊓⊔
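Lemma 3.36 can also be checked by Monte Carlo for j = 2: the mean of |u · v| over S2 should be 2ω2 /σ2 = 2π/(4π) = 1/2.

```python
import random
from math import pi, sqrt

random.seed(0)

def random_unit_vec3():
    """Uniform point on S^2: normalize a standard Gaussian vector."""
    while True:
        x = [random.gauss(0, 1) for _ in range(3)]
        n = sqrt(sum(t * t for t in x))
        if n > 1e-12:
            return [t / n for t in x]

v = (0.0, 0.0, 1.0)                    # by invariance any unit v will do
N = 200_000
mean = sum(abs(sum(a * b for a, b in zip(random_unit_vec3(), v)))
           for _ in range(N)) / N

# Lemma 3.36 with j = 2: integral of |u.v| over S^2 is 2*omega_2 = 2*pi,
# so the mean of |u.v| is (2*pi)/sigma_2 = 1/2
assert abs(mean - 0.5) < 0.01
assert abs(mean * 4 * pi - 2 * pi) < 0.2
```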
Theorem 3.37 (Cauchy's surface area formula). For every convex polytope K ⊂ Rj+1 denote by S(K) its surface area. Then

S(K) = (1/ωj ) ∫_{Sj} µj (K|u⊥ ) du.
Proof. Let P have facets Pi with corresponding unit normal vectors vi and surface areas αi . For every unit vector u the projection (Pi |u⊥ ) onto the j-dimensional subspace orthogonal to u has j-dimensional volume
µj (Pi |u⊥ ) = αi |u · vi |.
For u ∈ Sj , the region (P |u⊥ ) is covered completely by the projections of the facets Pi ,
(P |u⊥ ) = ∪i (Pi |u⊥ ).
For almost every point p in the projection (P |u⊥ ), the line containing p and parallel to u intersects the boundary of P in two different points. (The set of points p ∈ (P |u⊥ ) for which this does not happen is negligible, i.e. it has j-dimensional volume 0.) Thus, in the above decomposition of (P |u⊥ ) as a union of projections of facets, almost every point of (P |u⊥ ) is contained in exactly two such projections. Therefore
µj (P |u⊥ ) = (1/2) Σi µj (Pi |u⊥ ),
so that
∫_{Sj} µj (P |u⊥ ) du = ∫_{Sj} (1/2) Σ_{i=1}^{m} αi |u · vi | du = Σ_{i=1}^{m} (αi /2) ∫_{Sj} |u · vi | du = Σ_{i=1}^{m} αi ωj = ωj S(P ),
where the next-to-last equality uses (3.12). ⊓⊔
Recall that for parallelotopes P ∈ Par(j + 1),
µj (P ) = (1/2) S(P ).
In this light, Cauchy's surface area formula becomes
µj (P ) = (1/(2ωj )) ∫_{Sj} µj (P |u⊥ ) du, ∀P ∈ Par(j + 1).
We want to rewrite this as an integral over the Grassmannian Gr(j + 1, 1). Recall that we have a 2-to-1 map
ℓ : Sj → Gr(j + 1, 1).
An invariant measure µc on Gr(j + 1, 1) of total volume c corresponds to an invariant measure µ̃c on Sj of total volume 2c. If σ denotes the invariant measure on Sj defined by the area element on the sphere of radius 1, then
σ(Sj ) = σj , µ̃c = (2c/σj ) σ.
Any function f : Gr(j + 1, 1) → R defines a function ℓ∗ f := f ◦ ℓ : Sj → R called the pullback of f via ℓ. Conversely, any even function g on the sphere is the pullback of some function ḡ on Gr(j + 1, 1). Moreover
∫_{Gr(j+1,1)} f dµc = (1/2) ∫_{Sj} ℓ∗ f dµ̃c = (c/σj ) ∫_{Sj} ℓ∗ f dσ.
This shows that
∫_{Sj} ℓ∗ f dσ = (σj /c) ∫_{Gr(j+1,1)} f dµc .
The measure ν1j+1 has total volume c = [j + 1], so that
µj (P ) = (1/(2ωj )) ∫_{Sj} µj (P |u⊥ ) du = ( σj /(2[j + 1]ωj ) ) ∫_{Gr(j+1,1)} µj (P |L⊥ ) dν1j+1 (L).
Using (3.11) we deduce
µj (P ) = ( σj /(2[j + 1]ωj ) ) µjj+1 (P )
for every parallelotope P ⊂ Rj+1 . Recalling that
σj = (j + 1)ωj+1 , [j + 1] = (j + 1)ωj+1 /(2ωj ),
we conclude
σj /(2[j + 1]ωj ) = 1,
so that finally
µj (P ) = µjj+1 (P ), ∀P ∈ Par(j + 1).
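Cauchy's surface area formula can be tested by Monte Carlo for the unit cube in R3 , whose shadow area in direction u is |u1 | + |u2 | + |u3 | by the double-covering argument in the proof of Theorem 3.37:

```python
import random
from math import sqrt

random.seed(1)

def random_unit_vec3():
    """Uniform point on S^2: normalize a standard Gaussian vector."""
    while True:
        x = [random.gauss(0, 1) for _ in range(3)]
        n = sqrt(sum(t * t for t in x))
        if n > 1e-12:
            return [t / n for t in x]

# Shadow of the unit cube on u^perp: the six facets have normals +-e_i and
# area 1, and almost every shadow point is covered by exactly two facets,
# so mu_2(cube|u^perp) = (1/2) * sum over facets = |u_1| + |u_2| + |u_3|.
N = 200_000
mean_shadow = sum(sum(abs(t) for t in random_unit_vec3())
                  for _ in range(N)) / N

# Cauchy: S(cube) = (1/omega_2) * integral_{S^2} mu_2(cube|u^perp) du
#       = (sigma_2/omega_2) * mean_shadow = 4 * mean_shadow, with S(cube) = 6
assert abs(4 * mean_shadow - 6.0) < 0.05
```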
4. The Characterization Theorem
We are now ready to completely characterize Val(n).
§4.1. The Characterization of Simple Valuations. Before we can state and prove the
characterization theorem for simple valuations, we need to recall a few things. En denotes
the group of rigid motions of Rn , i.e. the subgroup of affine transformations of Rn generated
by translations and rotations. Two subsets S0 , S1 ⊂ Rn are called congruent if there exists
φ ∈ En such that
φ(S0 ) = S1 .
A valuation µ : Polycon(n) → R is called simple if
µ(S) = 0 for every S ∈ Polycon(n) such that dim S < n.
The Minkowski sum of two compact convex sets K and L is given by
$$K + L = \{\,x + y \mid x\in K,\ y\in L\,\}.$$
A zonotope is a finite Minkowski sum of straight line segments, and a zonoid is a convex set Y that can be approximated, in the Hausdorff metric, by a convergent sequence of zonotopes.
If a compact convex set is symmetric about the origin, then we call it a centered set. We denote the set of centered sets in Kn by Knc.
The proof of the characterization theorem relies in a crucial way on the following technical
result.
Lemma 4.1. Let K ∈ Knc. Suppose that the support function of K is smooth. Then there exist zonoids Y1, Y2 such that
$$K + Y_1 = Y_2.$$
⊓⊔
Idea of proof. We begin by observing that the support function of a centered line segment in Rn with endpoints u, −u is given by
$$h_u : S^{n-1}\to\mathbb{R},\qquad h_u(x) := |\langle u, x\rangle|.$$
Thus, any function g : Sn−1 → R which is a linear combination of the hu's with positive coefficients is the support function of a centered zonotope. In particular, any uniform limit of such linear combinations will be the support function of a zonoid. For example, a function
g : Sn−1 → R of the form
$$g(x) = \int_{S^{n-1}} |\langle u, x\rangle|\,f(u)\,dS_u,$$
with f : Sn−1 → [0, ∞) a continuous, symmetric function, is the support function of a zonoid.
The characterization theorem
41
Let us denote by Ceven(Sn−1) the space of continuous, symmetric (even) functions Sn−1 → R. We obtain a linear map
$$C : C_{\mathrm{even}}(S^{n-1})\to C_{\mathrm{even}}(S^{n-1}),\qquad (Cf)(x) := \int_{S^{n-1}} |\langle u,x\rangle|\,f(u)\,dS_u,$$
called the cosine transform. Thus we can rephrase the last remark by saying that the cosine transform of a nonnegative continuous, even function on Sn−1 is the support function of a zonoid.
Note that if f is continuous and even, then we can write it as the difference of two continuous, even, nonnegative functions:
$$f = f_+ - f_-,\qquad f_+ = \frac{1}{2}(f + |f|),\qquad f_- = \frac{1}{2}(|f| - f).$$
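To make the cosine transform concrete, here is a small numerical sketch (our illustration, not from the text) on the circle S^1, where dS_u is arc length. For f ≡ 1 the transform is the constant ∫|cos| = 4, the support function of a disk of radius 4 (a zonoid), and the splitting f = f₊ − f₋ passes through the transform by linearity, which is exactly the mechanism the proof exploits.

```python
import math

def cosine_transform(f, x_angle, n_nodes=20000):
    # (Cf)(x) = ∫_{S^1} |<u, x>| f(u) dS_u, computed by a Riemann sum
    # over u = (cos t, sin t); x is the unit vector at angle x_angle.
    total = 0.0
    h = 2 * math.pi / n_nodes
    for i in range(n_nodes):
        t = i * h
        dot = math.cos(t) * math.cos(x_angle) + math.sin(t) * math.sin(x_angle)
        total += abs(dot) * f(t) * h
    return total

# For f ≡ 1, (Cf)(x) = ∫ |cos| = 4 at every x: the support function of a disk.
print(cosine_transform(lambda t: 1.0, 0.0))        # ≈ 4.0

# An even f split as f = f_plus - f_minus with f_plus, f_minus ≥ 0;
# by linearity Cf = Cf_plus - Cf_minus, as used in the proof.
f = lambda t: math.cos(2 * t)                      # even on the sphere: f(t + pi) = f(t)
f_plus = lambda t: 0.5 * (f(t) + abs(f(t)))
f_minus = lambda t: 0.5 * (abs(f(t)) - f(t))
lhs = cosine_transform(f, 0.7)
rhs = cosine_transform(f_plus, 0.7) - cosine_transform(f_minus, 0.7)
print(abs(lhs - rhs) < 1e-9)                       # True
```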
Thus, if h is the support function of some C ∈ Knc and h = Cf, then
$$h + Cf_- = Cf_+.$$
Note that Cf− is the support function of a zonoid Y1 and Cf+ is the support function of a zonoid Y2, and thus
$$h + Cf_- = Cf_+ \iff C + Y_1 = Y_2.$$
We deduce that any continuous function which is in the image of the cosine transform is the
support function of a set C ∈ Kn satisfying the conclusions of the lemma.
The punch line is now that any smooth, even function f : Sn−1 → R is the cosine transform of a smooth, even function on the sphere.
This existence statement is a nontrivial result, and its proof is based on a very detailed
knowledge of the action of the cosine transform on the harmonic polynomials on Sn−1 . For
a proof along these lines we refer to [Gr, Chap. 3]. For an alternate approach which reduces
the problem to a similar statement about Fourier transforms we refer to [Gon, Prop. 6.3]. ⊓
⊔
Remark 4.2. The connection between the classification of valuations and the spectral properties of the cosine transform is philosophically the key reason why the classification turns out to be simple. The essence of the classification is invariant-theoretic, and it roughly says that invariance under the group of motions dramatically cuts down the number of possible choices. ⊓⊔
Theorem 4.3. Suppose that µ ∈ Val(n) is a simple valuation. If µ is even, that is
µ(K) = µ(−K), ∀K ∈ Kn ,
then
µ([0, 1]n ) = 0 ⇐⇒ µ is identically zero on Kn .
Proof. Clearly only the implication =⇒ is nontrivial. To prove it we employ a simple
strategy. By cleverly cutting and pasting and taking limits we produce more and more sets
S ∈ Polycon(n) such that µ(S) = 0 until we get what we want. We will argue by induction
on the dimension n of the ambient space.
The result is true for n = 1 because in this case K1 = Par(1) and we can invoke the
volume theorem for Pix(1), Theorem 2.19.
So, now suppose that n > 1 and the theorem holds for valuations on Kn−1. Note that for every positive integer k the cube [0, 1]^n decomposes into k^n cubes congruent to [0, 1/k]^n and overlapping on sets of dimension < n. We deduce that
$$\mu([0,1/k]^n) = 0.$$
Since any box whose edges have rational lengths decomposes into cubes congruent to [0, 1/k]^n for some positive integer k, we deduce that µ vanishes on such boxes. By continuity we deduce that it vanishes on all boxes P ∈ Par(n). Using rotation invariance, we conclude that µ(C) = 0 for all boxes C with sides parallel to any other orthonormal frame.
We now consider orthogonal cylinders with convex bases, i.e. sets congruent to convex sets of the form C × [0, r] ∈ Kn, C ∈ Kn−1. For every pair of real numbers a < b define a valuation τ = τa,b on Kn−1 by
$$\tau(K) = \mu(K\times[a,b]),$$
for all K ∈ Kn−1 . Note that [0, 1]n−1 × [a, b] ∈ Par(n) so that
τ ([0, 1]n−1 ) = µ([0, 1]n−1 × [a, b]) = 0.
Then τ satisfies the inductive hypotheses, so τ = 0. Hence µ vanishes on all orthogonal cylinders with convex bases.
Now suppose that M is a prism, with congruent top and bottom faces parallel to the hyperplane xn = 0, but whose cylindrical boundary is not orthogonal to xn = 0 but rather meets it at some constant angle. See the top of Figure 4.
FIGURE 4. Cutting and pasting a prism.
We now cut M into two pieces M1 , M2 by a hyperplane orthogonal to the cylindrical
boundary of M. Using translation invariance, slide M1 and M2 around and glue them together along the original top and bottom faces. (See Figure 4). We then have a right cylinder
C. Thus,
µ(M) = µ(M1 ) + µ(M2 ) = µ(C) = 0.
Note that we must actually be careful with this cutting and repasting. It is possible that M
is too “fat”, preventing us from slicing M with a hyperplane orthogonal to the cylindrical
axes and fully contained in each of them. If this problem occurs, we can easily remedy it
by subdividing the top and bottom of M into sufficiently small convex pieces. Using the
simplicity of µ, we can then consider each separately.
Now let P be a convex polytope having facets P1 , . . . , Pm , and corresponding outward unit
normal vectors u1 , . . . , um . Let v ∈ Rn and let v̄ denote the straight line segment connecting
the point v to the origin. Without loss of generality, suppose that P1 , . . . , Pj are exactly those
facets of P such that hui , vi > 0 for all 1 ≤ i ≤ j. We can thus express the Minkowski sum
P + v̄ as
$$P + \bar v = P\cup\Big(\bigcup_{i=1}^{j}(P_i + \bar v)\Big),$$
where each term in the above union is either disjoint from the others or intersects in dimension less than n (see Figure 5).
FIGURE 5. Smearing a convex polytope.
This simply results from the fact that P + v̄ is the ’smearing’ of P in the direction of v.
Hence, since µ is simple,
$$\mu(P+\bar v) = \mu(P) + \sum_{i=1}^{j}\mu(P_i+\bar v).$$
But now each term Pi + v̄ is the smearing of the facet Pi in the direction of v, making Pi + v̄
a prism, so that
µ(P + v̄) = µ(P ),
for all convex polytopes P and line segments v̄. By translation invariance v̄ could be any
segment in the space.
We deduce iteratively that for all convex polytopes P and all zonotopes Z we have
$$\mu(Z) = 0\quad\text{and}\quad \mu(P+Z) = \mu(P).$$
By continuity,
$$\mu(Y) = 0\quad\text{and}\quad \mu(K+Y) = \mu(K),$$
for all K ∈ Kn and all zonoids Y.
Now suppose that K ∈ Knc has a smooth support function. Then by our lemma, there are zonoids Y1, Y2 such that K + Y1 = Y2. Thus, we now have that
$$\mu(K) = \mu(K+Y_1) = \mu(Y_2) = 0.$$
Since any centered convex body can be approximated by a sequence of centered convex sets
with smooth support functions, by continuity of µ, µ is zero on all centered convex, compact
sets.
Now let ∆ be an n-dimensional simplex, with one vertex at the origin. Let u1 , . . . , un
denote the other vertices of ∆, and let P be the parallelepiped spanned by the vectors
u1 , . . . , un . Let v = u1 + · · · + un . Let ξ1 be the hyperplane passing through the points
u1 , . . . , un and let ξ2 be the hyperplane passing through the points v − u1 , . . . , v − un . Finally, denote by Q the set of all points of P lying between the hyperplanes ξ1 and ξ2 .
We now write
P = ∆ ∪ Q ∪ (−∆ + v),
where each term in the union intersects any other in dimension less than n. P and Q are
centered, so
0 = µ(P ) = µ(∆) + µ(Q) + µ(−∆ + v) = µ(∆) + µ(−∆).
Thus, µ(∆) = −µ(−∆). Yet by assumption µ(∆) = µ(−∆). Thus, µ(∆) = 0.
Now since any convex polytope can be expressed as the finite union of simplices each of
which intersects with any other in dimension less than n, we have that µ(P ) = 0 for all
convex polytopes P . Since convex polytopes are dense in Kn , we have that µ is zero on Kn ,
which is what we wanted.
⊓⊔
We now immediately get the following result.
Theorem 4.4. Suppose that µ is a continuous simple valuation on Kn that is translation and
rotation invariant. Then there exists some c ∈ R such that µ(K) + µ(−K) = cµn (K) for all
K ∈ Kn .
Proof. For K ∈ Kn , define
η(K) = µ(K) + µ(−K) − 2µ([0, 1]n)µn (K).
Then η satisfies the conditions of the previous theorem, so that η is zero on Kn . Thus,
µ(K) + µ(−K) = cµn (K),
where c = 2µ([0, 1]n ).
⊓⊔
§4.2. The Volume Theorem.
Theorem 4.5 (Sah). Let ∆ be an n-dimensional simplex. There exist convex polytopes P1, . . . , Pm such that ∆ = P1 ∪ · · · ∪ Pm, where each term of the union intersects any other in dimension at most n − 1, and where each of the polytopes Pi is symmetric under a reflection across a hyperplane, i.e. for each Pi there exists a hyperplane such that Pi is symmetric when reflected across it.
Proof. Let x0 , . . . , xn be the vertices of ∆ and let ∆i be the facet of ∆ opposite to xi . Let
z be the center of the inscribed sphere of ∆ and let zi be the foot of the perpendicular line
from z to the facet ∆i . For all i < j, let Ai,j denote the convex hull of z, zi , zj and the face
∆i ∩ ∆j (see Figure 6).
FIGURE 6. Cutting a simplex into symmetric polytopes.
Then
$$\Delta = \bigcup_{0\le i<j\le n} A_{i,j},$$
where the distinct terms Ai,j of this union, by construction, intersect in dimension at most n − 1. Each Ai,j is symmetric under reflection across the (n − 1)-dimensional hyperplane determined by z and the face ∆i ∩ ∆j. We can then relabel the Ai,j as P1, . . . , Pm and ∆ = P1 ∪ · · · ∪ Pm as desired.
⊓⊔
Theorem 4.6 (Volume Theorem for Polycon(n)). Suppose that µ is a continuous rigid motion invariant simple valuation on Kn . Then there exists c ∈ R such that µ(K) = cµn (K),
for all K ∈ Kn . Note that we can extend continuous valuations on Kn to continuous valuations on Polycon(n), so the theorem also holds replacing Kn with Polycon(n).
Proof. Since µ is rigid motion invariant, continuous, and simple, by Theorem 4.4 there exists a ∈ R such that µ(K) + µ(−K) = aµn(K) for all K ∈ Kn. Let ∆ be an n-simplex in Rn. Then
$$\mu(\Delta) + \mu(-\Delta) = a\,\mu_n(\Delta).$$
Suppose n is even, meaning the dimension of Rn is even. Then ∆ and −∆ differ by a rotation: this is clearly true in R2, and we can rotate ∆ to −∆ in each "orthogonal" plane of Rn, so ∆ and −∆ differ by the composition of these rotations, i.e. a rotation. Then
$$\mu(\Delta) = \mu(-\Delta) = \frac{a}{2}\,\mu_n(\Delta).$$
Now suppose that n is odd. By Theorem 4.5, there exist polytopes P1, . . . , Pm such that ∆ = P1 ∪ · · · ∪ Pm, dim Pi ∩ Pj ≤ n − 1, and each Pi is symmetric under a reflection across a hyperplane. Thus, we can transform Pi into −Pi by a series of rotations followed by the reflection across that hyperplane. Then µ(Pi) = µ(−Pi) and
$$\mu(-\Delta) = \sum_{i=1}^{m}\mu(-P_i) = \sum_{i=1}^{m}\mu(P_i) = \mu(\Delta).$$
Then for all n-simplices ∆, µ(∆) = (a/2)µn(∆).
Let c = a/2 and suppose P is a convex polytope in Rn. Then P is a finite union of simplices, i.e. P = ∆1 ∪ · · · ∪ ∆m with dim ∆i ∩ ∆j < n. Then
$$\mu(P) = \mu(\Delta_1) + \cdots + \mu(\Delta_m) = c\,\mu_n(\Delta_1) + \cdots + c\,\mu_n(\Delta_m) = c\,\mu_n(P).$$
The set of convex polytopes is dense in Kn, so since µ is continuous, µ(K) = cµn(K) for all K ∈ Kn.
⊓⊔
§4.3. Intrinsic Valuations. We would like to show that for each k ≥ 0 the valuations µ̂nk determine an intrinsic valuation, meaning that for all N ≥ n and P ∈ Polycon(n), µ̂nk(P) = µ̂Nk(P). For uniformity, we set
$$\hat\mu_k^n = 0,\quad \forall k > n.$$
Theorem 4.7. For every k ≥ 0 the sequence
$$(\hat\mu_k^n)\in\prod_{n\ge 0}\mathrm{Val}(n)$$
defines an intrinsic valuation, i.e., an element of
$$\hat\mu_k\in\mathrm{Val}(\infty) = \operatorname{limproj}_n\mathrm{Val}(n).$$
Proof. First of all let us fix k. Since there is no danger of confusion we will write µ̂n instead of µ̂nk. Consider the statement
$$\hat\mu^n(P) = \hat\mu^\ell(P),\quad \forall P\in\mathrm{Polycon}(\ell). \tag{$P_{n,\ell}$}$$
Note that ℓ ≤ n. We want to prove by induction over n that the statement
$$P_{n,\ell}\ \text{holds for any}\ \ell\le n \tag{$S_n$}$$
is true for any n. The statement S0 is trivially true. We will prove that
$$S_0,\cdots,S_{n-1}\Longrightarrow S_n.$$
Thus, we know that Pm,ℓ is true for every ℓ ≤ m < n, and we want to prove that
$$P_{n,\ell}\ \text{is true for any}\ \ell. \tag{$T_\ell$}$$
To prove Tℓ we argue by induction on ℓ (n is fixed).
The result is obviously true for ℓ = 0, 1 since in these cases Pix = Polycon. Thus, we assume that Pn,ℓ is true and we will show that Pn,ℓ+1 is true. In other words, we know that
$$\hat\mu^n(P) = \hat\mu^\ell(P),\quad \forall P\in\mathrm{Polycon}(\ell), \tag{4.1}$$
and we want to prove that
$$\hat\mu^n(P) = \hat\mu^{\ell+1}(P),\quad \forall P\in\mathrm{Polycon}(\ell+1).$$
Clearly the last statement is trivially true if ℓ + 1 = n, so we may assume ℓ + 1 < n.
We denote by ν the restriction of µ̂n to Polycon(ℓ + 1). Then ν restricts to µ̂ℓ on Polycon(ℓ), since µ̂n restricts to µ̂ℓ by (4.1).
Since ℓ + 1 < n we deduce from Sℓ+1 that µ̂ℓ+1 restricts to µ̂ℓ on Polycon(ℓ). Then ν − µ̂ℓ+1 vanishes on Polycon(ℓ), meaning it is a continuous invariant simple valuation on Polycon(ℓ + 1). Theorem 4.6 implies that there exists c ∈ R such that
$$\nu - \hat\mu_{\ell+1} = c\,\hat\mu^{\ell+1}_{\ell+1}\ \text{on Polycon}(\ell+1).$$
On the other hand, ν = µ̂ℓ+1 on Pix(ℓ + 1), meaning c = 0 and thus ν = µ̂ℓ+1.
⊓⊔
Based on this result, the superscript of µ̂nk does not matter. Therefore, we define
$$\hat\mu_k := \hat\mu_k^n.$$
At this point, we are able to characterize the continuous, invariant valuations on Polycon(n),
as we did for Pix(n) in Theorem 2.20.
Theorem 4.8 (Hadwiger's Characterization Theorem). The valuations µ̂0, µ̂1, . . . , µ̂n form a basis for the vector space Val(n).
Proof. Let µ ∈ Val(n) and let H be a hyperplane in Rn. The restriction of µ to H is then a continuous invariant valuation on H. Note that the choice of H does not matter, since µ is rigid motion invariant and all hyperplanes can be obtained from a single hyperplane by rigid motions. Recall that Polycon(1) = Pix(1), and by Theorem 2.20 the statement is true for n = 1. We take this as our base case and proceed by induction.
By the inductive hypothesis, for every polyconvex set A in H we have
$$\mu(A) = \sum_{i=0}^{n-1} c_i\,\hat\mu_i(A).$$
Thus,
$$\mu - \sum_{i=0}^{n-1} c_i\,\hat\mu_i$$
is a simple valuation in Val(n). Then, by Theorem 4.6,
$$\mu - \sum_{i=0}^{n-1} c_i\,\hat\mu_i = c_n\,\hat\mu_n.$$
We move the sum to the other side and obtain
$$\mu = \sum_{i=0}^{n} c_i\,\hat\mu_i.$$
⊓⊔
In a definition analogous to that on Pix(n), a valuation µ on Polycon(n) is homogeneous of degree k if for all α ≥ 0 and all K ∈ Polycon(n), µ(αK) = α^k µ(K).
Corollary 4.9. Let µ ∈ Val(n) be homogeneous of degree k. Then there exists c ∈ R such that µ(K) = c µ̂k(K) for all K ∈ Polycon(n).
Proof. By Theorem 4.8, we know that there exist c0, · · · , cn ∈ R such that
$$\mu = \sum_{i=0}^{n} c_i\,\hat\mu_i.$$
Suppose P = [0, 1]^n, the unit cube in Rn. Fix α ≥ 0. Then
$$\mu(\alpha P) = \sum_{i=0}^{n} c_i\,\hat\mu_i(\alpha P) = \sum_{i=0}^{n}\alpha^i c_i\,\hat\mu_i(P) = \sum_{i=0}^{n}\binom{n}{i} c_i\,\alpha^i,$$
since µ̂i(P) = µi(P) = C(n, i); indeed, for P ∈ Par(n), µi is the i-th elementary symmetric function of the side lengths. At the same time,
$$\mu(\alpha P) = \alpha^k\,\mu(P) = \sum_{i=0}^{n} c_i\,\hat\mu_i(P)\,\alpha^k = \sum_{i=0}^{n}\binom{n}{i} c_i\,\alpha^k.$$
We compare the coefficients of these two polynomials in α and conclude that ci = 0 for i ≠ k. Thus, µ = ck µ̂k.
⊓⊔
5. Applications
It is payoff time! Having thus proven Hadwiger’s Characterization Theorem, we now seek
to use it in extracting as much interesting geometric information as possible. Let us recall
that
$$\mu_k^n = R_{n,n-k}\,\mu_0\quad\text{and}\quad \hat\mu_k^n = C_k^n\,\mu_k^n,$$
where the normalization constants Ckn are chosen so that
$$\hat\mu_k^n([0,1]^n) = \mu_k([0,1]^n) = \binom{n}{k}.$$
Above, µk is the intrinsic valuation on the lattice of pixelations defined in Section 2. We have seen in the last section that the sequence {µ̂nk}n≥k defines an intrinsic valuation and thus we will write µ̂k instead of µ̂nk.
§5.1. The Tube Formula.
Let K, L ∈ Kn , and let α ≥ 0. We have the Minkowski sum
K + αL = {x + αy | x ∈ K and y ∈ L}.
Proposition 5.1 (Smearing Formula). Let ū denote the straight line segment connecting u and the origin o. For K ∈ Kn and any unit vector u ∈ Rn,
$$\mu_n(K+\epsilon\bar u) = \mu_n(K) + \epsilon\,\mu_{n-1}(K|u^\perp).$$
Proof. Let L = K + ǫū. We will compute the volume of L by integrating the lengths of one-dimensional slices of L along lines ℓx parallel to u and passing through points x ∈ u⊥, that is,
$$\mu_n(L) = \int_{u^\perp}\mu_1(L\cap\ell_x)\,dx.$$
Since µ1(L ∩ ℓx) = µ1(K ∩ ℓx) + ǫ for all x ∈ K|u⊥, and zero for x ∉ K|u⊥, we have that
$$\mu_n(L) = \int_{u^\perp}\mu_1(L\cap\ell_x)\,dx = \int_{K|u^\perp}\big(\mu_1(K\cap\ell_x)+\epsilon\big)\,dx = \mu_n(K) + \epsilon\,\mu_{n-1}(K|u^\perp).$$
⊓⊔
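As a quick sanity check of the smearing formula (ours, not in the text): for an axis-aligned rectangle K = [0,a] × [0,b] in R² smeared along u = e1, the set K + ǫū is just the rectangle [0, a+ǫ] × [0, b], while the projection K|u⊥ is the segment [0, b].

```python
import math

def smeared_area(a, b, eps):
    # µ_2(K + εū), computed directly: K + εū = [0, a+ε] x [0, b]
    return (a + eps) * b

def formula(a, b, eps):
    # µ_2(K) + ε µ_1(K|u⊥), with µ_1(K|u⊥) = b
    return a * b + eps * b

for a, b, eps in [(2.0, 3.0, 0.5), (1.0, 1.0, 0.1), (4.0, 0.5, 2.0)]:
    assert math.isclose(smeared_area(a, b, eps), formula(a, b, eps))
print("smearing formula verified on rectangles")
```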
Let Cn denote the n-dimensional unit cube. Recall that for 0 ≤ i ≤ n,
$$\mu_i(C_n) = \binom{n}{i}.$$
Proposition 5.2. For ǫ ≥ 0,
$$\mu_n(C_n + \epsilon B_n) = \sum_{i=0}^{n}\mu_i(C_n)\,\omega_{n-i}\,\epsilon^{n-i} = \sum_{i=0}^{n}\binom{n}{i}\omega_{n-i}\,\epsilon^{n-i} = \sum_{i=0}^{n}\hat\mu_i^n(C_n)\,\omega_{n-i}\,\epsilon^{n-i}.$$
Proof. Let u1, . . . , un denote the standard orthonormal basis of Rn, and ūi the line segment connecting ui to the origin. By Proposition 5.1 we have
$$\mu_n(B_n + r\bar u_1) = \omega_n + r\,\omega_{n-1} = \sum_{i=0}^{1}\binom{1}{i}\omega_{n-i}\,r^{i}$$
for all n ≥ 1.
Having proven the base case, we proceed by induction. Suppose that
$$\mu_n(B_n + r\bar u_1 + \cdots + r\bar u_k) = \sum_{i=0}^{k}\binom{k}{i}\omega_{n-i}\,r^{i},$$
for some 1 ≤ k < n. Then
$$\mu_n(B_n + r\bar u_1 + \cdots + r\bar u_{k+1}) = \mu_n(B_n + r\bar u_1 + \cdots + r\bar u_k) + r\,\mu_{n-1}\big((B_n + r\bar u_1 + \cdots + r\bar u_k)\,\big|\,u_{k+1}^\perp\big)$$
$$= \sum_{i=0}^{k}\binom{k}{i}\omega_{n-i}\,r^{i} + r\,\mu_{n-1}(B_{n-1} + r\bar u_1 + \cdots + r\bar u_k) = \sum_{i=0}^{k}\binom{k}{i}\omega_{n-i}\,r^{i} + \sum_{i=0}^{k}\binom{k}{i}\omega_{n-i-1}\,r^{i+1}$$
$$= \sum_{i=0}^{k+1}\left[\binom{k}{i}+\binom{k}{i-1}\right]\omega_{n-i}\,r^{i} = \sum_{i=0}^{k+1}\binom{k+1}{i}\omega_{n-i}\,r^{i}.$$
Thus, by induction we have that
$$\mu_n(B_n + r\bar u_1 + \cdots + r\bar u_n) = \sum_{i=0}^{n}\binom{n}{i}\omega_{n-i}\,r^{i}.$$
Noting that µn is homogeneous of degree n, we have that
$$\mu_n(B_n + r\bar u_1 + \cdots + r\bar u_n) = \mu_n(B_n + rC_n) = \mu_n\Big(r\Big(\frac{1}{r}B_n + C_n\Big)\Big) = r^n\,\mu_n\Big(\frac{1}{r}B_n + C_n\Big).$$
Letting ǫ = 1/r, the previous two equalities give us
$$\mu_n(C_n + \epsilon B_n) = \epsilon^n\sum_{i=0}^{n}\binom{n}{i}\omega_{n-i}\,\frac{1}{\epsilon^{i}} = \sum_{i=0}^{n}\binom{n}{i}\omega_{n-i}\,\epsilon^{n-i}.$$
⊓⊔
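For n = 2 this can be checked against elementary geometry: C2 + ǫB2 decomposes into the unit square, four ǫ-wide rectangles along the edges, and four quarter-disks of radius ǫ at the corners. A short sketch (ours):

```python
import math

def omega(k):
    # volume of the k-dimensional unit ball
    return math.pi ** (k / 2) / math.gamma(k / 2 + 1)

def tube_poly(n, eps):
    # right-hand side of Proposition 5.2: sum_i C(n,i) ω_{n-i} ε^{n-i}
    return sum(math.comb(n, i) * omega(n - i) * eps ** (n - i) for i in range(n + 1))

def square_tube_area(eps):
    # direct decomposition of C_2 + εB_2: the unit square, four ε-strips
    # along the edges, and four quarter-disks of radius ε at the corners
    return 1.0 + 4 * eps + math.pi * eps ** 2

for eps in [0.0, 0.25, 1.0, 3.0]:
    assert math.isclose(tube_poly(2, eps), square_tube_area(eps))
print("tube formula checked for the unit square")
```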
We can now prove the following remarkable result, first proved by Jakob Steiner in dimensions ≤ 3 in the 19th century.
Theorem 5.3 (Tube Formula). For K ∈ Kn and ǫ ≥ 0,
$$\mu_n(K + \epsilon B_n) = \sum_{i=0}^{n}\hat\mu_i(K)\,\omega_{n-i}\,\epsilon^{n-i}. \tag{5.1}$$
Proof. Let η(K) = µn(K + Bn), for K ∈ Kn. Because Bn ∈ Kn, for K, L ∈ Kn we have that
$$(K\cup L) + B_n = (K+B_n)\cup(L+B_n),$$
and clearly
$$(K+B_n)\cap(L+B_n) = (K\cap L) + B_n,$$
so we have
$$\eta(K\cup L) = \mu_n\big((K\cup L)+B_n\big) = \mu_n\big((K+B_n)\cup(L+B_n)\big)$$
$$= \mu_n(K+B_n) + \mu_n(L+B_n) - \mu_n\big((K+B_n)\cap(L+B_n)\big) = \eta(K) + \eta(L) - \eta(K\cap L).$$
Thus η is a valuation on Kn. The continuity and invariance of µn, and the symmetry of Bn under Euclidean transformations, show that η is a convex-continuous invariant valuation on Kn, which according to the Extension Theorem 3.17 extends to a valuation in Val(n). By Hadwiger's Characterization Theorem we have that
$$\eta(K) = \sum_{i=0}^{n} c_i\,\hat\mu_i(K),$$
for all K ∈ Kn. Therefore, for ǫ > 0 we have that
$$\mu_n(K+\epsilon B_n) = \epsilon^n\,\mu_n\Big(\frac{1}{\epsilon}K + B_n\Big) = \epsilon^n\,\eta\Big(\frac{1}{\epsilon}K\Big) = \epsilon^n\sum_{i=0}^{n} c_i\,\hat\mu_i(K)\,\frac{1}{\epsilon^{i}} = \sum_{i=0}^{n} c_i\,\hat\mu_i(K)\,\epsilon^{n-i}.$$
In particular, letting K = Cn in the previous equation and comparing with the results of the previous Proposition, we find that
$$\sum_{i=0}^{n} c_i\,\mu_i(C_n)\,\epsilon^{n-i} = \sum_{i=0}^{n}\hat\mu_i(C_n)\,\omega_{n-i}\,\epsilon^{n-i},$$
i.e. ci = ωn−i.
⊓⊔
Theorem 5.4 (The intrinsic volumes of the unit ball). For 0 ≤ i ≤ n,
$$\hat\mu_i(B_n) = \binom{n}{i}\frac{\omega_n}{\omega_{n-i}} = \begin{bmatrix}n\\ i\end{bmatrix}\omega_i,$$
where the bracketed coefficient is the flag coefficient [n]!/([i]![n − i]!).
Proof. Applying the tube formula to K = Bn we obtain
$$\sum_{i=0}^{n}\hat\mu_i(B_n)\,\omega_{n-i}\,\epsilon^{n-i} = \mu_n(B_n + \epsilon B_n) = \mu_n\big((1+\epsilon)B_n\big) = (1+\epsilon)^n\,\mu_n(B_n) = \sum_{i=0}^{n}\binom{n}{i}\omega_n\,\epsilon^{n-i},$$
for all ǫ > 0. Comparing coefficients of powers of ǫ, we uncover the desired result.
⊓⊔
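Both expressions for µ̂i(Bn) can be compared numerically. The sketch below (ours) uses ω_k = π^{k/2}/Γ(k/2+1), the bracket [m] = mω_m/(2ω_{m−1}) introduced earlier, and [m]! = [m][m−1]···[1]:

```python
import math

omega = lambda k: math.pi ** (k / 2) / math.gamma(k / 2 + 1)
bracket = lambda m: m * omega(m) / (2 * omega(m - 1))   # [m]

def bracket_fact(m):
    # [m]! = [m][m-1]...[1], with [0]! = 1
    out = 1.0
    for j in range(1, m + 1):
        out *= bracket(j)
    return out

def flag(n, i):
    # flag coefficient [n]! / ([i]! [n-i]!)
    return bracket_fact(n) / (bracket_fact(i) * bracket_fact(n - i))

# The two expressions for µ̂_i(B_n) in Theorem 5.4 agree.
for n in range(1, 8):
    for i in range(n + 1):
        lhs = math.comb(n, i) * omega(n) / omega(n - i)
        rhs = flag(n, i) * omega(i)
        assert math.isclose(lhs, rhs)
print("both expressions for the intrinsic volumes of B_n agree")
```

For example µ̂1(B2) = π (half the circumference of the unit circle) and µ̂1(B3) = 4.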
§5.2. Universal Normalization and Crofton's Formulæ. It turns out that the intrinsic volumes of the unit ball are the final piece of the puzzle in understanding the constants Ckn in the formula µ̂nk = Ckn µnk, where
$$\mu_k^n = R_{n,n-k}\,\mu_0\quad\text{and}\quad \hat\mu_k^n = \mu_k\ \text{on}\ \mathrm{Pix}(n).$$
We have the following very pleasant surprise.
Theorem 5.5. For 0 ≤ k ≤ n and K ∈ Kn,
$$\hat\mu_k = \hat\mu_k^n = \mu_k^n.$$
In other words, no more hats!
Proof. From Theorem 5.4 we deduce
$$C_k^n\,\mu_k^n(B_n) = \hat\mu_k^n(B_n) = \binom{n}{k}\frac{\omega_n}{\omega_{n-k}} = \begin{bmatrix}n\\ k\end{bmatrix}\omega_k.$$
On the other hand, (3.8) shows that
$$\mu_k^n(B_n) = \begin{bmatrix}n\\ k\end{bmatrix}\omega_k.$$
Equating the two, we readily see that Ckn = 1 for all n, k ≥ 0. In fact, since both µ̂k and µk are intrinsic, we have shown that µ̂k = µk.
⊓⊔
We have thus proved that the intrinsic volumes µ
b k on Polycon(n) are exactly the Radon
Transforms Rn,n−k µ0 , a result to which we feel obliged to “tip our hats”.
The following theorem is a generalization of the previous one.
Theorem 5.6 (Crofton’s formula). For 0 ≤ i, j ≤ n and K ∈ Polycon(n),
i+j
µi+j (K)
(Rn,n−i µj )(K) =
j
Proof. From the section before regarding Radon transforms, we already know that Rn,n−i µj ∈ Vali+j(n). By Corollary 4.9 there exists a c ∈ R such that Rn,n−i µj = cµi+j. To obtain this c we simply evaluate (Rn,n−i µj)(Bn):
$$(R_{n,n-i}\,\mu_j)(B_n) = \big(R_{n,n-i}(R_{n-i,n-i-j}\,\mu_0)\big)(B_n)$$
$$= \int_{\mathrm{Graff}(n,n-i)}\left(\int_{\mathrm{Graff}(\bar V,\,n-i-j)}\mu_0(B_n\cap W)\,d\lambda^{\,n-i}_{\,n-i-j}(W)\right)d\lambda^{n}_{n-i}(\bar V)$$
$$= \int_{\mathrm{Gr}(n,n-i)}\int_{\mathrm{Gr}(V,\,n-i-j)}\int_{L^{\perp}}\mu_0\big(B_n\cap(L+p)\big)\,dp\;d\nu^{\,n-i}_{\,n-i-j}(L)\,d\nu^{n}_{n-i}(V)$$
$$= \int_{\mathrm{Gr}(n,n-i)}\int_{\mathrm{Gr}(V,\,n-i-j)}\left(\int_{B_{i+j}}dp\right)d\nu^{\,n-i}_{\,n-i-j}(L)\,d\nu^{n}_{n-i}(V) = \begin{bmatrix}n\\ n-i\end{bmatrix}\begin{bmatrix}n-i\\ n-i-j\end{bmatrix}\omega_{i+j},$$
since µ0(Bn ∩ (L + p)) = 1 precisely when p lies in the (i+j)-dimensional ball Bi+j = Bn|L⊥.
But we also have by Theorem 5.4 that
$$c\,\mu_{i+j}(B_n) = c\begin{bmatrix}n\\ i+j\end{bmatrix}\omega_{i+j},$$
and thus
$$c = \begin{bmatrix}n\\ i+j\end{bmatrix}^{-1}\begin{bmatrix}n\\ n-i\end{bmatrix}\begin{bmatrix}n-i\\ n-i-j\end{bmatrix} = \begin{bmatrix}i+j\\ j\end{bmatrix}.$$
⊓⊔
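The collapse of the three flag coefficients into the single coefficient with i+j over j can be verified numerically. The sketch below (ours) implements the brackets and checks all small n, i, j:

```python
import math

omega = lambda k: math.pi ** (k / 2) / math.gamma(k / 2 + 1)
bracket = lambda m: m * omega(m) / (2 * omega(m - 1))   # [m]

def bfact(m):
    # [m]! = [m][m-1]...[1], with [0]! = 1
    out = 1.0
    for j in range(1, m + 1):
        out *= bracket(j)
    return out

def flag(n, k):
    # flag coefficient [n]! / ([k]! [n-k]!)
    return bfact(n) / (bfact(k) * bfact(n - k))

# Crofton's constant: the three flag coefficients from the proof collapse to
# a single one, independent of the ambient dimension n.
for n in range(1, 8):
    for i in range(n + 1):
        for j in range(n - i + 1):
            c = flag(n, n - i) * flag(n - i, n - i - j) / flag(n, i + j)
            assert math.isclose(c, flag(i + j, j))
print("Crofton constant equals the flag coefficient in i+j, j")
```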
Remark 5.7. If we define a weighted Radon transform
$$R^*_w : \mathrm{Val}(k)\to\mathrm{Val}(k+w),\qquad R^*_w := [w]!\,R_{k+w,k},$$
and we set µ*i := [i]!µi, then we can rewrite Crofton's formulæ in the more symmetric form
$$\mu^*_{i+j} = R^*_j\,\mu^*_i.$$
Note that the µ*i are also intrinsic valuations, i.e. they are independent of the dimension of the ambient space.
⊓⊔
§5.3. The Mean Projection Formula Revisited. The conclusion of Theorem 5.5 allows
us to restate Cauchy’s surface area formula as follows:
Theorem 5.8 (The mean projection formula). For 0 ≤ k ≤ n and C ∈ Kn,
$$(R_{n,n-k}\,\mu_0)(C) = \mu_k(C) = \int_{\mathrm{Gr}(n,k)}\mu_k(C|V_0)\,d\nu_k^{n}(V_0).$$
⊓⊔
It turns out that Cauchy’s mean value formula is a special case of a more general formula.
Theorem 5.9 (Kubota). For 0 ≤ k ≤ l ≤ n and C ∈ Kn,
$$\int_{\mathrm{Gr}(n,l)}\mu_k(C|V)\,d\nu_l^{n}(V) = \begin{bmatrix}n-k\\ l-k\end{bmatrix}\mu_k(C).$$
Proof. Define a valuation ν on Kn by
$$\nu(C) = \int_{\mathrm{Gr}(n,l)}\mu_k(C|V)\,d\nu_l^{n}(V).$$
Arguing as in the proof of Proposition 3.30 we deduce that ν ∈ Valk(n). By Corollary 4.9 there exists a constant c ∈ R such that ν = cµk. As before, we compute the constant c by considering what happens when C = Bn:
$$c\,\mu_k(B_n) = \nu(B_n) = \int_{\mathrm{Gr}(n,l)}\mu_k(B_n|V)\,d\nu_l^{n}(V) = \mu_k(B_l)\begin{bmatrix}n\\ l\end{bmatrix}.$$
Therefore
$$c = \frac{\mu_k(B_l)}{\mu_k(B_n)}\begin{bmatrix}n\\ l\end{bmatrix} = \frac{1}{\omega_k}\begin{bmatrix}l\\ k\end{bmatrix}\omega_k\begin{bmatrix}n\\ k\end{bmatrix}^{-1}\begin{bmatrix}n\\ l\end{bmatrix} = \frac{[l]!}{[k]!\,[l-k]!}\cdot\frac{[k]!\,[n-k]!}{[n]!}\cdot\frac{[n]!}{[l]!\,[n-l]!} = \frac{[n-k]!}{[n-l]!\,[l-k]!} = \begin{bmatrix}n-k\\ l-k\end{bmatrix}.$$
⊓⊔

§5.4. Product Formulæ.
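Kubota's constant can be cross-checked numerically. The sketch below (ours) implements µk(Bm) from Theorem 5.4 together with the flag coefficients, and verifies the identity for all small n, l, k:

```python
import math

omega = lambda k: math.pi ** (k / 2) / math.gamma(k / 2 + 1)
bracket = lambda m: m * omega(m) / (2 * omega(m - 1))   # [m]

def bfact(m):
    out = 1.0
    for j in range(1, m + 1):
        out *= bracket(j)
    return out

def flag(n, k):
    # flag coefficient [n]! / ([k]! [n-k]!)
    return bfact(n) / (bfact(k) * bfact(n - k))

def mu_ball(k, m):
    # intrinsic volume of the unit ball B_m, Theorem 5.4
    return math.comb(m, k) * omega(m) / omega(m - k)

# Kubota's constant: µ_k(B_l)/µ_k(B_n) · flag(n, l) = flag(n-k, l-k)
for n in range(1, 8):
    for l in range(n + 1):
        for k in range(l + 1):
            lhs = mu_ball(k, l) / mu_ball(k, n) * flag(n, l)
            assert math.isclose(lhs, flag(n - k, l - k))
print("Kubota constant verified")
```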
We now examine the intrinsic volumes on products.
Theorem 5.10. Let 0 ≤ k ≤ n. Let K ∈ Polycon(k) and L ∈ Polycon(n − k). Then
$$\mu_i(K\times L) = \sum_{r+s=i}\mu_r(K)\,\mu_s(L).$$
Proof. The set function µi (K × L) is a continuous valuation in each of the variables K and
L when the other is fixed. Also, note that every rigid motion φ ∈ Ek is the restriction of
some rigid motion Φ ∈ En that restricts to the identity on the orthogonal complement Rn−k .
Thus, by the invariance of µi,
$$\mu_i(\varphi(K)\times L) = \mu_i\big(\Phi(K\times L)\big) = \mu_i(K\times L).$$
The characterization theorem tells us that for every L there exist constants cr(L), depending on L, such that
$$\mu_i(K\times L) = \sum_{r=0}^{k} c_r(L)\,\mu_r(K),$$
for all K ∈ Kk.
Repeating the same argument with fixed K and varying L, we see that the cr (L) are
continuous invariant valuations on Kn−k , so we apply the characterization theorem again,
collect terms appropriately, and ultimately conclude that there are constants crs ∈ R such
that
$$\mu_i(K\times L) = \sum_{r=0}^{k}\sum_{s=0}^{n-k} c_{rs}\,\mu_r(K)\,\mu_s(L),$$
for all K ∈ Kk and L ∈ Kn−k.
Now we determine the constants using Cm, the unit m-dimensional cube. Let α, β ≥ 0. Then
$$\mu_i(\alpha C_k\times\beta C_{n-k}) = \sum_{r=0}^{k}\sum_{s=0}^{n-k} c_{rs}\,\mu_r(C_k)\,\mu_s(C_{n-k})\,\alpha^r\beta^s = \sum_{r=0}^{k}\sum_{s=0}^{n-k} c_{rs}\binom{k}{r}\binom{n-k}{s}\alpha^r\beta^s.$$
On the other hand, we already know how to compute this from the product formula in Theorem 2.17:
$$\mu_i(\alpha C_k\times\beta C_{n-k}) = \sum_{r+s=i}\binom{k}{r}\binom{n-k}{s}\alpha^r\beta^s.$$
Comparing these two, we see that for 0 ≤ r ≤ k and 0 ≤ s ≤ n − k, we have crs = 1 if
r + s = i and crs = 0 otherwise.
Hence,
$$\mu_i(K\times L) = \sum_{r+s=i}\mu_r(K)\,\mu_s(L).$$
⊓⊔
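On cubes the product formula reduces to the Vandermonde identity C(n, i) = Σ_{r+s=i} C(k, r)C(n−k, s), which the following sketch (ours) confirms:

```python
import math

# µ_i(C_k × C_{n-k}) computed two ways for cubes: via µ_i(C_m) = C(m, i)
# and via the product formula, i.e. the Vandermonde identity
# C(n, i) = Σ_{r+s=i} C(k, r) C(n-k, s).
for n in range(10):
    for k in range(n + 1):
        for i in range(n + 1):
            product_side = sum(
                math.comb(k, r) * math.comb(n - k, i - r)
                for r in range(i + 1)
            )
            assert product_side == math.comb(n, i)
print("product formula verified on cubes")
```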
As a result of this theorem, we get the following corollary.
Corollary 5.11. Suppose that µ is a convex-continuous invariant valuation on Polycon(n)
such that
µ(K × L) = µ(K)µ(L),
for all K ∈ Polycon(k), L ∈ Polycon(n − k), where 0 ≤ k ≤ n. Then either µ = 0 or there exists c ∈ R such that
$$\mu = \mu_0 + c\,\mu_1 + c^2\mu_2 + \cdots + c^n\mu_n.$$
Conversely, if µ is a valuation satisfying
$$\mu = \mu_0 + c\,\mu_1 + c^2\mu_2 + \cdots + c^n\mu_n,$$
then µ satisfies
$$\mu(K\times L) = \mu(K)\,\mu(L).$$
Proof. By the characterization theorem, there are real constants ci such that
µ = c0 µ 0 + c1 µ 1 + · · · + cn µ n .
Let Ck denote the unit cube in Rk. Then for all α, β ≥ 0, the multiplication condition implies that
$$\mu(\alpha C_k\times\beta C_{n-k}) = \mu(\alpha C_k)\,\mu(\beta C_{n-k}) = \left(\sum_{r=0}^{k} c_r\,\alpha^r\mu_r(C_k)\right)\left(\sum_{s=0}^{n-k} c_s\,\beta^s\mu_s(C_{n-k})\right) = \sum_{r=0}^{k}\sum_{s=0}^{n-k} c_r c_s\,\mu_r(C_k)\,\mu_s(C_{n-k})\,\alpha^r\beta^s.$$
On the other hand, by the previous theorem,
$$\mu(\alpha C_k\times\beta C_{n-k}) = \sum_{i=0}^{n} c_i\,\mu_i(\alpha C_k\times\beta C_{n-k}) = \sum_{i=0}^{n} c_i\sum_{r+s=i}\mu_r(C_k)\,\mu_s(C_{n-k})\,\alpha^r\beta^s = \sum_{r=0}^{k}\sum_{s=0}^{n-k} c_{r+s}\,\mu_r(C_k)\,\mu_s(C_{n-k})\,\alpha^r\beta^s.$$
Comparing these polynomials in α, β, we see that the coefficients must be equal. Hence,
cr cs = cr+s, for all 0 ≤ r ≤ k and 0 ≤ s ≤ n − k. Thus, c0 = c0+0 = c0², so c0 is either 0 or 1. If c0 = 0, then cr = cr+0 = cr c0 = 0, making µ zero. If c0 = 1, then set c := c1. Then, for r > 0, cr = c1+···+1 = c1^r = c^r, and we are done. The converse follows by simply applying the previous theorem.
⊓⊔
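The mechanism of the proof can be watched in action on cubes, where both sides of the multiplication condition collapse to (1 + cα)^k (1 + cβ)^{n−k}. A sketch (ours), for k = 2 and n − k = 3:

```python
import math

def mu_c_on_scaled_cube(c, alpha, k):
    # µ(αC_k) for µ = Σ_i c^i µ_i, using µ_i(C_k) = C(k, i) and homogeneity:
    # Σ_i (cα)^i C(k, i) = (1 + cα)^k
    return sum((c * alpha) ** i * math.comb(k, i) for i in range(k + 1))

def mu_c_on_product(c, alpha, beta, k, m):
    # µ(αC_k × βC_m) expanded via the product formula of Theorem 5.10
    total = 0.0
    for i in range(k + m + 1):
        mu_i = sum(math.comb(k, r) * alpha ** r * math.comb(m, i - r) * beta ** (i - r)
                   for r in range(max(0, i - m), min(i, k) + 1))
        total += c ** i * mu_i
    return total

for c in [0.0, 1.0, 0.5, 2.0]:
    for alpha, beta in [(1.0, 1.0), (0.5, 2.0), (3.0, 0.25)]:
        lhs = mu_c_on_product(c, alpha, beta, 2, 3)
        rhs = mu_c_on_scaled_cube(c, alpha, 2) * mu_c_on_scaled_cube(c, beta, 3)
        assert math.isclose(lhs, rhs)
print("mu_0 + c mu_1 + ... + c^n mu_n is multiplicative on cubes")
```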
§5.5. Computing Intrinsic Volumes. Now that we have a better understanding of the
intrinsic measures µi , it would be nice if we could compute the intrinsic volumes of various
types of sets. We already know how to compute them for pixelations. We want to explain how
to determine them for convex polytopes. In a later section we will explain how to compute
them for arbitrary polytopes using triangulations and the Möbius inversion formula.
Suppose P ∈ Kn is a convex polytope. Using the tube formula (5.1) we deduce that for all r > 0
$$\mu_n(P + rB) = \sum_{i=0}^{n}\mu_i(P)\,\omega_{n-i}\,r^{n-i},$$
where the index of Bn has been dropped for notational convenience. Suppose x ∈
P + rB. Because P is compact and convex, there exists a unique point xP ∈ P such that
|x − xP | ≤ |x − y|, ∀y ∈ P.
If x ∈ P, then clearly x = xP. If, on the other hand, x ∉ P, then xP ∈ ∂P. Furthermore, if x ∉ P and y ∈ ∂P, then y = xP ⇐⇒ x − y ⊥ H, where H is a support plane of P with y ∈ P ∩ H. That is to say, the line connecting x and xP must be perpendicular to the boundary.
Denote by Pi (r) the set of all x ∈ P + rB such that xP lies in the relative interior of an
i-face of P . If x is in the relative interior of P then it is contained in an n-face of P , and
consequently so is xP = x. If x is on the boundary then it is in one of P ’s faces (and again,
so is xP = x). Finally, if x is in neither the interior of P nor the boundary, then as stated
before there is a unique xP on the boundary and hence in an i-face of P .
$$P + rB = \bigcup_{i=0}^{n} P_i(r).$$
The uniqueness of xP implies that this union is disjoint.
Denote by Fi(P) the set of all i-faces of P. Let Q be a face of P. For y ∈ relint(Q), let v be any outward unit normal to the boundary of P at y. Then we set
$$M(Q, r) = \{\,y + \delta v \mid y\in\operatorname{relint}(Q),\ 0\le\delta\le r\,\}.$$
FIGURE 7. A decomposition of P + rB2.
Proposition 5.12.
$$P_i(r) = \bigcup_{Q\in F_i(P)} M(Q, r).$$
Proof. First, suppose x ∈ Pi(r). From before, xP ∈ relint(Q) for some Q ∈ Fi(P). If x = xP, then x ∈ relint(Q), and therefore x ∈ M(Q, r) for any r. On the other hand, if x ≠ xP, then let v denote the unit vector parallel to the line between x and xP. From the above discussion we know that v is perpendicular to the boundary of P at xP, and therefore x = xP + δv, where δ = |x − xP|. Thus we have that Pi(r) ⊂ ⋃_{Q∈Fi(P)} M(Q, r).
On the other hand, suppose x ∈ M(Q, r). Then x = y + δv for some y ∈ relint(Q). If δ = 0 then x = y and is in Pi(r). If δ ≠ 0, then x − y = δv is perpendicular to the boundary of P at y. Therefore y = xP and x ∈ Pi(r). Therefore Pi(r) ⊃ ⋃_{Q∈Fi(P)} M(Q, r), and thus we have equality.
⊓⊔
For A ⊂ Rn , let the affine hull of A be the intersection of all planes in Rn containing A.
We then denote by A⊥ the set of all vectors in Rn orthogonal to the affine hull of A.
Let Q ∈ Fi(P). Then Q⊥ has dimension n − i. Because of this, M(Q, r) is like M(Q, 1), except that it has been "stretched" by a factor of r in n − i dimensions. Thus, µn(M(Q, r)) = r^{n−i} µn(M(Q, 1)). This fact, coupled with the fact that the M(Q, r) are disjoint, gives us that
$$\mu_n(P_i(r)) = \mu_n\Big(\bigcup_{Q\in F_i(P)} M(Q,r)\Big) = r^{n-i}\,\mu_n\Big(\bigcup_{Q\in F_i(P)} M(Q,1)\Big) = r^{n-i}\,\mu_n(P_i(1)).$$
Furthermore,
$$\mu_n(P+rB) = \mu_n\Big(\bigcup_{i=0}^{n}P_i(r)\Big) = \sum_{i=0}^{n}\mu_n(P_i(r)) = \sum_{i=0}^{n}\mu_n(P_i(1))\,r^{n-i},$$
for all r > 0. Comparing with the tube formula (5.1), we see that
$$\sum_{i=0}^{n}\mu_i(P)\,\omega_{n-i}\,r^{n-i} = \sum_{i=0}^{n}\mu_n(P_i(1))\,r^{n-i}.$$
By comparing coefficients of powers of r, we get that
$$\mu_i(P) = \frac{\mu_n(P_i(1))}{\omega_{n-i}}. \tag{5.2}$$
FIGURE 8. M(z_i, 1) from Example 5.13.
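Formula (5.2) can be tried out on the unit square C2: the regions P2(1), P1(1), P0(1) are the square itself, four 1×1 edge rectangles, and four quarter unit disks, of areas 1, 4, and π respectively. A sketch (ours):

```python
import math

omega = lambda k: math.pi ** (k / 2) / math.gamma(k / 2 + 1)

# Areas of the pieces of C_2 + B_2 sorted by the dimension of the nearest face:
area_P2, area_P1, area_P0 = 1.0, 4.0, math.pi

# Formula (5.2): µ_i(P) = µ_n(P_i(1)) / ω_{n-i}
mu2 = area_P2 / omega(0)
mu1 = area_P1 / omega(1)
mu0 = area_P0 / omega(2)

# Sanity check: these match µ_i(C_2) = C(2, i), i.e. 1, 2, 1.
assert math.isclose(mu2, 1.0) and math.isclose(mu1, 2.0) and math.isclose(mu0, 1.0)
print("intrinsic volumes of the unit square recovered from (5.2)")
```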
Example 5.13. Consider a general convex polyhedron P in R3 with edges z1, . . . , zm. Each edge zi is formed by two facets of P; denote these two faces Qi1 and Qi2, and denote their outward unit normals by ui1 and ui2, respectively. We then have that the volume of the section M(zi, 1) is given by
$$\mu_3(M(z_i,1)) = \frac{\arccos(u_{i1}\cdot u_{i2})}{2\pi}\,\big(\pi(1)^2\big)\,\mu_1(z_i) = \frac{\mu_1(z_i)\,\theta_i}{2},$$
where θi = arccos(ui1 · ui2). Refer to Figure 8 for an example. Therefore
$$\mu_1(P) = \frac{\mu_3(P_1(1))}{\omega_2} = \frac{1}{2\pi}\sum_{i=1}^{m}\mu_1(z_i)\,\theta_i.$$
This remarkable formula has immediate applications to specific polyhedra of interest. For example, take an orthogonal parallelotope P with side lengths a1, a2, a3. Clearly there are four edges of length ai for each i, and the angle θi = π/2 at each edge. It is evident then that
$$\mu_1(P) = \frac{1}{2\pi}\sum_{i=1}^{3} 4a_i\cdot\frac{\pi}{2} = a_1 + a_2 + a_3.$$
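The edge formula is straightforward to implement for boxes. The sketch below (ours; mu1_box is a hypothetical helper, not from the text) sums length times angle over the twelve edges of the box with side lengths a1, a2, a3:

```python
import math

def mu1_box(a1, a2, a3):
    # µ_1 of a box via the edge formula of Example 5.13:
    # µ_1(P) = (1/2π) Σ_i length(z_i) θ_i, where θ_i is the angle between
    # the outward normals of the two facets meeting along the edge z_i.
    total = 0.0
    for a in (a1, a2, a3):
        # four parallel edges of length a, each with normal angle θ = π/2
        total += 4 * a * (math.pi / 2)
    return total / (2 * math.pi)

assert math.isclose(mu1_box(1.0, 2.0, 3.0), 1.0 + 2.0 + 3.0)
assert math.isclose(mu1_box(1.0, 1.0, 1.0), 3.0)   # unit cube: µ_1(C_3) = C(3,1) = 3
print(mu1_box(2.0, 3.0, 5.0))                      # ≈ 10.0
```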
6. Simplicial complexes and polytopes
At this point, we feel that we understand Val(n) to our satisfaction. We would now like to focus much more concretely on computing these valuations for a special class of polyconvex sets, called polytopes. These are finite unions of convex polytopes. The inclusion-exclusion principle suggests dissecting these into smaller and simpler pieces. Technically, this dissection operation is called triangulation, and the simpler pieces are called simplices. In this section, we formalize the notion of triangulation and investigate some of its combinatorial features.
§6.1. Combinatorial Simplicial Complexes. A combinatorial simplicial complex (CSC
for brevity) with vertex set in V is a collection K of non-empty subsets of the finite set V
such that
τ ∈ K =⇒ σ ∈ K, ∀σ ⊂ τ, σ ≠ ∅.
The elements of K are called the (open) faces of the simplicial complex. If σ is a face then
we define
dim σ := #σ − 1.
A subset σ of a face τ is also called a face of τ . If additionally
#σ = #τ − 1
then we say that σ is an (open) facet of τ . If dim σ = 0 then we say that σ is a vertex of K.
We denote by VK the collection of vertices of K. In other words, the vertices are exactly the
singletons belonging to K. For simplicity we will write v ∈ VK instead of {v} ∈ VK .
Two CSCs K and K′ are called isomorphic, and we write this K ≅ K′, if there exists a bijection from the set of vertices of K to the set of vertices of K′ which induces a bijection between the sets of faces of K and K′.
We denote by Σ(V ) ⊂ P (P (V )) the collection of all CSC’s with vertices in V . Observe
that
K1 , K2 ∈ Σ(V ) =⇒ K1 ∩ K2 , K1 ∪ K2 ∈ Σ(V )
so that Σ(V ) is a sublattice of P (P (V )).
Example 6.1. For every subset S ⊂ V define ∆_S ∈ Σ(V) to be the CSC consisting of all non-empty subsets of S. ∆_S is called the elementary simplex with vertex set S. Observe that
$$\Delta_{S_1}\cap\Delta_{S_2} = \Delta_{S_1\cap S_2}.$$
For every CSC K ∈ Σ(V) the simplices ∆_σ, σ ∈ K, are called the closed faces of K. We deduce that the collection
$$\Delta(V) := \{\,\Delta_S \mid S\subset V\,\}$$
is a generating set of the lattice Σ(V ).
Example 6.2. Let K ∈ Σ(V). For every nonnegative integer m we define
$$K_m := \{\,\sigma\in K \mid \dim\sigma\le m\,\}.$$
K_m is a CSC called the m-skeleton of K. Observe that
$$K_0\subset K_1\subset\cdots,\qquad K = \bigcup_{m\ge 0}K_m.$$ ⊓⊔
Definition 6.3. The Euler characteristic of a CSC K ∈ Σ(V) is the integer
$$\chi(K) := \sum_{\sigma\in K}(-1)^{\dim\sigma}.$$ ⊓⊔
Proposition 6.4. The map χ : Σ(V) → Z, K ↦ χ(K), is a valuation.
Proof. For every K ∈ Σ(V) and every nonnegative integer m we denote by F_m(K) the collection of its m-dimensional faces, i.e.
$$F_m(K) := \{\,\sigma\in K \mid \dim\sigma = m\,\}.$$
We then have
$$\chi(K) = \sum_{m\ge 0}(-1)^m\#F_m(K).$$
Then
$$F_m(K_1\cup K_2) = F_m(K_1)\cup F_m(K_2),\qquad F_m(K_1\cap K_2) = F_m(K_1)\cap F_m(K_2),$$
and the claim of the proposition follows from the inclusion-exclusion property of the cardinality. ⊓⊔
Example 6.5. Suppose ∆_S is a simplex, and set n := #S. Then
$$\chi(\Delta_S) = \sum_{m\ge 0}(-1)^m\,\#\{\sigma\subset S \mid \#\sigma = m+1\} = \sum_{m\ge 0}(-1)^m\binom{n}{m+1} = 1.$$ ⊓⊔
§6.2. The Nerve of a Family of Sets.
Definition 6.6. Fix a finite set V. Consider a family
$$\mathcal A := \{A_v \mid v\in V\}$$
of subsets of a set X parameterized by V.
(a) For every σ ⊂ V we set
$$A_\sigma := \bigcap_{v\in\sigma}A_v.$$
(b) The nerve of the family A is the collection
$$N(\mathcal A) := \{\,\sigma\subset V \mid A_\sigma\neq\emptyset\,\}.$$ ⊓⊔
Clearly the nerve of a family A = {Av | v ∈ V } is a combinatorial simplicial complex
with vertices in V .
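The nerve is also directly computable. The following sketch (the dictionary encoding and the name `nerve` are ours) builds the nerve of three overlapping sets; for consecutive overlapping intervals the nerve is a path, with Euler characteristic 1, as Corollary 6.10 below predicts for the union.

```python
from itertools import combinations

def nerve(A):
    """Nerve of a family {A_v}: all non-empty sigma subset of V whose
    common intersection A_sigma is non-empty."""
    V = list(A)
    N = set()
    for k in range(1, len(V) + 1):
        for sigma in combinations(V, k):
            if set.intersection(*(A[v] for v in sigma)):
                N.add(frozenset(sigma))
    return N

# Three intervals on a line; only consecutive ones overlap.
A = {'a': set(range(0, 5)), 'b': set(range(4, 9)), 'c': set(range(8, 13))}
N = nerve(A)
```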
Definition 6.7. Suppose V is a finite set and K ∈ Σ(V). For every σ ∈ K we define the star of σ in K to be the subset
$$\mathrm{St}_K(\sigma) := \{\,\tau\in K \mid \tau\supseteq\sigma\,\}.$$
Observe that
$$K = \bigcup_{v\in V_K}\mathrm{St}_K(v).$$ ⊓⊔
Proposition 6.8. For every CSC K ∈ Σ(V) we have the equalities
$$\mathrm{St}_K(\sigma) = \bigcap_{v\in\sigma}\mathrm{St}_K(v),\ \forall\sigma\in K,\qquad K = \bigcup_{\sigma\in K}\Delta_\sigma.$$
In particular, we deduce that K is the nerve of the family of stars at vertices
$$\mathrm{St}_K := \{\,\mathrm{St}_K(v) \mid v\in V_K\,\}.$$ ⊓⊔
We can now rephrase the inclusion-exclusion formula using the language of nerves.
Corollary 6.9. Suppose L is a lattice of subsets of a set X and µ : L → R is a valuation into a commutative ring R with 1. Then for every finite family A = {A_v}_{v∈V} of sets in L we have
$$\mu\Big(\bigcup_{v\in V}A_v\Big) = \sum_{\sigma\in N(\mathcal A)}(-1)^{\dim\sigma}\mu(A_\sigma).$$
In particular, for the universal valuation A ↦ I_A we have
$$I_{\bigcup_v A_v} = \sum_{\sigma\in N(\mathcal A)}(-1)^{\dim\sigma}I_{A_\sigma}.$$ ⊓⊔
Corollary 6.10. Suppose C = {C_s}_{s∈S} is a finite family of compact convex subsets of R^n. Denote by N(C) its nerve and set
$$C := \bigcup_{s\in S}C_s.$$
Then
$$\mu_0(C) = \chi\big(N(\mathcal C)\big).$$
In particular, if
$$\bigcap_{s\in S}C_s \neq \emptyset,$$
then µ0(C) = 1, where µ0 : Polycon(n) → Z denotes the Euler characteristic.
Proof. Let
$$C_\sigma = \bigcap_{s\in\sigma}C_s,\quad\forall\sigma\subset S.$$
Note that if σ ∈ N(C) then C_σ is nonempty, compact and convex, and thus µ0(C_σ) = 1. Hence
$$\mu_0(C) = \sum_{\sigma\in N(\mathcal C)}(-1)^{\dim\sigma}\mu_0(C_\sigma) = \chi\big(N(\mathcal C)\big).$$
If C_σ ≠ ∅ for all σ ⊂ S we deduce that N(C) is the elementary simplex ∆_S. In particular
$$\mu_0(C) = \chi(\Delta_S) = 1.$$ ⊓⊔
§6.3. Geometric Realizations of Combinatorial Simplicial Complexes. Recall that a
convex polytope in Rn is the convex hull of a finite collection of points, or equivalently, a
compact set which is the intersection of finitely many half-spaces.
Definition 6.11. (a) A subset
$$V = \{v_0, v_1, \ldots, v_k\}$$
of k + 1 points of a real vector space E is called affinely independent if the collection of vectors
$$\overrightarrow{v_0v_1},\ \ldots,\ \overrightarrow{v_0v_k}$$
is linearly independent.
(b) Let 0 ≤ k ≤ n be nonnegative integers. An affine k-simplex in R^n is the convex hull of a collection of (k + 1) affinely independent points {v_0, …, v_k}, called the vertices of the simplex. We will denote it by [v_0, v_1, …, v_k].
(c) If S is an affine simplex in R^n, then we denote by V(S) the set of its vertices and we write dim S := #V(S) − 1. An affine simplex T is said to be a face of S, and we write this as T ≺ S, if V(T) ⊂ V(S). ⊓⊔
Example 6.12. A set of three points is affinely independent if they are not collinear. A set of four points is affinely independent if they are not coplanar. If v0 ≠ v1 then [v0, v1] is the line segment connecting v0 to v1. If v0, v1, v2 are not collinear then [v0, v1, v2] is the triangle spanned by v0, v1, v2 (see Figure 9). ⊓⊔
FIGURE 9. A 1-simplex and a 2-simplex.
Proposition 6.13. Suppose [v0, …, vk] is an affine k-simplex in R^n. Then for every point p ∈ [v0, v1, …, vk] there exist real numbers t0, t1, …, tk uniquely determined by the requirements
$$t_i\in[0,1],\ \forall i = 0,1,\ldots,k,\qquad \sum_{i=0}^k t_i = 1,\qquad p = \sum_{i=0}^k t_i v_i.$$
Proof. The existence follows from the fact that [v0, …, vk] is the convex hull of the finite set {v0, …, vk}. To prove uniqueness, suppose
$$\sum_{i=0}^k s_i v_i = \sum_{i=0}^k t_i v_i.$$
Note that v0 = (s0 + ··· + sk)v0 = (t0 + ··· + tk)v0, so that
$$\sum_{i=1}^k s_i(v_i - v_0) = \sum_{i=1}^k t_i(v_i - v_0),\qquad v_i - v_0 = \overrightarrow{v_0v_i}.$$
From the linear independence of the vectors \overrightarrow{v_0v_i} we deduce s_i = t_i, ∀i = 1, …, k. Finally
$$s_0 = 1 - (s_1 + \cdots + s_k) = 1 - (t_1 + \cdots + t_k) = t_0.$$ ⊓⊔
The numbers (t_i) postulated by Proposition 6.13 are called the barycentric coordinates of the point p. We will denote them by (t_i(p)). In particular, the barycenter of a k-simplex σ is the unique point b_σ whose barycentric coordinates are all equal, i.e.
$$t_0(b_\sigma) = \cdots = t_k(b_\sigma) = \frac{1}{k+1}.$$
The relative interior of ∆[v0, v1, …, vk] is the convex set
$$\Delta(v_0, v_1, \ldots, v_k) := \{\, p\in\Delta[v_0,\ldots,v_k] \mid t_i(p) > 0,\ \forall i = 0,1,\ldots,k \,\}.$$
Definition 6.14. An affine simplicial complex in R^n (ASC for brevity) is a pair (C, T) satisfying the following conditions.
(a) C is a compact subset of R^n.
(b) T is a triangulation of C, i.e. a finite collection of affine simplices satisfying the conditions
(b1) If T ∈ T and S is a face of T then S ∈ T.
(b2) If T0, T1 ∈ T then T0 ∩ T1 is a face of both T0 and T1.
(b3) C is the union of all the affine simplices in T. ⊓⊔
F IGURE 10. An ASC in the plane.
Remark 6.15. One can prove that any polytope in R^n admits a triangulation. ⊓⊔
In Figure 10 we depicted an ASC in the plane consisting of six simplices of dimension 0,
nine simplices of dimension 1 and two simplices of dimension 2.
To an ASC (C, T) we can associate in a natural way a CSC K(C, T) as follows. The set
of vertices V is the collection of 0-dimensional simplices in T. Then
$$K = \{\, V(S) \mid S\in T \,\}, \tag{6.1}$$
where we recall that V (S) denotes the set of vertices of the affine simplex S ∈ T.
Definition 6.16. Suppose K is a CSC with vertex set V. Then an affine realization of K is an ASC (C, T) such that
$$K \cong K(C, T).$$ ⊓⊔
Proposition 6.17. Let V be a finite set. Then any CSC K ∈ Σ(V) admits an affine realization.
Proof. Denote by R^V the space of functions f : V → R. This is a vector space whose dimension equals the cardinality of V. It has a natural basis determined by the Dirac functions δ_u : V → R, u ∈ V, where
$$\delta_u(v) = \begin{cases}1 & v = u\\ 0 & v\neq u\end{cases}.$$
The set {δ_u | u ∈ V} ⊂ R^V is affinely independent.
For every σ ∈ K we denote by [σ] the affine simplex in R^V spanned by the set {δ_u | u ∈ σ}. Now define
$$C = \bigcup_{\sigma\in K}[\sigma],\qquad T = \{\,[\sigma] \mid \sigma\in K\,\}.$$
Then (C, T) is an ASC and by construction K = K(C, T). ⊓⊔
Suppose (C, T) is an ASC and denote by K the associated CSC. Denote by f_k = f_k(C, T) the number of k-dimensional simplices in T. According to the Euler–Schläfli–Poincaré formula we have
$$\mu_0(C) = \sum_k (-1)^k f_k(C, T).$$
The sum in the right-hand side is precisely χ(K), the Euler characteristic of K. For this reason, we will use the symbols χ and µ0 interchangeably to denote the Euler characteristic of a polytope.
7. The Möbius inversion formula
§7.1. A Linear Algebra Interlude. As in the previous section, for any finite set A we denote by R^A the space of functions A → R. This is a vector space and has a canonical basis given by the Dirac functions
$$\delta_a : A\to\mathbb R,\quad a\in A.$$
Note that to any linear transformation T : R^A → R^B we can associate a function
$$S = S_T : B\times A\to\mathbb R$$
uniquely determined by the equalities
$$T\delta_a = \sum_{b\in B}S(b,a)\,\delta_b,\quad a\in A. \tag{7.1}$$
We say that S_T is the scattering matrix of T. If f : A → R is a function then
$$f = \sum_{a\in A}f(a)\,\delta_a,$$
and we deduce that the function Tf : B → R is given by the equalities
$$Tf = \sum_{a\in A,\,b'\in B}S(b',a)f(a)\,\delta_{b'} \iff (Tf)(b) = \sum_{a\in A}S(b,a)f(a),\ \forall b\in B. \tag{7.2}$$
Conversely, to any map S : B × A → R we can associate a linear transformation T : R^A → R^B whose action on the δ_a is determined by (7.2).
Lemma 7.1. Suppose A0, A1, A2 are finite sets and
$$T_0 : \mathbb R^{A_0}\to\mathbb R^{A_1},\qquad T_1 : \mathbb R^{A_1}\to\mathbb R^{A_2}$$
are linear transformations with scattering matrices
$$S_0 : A_1\times A_0\to\mathbb R,\qquad S_1 : A_2\times A_1\to\mathbb R.$$
Then the scattering matrix of T1 ◦ T0 : R^{A_0} → R^{A_2} is the map S1 ∗ S0 : A2 × A0 → R defined by
$$S_1 * S_0(a_2, a_0) = \sum_{a_1\in A_1}S_1(a_2,a_1)S_0(a_1,a_0).$$
Proof. Denote by S the scattering matrix of T1 ◦ T0, so that
$$(T_1\circ T_0)\delta_{a_0} = \sum_{a_2}S(a_2,a_0)\,\delta_{a_2}.$$
On the other hand,
$$(T_1\circ T_0)\delta_{a_0} = T_1\Big(\sum_{a_1}S_0(a_1,a_0)\,\delta_{a_1}\Big) = \sum_{a_2}\Big(\sum_{a_1}S_1(a_2,a_1)S_0(a_1,a_0)\Big)\delta_{a_2} = \sum_{a_2}(S_1*S_0)(a_2,a_0)\,\delta_{a_2}.$$
Comparing the above equalities we obtain the sought-for identity
$$S(a_2,a_0) = S_1 * S_0(a_2,a_0).$$ ⊓⊔
Lemma 7.2. Suppose that A is a finite set equipped with a strict partial order < (we use the convention that a ≮ a, ∀a ∈ A). Suppose
$$S : A\times A\to\mathbb R$$
is strictly upper triangular, i.e. it satisfies the condition
$$S(a_0, a_1)\neq 0 \Longrightarrow a_0 < a_1.$$
Then the linear transformation T : R^A → R^A determined by S is nilpotent, i.e. there exists a positive integer n such that T^n = 0.
Proof. Let a, b ∈ A and denote by S_n the scattering matrix of T^n. By repeated application of the previous lemma,
$$S_n(a,b) = \sum_{c_1,\ldots,c_{n-1}\in A}S(a,c_1)S(c_1,c_2)\cdots S(c_{n-1},b).$$
Since S is strictly upper triangular, the above sum contains only 'monomials' of the form
$$S(a,c_1)S(c_1,c_2)\cdots S(c_{n-1},b),\qquad a < c_1 < c_2 < \cdots < c_{n-1} < b.$$
Thus we really have that
$$S_n(a,b) = \sum_{a<c_1<c_2<\cdots<c_{n-1}<b}S(a,c_1)S(c_1,c_2)\cdots S(c_{n-1},b).$$
Since a ≮ a, if we take n > #A there is no such strictly increasing sequence, so S_n = 0, meaning that T^n = 0. ⊓⊔
Lemma 7.3. Suppose E is a finite dimensional vector space and T : E → E is a nilpotent linear transformation. Then the linear map 1_E + T is invertible and
$$(1_E + T)^{-1} = \sum_{k\ge 0}(-1)^k T^k.$$
Proof. Since T is nilpotent, there is some positive integer m such that T^m = 0. We may even assume that m is odd. Thus
$$1_E = 1_E + T^m = (1_E + T)(1_E - T + T^2 - \cdots + T^{m-1}) = (1_E + T)\Big(\sum_{k\ge 0}(-1)^k T^k\Big).$$ ⊓⊔
§7.2. The Möbius Function of a Simplicial Complex. Suppose V is a finite set and K ∈ Σ(V) is a CSC. The zeta function of K is the map
$$\zeta = \zeta_K : K\times K\to\mathbb Z,\qquad \zeta(\sigma,\tau) = \begin{cases}1 & \sigma\subseteq\tau\\ 0 & \text{otherwise}\end{cases}.$$
Define ξ = ξ_K : K × K → Z by
$$\xi(\sigma,\tau) = \begin{cases}1 & \sigma\subsetneq\tau\\ 0 & \text{otherwise}\end{cases}.$$
ζ defines a linear map Z : R^K → R^K and ξ defines a linear map Ξ : R^K → R^K. Note that for every function f : K → R we have
$$(Zf)(\sigma) = \sum_{\tau\in K}\zeta(\sigma,\tau)f(\tau) = \sum_{\tau\supseteq\sigma}f(\tau). \tag{7.3}$$
Proposition 7.4. (a) Z = 1 + Ξ.
(b) The linear map Ξ is nilpotent. In particular, Z is invertible.
(c) Let µ : K × K → R be the scattering matrix of Z^{−1}. Then µ satisfies the following conditions.
$$\mu(\sigma,\tau)\neq 0 \Longrightarrow \sigma\subseteq\tau, \tag{7.4a}$$
$$\mu(\sigma,\sigma) = 1,\ \forall\sigma\in K, \tag{7.4b}$$
$$\mu(\sigma,\tau) = -\sum_{\sigma\subseteq\varphi\subsetneq\tau}\mu(\sigma,\varphi). \tag{7.4c}$$
Proof. First, we denote by δ(σ, τ) the delta function, that is,
$$\delta(\sigma,\tau) = \begin{cases}1 & \sigma = \tau\\ 0 & \sigma\neq\tau\end{cases}.$$
(a) From (7.3) we have
$$(Zf)(\sigma) = \sum_{\tau\in K}\zeta(\sigma,\tau)f(\tau) = \sum_{\tau\supseteq\sigma}f(\tau) = f(\sigma) + \sum_{\tau\supsetneq\sigma}f(\tau) = \sum_{\tau\in K}\delta(\sigma,\tau)f(\tau) + \sum_{\tau\in K}\xi(\sigma,\tau)f(\tau) = (1f)(\sigma) + (\Xi f)(\sigma) = \big((1+\Xi)f\big)(\sigma).$$
(b) We have from before that Ξ is defined by a scattering matrix ξ which is strictly upper triangular with respect to the strict order ⊊, i.e.
$$\xi(\sigma,\tau)\neq 0 \Longrightarrow \sigma\subsetneq\tau.$$
By Lemma 7.2, Ξ is nilpotent. Thus, because Z = 1 + Ξ, by Lemma 7.3 we have that Z is invertible.
(c) From Lemma 7.3 we have that
$$Z^{-1} = (1+\Xi)^{-1} = \sum_{k\ge 0}(-1)^k\Xi^k = 1 - \Xi + \Xi^2 - \cdots.$$
Let us examine Ξ^i. By an iteration of Lemma 7.1, the scattering matrix of Ξ^i is
$$\xi_i(\sigma,\tau) = \sum_{\varphi_1,\ldots,\varphi_{i-1}\in K}\xi(\sigma,\varphi_1)\xi(\varphi_1,\varphi_2)\cdots\xi(\varphi_{i-1},\tau).$$
This scattering matrix is, like ξ, strictly upper triangular; in particular ξ_i(σ, σ) = 0 for every σ. Furthermore, the identity operator 1 has scattering matrix δ(σ, τ), which equals 1 on the diagonal and 0 elsewhere. We have therefore proven equations (7.4a) and (7.4b). To prove the third, we convert the equality 1 = Z^{−1}Z into a statement about scattering matrices,
$$\delta = \mu * \zeta.$$
By Lemma 7.1 we therefore have
$$\delta(\sigma,\tau) = \sum_{\varphi\in K}\mu(\sigma,\varphi)\zeta(\varphi,\tau).$$
Thus, for σ ≠ τ, we have
$$0 = \sum_{\varphi\in K}\mu(\sigma,\varphi)\zeta(\varphi,\tau) = \sum_{\sigma\subseteq\varphi\subseteq\tau}\mu(\sigma,\varphi),$$
and thus
$$\mu(\sigma,\tau) = -\sum_{\sigma\subseteq\varphi\subsetneq\tau}\mu(\sigma,\varphi).$$ ⊓⊔
Definition 7.5. (a) Suppose A, B are two subsets of a set V. A chain of length k ≥ 0 between A and B is a sequence of subsets
$$A = C_0 \subsetneq C_1 \subsetneq \cdots \subsetneq C_k = B.$$
We denote by c_k(A, B) the number of chains of length k between A and B. We set
$$c_0(A,B) = \begin{cases}0 & B\neq A\\ 1 & B = A\end{cases},$$
and
$$c(A,B) = \sum_k (-1)^k c_k(A,B).$$
(b) We set
$$c_k(n) := c_k(\emptyset, B),\qquad c(n) := c(\emptyset, B),$$
where B is a set of cardinality n. ⊓⊔
Lemma 7.6. If σ, τ are faces of the combinatorial simplicial complex K then
$$\mu(\sigma,\tau) = c(\sigma,\tau).$$
Proof. We have that µ is the scattering matrix of
$$Z^{-1} = \sum_{k\ge 0}(-1)^k\Xi^k = 1 - \Xi + \Xi^2 - \cdots,$$
so µ should be equal to the scattering matrix of the right-hand side, i.e.
$$\mu(\sigma,\tau) = \sum_{k\ge 0}(-1)^k\xi_k(\sigma,\tau),$$
where ξ_k(σ, τ) is, as before,
$$\xi_k(\sigma,\tau) = \sum_{\varphi_1,\ldots,\varphi_{k-1}\in K}\xi(\sigma,\varphi_1)\xi(\varphi_1,\varphi_2)\cdots\xi(\varphi_{k-1},\tau) = \sum_{\sigma=\varphi_0\subsetneq\varphi_1\subsetneq\cdots\subsetneq\varphi_{k-1}\subsetneq\varphi_k=\tau}1 = c_k(\sigma,\tau).$$
Furthermore, because c_0(σ, τ) = δ(σ, τ), we have that
$$\mu(\sigma,\tau) = \sum_{k\ge 0}(-1)^k c_k(\sigma,\tau) = c(\sigma,\tau).$$ ⊓⊔
Lemma 7.7. We have the following equalities.
(a)
$$c_k(n) = \sum_{j>0}\binom{n}{j}c_{k-1}(n-j).$$
(b) c(n) = (−1)^n.
(c) If A ⊂ B are two finite sets then
$$c(A,B) = (-1)^{\#B-\#A}.$$
Proof of (a). Recall that c_k(n) is the number of chains of length k between ∅ and B, where #B = n, while c_{k−1}(n − j) is the number of chains of length k − 1 between ∅ and A, where #A = n − j. A chain of length k − 1,
$$\emptyset \subsetneq C_1 \subsetneq \cdots \subsetneq C_{k-1} = A,$$
with A ⊊ B, can be turned into a chain of length k between ∅ and B in exactly one way, namely
$$\emptyset \subsetneq C_1 \subsetneq \cdots \subsetneq C_{k-1} \subsetneq C_k = B.$$
For each j > 0 there are \binom{n}{j} ways to choose the set J of j elements to be taken out of B, i.e. to take A = B ∖ J, #A = n − j, and for each such A there are c_{k−1}(n − j) chains of length k − 1 ending at A. Thus
$$c_k(n) = \sum_{j>0}\binom{n}{j}c_{k-1}(n-j).$$ ⊓⊔
Proof of (b). We note that the statement is trivially true for n = 0, and we use this as the base case of an induction. It is also important to note that c_0(n) = 0 for n ≠ 0. By part (a),
$$(-1)^k c_k(n) = (-1)^k\sum_{j>0}\binom{n}{j}c_{k-1}(n-j).$$
Then
$$c(n) = \sum_{k\ge 0}(-1)^k c_k(n) = -\sum_{k\ge 1}\sum_{j>0}\binom{n}{j}(-1)^{k-1}c_{k-1}(n-j) = -\sum_{j>0}\binom{n}{j}\sum_{k\ge 0}(-1)^k c_k(n-j) = -\sum_{j>0}\binom{n}{j}c(n-j).$$
By the induction hypothesis,
$$-\sum_{j>0}\binom{n}{j}c(n-j) = -\sum_{j>0}\binom{n}{j}(-1)^{n-j} = (-1)^n - \sum_{j\ge 0}\binom{n}{j}(-1)^{n-j}.$$
The last sum is the Newton binomial expansion of (1 − 1)^n, so it is equal to zero. Hence c(n) = (−1)^n. ⊓⊔
Proof of (c). Suppose A = {α1, …, αj} and A ⊂ B = {α1, …, αj, β1, …, βm}. Then a chain of length k between A and B can be identified with a chain of length k between ∅ and the set
$$B\smallsetminus A = \{\beta_1,\ldots,\beta_m\}.$$
Therefore,
$$c_k(A,B) = c_k(\emptyset, B\smallsetminus A) = c_k(\#B - \#A).$$
In particular, c(A, B) = (−1)^{#B−#A}. ⊓⊔
Theorem 7.8 (Möbius inversion formula). Let V be a finite set and K ∈ Σ(V), and suppose that f, u : K → R are two functions satisfying the equality
$$u(\sigma) = \sum_{\tau\supseteq\sigma}f(\tau),\quad\forall\sigma\in K.$$
Then
$$f(\sigma) = \sum_{\tau\supseteq\sigma}(-1)^{\dim\tau-\dim\sigma}u(\tau).$$
Proof. Note that u = Zf, so that Z^{−1}u = (Z^{−1}Z)f = f. Using (7.2), and recalling that µ is the scattering matrix of Z^{−1}, we deduce
$$f(\sigma) = (Z^{-1}u)(\sigma) = \sum_{\tau\in K}\mu(\sigma,\tau)u(\tau).$$
By Lemma 7.6,
$$f(\sigma) = \sum_{\tau\supseteq\sigma}c(\sigma,\tau)u(\tau).$$
Then, by Lemma 7.7,
$$f(\sigma) = \sum_{\tau\supseteq\sigma}(-1)^{\dim\tau-\dim\sigma}u(\tau).$$ ⊓⊔
Corollary 7.9. Suppose (C, T) is an ASC. Then
$$I_C = \sum_{S\in T}m(S)\,I_S,$$
where
$$m(S) := \sum_{T\supseteq S}(-1)^{\dim T-\dim S} = (-1)^{\dim S}\sum_{T\supseteq S}(-1)^{\dim T}.$$
In particular, we deduce that the coefficients m(S) depend only on the combinatorial simplicial complex associated to (C, T).
Proof. Define f, u : T → Z by
$$f(T) := (-1)^{\dim T}\quad\text{and}\quad u(S) := (-1)^{\dim S}m(S) = \sum_{T\supseteq S}f(T).$$
We apply the Möbius inversion formula and see that
$$f(S) = \sum_{T\supseteq S}(-1)^{\dim T-\dim S}u(T) = (-1)^{\dim S}\sum_{T\supseteq S}m(T).$$
Then,
$$1 = (-1)^{\dim S}f(S) = \sum_{T\supseteq S}m(T). \tag{7.5}$$
Now consider the function L : R^n → Z,
$$L(x) = \sum_{S\in T}m(S)I_S(x).$$
Clearly L(x) = 0 if x ∉ C. Suppose x ∈ C. Denote by T_x the collection of simplices T ∈ T such that x ∈ T. Then
$$S_x := \bigcap_{T\in T_x}T$$
is a simplex in T, and in fact it is the minimal affine simplex in T containing x. We have
$$L(x) = \sum_{S\in T}m(S)I_S(x) = \sum_{S\supseteq S_x}m(S)I_S(x) = \sum_{S\supseteq S_x}m(S) \overset{(7.5)}{=} 1.$$
Therefore,
$$\sum_{S\in T}m(S)I_S = I_C.$$ ⊓⊔
§7.3. The Local Euler Characteristic. We begin by defining an important concept in the
theory of affine simplicial complexes, namely the concept of barycentric subdivision. As its
definition is a bit involved we discuss first a simple example.
Example 7.10. (a) Suppose [v0, v1] is an affine 1-simplex in some Euclidean space R^n. Its barycenter is precisely the midpoint of the segment and we denote it by v01. The barycentric subdivision of the line segment [v0, v1] is the triangulation depicted at the top of Figure 11, consisting of the affine simplices
$$[v_0],\ [v_{01}],\ [v_1],\ [v_0, v_{01}],\ [v_{01}, v_1].$$
FIGURE 11. Barycentric subdivisions
(b) Suppose [v0 , v1 , v2 ] is an affine 2-simplex in some Euclidean space Rn . We denote the
barycenter of the face [vi , vj ] by vij = vji and we denote by v012 the barycenter of the two
simplex.
Observe that there exists a natural bijection between the faces of [v0 , v1 , v2 ] and the nonempty
subsets of {0, 1, 2}. For every such subset σ ⊂ {0, 1, 2} we denote by vσ the barycenter of
the face corresponding to σ. For example,
v{0,1} = v01 etc..
To any chain of subsets of {0, 1, 2}, i.e. a strictly increasing family of subsets, we can associate a simplex. For example, to the increasing family
$$\{2\}\subset\{0,2\}\subset\{0,1,2\}$$
we associate in Figure 11 the triangle [v2, v02, v012]. We obtain in this fashion a triangulation of the 2-simplex [v0, v1, v2] whose simplices correspond bijectively to the chains of subsets.
The next two results follow directly from the definitions.
Lemma 7.11. Suppose [v0, …, vk] is an affine simplex in R^n. For every nonempty subset σ ⊂ {0, …, k} we denote by v_σ the barycenter of the affine simplex spanned by the points {v_s | s ∈ σ}. Then for every chain
$$\emptyset \subsetneq \sigma_0 \subsetneq \sigma_1 \subsetneq \cdots \subsetneq \sigma_\ell$$
of subsets of {0, 1, …, k} the barycenters v_{σ_0}, …, v_{σ_ℓ} are affinely independent. ⊓⊔
Proposition 7.12. Suppose (C, T) is an affine simplicial complex. For every chain
$$S_0 \subsetneq S_1 \subsetneq \cdots \subsetneq S_k$$
of simplices in T we denote by ∆[S0, …, Sk] the affine simplex spanned by the barycenters of S0, …, Sk, and we denote by T′ the collection of all the simplices ∆[S0, …, Sk] obtained in this fashion. Then T′ is a triangulation of C. ⊓⊔
Definition 7.13. The triangulation T′ constructed in Proposition 7.12 is called the barycentric subdivision of the triangulation T. ⊓⊔
Definition 7.14. Suppose (C, T) is an affine simplicial complex and v is a vertex of T.
(a) The star of v in T is the collection of all simplices of T which contain v as a vertex. We denote the star by St_T(v). For every simplex S ∈ St_T(v) we denote by S/v the face of S opposite to v.
(b) The link of v in T is the polytope
$$\mathrm{lk}_T(v) := \bigcup_{S\in\mathrm{St}_T(v)}S/v,$$
with the triangulation induced from T′.
(c) For every face S of T we denote by b_S its barycenter. We define the link of S in T to be the link of b_S in the barycentric subdivision,
$$\mathrm{lk}_T(S) := \mathrm{lk}_{T'}(b_S).$$
(d) For every face S of T we define the local Euler characteristic of (C, T) along S to be the integer
$$\chi_S(C, T) := 1 - \chi\big(\mathrm{lk}_T(S)\big).$$ ⊓⊔
FIGURE 12. An open book with three pages.
Example 7.15. Consider the polyconvex set C consisting of three triangles in space which have a common edge. Denote these triangles by [a, b, a_i], i = 1, 2, 3 (see Figure 12).
We denote by o the barycenter of [a, b], by c_i the barycenter of [a, a_i], and by d_i the barycenter of [a, b, a_i]. Then the link of the vertex a is the star with three arms joined at o depicted at the top right-hand side. It has Euler characteristic 1. The link of [a, b] is the set consisting of three points {d1, d2, d3}. It has Euler characteristic 3. The local Euler characteristic along a is 0, while the local Euler characteristic along [a, b] is −2. ⊓⊔
Remark 7.16. Suppose (C, T) is an ASC. Denote by V_T ⊂ T the set of vertices, i.e. the collection of 0-dimensional simplices. Recall that the associated combinatorial complex K consists of the collection
$$V(S),\quad S\in T,$$
where V(S) denotes the vertex set of S. The m-dimensional simplices of the barycentric subdivision correspond bijectively to chains of length m in K.
Let S ∈ T and denote by σ its vertex set. The number of m-dimensional simplices in St_{T′}(b_S) is equal to
$$\sum_{\tau\supseteq\sigma}c_m(\sigma,\tau).$$ ⊓⊔
Proposition 7.17. Suppose (C, T) is an ASC. Then for every S ∈ T we have
$$\chi_S(C, T) = \sum_{T\supseteq S}(-1)^{\dim T-\dim S},$$
so that
$$I_C = \sum_{S\in T}\chi_S(C, T)\,I_S.$$
In particular,
$$\sum_{S\in T}(-1)^{\dim S} = \chi(C) = \int I_C\,d\mu_0 = \sum_{S\in T}\chi_S(C, T).$$
Proof. To begin, for a simplicial complex K, let F_m(K) = {faces of dimension m in K} and recall that
$$\chi(K) = \sum_{m\ge 0}(-1)^m\#F_m(K).$$
The (m − 1)-dimensional faces of lk_T(S) (m ≥ 1) correspond bijectively to the m-dimensional simplices of the barycentric subdivision T′ containing b_S. According to Remark 7.16 there are Σ_{T⊇S} c_m(S, T) of them. Hence
$$\#F_{m-1}\big(\mathrm{lk}_T(S)\big) = \sum_{T\supseteq S}c_m(S, T).$$
Hence
$$\chi\big(\mathrm{lk}_T(S)\big) = -\sum_{m>0}(-1)^m\sum_{T\supseteq S}c_m(S, T),$$
so that
$$\chi_S(C, T) = 1 - \chi\big(\mathrm{lk}_T(S)\big) = 1 + \sum_{m>0}(-1)^m\sum_{T\supseteq S}c_m(S, T) = \sum_{m\ge 0}\sum_{T\supseteq S}(-1)^m c_m(S, T) = \sum_{T\supseteq S}c(S, T) = \sum_{T\supseteq S}(-1)^{\dim T-\dim S}.$$ ⊓⊔
The above proof implies the following result.
Corollary 7.18. For any vertex v ∈ V_T we have
$$\chi\big(\mathrm{lk}_T(v)\big) = -\sum_{S\supsetneq\{v\}}(-1)^{\dim S}.$$ ⊓⊔
Corollary 7.19. Suppose (C, T) is an ASC in R^n, R is a commutative ring with 1, and µ : Polycon(n) → R is a valuation. Then
$$\mu(C) = \sum_{S\in T}\chi_S(C, T)\,\mu(S).$$
In particular, if we let µ be the Euler characteristic we deduce
$$\sum_{S\in T}(-1)^{\dim S} = \chi(C) = \sum_{S\in T}\chi_S(C, T).$$
Proof. Denote by dµ the integral defined by µ. Then
$$\mu(C) = \int I_C\,d\mu = \int\Big(\sum_{S\in T}\chi_S(C, T)I_S\Big)d\mu = \sum_{S\in T}\chi_S(C, T)\int I_S\,d\mu = \sum_{S\in T}\chi_S(C, T)\,\mu(S).$$ ⊓⊔
8. Morse theory on polytopes in R³
§8.1. Linear Morse Functions on Polytopes. Suppose (C, T) is an ASC in R³. For simplicity we assume that all the faces of T have dimension < 3. Denote by ⟨−, −⟩ the canonical inner product in R³ and by S² the unit sphere centered at the origin of R³.
Any vector u ∈ S² defines a linear map L_u : R³ → R given by
$$L_u(x) := \langle u, x\rangle,\quad\forall x\in\mathbb R^3.$$
We denote by ℓ_u its restriction to C. We say that u is T-nondegenerate if the restriction of L_u to the set of vertices of T is injective. Otherwise u is called T-degenerate. We denote by ∆_T the set of degenerate vectors.
Definition 8.1. A linear Morse function on (C, T) is a function of the form ℓ_u, where u is a T-nondegenerate unit vector. ⊓⊔
Lemma 8.2 (Bertini–Sard). ∆_T is a subset of S² of zero (surface) area.
Proof. Let v1, …, vk be the vertices of T. For u ∈ S² to be a T-degenerate vector, it must be perpendicular to a segment connecting two of the v_i. The set of vectors perpendicular to such a segment is a plane in R³, and the intersection of such a plane with S² is a great circle. Therefore ∆_T is contained in the union of at most \binom{k}{2} great circles, which is a set of zero surface area. ⊓⊔
Suppose u is T-nondegenerate. For every t ∈ R we set
$$C_t = \{\, x\in C \mid \ell_u(x) = t \,\}.$$
In other words, C_t is the intersection of C with the affine plane H_{u,x_0} := {x | ⟨u, x⟩ = ⟨u, x_0⟩ = t}. We denote by V_T the set of vertices of T. For every nondegenerate vector u the function ℓ_u defines an injection
$$\ell_u : V_T\to\mathbb R.$$
Its image is a subset K_u ⊂ R called the critical set of ℓ_u. The elements of K_u are called the critical values of ℓ_u.
Lemma 8.3. For every t ∈ R the slice C_t is a polytope of dimension ≤ 1.
Proof. C_t is the intersection of C with the plane H_t = {x | ⟨u, x⟩ = t}. H_t contains at most one vertex of T, so it intersects every other face transversally. The intersection with each face has codimension 1 inside that face. Hence no intersection can have dimension greater than 1. ⊓⊔
The above proof also shows that the slice Ct has a natural simplicial structure induced
from the simplicial structure of C. The faces of Ct are the non-empty intersections of the
faces of C with the hyperplane Ht .
Remark 8.4. Define a binary relation
$$R = R_{t,t+\epsilon}\subset V(C_{t+\epsilon})\times V(C_t)$$
by declaring, for a ∈ V(C_{t+ǫ}) and b ∈ V(C_t),
$$a\,R_{t,t+\epsilon}\,b \iff a\ \text{and}\ b\ \text{lie on the same edge of}\ C.$$
Observe that for fixed t and for ǫ ≪ 1, the binary relation R_{t,t+ǫ} is the graph of a map
$$V(C_{t+\epsilon})\to V(C_t)$$
which preserves the incidence relation. This means the following:
• for every a ∈ V(C_{t+ǫ}) there exists a unique b ∈ V(C_t) such that aRb, and
• if [a0, a1, …, ak] is a face of C_{t+ǫ} and a_iRb_i for b_i ∈ V(C_t), then [b0, b1, …, bk] is a face of C_t.
We will denote the map V(C_{t+ǫ}) → V(C_t) by the same symbol R. ⊓⊔
Denote by χ_u(t) the Euler characteristic of C_t. We know that
$$\chi(C) = \sum_t j_u(t),$$
where j_u(t) denotes the jump of χ_u at t,
$$j_u(t) := \chi_u(t) - \chi_u(t+0).$$
Every nondegenerate vector u defines a map
$$j(u|-) : V_T\to\mathbb Z,\qquad j(u|x) := \text{the jump of }\chi_u\text{ at the critical value }\ell_u(x).$$
Lemma 8.5.
$$j_u(t)\neq 0 \Longrightarrow t\in K_u.$$
Proof. By definition j_u(t) ≠ 0 ⇒ χ(C_t) ≠ χ(C_{t+0}). We would like to better understand the relationship between C_t and C_{t+ǫ} for ǫ ≪ 1. By Remark 8.4, for ǫ sufficiently small R_{t,t+ǫ} defines a map V(C_{t+ǫ}) → V(C_t) which preserves the incidence relation. Assume that t ∉ K_u. Then all points in V(C_t) and V(C_{t+ǫ}) come from transversal intersections of edges of C. Because these edges cannot terminate anywhere between the two slices, and because each intersection is transversal (and thus unique), for small ǫ the map V(C_{t+ǫ}) → V(C_t) is a bijection. Any edge which connects two vertices in C_{t+ǫ} is mapped to the edge connecting the corresponding vertices in C_t. By the Euler–Schläfli–Poincaré formula, χ(C_{t+ǫ}) = χ(C_t). But this implies that j_u(t) = 0, a contradiction. Thus t ∈ K_u. ⊓⊔
Lemma 8.5 shows that
$$\chi(C) = \sum_{x\in V_T}j(u|x). \tag{8.1}$$
For every x ∈ V_T we have a function
$$j(-|x) : S^2\smallsetminus\Delta_T\to\mathbb Z,\qquad S^2\smallsetminus\Delta_T\ni u\mapsto j(u|x).$$
Now, for every vertex x we set
$$\rho(x) := \frac{1}{4\pi}\int_{S^2\smallsetminus\Delta_T}j(u|x)\,d\sigma(u),$$
where dσ(u) denotes the area element on S² such that
$$\mathrm{Area}(S^2) = \mathrm{Area}(S^2\smallsetminus\Delta_T) = 4\pi. \tag{8.2}$$
Integrating (8.1) over S² ∖ ∆_T we deduce
$$4\pi\,\chi(C) = \int_{S^2\smallsetminus\Delta_T}\sum_{x\in V_T}j(u|x)\,d\sigma(u) = \sum_{x\in V_T}\int_{S^2\smallsetminus\Delta_T}j(u|x)\,d\sigma(u) = 4\pi\sum_{x\in V_T}\rho(x),$$
so that
$$\chi(C) = \sum_{x\in V_T}\rho(x). \tag{8.3}$$
We dedicate the remainder of this section to giving a more concrete description of ρ(x).
§8.2. The Morse Index. We begin by providing a combinatorial description of the jumps. Let u ∈ S² be a T-nondegenerate vector and x0 a vertex of the triangulation T. We set
$$t_0 := \ell_u(x_0) = \langle u, x_0\rangle.$$
We denote by St⁺_T(x0, u) the collection of simplices in T which admit x0 as a vertex and are contained in the half-space
$$H^+_{u,x_0} = \{\, v\in\mathbb R^3 \mid \langle u, v\rangle\ge\langle u, x_0\rangle \,\}.$$
We define the Morse index of u at x0 to be the integer
$$\mu(u|x_0) := \sum_{S\in\mathrm{St}^+_T(x_0,u)}(-1)^{\dim S}.$$
Example 8.6. Consider the situation depicted in Figure 13.
FIGURE 13. Slicing a polytope.
The hyperplanes ℓ_u = const are depicted as vertical lines. The slice C_{t0} is a segment, and for every ǫ > 0 sufficiently small the slice C_{t0+ǫ} consists of two segments and two points. Hence
$$\chi_u(t_0) = 1,\qquad \chi_u(t_0+\epsilon) = 4,\qquad j(u|x_0) = j_u(t_0) = 1 - 4 = -3.$$
Now observe that St⁺_T(x0, u) consists of the simplices
$$\{x_0\},\ [x_0,x_1],\ [x_0,x_2],\ [x_0,x_3],\ [x_0,x_5],\ [x_0,x_6],\ [x_0,x_1,x_5].$$
We see that St⁺_T(x0, u) consists of one simplex of dimension zero, five simplices of dimension 1 and one simplex of dimension 2, so that
$$\mu(u|x_0) = 1 - 5 + 1 = -3 = j(u|x_0).$$
We will see that the above equality is no accident. ⊓⊔
Proposition 8.7. Let (C, T) be a polytope in R³ such that all its simplices have dimension ≤ 2. Then for any T-nondegenerate vector u and any vertex x0 of T the jump of ℓ_u at x0 is equal to the Morse index of u at x0, i.e.
$$j(u|x_0) = \mu(u|x_0).$$
Proof. By definition,
$$j(u|x_0) = \chi(C_{t_0}) - \chi(C_{t_0+0}).$$
We will again utilize the binary relation R defined in Remark 8.4. Since t0 is a critical value, C_{t0} contains a unique vertex x0 of C. Denote by R^{−1}(x0) the set of all vertices of V(C_{t0+ǫ}) which are mapped to x0 by R_{t0,t0+ǫ}. The induced map V(C_{t0+ǫ}) ∖ R^{−1}(x0) → V(C_{t0}) ∖ {x0} behaves as in Lemma 8.5: it is a bijection between these sets and preserves the face incidence relation (see Figure 14). Using the Euler–Schläfli–Poincaré formula we then deduce that
$$\chi(C_{t_0}) - \chi(C_{t_0+\epsilon}) = \#\{x_0\} - \#R^{-1}(x_0) + \#\{\text{edges in }C_{t_0+\epsilon}\text{ connecting vertices in }R^{-1}(x_0)\}.$$
Note here that
$$\#R^{-1}(x_0) = \#\{\text{edges of }C\text{ inside }H^+_{u,x_0}\text{ containing }x_0\},$$
and
$$\#\{\text{edges in }C_{t_0+\epsilon}\text{ connecting vertices in }R^{-1}(x_0)\} = \#\{\text{triangles of }C\text{ inside }H^+_{u,x_0}\text{ containing }x_0\}.$$
Thus we see that
$$j(u|x_0) = \#\{x_0\} - \#\{\text{edges of }C\text{ inside }H^+_{u,x_0}\text{ containing }x_0\} + \#\{\text{triangles of }C\text{ inside }H^+_{u,x_0}\text{ containing }x_0\} = \sum_{S\in\mathrm{St}^+_T(x_0,u)}(-1)^{\dim S} = \mu(u|x_0).$$ ⊓⊔
FIGURE 14. The behavior of the map V(C_{t0+ǫ}) → V(C_{t0}).
§8.3. Combinatorial Curvature. To formulate the notion of combinatorial curvature we need to introduce the notion of conormal cone. For u, x ∈ R^n define
$$H^+_{u,x} := \{\, y\in\mathbb R^n \mid \langle u,y\rangle\ge\langle u,x\rangle \,\}.$$
Note that if u ≠ 0 then H^+_{u,x} is a half-space containing x on its boundary; u is normal to the boundary and points towards the interior of this half-space.
Definition 8.8. Suppose P is a convex polytope in R^n and x is a point in P. The conormal cone of x ∈ P is the set
$$C_x(P) = C_x(P,\mathbb R^n) := \{\, u\in\mathbb R^n \mid P\subset H^+_{u,x} \,\}.$$ ⊓⊔
We have an equivalent and more useful description of the conormal cone:
$$C_x(P) := \{\, u\in\mathbb R^n \mid P\subset H^+_{u,x} \,\} = \{\, u\in\mathbb R^n \mid \langle u,y\rangle\ge\langle u,x\rangle,\ \forall y\in P \,\}.$$
The following result follows immediately from the definition.
Proposition 8.9. The conormal cone is a convex cone, i.e. it satisfies the conditions
$$u\in C_x(P),\ t\in[0,\infty) \Longrightarrow tu\in C_x(P),$$
$$u_0, u_1\in C_x(P) \Longrightarrow u_0 + u_1\in C_x(P).$$ ⊓⊔
Proposition 8.9 shows that the conormal cone is an (infinite) union of rays (half-lines) starting at the origin. Each one of these rays intersects the unit sphere S^{n−1}, and as a ray sweeps the cone C_x(P), its intersection with the sphere sweeps a region Ω_x(P) on the sphere,
$$\Omega_x(P) = \Omega_x(P,\mathbb R^n) := C_x(P)\cap S^{n-1}.$$
The more elaborate notation Ω_x(P, R^n) is meant to emphasize that the region Ω_x(P) depends on the ambient space R^n. Thus we also have
$$\Omega_x(P) = \{\, u\in S^{n-1} \mid \langle u,y\rangle\ge\langle u,x\rangle,\ \forall y\in P \,\}.$$
We denote by σ_{n−1} the total (n − 1)-dimensional surface area of S^{n−1} and by ω_x(P) the (n − 1)-dimensional "surface area" of Ω_x(P) divided by σ_{n−1},
$$\omega_x(P) := \frac{\mathrm{area}_{n-1}\,\Omega_x(P)}{\mathrm{area}_{n-1}(S^{n-1})} = \frac{\mathrm{area}_{n-1}\,\Omega_x(P)}{\sigma_{n-1}}.$$
Remark 8.10. One can show that ω_x(P) is independent of the dimension n of the ambient space R^n, i.e. if we regard P as a polytope in a Euclidean space R^N ⊃ R^n then we obtain the same result for ω_x(P). We will not pursue this aspect here. ⊓⊔
Proposition 8.11. (a) If P ⊂ R3 is a zero simplex [x] then
ωx (x) = 1.
(b) If P = [x0 , x1 ] ⊂ R3 is a 1-simplex then
1
ωx0 ([x0 , x1 ]) = .
2
3
(c) If P = [x0 , x1 , x2 ] ⊂ R is a 2-simplex and the angle at the vertex xi is ri π, ri ∈ (0, 1)
then
1 ri
r0 + r1 + r2 = 1, ωxi ([x0 , x1 , x2 ]) = − .
2
2
Proof. (a) For [x] ⊂ R3 a singleton, clearly
Ω[x] (P ) = u ∈ Sn−1 | hu, yi ≥ hu, xi, ∀y ∈ [x] = u ∈ Sn−1 | hu, xi ≥ hu, xi = Sn−1
and thus
ω[x] (P ) = 1
(b) We can fix coordinates such that [x0 , x1 ] lies horizontal with x0 located at the origin. It
is then easy to see that the conormal cone is the half-space with boundary perpendicular to
[x0 , x1 ] passing through x0 with rays pointing “towards” x1 . Therefore, ωx0 = 21 .
(c) Without loss of generality, we can consider ω_{x_0}. We fix coordinates so that x_0 lies at the origin and [x_0, x_1, x_2] lies in a plane with [x_0, x_1] along an axis. The conormal cones of [x_0, x_1] and [x_0, x_2] are each a half-space as described above. The conormal cone of [x_0, x_1, x_2] is then the intersection of these two cones. This intersection is bounded by the planes perpendicular to [x_0, x_1] and to [x_0, x_2], both passing through x_0. In other words, the intersection of the conormal cone with the sphere is a lune. Call the angle of the opening of the lune θ. The angles between the perpendicular lines and the sides of the triangle are given by \frac{\pi}{2} - \pi r_0 (possibly negative), and the total angle is
\[ \theta = \Big(\frac{\pi}{2} - \pi r_0\Big) + \Big(\frac{\pi}{2} - \pi r_0\Big) + \pi r_0 = \pi - \pi r_0. \]
The area of the lune of opening θ_0 is, in spherical coordinates,
\[ \int_0^{\theta_0} \int_0^{\pi} \sin(\varphi)\, d\varphi\, d\theta = 2\theta_0. \]
Thus
\[ \omega_{x_0} = \frac{2\theta}{4\pi} = \frac{1}{2} - \frac{r_0}{2}. \]
⊓⊔
Figure 15. The angle between the perpendiculars of [x_0, x_1] and [x_0, x_2].
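Proposition 8.11 is easy to verify numerically. The following Python sketch (the helper names `vertex_angle` and `omega` are ours, purely for illustration) computes ω at a vertex of a planar triangle via part (c), ω_{x_i} = 1/2 − r_i/2, and checks that r_0 + r_1 + r_2 = 1 on a 3-4-5 right triangle.

```python
import math

def vertex_angle(p, q, r):
    """Angle at vertex p of the planar triangle [p, q, r], in radians."""
    ux, uy = q[0] - p[0], q[1] - p[1]
    vx, vy = r[0] - p[0], r[1] - p[1]
    dot = ux * vx + uy * vy
    return math.acos(dot / (math.hypot(ux, uy) * math.hypot(vx, vy)))

def omega(p, q, r):
    """Normalized solid angle of the conormal cone at p:
    omega_p([p,q,r]) = 1/2 - r_p/2, where the angle at p is r_p * pi."""
    return 0.5 - vertex_angle(p, q, r) / (2 * math.pi)

# 3-4-5 right triangle: the angle at the origin is pi/2, i.e. r_0 = 1/2.
tri = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
r = [vertex_angle(tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]) / math.pi
     for i in range(3)]
print(sum(r))        # r_0 + r_1 + r_2 = 1
print(omega(*tri))   # 1/2 - (1/2)/2 = 0.25 at the right angle
```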
Suppose (C, T) is an affine simplicial complex in R3 such that all the simplices in T have
dimension ≤ 2. Recall that for every vertex v ∈ VT we defined its star to be the collection of
all simplices in T which admit v as a vertex. We now define the combinatorial curvature of
(C, T) at v ∈ V_T to be the quantity
\[ \kappa(v) := \sum_{S \in \operatorname{St}_T(v)} (-1)^{\dim S}\, \omega_v(S). \]
Example 8.12. (a) Consider a rectangle A1 A2 A3 A4 . Then any interior point O determines
a triangulation of the rectangle as in Figure 16.
Figure 16. A simple triangulation of a rectangle.
Suppose that ∡(Ai OAi+1) = ri π, i = 1, 2, 3, 4 so that
r1 + r2 + r3 + r4 = 2.
The star of O in the above triangulation consists of the simplices
{O}, [OAi], [OAi Ai+1 ], i = 1, 2, 3, 4.
Then
\[ \omega_O(O) = 1, \qquad \omega_O([OA_i]) = \frac{1}{2}, \qquad \omega_O([OA_i A_{i+1}]) = \frac{1}{2} - \frac{r_i}{2}. \]
We deduce that the combinatorial curvature at O is
\[ 1 - \sum_{i=1}^{4} \frac{1}{2} + \sum_{i=1}^{4} \Big( \frac{1}{2} - \frac{r_i}{2} \Big) = 1 - 2 + 2 - \frac{1}{2}(r_1 + r_2 + r_3 + r_4) = 0. \]
This corresponds to our intuition that a rectangle is “flat”. Note that the above equality can
be written as
\[ 2\pi \kappa(O) = 2\pi - \sum_{i=1}^{4} \measuredangle(A_i O A_{i+1}). \]
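The vanishing of κ(O) can also be checked by direct computation. The Python sketch below (the function names are ours) sums (−1)^{dim S} ω_O(S) over the star of an interior point O of a triangulated rectangle; by the identity above the result is 0 wherever O is placed inside the rectangle.

```python
import math

def angle_at(o, a, b):
    """Angle ∠(a o b) at the point o, in radians."""
    ux, uy = a[0] - o[0], a[1] - o[1]
    vx, vy = b[0] - o[0], b[1] - o[1]
    return math.acos((ux * vx + uy * vy) /
                     (math.hypot(ux, uy) * math.hypot(vx, vy)))

def curvature_at_interior_point(o, corners):
    """kappa(o) = sum over the star of o of (-1)^{dim S} * omega_o(S):
    the vertex {o}, the n edges [o, A_i], and the n triangles [o, A_i, A_{i+1}]."""
    n = len(corners)
    r = [angle_at(o, corners[i], corners[(i + 1) % n]) / math.pi
         for i in range(n)]
    # omega_o({o}) = 1, omega_o(edge) = 1/2, omega_o(triangle) = 1/2 - r_i/2
    return 1 - n * 0.5 + sum(0.5 - ri / 2 for ri in r)

rect = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
print(curvature_at_interior_point((1.0, 0.7), rect))  # ≈ 0: the rectangle is flat at O
```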
(b) Suppose (C, T) is the standard triangulation of the boundary of a tetrahedron [v0 , v1 , v2 , v3 ]
in R3 (see Figure 17).
Figure 17. The boundary of a tetrahedron.
Set
\[ \theta_{i jk} = \measuredangle(v_i v_j v_k), \quad i, j, k \in \{0, 1, 2, 3\}. \]
Then the star of v0 consists of the simplices
\[ \{v_0\},\ [v_0 v_i],\ [v_0 v_i v_j], \quad i, j \in \{1, 2, 3\},\ i \ne j. \]
We deduce
\[ \omega_{v_0}(v_0) = 1, \qquad \omega_{v_0}([v_0 v_i]) = \frac{1}{2}, \qquad \omega_{v_0}([v_0 v_i v_j]) = \frac{1}{2\pi}(\pi - \theta_{i0j}), \]
and hence
\[ \kappa(v_0) = 1 - \frac{3}{2} + \frac{1}{2\pi} \sum_{1 \le i < j \le 3} (\pi - \theta_{i0j}) = \frac{1}{2\pi} \Big( 2\pi - \sum_{1 \le i < j \le 3} \theta_{i0j} \Big), \]
that is,
2πκ(v0 ) = 2π − the sum of the angles at v0 .
This resembles the formula we found in (a) and suggests an interpretation of the curvature
as a measure of deviation from flatness. Note that in the limiting case when the vertex v0
converges to a point in the interior of [v1 v2 v3 ] the sum of the angles at v0 converges to 2π,
and the boundary of the tetrahedron is “less and less curved” at v_0. On the other hand, if the tetrahedron is “very sharp” at v_0, the sum of the angles at v_0 is very small and 2πκ(v_0) approaches 2π.
⊓⊔
Motivated by the above example we introduce the notion of angular defect.
Definition 8.13. Suppose (C, T) is an affine simplicial complex in R3 such that all simplices
have dimension ≤ 2. For every vertex v of T, we denote by Θ(v) the sum of the angles at v of the triangles in T which have v as a vertex. The defect at v is the quantity
\[ \operatorname{def}(v) := 2\pi - \Theta(v). \]
⊓⊔
Proposition 8.14. If (C, T) is an affine simplicial complex in R3 such that all simplices have
dimension ≤ 2 then for every vertex v of T we have
\[ \kappa(v) = \frac{1}{2\pi} \operatorname{def}(v) - \frac{1}{2}\, \chi(\operatorname{lk}_T(v)). \]
Proof. Using Proposition 8.11 we deduce that
\[ \kappa(v) = 1 - \frac{1}{2}\, \#\{\text{edges at } v\} + \frac{1}{2}\, \#\{\text{triangles at } v\} - \frac{\Theta(v)}{2\pi} = \frac{1}{2\pi} \operatorname{def}(v) - \frac{1}{2} \big( \#\{\text{edges at } v\} - \#\{\text{triangles at } v\} \big). \]
The proposition now follows from Corollary 7.18, which states that
\[ \chi(\operatorname{lk}_T(v)) = \#\{\text{edges at } v\} - \#\{\text{triangles at } v\}. \]
⊓⊔
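Proposition 8.14 can be sanity-checked on a boundary vertex of the rectangle triangulation of Example 8.12(a), where the link is not a cycle. In the Python sketch below (all names ours), for the corner A_1 the link has vertices {O, A_2, A_4} and edges [O, A_2], [O, A_4], so χ(lk(A_1)) = 3 − 2 = 1, and both sides of the identity come out to 1/4.

```python
import math

def angle_at(o, a, b):
    """Angle ∠(a o b) at the point o, in radians."""
    ux, uy = a[0] - o[0], a[1] - o[1]
    vx, vy = b[0] - o[0], b[1] - o[1]
    return math.acos((ux * vx + uy * vy) /
                     (math.hypot(ux, uy) * math.hypot(vx, vy)))

# Rectangle A1 A2 A3 A4 triangulated from an interior point O (Example 8.12(a)).
A1, A2, A3, A4 = (0, 0), (4, 0), (4, 2), (0, 2)
O = (1.5, 0.8)

# Star of A1: the vertex, 3 edges ([A1,O], [A1,A2], [A1,A4]), 2 triangles.
theta = angle_at(A1, O, A2) + angle_at(A1, A4, O)   # Θ(A1), total angle at A1
kappa = (1 - 3 * 0.5
         + (0.5 - angle_at(A1, O, A2) / (2 * math.pi))
         + (0.5 - angle_at(A1, A4, O) / (2 * math.pi)))

# Proposition 8.14: kappa(v) = def(v)/2π − χ(lk(v))/2, with χ(lk(A1)) = 1 here.
defect = 2 * math.pi - theta
rhs = defect / (2 * math.pi) - 1 / 2
print(kappa, rhs)   # both equal 1/4 at a rectangle corner
```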
Theorem 8.15 (Combinatorial Gauss-Bonnet). If (C, T) is an affine simplicial complex in
R3 such that all the simplices in T have dimension ≤ 2 then
\[ \chi(C) = \sum_{v \in V_T} \kappa(v) = \frac{1}{2\pi} \sum_{v \in V_T} \operatorname{def}(v) - \frac{1}{2} \sum_{v \in V_T} \chi(\operatorname{lk}_T(v)). \]
Proof. Let V represent the number of vertices in C, E the number of edges in C and T the number of triangles in C. For v ∈ V_T we denote by E_v, T_v the number of edges and triangles in St_T(v), respectively. We note that
\[ \chi(\operatorname{lk}_T(v)) = \sum_{S \in \operatorname{St}_T(v) \smallsetminus \{v\}} (-1)^{\dim S - 1} = E_v - T_v. \]
We note that each edge is in two stars and each triangle in three. Then
\[ -\frac{1}{2} \sum_{v \in V_T} \chi(\operatorname{lk}_T(v)) = -E + \frac{3}{2} T. \]
Recall that def(v) = 2π − Θ(v). Therefore (1/2π) def(v) = 1 − Θ(v)/2π. Since the three angles of each triangle sum to π, we have Σ_{v ∈ V_T} Θ(v) = πT, and thus
\[ \frac{1}{2\pi} \sum_{v \in V_T} \operatorname{def}(v) = \sum_{v \in V_T} \Big( 1 - \frac{\Theta(v)}{2\pi} \Big) = V - \frac{T}{2}. \]
Consequently,
\[ \sum_{v \in V_T} \kappa(v) = V - E + T = \chi(C). \]
⊓⊔
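The theorem can be tested on the rectangle complex of Example 8.12(a), which has boundary (so the links of the corner vertices are not cycles). The Python sketch below (all names ours) sums κ over all five vertices and recovers χ = V − E + T = 5 − 8 + 4 = 1.

```python
import math

def angle_at(o, a, b):
    """Angle ∠(a o b) at the point o, in radians."""
    ux, uy = a[0] - o[0], a[1] - o[1]
    vx, vy = b[0] - o[0], b[1] - o[1]
    return math.acos((ux * vx + uy * vy) /
                     (math.hypot(ux, uy) * math.hypot(vx, vy)))

# Triangulated rectangle of Example 8.12(a): corners A1..A4 and an interior point O.
A = [(0, 0), (5, 0), (5, 3), (0, 3)]
O = (2.0, 1.1)
triangles = [(O, A[i], A[(i + 1) % 4]) for i in range(4)]
edges = [(O, a) for a in A] + [(A[i], A[(i + 1) % 4]) for i in range(4)]
vertices = [O] + A

def kappa(v):
    """kappa(v) = sum over St(v) of (-1)^{dim S} omega_v(S), via Proposition 8.11."""
    k = 1.0                                # omega_v({v}) = 1
    k -= 0.5 * sum(v in e for e in edges)  # omega_v(edge) = 1/2
    for t in triangles:
        if v in t:
            others = [p for p in t if p != v]
            k += 0.5 - angle_at(v, others[0], others[1]) / (2 * math.pi)
    return k

chi = sum(kappa(v) for v in vertices)
print(chi)  # ≈ 1 = χ(disk) = V - E + T = 5 - 8 + 4
```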
Recall that a combinatorial surface is an ASC with simplices of dimension ≤ 2 such that
every edge is the face of exactly two triangles. In this case, the link of every vertex is a
simple cycle, that is, a 1-dimensional ASC whose vertex set has a cyclic ordering
\[ v_1, v_2, \dots, v_n, \quad v_{n+1} = v_1, \]
and whose only edges are [v_i, v_{i+1}], i = 1, \dots, n. The Euler characteristic of such a cycle is zero. We thus obtain the following result of T. Banchoff [Ban].
Corollary 8.16. If C is a combinatorial surface with vertex set V then
\[ \chi(C) = \frac{1}{2\pi} \sum_{v \in V} \operatorname{def}(v). \]
⊓⊔
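Corollary 8.16 can be illustrated numerically: for the boundary of any tetrahedron the defects sum to 4π (the four triangles contribute 4π of angle in total), so (1/2π) Σ def(v) = 2 = χ(S^2). A Python sketch (helper names ours):

```python
import math

def angle(p, q, r):
    """Angle at p of the 3D triangle [p, q, r], in radians."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(dot / (nu * nv))

# Boundary of a (not necessarily regular) tetrahedron: a combinatorial surface.
verts = [(0, 0, 0), (3, 0, 0), (0, 2, 0), (1, 1, 4)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# Θ(v): sum of the angles at v over the triangles containing v.
theta = [0.0] * 4
for (i, j, k) in faces:
    theta[i] += angle(verts[i], verts[j], verts[k])
    theta[j] += angle(verts[j], verts[k], verts[i])
    theta[k] += angle(verts[k], verts[i], verts[j])

chi = sum(2 * math.pi - t for t in theta) / (2 * math.pi)  # Corollary 8.16
print(chi)  # ≈ 2, the Euler characteristic of the sphere
```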
We can now finally close the circle and relate the combinatorial curvature to the average
Morse index.
Theorem 8.17 (Microlocal Gauss-Bonnet). If (C, T) is an affine simplicial complex in R3
such that all the simplices in T have dimension ≤ 2 then
κ(v) = ρ(v), ∀v ∈ VT,
where ρ is defined by (8.2).
Proof. Fix x ∈ VT. We now recall some previous ideas. Proposition 8.7 tells us that for any
u ∈ S2 \ ∆T,
\[ \sum_{S \in \operatorname{St}^+_T(x,u)} (-1)^{\dim S} = \mu(u|x) = j(u|x). \]
We then recall from the proof of Lemma 8.2 that ∆_T is a finite union of great circles on S^2. Thus, S^2 \ ∆_T consists of a finite union of chambers, A_1, ..., A_m. We now note that for any i, 1 ≤ i ≤ m, St^+_T(x, u) = St^+_T(x, v) for any u, v ∈ A_i. So, letting v_i be an element in A_i, we have:
\[ \rho(x) = \frac{1}{4\pi} \int_{S^2 \smallsetminus \Delta_T} j(u|x)\, d\sigma(u) = \frac{1}{4\pi} \int_{S^2 \smallsetminus \Delta_T} \mu(u|x)\, d\sigma(u) \]
\[ = \frac{1}{4\pi} \int_{S^2 \smallsetminus \Delta_T} \sum_{S \in \operatorname{St}^+_T(x,u)} (-1)^{\dim S}\, d\sigma(u) = \frac{1}{4\pi} \sum_{i=1}^{m} \int_{A_i} \sum_{S \in \operatorname{St}^+_T(x,u)} (-1)^{\dim S}\, d\sigma(u) = \frac{1}{4\pi} \sum_{i=1}^{m} \sum_{S \in \operatorname{St}^+_T(x,v_i)} (-1)^{\dim S} \int_{A_i} d\sigma(u). \]
Figure 18. The chambers which coincide with the conormal section C_x(S) ∩ S^2.
Considering this sum, we see that every term of it has a (−1)^{\dim S} in it for some S ∈ St_T(x). We also notice that every S ∈ St_T(x) appears at least once. So, we expand the sum, collect the coefficients for each (−1)^{\dim S}, and if we set
\[ k_S := \#\{\, i \mid S \in \operatorname{St}^+_T(x, v_i) \,\} \]
we obtain
\[ \rho(x) = \frac{1}{4\pi} \sum_{i=1}^{m} \sum_{S \in \operatorname{St}^+_T(x,v_i)} (-1)^{\dim S} \int_{A_i} d\sigma(u) = \frac{1}{4\pi} \sum_{S \in \operatorname{St}_T(x)} (-1)^{\dim S} \Big( \sum_{j=1}^{k_S} \int_{A_{i_j}} d\sigma(u) \Big). \]
Now we consider the sum Σ_{j=1}^{k_S} ∫_{A_{i_j}} dσ(u). This is the area of the set of vectors u on the unit sphere such that S ∈ St^+_T(x, u). That is, this sum is the area of the set of vectors u on the unit sphere such that S lies in H^+_{u,x}. But this is precisely the area of C_x(S) ∩ S^2 (see Figure 18). Now recall that ω_x(S) = area(C_x(S) ∩ S^2)/4π (since the area of the unit sphere in R^3 is 4π). Thus, we have:
\[ \rho(x) = \frac{1}{4\pi} \sum_{S \in \operatorname{St}_T(x)} (-1)^{\dim S} \Big( \sum_{j=1}^{k_S} \int_{A_{i_j}} d\sigma(u) \Big) = \frac{1}{4\pi} \sum_{S \in \operatorname{St}_T(x)} (-1)^{\dim S} \operatorname{area}(C_x(S) \cap S^2) = \sum_{S \in \operatorname{St}_T(x)} (-1)^{\dim S} \omega_x(S) = \kappa(x). \]
⊓⊔
Remark 8.18. The above equality can be interpreted as saying that the curvature at a vertex
x0 is the average Morse index at x0 of a linear Morse function ℓu .
⊓⊔
References
[Ban] T.F. Banchoff: Critical points and curvature for embedded polyhedral surfaces, Amer. Math. Monthly, 77 (1970), 475-485.
[Gon] A.B. Goncharov: Differential equations and integral geometry, Adv. in Math., 131 (1997), 279-343.
[Gr] H. Groemer: Geometric Applications of Fourier Series and Spherical Harmonics, Cambridge University Press, 1996.
[KR] D.A. Klain, G.-C. Rota: Introduction to Geometric Probability, Cambridge University Press, 1997.
[San] L. Santalo: Integral Geometry and Geometric Probability, Cambridge University Press, 2004.
Department of Math., U.C. Berkeley, Berkeley, CA 94720
E-mail address: [email protected]
Department of Math., Univ. of Chicago, Chicago, IL 60637
E-mail address: [email protected]
Department of Math., Univ. of Notre Dame, Notre Dame, IN 46556
E-mail address: [email protected]