OPERATOR ALGEBRAS
IAIN RAEBURN
Abstract. These are working notes for a course of 5 lectures at the Research
School on Leavitt Path Algebras and Graph C ∗ -Algebras at the Nesin Mathematics
Village in June-July 2015.
The object of the course was to give students and researchers from other areas an
overview of the C∗-algebra background needed to understand papers on graph C∗-algebras. So the course concentrated on the aspects of the general theory one might
need to know in practice, and how they are used. Very few proofs were discussed
in the lectures, though a few more are given in these notes.
The axioms of a C ∗ -algebra are complicated, but they were designed to characterise the algebras which arise as C ∗ -subalgebras of the algebra of bounded linear
operators on a Hilbert space. So we started by looking in detail at that algebra.
1. Hilbert spaces
From the beginning of this subject, the field of scalars is the field of complex
numbers. Later we can explain why this is so important, but the bottom line is that
even basic linear algebra works much better over the complex numbers.
Let V be a vector space over C. An inner product on V is a map (· | ·) : V × V → C
satisfying
(a) v ↦ (v | w) is linear for each fixed w ∈ V;
(b) (v | w) = \overline{(w | v)} for all v, w ∈ V;
(c) (v | v) ≥ 0 for all v ∈ V; and
(d) (v | v) = 0 =⇒ v = 0_V.
The pair consisting of V and the inner product is called an inner-product space. It
follows from parts (a) and (b) that the inner product is conjugate linear in the second
variable.
Example 1.1. For V = Cⁿ, (v | w) := ∑_{k=1}^n v_k \overline{w_k} is an inner product. The complex conjugate is there to ensure that

(v | v) = ∑_{k=1}^n v_k \overline{v_k} = ∑_{k=1}^n |v_k|²

is nonnegative, so that (c) holds. Recall that a complex number z has |z| = 0 ⇐⇒ z = 0, so

(v | v) = 0 =⇒ |v_k|² = 0 for 1 ≤ k ≤ n =⇒ v_k = 0 for 1 ≤ k ≤ n =⇒ v = 0_V in V,

and we also have (d).
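For readers who like to compute, here is a quick numerical sanity check of this inner product (a sketch in Python with numpy; the helper name `inner` is ours, not part of the notes):

```python
import numpy as np

def inner(v, w):
    # (v | w) = sum_k v_k * conj(w_k): linear in v, conjugate linear in w
    return np.sum(v * np.conj(w))

v = np.array([1 + 2j, 3 - 1j])
w = np.array([2j, 1 + 1j])

# (c): (v | v) is real and nonnegative
print(inner(v, v))                        # (15+0j)
# (b): (v | w) is the conjugate of (w | v)
print(inner(v, w), np.conj(inner(w, v)))  # the two printed values agree
```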
Example 1.2. Let ℓ² be the set of sequences x = {x_n} of complex numbers satisfying ∑_{n=1}^∞ |x_n|² < ∞. Then ℓ² (pronounced “little ell two”) is a vector space with the operations cx + dy := {cx_n + dy_n}, and there is an inner product such that

(1.1)  (x | y) = ∑_{n=1}^∞ x_n \overline{y_n}.

It takes some thought to see what these assertions entail. We first need to check that ℓ² is a vector space. Scalar multiplication is given by c{x_n} = {cx_n}, so we need to check that {cx_n} is back in ℓ². For this, we recall that for complex numbers c and z we have |cz| = |c| |z|, so

∑_{n=1}^∞ |cx_n|² = ∑_{n=1}^∞ |c|² |x_n|²,

and this converges (with finite sum |c|² (∑_n |x_n|²)) by the algebra of series (for the real series ∑_n |x_n|² and the real scalar |c|²). Addition is a little trickier: we want to define {x_n} + {y_n} = {x_n + y_n}, but we need to prove that the sequence {x_n + y_n} is in ℓ². To see this, recall(?) that for positive real numbers a and b we have 2ab ≤ a² + b². Thus

0 ≤ |x_n + y_n|² ≤ (|x_n| + |y_n|)² = |x_n|² + |y_n|² + 2|x_n| |y_n|
    ≤ |x_n|² + |y_n|² + (|x_n|² + |y_n|²) = 2(|x_n|² + |y_n|²).

The algebra of series implies that ∑_n 2(|x_n|² + |y_n|²) converges, and hence the comparison test implies that ∑_n |x_n + y_n|² converges. Thus {x_n + y_n} is in ℓ². Now we need to check the axioms of a vector space, and a bit of thought shows that they all follow easily from the field axioms for C.
Now we need to check that the formula gives an inner product on ℓ², and your first question should be “What does that infinite sum on the right mean?” Or, in more sophisticated language, “How do we know that the series converges?” To see that the series does converge, we use the same inequality 2ab ≤ a² + b² to see that |x_n \overline{y_n}| ≤ 2⁻¹(|x_n|² + |y_n|²), and the series converges absolutely by the comparison test¹. Now we need to verify that the inner product satisfies the axioms, and for this another mental check should suffice. (But if your mental check hits doubts you need to check it out more carefully, or ask.)
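The two comparison-test estimates above are easy to watch numerically. Here is a small sketch in Python (the particular sequences x_n = 1/n and y_n = iⁿ/n are just convenient examples of ℓ² sequences):

```python
import numpy as np

n = np.arange(1, 100001)
x = 1.0 / n                    # {1/n} is in l^2
y = (1j) ** n / n              # another l^2 sequence

# 2|x_n||y_n| <= |x_n|^2 + |y_n|^2, termwise
assert np.all(2 * np.abs(x) * np.abs(y) <= np.abs(x)**2 + np.abs(y)**2 + 1e-15)

# partial sums of |x_n + y_n|^2 stay below 2(sum |x_n|^2 + sum |y_n|^2)
lhs = np.cumsum(np.abs(x + y) ** 2)
rhs = 2 * (np.cumsum(np.abs(x) ** 2) + np.cumsum(np.abs(y) ** 2))
assert np.all(lhs <= rhs + 1e-12)
print(lhs[-1], rhs[-1])        # the partial sums are bounded, as claimed
```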
The usual dot product in Rn is used to define the angle between vectors, and the
inner product generalises this idea. In particular, vectors v, w in an inner-product
space V are orthogonal if (v | w) = 0. We write v ⊥ w (pronounced “v perp w”), or
more generally v ⊥ S to mean (v | w) = 0 for all w in a subset S of V .
Proposition 1.3. If (· | ·) is an inner product on a vector space V, then we have the Cauchy-Schwarz inequality

(1.2)  |(v | w)| ≤ (v | v)^{1/2} (w | w)^{1/2}  for all v, w ∈ V.

The formula ‖v‖ := (v | v)^{1/2} defines a norm ‖·‖ on V.
¹This is another example of a result about complex series which we would prove by bootstrapping from the corresponding result about real series.
Proof. If w = 0, both sides are 0. If w ≠ 0, we consider

u := v − ((v | w)/(w | w)) w ⊥ w.

(For motivation: we think of u as the component of v orthogonal to w, and you can quickly check that u ⊥ w and v = u + ((v | w)/(w | w)) w.) We have

0 ≤ (u | u) = ( v − ((v | w)/(w | w))w | v − ((v | w)/(w | w))w )
  = ( v − ((v | w)/(w | w))w | v ) − \overline{((v | w)/(w | w))} ( v − ((v | w)/(w | w))w | w )
  = ( v − ((v | w)/(w | w))w | v ) − 0
  = (v | v) − ((v | w)/(w | w)) (w | v)
  = (v | v) − |(v | w)|²/(w | w),

and rearranging gives (1.2).
We trivially have ‖v‖ ≥ 0 and ‖v‖ = 0 =⇒ v = 0. The calculation

‖cv‖² = (cv | cv) = c c̄ (v | v) = |c|² (v | v) = |c|² ‖v‖²

gives ‖cv‖ = |c| ‖v‖. The Cauchy-Schwarz inequality implies that

‖v + w‖² = (v | v) + (v | w) + (w | v) + (w | w)
  = (v | v) + 2 Re(v | w) + (w | w)    because z + z̄ = 2 Re z
  ≤ (v | v) + 2|(v | w)| + (w | w)
  ≤ (v | v) + 2(v | v)^{1/2} (w | w)^{1/2} + (w | w)
  = (‖v‖ + ‖w‖)²,

which gives the triangle inequality.
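Proposition 1.3 can be stress-tested numerically on random vectors in Cⁿ. A hedged sketch in Python (it checks (1.2) and the triangle inequality, with a small tolerance for rounding):

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(v, w):
    return np.sum(v * np.conj(w))

def norm(v):
    return np.sqrt(inner(v, v).real)

for _ in range(1000):
    v = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    w = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    # Cauchy-Schwarz (1.2) and the triangle inequality
    assert abs(inner(v, w)) <= norm(v) * norm(w) + 1e-12
    assert norm(v + w) <= norm(v) + norm(w) + 1e-12
print("Cauchy-Schwarz and triangle inequality hold on all samples")
```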
The norm allows us to define convergence and hence to do analysis. We say that
a sequence {v_n} in V converges to v (also written v_n → v or lim_{n→∞} v_n = v) if ‖v_n − v‖ → 0 as n → ∞. Notice that convergence in V is by definition an assertion about convergence of a sequence of real numbers, so the standard theorems
about convergence in R are available. Other concepts involving convergence, such as
continuity, extend in a similar fashion.
A word of warning, though: there is no analogue of the Bolzano-Weierstrass theorem in Hilbert space. For those who have seen topology, the point is that in an
infinite-dimensional Hilbert space, the closed unit ball

B(0, 1) = {h ∈ H : ‖h‖ ≤ 1}

is not compact. To see this, take an infinite linearly independent set, and apply the Gram-Schmidt process to turn it into an orthonormal sequence {e_n}. Since

‖e_m − e_n‖ = √2 for all m ≠ n,

{e_n} cannot have a convergent subsequence.
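The computation ‖e_m − e_n‖² = (e_m − e_n | e_m − e_n) = 2 can be checked concretely. The sketch below (Python; numpy's QR factorisation stands in for Gram-Schmidt) is finite-dimensional, but the pairwise-distance calculation is the same:

```python
import numpy as np

rng = np.random.default_rng(1)
# Gram-Schmidt on a random (generically linearly independent) set,
# done here via QR: the columns of q are orthonormal
a = rng.standard_normal((10, 10)) + 1j * rng.standard_normal((10, 10))
q, _ = np.linalg.qr(a)

# ||e_m - e_n|| = sqrt(2) for m != n, so no subsequence can be Cauchy
for m in range(10):
    for n in range(m + 1, 10):
        d = np.linalg.norm(q[:, m] - q[:, n])
        assert abs(d - np.sqrt(2)) < 1e-10
print("all pairwise distances equal sqrt(2)")
```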
A sequence {v_n} is a Cauchy sequence if for every ε > 0, there exists N ∈ N such that

m, n ≥ N =⇒ ‖v_m − v_n‖ < ε.
We say that V is a Hilbert space if every Cauchy sequence {v_n} converges: there exists v ∈ V such that v_n → v as n → ∞. In future we use H for a generic Hilbert space.
The finite-dimensional space Cᵏ is a Hilbert space for every k ∈ N. More interesting is our stock infinite-dimensional example:
Proposition 1.4. With the inner product defined by (x | y) := ∑_{i=1}^∞ x_i \overline{y_i}, ℓ² is a Hilbert space.
Proof. We saw in Example 1.2 that ℓ² is an inner-product space, so we only have to prove that ℓ² is complete. Suppose {x_n} is a Cauchy sequence in ℓ², and write x_n = (x_{n,1}, x_{n,2}, ···). Then for each fixed j,

|x_{m,j} − x_{n,j}|² ≤ ∑_{i=1}^∞ |x_{m,i} − x_{n,i}|² = ‖x_m − x_n‖²,

so {x_{n,j}} is Cauchy and converges to y_j, say. We want to prove that y ∈ ℓ² and ‖x_n − y‖ → 0.

Fix ε > 0, choose N such that

(1.3)  m, n ≥ N =⇒ ‖x_m − x_n‖ < ε,

and fix m ≥ N; we aim to prove that ‖x_m − y‖ ≤ ε. (If you are nervous about the ≤ here, replace the ε in (1.3) by ε/2; then we get ≤ ε/2 < ε at the end.) We know from (1.3) that for every n ≥ N and every K ∈ N we have

∑_{i=1}^K |x_{m,i} − x_{n,i}|² ≤ ∑_{i=1}^∞ |x_{m,i} − x_{n,i}|² = ‖x_m − x_n‖² < ε²;

since there are only finitely many summands and x_{n,i} → y_i for each i, we see that

∑_{i=1}^K |x_{m,i} − x_{n,i}|² → ∑_{i=1}^K |x_{m,i} − y_i|²  as n → ∞.

Thus

∑_{i=1}^K |x_{m,i} − y_i|² ≤ ε²

for every K. But this in turn implies that

∑_{i=1}^∞ |x_{m,i} − y_i|² ≤ ε².

This says both that x_m − y belongs to ℓ², so y = x_m − (x_m − y) is a linear combination of elements of ℓ², hence belongs to ℓ², and that ‖x_m − y‖ ≤ ε, as required.
2. Bounded linear operators on Hilbert space
Since every Hilbert space H is a vector space, we are interested in linear transformations on H, and for some reason, in this context everybody calls them linear
operators. Because we can talk about limits and we want to do analysis, it makes
sense to ask for continuity.
Proposition 2.1. Suppose H is a Hilbert space and T : H → H is a linear operator.
Then the following statements are equivalent:
(a) T is continuous on H;
(b) T is continuous at 0;
(c) there exists M ∈ R such that ‖Th‖ ≤ M‖h‖ for all h ∈ H.
Proof. (a) =⇒ (b) is trivial. For (b) =⇒ (c), we suppose T is continuous at 0. Then taking ε = 42 in the definition of continuity gives δ > 0 such that ‖h‖ ≤ δ =⇒ ‖Th‖ ≤ 42. For any h ∈ H\{0}, δh/‖h‖ has norm δ, so

‖T(δh/‖h‖)‖ ≤ 42,  (δ/‖h‖)‖Th‖ ≤ 42,  and ‖Th‖ ≤ (42/δ)‖h‖;

thus M = 42/δ will do. (One trivially has ‖Th‖ ≤ M‖h‖ if h = 0.)

For (c) =⇒ (a), suppose M satisfies (c) and {h_n} is a sequence in H such that h_n → h. Then

‖Th_n − Th‖ = ‖T(h_n − h)‖ ≤ M‖h_n − h‖ → 0,

so the squeeze principle implies that Th_n → Th. Thus T is continuous at h.
If the equivalent properties of Proposition 2.1 hold for a given T , we say that T is
bounded.
Proposition 2.2. Suppose H is a Hilbert space and T is a bounded linear operator
on H. Then
(2.1)  ‖T‖_op := inf{M : ‖Th‖ ≤ M‖h‖ for all h ∈ H}
(2.2)          = sup{‖Th‖ : ‖h‖ ≤ 1},

and the inf in (2.1) is a minimum: we have ‖Th‖ ≤ ‖T‖_op ‖h‖ for all h ∈ H.
Proof. We observe that the set appearing on the right-hand side of (2.1) is the set
of upper bounds for the set S on the right-hand side of (2.2). Since S is nonempty
(it contains 0), and the definition of bounded means there is at least one such upper
bound M , we deduce that S has a least upper bound. This least upper bound is a
minimum for the set of upper bounds, and hence is the infimum of the set on the
right-hand side of (2.1).
To prove that a given linear transformation is bounded, we usually find a constant M such that ‖Th‖ ≤ M‖h‖ for all h. This implies both that T is bounded and, from (2.1), that ‖T‖_op ≤ M. It is often not possible to calculate ‖T‖_op exactly, but if we want to prove ‖T‖_op = M, say, we usually prove ‖T‖_op ≤ M by proving ‖Th‖ ≤ M‖h‖ and invoking (2.1), and then use (2.2) to get equality, by finding either one unit vector h such that ‖Th‖ = M, or a sequence {h_n} of unit vectors such that ‖Th_n‖ → M.
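For matrices this strategy is easy to watch in action: ‖T‖_op is the largest singular value of T, and each unit vector h gives a lower bound ‖Th‖ ≤ ‖T‖_op. Here is a sketch in Python (the random sampling is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# exact operator norm of a matrix: its largest singular value
exact = np.linalg.norm(T, 2)

# the sup in (2.2) from below: every unit vector gives a lower bound
best = 0.0
for _ in range(20000):
    h = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    h /= np.linalg.norm(h)
    best = max(best, np.linalg.norm(T @ h))

print(exact, best)            # best approaches exact from below
assert best <= exact + 1e-12
```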
Proposition 2.3. Suppose H is a Hilbert space. The set B(H) of bounded linear operators on H is a vector space over C with (cS + dT)h = c(Sh) + d(Th), and ‖·‖_op is a norm on B(H). The set B(H) is closed under composition, and then ST := S∘T is an associative multiplication on B(H) which distributes over addition and scalar multiplication, and satisfies ‖ST‖_op ≤ ‖S‖_op ‖T‖_op.
This proposition is not hard: a great deal of the proof involves deciding what is
actually involved in these assertions and which ones need serious thought. Some
do, though. For example, a multiplication has to be a binary operation, which here
requires that ST is a bounded linear operator. Linearity just requires a mental check,
but “bounded” is not so obvious.
To see that ST is bounded, we first use the final assertion in Proposition 2.2: ‖T‖_op is a minimum for the set on the right-hand side of (2.1), which means that ‖T‖_op is in the set, and ‖Th‖ ≤ ‖T‖_op‖h‖ for all h. The same is true for ‖S‖_op. Then

‖(ST)h‖ = ‖S(Th)‖ ≤ ‖S‖_op ‖Th‖ ≤ ‖S‖_op ‖T‖_op ‖h‖,

which by (2.1) says that ST is bounded and that ‖ST‖_op ≤ ‖S‖_op ‖T‖_op, which is the final assertion in Proposition 2.3.
Now that we have a norm on B(H), we have corresponding notions of convergent
and Cauchy sequences, hence of completeness.
Proposition 2.4. If H is a Hilbert space, then B(H) is complete in the operator
norm.
Proof. Suppose {T_n} is a Cauchy sequence in B(H), so that ‖T_m − T_n‖_op → 0 as m, n → ∞ independently. We have to find T ∈ B(H) such that T_n → T as n → ∞. For each h ∈ H,

‖T_m h − T_n h‖ ≤ ‖T_m − T_n‖_op ‖h‖ → 0 as m, n → ∞ independently,

and hence {T_n h} is a Cauchy sequence in H; now the completeness of H implies that {T_n h} converges. We define Th := lim_{n→∞} T_n h, and we have a well-defined function T : H → H. We need to check that T is a bounded linear operator and that T_n → T in the operator norm. Linearity follows from the continuity of the vector space operations in H. To see that T is bounded, note that the triangle inequality implies that {‖T_n‖_op} is Cauchy, and hence is bounded: say ‖T_n‖_op ≤ M for all n. Then for each h ∈ H, we have ‖T_n h‖ ≤ M‖h‖, and hence ‖Th‖ = lim ‖T_n h‖ ≤ M‖h‖. Thus T is bounded with ‖T‖_op ≤ M.
Finally, we need to check that T_n → T. For this, fix ε > 0. Since {T_n} is Cauchy, there exists N ∈ N such that

m, n ≥ N =⇒ ‖T_m − T_n‖_op < ε/2.

Now holding n fixed, letting m → ∞, and remembering that the norm is continuous, we deduce that for all h ∈ H

n ≥ N =⇒ ‖(T_m − T_n)h‖ < (ε/2)‖h‖ for all m ≥ N
      =⇒ ‖(T − T_n)h‖ ≤ (ε/2)‖h‖
      =⇒ ‖T − T_n‖_op ≤ ε/2,
which implies that ‖T − T_n‖_op → 0.
There is one extra thing that we can do in B(H):
Proposition 2.5. Suppose that H is a Hilbert space. For every T ∈ B(H) there is a
unique T ∗ ∈ B(H) such that
(2.3)  (Th | k) = (h | T∗k) for all h, k ∈ H.

The operation T ↦ T∗ is conjugate linear, reverses multiplication ((ST)∗ = T∗S∗), is involutive ((T∗)∗ = T), is isometric (‖T∗‖_op = ‖T‖_op) and satisfies ‖T∗T‖_op = ‖T‖²_op.
Our main technical tool will be a theorem which says that we can construct elements
of a Hilbert space by constructing linear functions from H to C. In general, a linear
functional on a vector space V over C is a linear map L : V → C. If V has a norm, it
is natural to consider bounded linear functionals: linear maps L : V → C which are
bounded in the operator norm on B(V, C).
Proposition 2.6 (The Riesz representation theorem). Let H be a Hilbert space, and
let k ∈ H. Then L_k : h ↦ (h | k) is a bounded linear functional on H with ‖L_k‖ = ‖k‖, and every bounded linear functional on H is L_k for a unique vector k.
Proof. The map L_k : H → C is linear by linearity of the inner product in the first variable. The Cauchy-Schwarz inequality implies that it is bounded with ‖L_k‖_op ≤ ‖k‖:

|L_k(h)| = |(h | k)| ≤ ‖h‖ ‖k‖ = ‖k‖ ‖h‖.

If k = 0, then ‖L_k‖_op = 0 = ‖k‖; otherwise, k/‖k‖ is a unit vector in H, and

L_k(k/‖k‖) = (k | k)/‖k‖ = ‖k‖

implies that ‖L_k‖_op ≥ ‖k‖, so we actually have ‖L_k‖_op = ‖k‖.
To see that every bounded functional has the form L_k, suppose L is a bounded linear functional on H. If L is 0, then we can take k = 0. So suppose L ≠ 0.
Notice that the k we want must satisfy (h | k) = L(h) = 0 for all h ∈ ker L, so k
must be in (ker L)⊥ . Now ker L is a linear subspace of H (because the kernel of any
linear transformation is), is not all of H (because L ≠ 0), and is closed (because L is bounded and hence continuous: if h_n ∈ ker L and h_n → h, then 0 = Lh_n → Lh,
and Lh = 0). Let g be a vector which does not lie in ker L, and let P denote the
orthogonal projection of H on ker L (which exists because ker L is closed). Then
P g 6= g, so l := g − P g is a non-zero vector in (ker L)⊥ .
To finish off, note that L(l)h − L(h)l belongs to ker L for every h ∈ H, so

0 = (L(l)h − L(h)l | l) = L(l)(h | l) − L(h)‖l‖²  for all h ∈ H,

which implies that L(h) = ‖l‖⁻² L(l)(h | l), and k := ‖l‖⁻² \overline{L(l)} l has L_k = L.
To see that k is unique, suppose there exists f ∈ H such that (h | f) = L(h) = (h | k) for all h ∈ H. Subtracting gives (h | f − k) = 0 for all h. We can certainly apply this to h := f − k, and we deduce that ‖f − k‖² = 0, f − k = 0, and f = k.
Proof of Proposition 2.5. Uniqueness is clear: (h | Rk) = (h | Sk) for all h, k implies
R = S. For fixed k, the left-hand side of (2.3) is linear in h, and

|(Th | k)| ≤ ‖Th‖ ‖k‖ ≤ ‖k‖ ‖T‖_op ‖h‖,
so h ↦ (Th | k) is a bounded linear functional with operator norm at most ‖k‖ ‖T‖_op. Thus the Riesz representation theorem says there is a unique vector T∗k in H such that (2.3) holds. Thus we have a well-defined function T∗ from H to H.

To see that T∗ is linear, we take k, l ∈ H and c, d ∈ C. Then

(h | T∗(ck + dl)) = (Th | ck + dl) = c̄(Th | k) + d̄(Th | l)
  = c̄(h | T∗k) + d̄(h | T∗l) = (h | c(T∗k) + d(T∗l)),

and the uniqueness in the Riesz theorem implies that T∗(ck + dl) = c(T∗k) + d(T∗l). Since the operator norm of h ↦ (h | T∗k) is ‖T∗k‖, we have ‖T∗k‖ ≤ ‖k‖ ‖T‖_op, so T∗ is bounded.
It remains to verify the five properties. For all h, k ∈ H, we have

(h | (cS + dT)∗k) = ((cS + dT)h | k)
  = c(Sh | k) + d(Th | k)
  = c(h | S∗k) + d(h | T∗k)
  = (h | (c̄S∗ + d̄T∗)k).

Thus (cS + dT)∗k = (c̄S∗ + d̄T∗)k for all k, and (cS + dT)∗ = c̄S∗ + d̄T∗. The calculation

((ST)h | k) = (S(Th) | k) = (Th | S∗k) = (h | T∗(S∗k)) = (h | (T∗S∗)k)

shows that (ST)∗ = T∗S∗. To see that (T∗)∗ = T, we conjugate (2.3):

(T∗h | k) = \overline{(k | T∗h)} = \overline{(Tk | h)} = (h | Tk).

We now use (T∗)∗ = T to see that

‖T‖_op = ‖(T∗)∗‖_op ≤ ‖T∗‖_op ≤ ‖T‖_op,

which forces ‖T∗‖_op = ‖T‖_op. This and the inequality ‖ST‖_op ≤ ‖S‖_op ‖T‖_op imply that ‖T∗T‖_op ≤ ‖T∗‖_op ‖T‖_op = ‖T‖²_op, and then the inequality

‖Th‖² = (Th | Th) = (h | T∗Th) ≤ ‖h‖ ‖T∗Th‖ ≤ ‖T∗T‖_op ‖h‖²

gives ‖T∗T‖_op = ‖T‖²_op.
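On H = Cⁿ the adjoint is the conjugate-transpose matrix, and all of these identities can be verified numerically. A minimal sketch in Python (assuming numpy; `inner` is our helper name):

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
k = rng.standard_normal(4) + 1j * rng.standard_normal(4)

def inner(v, w):
    return np.sum(v * np.conj(w))

Tstar = T.conj().T   # on C^n the adjoint is the conjugate transpose

# (2.3): (Th | k) = (h | T*k)
assert np.isclose(inner(T @ h, k), inner(h, Tstar @ k))
# (ST)* = T*S*
assert np.allclose((S @ T).conj().T, Tstar @ S.conj().T)
# the C*-identity ||T*T|| = ||T||^2
assert np.isclose(np.linalg.norm(Tstar @ T, 2), np.linalg.norm(T, 2) ** 2)
print("adjoint identities verified")
```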
We care about the involution on B(H) because it allows us to encode important
geometric properties of Hilbert spaces algebraically. We illustrate with an example
which is directly relevant to graph algebras.
A subspace K of a Hilbert space H is a subset that is closed under the vector-space
operations:
h, k ∈ K and c ∈ C =⇒ ch ∈ K and h + k ∈ K.
A closed subspace of H is a subspace K which is also closed under limits:
{hn } ⊂ K and hn → h ∈ H =⇒ h ∈ K.
The closed subspaces of H are themselves Hilbert spaces with the same inner product.
To see this, observe that every Cauchy sequence {kn } in K is also Cauchy in H, hence
converges in H, say kn → h. Then closedness of K implies that h ∈ K, so {kn }
converges to h in K. Closed subspaces have an important geometric property:
Proposition 2.7. Suppose that K is a closed subspace of a Hilbert space H and
h ∈ H. Then there exists exactly one k ∈ K such that
‖h − l‖ ≥ ‖h − k‖ for all l ∈ K.
The point k is uniquely characterised by k ∈ K and h − k ⊥ K.
This result is important in the theory of Hilbert space because it is the place where
analysis enters the subject: one constructs the closest point k by choosing a sequence {k_n} in K such that ‖k_n − h‖ → d(h, K) := inf{‖h − l‖ : l ∈ K}, and then proving that {k_n} is Cauchy (using that the norm comes from an inner product). Then we
take k to be the limit, and have to argue uniqueness.
It is important for us because it tells us that there is a well-defined function P :
H → K such that P h ∈ K and h − P h ⊥ K. We call P the orthogonal projection of
H on K. Notice that P h = h for h ∈ K.
Proposition 2.8. Suppose that K is a nonzero closed subspace of a Hilbert space H.
Then the orthogonal projection P of H on K is a bounded linear operator on H with
‖P‖_op = 1, P² = P and P = P∗.
Conversely, if P ∈ B(H) satisfies P² = P = P∗, then PH is a closed subspace of H, and P is the orthogonal projection of H on PH.
So closed subspaces of Hilbert space are essentially the same as orthogonal projections in B(H). The relationships between different subspaces can often be described
using the algebra of B(H). For example:
Proposition 2.9. Suppose that P, Q are the orthogonal projections of H on closed
subspaces K, L. Then K ⊂ L if and only if PQ = P = QP.
Proof. Suppose K ⊂ L. Then for all h ∈ H, we have Ph ∈ K ⊂ L, so Ph = Q(Ph) = (QP)h. Thus QP = P, and PQ = P∗Q∗ = (QP)∗ = P∗ = P.

Conversely, suppose PQ = P = QP and k ∈ K. Then k = Pk, and Qk = Q(Pk) = (QP)k = Pk = k, so k ∈ L.
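A minimal concrete instance of Proposition 2.9 (our own Python sketch, with K = span{e₁} inside L = span{e₁, e₂} in C³):

```python
import numpy as np

P = np.diag([1.0, 0.0, 0.0])   # orthogonal projection onto K = span{e1}
Q = np.diag([1.0, 1.0, 0.0])   # orthogonal projection onto L = span{e1, e2}

# K ⊂ L is encoded algebraically: PQ = P = QP
assert np.allclose(P @ Q, P) and np.allclose(Q @ P, P)
print("PQ = P = QP holds for the nested subspaces")
```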
3. C ∗ -algebras
We can sum up Propositions 2.3, 2.4 and 2.5 by saying that B(H) is a C ∗ -algebra.
There are many others: if A is a subset of B(H) that is closed under the vector space
operations, multiplication, adjoints, and operator-norm limits, then A is a C ∗ -algebra.
But for this to really make sense, we need a formal definition.
Definition. A C∗-algebra is a vector space A over C with an associative multiplication (a, b) ↦ ab such that a(b + c) = ab + ac and a(zb) = z(ab) = (za)b for z ∈ C, a conjugate linear involution a ↦ a∗ such that (ab)∗ = b∗a∗, and a complete norm such that ‖ab‖ ≤ ‖a‖ ‖b‖ and ‖a∗a‖ = ‖a‖².
A C∗-algebra may or may not have an identity 1 such that 1a = a = a1; if so, it is unique, and satisfies 1 = 1∗ and ‖1‖ = 1.
The axioms were first formulated by Gelfand and Naimark in 1943, and were gradually refined and simplified over the next thirty years. (The book [2] contains a detailed discussion of these theorems and their history.) There are two “Gelfand-Naimark
theorems” which are the cornerstones of the subject. The first connects the abstract
definition we just gave with our previous discussion.
Theorem 3.1 (The Gelfand-Naimark theorem). For every C ∗ -algebra A there are a
Hilbert space H and an isometric ∗-preserving homomorphism π : A → B(H). If A
has an identity 1, then we can choose H and π so that π(1) is the identity operator
1H on H.
Then the range of π is automatically closed (it is complete because A is and π is
isometric), and hence π is an isomorphism of A onto a C ∗ -subalgebra π(A) of B(H).
Most textbooks on C ∗ -algebras contain a proof of this theorem, and the proof contains
important ideas for operator algebraists, though for most purposes we just need to
know the result. If you need the details, our biased recommendation is that you look
at the one in [8, Appendix A].
Now we make some comments on the jargon that operator algebraists use. For
them, homomorphisms of C ∗ -algebras are always ∗-preserving. A homomorphism
π : A → B(H) is called a representation of A on H; it is called faithful if it is
injective, in which case it is automatically norm-preserving. Operator algebraists
often apply the Gelfand-Naimark theorem silently: we can always choose to put a
C ∗ -algebra faithfully inside some B(H), and this allows us to use spatial arguments
whenever we want. For anything involving the C ∗ -algebra structure, it doesn’t matter
how we do this.
We can now define projections in an arbitrary C∗-algebra A as elements p ∈ A such that p² = p = p∗. Then, whenever π : A → B(H) is a representation, Proposition 2.8 implies that π(p) is the orthogonal projection onto the closed subspace π(p)H. Proposition 2.9 then says that containment of subspaces is preserved by this process: if pq = p = qp, then π(p)H is always contained in π(q)H.
We can use similar ideas to define other special classes of elements of C ∗ -algebras,
and this too is important for those wanting to read about graph algebras. First we
work with an operator T ∈ B(H). We say that T is an isometry if ‖Th‖ = ‖h‖ for all h ∈ H; a unitary if T is an isometry and T is surjective; a partial isometry if T is an isometry on (ker T)⊥. Then we have:
Proposition 3.2. Suppose that T ∈ B(H). Then
(a) T is an isometry if and only if T∗T = 1; if so, TT∗ is the orthogonal projection on TH;
(b) T is a unitary if and only if T∗T = 1 = TT∗;
(c) T is a partial isometry if and only if T = TT∗T, or if and only if T∗T is a projection, or if and only if TT∗ is a projection; if so, then T∗T is the projection on (ker T)⊥ and TT∗ is the projection on TH.
Problem 7 on the problem sheet talks you through part (a). Given that, part (b) is
quite easy, because then part (a) applies to T ∗ . Part (c) is proved in [6, Theorem 2.3.3]
or the Appendix in [7], for example.
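On Cⁿ a non-unitary isometry cannot exist (for square matrices T∗T = 1 forces TT∗ = 1), but partial isometries are easy to exhibit. Here is a sketch of part (c) for a truncated shift (Python; the matrix is our own toy example):

```python
import numpy as np

# a finite shift: S e_i = e_{i+1} for i < n, and S e_n = 0
n = 5
S = np.zeros((n, n))
for i in range(n - 1):
    S[i + 1, i] = 1.0

# S is a partial isometry: S = S S* S
assert np.allclose(S, S @ S.T @ S)
# S*S is the projection onto (ker S)^perp = span{e_1, ..., e_{n-1}}
print(np.diag(S.T @ S))   # [1. 1. 1. 1. 0.]
# SS* is the projection onto the range span{e_2, ..., e_n}
print(np.diag(S @ S.T))   # [0. 1. 1. 1. 1.]
```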
Now the algebraic characterisations make sense in any C ∗ -algebra, and they carry
the geometric information with them when they go onto Hilbert space. Thus, for
example, if π : A → B(H) is a representation and a is a partial isometry in A, then
π(a) restricts to a unitary isomorphism of (ker π(a))⊥ onto π(a)H.
If we only want to study a particular C∗-subalgebra of B(H), then the Gelfand-Naimark theorem doesn’t tell us much. However, C∗-algebras arise in many other
ways, and we next look at a couple of examples. In the first one, “compact space”
means compact Hausdorff space. If you are not familiar with the theorems about
continuous functions on these spaces, you should think of X as a closed bounded
subset of Rn .
Example 3.3. Let X be a compact space, and let C(X) be the set of all continuous
functions f : X → C, which is closed under pointwise addition, scalar multiplication, and multiplication. Since continuous functions on compact Hausdorff spaces are
bounded, we can set
‖f‖ = ‖f‖_∞ := sup{|f(x)| : x ∈ X}.
Then, with the involution defined by pointwise complex conjugation (so f∗(x) := \overline{f(x)}), C(X) is a C∗-algebra. The function 1 = 1_{C(X)} defined by 1_{C(X)}(x) = 1 for all x is an identity for C(X).
Most of this is routine, except possibly the norm identities and completeness. For f, g ∈ C(X), ‖f‖_∞ is an upper bound for {|f(x)|}, and similarly for g, so ‖f‖_∞ ‖g‖_∞ is an upper bound for {|f(x)| |g(x)|}. Hence it is at least the least upper bound ‖fg‖_∞, which gives ‖fg‖ ≤ ‖f‖ ‖g‖. Then

‖f∗f‖ = sup |\overline{f(x)} f(x)| = sup(|f(x)|²) = (sup |f(x)|)² = ‖f‖².

So we need to check that if S is a bounded subset of [0, ∞), then (sup S)² is the least upper bound of {t² : t ∈ S}, and then we have ‖f∗f‖ = ‖f‖².
For completeness, observe that convergence in ‖·‖_∞ is uniform convergence of functions. If ‖f_m − f_n‖_∞ → 0 as m, n → ∞ independently, then each {f_n(x)} is a Cauchy sequence of complex numbers, and hence has a limit f(x) ∈ C; then, given ε > 0, for fixed n such that ‖f_m − f_n‖_∞ < ε for m ≥ n, letting m → ∞ gives ‖f − f_n‖_∞ ≤ ε. So f_n → f uniformly. Now f is the uniform limit of a sequence of continuous functions, and hence is itself continuous.
Notice that there isn’t a Hilbert space in sight!
Example 3.4 (Group algebras). Let G be a group, and let
CG := {f : G → C : f(s) ≠ 0 for at most finitely many s ∈ G},
which is a vector space over C in the usual pointwise operations. For f, g ∈ CG, the
convolution of f and g is the function f ∗ g : G → C defined by

(f ∗ g)(s) = ∑_{t∈G} f(t) g(t⁻¹s),
and CG is an algebra with convolution as product. (This is not meant to be obvious.)
To understand why we define the multiplication this way, notice that CG has a basis
consisting of the point masses
δ_s(t) := 1 if t = s, and 0 if t ≠ s;

indeed, every f ∈ CG can be written as f = ∑_{s∈G} f(s)δ_s (which is a finite sum because f(s) is only non-zero for finitely many s). The convolution product on point masses is given by δ_r ∗ δ_s = δ_{rs}, and the idea of convolution is to extend this product bilinearly to all of CG = span{δ_s}.
The formula f∗(s) := \overline{f(s⁻¹)} defines an involution on CG, which mirrors inversion in G: δ_s∗ = δ_{s⁻¹}. Thus CG is a ∗-algebra. It is not complete in any reasonable norm
unless G is finite. There is, however, a very important construction which does embed
CG in a C ∗ -algebra, and which is the model for many similar constructions in the
literature. An element u of a ∗-algebra A with identity 1 is unitary if u∗ u = uu∗ = 1,
and the set U(A) of unitary elements of A is a group under multiplication. The map δ : g ↦ δ_g is a homomorphism of G into U(CG), and if A is a ∗-algebra with 1 and
u : G → U(A) is a homomorphism, the formula

π_u(f) = π_u(∑_{s∈G} f(s)δ_s) := ∑_{s∈G} f(s)u_s
defines a unital ∗-algebra homomorphism π_u : CG → A, from which we can recover u as π_u ∘ δ. Indeed, every unital ∗-algebra homomorphism π : CG → A has the form π_u with u = π ∘ δ. We sum this up by saying that “the pair (CG, δ) is universal for homomorphisms u of G into the unitary groups U(A) of a ∗-algebra A.”
To make CG into a C∗-algebra, we choose a unitary representation u of G in the unitary group of a C∗-algebra A. Then, provided π_u is faithful on CG, we can pull the norm on A back to a norm ‖·‖_u on CG: ‖f‖_u := ‖π_u(f)‖, and complete to get a C∗-algebra C∗_u(G). There is always at least one choice for a representation such that π_u is faithful on CG: the regular representation λ : G → U(ℓ²(G)) characterised by (λ_t h)(s) = h(t⁻¹s). The resulting C∗-completion C∗_λ(G) is called the reduced group C∗-algebra, and denoted by C∗_r(G). There is also a possibly larger completion called the full group C∗-algebra and denoted C∗(G). It need not be the same, and indeed C∗_r(G) = C∗(G) if and only if G is amenable [1].
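The convolution product and the involution on CG are concrete enough to compute with. A sketch for the cyclic group G = Z/3Z, written additively so that δ_r ∗ δ_s = δ_{r+s} (Python; the helper names are ours):

```python
import numpy as np

# CG for G = Z/3Z: f is the coefficient vector (f(0), f(1), f(2)),
# and convolution is (f * g)(s) = sum_t f(t) g(s - t)
def convolve(f, g, order=3):
    h = np.zeros(order, dtype=complex)
    for s in range(order):
        for t in range(order):
            h[s] += f[t] * g[(s - t) % order]
    return h

def delta(s, order=3):
    d = np.zeros(order, dtype=complex)
    d[s] = 1.0
    return d

# point masses multiply like group elements: delta_1 * delta_2 = delta_0
print(convolve(delta(1), delta(2)))          # [1.+0.j 0.+0.j 0.+0.j]
# the involution f*(s) = conj(f(-s)) sends delta_1 to delta_2
fstar = np.conj(delta(1)[(-np.arange(3)) % 3])
print(np.allclose(fstar, delta(2)))          # True
```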
4. Commutative C ∗ -algebras
A C ∗ -algebra A is commutative if ab = ba for all a, b ∈ A. The example you have
met so far is C(X): there the multiplication of f, g ∈ C(X) is given by pointwise
multiplication of the values, and since wz = zw for all w, z ∈ C, we have
(fg)(x) = f(x)g(x) = g(x)f(x) = (gf)(x) for all x.
The other Gelfand-Naimark theorem says that this is the only example:
Theorem (The Gelfand-Naimark theorem). For every commutative C ∗ -algebra A
with 1, there is a compact Hausdorff space X such that A is isomorphic to C(X).
Remember that, by convention, isomorphisms of C ∗ -algebras are assumed to be
∗-preserving, and, by theorem, always preserve the norm.
To use this Gelfand-Naimark theorem effectively, we need to have some understanding of the construction of the isomorphism, which uses powerful techniques previously
developed by Gelfand in 1940 in the context of (what we now call) commutative Banach algebras (no adjoint is the big difference). Then we can give a more precise and
useful statement.
The underlying set of the compact space X = X_A is the set of nonzero algebra homomorphisms φ : A → C. These always satisfy φ(1) = 1, and are bounded with ‖φ‖_op = 1. Gelfand’s original theory did not require these homomorphisms to be
∗-preserving, and this is important in his proofs, but it turns out that in our case
they are, and this is important for us.
Lemma 4.1. Suppose A is a commutative C∗-algebra with 1, and φ is a non-zero homomorphism from A to C. Then φ(a∗) = \overline{φ(a)} for all a ∈ A.
Proof. An element a of a C∗-algebra is called self-adjoint if a = a∗. Every element a can then be written as b + ic with b and c self-adjoint, by taking b = (a + a∗)/2 and c = −i(a − a∗)/2. If we knew that φ(b), φ(c) were real, then we would have

φ(a∗) = φ((b + ic)∗) = φ(b∗ − ic∗) = φ(b − ic)
  = φ(b) − iφ(c) = \overline{φ(b) + iφ(c)} = \overline{φ(b + ic)} = \overline{φ(a)}.

Thus it will suffice to show that when d ∈ A is self-adjoint, φ(d) is real. Suppose d = d∗ and φ(d) = x + iy, and consider b_t = d + it1 for t ∈ R. Then

(4.1)  |φ(b_t)|² = |φ(d) + itφ(1)|² = |x + i(y + t)|² = x² + (y + t)².

Since ‖φ‖_op = 1, we also have

(4.2)  |φ(b_t)|² ≤ ‖b_t‖² = ‖b_t∗ b_t‖ = ‖(d − it1)(d + it1)‖ = ‖d² + t²1‖ ≤ ‖d²‖ + t².

Putting (4.1) and (4.2) together gives x² + y² + 2yt ≤ ‖d²‖ for all t ∈ R, which is impossible unless y = 0. Thus φ(d) = x belongs to R, and the lemma is proved.

We now want to make X into a compact space. For this we observe that, since
X consists of bounded functionals of norm 1, it sits inside the unit ball of the dual
Banach space A∗ of bounded linear functionals from A to C. This unit ball is not
compact in the usual operator-norm topology, but it does have an alternative weak*
topology in which it is compact and Hausdorff (by a 1930’s theorem of Banach and
Alaoglu). For our purposes, it suffices to note that a sequence {φn } in A∗ converges
weak* to φ if and only if φn (a) → φ(a) for all a ∈ A.
Lemma 4.2. The space X = X_A is weak* closed in A∗.
Proof. Suppose that {φ_n} ⊂ X and φ_n → φ weak* in A∗. Then for a, b ∈ A we have

φ_n(ab) = φ_n(a)φ_n(b) → φ(a)φ(b),

and continuity of multiplication implies that φ(ab) = φ(a)φ(b). So φ is a homomorphism. Since 1 = φ_n(1) → φ(1), we have φ(1) = 1 and φ is non-zero. So φ ∈ X.

Now as a closed subset of a compact Hausdorff space, X is itself a compact Hausdorff space. This compact Hausdorff space is called the spectrum or maximal ideal space of A. Now the key idea.
For a ∈ A, the Gelfand transform of a is the function â : X → C defined by â(φ) = φ(a). The function â is continuous:

φ_n → φ in X =⇒ φ_n(a) → φ(a) =⇒ â(φ_n) → â(φ).
Since each φ(1) = 1, 1̂_A = 1_{C(X)}; since each φ ∈ X is multiplicative, so is a ↦ â; and since each φ is ∗-preserving, so is a ↦ â. Thus the Gelfand transform gives a homomorphism of the C∗-algebra A into C(X) which takes 1 to 1.

We are going to outline a proof that the Gelfand transform is in fact an isomorphism. To do this we need two major tools.
Our first main tool goes back at least to 1940. For a ∈ A, the spectrum of a is the
set
σ(a) = {λ ∈ C : a − λ1 is NOT invertible}.
(Hint: That “NOT” can make the spectrum tricky to work with, and one often
prefers to convert statements about the spectrum into ones about λ ∉ σ(a), which is
the positive assertion that a − λ1 has an inverse.)
Examples. For a ∈ A = M_n(C), σ(a) is the set of complex eigenvalues. The unilateral shift S ∈ B(ℓ²) has no eigenvalues, but in fact σ(S) is the closed unit disc in C. For f ∈ C(X), σ(f) = range(f).
Theorem 4.3 (Gelfand/Beurling). For each element a of a C∗-algebra A, σ(a) is a nonempty compact subset of C, and

r(a) := max{|λ| : λ ∈ σ(a)} = lim_{n→∞} ‖aⁿ‖^{1/n}.
For a proof of this result, see [6, pages 9–10]. The last assertion is called the
spectral radius formula: its proof uses highly nontrivial complex analysis, and it is
a very powerful tool. (There are several problems on the sheet which you’ll have
trouble doing without it.)
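For a matrix, σ(a) is the set of eigenvalues, so the spectral radius formula can be watched converging. A sketch in Python (we normalise a first so that the powers aⁿ stay representable):

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
a /= np.linalg.norm(a, 2)      # normalise to avoid overflow in a^n

# for a matrix, r(a) is the largest eigenvalue modulus
r = np.max(np.abs(np.linalg.eigvals(a)))

# ||a^n||^{1/n} approaches r(a) as n grows
for n in [1, 10, 50, 200]:
    an = np.linalg.matrix_power(a, n)
    print(n, np.linalg.norm(an, 2) ** (1.0 / n))
print("spectral radius r(a):", r)
```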
Our second main tool is a theorem about C(X) called the Stone-Weierstrass theorem. It is a generalisation of a classical theorem of Weierstrass about approximating continuous functions on intervals by polynomials, and was first proved by Stone in the 1930s. He later found a more elegant and understandable lattice-theoretic proof which appears in several books (for example, [3]).
Theorem 4.4 (The Stone-Weierstrass theorem; Stone, 1937). Suppose that X is
a compact Hausdorff space and A is a C ∗ -subalgebra of C(X) which contains the
constants and separates points of X (that is, for all x, y ∈ X with x ≠ y, there exists f ∈ A such that f(x) ≠ f(y)). Then A = C(X).
Theorem 4.5 (The Gelfand-Naimark theorem). Suppose that A is a commutative
C ∗ -algebra with 1, and let X be the spectrum of A. Then the Gelfand transform is an
isomorphism of A onto C(X), and σ(a) = σ(â) = range(â).
Outline of the proof. We have already seen that the Gelfand transform is a ∗-homomorphism of A into C(X). Gelfand theory² says that a is invertible in A if and only if â is invertible in C(X), and since the spectrum is defined in terms of invertibility, we have σ(a) = σ(â). That σ(f) = range(f) for f ∈ C(X) is straightforward (we just have to think about what the inverse of λ1 − f would be). By the spectral radius formula we have

‖â‖_∞ = r(â) = r(a) = lim_{n→∞} ‖aⁿ‖^{1/n}.
²Though unfortunately the arguments I know dip out of the C∗-algebra context at a crucial stage.
We next prove that the Gelfand transform is isometric. First suppose a = a∗. Then

‖a‖² = ‖a∗a‖ = ‖a²‖ =⇒ ‖a^{2^k}‖^{1/2^k} = ‖a‖ for all k ∈ N
  =⇒ lim_{k→∞} ‖a^{2^k}‖^{1/2^k} = lim_{n→∞} ‖aⁿ‖^{1/n} = ‖a‖
  =⇒ r(a) = ‖a‖
  =⇒ ‖â‖ = ‖a‖.

Now for arbitrary a, a∗a is self-adjoint, so the previous calculation implies that

‖a‖² = ‖a∗a‖ = ‖\widehat{a∗a}‖ = ‖(â)∗â‖ = ‖â‖²,

and a ↦ â is isometric.

The image of a C∗-algebra under an isometric ∗-isomorphism is a C∗-algebra, and hence Â is a C∗-subalgebra of C(X). It contains 1_{C(X)} = 1̂_A. And it separates points:

â(φ) = â(ψ) for all a ∈ A =⇒ φ(a) = ψ(a) for all a =⇒ φ = ψ in X.

So the Stone-Weierstrass theorem implies that Â = C(X), and the Gelfand transform is surjective.
5. The functional calculus and positivity
An element a of a C ∗ -algebra A is called normal if aa∗ = a∗ a. (But you should
not be fooled by the name: as Kelley said, just because you call something normal
doesn’t make it normal.)
Examples. All projections, self-adjoint elements and unitary elements are normal. Not
all isometries are, and not many partial isometries are (just the projections, actually).
Suppose that a is normal. Then the C∗-algebra C∗(a) generated by a and 1 is the smallest C∗-subalgebra of A containing both a and 1, and we have

C∗(a) = \overline{span}{a^m (a∗)^n : m, n ∈ N}.

Notice that C∗(a) is a commutative C∗-algebra with 1, and hence we can apply the Gelfand-Naimark theorem to it. So there is an isomorphism of C∗(a) onto C(X_{C∗(a)}).
To see what X is for the algebra C∗(a), note that the fine print in the Gelfand-Naimark theorem says that â is a continuous function from X to C with range σ(a). We claim that â is one-to-one. Suppose that φ, ψ ∈ X and â(φ) = â(ψ). Then both φ and ψ are continuous ∗-homomorphisms, so

â(φ) = â(ψ) =⇒ φ(a) = ψ(a)
  =⇒ φ(a^m(a∗)^n) = ψ(a^m(a∗)^n) for all m, n
  =⇒ φ(b) = ψ(b) for all b ∈ C∗(a),

and we have φ = ψ. Thus â is a continuous, one-to-one surjection of the compact space X onto the Hausdorff space σ(a), and hence is a homeomorphism. Any homeomorphism h : Y → Z of compact spaces induces an isomorphism C_h : C(Z) → C(Y)
given by C_h(f) = f ∘ h. So composing with the inverse of the Gelfand transform G gives an isomorphism

Γ = G⁻¹ ∘ C_â : C(σ(a)) → C(X_{C∗(a)}) → C∗(a) ⊂ A.

We let ι denote the identity function ι : z ↦ z in C(σ(a)). Then we have

Γ(ι) = G⁻¹(C_â(ι)) = G⁻¹(ι ∘ â) = G⁻¹(â) = a.
Thus we have most of:
Theorem 5.1 (The continuous functional calculus). Suppose a is a normal element
of a C ∗ -algebra A with 1. Then there is a unique unital homomorphism Γ of C(σ(a))
into A which takes the identity function ι : z ↦ z to a, and Γ is then an isomorphism
of C(σ(a)) onto the C ∗ -algebra C ∗ (a) generated by a and 1. For f ∈ C(σ(a)) we have
(5.1)  σ(Γ(f)) = f(σ(a)) = {f(λ) : λ ∈ σ(a)}.
Remark 5.2. There is a subtlety here: the σ(a) appearing in the theorem is the spectrum σ_{C∗(a)}(a) of a in the C∗-algebra C∗(a) rather than σ_A(a). However, there is a phenomenon called spectral permanence which says the two are the same: if A is a C∗-algebra, B is a C∗-subalgebra of A, and b ∈ B, then σ_A(b) = σ_B(b) (for a proof, see [6, Theorem 2.1.11]).
We usually write f (a) for Γ(f ), and say that “f (a) is defined using the continuous
functional calculus for a”. Then (5.1) becomes σ(f (a)) = f (σ(a)), which is known as
the spectral mapping theorem.
The idea is that f (a) is supposed to behave like f does. We illustrate with a couple
of examples.
Examples 5.3. (a) If f(z) = ∑_{n=0}^N c_n zⁿ, then f is the element ∑_{n=0}^N c_n ιⁿ of C(σ(a)). Because Γ is a homomorphism, we have

f(a) = Γ(f) = ∑_{n=0}^N c_n Γ(ι)ⁿ = ∑_{n=0}^N c_n aⁿ.
(b) Suppose that a ∈ A has σ(a) ⊂ [0, ∞) and f : [0, ∞) → [0, ∞) is the usual square-root function f(x) = √x. Then f is continuous on σ(a), and we can form b = f(a). Now the whole point of the square-root function is that f(x)² = x, or equivalently f² = ι in C(σ(a)). So, since Γ is a homomorphism, we have

b² = f(a)² = Γ(f)² = Γ(f²) = Γ(ι) = a.

Since f : [0, ∞) → [0, ∞), the spectral mapping theorem implies that

σ(b) = σ(f(a)) = f(σ(a)) ⊂ [0, ∞).
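For a normal matrix the functional calculus is completely concrete: diagonalise, and apply f to the eigenvalues. Here is a sketch of Example 5.3(b) in Python (the clip guards against eigenvalues that rounding pushes slightly below 0):

```python
import numpy as np

rng = np.random.default_rng(5)
c = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
a = c.conj().T @ c              # a = c*c is positive, so sigma(a) ⊂ [0, ∞)

# functional calculus for a normal matrix: apply f to the eigenvalues
lam, u = np.linalg.eigh(a)      # a = u diag(lam) u*
b = u @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ u.conj().T

assert np.allclose(b @ b, a)                     # b^2 = a
assert np.allclose(b, b.conj().T)                # b is self-adjoint
assert np.min(np.linalg.eigvalsh(b)) >= -1e-12   # sigma(b) ⊂ [0, ∞)
print("square root via the functional calculus verified")
```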
Corollary 5.4. Suppose that A is a C∗-algebra with 1 and a ∈ A is self-adjoint. Then
(a) σ(a) ⊂ R;
(b) if σ(a) ⊂ [0, ∞), there exists b ∈ A such that b² = a and σ(b) ⊂ [0, ∞);
(c) there exist a₊, a₋ ∈ A such that σ(a±) ⊂ [0, ∞) and a = a₊ − a₋.
Proof. For (a), note that the functional calculus isomorphism Γ : C(σ(a)) → C∗(a) takes ι to a. Since a is self-adjoint, ι is self-adjoint, and hence is real-valued. Thus σ(a) = range(ι) ⊂ R. (This argument also carries a hidden reliance on spectral permanence, as in Remark 5.2.) We proved (b) in Examples 5.3. For (c), we define f± : R → [0, ∞) by f₊(x) := max{x, 0} and f₋(x) := max{−x, 0}, and take a± := f±(a). Then the spectral mapping theorem implies that σ(a±) ⊂ [0, ∞), and x = f₊(x) − f₋(x) for x ∈ R implies that a = ι(a) = f₊(a) − f₋(a) = a₊ − a₋.
Proposition 5.5. Suppose that A is a C ∗ -algebra and a ∈ A is self-adjoint. Then
σ(a) ⊂ [0, ∞) ⇐⇒ a = b∗ b for some b ∈ A.
We have just seen the forward implication. The reverse is nontrivial, and the usual
proofs involve some clever tricks originally due to Kaplansky (see [6, Theorem 2.2.4]
and the historical notes in [2], for example).
We say that a self-adjoint element a is positive if it has the form b∗b, and write a ≥ 0. More generally, we write a ≥ b to mean that a − b ≥ 0. We notice straightaway that the positive elements span A: indeed, for a ∈ A, we have

a = b + ic  for b = (a + a∗)/2 and c = (a − a∗)/2i
  = b₊ − b₋ + i(c₊ − c₋).
Now we summarise some other key properties of positivity. A good reference for
this material is [6, §2.2 and §2.3].
Proposition 5.6. Suppose that A is a C∗-algebra with identity. Then
(a) For a, b ∈ A, we have 0 ≤ a =⇒ 0 ≤ a ≤ ‖a‖1, and 0 ≤ a ≤ b =⇒ ‖a‖ ≤ ‖b‖.
(b) The set A⁺ of positive elements is a proper cone: we have a, b ∈ A⁺ =⇒ a + b ∈ A⁺; a ∈ A⁺, c ∈ [0, ∞) =⇒ ca ∈ A⁺; and A⁺ ∩ (−A⁺) = {0}. The relation ≤ is a partial order on A.
(c) If π : A → B(H) is a faithful representation, then a ≥ 0 ⇐⇒ (π(a)h | h) ≥ 0 for all h ∈ H.
Positivity enters the picture in graph algebras in several places via the relation ≥,
and it is important to realise that this C ∗ -algebraic relation ≥ is a powerful one. As an
example, we prove a result which is particularly useful for dealing with Cuntz-Krieger
families and their generalisations [5].
Proposition 5.7. Suppose that p is a projection in a C∗-algebra A and p₁, ..., p_n are projections in A such that p ≥ p₁ + ··· + p_n. Then p_i p_j = 0 for i ≠ j.
Lemma 5.8. Suppose that P and Q are orthogonal projections on a Hilbert space H. If ‖P + Q‖ ≤ 1, then QP = 0.

Before we prove Lemma 5.8, we observe that it applies also in C∗-algebras: if p, q are projections in a C∗-algebra such that ‖p + q‖ ≤ 1, then pq = 0. To see this, we choose a faithful representation π : A → B(H), and notice that π(p) and π(q) are
orthogonal projections satisfying ‖π(p) + π(q)‖ ≤ 1. (Look at the small print in the Gelfand-Naimark theorem (Theorem 3.1).)
Proof. Take h ∈ PH, so h = Ph. Then we calculate, using that Q = Q² = Q∗Q:

‖h‖² ≥ ‖(P + Q)h‖² = ‖h + Qh‖² = (h + Qh | h + Qh)
  = (h | h) + (h | Qh) + (Qh | h) + (Qh | Qh)
  = (h | h) + (h | Q∗Qh) + (Q∗Qh | h) + (Qh | Qh)
  = (h | h) + (Qh | Qh) + (Qh | Qh) + (Qh | Qh)
  = ‖h‖² + 3‖Qh‖²,

and we deduce that ‖Qh‖ = 0. So Qh = 0 for all h ∈ PH, and QP = 0.
Proof of Proposition 5.7. Fix i ≠ j. Since p_k = p_k∗ p_k ≥ 0 for all k, we have

p ≥ p₁ + ··· + p_n ≥ p_i + p_j ≥ 0,

and Proposition 5.6(a) implies that ‖p‖ ≥ ‖p_i + p_j‖. Since p is a projection, we have ‖p‖ ≤ 1 (‖p‖² = ‖p∗p‖ = ‖p‖, so ‖p‖ is 0 or 1). So we have ‖p_i + p_j‖ ≤ 1 and the lemma implies that p_j p_i = 0 (which is equivalent to p_i p_j = (p_j p_i)∗ = 0).
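A concrete illustration of Proposition 5.7, together with a pair of projections whose sum no projection dominates (our own Python sketch):

```python
import numpy as np

# p = identity on C^2 dominates the sum of the coordinate projections
p  = np.eye(2)
p1 = np.diag([1.0, 0.0])
p2 = np.diag([0.0, 1.0])

# p - (p1 + p2) = 0 >= 0, so p >= p1 + p2, and indeed p1 p2 = 0
assert np.allclose(p1 @ p2, 0)

# by contrast, two projections with ||q1 + q2|| > 1 need not be orthogonal
q1 = np.array([[1.0, 0.0], [0.0, 0.0]])
q2 = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])   # projection onto span{(1,1)}
print(np.linalg.norm(q1 + q2, 2))   # about 1.707, bigger than 1
assert not np.allclose(q1 @ q2, 0)
```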
To see how this is relevant to graph algebras, suppose that E is a directed graph. A Toeplitz-Cuntz-Krieger E-family (P, S) consists of mutually orthogonal projections {P_v : v ∈ E⁰} and partial isometries {S_e : e ∈ E¹} such that S_e∗ S_e = P_{r(e)} and

(5.2)  P_v ≥ ∑_{e∈F} S_e S_e∗  for every finite subset F of r⁻¹(v).
The inequality (5.2) has many interesting consequences. First, when F = {e} is a singleton it says P_{r(e)} ≥ S_e S_e∗. Since S_e is a partial isometry, S_e S_e∗ is a projection (and in fact the projection on S_e H, by Proposition 3.2(c)). We need:
Lemma 5.9. Suppose that P and Q are projections on Hilbert space and P ≥ Q. Then Q = PQ.
Proof. Suppose that h ∈ QH. Then by the characterisation of positivity in B(H) (see Proposition 5.6(c)), we have

P ≥ Q =⇒ ((P − Q)h | h) ≥ 0 =⇒ (Ph | h) ≥ (Qh | h) = ‖h‖²
      =⇒ ‖Ph‖² = (Ph | Ph) = (Ph | h) ≥ ‖h‖².

Since Pythagoras gives ‖h‖² = ‖Ph‖² + ‖(1 − P)h‖², we deduce that (1 − P)h = 0 for all h ∈ QH, and hence that (1 − P)Q = 0.
By Lemma 5.9, P_{r(e)} ≥ S_e S_e∗ implies that S_e S_e∗ = P_{r(e)} S_e S_e∗, and therefore S_e = S_e S_e∗ S_e = P_{r(e)} S_e. So we have S_e∗ S_f = S_e∗ P_{r(e)} P_{r(f)} S_f = 0 unless r(e) = r(f). On the other hand, if e ≠ f and r(e) = r(f), then we can apply (5.2) with F = {e, f}, and get P_{r(e)} ≥ S_e S_e∗ + S_f S_f∗. Now Proposition 5.7 implies that S_e S_e∗ S_f S_f∗ = 0, and hence

S_e∗ S_f = S_e∗ (S_e S_e∗)(S_f S_f∗) S_f = S_e∗ 0 S_f = 0.

Thus:
Proposition 5.10. The Toeplitz-Cuntz-Krieger relations, as above, imply that the partial isometries {S_e : e ∈ E¹} have mutually orthogonal ranges.

So, while the definition used in [4] and [7] requires that the partial isometries {S_e : e ∈ E¹} have mutually orthogonal ranges, this orthogonality in fact follows from the main Toeplitz-Cuntz-Krieger relation (5.2).
References
[1] J. Dixmier, C ∗ -Algebras, North-Holland, Amsterdam, 1977.
[2] R.S. Doran and V.A. Belfi, Characterizations of C ∗ -Algebras, Marcel Dekker, New York and
Basel, 1986.
[3] R.G. Douglas, Banach Algebra Methods in Operator Theory, Academic Press, 1972.
[4] N.J. Fowler and I. Raeburn, The Toeplitz algebra of a Hilbert bimodule, Indiana Univ. Math.
J. 48 (1999), 155–181.
[5] A. an Huef, M. Laca, I. Raeburn and A. Sims, KMS states on the C ∗ -algebras of finite graphs,
J. Math. Anal. Appl. 405 (2013), 388–399.
[6] G.J. Murphy, C ∗ -Algebras and Operator Theory, Academic Press, San Diego, 1990.
[7] I. Raeburn, Graph Algebras, CBMS Regional Conference Series in Math., vol. 103, Amer. Math.
Soc., Providence, 2005.
[8] I. Raeburn and D.P. Williams, Morita Equivalence and Continuous-Trace C ∗ -Algebras, Math.
Surveys and Monographs, vol. 60, Amer. Math. Soc., Providence, 1998.
Department of Mathematics and Statistics, University of Otago, PO Box 56,
Dunedin 9054, New Zealand.
E-mail address: [email protected]