2. Finite dimensional algebras
In this chapter, A will always denote an algebra (usually finite dimensional) over an algebraically closed field k and all modules will be finitely
generated. Thus, for a finite dimensional irreducible A-module M , Schur’s
lemma says that EndA (M ) = k. Many of the results can also be formulated
when the field k is not algebraically closed but things become more technical
because one has to keep track of the division algebras that arise in Schur’s
lemma, see e.g. chapter 4 of Benson. My main reference for much of the
material in this chapter is the beautiful lecture notes by William Crawley-Boevey which are available from his web page, supplemented by extra tidbits
from Benson’s book, plus the classic paper of Bernstein-Gelfand-Ponomarev.
2.1. Projective indecomposable modules. Assume that A is a finite
dimensional algebra. A projective indecomposable module (PIM for short)
means a projective indecomposable module in A -mod. By the Krull-Schmidt
theorem, such modules are isomorphic to the indecomposable summands of
the regular module A itself. In this section, we want to understand the
structure of the PIMs in a little more detail. We will work for a while in the
entirely equivalent language of idempotents.
Recall that an idempotent e ∈ A means a non-zero element such that e² = e.
Two idempotents e1 , e2 are orthogonal if e1 e2 = e2 e1 = 0. An idempotent is
primitive if it cannot be written as a sum of two orthogonal idempotents. If
e is an idempotent, then so is (1 − e), and e(1 − e) = 0. It follows from this
that
A = Ae ⊕ A(1 − e).
Thus whenever you have an idempotent in A it defines a summand of A.
Conversely if you have a direct sum decomposition A = M ⊕ N , then the
element e ∈ EndA (A)op ≅ A defined to be the projection of A onto M along
the direct sum is an idempotent in A. Under this correspondence between
idempotents and summands of A A, primitive idempotents correspond to
indecomposable summands, i.e. to PIMs. Decompositions 1A = e1 + · · · + er
of the identity as a sum of primitive orthogonal idempotents correspond
to decompositions A = Ae1 ⊕ · · · ⊕ Aer of A A as a direct sum of PIMs.
Two idempotents e, e0 ∈ A are conjugate if there exists an invertible element
u ∈ A such that ueu−1 = e0 .
Lemma 2.1. Idempotents e, e′ ∈ A are conjugate if and only if the A-modules Ae and Ae′ are isomorphic.
Proof. (⇒). Suppose ueu−1 = e′. Then eu−1 = u−1 e′, so right multiplication by u−1 gives an isomorphism Ae ≅ Aeu−1 = Au−1 e′ = Ae′ of left
A-modules.
(⇐). Suppose that Ae ≅ Ae′. Let µ : Ae → Ae′ be an isomorphism.
Since
HomA (Ae, Ae′) ≅ eAe′,
we get that µ corresponds to an element m ∈ eAe′. Similarly, µ−1 corresponds to an element m′ ∈ e′Ae, and mm′ = e, m′m = e′.
By the Krull-Schmidt theorem we also have that A(1−e) ≅ A(1−e′). Let
ν : A(1 − e) → A(1 − e′) be an isomorphism. Like before, we get elements
n ∈ (1 − e)A(1 − e′) and n′ ∈ (1 − e′)A(1 − e) such that nn′ = (1 − e), n′n =
(1 − e′).
Now consider m + n ∈ A. Then, (m + n)(m′ + n′) = mm′ + nn′ + mn′ +
nm′ = e + (1 − e) + 0 + 0 = 1, and similarly (m′ + n′)(m + n) = 1, so m + n
is invertible. Also (m + n)e′ = m = e(m + n), so (m + n)e′(m + n)−1 = e
and e, e′ are conjugate.
The crucial result about idempotents is the following:
Theorem 2.2 (Lifting idempotents). Let N be a nilpotent ideal in A. Let
e ∈ A/N be an idempotent. Then, there exists an idempotent f ∈ A lifting
e, i.e. such that f̄ = e. Moreover, if e, e′ ∈ A/N are conjugate idempotents
and f, f′ ∈ A are idempotents lifting e, e′ respectively, then f, f′ are conjugate
too.
Proof. Take e ∈ A/N . We set e1 = e and inductively define an idempotent
ei ∈ A/N i such that ēi = ei−1 in A/N i−1 . Since N is a nilpotent ideal, N n = 0 for
some n, and taking f = en proves the first part of the theorem.
Given ei−1 , define ei as follows. First, let a ∈ A/N i be any pre-image of
ei−1 . Then a² − a is a pre-image of 0, so a² − a ∈ N i−1 /N i , so (a² − a)² = 0 in
A/N i . Let ei = 3a² − 2a³ . This still has image ei−1 in A/N i−1 , and
ei² − ei = (3a² − 2a³)(3a² − 2a³ − 1) = −(3 − 2a)(1 + 2a)(a² − a)² = 0.
So ei is an idempotent.
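The correction step e ↦ 3e² − 2e³ can be watched on a concrete example. Below is a small sketch of my own (not from the notes), run inside the algebra of 3 × 3 upper triangular integer matrices, where the strictly upper triangular matrices form a nilpotent ideal N with N³ = 0.

```python
# Idempotent lifting (Theorem 2.2) by iterating a -> 3a^2 - 2a^3.
# Illustration: A = upper triangular 3x3 matrices, N = strictly upper
# triangular matrices, a nilpotent ideal with N^3 = 0.

def matmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lift_idempotent(a, steps=3):
    """Iterate a -> 3a^2 - 2a^3; stabilises once (a^2 - a)^2 = 0."""
    n = len(a)
    for _ in range(steps):
        a2 = matmul(a, a)
        a3 = matmul(a2, a)
        a = [[3 * a2[i][j] - 2 * a3[i][j] for j in range(n)] for i in range(n)]
    return a

# a is idempotent modulo N (its diagonal is diag(1,0,0)) but a^2 != a in A.
a = [[1, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
f = lift_idempotent(a)
assert matmul(f, f) == f                         # f is a genuine idempotent
assert [f[i][i] for i in range(3)] == [1, 0, 0]  # f lifts the same class mod N
```

Here a single pass already works because (a² − a)² = 0; in general each pass improves the approximation by one power of N, exactly as in the proof.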
For the uniqueness statement, take conjugate idempotents e, e′ ∈ A/N .
Say µ̄e = e′µ̄ for µ ∈ A with µ̄ invertible in A/N . Suppose f, f′ ∈ A are
idempotents lifting e, e′ respectively. Let
ν = f′µf + (1 − f′)µ(1 − f ).
Then, νf = f′µf = f′ν. To complete the proof, we need to show that ν is
invertible. Observe
µ − ν = f′µ + µf − 2f′µf = (f′µ − µf )(1 − 2f ) ∈ N.
Since µ̄ is invertible, there exists x ∈ A such that 1 − µx ∈ N . Since
µ − ν ∈ N too, we deduce that 1 − νx = n for some n ∈ N . Then,
νx(1 + n + n2 + . . . ) = (1 − n)(1 + n + n2 + . . . ) = 1
so that ν has a right inverse. Similarly, ν has a left inverse and we are
done.
Corollary 2.3. Let 1 = e1 + · · · + er be a decomposition of 1 as a sum of
orthogonal idempotents in A/N . Then there exist orthogonal idempotents
f1 , . . . , fr ∈ A with 1 = f1 + · · · + fr and f̄i = ei for each i.
Proof. Proceed by induction on r, the case r = 1 being obvious. For r > 1,
let e = e1 + · · · + er−1 and let f ∈ A be an idempotent lifting e. By induction applied to the quotient e(A/N )e of the ring f Af , there are orthogonal
idempotents f1 , . . . , fr−1 ∈ f Af with f = f1 + · · · + fr−1 lifting e = e1 + · · · + er−1 . Set
fr = 1 − f1 − · · · − fr−1 .
Now what is the point? Well, since A is a finite dimensional algebra over
an algebraically closed field, J(A) is a nilpotent ideal and Wedderburn’s
theorem shows that
A/J(A) = Mn1 (k) ⊕ · · · ⊕ Mnr (k)
where n1 , . . . , nr are the dimensions of the r inequivalent irreducible A-modules L1 , . . . , Lr . Let e1,1 , . . . , e1,n1 , . . . , er,1 , . . . , er,nr be the diagonal
matrix units, giving us a decomposition
1 = (e1,1 + · · · + e1,n1 ) + · · · + (er,1 + · · · + er,nr )
of the identity in A/J(A) as a sum of orthogonal primitive idempotents.
Note (A/J(A))ei,j ≅ Li for every j. Apply the corollary to lift to a decomposition
1 = (f1,1 + · · · + f1,n1 ) + · · · + (fr,1 + · · · + fr,nr )
in A. Since the ei,j are primitive, the fi,j obviously are too. As ei,1 , . . . , ei,ni
are all conjugate in A/J(A), so are fi,1 , . . . , fi,ni . So the modules Afi,j
for j = 1, . . . , ni are all isomorphic PIMs, Pi say.
Hence
A ≅ P1^{n1} ⊕ · · · ⊕ Pr^{nr}
as a direct sum of PIMs. Finally,
Pi / rad Pi ≅ Li ,
hence the PIM Pi has a unique maximal submodule. We have now proved:
Theorem 2.4. The map P 7→ P/ rad P induces a 1–1 correspondence between the isomorphism classes of PIMs and the isomorphism classes of irreducible modules. Moreover, letting L1 , . . . , Lr be a complete set of inequivalent irreducibles and P1 , . . . , Pr be the corresponding PIMs, we have that
A A ≅ P1^{dim L1} ⊕ · · · ⊕ Pr^{dim Lr} .
If you like, this decomposition generalizes Wedderburn’s theorem to the
non-semisimple case. Here is an important consequence.
Corollary 2.5. Let P be a PIM and L = P/ rad P be the corresponding
simple module. For any f.d. A-module M , the composition multiplicity
[M : L] = dim HomA (P, M ).
Proof. Proceed by induction on the length of a composition series of M .
The base case is when M is irreducible, which follows by Schur’s lemma (in
its strong form when the vector space is finite dimensional and the field is
algebraically closed). For the induction step, pick a non-zero proper submodule K of
M . Then there is a short exact sequence
0 −→ K −→ M −→ Q −→ 0.
Apply the exact functor HomA (P, ?) to get a short exact sequence of vector
spaces
0 −→ HomA (P, K) −→ HomA (P, M ) −→ HomA (P, Q) −→ 0.
We deduce by induction and the Jordan-Hölder theorem that
dim HomA (P, M ) = [K : L] + [Q : L] = [M : L].
We are done.
Recalling that A ≅ EndA (A)op , the next theorem generalizes the results
of the section a little bit.
Theorem 2.6. Let M be an f.d. A-module and let B = EndA (M )op . Let
M = M1^{⊕n1} ⊕ · · · ⊕ Mr^{⊕nr}
be a decomposition of M into inequivalent indecomposables M1 , . . . , Mr . Let
ei ∈ B be an idempotent projecting M onto one of the summands isomorphic
to Mi , and let Pi = Bei and Li = Pi / rad Pi . Then, P1 , . . . , Pr is a complete
set of inequivalent PIMs for B, L1 , . . . , Lr are the corresponding irreducible
B-modules, and ni = dim Li .
Proof. Just observe that decompositions of M as a direct sum of indecomposable A-modules are in 1–1 correspondence with decompositions of 1B as a
sum of primitive orthogonal idempotents, then apply the results above.
The next exercise is really important: the remainder of the chapter will
be devoted to generalizing this example!
Exercise 5. Let A = Tn (k) be the algebra of all upper triangular n × n
matrices over a field k. What is J(A)? How many inequivalent irreducible
A-modules are there? What are their dimensions? What do their projective
covers look like?
One more definition in this section: a finite dimensional algebra A is
called a basic algebra if all its irreducible modules are one dimensional. For
example, the above exercise shows that Tn (k) is a basic algebra.
Suppose for a moment that A is any finite dimensional algebra, and let
P1 , . . . , Pn be a complete set of inequivalent PIMs. Then,
P = P1 ⊕ · · · ⊕ Pn
is a projective generator for A. By the Morita theorem, A is Morita equivalent to the algebra
EndA (P ).
By Theorem 2.6, the latter algebra IS a basic algebra. In other words, if
you are studying representations of finite dimensional algebras, you may as
well restrict yourself straight away to studying just the basic ones.
2.2. Quivers and path algebras. A quiver Q = (Q0 , Q1 , s, t : Q1 → Q0 )
is
• a set Q0 of vertices, which for us will be {1, 2, . . . , n} (in particular,
finite);
• a set Q1 of arrows which for us will be finite.
An arrow ρ starts at the vertex s(ρ) and terminates at the vertex t(ρ).
A non-trivial path in Q is a sequence ρ1 . . . ρm (m ≥ 1) of arrows satisfying
t(ρi+1 ) = s(ρi ). Pictorially:
◦ ←−ρ1− ◦ ←−ρ2− · · · ←−ρm− ◦
We also have the trivial paths ei for each vertex i, i.e. the path starting
at vertex i and going nowhere. For a path x, we write s(x) for the vertex
where it starts, t(x) for the vertex where it terminates.
The path algebra kQ is the k-algebra with basis the paths in Q, where the
product of two paths x, y is defined by
xy = (the concatenation of x and y) if t(y) = s(x), and xy = 0 otherwise.
Note that kQ is obviously an associative algebra. The trivial paths e1 , . . . , en
are mutually orthogonal idempotents, and the identity is 1 = e1 + · · · + en .
The elements e1 , . . . , en together with the paths of length one defined by
each of the finitely many arrows in Q generate kQ as an algebra, so kQ is a
finitely generated algebra. If we had allowed infinitely many vertices in our
definition, then kQ would not have an identity. If we had allowed infinitely
many edges then kQ would not be finitely generated.
Example 2.7. Let Q be the quiver
1 −ρ→ 2 −σ→ 3.
Then kQ has basis {e1 , e2 , e3 , ρ, σ, σρ}. Some products: ρσ = ρρ = e1 σρ =
σρe3 = 0; e3 σρ = σρ.
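To make the multiplication rule concrete, here is a toy executable model of this kQ at the level of basis paths; the encoding of a path as a tuple (source, target, arrows-in-written-order) is my own choice, and the zero product is returned as None.

```python
# Basis paths of kQ for the quiver 1 --rho--> 2 --sigma--> 3 (Example 2.7).
# Arrows are stored in written order, so sigma*rho means "first rho, then
# sigma", like composition of functions.

def mul(x, y):
    """Product xy on basis paths: concatenation if t(y) = s(x), else None (0)."""
    if y[1] != x[0]:        # t(y) != s(x)
        return None
    return (y[0], x[1], x[2] + y[2])

e1, e2, e3 = (1, 1, ()), (2, 2, ()), (3, 3, ())
rho, sigma = (1, 2, ('rho',)), (2, 3, ('sigma',))

sigma_rho = mul(sigma, rho)
assert sigma_rho == (1, 3, ('sigma', 'rho'))
assert mul(rho, sigma) is None          # rho sigma = 0
assert mul(rho, rho) is None            # rho rho = 0
assert mul(e1, sigma_rho) is None       # e1 (sigma rho) = 0
assert mul(sigma_rho, e3) is None       # (sigma rho) e3 = 0
assert mul(e3, sigma_rho) == sigma_rho  # e3 (sigma rho) = sigma rho
```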
Exercise 6. Suppose Q is a quiver with at most one path between any two
points. Then, kQ is isomorphic to the subalgebra of Mn (k) consisting of
all matrices with ij-entry equal to zero if there is no path from j to i. For
example,
1 → 2 → ··· → n
is the lower triangular matrices.
Example 2.8. Take Q to be the quiver with one vertex and one loopy
arrow. Then, kQ ≅ k[T ], the polynomial algebra in one variable. If Q
has one vertex and r loops, then kQ is the free associative algebra on r
generators. I don’t know very much about the latter algebra when there is
more than one generator, but apparently its module category is a completely
wild thing. For instance, I read somewhere that there exist simple modules
of every possible dimension.
Let A = kQ. Here are some increasingly difficult remarks which I am not
going to attempt to prove.
(1) Spaces like Aei , ej A, ej Aei , . . . are easy to describe. For instance,
Aei has a basis of all paths starting at i.
(2) A is a finite dimensional algebra if and only if Q has no oriented
cycles, which is if and only if A is Artinian.
(3) A is noetherian if and only if for each vertex i, there is at most one
oriented cycle passing through i exactly once.
(4) J(A) has a basis consisting of all paths from i to j for all pairs of
vertices i, j such that there is no path from j to i. (It is easy at
least to see that this is a two-sided nilpotent ideal, hence that it is
contained in J(A); I didn’t see yet how to prove that it was equal to
J(A).)
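Remark (1) is easy to see mechanically: for a quiver with no oriented cycles all paths can be enumerated, and bases of Aei and ej Aei read off by filtering on endpoints. The sketch below uses my own encoding of arrows as (name, source, target) triples.

```python
# Enumerate all paths of an acyclic quiver; remark (1) then says the paths
# starting at i form a basis of A e_i, and those from i to j a basis of e_j A e_i.

def all_paths(vertices, arrows):
    """All paths as (arrow_names, source, target); () is a trivial path e_i.
    Assumes the quiver has no oriented cycles, so this terminates."""
    paths = [((), i, i) for i in vertices]
    frontier = list(paths)
    while frontier:
        new = []
        for names, s, t in frontier:
            for name, a_s, a_t in arrows:
                if a_s == t:                       # extend by an arrow leaving t
                    new.append((names + (name,), s, a_t))
        paths += new
        frontier = new
    return paths

# The quiver 1 -> 2 -> 3 of Example 2.7 again.
arrows = [('rho', 1, 2), ('sigma', 2, 3)]
paths = all_paths([1, 2, 3], arrows)
Ae1 = [p for p in paths if p[1] == 1]              # basis of A e_1
e3Ae1 = [p for p in paths if p[1] == 1 and p[2] == 3]
assert len(paths) == 6      # e1, e2, e3, rho, sigma, sigma*rho
assert len(Ae1) == 3
assert len(e3Ae1) == 1
```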
I do want to prove some basic facts about the idempotents e1 , . . . , en .
Note that they are orthogonal idempotents summing to 1, so
A = Ae1 ⊕ · · · ⊕ Aen
is a decomposition of A into projectives. In fact:
Lemma 2.9. The ei are primitive idempotents, i.e. each Aei is a PIM.
Moreover, for i ≠ j, Aei ≇ Aej , hence ei , ej are not conjugate to each other.
Proof. Suppose Aei is decomposable. Then EndA (Aei ) ≅ ei Aei contains an
idempotent f ≠ ei . Then f² = f = f ei so f (ei − f ) = 0. Since f ∈ Aei , f is
some linear combination of paths starting at i. Let x be a path of maximal
length appearing with non-zero coefficient in f . Similarly, (ei − f ) ∈ ei A,
so it is some linear combination of paths ending at i. Let y be a path of
maximal length appearing with non-zero coefficient in (ei − f ). Now think
about f (ei − f ). It is some linear combination of paths starting and ending
at i. Moreover it must contain xy with non-zero coefficient, since no other
paths arising in the product can cancel with that one by choice of x, y. This
shows f (ei − f ) 6= 0, a contradiction.
Now suppose that Aei ≅ Aej . Since HomA (Aei , Aej ) ≅ ei Aej , inverse
isomorphisms give us elements f ∈ ei Aej and g ∈ ej Aei such that f g =
ei , gf = ej . But gf is a linear combination of paths starting and ending at
j and going through i. The trivial path ej is not such a path, since j ≠ i.
So there is no way gf could equal ej .
We are usually going to be concerned with the case that A is a finite
dimensional algebra (i.e. Q has no oriented cycles). Then we’ve decomposed
A = Ae1 ⊕ · · · ⊕ Aen
A into a direct sum of inequivalent PIMs, and by Krull-Schmidt these are
all the PIMs. Thus, A has exactly n simple modules, all of which are one
dimensional, namely the modules L1 , . . . , Ln where Li = Aei / rad Aei . In
particular, A is a basic algebra, as defined at the end of the previous section.
On the other hand if A is not a finite dimensional algebra, it is not
Artinian, so Krull-Schmidt may not hold, so we can’t really say too much
from this analysis. For instance A might be a free algebra on more than one
generator, which has infinitely many simple modules.
2.3. Representations of quivers. Let Q = (Q0 , Q1 , s, t) be a quiver. A
representation V of Q is an assignment of a vector space Vi to each vertex
and a linear map Vρ : Vs(ρ) → Vt(ρ) to each arrow. (There is no assumption
about anything commuting). We’ll always just consider finite dimensional
representations of Q, that is, representations for which each Vi is a finite
dimensional vector space. The dimension vector of a representation V of Q
is the vector dimV = (dim Vi )i∈Q0 .
Given two representations V, V 0 of Q, a morphism θ : V → V 0 is given
by linear maps θi : Vi → Vi0 for each i ∈ Q0 such that the obvious diagrams commute. You can obviously compose morphisms, so we have a nice
category: the category of all finite dimensional representations of Q.
There are notions of subrepresentations, quotient representations, direct
sums of representations, . . . . I think they’re all pretty obvious, so you
should formulate them for yourselves. So the category of (finite dimensional)
representations of Q is an abelian category.
Example 2.10. Fix i ∈ Q0 . Let Li be the representation defined by putting
k on the ith vertex, and 0 on all other vertices, with all maps being zero.
Then, Li is an irreducible representation – obviously there are no subrepresentations other than 0 and itself.
Example 2.11. Let Q be the quiver
◦ ← ◦ → ◦
Let V and V ′ be the representations
k ←1− k −1→ k
and
k ←1− k −→ 0
respectively. Then, Hom(V, V ′) is one dimensional, Hom(V ′, V ) is zero.
Theorem 2.12. The category of finite dimensional representations of Q is
equivalent to the category kQ -mod of finite dimensional modules over the
path algebra.
Proof. It is traditional just to explain how to turn representations of Q into
kQ-modules and vice versa, and then to omit all the tedious checks.
Suppose (Vi , Vρ ) is a representation of Q. We associate a kQ-module V as
follows. As a vector space,
V = V1 ⊕ · · · ⊕ Vn .
The action of ei ∈ kQ on V is as the projection of V onto the summand Vi .
The action of ρ ∈ kQ for ρ ∈ Q1 is as the linear map Vρ : Vs(ρ) → Vt(ρ) on
Vs(ρ) and as zero on all other Vi ’s.
Conversely, given a kQ-module V , let Vi = ei V and let Vρ be the linear
map defined by the action of ρ ∈ kQ on Vs(ρ) – noting that ρ es(ρ) V ⊆ et(ρ) V
since ρ = et(ρ) ρ es(ρ) .
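A minimal sketch of this dictionary, for the single-arrow quiver 1 −ρ→ 2: given (V1, V2, Vρ) we assemble the actions of e1, e2 and ρ on V = V1 ⊕ V2 as block matrices and check the relation ρ = et(ρ) ρ es(ρ). The helper names are my own.

```python
# Theorem 2.12 for the quiver 1 --rho--> 2: turn a representation into the
# matrices by which e1, e2, rho act on V = V1 (+) V2.

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def module_action(d1, d2, v_rho):
    """v_rho is a d2 x d1 matrix; return the actions of e1, e2, rho on k^(d1+d2)."""
    n = d1 + d2
    e1 = [[1 if i == j and i < d1 else 0 for j in range(n)] for i in range(n)]
    e2 = [[1 if i == j and i >= d1 else 0 for j in range(n)] for i in range(n)]
    rho = [[0] * n for _ in range(n)]
    for i in range(d2):              # V_rho maps the V1 block into the V2 block
        for j in range(d1):
            rho[d1 + i][j] = v_rho[i][j]
    return e1, e2, rho

e1, e2, rho = module_action(2, 1, [[5, 7]])   # V1 = k^2, V2 = k, V_rho = (5 7)
assert matmul(e1, e1) == e1 and matmul(e2, e2) == e2
assert matmul(e1, e2) == [[0] * 3 for _ in range(3)]      # orthogonal idempotents
assert matmul(matmul(e2, rho), e1) == rho                 # rho = e2 rho e1
```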
Now assume for a moment that kQ is finite dimensional. In the previous section, we showed that the ei ’s are mutually orthogonal, non-conjugate
primitive idempotents summing to 1, so that P1 , . . . , Pn defined from Pi =
(kQ)ei is a complete set of PIMs. In this section we’ve constructed irreducible modules Li for each i = 1, . . . , n: namely one dimensional on the
ith vertex and 0 everywhere else. Now multiplication obviously defines a
non-zero map
Pi ↠ Li .
Hence Li is the unique irreducible quotient of Pi . Thus, in the case that Q
has no oriented cycles, L1 , . . . , Ln is a complete set of inequivalent irreducible
representations.
Example 2.13. Here I will discuss how basic problems in linear algebra
translate into the language of quivers:
(1) (Equivalence of matrices) Consider the quiver with two vertices and
one edge. Giving a finite dimensional representation means giving
a linear map f : V1 → V2 between two finite dimensional vector
spaces. An isomorphism between f : V1 → V2 and g : W1 → W2
means vector space isomorphisms θ : V1 → W1 and φ : V2 → W2
such that φ ◦ f = g ◦ θ. In other words, V1 and W1 have the same
dimension, V2 and W2 have the same dimension, and the rectangular
matrices representing f and g in some bases are equivalent matrices
[f ] = [φ]−1 [g][θ]. So you are reduced to studying equivalence classes
of matrices – which by linear algebra are completely determined by
their size and their rank. In particular you see from this that there
are three indecomposable representations up to isomorphism (not
counting the zero module as an indecomposable), namely k → 0,
k −1→ k and 0 → k. Note there are just finitely many indecomposables –
this is “finite”.
(2) (Similarity of matrices) Consider the quiver with one vertex and
one loop. Giving a finite dimensional representation means simply
giving an endomorphism of a finite dimensional vector space. Two
representations f : V → V and g : W → W are isomorphic if there
is a vector space isomorphism θ : V → W such that θ ◦ f = g ◦ θ.
In other words, they are isomorphic if and only if they have the
same dimension and the square matrices representing f and g in
some bases are similar matrices [f ] = [θ]−1 [g][θ]. So you see you
are reduced to studying Jordan normal forms to solve the problem.
In particular, the indecomposable finite dimensional representations
correspond to the Jordan blocks Jn (λ). This time there are infinitely
many indecomposables but we can classify them – this is “tame”.
(Of course you know that the path algebra is the polynomial algebra
in one variable k[x], so finite dimensional representations of this
quiver are just finitely generated torsion k[x]-modules – which we
understand completely anyway).
(3) Consider the Kronecker quiver with two vertices and two arrows
both from 1 to 2. A representation means a pair of linear maps
f, g : V1 → V2 . It is a difficult but solvable (hence important!)
problem in linear algebra to classify equivalence classes of pairs of
rectangular matrices. We’ll discuss this one later on and manage to
classify them: it is again “tame” though non-trivial.
(4) Consider the quiver with one vertex and two arrows. A representation of this means a pair of endomorphisms f, g : V → V of a
vector space. It is a “wild” problem to classify similarity classes
of pairs of square matrices, just as it is a “wild” problem to study
representations of khx, yi.
(5) Here’s one we can solve. Consider the quiver
1 → 2 ← 3.
Suppose that V is an indecomposable representation. Either V is
k → 0 ← 0
or
0 → 0 ← k
or else the maps V1 → V2 and V2 ← V3 are injective. So to classify the
remaining indecomposable representations of this quiver is exactly
the problem of classifying indecomposable pairs of subspaces of a
vector space V . But you just pick a basis for the intersection, then
extend it to the first one then to their sum... You deduce that the
indecomposable representations are the two above together with
0 → k ← 0, k −1→ k ← 0, 0 → k ←1− k, k −1→ k ←1− k.
Thus there are 6 indecomposables up to isomorphism, with dimension
vectors (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1) and (1, 1, 1).
It is not a coincidence that the picture here is just like the picture
for lower triangular 3 × 3 matrices, i.e. the quiver
1 → 2 → 3.
Even though these algebras are not Morita equivalent there seem to
be lots of similarities between their representations. We will explain
this later on when we talk about reflection functors.
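The classification in item (1) above can be run as an algorithm: compute the rank, then read off the multiplicities of the three indecomposables. Below is a sketch of my own, using exact Gaussian elimination over the rationals.

```python
# Decompose a representation f : k^d1 -> k^d2 of the quiver 1 -> 2 into
# indecomposables: rank(f) copies of (k -1-> k), d1 - rank copies of (k -> 0),
# and d2 - rank copies of (0 -> k).

from fractions import Fraction

def rank(m):
    """Rank of a matrix (list of rows) by Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in m]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                fac = m[i][col] / m[r][col]
                m[i] = [a - fac * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def decompose(f, d1, d2):
    """Multiplicities of the three indecomposables, keyed by shape."""
    r = rank(f)
    return {'k->k': r, 'k->0': d1 - r, '0->k': d2 - r}

f = [[1, 0, 1],
     [0, 1, 1]]          # a rank 2 map k^3 -> k^2
assert decompose(f, 3, 2) == {'k->k': 2, 'k->0': 1, '0->k': 0}
```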
It is perhaps worth pointing out how to translate things into matrices in
general. We will use this observation more fundamentally later on. Suppose
V is a representation of a quiver Q with dimension vector α. Picking a basis
for each Vi we can identify Vi = k^{αi} and the map Vρ with an αt(ρ) × αs(ρ) matrix xρ . Given another representation V ′ of the same dimension vector, we can
pick bases again to identify Vi′ with k^{αi} ; the maps are then matrices x′ρ .
Now saying that V ≅ V ′ means that there are invertible matrices
gi ∈ GLαi (k)
for each i such that
x′ρ = gt(ρ) xρ gs(ρ)^{−1}
for each ρ ∈ Q1 .
for each ρ ∈ Q1 . This should help in the following exercises. Note these
exercises and the above examples contain all the main examples that we’ll
use to understand everything else we are doing...
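As a sanity check on the base-change formula, the sketch below (my own encoding; Fraction arithmetic to keep inverses exact) performs a base change on a representation of the Kronecker quiver and verifies the equivalent intertwining condition g_{t(ρ)} xρ = x′ρ g_{s(ρ)} at every arrow.

```python
# Base change for representations: x'_rho = g_{t(rho)} x_rho g_{s(rho)}^{-1}.
# Quiver: the Kronecker quiver, arrows 'a' and 'b' both from vertex 1 to 2.

from fractions import Fraction

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(g):
    """Inverse of an invertible 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = g
    det = Fraction(a * d - b * c)       # assumed non-zero
    return [[d / det, -b / det], [-c / det, a / det]]

x = {'a': [[1, 0], [0, 1]], 'b': [[0, 1], [0, 0]]}   # a representation, dim (2,2)
g = {1: [[1, 1], [0, 1]], 2: [[2, 0], [1, 1]]}       # a base change at each vertex
x_new = {r: matmul(matmul(g[2], x[r]), inv2(g[1])) for r in x}

# g is an isomorphism from x to x_new: the squares commute at every arrow.
assert matmul(g[2], x['a']) == matmul(x_new['a'], g[1])
assert matmul(g[2], x['b']) == matmul(x_new['b'], g[1])
```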
Exercise 7. (Some indecomposable representations of the Kronecker quiver)
Recalling Example 2.13(3) above, let Q be the Kronecker quiver. Fix λ ∈ k and
n ≥ 1. Take the representation V = V (λ, n) defined by letting V1 = V2 = k n
and the linear maps on the two arrows V1 → V2 being the maps In : k n → k n
(the identity matrix) and Jn (λ) : k n → k n (the Jordan block of size n
with eigenvalue λ). Show that V (λ, n)’s are (infinitely many) inequivalent
indecomposable representations of Q. Up to isomorphism, there is exactly
one other indecomposable representation in which the vector spaces V1 and
V2 have dimension n. What is it?
Exercise 8. (three subspace problem) Find 12 non-isomorphic indecomposable representations (not counting the zero representation as an indecomposable) of the quiver with one vertex in the middle and three vertices around
the edge, with three arrows all pointing inwards. The general theory developed in a while will show these are all the indecomposables, so you don’t
need to prove that extra thing right now.
Exercise 9. (four subspace problem) Find infinitely many non-isomorphic
indecomposable representations of the quiver with one vertex in the middle
and four vertices around the edge, with four arrows all pointing inwards.
To end the section here are a few more definitions to get familiar with –
these will play the crucial role later on.
(1) We’ll work with vectors in Zn . For instance the dimension vector
dimV is such a vector. Let εi = dimLi , so ε1 , . . . , εn is the standard
basis for Zn .
(2) Define a bilinear form on Zn by
⟨α, β⟩ = Σ_{i=1}^{n} αi βi − Σ_{ρ∈Q1} αs(ρ) βt(ρ) .
I’ll call this the asymmetric bilinear form to remind you it’s not
symmetric.
(3) Let q(α) = hα, αi. This is a quadratic form on Zn which I’ll call the
quadratic form!
(4) Let (α, β) be the symmetric bilinear form associated to the quadratic
form q. Thus, (α, β) = hα, βi + hβ, αi.
Note: (., .) doesn’t depend on orientation.
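These forms are immediate to compute from the raw quiver data. Here is a sketch of mine (the example is the Kronecker quiver rather than the quivers of Exercise 10, so as not to give that one away):

```python
# The forms of (2)-(4) from the quiver: vertices 0..n-1, arrows as
# (source, target) pairs.

def euler_form(n, arrows, alpha, beta):
    """<alpha, beta> = sum_i alpha_i beta_i - sum_rho alpha_{s(rho)} beta_{t(rho)}."""
    return (sum(alpha[i] * beta[i] for i in range(n))
            - sum(alpha[s] * beta[t] for (s, t) in arrows))

def sym_form(n, arrows, alpha, beta):
    """(alpha, beta) = <alpha, beta> + <beta, alpha>."""
    return euler_form(n, arrows, alpha, beta) + euler_form(n, arrows, beta, alpha)

def q(n, arrows, alpha):
    """The quadratic form q(alpha) = <alpha, alpha>."""
    return euler_form(n, arrows, alpha, alpha)

# Kronecker quiver: two arrows from vertex 0 to vertex 1.
arrows = [(0, 1), (0, 1)]
basis = [[1, 0], [0, 1]]
matrix = [[sym_form(2, arrows, basis[i], basis[j]) for j in range(2)]
          for i in range(2)]
assert matrix == [[2, -2], [-2, 2]]   # independent of arrow orientation
assert q(2, arrows, [1, 1]) == 0      # q vanishes on the vector (1, 1)
```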
Exercise 10. Take the two quivers discussed in Example 2.13(5) above
(both had underlying graph 1 − 2 − 3). For each, write down the matrix of
the asymmetric bilinear form ⟨., .⟩ and of the symmetric bilinear form (., .)
with respect to the basis ε1 , ε2 , ε3 for Z3 . This is the Cartan matrix of type
A3 !
Note that for i ≠ j, ⟨εi , εj ⟩ is
−(the number of arrows from i to j).
This will be important shortly.
2.4. The standard resolution. I’m now going to start using some basic
homological algebra. Here is a rapid review – hopefully enough for those
of you who haven’t taken a course in homological algebra to follow what is
going on. Let A be a k-algebra and M, N be A-modules. I want to define
the k-vector spaces ExtiA (M, N ).
The usual way to do this is to take a projective resolution
· · · −→ P1 −→ P0 −→ M −→ 0
of M (thus it is an exact sequence and all the Pi ’s are projectives). There
are many ways to do this, but it doesn’t matter which you choose for our
purposes since the Comparison Theorem shows that any two projective resolutions are chain homotopy equivalent. Now apply the contravariant functor
HomA (?, N ) to P∗ to get a complex
0 −→ HomA (P0 , N ) −→ HomA (P1 , N ) −→ . . .
This complex is not necessarily exact. So we take cohomology: let
ExtiA (M, N )
be the kernel (cocycles) over the image (coboundaries) of the differential
in the ith slot. Since HomA (?, N ) is left exact, we do at least have that
Ext0A (M, N ) = HomA (M, N ). The fact that the complex is unique up to
chain homotopy equivalence means that its cohomology is well-defined independent of the choice.
There is another way entirely to define ExtiA (M, N ). I think it will be
enough for us to understand this for Ext1A (M, N ). This can be identified
with the set of all short exact sequences
0 −→ N −→ E −→ M −→ 0
under a natural equivalence relation. The identification goes as follows.
Start with a projective resolution of M as above. Using the comparison
theorem, we define homomorphisms P0 → E, P1 → N and P2 → 0 so that the
diagrams commute. Note the map P1 → N is a cocycle since the left hand
rectangle commutes. Its class in Ext1A (M, N ) is then the element of Ext1A (M, N )
corresponding to the short exact sequence.
The only other thing you need to know is the long exact sequence. If
0 −→ K −→ M −→ L −→ 0
is a short exact sequence, then applying the left exact functor HomA (?, N )
gives the long exact sequence
0 −→ HomA (L, N ) −→ HomA (M, N ) −→ HomA (K, N )
−→ Ext1A (L, N ) −→ . . .
There is a similar long exact sequence for HomA (N, ?).
Now let’s look at what happens if A is the path algebra of a quiver.
Theorem 2.14. Let A = kQ and V be a kQ-module. There is a projective
resolution
0 −→ ⊕_{ρ∈Q1} Aet(ρ) ⊗k es(ρ) V −f→ ⊕_{i=1}^{n} Aei ⊗k ei V −g→ V −→ 0
where g is the natural multiplication map, and f (a ⊗ v) = aρ ⊗ v − a ⊗ ρv
for a ⊗ v ∈ Aet(ρ) ⊗ es(ρ) V .
Proof. The terms in the sequence are obviously projective. Also it is obvious
that g ◦ f = 0 and that g is onto. So we just need to show that ker f = 0
and that ker g ⊆ im f .
(1) ker f = 0. Let ξ ∈ ker f . We can write
ξ = Σ_{ρ∈Q1} Σ_a a ⊗ vρ,a
where the second sum is over all paths a with s(a) = t(ρ) and the elements
vρ,a ∈ es(ρ) V are almost all 0. Then,
f (ξ) = Σ_{ρ} Σ_a (aρ ⊗ vρ,a − a ⊗ ρvρ,a ).
Let a be a path of maximal length such that vρ,a 6= 0 for some ρ. Then f (ξ)
involves aρ ⊗ vρ,a , and nothing can cancel this, so f (ξ) 6= 0.
(2) ker g ⊆ im f . This is a little more tricky. First some observations.
Given ξ ∈ ⊕_i Aei ⊗k ei V , we can represent it as
ξ = Σ_{i=1}^{n} Σ_a a ⊗ xa
where the second sum is over all paths a starting at i and almost all the
xa ∈ es(a) V are zero. Define deg(ξ) to be the length of the longest path a
with xa ≠ 0.
Now if a is a non-trivial path with s(a) = i, we can express it as a product
a = a′ρ with ρ an arrow starting at i and a′ some other path. Then
f (a′ ⊗ xa ) = a ⊗ xa − a′ ⊗ ρxa ,
viewing a′ ⊗ xa as an element of the ρth component.
Now we claim that ξ + im f always contains an element of degree 0. If ξ
has degree d > 0 then
ξ − f ( Σ_{i=1}^{n} Σ_a a′ ⊗ xa ),
where the sum is over all paths a starting at i of length d, has degree < d.
Now the claim follows by induction.
We are ready to prove that ker g ⊆ im f . Take ξ ∈ ker g. Let ξ′ ∈ ξ + im f
be of degree 0. Then,
0 = g(ξ) = g(ξ′) = g( Σ_i ei ⊗ x′ei ) = Σ_i x′ei .
The right hand side belongs to ⊕_{i=1}^{n} Vi , so each term is zero, so ξ′ = 0. Hence
ξ ∈ im f . We are done.
Corollary 2.15. Let V, W be A-modules. Then ExtiA (V, W ) = 0 for all
i ≥ 2.
Proof. We’ve just constructed a projective resolution with 0 in degrees 2
and higher!
Corollary 2.16. Submodules of projectives are projective.
Proof. Let V be a submodule of a projective P . Take the s.e.s.
0 −→ V −→ P −→ Q −→ 0
and apply HomA (?, W ). The long exact sequence shows that Ext1A (V, W ) =
0. Since this holds for all W , V is projective.
Corollary 2.17. If V, W are finite dimensional representations of Q, then
dim Hom(V, W ) − dim Ext1 (V, W ) = ⟨dimV, dimW ⟩.
In particular,
dim End(V ) − dim Ext1 (V, V ) = q(dimV ).
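This corollary can be tested numerically for the quiver 1 → 2: applying Hom(?, W) to the standard resolution of V identifies Hom(V, W) with the kernel and Ext1(V, W) with the cokernel of the map Φ(θ1, θ2) = θ2 f − g θ1. The sketch below (my own names and encoding) computes both dimensions by exact Gaussian elimination.

```python
# Checking Corollary 2.17 for the quiver 1 -> 2: Hom(V, W) = ker(Phi) and
# Ext^1(V, W) = coker(Phi), where Phi(theta1, theta2) = theta2 f - g theta1
# maps Hom(V1, W1) + Hom(V2, W2) -> Hom(V1, W2).

from fractions import Fraction

def rank(m):
    """Rank of a matrix (list of rows) by Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in m]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                fac = m[i][col] / m[r][col]
                m[i] = [a - fac * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def hom_ext_dims(f, g, a1, a2, b1, b2):
    """dim Hom and dim Ext^1 between V = (f : k^a1 -> k^a2) and
    W = (g : k^b1 -> k^b2), via the matrix of Phi."""
    rows = []
    for p in range(b2):
        for qq in range(a1):
            row = [0] * (b1 * a1 + b2 * a2)
            for k in range(b1):                  # -(g theta1)[p][qq]
                row[k * a1 + qq] -= g[p][k]
            for k in range(a2):                  # +(theta2 f)[p][qq]
                row[b1 * a1 + p * a2 + k] += f[k][qq]
            rows.append(row)
    r = rank(rows)
    return (b1 * a1 + b2 * a2 - r, b2 * a1 - r)

# V = (k^2 --(1 0)--> k), W = (k --(1 0)^T--> k^2).
f, g = [[1, 0]], [[1], [0]]
h, e = hom_ext_dims(f, g, 2, 1, 1, 2)
assert (h, e) == (1, 1)
assert h - e == 2 * 1 + 1 * 2 - 2 * 2   # = <dimV, dimW> for this quiver
```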
Discussion. An algebra is hereditary if every submodule of a projective
module is projective. We have shown in the second corollary above that
path algebras of quivers are hereditary algebras. We also know: A is finite
dimensional if and only if Q has no oriented cycles, in which case A is a
basic algebra.
Suppose for a moment that A is any finite dimensional algebra. The Ext
quiver of A is the quiver with vertices 1, . . . , n corresponding to the isomorphism classes of irreducible A-modules L1 , . . . , Ln , and with dim Ext1A (Li , Lj )
arrows from i to j.
In case A is a path algebra of Q, the last corollary above shows that
dim Ext1A (Li , Lj ) = −⟨εi , εj ⟩
which we noticed before was the number of arrows in Q from i to j. Thus
in the case that A is a path algebra of a quiver with no oriented cycles, the
Ext quiver allows us to recover the original quiver in an invariant way!
This seems like a good moment to state Gabriel’s theorem, though I am
not going to write out a proof. (See Benson Propositions 4.1.7 and 4.2.5).
Theorem 2.18 (Gabriel, circa 1970). Suppose that A is a finite dimensional
basic algebra over k. Let Q be its Ext quiver. Then there is a surjective map
π : kQ ↠ A with kernel contained in the ideal spanned by all paths of length at least
two. If A is hereditary, this map is an isomorphism.
Thus you see that of all finite dimensional basic algebras, the hereditary
ones are precisely the ones you get from quivers; but for any finite dimensional algebra you get a first approximation to its representation theory by
looking at representations of its Ext quiver. This is particularly effective if the
Ext quiver happens to have no oriented cycles, since then ker π is contained
in the square of the Jacobson radical of kQ so you can transfer quite a lot
of the information.
2.5. A little algebraic geometry. To proceed, we need a little bit of
algebraic geometry. Everything in this section will be familiar to those of
you who took my course last year on algebraic groups, but I will try to be
careful about stating all the things we are using so that those of you who
didn’t can follow the main idea.
First remember that affine N -space is the algebraic variety AN . The
points of AN are just the N -tuples from the field k. The coordinate ring
k[AN ] of functions on AN is the polynomial ring k[x1 , . . . , xN ], where xi is
the ith coordinate function.
The space AN is endowed with the Zariski topology defined by declaring that the closed sets are the sets V (I) of common zeros of the ideals I in
k[x1 , . . . , xN ] (equivalently, the common zeros of a finite collection of polynomials generating the ideal I). Conversely, given a closed subset X of AN ,
the set of all functions vanishing on X is a radical ideal I(X) of the coordinate
ring. The Nullstellensatz says that I(V (I)) = √I. This implies that the
maps V and I give a 1–1 inclusion reversing correspondence between the
closed sets in AN and the radical ideals in k[x1 , . . . , xN ].
An affine variety is a closed set X in A^N. Its coordinate ring k[X] is the
ring k[x1, . . . , xN]/I(X), i.e. the functions obtained by restricting polynomial
functions to X. The Nullstellensatz extends to this situation to
give a 1–1 inclusion-reversing correspondence between the closed sets in X
and the radical ideals in k[X]. In particular, the points in X correspond to
the maximal ideals in k[X], so you can completely recover X from its coordinate
ring. In this way you get a functor Spec from the category of affine
algebras (reduced, finitely generated commutative algebras) to the category
of affine varieties which is a contravariant equivalence of categories. Products
exist in the category of affine varieties: X × Y is the affine variety with
coordinate ring k[X] ⊗ k[Y].
A topological space is irreducible if any non-empty open subset is dense.
The dimension of an irreducible topological space is
sup{n | there exist irreducible closed subsets Z0 ⊂ Z1 ⊂ · · · ⊂ Zn }.
For an affine variety X, it is irreducible if and only if k[X] is an integral
domain, in which case its dimension dim X is equal to the Krull dimension
of the ring k[X] (defined in terms of chains of prime ideals). For example
the dimension of the irreducible variety AN is N .
If X is a closed subvariety of A^N and Y is a closed subvariety of A^M, a
morphism f : X → Y means a function with
f(x1, . . . , xN) = (f1(x), . . . , fM(x))
where each fi is a polynomial function, i.e. a function belonging to k[X].
Equivalently, a function f : X → Y is a morphism if for each θ ∈ k[Y] the
function f*θ : X → k defined by (f*θ)(x) = θ(f(x)) belongs to k[X], in which
case f* : k[Y] → k[X] is an algebra homomorphism. This latter definition
makes it clear that morphisms don't depend on the particular choice of the
embedding of X, Y into affine spaces. This will be good enough for our
purposes: we won't ever think about morphisms of non-affine varieties.
A quasi-affine variety is an open subset U of an affine variety X; equivalently,
a locally closed (a.k.a. open in its closure) subset of some A^N. It has
the subspace topology induced by the Zariski topology on X. The dimension
of U in this topology is the same as the dimension of its closure in X.
Special case: let X be an affine variety and 0 ≠ f ∈ k[X]. Then the set
D(f) of all non-zeros of f is called a principal open subset of X. I want to
convince you that D(f) can be given the structure of an affine variety in a
natural way. The trick is to consider the map
D(f) → X × A¹, x ↦ (x, 1/f(x)),
which is a homeomorphism onto its image.
The image of D(f ) is the set of all (x, c) such that f (x)c = 1. This is a
closed subset of X × A1 . So we have identified D(f ) with an affine variety.
Its coordinate ring is k[D(f)] = k[X][x]/(f x − 1), so the image of f in k[D(f)]
is invertible (with inverse the image of x). You get that k[D(f)] = k[X]_f, the coordinate ring
of X localized at the function f . Return to the general case that U is an
arbitrary open subset of an affine variety X. The principal open subsets
form a base for the Zariski topology, so U is a (finite) union of D(f )’s,
hence a finite union of affine varieties. In this way we have made sense of
the idea that a quasi-affine variety is a topological space that looks locally
like an affine variety. This leads to the general notion of variety: something
obtained by gluing affine varieties along open sets. But we won’t need such
generality.
An algebraic group is an affine variety G endowed with a multiplication
µ : G × G → G and an inverse i : G → G which are morphisms of affine
varieties. The main example is the group GLn(k) of all n × n invertible matrices
over k. This is the principal open subset of Mn(k) ≅ A^{n²} defined by
the non-vanishing of the determinant det, so it is an irreducible affine variety of
dimension n². To see that μ, i are morphisms, just note that they are polynomial
functions in the n² coordinate functions and det⁻¹. Any algebraic
group is isomorphic to a closed subgroup of some GLn(k).
An action of an algebraic group G on an affine variety X means a morphism ρ : G × X → X of affine varieties that is a group action in the usual
sense. In that case a fundamental theorem shows:
(1) the orbits of G on X are irreducible and locally closed;
(2) if O is an orbit, its boundary Ō − O (closure minus orbit) is a union of orbits of strictly
smaller dimension than O;
(3) the stabilizer Gx of a point x ∈ O is a closed subgroup of G, and
dim O = dim G − dim Gx (note Gx need not be connected but it has
finitely many connected components, each of which is irreducible of
the same dimension, so dim Gx should be understood as meaning
the dimension of any one of these components).
Now we are ready to apply these facts...
2.6. The variety of representations. Throughout the section, Q is a
quiver and A = kQ. We also fix a dimension vector α ∈ Nn . We will discuss
the algebraic geometry arising from the representations of Q of dimension
α.
Note Homk(k^s, k^t) is just the space of all t × s matrices, so we can
identify it with the affine variety A^{ts}. Thus it is an irreducible affine variety
of dimension ts. Let
Rep(α) = ∏ρ∈Q1 Homk(k^{αs(ρ)}, k^{αt(ρ)}).
This is an irreducible variety of dimension Σρ∈Q1 αs(ρ) αt(ρ). Given a point
x ∈ Rep(α), we define a representation R(x) of Q of dimension α by declaring
that R(x)i = k^{αi} and that R(x)ρ is the linear map with matrix xρ.
Let GL(α) = ∏i=1..n GLαi(k). This is an algebraic group of dimension
Σi αi². We make GL(α) act on the variety Rep(α) by conjugation. Thus,
if x = (xρ)ρ∈Q1 is an element of the product (so each xρ is an αt(ρ) × αs(ρ)-matrix)
and g = (gi)i=1..n (so each gi is an invertible αi × αi-matrix), we define
gx by
(gx)ρ = gt(ρ) xρ gs(ρ)⁻¹.
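To make this concrete, here is a minimal numpy sketch of a point of Rep(α) and the GL(α)-action just defined. The particular quiver, dimension vector and variable names are my own illustration, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A quiver is a list of arrows (s, t) between vertices 0..n-1.
# Illustrative example: two vertices, two parallel arrows 0 -> 1
# (the Kronecker quiver), with dimension vector alpha.
arrows = [(0, 1), (0, 1)]
alpha = [2, 3]

# A point of Rep(alpha): one alpha_t x alpha_s matrix per arrow.
x = [rng.standard_normal((alpha[t], alpha[s])) for (s, t) in arrows]

# An element of GL(alpha): one invertible alpha_i x alpha_i matrix per vertex.
g = [rng.standard_normal((a, a)) + 3 * np.eye(a) for a in alpha]
h = [rng.standard_normal((a, a)) + 3 * np.eye(a) for a in alpha]

def act(g, x):
    """(g.x)_rho = g_{t(rho)} x_rho g_{s(rho)}^{-1}, arrow by arrow."""
    return [g[t] @ x_rho @ np.linalg.inv(g[s])
            for (s, t), x_rho in zip(arrows, x)]

# The base-change formula really defines a group action: g.(h.x) = (gh).x.
gh = [gi @ hi for gi, hi in zip(g, h)]
assert all(np.allclose(u, v) for u, v in zip(act(g, act(h, x)), act(gh, x)))
```

The assertion at the end checks the defining property of an action, which is exactly the statement that conjugation by a change of basis at each vertex is compatible with composition.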
Lemma 2.19. There is a 1–1 correspondence V ↦ OV between the set of
isomorphism classes of representations of Q of dimension α and the set of
GL(α)-orbits on Rep(α). The correspondence maps a representation V of
dimension α to the set OV = {x ∈ Rep(α) | R(x) ≅ V}. Moreover, the group
AutA(R(x)) of all A-module automorphisms of R(x) is isomorphic to the
stabilizer of x in GL(α).
Proof. If V is a representation of dimension α, picking a basis in each Vi
allows us to identify each Vi with the vector space k αi and each Vρ with a
map from k αs(ρ) to k αt(ρ) . The action of GL(α) on Rep(α) corresponds to
changing to a different choice of basis.
So: representations of dimension α are in 1–1 correspondence with orbits. Now let's look at the geometry of the orbits and try to relate that to
representation theory. Recall: orbits are locally closed and irreducible, and
the boundary of an orbit is a union of orbits of strictly smaller dimension.
The next lemma explains how the first basic geometric notion “dimension”
relates to the representation theory.
Lemma 2.20. Suppose V is a representation of dimension α. Then,
dim Rep(α) − dim OV = dim EndA (V ) − q(α) = dim Ext1 (V, V ).
Proof. Suppose V ≅ R(x). Then,
dim GL(α) − dim OV = dim Gx.
Now, the previous lemma shows that Gx ≅ AutA(V). This is a principal open
subset of EndA(V), so has the same dimension. Hence,
dim Rep(α) − dim OV = Σρ αs(ρ) αt(ρ) − Σi αi² + dim EndA(V)
= dim EndA(V) − q(α) = dim Ext¹(V, V).
The last equality here is 2.17.
Right away we can get somewhere with this:
Corollary 2.21. Suppose α ≠ 0 and q(α) ≤ 0. Then, for any representation
V of dimension α, dim OV < dim Rep(α). Hence, there are infinitely many
orbits in Rep(α), i.e. infinitely many non-isomorphic representations of
dimension α.
For instance, take the quiver in Exercise 9 above. Labelling the outside
vertices 1, 2, 3, 4 and the inside vertex 0, we have that
q(1, 1, 1, 1, 2) = 8 − 8 = 0.
So we see right away that there are infinitely many non-isomorphic representations of this quiver of this dimension. In particular this cannot have
finitely many indecomposables – i.e. it is not of finite representation type.
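The computation q(1, 1, 1, 1, 2) = 8 − 8 = 0 is easy to check by hand, and just as easy to automate. Here is a short sketch of the Tits form; the labelling convention (inside vertex listed first in the dimension vector) is my own:

```python
# Tits form of a quiver:
# q(alpha) = sum_i alpha_i^2 - sum over arrows of alpha_{s(rho)} * alpha_{t(rho)}.
# Note the value does not depend on the orientation of the arrows.
def tits_form(arrows, alpha):
    return sum(a * a for a in alpha) - sum(alpha[s] * alpha[t] for s, t in arrows)

# The quiver of Exercise 9: inside vertex 0, outside vertices 1..4,
# one arrow joining each outside vertex to the inside one.
arrows = [(1, 0), (2, 0), (3, 0), (4, 0)]

# Dimension vector (1,1,1,1,2) with the 2 on the inside vertex:
assert tits_form(arrows, [2, 1, 1, 1, 1]) == 0  # 8 - 8 = 0
```

Since the value is 0, Corollary 2.21 applies and there are infinitely many orbits in this dimension.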
Corollary 2.22. Let V be a representation of dimension α. Then, OV is
open in Rep(α) if and only if Ext1A (V, V ) = 0, i.e. V has no self-extensions.
Proof. A proper closed subvariety of an irreducible variety has strictly smaller
dimension. Since orbits are open in their closure and Rep(α) is irreducible,
OV is open if and only if its closure is all of Rep(α), which holds if and only if
dim OV = dim Rep(α), which holds if and only if dim Ext¹A(V, V) = 0.
Corollary 2.23. There is at most one representation of Q of dimension α
(up to isomorphism) with no self-extensions.
Proof. Any two non-empty opens inside an irreducible variety intersect, but
orbits are disjoint.
Let us continue in this vein: relating geometry to representation theory.
Lemma 2.24. If 0 → U → V → W → 0 is a non-split short exact sequence,
then OU ⊕W is contained in the boundary of OV .
Proof. View each Ui as a subspace of Vi, and pick a basis of each Vi extending a basis
of Ui. Then each Vρ looks like a block upper-triangular matrix ( uρ xρ ; 0 wρ )
for matrices uρ, xρ and wρ of appropriate sizes. For λ ∈ k×, let gλ ∈ GL(α)
be the element with ith component ( λI 0 ; 0 I ) (λ on the U-part, identity on
the complement). Then,
(gλ x)ρ = ( uρ λxρ ; 0 wρ ).
These points are contained in OV for each λ ≠ 0. Since some xρ is non-zero
(the sequence is non-split), the closure of the set of all such tuples of matrices
for λ ≠ 0 contains the tuple in which λ = 0. Hence, OU⊕W ⊆ ŌV.
Corollary 2.25. If OV is an orbit in Rep(α) of maximal dimension and
V = U ⊕ W, then Ext¹(W, U) = 0.
Proof. Suppose 0 −→ U −→ E −→ W −→ 0 is a non-split extension. Then
by the lemma, OV is contained in the boundary of OE . But that means that
dim OV < dim OE contradicting the maximality of dim OV .
Corollary 2.26. If OV is closed then V is completely reducible.
Proof. Let U be a submodule, W = V /U . We need to show the extension
is split. If not, then OU ⊕W lies in the boundary of OV . But OV is closed so
its boundary is empty. Contradiction.
Comments.
Suppose that Q has no oriented cycles. Let Z ∈ Rep(α) be the unique
element with all matrices being zero. This is obviously in its own orbit,
hence closed. Iterating the lemma above shows that Z is contained in the
closure of any other orbit, hence {Z} is the unique closed orbit. Since Z is
semisimple, we see in this case that V is completely reducible if and only if
OV is closed.
Let us pause to think about what is going on in some examples. Suppose Q
is the quiver with one vertex and one arrow (a loop). The orbits in Rep(d) are
just the conjugacy classes of d × d matrices, parametrized by Jordan forms.
The lemma shows that the orbit of Jd(λ) contains Jd1(λ) ⊕ Jd2(λ) (d1 + d2 = d) in
its closure. In particular, iterating, it contains J1(λ)⊕d in its closure. Thus
the only conceivably closed conjugacy classes are the ones corresponding to
diagonalizable matrices: to see that these are indeed closed orbits, look at
Exercise 1 below.
Now we compute the dimension of some of the orbits. Suppose λ1 , . . . , λs
are distinct eigenvalues and d1 + · · · + ds = d. Consider the orbit Jd1 (λ1 ) ⊕
· · · ⊕ Jds (λs ). The endomorphism ring is of dimension d (it is the direct
sum k[x]/(xd1 ) ⊕ · · · ⊕ k[x]/(xds )). Since q(d) = 0 we deduce by the Lemma
that the dimension of the orbit is d2 − d. These are the orbits of maximal
dimension – everything else has a larger endomorphism ring. For instance
if you take s = 2, then Corollary 2.25 implies that there are no extensions
between Jordan blocks of different eigenvalues. Of course we know this
already, but it illustrates the point.
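These endomorphism-ring dimensions can be checked numerically: for the one-loop quiver, dim End(M) is the dimension of the space of matrices commuting with M, i.e. the nullity of the operator Z ↦ MZ − ZM. A small sketch (the choice of matrix is my own example, matching the s = 2 case above):

```python
import numpy as np

def commutant_dim(M):
    """dim of {Z : MZ = ZM}, computed as the nullity of the
    Kronecker-product matrix I (x) M - M^T (x) I acting on vec(Z)."""
    n = M.shape[0]
    L = np.kron(np.eye(n), M) - np.kron(M.T, np.eye(n))
    return n * n - np.linalg.matrix_rank(L)

def jordan(d, lam):
    """Single d x d Jordan block with eigenvalue lam."""
    return lam * np.eye(d) + np.eye(d, k=1)

# J_2(0) + J_1(1): distinct eigenvalues, block sizes d1 = 2, d2 = 1, d = 3.
M = np.block([[jordan(2, 0.0), np.zeros((2, 1))],
              [np.zeros((1, 2)), jordan(1, 1.0)]])

# The endomorphism ring has dimension d1 + d2 = 3, so the orbit has
# dimension d^2 - 3 = 6 = d^2 - d, as claimed in the text.
assert commutant_dim(M) == 3
```

The distinct eigenvalues force the off-diagonal blocks of a commuting matrix to vanish, which is the numerical shadow of Ext-vanishing between Jordan blocks of different eigenvalues.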
Exercise 1. Let M be the variety of all n × n matrices and let GLn (k)
act on M by conjugation. Prove that if m ∈ M is a semisimple (a.k.a.
diagonalizable) matrix, then the conjugacy class of m is closed.
Exercise 2. Suppose Q is the quiver with two vertices and one arrow from
1 to 2. The orbits in Rep(α1 , α2 ) are the equivalence classes of α2 × α1
matrices, thus parametrized by their rank 0 ≤ r ≤ min(α1 , α2 ). Compute
the dimensions of each of these orbits. Describe the partial order on the
orbits defined by Or ≤ Os if Or ⊆ Os . The closures of these orbits are
called rank varieties. These and their analogs for other type A quivers are
an important source of tractable examples in algebraic geometry.
Exercise 3. Suppose Q is the quiver considered in Exercise 8 of the previous problem set
(one vertex in the middle, 3 round the edge, arrows pointing
inwards). You should have already calculated the dimensions of the twelve different
indecomposables. Let α = (1, 1, 1, 2) (where the 2 is on the inside
vertex). Question: how many orbits does GL(α) have on Rep(α) in this
case? Try to describe the partial order on the orbits given by containment
of closures in this case.
Note Corollary 2.21 indicates that if we are looking for quivers with finitely
many indecomposables, we should study which quivers can have q(α) > 0
for every dimension vector α. This is the purpose of the next section.
2.7. Dynkin and Euclidean diagrams. Recall that q doesn't depend on
orientation. If we forget the orientation on a quiver, we get a graph Γ with
vertices 1, . . . , n and ni,j = nj,i edges between i and j. Given such a graph
and α ∈ Zn, let
q(α) = Σi αi² − Σi≤j ni,j αi αj.
Also let (., .) be the symmetric bilinear form defined by
(εi, εj) = −ni,j for i ≠ j, and (εi, εi) = 2 − 2ni,i,
where εi denotes the ith coordinate vector.
Thus q and (., .) are what they were before if Γ is the graph underlying a
quiver.
We say q is positive definite if q(α) > 0 for all 0 ≠ α ∈ Zn, and positive
semi-definite if q(α) ≥ 0 for all α. The radical of q is {α ∈ Zn | (α, β) =
0 for all β ∈ Zn}. Call α ∈ Zn strict if no αi is zero. Finally, let ≤ be the
partial ordering on Zn defined by α ≤ β if αi ≤ βi for each i = 1, . . . , n. Say
α is positive if it is ≥ 0, negative if it is ≤ 0.
Let us record a technical lemma.
Lemma 2.27. Suppose Γ is connected and β > 0 is a vector in the radical
of q. Then, β is strict and the form q is positive semi-definite. For α ∈ Zn ,
we have that q(α) = 0 if and only if α ∈ Qβ, which is if and only if α lies
in the radical of q.
Proof. Note
0 = (εi, β) = (2 − 2ni,i)βi − Σj≠i ni,j βj.
If βi = 0 then Σj≠i ni,j βj = 0, and since each term is ≥ 0 we get that βj = 0
whenever there is an edge i − j. Since Γ is connected, it follows that β = 0,
a contradiction. Thus, β is strict. Now,
Σi<j ni,j (βi βj / 2) (αi/βi − αj/βj)²
= Σi<j ni,j ( (βj/2βi) αi² − αi αj + (βi/2βj) αj² )
= Σi ( Σj≠i ni,j βj / 2βi ) αi² − Σi<j ni,j αi αj
= Σi (1 − ni,i) αi² − Σi<j ni,j αi αj = q(α),
using the displayed identity for the middle step.
Hence q is positive semi-definite. If q(α) = 0 then αi/βi = αj/βj whenever
there is an edge i − j. Since Γ is connected it follows that α ∈ Qβ. But that
implies that α is in the radical of q, since β is. Finally, if α is in the radical
of q then q(α) = 0. This completes the proof.
Now we can prove the main classification theorem.
Theorem 2.28. Suppose that Γ is connected.
(1) If Γ is a Dynkin diagram (which I’ll draw on the board) then q is
positive definite.
(2) If Γ is a Euclidean diagram, then q is positive semi-definite and the
radical of q is Zδ, where δ is the vector indicated by the numbers on
the graph. Note that δ is strict and > 0.
(3) Otherwise, there is a vector α > 0 with q(α) < 0 and (α, εi) ≤ 0 for
all i.
Proof. Suppose that Γ is a Euclidean diagram. First check that the vector
δ = Σi δi εi belongs to the radical. This amounts to checking that
(δ, εi) = −Σj≠i δj ni,j + 2δi − 2δi ni,i = 0 for each i. For example for Ẽ8, one has to
observe that the sum of the δj's over the neighbours j of i is equal to 2δi. Now
the lemma implies that q is positive semi-definite and that the radical of q
is Qδ ∩ Zn. Since one of the δi's is one, that is Zδ. This completes the proof
of (2).
Now suppose that Γ is a Dynkin diagram. Add one more vertex to get a
Euclidean diagram Γ̃. Note that qΓ is the restriction to Zn of the quadratic
form qΓ̃ on Z^{n+1}. By the lemma, if qΓ̃(α) = 0 for a non-zero α ∈ Z^{n+1} then α ∈ Zδ,
hence α is strict. No non-zero vector coming from Γ is strict (its coordinate
at the new vertex is zero), so qΓ̃ is strictly positive on all the non-zero α's in Zn coming from
Γ, i.e. qΓ is positive definite. This proves (1).
Finally suppose that Γ is neither Euclidean nor Dynkin. Then Γ has a
subgraph Γ′ that is Euclidean, say with radical vector δ. If all the vertices of
Γ are in Γ′, take α = δ. Else, let i be a vertex not in Γ′ but connected to Γ′
by an edge, and take α = 2δ + εi. Now check that q(α) < 0 and (α, εj) ≤ 0
for all j.
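Parts (1) and (2) are easy to sanity-check on a computer: q(α) = ½ αᵀCα, where C is the symmetric matrix with Cii = 2 − 2ni,i and Cij = −ni,j, so positive (semi-)definiteness of q is visible in the eigenvalues of C. A quick sketch, with graph encodings of my own choosing:

```python
import numpy as np

def form_matrix(n, edges):
    """Symmetric matrix C of the bilinear form, so q(a) = a^T C a / 2.
    edges is a list of pairs (i, j); a loop (i, i) is allowed."""
    C = 2.0 * np.eye(n)
    for i, j in edges:
        if i == j:
            C[i, i] -= 2.0  # each loop at i lowers the diagonal entry by 2
        else:
            C[i, j] -= 1.0
            C[j, i] -= 1.0
    return C

# Dynkin diagram A4: the path 0-1-2-3.  All eigenvalues > 0.
A4 = form_matrix(4, [(0, 1), (1, 2), (2, 3)])
assert np.linalg.eigvalsh(A4).min() > 0

# Euclidean diagram D~4: centre 0 joined to leaves 1..4.  Positive
# semi-definite, with kernel spanned by delta = (2, 1, 1, 1, 1).
D4t = form_matrix(5, [(0, 1), (0, 2), (0, 3), (0, 4)])
evals = np.linalg.eigvalsh(D4t)  # sorted ascending
assert abs(evals.min()) < 1e-9 and evals[1] > 0
assert np.allclose(D4t @ np.array([2, 1, 1, 1, 1]), 0)
```

Positivity of q on Zn is equivalent to positivity of the real quadratic form (clear denominators), which is why the eigenvalue test is legitimate.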
From now on we will focus on Γ either Dynkin or Euclidean (it is possible
to extend most of these definitions to the general case, but that will not be
worth it for us!). Let’s associate a few more invariants. First the set of roots
is
∆ = {0 ≠ α ∈ Zn | q(α) ≤ 1}.
A root α is called real if q(α) = 1 and imaginary if q(α) = 0. Note each εi is
a root. These are known as the simple roots. Obviously in the Dynkin case,
there are no imaginary roots, while in the Euclidean case the imaginary
roots are the integer multiples of δ, by the lemma.
Here are some further properties of the set of roots:
Lemma 2.29. (i) If α ∈ ∆ and β belongs to the radical of q, then −α
and α + β are roots.
(ii) Every root is > 0 or < 0.
(iii) If Γ is Euclidean, then (∆ ∪ {0})/Zδ is a finite subset of Zn/Zδ
(the quotient abelian group).
(iv) If Γ is Dynkin, then ∆ is finite.
Proof. (i) q(β ± α) = q(β) + q(α) ± (β, α) = q(α) ≤ 1.
(ii) Write α = α+ − α− where α+ , α− ≥ 0 and they have disjoint support.
Then obviously, (α+ , α− ) ≤ 0 so
1 ≥ q(α) = q(α+ ) + q(α− ) − (α+ , α− ) ≥ q(α+ ) + q(α− ).
Since q is positive semi-definite, one of q(α+ ) or q(α− ) must therefore be
zero, i.e. one of them is an imaginary root. But imaginary roots are strict,
so the other one must be zero.
(iii) Pick i with δi = 1. If α is a root with αi = 0 then δ − α and δ + α
are roots by (i). Since their ith component is 1, they are positive roots, so
−δ ≤ α ≤ δ. Therefore
{α ∈ ∆ | αi = 0} ⊆ {α ∈ Zn | −δ ≤ α ≤ δ},
so it is finite. Now take β ∈ ∆. Then, β − βi δ is zero or a root with ith
component zero, so it belongs to this finite set.
(iv) Embed Γ into a Euclidean diagram Γ̃ by adding a vertex i with δi = 1.
Then the roots α of Γ are exactly the roots of Γ̃ with αi = 0, so we are done by (iii).
Example 2.30. Let Γ be the graph 1 − 2 − 3. The positive roots are
(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1), (1, 1, 1),
and the remaining roots are the negatives of these. Let Γ be the graph with one vertex and one
edge (a loop at that vertex). The roots are the non-zero elements of Zδ where δ = (1). Let Γ be the Kronecker graph with
two vertices and two edges joining them. The positive roots are (a + 1, a), (a, a + 1) for
a ≥ 0 and (a, a) for a > 0.
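The root sets in examples like these can be enumerated by brute force over a small box; here is a sketch for the graph 1 − 2 − 3, where the box bound is my own choice (large enough here, since q is positive definite and forces small coordinates):

```python
from itertools import product

# Tits form of the path graph 1 - 2 - 3 (no loops, single edges).
def q(a):
    return a[0]**2 + a[1]**2 + a[2]**2 - a[0]*a[1] - a[1]*a[2]

# Roots: non-zero alpha with q(alpha) <= 1, searched in a box.
roots = [a for a in product(range(-2, 3), repeat=3) if any(a) and q(a) <= 1]

positive = sorted(a for a in roots if min(a) >= 0)
assert positive == [(0, 0, 1), (0, 1, 0), (0, 1, 1),
                    (1, 0, 0), (1, 1, 0), (1, 1, 1)]
assert len(roots) == 12  # the six positive roots and their negatives
```

Note that (1, 0, 1) does not appear: q(1, 0, 1) = 2, so it is not a root of this graph.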
2.8. Gabriel's theorem. We now want to classify the quivers of finite
representation type, i.e. with finitely many finitely generated indecomposables
up to isomorphism. By the Krull-Schmidt theorem, that happens if and
only if there are finitely many orbits in Rep(α) for each dimension vector α.
We know right away by Corollary 2.21 that if Q is such a quiver, then q is
positive definite, hence the underlying graph of Q is a Dynkin diagram. So
our main task will be to analyse the Dynkin quivers and prove that they do
all indeed have finitely many indecomposables. The main result:
Theorem 2.31 (Gabriel, circa 1970). A quiver Q has finite representation
type if and only if its underlying graph is Dynkin. In that case, the map
V ↦ dim V
defines a 1–1 correspondence between the isomorphism classes of indecomposable
representations and the positive roots in ∆ (which is finite).
For example the Dynkin diagram D4 has exactly 12 positive roots, so the
12 indecomposable representations found in the exercise above are all the indecomposables.
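The count of 12 positive roots for D4 is easy to verify by brute force. The vertex labelling below is my own: centre vertex 0, outer vertices 1, 2, 3:

```python
from itertools import product

# Tits form for D4: centre vertex 0 joined to outer vertices 1, 2, 3.
def q(a):
    return sum(x * x for x in a) - a[0] * (a[1] + a[2] + a[3])

# Positive roots: alpha > 0 with q(alpha) = 1.  Since the form is
# positive definite, the coordinates are small and a box search suffices.
pos_roots = [a for a in product(range(3), repeat=4) if any(a) and q(a) == 1]
assert len(pos_roots) == 12
```

The highest root is (2, 1, 1, 1) in this labelling, which is why coordinates up to 2 suffice.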
To prove the theorem, we need one more technical lemma.
Lemma 2.32. Suppose that V is indecomposable and dim End(V) > 1. Then
there is a proper indecomposable submodule U ⊂ V with dim Ext¹(U, U) > 0.
Proof. Note that since V is indecomposable, Fitting's lemma shows that
E = End(V) is a local ring, i.e. E/J(E) is a simple E-module. Since k is
algebraically closed, Schur's lemma shows that E/J(E) ≅ EndE(E/J(E)) is
one dimensional. Hence our assumption that dim E > 1 means exactly that
J(E) ≠ 0. Pick 0 ≠ θ ∈ J(E) (i.e. a non-zero nilpotent endomorphism of V) such that
im θ is of minimal dimension – this guarantees in particular that θ² = 0.
Let I = im θ and ker θ = K1 ⊕ · · · ⊕ Kr with the Ki's indecomposable. Note
I ⊆ K1 ⊕ · · · ⊕ Kr since θ² = 0. Choose j so that the composite α of the inclusion
I ↪ K1 ⊕ · · · ⊕ Kr and the projection K1 ⊕ · · · ⊕ Kr ↠ Kj is non-zero. We
note that α : I → Kj is injective: the composite of θ : V ↠ I, α : I → Kj
and the inclusion Kj ↪ V is a non-zero element of J(E) with image α(I),
hence dim α(I) ≥ dim I by the minimality assumption.
Now we claim that Ext¹(I, Kj) ≠ 0. Once we have established that, the
lemma follows: applying Hom(−, Kj) to 0 → I → Kj → Q → 0 (where
Q = Kj/α(I)) and using the fact that Ext² vanishes gives an exact sequence
Ext¹(Kj, Kj) → Ext¹(I, Kj) → 0, hence
Ext¹(Kj, Kj) ≠ 0 and we can take U = Kj.
To prove the claim, suppose for a contradiction that Ext¹(I, Kj) = 0. We
have
0 −→ K1 ⊕ · · · ⊕ Kr −→ V −→ I −→ 0.
Factoring out K′ := K1 ⊕ · · · ⊕ K̂j ⊕ · · · ⊕ Kr gives
0 −→ Kj −→ V/K′ −→ I −→ 0.
This splits, so Kj has a complement C in V/K′. But then the inverse
image of C in V is a complement to Kj in V, hence Kj is a summand of V. This
contradicts the assumption that V is indecomposable.
Corollary 2.33. If V is indecomposable and dim End(V ) > 1, there is an
indecomposable submodule U ⊂ V with dim End(U ) = 1 and dim Ext1 (U, U ) >
0, hence q(dimU ) ≤ 0.
Proof. Apply the lemma to get U. If U is not a brick (i.e. dim End(U) > 1), repeat
with U in place of V. Since everything is finite dimensional the process must terminate.
Now we can prove the theorem.
Suppose that Q is a quiver whose underlying graph is Dynkin, so q is positive
definite. In view of the preceding corollary, every indecomposable module
V satisfies dim End(V) = 1 and dim Ext¹(V, V) = 0; otherwise we would obtain
an indecomposable U with dim End(U) = 1 and q(dim U) ≤ 0, contradicting
positive definiteness.
Now let V be an indecomposable of dimension α. We have just observed
that q(α) = 1 so α is a positive root of the root system ∆. We know there
is at most one indecomposable of dimension α (Corollary 2.23), so it just
remains to prove that there exists at least one indecomposable of dimension
α for each positive root of ∆.
Well, let α be a positive root of ∆. Let OV be an orbit of maximal dimension in Rep(α). If V is indecomposable, we are done. Else, we can write
V = U ⊕ W . But then Ext1 (W, U ) = Ext1 (U, W ) = 0 by Corollary 2.25,
hence (dimW, dimU ) ≥ 0. In that case,
1 = q(dimV ) = q(dimW +dimU ) = q(dimW )+q(dimU )+(dimU, dimW ) ≥ 2,
a contradiction.
End of proof.
Example 2.34. For the Dynkin diagram E8 , there are 120 positive roots.
It is not too hard to write them all down (though the easiest way to compute them is to make use of the Weyl group which we haven’t defined yet).
Given this, we’ve just proved for example that the quiver has exactly 120
indecomposable representations up to isomorphism!!!
Discussion. Let me now make more precise the trichotomy “finite”,
“tame”, “wild” for finite dimensional algebras. First, the definitions. Let A
be an algebra. Then,
• A is of finite representation type if A-mod has finitely many indecomposables up to isomorphism;
• A is of tame representation type if there are infinitely many indecomposables
in A-mod up to isomorphism, but for any r there are
(A, k[t])-bimodules M1, . . . , MN (where N may depend on r) which
are finitely generated and free over k[t] such that any indecomposable A-module of dimension r is isomorphic to some Mi ⊗k[t] k[t]/(t − λ);
• A is of wild representation type if there is an (A, k⟨x, y⟩)-bimodule M
that is finitely generated and free over k⟨x, y⟩ such that the functor
M ⊗k⟨x,y⟩ ? sends non-isomorphic finite dimensional k⟨x, y⟩-modules
to non-isomorphic A-modules.
The idea of tame: you can classify the indecomposables of each dimension
by finitely many one-parameter families. The idea of wild: you can embed
the representation theory of k⟨x, y⟩ into A-mod. Since it is known that the
word problem for finitely presented groups (which is undecidable) can be
embedded into the module theory of k⟨x, y⟩, such things are in some sense
hopeless.
Exercise 4. For each dimension r ≥ 1, construct a (k[x], k[t])-bimodule Mr
that is finitely generated and free over k[t] such that any indecomposable
k[x]-module of dimension r is isomorphic to Mr ⊗k[t] k[t]/(t − λ) for λ ∈ k.
Deduce that the polynomial algebra k[x] is of tame representation type.
The trichotomy comes from the following hard theorem of Drozd:
Theorem 2.35 (Drozd, 1977). Let A be a finite dimensional algebra. Then
A is of finite, tame or wild representation type.
We have just classified the path algebras of quivers that are of finite
representation type: the Dynkin quivers. I bet you can guess right now
that: the Euclidean quivers are the quivers with tame representation type.
To prove this involves quite a lot more work, since the indecomposable
representations of the Euclidean quivers have to be classified (but the point
is that if they ARE tame it ought to be POSSIBLE to classify!). Of course
you know how to do this for the quiver Ã0 (Exercise 4). (On the other hand
it is not that hard to show that IF Q is a quiver of tame representation type
then it must be a Euclidean diagram.) I’ll talk about the Kronecker quiver
– another “easy” example from linear algebra – in the next section.
2.9. The Kronecker quiver. Let us analyse one more Euclidean diagram
in detail – it is a very classical one and a result worth knowing in its own
right. We’ll consider the Kronecker quiver: two vertices 1 and 2, with two
arrows from 1 to 2.
Let me remind you that the positive real roots for this quiver were (r, r + 1)
and (r + 1, r) for r ≥ 0, and the positive imaginary roots were (r, r) for
r ≥ 1.
Theorem 2.36 (Kronecker 1823–1891). Suppose that f, g : V1 → V2 is a
finite dimensional indecomposable representation of the Kronecker quiver.
Then one of the following holds:
(i) dim V1 = r + 1 and dim V2 = r for some r ≥ 0, and bases can be chosen
so that f, g are represented by the r × (r + 1) matrices
f = ( Ir 0 ), g = ( 0 Ir )
(so f kills the last basis vector of V1 and g kills the first);
(ii) dim V1 = r and dim V2 = r + 1 for some r ≥ 0, and bases can be chosen
so that f, g are represented by the transposes of the matrices in (i);
(iii) dim V1 = dim V2 = r for some r ≥ 1, and bases can be chosen so that
f = Ir, g = Jr(λ) for some λ ∈ k;
(iv) dim V1 = dim V2 = r for some r ≥ 1, and bases can be chosen so that
f = Jr(0) and g = Ir.
The proof involves some tricky linear algebra, so I am not going to give
it here. You can look it up in Benson’s book, Theorem 4.3.2. What I want
you to notice is that for each real root, there is a unique isomorphism class
of indecomposables of that dimension and for each imaginary root there is
an infinite “projective” family of indecomposables. The dimensions of the
indecomposables are exactly the roots in ∆.
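As a sanity check, one can verify that the normal form in (i) really is a brick, i.e. has one dimensional endomorphism algebra: an endomorphism is a pair (p1, p2) with p2 f = f p1 and p2 g = g p1, and the solution space of this linear system can be computed directly. A numpy sketch, with r = 3 my own choice:

```python
import numpy as np

r = 3
n1, n2 = r + 1, r
# Normal form (i): f = (I_r | 0), g = (0 | I_r), both r x (r+1).
f = np.hstack([np.eye(r), np.zeros((r, 1))])
g = np.hstack([np.zeros((r, 1)), np.eye(r)])

# An endomorphism is a pair (p1, p2) with p2 f = f p1 and p2 g = g p1.
# Using vec(AXB) = (B^T kron A) vec(X), each condition is linear in the
# unknown vector [vec(p1); vec(p2)].
A = np.block([
    [-np.kron(np.eye(n1), f), np.kron(f.T, np.eye(n2))],
    [-np.kron(np.eye(n1), g), np.kron(g.T, np.eye(n2))],
])
nullity = (n1 * n1 + n2 * n2) - np.linalg.matrix_rank(A)
assert nullity == 1  # End is one dimensional: the normal form is a brick
```

This matches the theory: (r + 1, r) is a real root, q = 1 = dim End − dim Ext¹, and the representation has no self-extensions.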
This leads me to stating another beautiful theorem.
Theorem 2.37 (Kac, 1980). Let Q be a quiver and α > 0 be a dimension
vector. Then, there is an indecomposable representation of dimension α if
and only if α is a root. In case α is a real root, there is a unique indecomposable representation of dimension α; in case α is an imaginary root there
are infinitely many indecomposables of dimension α.
Notice in the statement of the theorem I have not specified that Q should
be Dynkin or Euclidean – because Kac proves his theorem for ANY QUIVER
AT ALL!!!! But I have not yet given you the definition of roots for quivers
other than the Dynkin and the Euclidean ones, so I am not being completely
honest yet. I will define real and imaginary roots for arbitrary diagrams later
on – we need to introduce the Weyl group before we are ready to do that.
There is one other amusing consequence of Theorem 2.36. Let us pause
to discuss it. Let V4 be the Klein 4-group ⟨x, y | x² = y² = 1, xy = yx⟩.
Let k be an algebraically closed field of characteristic 2. Consider the finite
dimensional representations of the group algebra kV4 . (Of course that would
be boring for any other characteristic).
Lemma 2.38. Let P be a p-group and k be a field of characteristic p. Then,
there is just one irreducible kP -module, namely, the trivial module.
Proof. Let M be a finite dimensional kP-module. Let 0 ≠ m ∈ M. Consider
the additive subgroup of M generated by the vectors gm (g ∈ P). This is
an elementary abelian p-group A of finite rank, on which P acts. Since all
the orbits have size a power of p, the number of fixed points of P on A is
divisible by p. Since the zero element is fixed, there must therefore be at
least one non-zero fixed point. Therefore M contains a non-zero P-fixed
vector. In particular, if M is irreducible, M ≅ k is trivial.
Back to kV4. Well, it is not a path algebra of a quiver, so as a first approximation we
should work out its Ext quiver – then we know by Gabriel's other theorem
that kV4 is a quotient of the path algebra of its Ext quiver. There is just
one irreducible representation, so we just need to compute Ext¹(k, k). It is
two dimensional. That means the path algebra of its Ext quiver is k⟨X, Y⟩.
Ooops. That didn’t work. But if I remember in class I will draw some
pictures of the successive kernels that appear in the projective resolution
that we construct – these same modules appear again in our classification
below and are known as the syzygy modules Ωn (k).
But there is another more tricky way to proceed. Let X = x − 1, Y = y − 1,
so kV4 = k[X, Y]/(X², Y²). Let M be a finite dimensional indecomposable.
If it is projective, it is isomorphic to kV4, the only PIM. Now assume that
it is not projective. Let 0 ≠ m ∈ M. Consider the submodule (kV4)m of
M generated by m. If it is isomorphic to kV4 then – since kV4 ≅ (kV4)*
is injective – it splits off as a summand. So by the assumption that M is
a non-projective indecomposable, this does not happen. Hence, the map
kV4 → (kV4)m has some kernel, so soc(kV4) = ⟨XY⟩ must be in the
kernel. We have shown: XY annihilates all non-projective finite dimensional
indecomposable kV4-modules.
This reduction means we should instead contemplate the indecomposable
representations of the three-dimensional algebra A := k[X, Y]/(X², Y², XY). Let Q be
the Kronecker quiver. I am going to define functors (but we'll only care
about what they do to objects so I won't discuss morphisms):
F : Q-mod → A-mod
and
G : A-mod → Q-mod.
First suppose f, g : V1 → V2 is a representation of Q. Let M = V1 ⊕ V2, with
X acting as f : V1 → V2, Y acting as g : V1 → V2, and both acting as zero
on V2. Since then X² = Y² = XY = YX = 0 on M, this is a well-defined
A-module F(V). Conversely, suppose that
M is an A-module. Let V2 = im(X : M → M) + im(Y : M → M) and let
V1 := M/V2. Note that since X² = XY = YX = Y² = 0, X and Y annihilate
vectors in V2. So the linear maps that X, Y define on M induce linear maps
f, g : V1 → V2. Thus we have constructed a representation G(M) of Q.
Now it is not hard to show that for any representation V of Q such that
V2 = im f + im g, we have that V ≅ G(F(V)). And for any A-module M,
G(M) is such a representation of Q and F(G(M)) ≅ M. (Note for the
last isomorphism you have to pick a vector space complement to V2 in M,
so it is NOT a natural isomorphism.) Moreover, F and G commute with direct
sums. This shows that there is a 1–1 correspondence between isomorphism
classes of finite dimensional indecomposable representations of A and finite
dimensional indecomposable representations of Q satisfying V2 = im f +im g.
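The functor F is just block matrices, which makes the relations of A easy to check; a small numpy sketch, where the particular representation fed in is my own example:

```python
import numpy as np

def F(f, g):
    """Kronecker representation (f, g : V1 -> V2) to an A-module:
    M = V1 + V2, with X, Y acting by f, g on the V1 block and by 0 on V2."""
    d1, d2 = f.shape[1], f.shape[0]
    Z = np.zeros((d1 + d2, d1 + d2))
    X, Y = Z.copy(), Z.copy()
    X[d1:, :d1] = f   # X maps the V1 block into the V2 block
    Y[d1:, :d1] = g
    return X, Y

f = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # 3 x 2
g = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X, Y = F(f, g)

# The relations of A = k[X, Y]/(X^2, Y^2, XY) hold on M:
for P in (X @ X, Y @ Y, X @ Y, Y @ X):
    assert not P.any()
```

Any product of two of X, Y first lands in the V2 block and is then killed, which is exactly why the relations come for free.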
Now the only f.d. indecomposable representation of Q that doesn’t satisfy
that condition is the simple module L2 . And the only f.d. indecomposable
representation of kV4 that isn’t lifted from A is the PIM kV4 . Combining
this analysis with Theorem 2.36 proves:
Theorem 2.39 (Basev; Heller and Ringel, 1961). Let M be an indecomposable representation of kV4 in characteristic 2. Then one of the following
holds:
(1) M ≅ kV4;
(2) dim M = 2r and M has a basis with respect to which
X = ( 0_r  I_r ; 0  0_r ),   Y = ( 0_r  J_r(λ) ; 0  0_r )
for λ ∈ k;
(3) dim M = 2r and M has a basis with respect to which
X = ( 0_r  J_r(0) ; 0  0_r ),   Y = ( 0_r  I_r ; 0  0_r );
(4) dim M = 2r + 1 and M has a basis with respect to which
X = ( 0_r  A ; 0  0_{r+1} ),   Y = ( 0_r  B ; 0  0_{r+1} ),
where A, B are the r × (r + 1) matrices listed in Theorem 2.36(i);
(5) dim M = 2r + 1 and M has a basis with respect to which
X = ( 0_{r+1}  A^T ; 0  0_r ),   Y = ( 0_{r+1}  B^T ; 0  0_r ),
where A, B are the r × (r + 1) matrices listed in Theorem 2.36(i).
(Here the semicolon separates the rows of a 2 × 2 block matrix.)
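As a sanity check, the block matrices in case (2) can be verified to satisfy the relations X² = Y² = XY = YX = 0 used in the analysis above. Here is a small numerical sketch (the helper names are mine; the arithmetic is reduced mod 2 to reflect characteristic 2):

```python
import numpy as np

def block(top_right, r, s):
    """Assemble the (r+s) x (r+s) matrix [[0, top_right], [0, 0]]."""
    M = np.zeros((r + s, r + s), dtype=int)
    M[:r, r:] = top_right
    return M

def jordan(r, lam):
    """Single r x r Jordan block J_r(lam)."""
    return lam * np.eye(r, dtype=int) + np.eye(r, k=1, dtype=int)

r, lam = 3, 1
X = block(np.eye(r, dtype=int), r, r)   # case (2): X = (0_r I_r; 0 0_r)
Y = block(jordan(r, lam), r, r)         # Y = (0_r J_r(lam); 0 0_r)

# All four products vanish, so (X, Y) defines a module over
# k<X,Y>/(X^2, Y^2, XY, YX):
for P in (X @ X, Y @ Y, X @ Y, Y @ X):
    assert np.all(P % 2 == 0)
print("relations hold")
```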
Apparently, the neat proof just explained is due to Heller and Ringel. It
is kind of bad news that even V4 is hard to understand. Here is one more
theorem to give you the general picture for arbitrary finite groups:
Theorem 2.40 (Bondarenko and Drozd, 1977). Let G be a finite group and
suppose that char k = p.
(1) kG has finite representation type if and only if G has cyclic Sylow
p-subgroups.
(2) kG has tame representation type if and only if p = 2 and the Sylow
2-subgroups are dihedral, semidihedral or generalized quaternion.
(3) Else kG has wild representation type.
For example, even the groups Cp × Cp for primes p > 2 have wild representation
type in characteristic p. By the way, it is worth pointing out
that the answer only depends on the Sylow subgroups, which is part of a
general philosophy of representations of finite groups: questions about
representations in characteristic p can often be answered “locally” in terms of
the Sylow p-subgroups.
By the way, this all should go to convince you that asking for the classification of indecomposables is often the WRONG question – studying path
algebras of quivers is quite misleading because in that special case it is the
RIGHT question!
To end the section, I want to say a few words about infinite dimensional
representations. Up to now we have only ever studied finite dimensional
representations, i.e. A − mod not A − M od, and been concerned with classifying finite dimensional indecomposables. What about infinite dimensional
modules? This is quite a hard business. Let us consider for an example the
case of the Kronecker quiver. Let A = kQ be its path algebra. Then it is
known for instance that
• There are non-zero modules that have no indecomposable direct
summand;
• There are indecomposables L, M, M′ with L ⊕ M ≅ L ⊕ M′ but
M ≇ M′.
It is still worth studying – for example it has applications in functional
analysis. For an example of the relevance, let V be the Hilbert space of
L²-functions on (0, 1). Let T be the operator (Tf)(x) = x² df/dx. This is
defined on a dense subspace U of V. Thus T defines an infinite dimensional
representation X of the Kronecker quiver, namely T, 1 : U → V. The
basic question is to find the eigenvalues of T. This is equivalent to finding
homomorphisms from Rλ to X, where Rλ is the representation λ, 1 : k → k.
Now you can make progress in understanding eigenvalues by using techniques
from finite dimensional algebra theory, e.g. the standard resolution.
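For instance, solving x² f′ = λf formally gives f(x) = C e^{−λ/x}; whether this formal eigenfunction actually lies in L²(0, 1) and in the domain U is the analytic part of the problem. A quick symbolic check of the formal computation (the ansatz and names are mine):

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
f = sp.exp(-lam / x)    # candidate eigenfunction (hypothetical ansatz)

# (T f)(x) = x^2 f'(x); check that T f = lam * f symbolically
assert sp.simplify(x**2 * sp.diff(f, x) - lam * f) == 0
print("x^2 f' = lam f holds formally")
```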
Let me end this discussion by stating two theorems.
Theorem 2.41 (Auslander, 1974). Suppose A is a finite dimensional algebra
with finite representation type. Then every indecomposable A-module is
finite dimensional and any module is a direct sum of indecomposables.
Theorem 2.42 (Rojter, 1968). Suppose A is a finite dimensional algebra
that is not of finite representation type. Then there are indecomposable
A-modules with an arbitrarily large number of composition factors.
2.10. The Weyl group. Remember the notation from §2.7: Γ is a graph
with vertices 1, . . . , n and ni,j = nj,i edges from i to j. Usually we get Γ
from a quiver Q by forgetting the orientation on the edges. To keep life
as easy as possible, I am going to assume FROM NOW ON that Γ has no
loops, i.e. ni,i = 0 for all i. We’re almost always going to be thinking about
the case when Γ is Dynkin or Euclidean, so this assumption is just removing
Ã0 from consideration – but that is an easy case anyway.
Given such a graph and α ∈ Zⁿ, we have the associated quadratic form
q(α) = Σ_{i=1}^{n} α_i² − Σ_{i≤j} n_{i,j} α_i α_j,
and (·, ·) is the corresponding symmetric bilinear form. Note that
(ε_i, ε_j) = −n_{i,j}
for i ≠ j and (ε_i, ε_i) = 2 (because we are assuming n_{i,i} = 0). I will start
writing R for Zⁿ and I will call it the root lattice, since it is
the lattice generated by the simple roots ε_1, . . . , ε_n, which form a basis.
Let W be the Weyl group, that is, the subgroup of Isom(R) generated
by the simple reflections s_1, . . . , s_n defined by
s_i(α) = α − (2(α, ε_i)/(ε_i, ε_i)) ε_i = α − (α, ε_i) ε_i.
Thus, s_i is the reflection in the hyperplane orthogonal to ε_i.
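In coordinates this is easy to compute with. A minimal sketch (the function names are mine), using the matrix of (·, ·) in the basis ε_1, . . . , ε_n:

```python
import numpy as np

def bilinear_matrix(n_edges):
    """Symmetric form of the graph: (eps_i, eps_i) = 2 and
    (eps_i, eps_j) = -n_{ij} for i != j.  n_edges is the symmetric
    matrix of edge multiplicities (zero diagonal, no loops)."""
    return 2 * np.eye(len(n_edges), dtype=int) - np.asarray(n_edges)

def s(i, alpha, B):
    """Simple reflection s_i(alpha) = alpha - (alpha, eps_i) eps_i."""
    alpha = np.array(alpha, dtype=int)
    e = np.zeros(len(alpha), dtype=int); e[i] = 1
    return alpha - (alpha @ B @ e) * e

# Example: type A2 (one edge joining the two vertices)
B = bilinear_matrix([[0, 1], [1, 0]])
assert list(s(0, [1, 0], B)) == [-1, 0]   # s_1(eps_1) = -eps_1
assert list(s(0, [0, 1], B)) == [1, 1]    # s_1(eps_2) = eps_1 + eps_2
```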
The goal in this section is to understand the structure of W (at least
a little bit) in the Dynkin and Euclidean cases. So assume Γ is Dynkin or
Euclidean. Remember in these cases we have defined the set ∆ of roots. The
real roots are the elements of R − {0} with q(α) = 1, the imaginary ones are
the elements with q(α) = 0; each root is either > 0 or < 0. Obviously, the
group W permutes the set ∆ of roots. Since ∆ contains the simple roots,
which span the ambient space, the action of W on ∆ is faithful, so
W ↪ Sym(∆).
In particular, in the Dynkin case, this shows that W is a finite group since
we know ∆ is a finite set.
We define the height ht(α) of α ∈ R to be Σ_{i=1}^{n} α_i.
Lemma 2.43. Suppose that Γ is Dynkin or Euclidean. Then every real root
is conjugate under W to a simple root.
Proof. Let α be a real root. Replacing α by −α if necessary, we may assume
α > 0. If ht(α) = 1 then α is a simple root and we’re done. Otherwise, since
(α, α) = 2 = Σ_i α_i (α, ε_i) and all α_i ≥ 0, we can pick i so that (α, ε_i) > 0.
Then s_i(α) = α − (α, ε_i)ε_i is another positive root of strictly smaller height
(it is still positive because roots are either > 0 or < 0, and s_i(α) agrees
with α in every coordinate j ≠ i, and these are not all zero since α is real
with ht(α) > 1). Proceeding by induction on height completes the proof. □
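The proof is effective: it gives an algorithm conjugating a given positive real root down to a simple root. A sketch under the same hypotheses (the helper names are mine):

```python
import numpy as np

def descend_to_simple(alpha, B):
    """Run the proof's algorithm: while ht(alpha) > 1, pick i with
    (alpha, eps_i) > 0 and replace alpha by s_i(alpha).  Returns the
    simple root reached and the list of reflections applied."""
    alpha = np.array(alpha, dtype=int)
    word = []
    while alpha.sum() > 1:
        pairings = B @ alpha                   # i-th entry is (alpha, eps_i)
        i = int(np.argmax(pairings))           # the proof guarantees a positive one
        alpha = alpha - pairings[i] * np.eye(len(alpha), dtype=int)[i]
        word.append(i)
    return alpha, word

# A3 with its highest root eps_1 + eps_2 + eps_3:
B = np.array([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
root, word = descend_to_simple([1, 1, 1], B)
assert root.sum() == 1 and root.min() >= 0    # landed on a simple root
```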
For any real root α, we also have the reflection s_α : β ↦ β − (α, β)α
in the hyperplane orthogonal to α. Note that w s_α w⁻¹ = s_{w(α)}. So by the
lemma, each s_α is conjugate in W to some s_i; in particular all the s_α belong
to W, and conversely any element of W that is conjugate to some s_i is of the
form s_α for a real root α.
Exercise 5. Suppose Γ is Dynkin or Euclidean and Γ 6= Ã0 , Ã1 . Show that
all the simple roots are conjugate under W , hence by the lemma the real
roots form a single W -orbit. How many orbits of real roots are there in type
Ã1 ?
Example 2.44. Suppose Γ is of type A_n. We can realize the lattice R explicitly
by starting from a Euclidean space with orthonormal basis v_1, . . . , v_{n+1}
and looking at the Z-module generated by the vectors ε_i = v_i − v_{i+1} for
i = 1, . . . , n. The reflection s_i is induced by the permutation v_i ↔ v_{i+1}.
Hence, W in this case is precisely the symmetric group S_{n+1}. So by the
lemma, the set of all roots is {v_i − v_j | 1 ≤ i ≠ j ≤ n + 1}. The positive
roots are the v_i − v_j for 1 ≤ i < j ≤ n + 1. We’ve just enumerated all
the indecomposable representations of the algebra of lower triangular n × n
matrices!
Exercise 6. Check the definitions to convince yourself that: for Γ of type
D_n, the lattice R can be realized as the lattice inside a Euclidean space
with orthonormal basis v_1, . . . , v_n generated as the Z-span of the vectors
ε_1 = v_1 − v_2, ε_2 = v_2 − v_3, . . . , ε_{n−1} = v_{n−1} − v_n, ε_n = v_{n−1} + v_n. Compute
the action of the generators s_1, . . . , s_n of W on R, and hence prove that
∆⁺ = {v_i ± v_j | 1 ≤ i < j ≤ n}.
The point now is that you can analyse the finite Weyl groups W in a
geometric way. First you introduce the hyperplanes H_α = α^⊥ for each
α ∈ ∆. These cut the space R into various “chambers” (it is best to extend
scalars to ℝ, then you really are thinking geometrically!). The fundamental
chamber is the set
C = {α ∈ R | (α, ε_i) ≥ 0 for all i = 1, . . . , n}.
Then you show that each orbit Wα for α ∈ R intersects the fundamental
chamber C in exactly one point. In this way, you get a 1–1 correspondence
w ↦ wC between the elements of W and the chambers, which allows you
for instance to count the number of elements of W by counting chambers.
Let me just summarize some facts that you prove in this way (we already
knew these things for A_n, D_n by the above exercises...):
Type   |∆⁺|         order of W        W
A_n    n(n+1)/2     (n+1)!            S_{n+1}
D_n    n² − n       2^{n−1} n!        (Z/2Z)^{n−1}.S_n
E_6    36           2⁷ 3⁴ 5
E_7    63           2¹⁰ 3⁴ 5·7
E_8    120          2¹⁴ 3⁵ 5² 7
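These root counts can be double-checked by machine: since in Dynkin type every root is W-conjugate to a simple root (Lemma 2.43 together with Exercise 5), closing the simple roots under the simple reflections produces all of ∆, and |∆| = 2|∆⁺|. A sketch (the encoding and names are mine):

```python
import numpy as np

def all_roots(B):
    """Close the simple roots under the simple reflections s_i.
    In Dynkin type this produces the whole root system Delta."""
    n = len(B)
    roots = {tuple(row) for row in np.eye(n, dtype=int)}
    frontier = set(roots)
    while frontier:
        new = set()
        for a in frontier:
            a = np.array(a)
            for i in range(n):
                b = a.copy(); b[i] -= (B @ a)[i]   # s_i(a)
                t = tuple(b)
                if t not in roots:
                    roots.add(t); new.add(t)
        frontier = new
    return roots

def form(edges, n):
    """Matrix of the bilinear form from a list of edges."""
    B = 2 * np.eye(n, dtype=int)
    for i, j in edges:
        B[i, j] -= 1; B[j, i] -= 1
    return B

# |Delta| = 2 |Delta+|:  A3 -> 12,  D4 -> 24,  E6 -> 72
assert len(all_roots(form([(0, 1), (1, 2)], 3))) == 12
assert len(all_roots(form([(0, 1), (1, 2), (1, 3)], 4))) == 24
assert len(all_roots(form([(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)], 6))) == 72
```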
I will omit the details of the proofs of these facts about chambers for now,
since we’ll have to do something a little bit more general later on.
For the remainder of the section, assume that Γ is Euclidean. Let me
now modify our standing conventions a little bit to make dealing with the
Euclidean case neater. I will start to label the vertices of Γ by 0, 1, . . . , n,
so that the vertex 0 has label δ_0 = 1 on the Euclidean diagram. Thus,
removing the vertex 0 from Γ gives the associated Dynkin diagram, which I
will denote by Γ°. We have the root lattice
R = Zε_0 ⊕ · · · ⊕ Zε_n
for Γ and
R° = Zε_1 ⊕ · · · ⊕ Zε_n
for Γ°. The bilinear form on R° defined by Γ° is just the restriction of the
form on R defined by Γ. The corresponding Weyl groups are denoted W ⊂
Isom(R) and W° ⊂ Isom(R°). We will usually identify W° with the subgroup
of W generated by s_1, . . . , s_n, so W = ⟨W°, s_0⟩. The Weyl group W fixes the
vector δ.
We know from our earlier results that the imaginary roots in ∆ are the
roots {rδ | 0 ≠ r ∈ Z} and that the real roots are
{α + rδ | α ∈ ∆°, r ∈ Z},
where ∆° is the set of roots of Γ°. The element θ := δ − ε_0 = Σ_{i=1}^{n} δ_i ε_i
is going to be important.
Lemma 2.45. θ is the unique highest root in ∆°.
Proof. Note that (θ, ε_i) = (−ε_0, ε_i) ≥ 0 for all i = 1, . . . , n, since (δ, ε_j) = 0
for all j. Hence, θ belongs to the fundamental chamber
C° = {α ∈ R° | (α, ε_i) ≥ 0 for all i = 1, . . . , n}.
Now let β be a root in ∆° of maximal height. We must have that (β, ε_i) ≥ 0
for all i = 1, . . . , n, else s_i(β) = β − (β, ε_i)ε_i is a root of strictly greater
height. Hence β also belongs to C°. Now I stated the theorem above that
every W°-orbit meets C° in exactly one point. Since θ and β are roots and
all roots in ∆° are W°-conjugate, they lie in the same W°-orbit. Hence, β = θ. □
Now we are ready to explain the structure of W in the Euclidean case.
For α ∈ R°, let t_α : R → R be the translation t_α(β) = β − (β, α)δ. Note that
t_α t_β = t_{α+β},   w t_α w⁻¹ = t_{w(α)}
for α, β ∈ R° and w ∈ W°. So T = {t_α | α ∈ R°} is a subgroup of Isom(R)
isomorphic to the free abelian group R°.
Theorem 2.46. T is a normal subgroup of W, and W is the semidirect product
of the translation group T by the finite Weyl group W°.
Proof. Check from the definitions that
t_θ = s_0 s_θ.
Using this you show: (1) t_θ belongs to W; (2) t_α belongs to W for all α ∈ ∆°
(since all such α are W°-conjugate to θ); (3) t_α belongs to W for all α ∈ R°
(since the roots in ∆° generate the lattice R°); (4) T is normalized by W°;
(5) W is generated by T and W°; (6) T ∩ W° = {1} (since W° is finite and T
is a free abelian group). This completes the proof. □
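The key identity t_θ = s_0 s_θ is easy to test numerically. Here is a sketch for Ã₁, the diagram with two vertices joined by two edges, so that δ = ε_0 + ε_1 and θ = ε_1 (the encoding and names are mine):

```python
import numpy as np

# A~1: two vertices joined by n_{01} = 2 edges
B = np.array([[2, -2], [-2, 2]])
delta = np.array([1, 1])     # B @ delta == 0: delta spans the radical
theta = np.array([0, 1])     # theta = delta - eps_0
eps0 = np.array([1, 0])

def refl(alpha, beta):
    """Reflection s_alpha(beta) = beta - (beta, alpha) alpha  (q(alpha) = 1)."""
    return beta - (beta @ B @ alpha) * alpha

def t(alpha, beta):
    """Translation t_alpha(beta) = beta - (beta, alpha) delta."""
    return beta - (beta @ B @ alpha) * delta

# check t_theta = s_0 s_theta on a few vectors
for beta in [np.array(v) for v in [(1, 0), (0, 1), (3, -2)]]:
    assert np.array_equal(t(theta, beta), refl(eps0, refl(theta, beta)))
print("t_theta == s_0 s_theta on test vectors")
```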
2.11. Reflection functors. The last topic in this chapter: I want to explain
a different proof that if Q is a Dynkin quiver then it has finite representation
type. This proof was discovered by Bernstein, Gelfand and Ponomarev, and
is nice because it brings the Weyl group into play at the level of
representation theory. In this section I will break from the usual convention and
assume just that k is ANY field.
assume just that k is ANY field.
Let Q be a quiver. A vertex i of Q is called a source (resp. sink) if all the
arrows incident with i point outwards (resp. inwards). Let si Q denote the
quiver obtained from Q by reversing the direction of every arrow incident
with vertex i.
Now suppose that i is a sink in Q. We are going to define the reflection
functors
Si+ : kQ − mod → k(si Q) − mod
and
Si− : k(si Q) − mod → kQ − mod
as follows. For Si+ , suppose V is a representation of Q. Let Si+ V be the
vector space with (Si+ V )j = Vj for j 6= i and (Si+ V )i be the kernel of the
sum of the maps going towards V_i:
0 → (S_i^+ V)_i → ⊕_{ρ ∈ Q_1 : t(ρ)=i} V_{s(ρ)} → V_i,
where the first map is called η and the second, given by the maps labelling
the arrows into i, is called φ.
The maps in the representation Si+ V of si Q are the same as the maps
labelling the arrows in V except for an arrow ρ that has had its direction
switched. For such an arrow, the new map is the composite of the map η
above with the projection of the direct sum onto Vs(ρ) . There is an obvious
way to define the functor Si+ on morphisms.
The definition of the functor Si− is dual to this. Given a representation
W of si Q, let (Si− W )j = Wj for j 6= i and let (Si− W )i be the cokernel of
the sum of the maps going away from W_i. Thus:
W_i → ⊕_{ρ ∈ Q_1 : t(ρ)=i} W_{s(ρ)} → (S_i^− W)_i → 0,
where the first map is called ψ and the second ξ.
On maps, Si− only changes the maps on arrows ρ incident with i: for such
an arrow ρ, it gets replaced by the restriction of the map ξ above to the
summand Ws(ρ) . Again there is an obvious thing to do on morphisms.
Now you check that if the map φ above is surjective and W = S_i^+(V),
then the map ψ above for W is injective and S_i^−(S_i^+(V)) ≅ V. The point is
that in that case you have an exact sequence
0 → W_i → ⊕_{ρ ∈ Q_1 : t(ρ)=i} V_{s(ρ)} → V_i → 0,
in which the first map is ψ and the second is φ.
Similarly, if the map ψ above for W is injective and V = S_i^−(W), then the
map φ above for V is surjective and S_i^+(S_i^−(W)) ≅ W. Thus S_i^+, S_i^− define
mutually inverse equivalences between the subcategory of kQ-mod of
representations for which φ is surjective and the subcategory of k(s_iQ)-mod of
representations for which ψ is injective.
Finally observe that every representation of Q breaks up as a direct sum
of the cokernel of φ concentrated at i and a representation for which φ is
surjective. Similarly, every representation of si Q is a direct sum of the kernel
of ψ and a representation for which ψ is injective. We have proved:
Lemma 2.47. The functors Si+ and Si− establish a bijection between the
isomorphism classes of indecomposable representations of Q and the indecomposable representations of si Q, with the exception of the simple modules
Li in each case, which are killed by these functors.
By the exactness of the sequence above, it is easy to see that if V is a
representation of kQ such that φ is surjective then
dim S_i^+ V = dim V − (2 dim V_i − Σ_{t(ρ)=i} dim V_{s(ρ)}) ε_i
          = dim V − (dim V, ε_i) ε_i = s_i(dim V).
Similarly, if W is a representation of k(s_iQ) such that ψ is injective then
dim S_i^− W = s_i(dim W). Thus the effect of the functors S_i^± on dimension
vectors is just like that of the reflection s_i, with the exception that they
kill L_i whereas s_i sends ε_i to −ε_i.
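The effect of S_i^+ on dimension vectors can be computed directly from the definition: at the sink i the new dimension is dim ker φ, and when φ is surjective this equals the i-th coordinate of s_i(dim V). A minimal sketch over a small example (the data layout and names are mine):

```python
import numpy as np

def dim_Si_plus(i, dims, arrows, maps):
    """Dimension vector of S_i^+ V at a sink i: replace dim V_i by
    dim ker(phi), where phi is the sum of the maps into V_i; the
    other vertices are unchanged."""
    into = [maps[k] for k, (s, t) in enumerate(arrows) if t == i]
    phi = np.hstack(into)                  # map from the direct sum to V_i
    ker = phi.shape[1] - np.linalg.matrix_rank(phi)
    new = list(dims); new[i] = ker
    return new

# A3 quiver 0 -> 1 <- 2 with sink at vertex 1; V has dimension
# vector (1, 1, 1) and both arrow maps the identity on k:
dims = [1, 1, 1]
arrows = [(0, 1), (2, 1)]
maps = [np.array([[1.]]), np.array([[1.]])]

# phi = [1 1] : k^2 -> k is surjective with 1-dimensional kernel,
# matching s_1(1,1,1) = (1,1,1) since (dim V, eps_1) = 0 here
print(dim_Si_plus(1, dims, arrows, maps))  # -> [1, 1, 1]
```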
Now suppose that 1, . . . , n is an admissible ordering of the vertices, that
is, that vertex i is a sink for the quiver si+1 . . . sn Q, for each i = 1, . . . , n.
For example, if you take for Q the quiver D̃4 with all arrows pointing in,
the admissible numberings must have the middle vertex as number 5. There
exists an admissible ordering for the vertices of Q if and only if Q has no
oriented cycles.
Then we define the Coxeter functor with respect to this ordering to be
the functor
C + = S1+ . . . Sn+ : kQ − mod → kQ − mod
(noting that each arrow gets reversed twice so s1 . . . sn Q = Q). We also
have
C − = Sn− . . . S1− : kQ − mod → kQ − mod.
Finally, let c = s1 . . . sn ∈ W , a Coxeter element of the Weyl group. Our
discussion shows:
Lemma 2.48. Given any indecomposable kQ-module V, either C⁻C⁺(V) ≅ V
and the effect of C⁺ on the dimension vector of V is the same as the effect
of the Coxeter element c = s_1 · · · s_n ∈ W, or else C⁺(V) = 0.
Finally we need a lemma about the Coxeter element c = s1 . . . sn ∈ W :
Lemma 2.49. Suppose that Q is a Dynkin quiver. Let c = s_1 · · · s_n ∈ W.
Then c has no non-zero fixed points in R. Moreover, given any α ∈ R there
exists some m ≥ 0 such that c^m(α) is not > 0.
Proof. Suppose α ∈ R is a non-zero fixed point of c. Then
s_1(α) = s_2 · · · s_n(α).
Remembering that the action of s_i on α only changes its i-th coordinate, this
means that the ε_1-coordinate of s_1(α) is the same as in s_2 · · · s_n(α), hence
the same as in α. Hence, s_1(α) = α and s_2 · · · s_n(α) = α. Repeating the
argument one gets that s_2(α) = α, s_3(α) = α, and so on. Hence, α is fixed by
s_1, . . . , s_n, hence by all of W. But the action of W on R is faithful, so this
implies that α = 0.
Now suppose that α ∈ R is a vector for which c^m(α) > 0 for all m ≥ 0.
Since W is a finite group, c has finite order, h say (in fact h is the Coxeter
number of W and is known to equal the sum of the δ_i in the corresponding
Euclidean diagram). But then (1 + c + c² + · · · + c^{h−1})α is a non-zero vector
(being a sum of positive vectors) fixed by c, contradicting the previous
paragraph. □
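Both assertions of the lemma are easy to check by machine in a given type. Here is a sketch for A₃ (the matrix encoding of s_i and the names are mine):

```python
import numpy as np

B = np.array([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])   # type A3

def S(i):
    """Matrix of s_i: alpha -> alpha - (B alpha)_i eps_i."""
    M = np.eye(3, dtype=int)
    M[i, :] -= B[i, :]
    return M

c = S(0) @ S(1) @ S(2)       # Coxeter element c = s_1 s_2 s_3

# no non-zero fixed points: 1 is not an eigenvalue of c
assert round(np.linalg.det(c - np.eye(3))) != 0

# every positive vector is eventually sent out of the positive cone
alpha = np.array([1, 1, 1])  # the highest root of A3
m, v = 0, alpha
while np.all(v >= 0) and not np.all(v == 0):
    v = c @ v; m += 1
print("c^%d(alpha) = %s is not > 0" % (m, v))
```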
Now we can reprove:
Theorem 2.50. Let Q be a Dynkin quiver and k any field. Then the map
V ↦ dim V is a 1–1 correspondence between the finite dimensional
indecomposable representations of Q and the positive roots.
Proof. Choose an admissible ordering for the vertices of Q. Let C⁺ be the
corresponding Coxeter functor and c = s_1 · · · s_n the Coxeter element. Suppose
V is indecomposable with dimension vector α. By Lemma 2.49 there exists m
such that c^m(α) is not > 0. For this m we must have that (C⁺)^m V = 0, since
C⁺ either sends an indecomposable to an indecomposable (with positive
dimension vector) or to zero. Choose the smallest m such that (C⁺)^m V = 0.
Then for some i,
S_{i+1}^+ · · · S_n^+ (C⁺)^{m−1} V ≠ 0,   S_i^+ · · · S_n^+ (C⁺)^{m−1} V = 0.
But that means that S_i^+ kills S_{i+1}^+ · · · S_n^+ (C⁺)^{m−1} V, so it must equal L_i,
and then
V ≅ (C⁻)^{m−1} S_n^− · · · S_{i+1}^−(L_i).
This shows that the dimension vector of V is W-conjugate to ε_i, so it is a
positive root. The same argument also shows that any indecomposable with
the same dimension vector as V is isomorphic to V.
Conversely, suppose α ∈ ∆⁺. For some m, c^m(α) is not > 0. Choose the
shortest expression of the form
s_i · · · s_n (s_1 · · · s_n)^{m−1}(α)
which is not a positive root. Then s_{i+1} · · · s_n (s_1 · · · s_n)^{m−1}(α) is a positive
root sent to a negative root by s_i, hence it must equal ε_i (as s_i permutes
the positive roots other than ε_i). Now the representation
(C⁻)^{m−1} S_n^− · · · S_{i+1}^−(L_i) is indecomposable of dimension vector α. □
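For type A_n this correspondence is completely explicit: the positive roots are the "interval" vectors ε_i + ε_{i+1} + · · · + ε_j, so a quiver of type A_n has exactly n(n + 1)/2 indecomposables, matching Example 2.44. A sketch (the names are mine):

```python
import numpy as np

def positive_roots_An(n):
    """Positive roots of A_n: the interval vectors eps_i + ... + eps_j.
    By Theorem 2.50 these are exactly the dimension vectors of the
    indecomposable representations of any quiver of type A_n."""
    roots = []
    for i in range(n):
        for j in range(i, n):
            v = np.zeros(n, dtype=int); v[i:j + 1] = 1
            roots.append(v)
    return roots

def q(alpha, B):
    """Quadratic form q(alpha) = (alpha, alpha)/2."""
    return (alpha @ B @ alpha) // 2

n = 4
B = 2 * np.eye(n, dtype=int) \
    - (np.eye(n, k=1, dtype=int) + np.eye(n, k=-1, dtype=int))
roots = positive_roots_An(n)
assert len(roots) == n * (n + 1) // 2   # 10 indecomposables for A_4
assert all(q(v, B) == 1 for v in roots) # each interval vector is a real root
```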