
COMPUTATIONAL PROBLEMS IN GARSIDE GROUPS
A Thesis
Presented to the
Faculty of
San Diego State University
In Partial Fulfillment
of the Requirements for the Degree
Master of Arts
in
Mathematics
by
Jeffrey David Burt
Spring 2015
Copyright © 2015
by
Jeffrey David Burt
DEDICATION
Dedicated to my beautiful wife, without whom I would be lost.
Victory at all costs, victory in spite of all terror, victory however long and hard the
road may be.
– Winston Churchill
ABSTRACT OF THE THESIS
Computational Problems in Garside Groups
by
Jeffrey David Burt
Master of Arts in Mathematics
San Diego State University, 2015
Garside groups were first introduced by Patrick Dehornoy and Luis Paris in 1999.
They are a generalization of braid groups and were named after F.A. Garside to
recognize his groundbreaking work on solving the word and conjugacy problems in braid
groups. Since their inception, different authors have sought better solutions to these same
two problems. While several solutions have been put forward proving that the problems are
theoretically solvable, none proposed so far can solve them efficiently in practice.
Aside from the intrinsic value of solving these problems to better understand the group, there
have been several proposed braid based cryptosystems. A practical solution for these
problems in Garside groups would also apply to braid groups and would show a viable
deterministic attack on these cryptosystems. While there is little to no expectation of a secure
braid based cryptosystem, it is still an open question and worth investigating. In this thesis we
examine an efficient solution to the word problem in Garside groups, and we survey the
progress made thus far toward solving the conjugacy decision problem and the conjugacy
search problem.
TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
ACKNOWLEDGMENTS
CHAPTER
1 INTRODUCTION
   1.1 Garside Groups
   1.2 Decision Problems
   1.3 Public Key Cryptography
      1.3.1 Key Exchange Protocols
      1.3.2 Diffie-Hellman Key Exchange Protocol for Bn
   1.4 Attacking Conjugacy Based Cryptosystems
2 FINDING A NORMAL FORM AND THE WORD PROBLEM
   2.1 Introduction
   2.2 Normal Forms and Left Weighting
   2.3 Establishing the Normal Form Algorithm
3 SOLUTIONS TO THE CONJUGACY PROBLEM AND BOUNDS ON TIME
   3.1 Introduction
   3.2 The Summit Set
   3.3 The Super Summit Set
   3.4 The Ultra Summit Set
   3.5 Black and Grey Components of the Ultra Summit Set
      3.5.1 Special Cyclings and Decyclings
      3.5.2 Black and Grey Components of the Ultra Summit Set
      3.5.3 The Intersection of the Black and Grey Components as a Solution to the Conjugacy Problem
   3.6 The Set of Sliding Circuits
4 CONCLUDING REMARKS
   4.1 Introduction
   4.2 Other Topics to Study
   4.3 Final Thoughts
BIBLIOGRAPHY
LIST OF FIGURES

1.1 Equivalent braids
1.2 Braid crossings as generators
1.3 ∆, the half twist, in B4 with the classical Garside structure
1.4 The lattice structure of B4
1.5 A generator in the BKL presentation of B4 , namely a1,4
1.6 ∆ in B4 with the dual Garside structure
1.7 Lattice diagram for Example 1.6
2.1 σ1 σ2 σ2 σ1 σ2
2.2 σ1 σ2 σ1 σ2 σ1
3.1 The graph of ΓA
3.2 The graph of ΓB
3.3 The graph of ΓC
ACKNOWLEDGMENTS
I would like to thank Dr. Imre Tuba for many years of counsel, help, and
friendship; without him this thesis would not have been possible. Not only was he
encouraging and patient with this endeavor, but he continually and persistently offered
whatever assistance and advice was needed from thousands of miles away. I would like to
thank Dr. Vadim Ponomarenko for his continued advice, direction, and support in the
formation of this thesis. I would like to thank the other members of my committee, Dr.
Carmelo Interlando and Dr. William Root for their help and support. Finally I would like to
thank my family (especially my wife) for putting up with me and supporting me while I worked
on this thesis. It is no exaggeration that this would never have happened on my own, and I
owe great thanks to all of the above mentioned.
CHAPTER 1
INTRODUCTION
1.1 GARSIDE GROUPS
Garside Groups were first introduced by Dehornoy and Paris in [10]. They were
named after F.A. Garside for his work on the conjugacy problem (discussed later) in braid
groups [14]. There are several equivalent definitions of Garside groups, but the one we will
predominantly use is found in [3]. The other most common definition of a Garside group
involves defining a Garside monoid and then taking the Garside group to be its group of fractions;
it is given in [15, 20, 17].
Definition 1.1. A group G is called a Garside group if it satisfies the following:
1. G admits a lattice order (G, ⪯, ∨, ∧), which is invariant under left multiplication.
This means that there is a partial order ⪯ on the elements in G such that for a, b ∈ G if
a ⪯ b then ca ⪯ cb for all c ∈ G. The lattice order implies that for every m, n ∈ G there is
a unique lcm m ∨ n and a unique gcd m ∧ n with respect to ⪯. Notice that
P = {p ∈ G ∣ 1 ⪯ p} defines a submonoid of G. We call this subset the positive cone of
G. Clearly P ∩ P −1 = {1}, because of the invariance under left multiplication. We also have
that a ⪯ b ⇐⇒ a−1 b ∈ P . This implies that the submonoid P determines the partial
order ⪯ and we can talk about the lattice (G, P ). Note that if a, b ∈ P then a ⪯ b if and
only if a is a prefix of b, that is, there is some c ∈ P such that ac = b. For this reason ⪯ is
often referred to as the prefix order.
2. The monoid P is atomic.
This means that for every x ∈ P there exists an upper bound on the length of a strict
chain 1 ⪯ x1 ⪯ . . . ⪯ xr = x. We define the atoms of G as the elements a ∈ P which
cannot be decomposed in P , i.e. for a ∈ P there are no nontrivial b, c ∈ P such that
a = bc. We can now reword the above to say that there is an upper bound on the number
of atoms in a product x = a1 a2 . . . an , where ai ∈ P ∖ {1}. Hence if P is atomic, we can
define the magnitude of an element to be the maximal length of this type of chain as
follows:
∥x∥ = max{n ∣ x = a1 a2 . . . an }, where ai ∈ P ∖ {1}.
2
It is also a fact that the atoms generate G and that there are only finitely many atoms.
We will write the set of atoms as
A = {a1 , . . . , aλ }.
3. There exists an element ∆ ∈ P , called the Garside element that satisfies the following:
(a) The interval [1, ∆] = {d ∈ G ∣ 1 ⪯ d ⪯ ∆} generates G. The elements of this
interval are called the simple elements of G. We shall always assume that G has
finite type, meaning that [1, ∆] is finite.
(b) P is invariant under conjugation by ∆, that is ∆−1 P ∆ = P .
It is of note that since conjugation by ∆ must preserve P , all atoms must be both
left and right divisors of ∆ (this is sometimes a requirement for Garside groups,
but since it follows from other definitions it is not a requirement here, and it is
proved in Corollary 2.2).
The above data defines a Garside structure on G, stated formally as follows: let
G be a countable group with P a submonoid of G, and ∆ ∈ P . The triple (G, P, ∆) is called a
finite type Garside structure on G if (G, P ) is a lattice, ∆ is a Garside element (which
implies that G has a finite number of simple elements), and P is atomic. It is important to note
that a group G might have more than one element satisfying the requirements for ∆, thus G
may have more than one Garside structure.
It is also important to note that there is a suffix order, denoted with “ ⪰ ”, where a ⪰ b
if and only if there is some c ∈ P such that ca = b, or equivalently if ba−1 ∈ P . We can also say
that a is a suffix of b. It is very important to note that a ⪰ b does not mean that b ⪯ a. For
an example, see Figure 1.4 and the following paragraph.
We will now look at several examples of different Garside groups.
Example 1.1. The Braid Group
The braid group on n strands, Bn , was introduced by Emil Artin in his foundational
paper on the subject [1]. It is a good example for Garside groups because it is intuitive to
understand, and because the study of braid groups motivated the work in Garside groups. The
familiarity of this group is suggested by its name: anyone who has ever braided
different strings has a visual understanding of the group elements and the group
operation.
A geometric representation of the braid group, Bn , is to have n strands cross over or
under each other as they move from left to right, without ever moving backwards. The
different relations in Bn allow the crossings to be transformed into equivalent positions, as
seen in Figure 1.1. All elements in a braid group can be constructed by concatenating the
generators given in Figure 1.2.
Figure 1.1. Equivalent braids
Figure 1.2. Braid crossings as generators: σ1 and σ1−1
We will look at both the classical Garside structure and the dual Garside structure on
Bn .
Example 1.2. Braid Groups: classical Garside structure
The first presentation we will look at is the classical presentation that was given by
Artin, hence it is sometimes referred to as the Artin presentation.
Bn = ⟨σ1 , . . . , σn−1 ∣ σi σj = σj σi if ∣i − j∣ > 1; σi σj σi = σj σi σj if ∣i − j∣ = 1⟩.
With this presentation we have the Garside structure (Bn , Bn+ , ∆), where Bn+ is the
monoid of positive braids, or braids that can be represented without any negative powers of
these generators ({σ1 , . . . , σn−1 }), and
∆ = (σ1 )(σ2 σ1 )(σ3 σ2 σ1 ) . . . (σn−1 . . . σ1 )
This ∆ is called the fundamental braid and is often referred to as the half twist. It has
received this name because if you had a group of strings lying in front of you, and you took
the bottom end of all the strings and rotated it 180 degrees (half a twist), you would have ∆ in
this Garside structure. See Figure 1.3.
Figure 1.3. ∆, the half twist, in B4 with the classical Garside structure
The simple elements are the braids in Bn+ where any pair of strands cross at
most once. The atoms are braids where there is exactly one crossing of two strands. Figure
1.4 illustrates the lattice structure of one particular Garside group, namely B4 . Here each
element is connected with a line that represents right multiplication. A solid line corresponds
to multiplication by σ1 , a dashed line to σ2 , and a dotted line to σ3 .
From Figure 1.4 we can see that a ⪰ b does not necessarily imply a ⪯ b. For
example, σ2 ⪰ σ3 σ2 but σ2 ⪯̸ σ3 σ2 .
Example 1.3. Braid Groups: dual Garside structure
In [2] Birman, Ko, and Lee gave a new presentation for Artin’s braid group, which is
now referred to as the BKL presentation. It has the Garside structure (Bn , Bn+ , ∆). It is worth
pointing out that this is the same group, but with different generators, a different positive monoid, and a different ∆. This
illustrates how one Garside group can have multiple Garside structures. In this case the BKL
presentation is referred to as the dual Garside structure on Bn .
The generators are called the band generators. They can be expressed as generators in
the Artin presentation, as one would expect, seeing how they both generate the same group. In
the Artin presentation the generators are the braids where exactly one pair of adjacent strands
cross. In the BKL presentation the generators are also the braids where exactly one pair of
strands cross, but without the restriction that the strands must be adjacent.
Geometrically, a generator, at,s , in the BKL presentation is the crossing of the tth and
the sth strands while all other strands remain unchanged with the crossing strands passing in
front of all intermediary strands, as in Figure 1.5.
Figure 1.4. The lattice structure of B4
Figure 1.5. A generator in the BKL presentation of B4 , namely a1,4 .
Artin defined a map φ that takes Bn into Σn , the symmetric group. It plays a minor
role later in this work, but it also serves to illustrate the difference between generators of the
classical and dual Garside structures on Bn . While the images of the generators of the Artin
presentation under this map are transpositions of the form (k, k + 1) for 1 ≤ k ≤ n − 1, the
images of the generators in the BKL presentation are transpositions of the form (s, t) for
1 ≤ s < t ≤ n.
One can express braids in either presentation with the following equivalency:
at,s = (σt−1 σt−2 . . . σs+1 ) σs (σt−1 σt−2 . . . σs+1 )−1
In the BKL presentation we have a different ∆ than in Artin’s presentation. It is given
below in both the generators of the BKL and Artin presentation, and as Figure 1.6.
∆ = an,(n−1) a(n−1),(n−2) . . . a2,1 = σn−1 σn−2 . . . σ2 σ1
Figure 1.6. ∆ in B4 with the dual Garside structure
Example 1.4. Free Abelian groups of finite rank
Probably the simplest example of a Garside group is a free abelian group of finite
rank. Consider
Zn = ⟨x1 , . . . , xn ∣ xi xj = xj xi , i < j⟩.
Then our monoid is
Nn = {x1^k1 . . . xn^kn ∣ ki ≥ 0 for all i}.
The classical Garside structure is then given by (Zn , Nn , ∆), where ∆ = x1 x2 . . . xn .
The simple elements are x1^k1 . . . xn^kn , where each ki ∈ {0, 1} for
i = 1, . . . , n. It follows that there are 2^n simple elements.
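Because every operation here is componentwise arithmetic on exponent vectors, this example is easy to experiment with on a computer. Below is a minimal sketch of this Garside structure (the function names are ours, purely for illustration):

```python
# A toy model of the Garside structure on Z^n (Example 1.4).
# Elements of the positive cone N^n are tuples of exponents, so
# x1^2 x3 in N^3 is (2, 0, 1).  Illustrative code, not a library.
from itertools import product

n = 3
DELTA = (1,) * n                      # ∆ = x1 x2 ... xn

def leq(a, b):
    # a ⪯ b in the prefix order iff a^{-1}b is positive,
    # i.e. componentwise a_i <= b_i.
    return all(p <= q for p, q in zip(a, b))

def meet(a, b):                       # gcd: a ∧ b
    return tuple(map(min, a, b))

def join(a, b):                       # lcm: a ∨ b
    return tuple(map(max, a, b))

simples = [s for s in product((0, 1), repeat=n) if leq(s, DELTA)]
assert len(simples) == 2 ** n         # the 2^n simple elements

print(meet((2, 0, 1), (1, 1, 0)))     # (1, 0, 0), i.e. x1
print(join((2, 0, 1), (1, 1, 0)))     # (2, 1, 1)
```

This toy structure is reused in several later sketches, since its lattice operations are trivial to compute.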
Example 1.5. Spherical Type Artin-Tits Groups
To explain this example we will first define an Artin-Tits group, and the restrictions
necessary for it to be of spherical type.
If we are given a finite set S we can look at a Coxeter matrix over S . It is a
symmetric matrix M = (mst )s,t∈S , where mss = 1 for all s ∈ S and all other entries mst are in
the set (N ∖ {1}) ∪ {∞}. This matrix then defines a group AM as follows:
AM = ⟨S ∣ stst . . . = tsts . . . (mst terms on each side) for all s, t ∈ S⟩.
If mst = ∞ then there is no relation between s and t. We have now defined the
Artin-Tits group associated to M namely the group AM . For this group to be of spherical type
we need to have a special Coxeter group associated to AM . We obtain the Coxeter group by
adding the relation s2 = 1 for all s ∈ S to the presentation for AM . If this Coxeter group is
finite, we say the Artin-Tits group is of spherical type.
We then have a Garside structure given by (AM , AM+ , ∆), where AM+ is the monoid,
or positive cone, composed of products of elements of the set S . The Garside element ∆
remains to be defined.
The generators in S can be separated into two sets S1 and S2 such that
S = S1 ∪ S2 and the elements in each Si commute with each other. This decomposition can
readily be found from the Coxeter diagram. The Garside element is then a special product of
the following two elements:
∆1 = ∏s∈S1 s and ∆2 = ∏s∈S2 s
Finally
∆ = ∆1 ∆2 ∆1 ∆2 . . . (h terms),
where h is the Coxeter number associated with that Coxeter group.
Example 1.6. This particular example does not belong to a class of known Garside groups,
but it has interesting properties that we will use later.
G = ⟨x, y ∣ xyxyxyx = y 2 ⟩, and the Garside element is ∆ = y 3 . The lattice diagram is
given in Figure 1.7.
Figure 1.7. Lattice diagram for Example 1.6
It is also worth noting that in [21] the author proves that the crossed product of
Garside monoids is also a Garside monoid, thus enabling the creation of an infinite number of
new Garside groups.
1.2 DECISION PROBLEMS
A decision problem is any problem with a yes or no answer on an infinite set of inputs.
The decision problems that are discussed in this paper were introduced by Max Dehn in [8]
and are given below. Aside from the motivations given in the following section, it is generally
considered worthwhile to solve these problems for different groups so that the structure of that
group is better understood.
Word Problem: The word problem is a decision problem and is the first step to
solving the conjugacy problem, which will be introduced shortly. It asks: given two elements
a and b, expressed as products of generators and their inverses (that is, as words in the
generators), can one decide whether a is equal to b?
The word problem for Garside groups is solved in a fairly standard way. Later we will
introduce a normal form for elements in a Garside group. Then, to solve
the word problem for a, b ∈ G, we can utilize an algorithm to find their normal forms. If their
normal forms are the same, then a and b are equal.
Conjugacy Decision Problem (CDP): The CDP is also a decision problem and asks:
given a, b ∈ G, is there a c ∈ G such that a = c−1 bc?
If the CDP is answered in the positive, the next logical question is “what is c?”, i.e., is it
possible to find the conjugating element. This question differs from the word problem and the
CDP because it is not a question with a yes or no answer; it is a search problem and will be
referred to as the Conjugacy Search Problem (CSP). Most methods that have been proposed to
solve the CDP also solve the CSP, including the ones in this work. Because the CDP and CSP
are so closely related, we will often refer to them jointly as conjugacy.
The solutions to these problems depend on algorithms. If an algorithm is found for a
problem, we would like to know how “good” it is. An algorithm is considered “good” if we
are able to determine a practical bound on the time it would take for a computer to solve that
problem. If such a bound exists we will say that the algorithm is efficient. Obviously if no
such bound exists, that algorithm would not be efficient in practice. Ideally we hope to find
polynomial bounds for these algorithms, which in turn implies that these problems can be
solved in a practical time frame. For a more in-depth look at computational time
complexity see [24].
We will discuss in detail the algorithmic solutions of these problems, and they
make up the bulk of this work. An exhaustive search for solutions to the word problem and
the conjugacy problem is necessarily flawed. Since Garside groups are generally infinite
groups, the word problem obviously does not lend itself to an exhaustive search solution. As
for the conjugacy problem, the conjugacy class [x] of a non-trivial group element is
generally infinite.
1.3 PUBLIC KEY CRYPTOGRAPHY
The goal of public key cryptography is to securely transmit data across a channel
without the involved parties having to physically meet in order to share private encryption
keys. This type of encryption is very widespread today and its security is of great interest.
Most of the current public key cryptographic systems that are in use rely on the group action
on elliptic curves or on the presumed difficulty of factoring large integers. See [19] for an
overview of current key exchange protocols and the cryptosystems they produce. With
advances in algorithms or in computational hardware, the
security of these systems could become compromised in the foreseeable future. Indeed, with a
quantum computer these methods of encryption would become insecure. This has sparked
much interest in finding other, more secure encryption systems.
That interest has led to research in non-commutative algebraic structures to find an
alternative to the above mentioned cryptographic systems. One area in the current research is
the braid group, Bn , one of the examples of Garside groups given above. Some of the
attractiveness of encryption with Bn came from an efficient algorithm to determine the
normal form of elements in Bn and from the presumed difficulty of the conjugacy problem in
Bn . Even though the latter is still an open question, it is not expected that an encryption
system using Bn will someday replace the existing systems, unless the requirements for which
elements in Bn are chosen are very stringent. Since Garside groups are a generalization of
braid groups, if we can solve the conjugacy problem in Garside groups, the problem will be
implicitly solved in braid groups. Aside from proving whether or not there are viable braid
based cryptosystems, the research into this open question may provide the insight for the
discovery of such a system. Also, by studying these problems in Garside groups, one might be
able to find a way to construct a secure Garside group based cryptosystem.
1.3.1 Key Exchange Protocols
Public key cryptography uses key exchange protocols to establish a “shared secret”,
which is the key used to encrypt messages between two parties over an insecure channel. In
cryptography, those two parties are usually Alice (A) and Bob (B). We will give an example
of a key exchange protocol that relies on the conjugacy problem in Bn , and thus is of interest
to our topic.
1.3.2 Diffie-Hellman key exchange protocol for Bn
Here is the braid-group implementation of the Diffie-Hellman-style protocol, as found
in [18]. While the classical Diffie-Hellman protocol’s security relies on the
discrete logarithm problem, this key exchange protocol is based on conjugacy and is tailored
for braid groups. Here is how it works:
First a sufficiently large n is chosen so that computations in Bn are
complicated enough. Then two disjoint subgroups of Bn are chosen:
LBn = ⟨σ1 , σ2 , . . . , σ(n/2)−1 ⟩ and RBn = ⟨σ(n/2)+1 , . . . , σn−1 ⟩. This is done so that
elements of the two subgroups commute with each other. Then some x ∈ Bn is chosen, which is public.
A chooses a braid a ∈ LBn and sends y1 = a−1 xa to B.
B chooses a braid b ∈ RBn and sends y2 = b−1 xb to A.
A computes shared key K1 = a−1 b−1 xba
B computes shared key K2 = b−1 a−1 xab
Now since a ∈ LBn and b ∈ RBn we have ab = ba. Therefore:
K1 = a−1 b−1 xba = (a−1 b−1 )x(ba) = (b−1 a−1 )x(ab) = b−1 a−1 xab = K2
and Alice and Bob have obtained the same key.
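To see the message flow concretely, here is a toy numeric simulation. A full braid implementation is beyond a short example, so it uses 4 × 4 block matrices as a stand-in for Bn : blocks acting on disjoint coordinates commute, just as LBn and RBn act on disjoint sets of strands. All names and values are illustrative, not part of the protocol in [18].

```python
# A toy simulation of the conjugacy-based key exchange, with matrices
# standing in for braids (illustrative only; real implementations use
# braid words in normal form).
import numpy as np

def conj(c, x):                       # c^{-1} x c
    return np.linalg.inv(c) @ x @ c

Z, I2 = np.zeros((2, 2)), np.eye(2)

def left(a0):                         # analogue of LB_n: acts on first "strands"
    return np.block([[a0, Z], [Z, I2]])

def right(b0):                        # analogue of RB_n: acts on last "strands"
    return np.block([[I2, Z], [Z, b0]])

# Public element x, chosen invertible.
x = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [0., 1., 0., 1.]])
a = left(np.array([[1., 1.], [0., 1.]]))    # Alice's secret
b = right(np.array([[1., 0.], [2., 1.]]))   # Bob's secret

y1 = conj(a, x)            # Alice sends a^{-1} x a
y2 = conj(b, x)            # Bob sends b^{-1} x b
K1 = conj(a, y2)           # Alice: a^{-1} b^{-1} x b a
K2 = conj(b, y1)           # Bob:   b^{-1} a^{-1} x a b
assert np.allclose(K1, K2)            # ab = ba, so the keys agree
```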
1.4 ATTACKING CONJUGACY BASED CRYPTOSYSTEMS
It is easy to see that the security of the key exchange protocol above is dependent on
the presumed difficulty of the conjugacy problem. Once the public element x is intercepted,
along with the transmitted elements y1 and y2 , if the conjugacy problem
is solved, the interceptor would have the conjugating elements a and b, which are the
private keys of both A and B, thus breaking the security of the cryptosystem.
Since these systems were proposed, several heuristic attacks have been
implemented and shown to work with some success (see [9] for a nice summary),
but the only deterministic attack on all conjugacy based cryptosystems in braid groups would
be to solve the conjugacy problem for braid groups. It is worth noting that there are attacks
that will break specific conjugacy based encryption protocols without solving the conjugacy
problem. See [23] for a description of how to break the Diffie-Hellman method without
solving the conjugacy problem.
While the conjugacy problem has been solved for different types of braids (for one
example see [4]), no efficient algorithmic solution has come forward for the general case,
either for Bn or for Garside groups. Therefore there has been research into solving these
problems in the more general setting of Garside groups, with the intent that a solution would
also apply to braid groups.
CHAPTER 2
FINDING A NORMAL FORM AND THE
WORD PROBLEM
2.1 INTRODUCTION
In this section we will discuss a method to find a normal form for elements in a
Garside group, thus solving the word problem. We will first define the normal form that is
used in the literature, called simply the left normal form, and then we will give an algorithmic
approach for putting words into that normal form. Having a normal form for group elements
lets us work efficiently in these groups using computers; therefore the algorithmic solution to
the word problem is a very useful step in the current methods to solve the conjugacy problem
in Garside groups, and for working in the group in general.
Artin in [1] presented a solution to the word problem for Bn that involved looking at
the kernel of the previously mentioned map φ from Bn to Σn . He then used a process called
“combing” to put a braid into a normal form, thus solving the word problem. The time
complexity of Artin’s solution has not been studied, but it is most likely exponential in the
length of a word in the generators of the classical Garside structure of Bn [2]. Since other,
more efficient algorithms were later developed, this is not of crucial importance. We will
proceed to give an algorithm that has been generalized to work not only in Bn , but in all
Garside groups.
2.2 NORMAL FORMS AND LEFT WEIGHTING
Definition 2.1. Left normal form [3]. Given x ∈ G, we will say that a decomposition of
x = ∆p x1 x2 . . . xr with r ≥ 0 is the left normal form of x if it satisfies the following:
1. p ∈ Z is maximal such that ∆p ⪯ x (i.e. ∆p+1 ⪯̸ x). Also x1 , . . . , xr ∈ P and ∆ ⪯̸ x1 . . . xr .
2. xi = (xi . . . xr ) ∧ ∆ for i = 1, . . . , r . Equivalently, we say that xi is the biggest simple
prefix of xi . . . xr . (This particular part of the definition will be very useful later.)
Notice that the uniqueness of ∧ implies that left normal forms are also unique.
This second requirement introduces the idea of left weighting. We say a pair of simple
elements a, b ∈ G is left weighted if ab ∧ ∆ = a, equivalently ab is left weighted if it is in its
left normal form as written. Intuitively what we are doing with the left normal form, is
gathering the largest elements (with respect to “⪯”) in the decomposition of x to the left. For
this reason this is sometimes called the left greedy normal form. Hence we can check whether
some x ∈ G is in left normal form by verifying that x1 ≠ ∆ and that each pair xi , xi+1 for
i ∈ [1, r − 1] is left weighted.
In Figures 2.1 and 2.2, we can see two equivalent braids, but the braid in Figure 2.2 is
left weighted, while the braid in Figure 2.1 is not.
Figure 2.1. σ1 σ2 σ2 σ1 σ2
Figure 2.2. σ1 σ2 σ1 σ2 σ1
One can also define the right normal form of x as y1 y2 . . . yr ∆p and all the results for
the left normal form can be developed analogously for the right normal form as well.
There is another definition for left weighting used fairly frequently in the current
literature using starting sets and finishing sets. The starting set of an element is the set of all
possible generators that an element can start with, and the finishing set is defined similarly.
We will now show the relation between the two.
Definition 2.2. The starting set for an element x ∈ P is the set of integers
S(x) = {i ∶ x = ai xi , ai ⪯ x}.
where each xi ∈ P . Recall that the set of atoms is A = {a1 , . . . , aλ }.
The finishing set for an element x ∈ P is the set of integers
F (x) = {i ∶ x = xi ai , ai ⪰ x}.
In [11] the authors define a product ab to be left weighted if S(b) ⊂ F (a). Here we
will show that the two definitions are equivalent, provided that the Garside group in question
has no simple elements with repeated words, i.e., if a ∈ G is a nontrivial simple element, then
a2 is not simple. We can think of this as a ‘square free’ Garside group.
Lemma 2.1. [11, 7] Let x ∈ G be simple. We have ai x is simple if and only if i ∉ S(x). Also
xai is simple if and only if i ∉ F (x).
Proof. First assume that ai x is simple and i ∈ S(x). This would mean that we can write ai x as
ai ai x′ = (ai )2 x′ for some x′ ∈ P . Since every prefix of a simple element is simple, (ai )2
would then be simple, contradicting the ‘square free’ assumption. Hence i ∉ S(x).
Now assume that i ∉ S(x). This implies that there is no way to write x = ai x′ , so ai x
does not repeat the atom ai and does not contradict the properties of simple elements; hence
ai x is simple.
The result for finishing sets is analogous.
Proposition 2.1. [7] Let x ∈ P . x = ab is left weighted (ab ∧ ∆ = a) if and only if
S(b) ⊆ F (a).
Proof. Assume that S(b) ⊈ F (a). Now let i ∈ S(b) ∖ F (a). Then b = ai b′ for some simple
element b′ , and we have ab = a(ai b′ ). If we then set a′ = aai , we have ab = a′ b′ . Notice that
a ⪯ a′ but a ≠ a′ . Also a′ ⪯ a′ b′ = ab, and since i ∉ F (a) we know from Lemma 2.1 that
a′ = aai is simple, so a′ ⪯ ∆. This shows that ab ∧ ∆ ≠ a. So if ab ∧ ∆ = a then
S(b) ⊆ F (a) by the contrapositive.
Again by contrapositive, assume that ab ∧ ∆ ≠ a. This implies that there are some a′ , b′
such that a ⪯ a′ , a ≠ a′ , ab = a′ b′ , and a′ b′ ∧ ∆ = a′ . Now let r ∈ P be such that ar = a′ .
Then rb′ = b. Now let j ∈ S(r). We then have r = aj r ′ where r ′ ∈ P , hence a′ = aaj r ′ .
Clearly aaj ⪯ a′ . Since a′ is simple, so is aaj , and then from Lemma 2.1, j ∉ F (a). Since
b = rb′ , we have j ∈ S(b). This shows that S(b) ⊈ F (a). So S(b) ⊆ F (a) implies ab ∧ ∆ = a.
However this poses a significant problem: not all Garside groups are ‘square free’.
Consider Example 1.6. Clearly y 2 is simple, so this property does not hold. To further prove
this point consider the element (yy)(y). We have S(y) ⊆ F (yy), however (yy)(y) ∧ ∆ = ∆
and not (yy), since (yy)(y) = y 3 = ∆. It is not clear that the results in the literature that
reference results using the definition with starting sets and finishing sets generalize to all
Garside groups.
Therefore from here on out we require all Garside groups to be ‘square free’.
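The equivalence can at least be checked exhaustively on a small ‘square free’ example. The sketch below does so on the toy structure Zn of Example 1.4, which is square free since no simple element carries an exponent above 1 (illustrative code only):

```python
# Verifying that ab ∧ ∆ = a  iff  S(b) ⊆ F(a) on the square-free toy
# structure Z^n (illustrative code).
from itertools import product

n = 3
DELTA = (1,) * n

def mul(a, b):
    return tuple(p + q for p, q in zip(a, b))

def meet(a, b):
    return tuple(map(min, a, b))

def S(x):   # starting set: indices of atoms that are prefixes of x
    return {i for i, xi in enumerate(x) if xi >= 1}

F = S       # Z^n is abelian, so prefixes and suffixes coincide

for a in product((0, 1), repeat=n):           # all simple elements a
    for b in product((0, 1), repeat=n):       # all simple elements b
        assert (meet(mul(a, b), DELTA) == a) == (S(b) <= F(a))
print("the two definitions of left weighting agree on Z^%d" % n)
```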
It will be beneficial to our aims to define in more detail several of the ideas introduced
above.
Let the normal form of x be x = ∆p x1 x2 . . . xr .
Definition 2.3. The infimum of x is p and it is denoted inf(x).
Definition 2.4. The supremum of x is p + r and it is denoted sup(x).
Definition 2.5. The length of x is r and it is denoted ℓ(x). This is also sometimes called the
canonical length.
Definition 2.6. The inner automorphism τ ∶ G → G defined by τ (x) = ∆−1 x∆ is called the
twisting automorphism or shift map.
Definition 2.7. Given a simple element x ∈ G we define x∗ = x−1 ∆ as the right complement
of x. This means that xx∗ = ∆ and that x∗ is the maximal simple element (with respect to ⪯)
such that xx∗ is simple. We call the map ∂ ∶ [1, ∆] → [1, ∆] with ∂(x) = x∗ the right
complement map. We analogously define the left complement of x as x○ , where x○ = ∆x−1 ,
so that x○ x = ∆, and we call the map ∂ −1 ∶ [1, ∆] → [1, ∆] with ∂ −1 (x) = x○ the left
complement map.
The following result shows how the right complement helps us understand the
relationship between the left normal forms of x and x−1 .
Theorem 2.1. [11] Given x ∈ G with the left normal form x = ∆p x1 . . . xr . Then the left
normal form of x−1 is
x−1 = ∆−p−r x′r . . . x′1 ,
where x′i = τ −p−i (∂(xi )) = ∂ −2p−2i+1 (xi ) for i = 1 . . . r.
The proof relies heavily on the definition of left weightedness using starting and
finishing sets, whose problems we discussed above. The interested reader is therefore
referred to [11].
Lemma 2.2. [3] The right complement map ∂ ∶ [1, ∆] → [1, ∆] is a bijection and ∂ 2 = τ .
Proof. To show that ∂ is a bijection we will define its inverse as the left complement map
from above: ∂ −1 ∶ [1, ∆] → [1, ∆] as ∂ −1 (x) = x○ = ∆x−1 . Therefore
∂ −1 (∂(x)) = ∂ −1 (x−1 ∆) = ∆(x−1 ∆)−1 = ∆∆−1 x = x, and
∂(∂ −1 (x)) = ∂(∆x−1 ) = (∆x−1 )−1 ∆ = x∆−1 ∆ = x.
Also note that ∂ 2 (x) = ∂(x−1 ∆) = (∆−1 x)∆ = ∆−1 x∆ = τ (x), and we are done.
Corollary 2.1. [3] There exists a positive integer e such that ∆e belongs to the center of G.
Specifically τ ([1, ∆]) = [1, ∆] and τ (A) = A, where A is the set of atoms in G, and
τ e = idG for some positive integer e, so that ∆e is central.
Proof. We have ∂([1, ∆]) = [1, ∆] and it follows that ∂ 2 ([1, ∆]) = τ ([1, ∆]) = [1, ∆]. We
now show that τ (A) = A. By way of contradiction, suppose there is some a ∈ A such that
τ (a) ∉ A. Then τ (a) is a simple element that has a decomposition st, where s, t ∈ [1, ∆] are
non-trivial. However τ −1 (s) and τ −1 (t) are simple elements, and
a = τ −1 (τ (a)) = τ −1 (s)τ −1 (t). Thus we have a contradiction because a is an atom. We
therefore have τ (A) ⊂ A. Now because A is a finite set and τ is a bijection, it follows that
A ⊂ τ (A) and therefore A = τ (A).
Lastly, since τ induces a permutation on A, there exists a positive integer e such that
τ e induces the trivial permutation on A. Because A generates G this directly implies that τ e is
the trivial automorphism on G. Hence we have ∆e is central in G.
Corollary 2.2. All atoms in G are both left and right divisors of ∆.
Proof. Let a ∈ A. From Corollary 2.1 we know that τ (A) = A. Hence, τ (a) = b for some
b ∈ A. Then a∆ = ∆b and a(∆b−1 ) = ∆, so if ∆b−1 ∈ P , then a is a left divisor of ∆.
Now assume that ∆b−1 ∉ P . Then ∆b−1 = c−1 for some c ∈ P . This implies that c∆ = b,
but this is a contradiction, since b is an atom and ∆ ⪰̸ b. Therefore a is a left divisor of ∆.
The argument for right divisors is analogous.
2.3 ESTABLISHING THE NORMAL FORM ALGORITHM
Now that we have some vocabulary and some preliminary results, we can start to
prove that a normal form can be computed with an efficient algorithm. We first need to
establish that a few operations can be performed efficiently, i.e., with good bounds on their
computational time complexity.
We assume that for a given Garside group G the list of atoms A = {a1 , . . . , aλ } of G is
known. We also assume that given a simple element s and an atom a ∈ A, one can test whether
a ⪯ s and compute the simple element a−1 s, and that this can be done efficiently. We
will call this time complexity O(C).
From [16] we have the following algorithms to compute x∗ , the right complement of
x, and x ∧ y.
Algorithm 2.1. [16] Input: x ∈ G, Output: x∗
1. Set d = ∆.
2. While x ≠ 1 do:
(a) Take an atom a ⪯ x.
(b) Set d = a−1 d and x = a−1 x.
3. Return d.
To explain what is going on here, let x ∈ G be simple and ∆ be the Garside element.
We pick some atom a1 ∈ G such that a1 ⪯ x. We then compute a−1 1 ∆ and a−1 1 x. If
a−1 1 x = 1, then a1 = x and a−1 1 ∆ = x−1 ∆ = x∗ , and we are done. If not, we pick some
a2 ∈ G such that a2 ⪯ a−1 1 x. Then compute a−1 2 a−1 1 ∆ and a−1 2 a−1 1 x. If
a−1 2 a−1 1 x = 1, then a1 a2 = x and a−1 2 a−1 1 ∆ = x−1 ∆ = x∗ , and we are done. If not, we
continue iteratively to choose a3 , a4 , . . . , ak until we have (a1 a2 a3 . . . ak )−1 x = 1, so that
(a1 a2 a3 . . . ak )−1 ∆ = x∗ .
There are at most ∥∆∥ choices for the ai ’s. The cost of finding each
ai ⪯ (a1 . . . ai−1 )−1 x is O(λC), since there are λ candidates for each ai . The cost of
computing (a1 . . . ai )−1 x and (a1 . . . ai )−1 ∆ is O(C). We now have that the complexity of
the algorithm is O(λC∥∆∥).
We can also note that since τ (x) = (x∗ )∗ this algorithm can also be used to find the shift map
of x, showing that conjugation by ∆ also has complexity O(λC∥∆∥).
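On the toy structure Zn of Example 1.4 the algorithm is especially transparent, since the right complement of a simple element x works out to ∆ − x; the following sketch (with illustrative names) runs the loop literally:

```python
# Algorithm 2.1 on the toy structure Z^n: strip atoms off x, removing
# each from a copy of ∆, until x is exhausted (illustrative code).
n = 3
DELTA = (1,) * n
ATOMS = [tuple(int(j == i) for j in range(n)) for i in range(n)]

def leq(a, b):
    return all(p <= q for p, q in zip(a, b))

def divide(a, x):                  # a^{-1} x, assuming a ⪯ x
    return tuple(q - p for p, q in zip(a, x))

def right_complement(x):
    d = DELTA
    while any(x):                  # while x ≠ 1
        a = next(a for a in ATOMS if leq(a, x))   # step 2(a): an atom a ⪯ x
        d, x = divide(a, d), divide(a, x)         # step 2(b)
    return d

x = (1, 0, 1)
print(right_complement(x))         # (0, 1, 0), i.e. x* = ∆ - x here
# ∂^2 = τ, and τ is trivial on the abelian group Z^n:
assert right_complement(right_complement(x)) == x
```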
Algorithm 2.2. [16] Input: Two simple elements, x, y ∈ G, Output: x ∧ y
1. Set i = 1 and d = ∆.
2. While i ≤ λ do:
(a) If ai ⪯ x and ai ⪯ y , then
(b) set d = a−1 i d, x = a−1 i x, y = a−1 i y , and set i = 1;
(c) else set i = i + 1.
3. Return ∂ −1 (d) = x ∧ y .
The tests in step 2(a) and the operation in 2(b) have a cost of O(C). Step 3 has a cost
of O(λC∥∆∥). Since step 2(b) is executed at most ∥∆∥ times, with at most λ passes
through the while loop between two consecutive executions, the cost of step 2 is O(λC∥∆∥), so
the complexity of the algorithm is O(λC∥∆∥).
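The same toy structure makes Algorithm 2.2 concrete; here the left complement ∂ −1 (d) returned in step 3 is just ∆ − d (illustrative sketch):

```python
# Algorithm 2.2 on the toy structure Z^n: repeatedly remove atoms
# dividing both x and y, then return ∂^{-1}(d) (illustrative code).
n = 3
DELTA = (1,) * n
ATOMS = [tuple(int(j == i) for j in range(n)) for i in range(n)]

def leq(a, b):
    return all(p <= q for p, q in zip(a, b))

def divide(a, x):                  # a^{-1} x for a ⪯ x
    return tuple(q - p for p, q in zip(a, x))

def meet(x, y):
    d, i = DELTA, 0
    while i < len(ATOMS):
        a = ATOMS[i]
        if leq(a, x) and leq(a, y):         # step 2(a)
            d, x, y = divide(a, d), divide(a, x), divide(a, y)
            i = 0                           # step 2(b)
        else:
            i += 1                          # step 2(c)
    return divide(d, DELTA)                 # step 3: ∂^{-1}(d) = ∆ - d here

print(meet((1, 0, 1), (1, 1, 0)))           # (1, 0, 0)
```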
Lemma 2.3. Let a, b ∈ G be simple elements. There are simple elements â, b̂ ∈ G such that
âb̂ = ab and âb̂ ∧ ∆ = â. [2]
Proof. Let a, b ∈ G. Consider a∗ , the right complement of a. Now let c = a∗ ∧ b, let b′ be a
simple element such that b′ = c−1 b so that cb′ = b, and let a′ be a simple element such that
a′ = c−1 a∗ so that ca′ = a∗ . Since c can be no larger than a∗ we know that ac ∈ [1, ∆].
We now have that (ac)b′ is left weighted. This is because if there was some non-trivial
simple element d ⪯ b′ such that acd ∈ [1, ∆] then we would have the following two results.
First this would imply that there would be some b′′ such that cdb′′ = b hence cd ⪯ b. Second
because acd ∈ [1, ∆] we know that acd ⪯ ∆. This directly implies that d ⪯ a′ in the following
way:
acd ⪯ ∆ ⟹ d−1 a′ = d−1 c−1 ca′ = d−1 c−1 a∗ = d−1 c−1 a−1 ∆ = (acd)−1 ∆ ∈ P ⟹ d ⪯ a′
Since d ⪯ a′ , we have cd ⪯ ca′ = a∗ . However since c ⪯ cd, we have a contradiction
because c = a∗ ∧ b. Thus if â = ac, b̂ = b′ , then âb̂ = ab and âb̂ ∧ ∆ = â.
Then to find the left weighted decomposition we use the following algorithm:
Algorithm 2.3. [2]
Input: Simple elements a and b. Output: a′ , b′ such that ab = a′ b′ and a′ b′ ∧ ∆ = a′
1. Compute a∗ , the right complement of a.
2. Compute c = a∗ ∧ b.
3. Compute b′ = c−1 b.
4. Compute ac.
Steps (1) and (2) have complexity O(λC∥∆∥) . Step (3) has complexity O(C) and
step (4) is simply concatenation. Hence the algorithm has complexity O(λC∥∆∥).
It is worth noting, and clear from the above algorithm and Lemma 2.3, that two simple
elements a and b have a product, ab, that is left weighted if and only if a∗ ∧ b = 1.
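Continuing with the toy structure Zn , here is a sketch of Algorithm 2.3 (illustrative names only); note how a factor that can be fully absorbed disappears into a′ :

```python
# Algorithm 2.3 on the toy structure Z^n: left-weight a product ab of
# simple elements (illustrative code).
n = 3
DELTA = (1,) * n

def meet(a, b):
    return tuple(map(min, a, b))

def mul(a, b):
    return tuple(p + q for p, q in zip(a, b))

def divide(a, x):                     # a^{-1} x for a ⪯ x
    return tuple(q - p for p, q in zip(a, x))

def left_weight(a, b):
    a_star = divide(a, DELTA)         # step 1: a* = a^{-1}∆
    c = meet(a_star, b)               # step 2: c = a* ∧ b
    return mul(a, c), divide(c, b)    # steps 3-4: a' = ac, b' = c^{-1}b

a, b = (1, 0, 0), (0, 1, 1)
print(left_weight(a, b))   # ((1, 1, 1), (0, 0, 0)): ab = ∆, so b is absorbed
```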
Lemma 2.4. [2] Given a product of simple elements a1 a2 a3 ∈ G, suppose â2 â3 = a2 a3 with
â2 â3 left weighted, and ã1 ã2 = a1 â2 with ã1 ã2 left weighted. Then ã2 â3 is left weighted.
The interested reader is referred to Lemma 3.14 in [2] for the proof of the previous
Lemma. It is not included here because of the length of development required for the result.
We can now establish the main result of this section. There have been several different
algorithms to compute normal forms in braid groups, but finding in the literature a detailed
algorithm of this kind that computes normal forms in Garside groups in polynomial time
proved somewhat difficult, so we will give one here, building on what has been done in [2, 16].
Algorithm 2.4 (The Left Normal Form Algorithm). We begin with some x ∈ G represented as
a product of n simple elements of G or their inverses, x1^j1 . . . xn^jn , where each ji = ±1.
1. The first step is to rewrite all inverses of simple elements as ∆−1 followed by a simple
element. Recall that x○ = ∆x−1 is the left complement of x, and is a simple element. Since
x−1 i = ∆−1 ∆x−1 i = ∆−1 x○ i ,
we can replace each x−1 i with ∆−1 x○ i . This needs to be done at most n times, and
each replacement has a computational cost of O(λC∥∆∥).
2. The next step is to collect all the ∆’s to the left. Recall that τ ([1, ∆]) = [1, ∆] so that if
xi is a simple element, τ (xi ) is a simple element also. We know that
xi ∆ = ∆∆−1 xi ∆ = ∆τ (xi )
xi ∆−1 = ∆−1 ∆xi ∆−1 = ∆−1 τ −1 (xi )
∆−1 ∆m = ∆m−1
These results allow us to move all the ∆’s to the left. This gives us a representation of x
such that
x = ∆p x1 x2 . . . xr .
The number of times we apply τ or τ −1 is bounded by n(n − 1)/2 and each time we
apply them it has a cost of O(λC∥∆∥).
3. The final step is to make each pair of simple elements xi , xi+1 , where i ∈ [1, r − 1], left
weighted. This can be accomplished by repeated applications of Lemma 2.3. We are
able to do this working from both directions. Assume inductively that x1 , . . . , xi is left
weighted. Then apply Lemma 2.3 to xi xi+1 ; then apply Lemma 2.3 again to xi−1 xi
(because xi may have changed). Lemma 2.4 guarantees that xi xi+1 is still left weighted.
Lemma 2.3 need be applied at most r times to make x1 . . . xr left weighted, so we need
at most r(r + 1)/2 applications of Lemma 2.3 to compute the left normal form of
x1 . . . xr , and the complexity is O(r 2 λC∥∆∥). If we assume inductively that xi . . . xr is left
weighted, we use an analogous process to the above to put xi−1 xi . . . xr into left normal
form.
4. Lastly, there may be a consecutive product of simple elements at the
beginning of x1 x2 . . . xr that equals ∆, and there may be a consecutive product of
simple elements at the end of x1 x2 . . . xr that equals the identity. These should be
absorbed, thus maximizing p and minimizing r . We know that a simple element x is ∆
if and only if ∥x∥ = ∥∆∥, and x is the identity if and only if ∥x∥ = 0. This means we
can check whether x is ∆ or 1 in O(λ), so this entire process has complexity of at most
O(Cλr 2 ∥∆∥).
Finally we have x = ∆p x1 . . . xr , where p is maximal and each pair xi xi+1 , i ∈ [1, r − 1], is
left weighted.
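On the toy structure Zn the entire normal form algorithm collapses to a few lines: the infimum is the minimum exponent, and each greedy factor is the meet of what remains with ∆ (illustrative sketch):

```python
# The left normal form ∆^p x_1 ... x_r on the toy structure Z^n
# (illustrative code).  Here inf(x) is the minimum exponent, sup(x) the
# maximum exponent, and ℓ(x) their difference.
n = 3
DELTA = (1,) * n

def normal_form(x):
    p = min(x)                               # maximal p with ∆^p ⪯ x
    rest = [xi - p for xi in x]
    factors = []
    while any(rest):
        f = tuple(min(r, 1) for r in rest)   # greedy factor: rest ∧ ∆
        factors.append(f)
        rest = [r - fi for r, fi in zip(rest, f)]
    return p, factors

print(normal_form((3, 1, -1)))
# (-1, [(1, 1, 0), (1, 1, 0), (1, 0, 0), (1, 0, 0)]),
# so inf = -1, ℓ = 4, sup = 3.
```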
Theorem 2.2. [2] The word problem can be solved algorithmically in Garside groups with
time complexity O(Cλr 2∥∆∥).
Proof. We use the normal form algorithm above to find the left normal form of each word. If
the left normal forms are the same, then the words are equivalent; if not, the words are not
equivalent. This computation has a time complexity of at most O(Cλr 2 ∥∆∥), where r is the
length of the longer word.
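As a tiny end-to-end instance, here is the word problem solved on the toy structure Zn , with words given as lists of signed generator indices (illustrative code; the normal form function is the one sketched above):

```python
# Word problem sketch on the toy structure Z^n: evaluate each word and
# compare normal forms (illustrative code).  A word is a list of signed
# indices: +2 means x2, -2 means x2^{-1}.
n = 3

def evaluate(word):
    x = [0] * n
    for g in word:
        x[abs(g) - 1] += 1 if g > 0 else -1
    return tuple(x)

def normal_form(x):
    p = min(x)
    rest, factors = [xi - p for xi in x], []
    while any(rest):
        f = tuple(min(r, 1) for r in rest)
        factors.append(f)
        rest = [r - fi for r, fi in zip(rest, f)]
    return p, factors

w1 = [1, 2, -1, 3]     # x1 x2 x1^{-1} x3
w2 = [2, 3]            # x2 x3
print(normal_form(evaluate(w1)) == normal_form(evaluate(w2)))   # True
```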
CHAPTER 3
SOLUTIONS TO THE CONJUGACY
PROBLEM AND BOUNDS ON TIME
3.1 INTRODUCTION
The conjugacy decision problem for braid groups was first introduced by Artin, but he
did not solve the problem in his work. It wasn’t until 44 years later, in 1969, that F.A. Garside
proposed a solution to the conjugacy decision problem that also solved the conjugacy search
problem. While Artin examined the kernel of the map φ discussed earlier, Garside looked at
the similarity of the combinatorics of Bn to those of Σn . Since then, other authors have
improved on Garside’s solution, minimizing bounds on the computational time of the
algorithm to solve the conjugacy problem. This section will show that these solutions also
apply to Garside groups.
The strategy that Garside used and that others improved on follows the same basic
pattern. Birman, Gebhardt, and González-Meneses in [3] give the following outline, which
shows the strategy nicely.
The goal was to construct an algorithm that solved the conjugacy problem in the
following manner: given x, y ∈ G, determine if x is conjugate to y and if answered positively
to find z ∈ G such that z −1 xz = y, with the stipulation that this could be accomplished
efficiently. These algorithms achieve this by computing a finite subset, Cx , of the conjugacy
class of x that has the following properties:
1. For every x ∈ G the set Cx must be non-empty, finite, and depend only on the conjugacy
class of x. This requirement is necessary because it guarantees that two elements
x, y ∈ G are conjugate to one another if and only if Cx = Cy .
2. For a given x ∈ G, xc , an element of Cx , can be found along with a ∈ G such that
xa = xc .
3. Given a finite C ⊂ Cx we are able to efficiently determine if C = Cx and if not we can
produce an element b such that xb ∈ Cx ∖ C. The very useful application of this step is
that it gives us a way to construct Cx as the closure of this process as long as we have
found one a from step (2). Since Cx is finite, we know this process will terminate;
however, it is the time taken to compute this step that has proven the most difficult to
bound, and this has been the motivation for continually trying to decrease the size of Cx .
(A sketch of this closure process appears after the outline below.)
With this information, solving the conjugacy problem then proceeds as follows:
(a) Given x, y ∈ G, find xc ∈ Cx and yc ∈ Cy
(b) With one member of Cx known, use the process in step (3) while recording the
conjugating elements to find other elements of Cx until one of the following
situations occurs:
i. yc is found as a member of Cx showing that x and y are conjugate while also
providing the conjugating element
ii. Cx is computed in its entirety without yc being found as a member, showing
that x and y are not conjugate.
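The closure process of step (3) can be sketched in miniature. A full Garside implementation will not fit in a few lines, so the code below uses the finite symmetric group S4 as a stand-in, where the invariant set Cx can simply be taken to be the whole conjugacy class; the bookkeeping of conjugating elements is exactly the CSP adaptation described at the end of Section 3.2. All names are illustrative.

```python
# Growing C_x by conjugation until closure, on the stand-in group S_4
# (permutations as tuples).  Illustrative code only.
from collections import deque

def compose(p, q):                  # (p . q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(x, gens):
    # Invariant: conjugator[z]^{-1} x conjugator[z] = z.
    conjugator = {x: tuple(range(len(x)))}
    queue = deque([x])
    while queue:
        y = queue.popleft()
        for g in gens:
            z = compose(inverse(g), compose(y, g))      # g^{-1} y g
            if z not in conjugator:                     # new element found
                conjugator[z] = compose(conjugator[y], g)
                queue.append(z)
    return conjugator

gens = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)]   # adjacent transpositions
x, y = (1, 0, 2, 3), (0, 1, 3, 2)                   # two transpositions
C = closure(x, gens)
print(y in C)                       # True: x and y are conjugate
a = C[y]                            # and a^{-1} x a = y solves the search
assert compose(inverse(a), compose(x, a)) == y
```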
In [14] Garside introduced Cx as the Summit Set of x, which is the set of all conjugates
of x having maximal infimum, and it is denoted SS(x).
Then in [11] El-Rifai and Morton decreased the size of Cx by introducing the Super
Summit Set (SSS(x)). The difference is that the SSS(x) is the set of all conjugates of x
having minimal length ℓ(x). They also proved that elements in SSS(x) have minimal
supremum and maximal infimum and that SSS(x) is finite.
The next refinement was introduced by Gebhardt in [15]. He further restricts the size of
Cx by introducing the Ultra Summit Set. The USS(x) is the union of the cyclic parts of the
orbits in SSS(x) and is typically much smaller, but not provably so.
The remaining problem with all of these summit sets is that bounds on their
cardinality have yet to be found. In practice they behave nicely for many group elements, but
a general bound that is polynomial is not currently known.
3.2 THE SUMMIT SET
We will first show that Garside’s summit set generalizes to Garside groups. To do so
we need to formally define the Summit Set, show that it is nonempty, and prove that its
members can be found via an algorithm.
Recall that given x ∈ G with the left normal form ∆p x1 . . . xr , the infimum of x is p.
Hence the infimum is the greatest power of ∆ that left divides x.
Definition 3.1. Given x ∈ G, we know that x can be written with g negative generators and h
positive generators. The index length of x is Λ(x) = h − g .
Note that since conjugation is multiplication by a word and its inverse, the
index length of x is equal to the index length of every element in [x].
Theorem 3.1. [14, 7] Let x ∈ G with inf(x) = p and conjugacy class [x]. The set
[x]p ∶= {y ∈ [x] ∣ inf(y) ≥ p}
is finite.
Proof. Let y ∈ [x]p with the normal form y = ∆m a, where m ≥ p and a ∈ P . Now if the
letter length of ∆ is d, then
Λ(y) = md + Λ(a).
Since Λ(a) ≥ 0, we have
m ≤ Λ(y)/d.
This inequality shows that the infimums of elements in [x]p are bounded above. We
already know that they are bounded below by p. We know also that Λ(a) is constant for a
fixed m, since Λ(y) and d are constant. Also, since a ∈ P , Λ(a) equals the letter length of a,
so the number of possible elements a is finite. The result now follows.
The above theorem implies that the set {inf(y) ∣ y ∈ [x]p } has a maximum, which is
called the summit infimum and is denoted inf s (x). We can now formally define SS(x).
Definition 3.2. The Summit Set of x ∈ G is the set
SS(x) = {y ∈ [x]p ∣ inf(y) = infs (x)}
The fact that SS(x) is non-empty will be clearly illustrated in the algorithm that we
now begin to establish.
In defining this set Garside made the first significant progress towards solving the
conjugacy problem since Artin posed the question. We will go on to see how the SS(x)
solves the conjugacy problem. To do this we will first need to show that one is able to find
SS(x). The difficulty arises from the fact that it is not obvious how to find conjugating
elements to take x into SS(x).
Lemma 3.1. [7] For x, x′ ∈ G, if x is conjugate to x′ , then there exists a c ∈ P such that
x = c−1 x′ c.
Proof. Assume that x is conjugate to x′ . Then there exists a ∈ G such that x = a−1 x′ a. Let the
normal form of a be ∆m p where p ∈ P . Recall from Corollary 2.1 that there is some integer e
such that ∆e is central in G.
Case 1 If m ≥ 0 then a ∈ P and the result follows.
Case 2 If m < 0 choose k ∈ N such that m + ke > 0. We now have that:
x = (∆m p)−1 x′ (∆m p) = p−1 ∆−m−ke ∆ke x′ ∆−ke ∆m+ke p = (∆m+ke p)−1 x′ ∆m+ke p.
Since ∆m+ke p ∈ P , set c = ∆m+ke p and we are done.
Garside proved a result that states that every x ∈ P can be written as a product of
simple elements. No proof is needed here because of the normal form algorithm: every
x ∈ P has a normal form x = ∆p x1 . . . xr with p ≥ 0, where each xi is a simple element and
∆ is itself a product of simple elements, so we have the same result.
Lemma 3.2. [14] Let x ∈ G with inf s (x) = m and a ∈ SS(x). If y ∈ P is such that
y −1 ay = u ∈ SS(x), then there is a simple element c such that c−1 ac ∈ SS(x), where c = y ∧ ∆.
These results allow us to prove the following theorem, which then enables us to
establish an algorithm to solve the CDP.
Theorem 3.2. [14, 7] For x, x′ ∈ G, x is conjugate to x′ if and only if SS(x) = SS(x′ ).
Proof. First assume that SS(x) = SS(x′ ). Let c ∈ SS(x). Then we also have c ∈ SS(x′ ) and
x is conjugate to c and x′ is conjugate to c. Since conjugacy is an equivalence relation, we
conclude that x is conjugate to x′ .
Now assume that x is conjugate to x′ . Let a ∈ SS(x) and b ∈ SS(x′ ) where both have
inf s (x) = inf s (x′ ) = m . Then because a is conjugate to x and b is conjugate to x′ we know
that a is conjugate to b. By Lemma 3.1, there is some c ∈ P such that a = c−1 bc. Since every
positive word can be written as a product of simple elements, we know that
c = u1 u2 . . . us ,
where ui = ui ui+1 . . . us ∧ ∆ and ui is simple for all i. By substitution we now have
a = (u1 u2 . . . us )−1 b(u1 u2 . . . us ).
We need to show that a ∈ SS(x′ ). To do so we note that inf(b) = inf(a) = m. We now
use Lemma 3.2 by setting y = u1 u2 . . . us . Now since u1 = y ∧ ∆ we know that
b1 = u−1 1 bu1 ∈ SS(x′ ). We continue by noting that inf(b1 ) = inf(a) and set y = u2 u3 . . . us , so
that b2 = u−1 2 b1 u2 = (u1 u2 )−1 b(u1 u2 ) ∈ SS(x′ ). By continuing this process we eventually have
a = (u1 u2 . . . us )−1 b(u1 u2 . . . us ) ∈ SS(x′ ), which shows that SS(x) ⊆ SS(x′ ).
Analogously we can show that SS(x′ ) ⊆ SS(x) proving that SS(x) = SS(x′ ) and we
are done.
We can now give an algorithm that computes the summit set of a given element of a
Garside group using the simple elements of that group. The idea behind the algorithm is quite
simple. To determine if x ∈ G is conjugate to some x′ ∈ G we begin conjugating x by all
simple elements of G. Next we take the conjugates with the highest infimums and conjugate
those elements by all simple elements. This process continues until no higher infimums are
produced. This algorithm computes SS(x) therefore solving the CDP by the previous
theorem.
Algorithm 3.1 (Summit Set Algorithm). [7]
Input: x, x′ ∈ G, Output: SS(x), SS(x′ )
1. Compute the set S1 (x) = {d−1 xd ∣ d ∈ [1, ∆]}
2. Set S2 (x) = {d−1 yd ∣ d ∈ [1, ∆], y ∈ S1 (x) with maximal infimum}.
(a) Set p1 = max{inf(y) ∣ y ∈ S1 (x)} and p2 = max{inf(z) ∣ z ∈ S2 (x)}.
(b) While p2 > p1 , set S1 (x) = S2 (x), recompute S2 (x), and repeat step (2a).
(c) Return S2 (x) = SS(x).
3. Repeat steps (1) and (2) substituting x′ for x.
4. If SS(x) = SS(x′ ) then x is conjugate to x′ .
It is clear from the algorithm that SS(x) is non-empty.
While this was the first solution to the conjugacy problem, this algorithm is not very
useful in practice because the summit set is so large, possibly exponentially so. In any
event it is larger than the SSS(x), which was the next refinement. However it was significant
in proving that the CDP was solvable in Bn and also in Garside groups. Garside’s progress
laid the foundation for others to improve upon his solution by decreasing the size of the
summit set.
This algorithm is also capable of solving the CSP if one keeps track of all conjugating
elements. This process does not add to the time complexity of the algorithm. The adaptation
is as follows:
Assume x is conjugate to x′ . Then by Theorem 3.2 we know that SS(x) = SS(x′ ). Let
y ∈ SS(x) = SS(x′ ). Let a be the product of the conjugating elements such that a−1 xa = y.
Similarly set b as the product of the conjugating elements such that b−1 x′ b = y. We then have
x = ab−1 x′ ba−1 = (ba−1 )−1 x′ (ba−1 ).
So set c = ba−1 and x = c−1 x′ c.
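This composition of recorded conjugators is easy to sanity-check numerically; the sketch below does so with 2 × 2 matrices standing in for group elements (illustrative only):

```python
# If a^{-1} x a = y and b^{-1} x' b = y, then c = b a^{-1} satisfies
# c^{-1} x' c = x (illustrative check with matrix stand-ins).
import numpy as np

def conj(c, x):                       # c^{-1} x c
    return np.linalg.inv(c) @ x @ c

x = np.array([[2., 1.], [1., 1.]])
a = np.array([[1., 1.], [0., 1.]])
b = np.array([[1., 0.], [3., 1.]])
y = conj(a, x)                        # a^{-1} x a
x_prime = b @ y @ np.linalg.inv(b)    # an x' with b^{-1} x' b = y
c = b @ np.linalg.inv(a)
assert np.allclose(conj(c, x_prime), x)
```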
3.3 THE SUPER SUMMIT SET
El-Rifai and Morton in [11] introduced the super summit set (SSS(x)) as a refinement
to Garside’s original summit set. The difference is that in addition to having maximal
infimum, elements in SSS(x) also have minimal length. So if x ∈ SSS(y) has the left normal
form ∆p x1 . . . xr , then p is maximized and r is minimized; equivalently, elements of SSS(y)
have maximal infimum and minimal supremum at the same time. Clearly this is a non-empty
subset of SS(x). In a similar way to Garside’s solution, two elements x and x′ are conjugate
to one another if and only if SSS(x) = SSS(x′ ), therefore SSS(x) is an invariant of the
conjugacy class.
To find SSS(x) we first need to find one element y ∈ SSS(x) and then conjugate y by
all simple elements in G, and do this until no new elements in SSS(x) are found. However
the approach to finding elements in SSS(x) is more involved than finding elements in SS(x).
To do this we will need to introduce two new special conjugations, called cyclings and
decyclings.
Definition 3.3. Given x ∈ G with the left normal form x = ∆p x1 . . . xr (r > 0), the initial factor
of x is ι(x) = τ −p (x1 ). The final factor of x is ϕ(x) = xr . In the case r = 0 we define ι(x) = 1
and ϕ(x) = ∆.
The names of these terms reflect the fact that ι(x) corresponds to the first non-∆
factor in the left normal form of x and ϕ(x) corresponds to the last factor in the left normal
form of x.
We will now show how the initial and final factors of x and x−1 are related.
Lemma 3.3. [11, 3] For x ∈ G we have ι(x−1 ) = ∂(ϕ(x)) and ϕ(x−1 ) = ∂ −1 (ι(x)).
Proof. We know that x has normal form ∆p x1 . . . xr . If r > 0 then from Theorem 2.1 the left
normal form of x−1 is ∆−p−r x′r . . . x′1 where x′i = τ −p−i (∂(xi )). We now have
ι(x−1 ) = τ p+r (x′r ) = τ p+r (τ −p−r (∂(xr ))) = ∂(xr ) = ∂(ϕ(x)).
Analogously we have
ι(x) = τ −p (x1 ) = τ −p (τ p (∂(x′1 ))) = ∂(x′1 ) = ∂(ϕ(x−1 ))
which implies ∂ −1 (ι(x)) = ϕ(x−1 ).
If r = 0 then we have that the normal form of x is ∆p . Then
ι(x−1 ) = 1 = ∂(∆) = ∂(ϕ(x)), and ϕ(x−1 ) = ∆ = ∂ −1 (1) = ∂ −1 (ι(x)) and we have proved our
claim.
It is worth noting that an equivalent form of Lemma 3.3 is: Given x ∈ G we have
ϕ(x)ι(x−1 ) = ∆ = ϕ(x−1 )ι(x).
Definition 3.4. Let x ∈ G. The cycling of x, denoted as c(x), is xι(x) . The decycling of x,
denoted d(x), is xϕ(x)−1 . We can see where these names come from when we look at the way
these special conjugations affect normal forms:
c(x) = ι(x)−1 xι(x) = (∆p x1−1 ∆−p )∆p x1 x2 . . . xr ι(x) = ∆p x2 . . . xr ι(x) = ∆p x2 . . . xr τ −p (x1 )
d(x) = ϕ(x)xϕ(x)−1 = xr ∆p x1 . . . xr−1 xr xr−1 = xr ∆p x1 . . . xr−1
Therefore cycling corresponds to passing the first non-∆ factor to the end of the left
normal form, while decycling takes the last factor to the front of the left normal form; however,
one does need to take account of the powers of ∆. In general cycling and decycling do not
preserve left normal forms, although considerable study has been given to elements of Bn
with the property that cycling does preserve left normal forms, or in other words c(x) is in
left normal form as written. These elements are called rigid. For the general case, though, the
left normal form algorithm would need to be used after every cycling or decycling.
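As a small illustration, here is a Python sketch of cycling and decycling acting on an element stored as its left normal form, a pair (p, factors) standing for ∆p x1 . . . xr . The helpers tau_pow (computing τ k (a)) and renormalize (restoring left normal form) are assumed placeholders, and the convention τ (a) = ∆−1 a∆ is used.

def cycling(p, factors, tau_pow, renormalize):
    # c(x): pass the first non-Delta factor to the end, twisted by tau^{-p}
    if not factors:                    # x = Delta^p, so c(x) = x
        return p, factors
    x1, rest = factors[0], factors[1:]
    return renormalize(p, rest + [tau_pow(x1, -p)])

def decycling(p, factors, tau_pow, renormalize):
    # d(x) = x_r Delta^p x_1 ... x_{r-1}; moving x_r past Delta^p
    # twists it, giving Delta^p tau^p(x_r) x_1 ... x_{r-1}
    if not factors:
        return p, factors
    xr, rest = factors[-1], factors[:-1]
    return renormalize(p, [tau_pow(xr, p)] + rest)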
We have the very useful results from [6, 11] that shows how cycling and decycling
play a crucial role in finding elements of the super summit set:
Lemma 3.4. [11, 6] Let x, y ∈ G and suppose that x is conjugate to y . If inf(x) < inf(y), then
there is some positive integer k1 such that repeated cycling yields inf(x) < inf(ck1 (x)). If
sup(x) > sup(y), then there exists some positive integer k2 such that sup(dk2 (x)) < sup(x).
So now we can both increase the infimum and decrease the supremum. The natural
question to then ask is whether a bound exists on the number of times these operations need to be
performed to reach a member of the super summit set; finding such a bound was one of
the main goals of [6]. It was found that the number of times that an element's infimum can be
increased is bounded by its length (ℓ), and the number of times the supremum of an element
can be decreased is bounded by the length of ∆.
Theorem 3.3. [3, 6] Let (G, P, ∆) be a Garside structure of finite type. Choose x ∈ G and let
r = ℓ(x). Let m be the letter length of ∆.
1. A sequence of at most rm cyclings and decyclings applied to x produces a
representative x̃ ∈ SSS(x).
2. If y ∈ SSS(x) and α ∈ P is such that y α ∈ SSS(x) then y α∧∆ ∈ SSS(x).
This last result will show that once a single element of the super summit set is found,
that element can be used to compute the rest of the set.
Corollary 3.1. [11] Let x ∈ G and V ⊂ SSS(x) be non-empty. If V ≠ SSS(x) then there
exists y ∈ V and a simple element s such that y s ∈ SSS(x) ∖ V .
These results allow us to find the entirety of SSS(x), although the process is not
elegant. At the beginning we start with the single element subset V = {x̃} where
x̃ ∈ SSS(x). We then conjugate x̃ by every simple element in G. If some conjugate of x̃,
say z , is also found to be in SSS(x), then we set V = V ∪ {z}. We continue in this
manner until we have found all of SSS(x), and this process also gives us the conjugating
elements, if we are keeping track. This is done via the following algorithm.
Algorithm 3.2 (Super Summit Set Algorithm). [11]
Input: x ∈ G, Output: SSS(x)
1. Compute the left normal form of x.
2. Use cyclings and decyclings to compute x̃ ∈ SSS(x).
3. Set v = x̃, V = {x̃}, and W = ∅.
4. For every simple element r , do the following:
(a) Compute w = r −1 vr .
(b) Compute the left normal form of w .
(c) If w ∉ V and ℓ(w) = ℓ(v), then set V = V ∪ {w}.
5. Set W = W ∪ {v}. If V = W then return V and stop.
6. Else, take some new v ∈ V ∖ W and repeat from Step 4.
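A direct Python sketch of Algorithm 3.2 follows, with the fixed-point loop made explicit. The helpers to_sss (applying cyclings and decyclings until a representative of SSS(x) is reached), simples, conj, and canonical_length are assumed placeholders as before.

def super_summit_set(x, to_sss, simples, conj, canonical_length):
    x0 = to_sss(x)                 # steps 1-2: reach SSS(x)
    V, W = {x0}, set()
    while V != W:
        v = next(iter(V - W))      # step 6: a not-yet-processed element
        W.add(v)
        for r in simples():        # step 4
            w = conj(v, r)         # r^{-1} v r, in left normal form
            # w stays in SSS(x) exactly when its canonical length
            # did not grow (maximal infimum and minimal supremum)
            if canonical_length(w) == canonical_length(v):
                V.add(w)
    return V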
Then to see if x is conjugate to y , we would need to use cyclings and decyclings
again to find ỹ ∈ SSS(y). If ỹ ∈ SSS(x) then x and y are conjugate.
But that could take a long time. For example, in Bn with the usual Garside structure,
the number of simple elements is n! (and x̃, and in turn all elements of SSS(x), would
need to be conjugated by each one of these simple elements!), and the known upper bounds
for the size of SSS(x) are exponential in n. It is shown in [12] that finding SSS(x) in
Bn has complexity O(kl2 (λ!)λ log λ). For large values of λ this could take quite a
while. This problem led to the following results by González-Meneses and Franco [12] that
drastically cut the time for computing SSS(x) by finding a way around conjugation by every
simple element.
Lemma 3.5. [12] If conjugation by two simple elements, a, b does not decrease the infimum
of x ∈ P , then a ∧ b does not decrease the infimum of x.
Proof. Let x ∈ P where m = inf(x), and let a and b be simple elements such that inf(xa ) ≥ m
and inf(xb ) ≥ m. Also let c = a ∧ b and write x = ∆m x′ . Since τ preserves left divisibility, it
also preserves gcd's. So τ (c) = τ (a ∧ b) = τ (a) ∧ τ (b), and therefore τ m (c) = τ m (a) ∧ τ m (b).
Notice that a−1 xa = a−1 ∆m x′ a = ∆m (τ m (a))−1 x′ a; since inf(xa ) ≥ m, we have
(τ m (a))−1 x′ a ∈ P , which means that τ m (a) ⪯ x′ a.
We now have τ m (c) ⪯ τ m (a) ⪯ x′ a. Similarly τ m (c) ⪯ x′ b. Since left multiplication
preserves gcd's, x′ a ∧ x′ b = x′ (a ∧ b) = x′ c, and therefore τ m (c) ⪯ x′ c. This means
that (τ m (c))−1 x′ c ∈ P , and therefore ∆m (τ m (c))−1 x′ c = c−1 xc has an infimum greater than or
equal to m, so conjugation by c = a ∧ b does not decrease the infimum of x.
Lemma 3.6. [12] Let x = x1 . . . xr ∈ P be written in right normal form. Let s be a simple
element and suppose that we can write xs as a product of r simple elements, that is
x1 . . . xr s = u1 . . . ur . Then x1 ⪯ u1 .
Theorem 3.4. [12] Let x ∈ SSS(x). If we have xa ∈ SSS(x) and xb ∈ SSS(x) for some
a, b ∈ G, then xa∧b ∈ SSS(x).
Proof. Let x ∈ SSS(x) and let a, b ∈ G be such that xa , xb ∈ SSS(x). Let c = a ∧ b, and let
r1 and r2 be such that a = cr1 and b = cr2 . We then know that the only common prefix of r1
and r2 is 1, that is, r1 ∧ r2 = 1.
Now suppose that the left normal form of x is ∆p x1 . . . xr and let x′ = x1 . . . xr . Since
xa ∈ SSS(x), we have a−1 xa = a−1 ∆p x′ a = ∆p (τ p (a))−1 x′ a. Because a−1 xa ∈ SSS(x), we
have (τ p (a))−1 x′ a ∈ P and it has length r; more explicitly, we can write (τ p (a))−1 x′ a as a
product of r simple elements, say t1 . . . tr . Analogously we have the same situation for
(τ p (b))−1 x′ b.
Now let us look at what happens when we conjugate x by c. We know from Lemma 3.5
that this conjugation does not decrease the infimum, therefore its infimum is maximal. Now
we need to show that its supremum is minimal.
Assume that the supremum of c−1 xc is not minimal, that is, (τ p (c))−1 x′ c ∈ P has
length r + 1. We know it can't be greater than r + 1 because it is a right divisor of x′ c, which
has r + 1 factors. So let (τ p (c))−1 x′ c = z1 . . . zr+1 , written in right normal form. We now have
t1 . . . tr = (τ p (a))−1 x′ a = (τ p (r1 ))−1 (τ p (c))−1 x′ cr1 = (τ p (r1 ))−1 z1 . . . zr+1 r1 .
This shows z1 . . . zr+1 r1 = τ p (r1 )t1 . . . tr , a product of r + 1 simple elements. Then by
Lemma 3.6, z1 ⪯ τ p (r1 ). Analogously z1 ⪯ τ p (r2 ). We then have
z1 ⪯ τ p (r1 ) ∧ τ p (r2 ) = τ p (r1 ∧ r2 ) = τ p (1) = 1,
so z1 = 1, contradicting the assumption that (τ p (c))−1 x′ c has length r + 1. Therefore c−1 xc
has minimal supremum, and xc ∈ SSS(x).
Corollary 3.2. [3] Let x ∈ G and y ∈ SSS(x). Then for every u ∈ P there is a unique
⪯-minimal element ρy (u) satisfying
u ⪯ ρy (u) and y ρy (u) ∈ SSS(x).
Proof. The gcd of the set {v ∈ P ; u ⪯ v, y v ∈ SSS(x)} satisfies all the requirements and has
all the properties of ρy (u).
Let us consider the set ρy (A) = {ρy (a) ∣ a ∈ A}, recalling that A is the set of atoms in
G. This set contains all nontrivial elements which are ⪯-minimal among the elements that
conjugate y to an element of SSS(x). We call elements in this set minimal simple
elements for y with respect to SSS(x). Since it is possible to have ρy (a) ≺ ρy (b) strictly
for distinct atoms a and b, the set of minimal simple elements for y ∈ G
is generally strictly contained in ρy (A).
Corollary 3.3. [3] Let x ∈ G and V ⊂ SSS(x) be non-empty. If V ≠ SSS(x) then there exists
some y ∈ V and a simple element ρ = ρy (a) for some atom a such that y ρ ∈ SSS(x) ∖ V .
These results allow us to find the super summit set more efficiently, because instead of
conjugating each element y ∈ SSS(x) by all simple elements, we need only conjugate y by its
minimal simple elements. This is an improvement because the number of minimal simple
elements is bounded by λ, the number of atoms in G, whereas the number of simple elements
is not. For example there are n! simple elements in Bn with the classical Garside structure,
but there are only n − 1 atoms, so for large values of n there is much less computation
required to find SSS(x) with the minimal simple elements technique. To further illustrate
this point we can look at free abelian groups of finite rank: there are n atoms, but there are 2n
simple elements (one for each subset of the atoms).
The only question then becomes “How fast can we compute the minimal simple
elements for some y ∈ SSS(x)?” In [12] it is shown that there are fewer minimal simple
elements than there are generators for the group. This leads to computing the set of minimal
simple elements comparatively quickly. The algorithm is essentially the same as Algorithm
3.2 but instead of conjugating by every simple element, we only have to conjugate
x̃ ∈ SSS(x) by each minimal simple element.
For example, in Bn it is shown that the time complexity of computing SSS(x) using
minimal simple elements is O(kl2 λ4 ), where k is the size of SSS(x), compared to
O(kl2 (λ!)λ log λ) using all simple elements.
The conjugacy decision problem is then solved in the same way as with the SS(x).
Given x, y ∈ G, we compute x̃ ∈ SSS(x) and ỹ ∈ SSS(y). We then begin to compute SSS(x)
and if we find ỹ ∈ SSS(x), then x and y are conjugate. This process allows one to keep track
of the conjugating elements to solve the conjugacy search problem also.
So we have seen that the method in [12] is an improvement over that in [11], which is
in turn an improvement on Garside's original algorithm, but the problem that still remains is
that SSS(x) can be very large. For example in Bn all known bounds on the super summit set
are exponential in n.
3.4 THE ULTRA SUMMIT SET
The next improvement in the summit sets came via the work of Gebhardt in [15].
Instead of using SSS(x) as the conjugacy class invariant, Gebhardt
introduced the ultra summit set, which is a subset of the super summit set and is defined as
follows:
Definition 3.5. Given x ∈ G, the ultra summit set of x, denoted USS(x), is the set of
elements y ∈ SSS(x) such that cm (y) = y , for some m > 0.
Definition 3.6. For y ∈ SSS(x) the trajectory of y is Ty = {ck (y) ∣ k ≥ 0}.
So USS(x) then consists of a finite set of disjoint, closed orbits inside SSS(x) under
the cycling operation.
Theorem 3.5. [5] USS(x) is always non-empty.
Proof. Given some y ∈ SSS(x) we know that c(y) is also in SSS(x). Since SSS(x) is
finite, after some number of cyclings we will find m1 < m2 such that cm1 (y) = cm2 (y).
Hence we have cm1 (y) ∈ USS(x).
As shown in [3], consider the example x = σ1 σ3 σ2 σ1 ⋅ σ1 σ2 ⋅ σ2 σ1 σ3 ∈ B4 . Here x has a super
summit set with 22 elements, but the ultra summit set has only 6. More interestingly, those 6
elements are contained in 2 orbits, namely
O1 = {σ1 σ3 σ2 σ1 ⋅ σ1 σ2 ⋅ σ2 σ1 σ3 , σ1 σ2 ⋅ σ2 σ1 σ3 ⋅ σ1 σ3 σ2 σ1 , σ2 σ1 σ3 ⋅ σ1 σ3 σ2 σ1 ⋅ σ1 σ2 }
O2 = {σ3 σ1 σ2 σ3 ⋅ σ3 σ2 ⋅ σ2 σ3 σ1 , σ3 σ2 ⋅ σ2 σ3 σ1 ⋅ σ3 σ1 σ2 σ3 , σ2 σ3 σ1 ⋅ σ3 σ1 σ2 σ3 ⋅ σ3 σ2 }
From the above example it can be verified that O1 = τ (O2 ). In [15] it is shown that in
B3 , for an arbitrary braid of canonical length l, the cardinality of the ultra summit set is either l
or 2l, depending on whether or not O1 = τ (O2 ) (as in the previous example). However this
does not hold in general; in fact the ultra summit set is not well understood in general. As an
example, in [13] the author points out that there is an element of B12 with canonical length 6
that has an ultra summit set with 264 elements.
Then the algorithm for the conjugacy problem in Garside groups is analogous to the
algorithm for summit sets and super summit sets, but we only need to find USS(x) instead of
the summit set or super summit set. To do so we need the following results, which are
analogous to the results we needed for summit sets.
Theorem 3.6. [15] Let x ∈ G such that x ∈ USS(x). If s, t ∈ G are such that xs and xt are
both in USS(x), then we also have that xs∧t ∈ USS(x).
As we did before with the super summit set, we can define a non-identity element
s ∈ G to be a minimal simple element for y , but with respect to USS(x) instead of SSS(x):
y s ∈ USS(x), and there is no proper prefix s′ of s such that y s′ ∈ USS(x).
Theorem 3.7. [5] Let x ∈ G and y ∈ USS(x). For every u ∈ P there exists a unique element
cy (u) which is minimal with respect to ⪯ among all elements that satisfy u ⪯ cy (u) and
y cy (u) ∈ USS(x).
The set of all minimal simple elements for y with respect to USS(x) is also contained
in the set cy (A) = {cy (a) ∣ a ∈ A}. Like before this set is bounded by λ. For the remainder of
this thesis, the term “minimal simple element” will always be with respect to the USS(x),
and not SSS(x).
In section 4 of [15] it is shown how to compute Cy , the set of minimal simple
elements for y .
Corollary 3.4. [15] Let x ∈ G and V ⊂ USS(x) be non-empty. If V ≠ USS(x) then there
exists some y ∈ V and an atom a such that cy (a) is a minimal simple element for y and
y cy (a) ∈ USS(x) ∖ V .
Using this technique we can obtain an algorithm not only to compute elements of the
ultra summit set, but also to create a directed graph which records the conjugating elements. We
define this directed graph as follows:
Definition 3.7. Let x ∈ G. Then the directed graph Γx is as follows:
1. The set of vertices of Γx are the elements in USS(x).
2. For each y ∈ USS(x) and for each minimal simple element s for y , there is an arrow in
Γx labeled s taking y to y s .
Theorem 3.8. [15] Let x ∈ G and y, z ∈ USS(x). There exist y0 , . . . , yt ∈ USS(x) and simple
elements c1 , . . . , ct such that y0 = y , yt = z , and (yi−1 )ci = yi for i ∈ [1, t].
The most beneficial result from this theorem is that it shows that the graph Γx is
connected. This graph makes it very simple to find and keep track of the conjugating
elements, hence making solving the CSP simpler.
To find an element y ∈ USS(x) we take some element in SSS(x) and apply repeated
cyclings, as implied in Theorem 3.5. The main benefit of using USS(x) instead of SSS(x) is
that it is generally a much smaller set than SSS(x) as can be seen in section 5 of [15], and the
main drawback of using USS(x) is that it is not known in general how many repeated
cyclings are needed to take an element from SSS(x) into USS(x) (this is open question 3 in
section 1.3 of [3]). However the theoretical complexity of the algorithm is not any worse than
the algorithm in [12]. While having the same theoretical complexity, this algorithm has been
shown to be much better in practice for Bn [15].
Now we can state the algorithm for the conjugacy problem using the ultra summit set.
Algorithm 3.3 (Ultra Summit Set). [15]
Input: x ∈ G, Output: USS(x)
1. Compute x̃ ∈ USS(x). Set U = Tx̃ and U0 = ∅.
2. If x̃ = ∆k for some k , then return {∆k }.
3. While U ≠ U0 , let y1 , . . . , ym ∈ U be such that U is the union of their trajectories, and set
U0 = U .
4. For each y ∈ {y1 , . . . , ym }, compute Cy , the set of minimal simple elements for y , and set
U = U ∪ (⋃c∈Cy Tyc ).
5. We now have U = USS(x).
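The following Python sketch mirrors Algorithm 3.3, storing USS(x) as a union of cycling orbits. The assumed placeholder helpers are to_uss (repeated cycling from a super summit representative until the orbit closes), cycle (one cycling, returned in left normal form), minimal_simples (yielding Cy ), and conj.

def trajectory(y, cycle):
    # T_y: the closed orbit of y under cycling (y must lie in USS(x))
    orbit, z = [y], cycle(y)
    while z != y:
        orbit.append(z)
        z = cycle(z)
    return orbit

def ultra_summit_set(x, to_uss, cycle, minimal_simples, conj):
    x0 = to_uss(x)
    U = set(trajectory(x0, cycle))
    frontier = [x0]                # one representative per known orbit
    while frontier:
        y = frontier.pop()
        for c in minimal_simples(y):
            z = conj(y, c)
            if z not in U:         # a new orbit has been found
                U.update(trajectory(z, cycle))
                frontier.append(z)
    return U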
In [3] it is stated that the complexity for the conjugacy problem in Garside groups with
this algorithm is O(∣USS(x)∣p + q) where p is polynomial in ∥x∥ and q is tied directly to the
number of cyclings and decyclings that one must apply to a member of SSS(x) to obtain a
member of USS(x). They believe q plays a very minor part compared to ∣USS(x)∣p, so the
main hope is to find a bound on ∣USS(x)∣.
Generally the USS(x) is much smaller than SSS(x). However we have results like
the ones in [22], where it was found that there are elements of Bn where the size of the ultra
summit set is at least exponential.
And so we see again that there isn’t a polynomial bound on the size of the ultra
summit set, and hence no polynomial bound on the algorithm using the ultra summit set. The
other problem with the ultra summit set approach is that it is not known how many cyclings
are needed to get from the super summit set into the ultra summit set.
This brings us to another method for solving the conjugacy problem that bypasses this
specific problem by using subsets of USS(x). Once again, though, we are finding a subset of a
not necessarily well bounded summit set, with the goal of finding a bound on the subset; such
a bound has not been proved as of yet, although the method is found to be faster in practice.
This seems to be a recurring theme.
3.5 BLACK AND GREY COMPONENTS OF THE ULTRA SUMMIT SET
3.5.1 Special Cyclings and Decyclings
The next attempt to decrease the size of the USS was developed in [5], and it allows one to
solve the conjugacy problem for some x, y ∈ G by finding certain cyclic subsets of USS(x)
and USS(y), thus eliminating the need to compute the whole of either USS(x) or USS(y). To
examine this we will need to introduce the following ideas.
Recall that given one element in USS(x) we can find the other elements in USS(x)
by conjugating that element by minimal simple elements. We will further examine those
minimal simple elements and see how they determine the graph Γx .
We will begin with a new definition.
Definition 3.8. The twisted decycling of x ∈ G is defined as τ (d(x)).
Lemma 3.7. Given some x ∈ G with ℓ(x) > 0, we have:
xι(x) = c(x), xϕ(x)−1 = d(x), xι(x−1 ) = x∂(ϕ(x)) = τ (d(x)).
Proof. The first two claims follow directly from the definitions of cycling and decycling. The
third equality is shown as follows:
x∂(ϕ(x)) = xϕ(x)−1 ∆ = (d(x))∆ = τ (d(x)).
From the above lemma we can see that the cycling and twisted decycling of x are
conjugates of x by simple elements. This also gives us another way to look at the connection
between cycling and twisted decycling. The twisted decycling of x corresponds to a cycling
of x−1 because it is a conjugation by ι(x−1 ), and the cycling of x corresponds to a twisted
decycling of x−1 .
Lemma 3.8. [5] Conjugation by ∆ commutes with both cycling and decycling. Also, the
USS of any element is closed under cycling, decycling, and twisting.
Proof. It is clear from the definitions that ι(τ (x)) = τ (ι(x)) and ϕ(τ (x)) = τ (ϕ(x)). This
means that c(τ (x)) = τ (x)τ (ι(x)) = τ (xι(x) ) = τ (c(x)) and
d(τ (x)) = τ (x)τ (ϕ(x)−1 ) = τ (d(x)), hence τ commutes with both cycling and decycling.
Now we need to show that these operations are closed in the USS. Let x ∈ USS(x).
We know that c(x) is also in USS(x) from [15]. It remains to show that τ (x) and d(x) are
also in USS(x). We will first look at twisting.
Since x ∈ USS(x) ⊂ SSS(x), x has minimal canonical length. τ (x) also has
minimal canonical length, so τ (x) ∈ SSS(x). Furthermore, since x ∈ USS(x) there is some
m such that cm (x) = x. Since twisting and cycling commute we have
cm (τ (x)) = τ (cm (x)) = τ (x), showing that τ (x) ∈ USS(x).
Now for decycling. We know xx = x ∈ USS(x). We also know that
x∆p+r−1 = τ p+r−1 (x) ∈ USS(x), because it is a repeated application of the twisting
automorphism and, as we have shown, the USS is closed under twisting. Then by Theorem 3.6,
d(x) = x∆p x1 ...xr−1 = xx∧∆p+r−1 ∈ USS(x).
Theorem 3.9. [5]
Let x ∈ USS(x) and s be a minimal simple element for x. Then one and only one of
the following holds:
1. ϕ(x)s is a simple element.
2. ϕ(x)s is left weighted as written.
Proof. We know from Lemma 3.7 that x∂(ϕ(x)) = xϕ(x)−1 ∆ = (d(x))∆ = τ (d(x)). Therefore by
Lemma 3.8, x∂(ϕ(x)) ∈ USS(x). Also, since s is a minimal simple element, xs ∈ USS(x). We
can again use Theorem 3.6 to obtain x∂(ϕ(x))∧s ∈ USS(x). For ease let t = ∂(ϕ(x)) ∧ s.
Now t ⪯ s, and since s is a minimal simple element, we have either t = s or t = 1.
Finally we look at the first factor of the left normal form of ϕ(x)s, which could be ∆.
This first factor is equal to ϕ(x)s ∧ ∆ = ϕ(x)s ∧ ϕ(x)∂(ϕ(x)) = ϕ(x)(s ∧ ∂(ϕ(x))) = ϕ(x)t.
Since t = s or t = 1, we know ϕ(x)t is equal to either ϕ(x) or ϕ(x)s. The first case implies
that ϕ(x)s is left weighted and the second case implies that ϕ(x)s is simple.
Theorem 3.10. [5]
Let x ∈ SSS(x) with ℓ(x) > 0 and let s be a simple element such that x s ∈ SSS(x). If
ϕ(x)s is left weighted, we then have s ⪯ ι(x).
Proof. Let x = ∆p x1 . . . xr . If ϕ(x)s = xr s is left weighted, then ∆p x1 . . . xr s is the left normal
form of the product xs. We know that x s ∈ SSS(x), and we can rewrite x s as
s−1 xs = ∆p (τ p (s))−1 x1 . . . xr s. This tells us that (τ p (s))−1 x1 . . . xr s ∈ P , which implies that
τ p (s) ⪯ x1 . . . xr s. Since τ p (s) is simple, we have τ p (s) ⪯ x1 . . . xr s ∧ ∆. But since x1 . . . xr s
is already in left normal form, x1 . . . xr s ∧ ∆ = x1 , so τ p (s) ⪯ x1 . This is equivalent to
s ⪯ τ −p (x1 ) = ι(x) and we are done.
Corollary 3.5. Let x ∈ USS(x) and ℓ(x) > 0 and let s be a minimal simple element for x.
Then we have that s is either a prefix of ι(x), ι(x−1 ), or both.
Proof. From Lemma 3.3 we have that ι(x−1 ) = ∂(ϕ(x)), and we know by Theorem 3.9
that ϕ(x)s is either simple or left weighted. If it is simple, then s ⪯ ∂(ϕ(x)) = ι(x−1 ), and if it is
left weighted we have s ⪯ ι(x), by Theorem 3.10.
We could have both s ⪯ ι(x) and s ⪯ ι(x−1 ), but only if ϕ(x)s is simple.
It is worth noting that these last two results are only for elements with length greater
than 0, otherwise we have y ∈ G such that ℓ(y) = 0, which implies that ι(y) = ι(y −1 ) = 1.
These results have shown that the minimal simple elements for each
y ∈ USS(x) with ℓ(y) > 0 are prefixes of either ι(y) or ι(y −1 ). This plays an important role
in finding the subset of USS that we are looking for, and it will help us keep track of the
conjugating elements between two elements in the USS. Since these conjugations are important we will
give them the following definitions:
Definition 3.9. Let x ∈ G. A partial cycling of x is a conjugation of x by a prefix of ι(x). A
partial twisted decycling is a conjugation of x by a prefix of ι(x−1 ) = ∂(ϕ(x)).
Corollary 3.6. Let x, y ∈ USS(x). There exists a sequence of partial cyclings and partial
twisted decyclings that takes x to y .
Proof. This follows directly from Theorem 3.8 and Corollary 3.5.
From here we can again look at the directed graph of USS(x), Γx , but instead of all
the arrows being the same, we divide them into two groups. If the minimal simple element
that takes x to y is a partial cycling, then we will color the arrow taking x to y in Γx black. If
the minimal simple element that takes x to y is a partial twisted decycling, then we will color
the arrow taking x to y in Γx grey. This new way to construct Γx will help to understand the
results in the next section.
3.5.2 Black and Grey Components of the Ultra Summit Set
By understanding the structure of the USS, we will be able to better understand further
solutions to the conjugacy problem. To do this, we will look at only the black arrows, or only
the grey arrows, in Γx ; that is, we will examine the subgraph of Γx containing all the same
vertices as Γx but only the black [grey] arrows. We will call this subgraph Γx black [Γx grey ].
It is important to note that while Γx is always connected, these subgraphs are not necessarily
connected, as in Example 3.1. This means that partial cyclings, or partial twisted decyclings,
are not generally sufficient on their own to generate all of USS(x). In most cases we need to
use both partial cyclings and partial twisted decyclings.
We can look at the parts of Γx black [Γx grey ] that are connected. We will denote the
connected components of Γx black by B1 , B2 , . . . , Bs , and the connected components of
Γx grey by G1 , G2 , . . . , Gt . We call B1 , B2 , . . . , Bs the black components and
G1 , G2 , . . . , Gt the grey components. It will be useful to talk about the black or grey
component containing a specific element of USS(x) as a vertex of the graph, so if we have
some y ∈ USS(x) we will denote the black component of USS(x) that contains y as By and
the grey component containing y as Gy .
Figures 3.1, 3.2, and 3.3 show how the black and grey components interact in different
ultra summit sets.
Figure 3.1. The graph of ΓA
Figure 3.1 is the USS(A) where A = σ1 σ2 σ3 σ2 σ2 σ1 σ3 σ1 σ3 ∈ B4 . USS(A) has two
cycling orbits that are conjugates of each other by τ . The first orbit is
A1 = σ2 σ1 σ3 ⋅ σ1 σ3 ⋅ σ1 σ2 σ3 σ2
A2 = σ1 σ3 ⋅ σ1 σ2 σ3 σ2 ⋅ σ2 σ1 σ3
A6 = σ1 σ2 σ3 σ2 ⋅ σ2 σ1 σ3 ⋅ σ1 σ3
And the second orbit is
A3 = σ1 σ3 σ2 σ1 ⋅ σ2 σ1 σ3 ⋅ σ1 σ3
A4 = σ2 σ1 σ3 ⋅ σ1 σ3 ⋅ σ1 σ3 σ2 σ1
A5 = σ1 σ3 ⋅ σ1 σ3 σ2 σ1 ⋅ σ2 σ1 σ3
Since all the black arrows in ΓA correspond to cyclings, we only need to find the grey
arrows. This is done with Lemma 3.3.
Figure 3.2. The graph of ΓB
Figure 3.2 is an example from B6 . Consider the graph of USS(B) where
B = σ2 σ1 σ4 σ3 σ2 σ1 σ5 σ4 ⋅ σ2 σ4 . In this example each black arrow is a partial cycling, and
every two consecutive partial cyclings make up a full cycling; hence there are 4 cycling orbits.
The grey arrow starting at Bi is ∂(ϕ(Bi )) for all i; hence all grey arrows are twisted
decyclings. The labels of the grey arrows are not written explicitly in the graph for space
considerations.
Figure 3.3. The graph of ΓC
In Figure 3.3 we have C = σ4 σ1 σ2 σ3 σ4 , which is in B5 . Since C is simple, we have
inf(C) = 0, sup(C) = 1, and consequently ℓ(C) = 1.
What is new in this last graph is that all arrows are both black and grey. We call these
bi-colored, since the conjugating element is both a partial cycling and a partial twisted
decycling. This illustrates an extreme case, and it is important to note that in many graphs of
ultra summit sets some arrows are both black and grey, and some are not.
We saw that given a specific y ∈ USS(x) we could compute the whole of USS(x) by
conjugating by minimal simple elements. We will show a similar result for the black and grey
components of USS(x): specifically, By can be computed by repeated partial cyclings of y ,
and the result is analogous for Gy using partial twisted decyclings.
Definition 3.10. In the graph Γx , an arrow has a starting point, y ∈ USS(x), and an endpoint,
y s , where s is the label of that arrow. The endpoint of s−1 is then the starting point of
s, and so on. We define a path in Γx to be a sequence (s1 e1 , s2 e2 , . . . , sk ek ) with ei = ±1,
where the endpoint of si ei is the starting point of si+1 ei+1 for all i. If a path is composed
entirely of black [grey] arrows, we call that path a black [grey] path. We say a path is oriented
if ei = 1 for all i = 1, . . . , k .
When we look at the path (s1 e1 , s2 e2 , . . . , sk ek ) we can see that there is an element of G
buried in it. Let α be the product of the simple elements in the path, that is,
α = s1 e1 s2 e2 . . . sk ek . It is clear that if ei = 1 for all i = 1, . . . , k then α ∈ P ; that is, if the
path is oriented, then α ∈ P . We also have that if x is the starting point of the first arrow in the
path, and y is the endpoint of the last arrow in the path, then xα = y .
Definition 3.11. It will be convenient to have a shorter notation for y∆−p , therefore let
y ⋆ = y∆−p = τ −p (y1 . . . yr ) and we will call it the pseudo twist of y . Now we can state some
results that will come in handy later.
Proposition 3.1. [5]
Let (s1 , s2 , . . . , sk ) be an oriented path in Γx starting at a vertex y , and let
α = s1 s2 . . . sk ∈ P be the corresponding element. We now have:
1. If α ⪯ y ⋆ then (s1 , s2 , . . . , sk ) is an oriented black path.
2. If α ⪯ (y −1 )⋆ then (s1 , s2 , . . . , sk ) is an oriented grey path.
Proof. We can assume that α ≠ 1, since this becomes trivially true for α = 1. We will proceed
inductively on k.
First suppose that α ⪯ y ⋆. Now we can see that inf(y ⋆) = 0 and ι(y ⋆ ) = τ −p (y1 ) = ι(y).
Because α ⪯ y ⋆ we know also that inf(α) = 0 and s1 ⪯ ι(α) ⪯ ι(y ⋆ ) = ι(y). Recall that the
definition of a black arrow is conjugation by a prefix of ι(y). Since s1 ⪯ ι(y), s1 is a prefix of
ι(y) and therefore s1 is a black arrow. Let k > 1 and we assume that the result is true for
oriented paths of length k − 1. We will show that the result still holds for oriented paths of
length k.
We have just shown that s1 is a black arrow. Now let t = y s1 . Because
t ∈ USS(x) ⊆ SSS(x) we have inf(t) = p, and so
t⋆ = y s1 ∆−p = s1−1 ∆p y1 . . . yr ∆−p ∆p s1 ∆−p = s1−1 y ⋆ τ −p (s1 ).
Now since α = s1 s2 . . . sk ⪯ y ⋆ we also have s2 . . . sk ⪯ s1−1 y ⋆ ⪯ t⋆ . By the induction hypothesis
we know that (s2 , . . . , sk ) is an oriented black path, and the fact that (s1 , s2 , . . . , sk ) is an
oriented black path follows directly.
To show that if α ⪯ (y −1 )⋆ then (s1 , s2 , . . . , sk ) is an oriented grey path is very similar.
It is of note that t ∈ SSS(x) implies that t−1 ∈ SSS(x−1 ) for all t ∈ USS(x). This means that
(t−1 )⋆ = s1−1 (y −1 )⋆ τ p+r (s1 ), where t = y s1 .
We now make a connection of the above result to partial cyclings and partial twisted
decyclings.
Corollary 3.7. [5] Let (s1 , s2 , . . . , sk ) be an oriented path in Γx starting at a vertex y . If
s = s1 s2 . . . sk is simple, then we have one of the following:
1. If s ⪯ ι(y) then (s1 , s2 , . . . , sk ) is an oriented black path.
2. If s ⪯ ι(y −1 ) then (s1 , s2 , . . . , sk ) is an oriented grey path.
Proof. If s is simple, then s ⪯ y ⋆ if and only if s ⪯ ι(y ⋆ ) = ι(y), and s ⪯ (y −1 )⋆ if and only if
s ⪯ ι((y −1 )⋆ ) = ι(y −1 ). The result is then a direct consequence of Proposition 3.1.
It is worth mentioning that the converse is not necessarily true.
Proposition 3.2. [5] Given y ∈ USS(x) and a black [grey] arrow s in Γx starting at y , there
exists an oriented black [grey] path (s1 , s2 , . . . , sk ) in Γx starting and ending at y , such that
s1 = s.
Proof. Let s be a black arrow and p = inf(y). We will use the following fancy trick. If we
consider y y⋆ = τ −p (y) ∈ USS(x), we know by Proposition 3.1 that every decomposition of y ⋆
as a product of minimal simple elements has a corresponding oriented black path.
Furthermore, s is a black arrow, which means s ⪯ ι(y) ⪯ y ⋆ . This tells us that there is a
decomposition of y ⋆ as a product of minimal simple elements where the first factor is s. In
other words, there exists a black path (s1 , s2 , . . . , st ) in Γx , going from y to τ −p (y), such that
s1 = s.
Using the same reasoning as above, we can show there exists an oriented black path
from τ −(m−1)p (y) to τ −mp (y) for all m ≥ 1. When we concatenate these paths we get an
oriented black path from y to τ −mp (y) whose first arrow is s. Recall that there is
some integer e such that ∆e is central. When m = e there is a black path going from y to
τ −ep (y) = y , such that s1 = s.
The proof for grey arrows is almost identical, with the exception that we consider
y (y−1 )⋆ = ((y −1 )(y−1 )⋆ )−1 = (τ p+r (y −1 ))−1 = τ p+r (y) ∈ USS(x), where r = ℓ(y).
Corollary 3.8. [5] Given two elements y and z in a black component Bi [grey component Gi ]
of Γx , there exists an oriented black [grey] path going from y to z .
Proof. Assume that y and z belong to the same black component of Γx . We know that there
exists a black path (s1 e1 , s2 e2 , . . . , st et ) going from y to z . If for some black arrow sj going from
u to v we have ej = −1, then sj−1 goes from v to u, and this makes our path not oriented. From
Proposition 3.2 there is an oriented black path (sj , b2 , . . . , bk ) going from u to itself. This
means that (b2 , . . . , bk ) is an oriented black path going from v to u. This allows us to
substitute (b2 , . . . , bk ) for sj−1 . When we do this for every such j where ej = −1 we will have an
oriented black path going from y to z .
The proof for grey arrows is identical.
Before we can produce an algorithm for the conjugacy problem using black and grey
components we need one last result.
Proposition 3.3. [5] The set of vertices in a black component of Γx is a union of orbits under
cycling. This result does not hold for grey components.
Proof. Using the fact that ι(y) has a decomposition into minimal simple elements, say
ι(y) = s1 s2 . . . sk , we can show that c(y) is a vertex of By . By Corollary 3.7 the
corresponding path (s1 , s2 , . . . , sk ) is black, and it takes y to y ι(y) = c(y). This means that y
and c(y) belong to the same black component of Γx , and therefore the set of vertices in a black
component of Γx is a union of orbits under cycling.
To show that the corresponding result is not true for grey components we can give a
counterexample. From the graph of ΓB in Example 3.2 we can see that the set of vertices in a
grey component is not necessarily a union of orbits under cycling: the grey component
consisting of {B3 , B7 } has only partial twisted decyclings as conjugating elements, and is
therefore not a union of orbits.
We can now give an algorithm that computes the black or grey component of an
element in the USS. The algorithm should seem familiar by this point. It is important to
remember from Theorem 3.7 that if we have some y ∈ USS(x) and an atom a ∈ P , there is a
unique element cy (a) such that y cy (a) ∈ USS(x).
Now if we want to compute Bx we first take x and conjugate it by all its minimal
simple elements which are prefixes of its initial factor. Whenever a new element y appears, we
conjugate y by all simple prefixes of its initial factor. This process continues until no new
elements appear. We are assured that we have found all elements of Bx by Corollary 3.8. We
follow the same process for computing a grey component. We can now state the algorithms to
find the black and grey components of some x ∈ USS(x):
Algorithm 3.4. Input: x ∈ USS(x), Output: Bx
1. Set V = {x} and V ′ = ∅
2. While V ≠ V ′ do
(a) Take y ∈ V ∖ V ′ .
(b) For every atom a ⪯ ι(y) do
i. Compute cy (a).
ii. If cy (a) is a minimal simple element, set V = V ∪ {y cy (a) }, and store cy (a) as
a black arrow going from y to y cy (a) .
(c) Set V ′ = V ′ ∪ {y}.
3. Return V , along with the information for all the black arrows.
Algorithm 3.5. Input: x ∈ USS(x), Output: Gx
1. Set V = {x} and V ′ = ∅
2. While V ≠ V ′ do
(a) Take y ∈ V ∖ V ′ .
(b) For every atom a such that ϕ(y)a is simple do
i. Compute cy (a).
ii. If cy (a) is a minimal simple element, set V = V ∪ {y cy (a) }, and store cy (a) as
a grey arrow going from y to y cy (a) .
(c) Set V ′ = V ′ ∪ {y}.
3. Return V , along with the information for all the grey arrows.
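Both algorithms are instances of a single closure computation, sketched below in Python. The helper arrows(y) is assumed to yield the minimal simple elements labeling the black (respectively grey) arrows at y , that is, the elements cy (a) for atoms a ⪯ ι(y), respectively for atoms a with ϕ(y)a simple; conj is an assumed placeholder as before.

def component(x, arrows, conj):
    # closure of {x} under the black [grey] arrows; the labeled
    # arrows are returned too, so conjugators can be recovered
    V, processed, edges = {x}, set(), []
    while V != processed:
        y = next(iter(V - processed))
        processed.add(y)
        for s in arrows(y):
            z = conj(y, s)
            edges.append((y, s, z))    # an arrow y --s--> z
            V.add(z)
    return V, edges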
3.5.3 The Intersection of the Black and Grey
Components as a Solution to the Conjugacy Problem
We would like to show that we can solve the conjugacy problem by using black and
grey components, thus minimizing the size of the set that we need to compute and decreasing
the computational time needed, hopefully with good bounds on both. We know that two
elements x, y ∈ G are conjugate if and only if USS(x) = USS(y), or in other words if
x ∈ USS(y). We cannot say that if two elements are conjugate, they would both have the
same black or grey components (i.e. Bx = By or Gx = Gy ). The reason is that the black and
grey subgraphs are not necessarily connected, as seen in the previous examples. Therefore we
have the possibility that for conjugate x, y , Bx ∩ By = ∅, and the same possibility for grey
components.
What we will proceed to show is how black and grey components can be used to solve
the conjugacy problem. To do this we will show that every black component intersects every
grey component in Γx . Then we will show that x is conjugate to y if and only if Bx ∩ Gy ≠ ∅.
We can prove this by showing that there is an oriented grey path followed by an oriented black
path (and vice versa) that joins any two elements in an ultra summit set. Then all that is
needed is to find Bx and Gy .
Proposition 3.4. [5] Let (s1 , . . . , sk ) be an oriented path in Γx with starting point y , and let
ℓ(y) > 0. If the associated element s = s1 . . . sk is simple then we have the following:
1. If ϕ(y)s is left weighted then (s1 , . . . , sk ) is an oriented black path.
2. If ϕ(y −1 )s is left weighted then (s1 , . . . , sk ) is an oriented grey path.
Proof. For (1), from Theorem 3.10 we know that since ϕ(y)s is left weighted, s ⪯ ι(y). Then
from Corollary 3.7, since s ⪯ ι(y), (s1 , . . . , sk ) is an oriented black path.
Similarly for (2), we know from Theorem 3.10 that since ϕ(y −1 )s is left weighted,
s ⪯ ι(y −1 ). So then by Corollary 3.7 we have that (s1 , . . . , sk ) is an oriented grey path.
The next result lets us apply this test to elements that are not simple.
Proposition 3.5. [5] Let x ∈ G and let y ∈ USS(x), where ℓ(y) > 0. Now assume that α ∈ P
is such that inf(α) = 0 and y α ∈ USS(x).
1. If ϕ(y)ι(α) is left weighted, then α can be decomposed as α = s1 . . . sk , where
(s1 , . . . , sk ) is an oriented black path.
2. If ϕ(y −1 )ι(α) is left weighted, then α can be decomposed as α = s1 . . . sk where
(s1 , . . . , sk ) is an oriented grey path.
Proof. When α is simple the result follows directly from Proposition 3.4.
Now let ∆p y1 . . . yr be the left normal form of y and let α have the left normal form α1 . . . αt .
We know from the definition of the left normal form that α1 = α ∧ ∆. Then we can use
Theorem 3.6, and we have that y α1 ∈ USS(x). If ϕ(y)ι(α) = yr α1 is left weighted then we can
use Proposition 3.4 to show that α1 can be decomposed as a product of black arrows.
Now let z = y α1 ∈ USS(x) ⊆ SSS(x), let ∆p z1 . . . zr be the left normal form of z , and
let α′ = α2 . . . αt . We will show that the result follows by induction. To do this we will need to
show that zr α2 is left weighted. Before we can say this we need to know what zr looks like.
We know that yr α1 is left weighted and z = α1−1 ∆p y1 . . . yr α1 , so it follows from Proposition
2.1 in [15] that zr = βα1 , where β is a right divisor of yr . This then tells us that
zr α2 = (βα1 )α2 is left weighted, because βα1 and α1 α2 are left weighted. Therefore
ϕ(z)ι(α′ ) = zr α2 is left weighted. We can continue in this manner until we reach αt . We then
have α = s1 . . . st where si = αi for each i and (s1 , . . . , st ) is an oriented black path.
For the second case we use the same procedure to show that α = s1 . . . st where
(s1 , . . . , st ) is an oriented grey path. The only trick is noting that if z = y α1 ∈ USS(x), then
z −1 = (y −1 )α1 ∈ SSS(x−1 ).
We now have the information needed to establish the main result of this section, that is
that any two elements in USS(x) can be joined by a grey path followed by a black path, and
we could also show that those two elements are joined by a black path followed by a grey
path. These two paths could either be distinct or they could be the same path, and it is possible
that some of the paths, grey or black, could be empty.
Theorem 3.11. [5] Given x ∈ G and y, z ∈ USS(x) where ℓ(y) > 0, there exists an oriented
path (g1 , . . . , gs , b1 , . . . , bt ) in Γx going from y to z , such that (g1 , . . . , gs ) is an oriented grey
path and (b1 , . . . , bt ) is an oriented black path (both of which can possibly be empty). Also if
z = y α where α ∈ P , then the paths can be chosen so that α = g1 . . . gs b1 . . . bt .
Proof. We know that there is some α ∈ G such that y α = z since y, z ∈ USS(x). Now let the
left normal form of α be ∆m α1 . . . αt . Recall that there is some power e such that ∆e is
central in G. This implies that for every integer k we have y ∆ke+m α1 ...αt = z , which is useful
because then we can assume that inf(α) > 0, i.e. α ∈ P , because if inf(α) is too small we can
pick a larger value for k .
We will now introduce y (1) = y and let the left normal form of y (1) be ∆p y1 . . . yr . Now
let α(1) = α. If inf(α(1) ) > 0 then ∆ ⪯ α(1) , which implies that every simple element is a
prefix of α(1) . In particular, all grey arrows for y (1) are prefixes of α(1) . Now choose some
g1 ⪯ ∂(yr ) ⪯ α(1) , which is a grey arrow starting at y (1) , and then let y (2) = y g1 and let
α(2) = g1−1 α(1) . This process continues as long as inf(α(i) ) > 0, computing the grey arrows
g1 , . . . , gi such that α = g1 . . . gi α(i+1) and y (i+1) = y g1...gi . Now because the length of the
decompositions of α into a product of simple elements is finite, this process has to terminate.
This leaves us with α = g1 . . . gk−1 α(k) , where (g1 , . . . , gk−1 ) is an oriented grey path and
inf(α(k) ) = 0. Furthermore we have that α(k) is the conjugating element that takes
y (k) = y g1...gk−1 to z.
Now denote the left normal form of y (k) as ∆p y1 (k) . . . yr (k) , and suppose that yr (k) ι(α(k) )
is not left weighted, which implies that ∂(yr (k) ) ∧ ι(α(k) ) ≠ 1. By Theorem 3.8 we know that
this element takes y (k) to some element in USS(x), and this implies that there exists some
minimal simple element gk such that gk ⪯ ∂(yr (k) ) ∧ ι(α(k) ). However we know that this gk is
a grey arrow because gk ⪯ ∂(yr (k) ), and because gk ⪯ ι(α(k) ) we also know that
α(k+1) = gk−1 α(k) ∈ P . This process allows us to continue adding new arrows to the oriented
grey path dividing α until ϕ(y (i) )ι(α(i) ) is left weighted, and we know this process must
terminate as above.
Now we have α = g1 . . . gs α(s+1) , and y (s+1) = y g1...gs ∈ USS(x) where (g1 , . . . , gs ) is a
grey path, inf(α(s+1) ) = 0, and ϕ(y (s+1) )ι(α(s+1) ) is left weighted. Since α(s+1) conjugates
y (s+1) to z ∈ USS(x), it follows from Proposition 3.5 that α(s+1) can be decomposed into a
product of black arrows, hence we have shown there exists a grey path followed by a black
path taking y to z.
We can now show that the result in the previous theorem also lets us construct a black
path followed by a grey path that connects any two elements in USS(x).
Theorem 3.12. [5] Let x ∈ G and let y, z ∈ USS(x) where ℓ(y) > 0. There exists an oriented
path (b1 , . . . , bt , g1 , . . . , gs ) in Γx going from y to z , such that (b1 , . . . , bt ) is an oriented black
path, and (g1 , . . . , gs ) is an oriented grey path (both of which can possibly be empty).
Furthermore if α ∈ P is such that y α = z then the paths can be chosen such that
α = b1 . . . bt g1 . . . gs .
Proof. This proof is much the same as the previous one. There exists an α ∈ P such that y α = z .
We then construct sequences {y (i) }i≥1 and {α(i) }i≥1 , where α(1) = α and y (1) = y . If
inf(α(i) ) > 0, there is a black arrow bi dividing α(i) , and we denote y (i+1) = (y (i) )bi and
α(i+1) = bi−1 α(i) .
If inf(α(i) ) = 0 and ϕ((y (i) )−1 )ι(α(i) ) is not left weighted, then there has to be some
prefix, call it β , such that β ⪯ ι(α(i) ), ϕ((y (i) )−1 )β is simple, and (y (i) )β ∈ USS(x). As an
example we can use Lemma 3.3 to take β as
∂(ϕ((y (i) )−1 )) ∧ ι(α(i) ) ⪯ ∂(ϕ((y (i) )−1 )) = ι(y (i) ). This shows that every minimal simple
element dividing β (and hence α(i) ) is a black arrow. So we have shown that if
ϕ((y (i) )−1 )ι(α(i) ) is not left weighted, we can use a black arrow to decompose α(i) further
until the product in question is left weighted.
When this process produces some t such that ϕ((y (t+1) )−1 )ι(α(t+1) ) is left weighted, we
can use Proposition 3.5 to decompose α(t+1) as a product of grey arrows, showing that there
exists a black path followed by a grey path taking y to z .
With these two results we can show two more results that establish the basis for
another algorithm to solve the conjugacy problem, but this time using black and grey
components of Γx .
Corollary 3.9. Let x ∈ G and y, z ∈ USS(x). We have that By ∩ Gz ≠ ∅.
Proof. From the previous theorem we know that there is an oriented path
(b1 , . . . , bt , g1 , . . . , gs ) going from y to z , where (b1 , . . . , bt ) is a black path and (g1 , . . . , gs ) is a
grey path. Now define v = y b1 ...bt = z gs−1 ...g1−1 . Notice that v belongs to both the same black
component as y and the same grey component as z , showing that By ∩ Gz is nonempty.
Corollary 3.10. Given x, y ∈ G, let x′ ∈ USS(x) and y ′ ∈ USS(y). We can show that x and y
are conjugate if and only if Bx′ ∩ Gy′ ≠ ∅.
Proof. We know that x and y are conjugate if and only if their ultra summit sets coincide. First
assume that x is conjugate to y , so x′ , y ′ ∈ USS(x). Then we know from Corollary 3.9 that
Bx′ ∩ Gy′ ≠ ∅.
Now assume that there is some v ∈ Bx′ ∩ Gy′ . We know that v is conjugate to x′ and y ′ .
We also know that x is conjugate to x′ and y is conjugate to y ′ , therefore x is conjugate to y
because conjugacy is transitive.
We now have all the results needed to state the following algorithm:
Algorithm 3.6. Input: x, y ∈ G, Output: α ∈ G such that xα = y , or 'Failed' if there is no
element conjugating x to y .
1. Compute x′ ∈ USS(x) using cyclings and decyclings, and the element a such that
xa = x′ .
2. Compute y ′ ∈ USS(y) using cyclings and decyclings, and the element b such that y b = y ′ .
3. Use Algorithm 3.4 to compute Bx′ , and for each vertex v in the black component,
record the element c(x′ ,v) , the element conjugating x′ to v .
4. Use Algorithm 3.5 to compute Gy′ , and for each vertex v in the grey component, record
the element c(y′ ,v) , the element conjugating y ′ to v .
5. If Bx′ ∩ Gy′ = ∅, return 'Failed'.
6. If Bx′ ∩ Gy′ ≠ ∅, choose some v in the intersection and return α = ac(x′ ,v) c(y′ ,v)−1 b−1 .
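In Python the whole of Algorithm 3.6 reduces to an intersection test, as in the sketch below. Here to_uss_with_conj(x) is an assumed helper returning (x′ , a) with x a = x′ ∈ USS(x); black and grey are assumed to wrap Algorithms 3.4 and 3.5 so that each returns a dictionary mapping every vertex v of the component to the recorded conjugating element; and group elements are assumed to support * for multiplication, with inv giving inverses.

def conjugator_via_components(x, y, to_uss_with_conj, black, grey, inv):
    xp, a = to_uss_with_conj(x)    # x^a = x'
    yp, b = to_uss_with_conj(y)    # y^b = y'
    bx = black(xp)                 # v -> c_(x',v) for v in B_{x'}
    gy = grey(yp)                  # v -> c_(y',v) for v in G_{y'}
    common = set(bx) & set(gy)
    if not common:
        return None                # 'Failed': x and y are not conjugate
    v = common.pop()
    # x^{a c_(x',v)} = v = y^{b c_(y',v)}, so the element below
    # conjugates x to y
    return a * bx[v] * inv(gy[v]) * inv(b)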
This algorithm shows that instead of computing at most all of USS(x) we need only
compute one black component and one grey component, which is in most cases much smaller
than the whole of USS(x) [5]. This algorithm also solves the CDP and the CSP by providing
the conjugating element α. Once again though the problem with this approach is that it uses
ultra summit sets, and therefore is subject to the same limitations discussed previously. For
example, for one specific type of elements in Bn , namely periodic braids, Bx′ = USS(x) or
Gy′ = USS(y) which is problematic. There are techniques described to try and avoid this
problem in Bn , but this is enough to show that there is still further work to do for the general
case in Garside groups.
3.6 THE SET OF SLIDING CIRCUITS
To further reduce the size of the subset of the conjugacy class, the authors of [16]
introduce the operation of cyclic sliding, which leads to the set of sliding circuits of x
(SC(x)). The advantage of SC(x) over USS(x) is that it is much smaller for several
different Garside groups. There are several tables in Section 5 of [16] that show SC(x) is
several times smaller for elements of Bn with length 1.
From [16] we have the following Theorem that nicely summarizes the relationship
between all of our subsets of the conjugates of x.
Theorem 3.13. Given x ∈ G we have the following:
SC(x) ⊆ USS(x) ⊆ SSS(x) ⊆ SS(x).
However, this theorem also shows the possibility that SC(x) could be exponentially
bounded. In fact for an entire class of elements in Bn (periodic braids), ∣SC(x)∣ = 2n−2 − 2,
showing that more work needs to be done in finding a polynomial bound for the complexity of
solving the conjugacy problem in Garside groups.
But since SC(x) still represents progress in the right direction, we will briefly cover the main
ideas and concepts in solving the conjugacy problem with SC(x).
Instead of conjugating elements by initial and final factors as in the past few summit
sets, we will define a new element to conjugate by.
Definition 3.12. Given x ∈ G we define the preferred prefix of x to be
p(x) = ι(x) ∧ ι(x−1 ) = (x∆− inf (x) ) ∧ (x−1 ∆sup(x) ) ∧ ∆.
The preferred suffix is
p↰ (x) = (∆− inf(x) x) ∧↰ (∆sup(x) x−1 ) ∧↰ ∆
where ∧↰ is the greatest common divisor with respect to ⪰, i.e. the greatest common right
divisor.
Definition 3.13. For x ∈ G we define the cyclic sliding s(x) of x to be xp(x) , i.e.
s(x) = p(x)−1 xp(x).
Because of the definition of p(x) we can see that p(x) = p(x−1 ), which implies that
s(x−1 ) = (s(x))−1 . Clearly it also follows that τ (p(x)) = p(τ (x)).
Like cycling and decycling, repeated cyclic sliding increases the infimum and decreases the
supremum. Eventually repeated cyclic sliding enters a cyclic orbit, similar to the ultra
summit set, and like the ultra summit set this orbit contains elements with minimal length.
One nice result that we get from cyclic sliding is that there is some integer m such that
sm (x) ∈ SSS(x), where m ≤ (r − 1)(∥∆∥ − 1) with ℓ(x) = r. This gives us an alternative way
to find a conjugate of x in the super summit set.
We can now define the analogue of the ultra summit set.
Definition 3.14. Given x ∈ G we say that x belongs to a sliding circuit if sm (x) = x for some
positive integer m.
Definition 3.15. Given x ∈ G we define the set of sliding circuits of x, denoted SC(x), as the
set of all conjugates of x belonging to a sliding circuit.
Like Γx for USS(x), SC(x) also has a directed graph that will aid in our aims.
Definition 3.16. Given x ∈ G, the sliding circuits graph SCG(x) of x is the directed graph
whose set of vertices is SC(x) and whose arrows correspond to conjugating elements in the
following way: there is an arrow with starting point u ∈ SC(x) and ending point v ∈ SC(x),
labeled by a non-identity simple element s, if and only if:
1. us = v ;
2. s is an indecomposable conjugator, meaning that s ≠ 1 and there is no element t
with 1 ≺ t ≺ s such that ut ∈ SC(x).
We will now use these ideas to define two operations that will be necessary for the
new algorithm.
Definition 3.17. Given x, α ∈ G, the transport of α at x under cyclic sliding is
α(1) = p(x)−1 αp(xα ).
Graphically, α(1) is the element that makes the square with corners x, s(x), xα , and s(xα )
commute: p(x) takes x to s(x), α takes x to xα , p(xα ) takes xα to s(xα ), and α(1) takes
s(x) to s(xα ).
Additionally we define α(i) = (α(i−1) )(1) ; that is, α(i) is the transport of α(i−1) at
si−1 (x). Also we define α(0) = α.
We also have another element we will conjugate by to find elements of SC(x), which
we will call the pullback. For our purposes we will only need to be able to find the pullback
in the following specific case.
Proposition 3.6. Let x ∈ G, z ∈ SC(x), y = s(z), and let s ∈ P be such that y s ∈ SSS(x). The
pullback of s at y is
b(1) = (p(z)sp↰ (y s )−1 ) ∨ 1.
So b(1) = β ∨ 1, where β = p(z)sp↰ (y s )−1 ∈ G is the element that makes the corresponding
diagram commute: conjugating by p(z) and then by s takes z to y s , and this coincides with
conjugating by β and then by p↰ (y s ).
We can now state the algorithm that will solve the conjugacy problem in G using SC(x).
However, to use the algorithm it is assumed we know:
1. A list containing all atoms of G.
2. Like before, given an atom a and a simple element s, a function that determines
if a ⪯ s and if so computes a−1 s.
3. Similarly, given an atom a and a simple element s, a function that
determines if a ⪰ s and if so computes sa−1 .
If we have the above items, we can use the following three algorithms to solve the
conjugacy problem. Algorithm 3.7 finds an element in SC(x). Algorithm 3.8 finds the
conjugating elements connecting elements in SC(x), (i.e. find the arrows in SCG(x) that
begin at a given vertex). Finally Algorithm 3.9 uses the previous two algorithms to solve the
conjugacy problem.
Algorithm 3.7. Input: x ∈ G, Output: x̃ ∈ SC(x) and c such that xc = x̃.
1. Set x̃ = x, c = 1 and T = ∅.
2. While x̃ ∉ T , set T = T ∪ {x̃}, c = c p(x̃) and x̃ = s(x̃).
3. Set y = s(x̃) and d = p(x̃).
4. While y ≠ x̃, set d = d p(y) and y = s(y).
5. Return x̃ and c = cd−1 .
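A Python sketch of the first phase of Algorithm 3.7 is given below: iterate cyclic sliding, accumulating conjugators, until an element repeats; the first revisited element lies on a sliding circuit and hence in SC(x). The placeholder helpers pref (computing p(y) = ι(y) ∧ ι(y −1 )), slide (computing s(y)), and the group identity are assumed, elements are assumed hashable, and * is assumed to multiply group elements.

def find_sliding_circuit(x, slide, pref, identity):
    # iterate s(.) until a repetition: the revisited element is in SC(x)
    seen = {}              # element -> conjugator c with x^c = element
    y, c = x, identity
    while y not in seen:
        seen[y] = c
        c = c * pref(y)    # accumulate the preferred prefixes
        y = slide(y)       # s(y) = y^{p(y)}
    return y, seen[y]      # y in SC(x), and x^{seen[y]} = y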
Algorithm 3.8. Input: v ∈ SC(x), Output: The set Av containing the arrows in the graph
SCG(x) starting at v .
1. Compute the minimal integer N > 0 such that sN (v) = v .
2. Form the list a1 , . . . , aλ , the atoms of G. Set Av = ∅, and Atoms = ∅.
3. For t = 1, . . . , λ do:
(a) Set s = at .
(b) While ℓ(v s ) > ℓ(v), set s = s (1 ∨ (v s )−1 ∆inf(v) ∨ v s ∆− sup(v) ).
(c) If at ⪯ p(v), then compute the iterated N -pullbacks b, b(N ) , b(2N ) , . . . until the first
repetition, b(rN ) , and set b = b(rN ) .
(d) Compute the iterated N -transports b, b(N ) , b(2N ) , . . . until we have the first
repetition, b(jN ) . Then let i < j such that b(iN ) = b(jN ) .
(e) If at ⪯ b(mN ) for some i < m < j , then do:
i. If ak ⪯̸ b(mN ) for all k = 1, . . . , λ such that either ak ∈ Atoms or k > t, then set
Av = Av ∪ {b(mN ) } and Atoms = Atoms ∪ {at }.
4. Return Av
Algorithm 3.9. Input: x, y ∈ G, Output: If x and y are conjugate, the element c such that
xc = y , else Failed.
1. Use Algorithm 3.7 to compute x̃ ∈ SC(x) and ỹ ∈ SC(y), along with elements c1 , c2
such that xc1 = x̃ and y c2 = ỹ .
2. Set V = {x̃}, V ′ = {x̃} and cx̃ = 1.
3. While V ′ ≠ ∅, do:
(a) take v ∈ V ′ .
(b) Use Algorithm 3.8 to compute Av .
(c) For every b ∈ Av , do:
i. If v b = ỹ , then set cỹ = cv b and return ‘x and y are conjugate and c = c1 cỹ c2−1 ’.
ii. Else if v b ∉ V , then set cvb = cv b, V = V ∪ {v b }, and V ′ = V ′ ∪ {v b }.
(d) Remove v from V ′ .
4. Return ‘Failed’.
A very well written description of these algorithms is found in [16], along with a
detailed examination of the complexity of each algorithm. For our aims it suffices to state the
following:
Theorem 3.14. [16] Let x, y ∈ G, where k is the maximum number of factors in x or y . Let T
be the distance between repetitions in cyclic sliding (i.e. 0 ≤ i < j ≤ T such that si = sj ), let M
be the maximum number of cyclic slidings needed to enter a circuit, and let R be the distance
between repetitions of transports (i.e. 0 ≤ i < j ≤ R such that b(iN ) = b(jN ) ). Then the
complexity of the algorithm for solving the conjugacy problem using SC(x) is
O(Cλk∥∆∥ (k + T + ∣SC(x)∣λ(∥∆∥ + RM))).
Here we can see a problem that should be all too familiar at this point. The only known
bound for ∣SC(x)∣ is the size of SSS(x), which is still exponential. In several computer
experiments the algorithm using sliding circuits has proved to be better than previous
algorithms for certain types of elements in Bn (rigid elements), but for other types of elements
in Bn (periodic elements) the set of sliding circuits grows exponentially in n. This shows that a
polynomial bound for ∣SC(x)∣ in k and ∥∆∥ is unrealistic for the general case. Finding
polynomial bounds for T , M, and R and understanding ∣SC(x)∣ are still open questions worth
further study.
CHAPTER 4
CONCLUDING REMARKS
4.1 INTRODUCTION
This chapter will propose other topics to study and serve as a place for the author's
reflections on the topic.
4.2 OTHER TOPICS TO STUDY
After surveying the literature for this thesis, there are a few questions that have come
to mind, which I will include for the reader's consideration.
With the current amount of literature and the apparent interest in the subject, the
author personally believes that an efficient solution can be reached in the future, especially for
Bn . Much effort has been exerted into dividing Bn into three separate types of elements, and
then solving the conjugacy problem for each. While there is a large amount of study and
research into this trichotomy, an efficient solution is only known for one type, and the
problems existing for the other two types seem achievable. However we don’t necessarily
have the same partitioning of elements in Garside groups.
In the spirit of new cryptosystems, are there other groups similar to Garside groups
that still have undecided conjugacy problems where the concepts in this paper might apply?
Of all the problems that still exist in solving the conjugacy problem, perhaps the most
important would be to find a polynomial bound on the size of the subsets of Garside’s Summit
Set. Since the vast majority of computer simulations seem to point toward this, it seems that
some bound does exist. There seems to be a gap in examining the size of these sets, and one
would naturally ask, “What techniques are there that might help in bounding these sets?”
Lastly, almost without exception, all of the examples in the literature come from Braid
groups. While the feasibility of a braid based cryptosystem is a relevant question, I can’t help
but wonder if a more generalized approach would yield different results. In particular, all of
the improvements of Garside’s original algorithm have the same foundation, with similar
limitations, and seem to require that the Garside groups in question have to be ‘square free’,
as discussed earlier. This seems as if it could be a small subset of all Garside groups. Would
an examination of more generalized Garside groups inspire new methods, or different
refinements for the conjugacy problem? Admittedly this would be difficult since there are not
required relations in a general Garside group, so the software would of necessity be
particularly clever.
While on the subject of the so-called ‘square free’ Garside groups, an interesting research topic would be to examine the results in [11] and see whether they generalize to all Garside groups.
It seems that the current methods look at Garside groups through “braid-colored” glasses. It is the author's hunch that an examination of other Garside structures in other Garside groups will aid in finding an efficient solution to the conjugacy problems in Garside groups.
4.3 FINAL THOUGHTS
At the conclusion of this writing, what has impressed the author most is the amount of effort that goes into finding new, innovative solutions: literally hundreds of pages of publications from all over the world, all trying to solve one problem. The level of collaboration and communication among the authors in the literature is impressive. It seems that a collective of people have pledged victory against the conjugacy problem in Garside groups, however long and hard the road may be.
BIBLIOGRAPHY
[1] E. Artin, Theorie der Zöpfe, Abh. Math. Sem. Univ. Hamburg, 4 (1925), pp. 47–72.
[2] J. Birman, K. H. Ko, and S. J. Lee, A new approach to the word and conjugacy problems in the braid groups, Adv. Math., 139 (1998), pp. 322–353.
[3] J. S. Birman, V. Gebhardt, and J. González-Meneses, Conjugacy in Garside groups. I. Cyclings, powers and rigidity, Groups Geom. Dyn., 1 (2007), pp. 221–279.
[4] J. S. Birman, V. Gebhardt, and J. González-Meneses, Conjugacy in Garside groups. III. Periodic braids, J. Algebra, 316 (2007), pp. 746–776.
[5] J. S. Birman, V. Gebhardt, and J. González-Meneses, Conjugacy in Garside groups. II. Structure of the ultra summit set, Groups Geom. Dyn., 2 (2008), pp. 13–61.
[6] J. S. Birman, K. H. Ko, and S. J. Lee, The infimum, supremum, and geodesic length of a braid conjugacy class, Adv. Math., 164 (2001), pp. 41–56.
[7] J. G. Boiser, Computational Problems in the Braid Group, Master's thesis, San Diego State University, 2009.
[8] M. Dehn, Über unendliche diskontinuierliche Gruppen, Math. Ann., 71 (1911), pp. 116–144.
[9] P. Dehornoy, Braid-based cryptography, in Group theory, statistics, and cryptography, vol. 360 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2004, pp. 5–33.
[10] P. Dehornoy and L. Paris, Gaussian groups and Garside groups, two generalisations of Artin groups, Proc. London Math. Soc. (3), 79 (1999), pp. 569–604.
[11] E. A. El-Rifai and H. R. Morton, Algorithms for positive braids, Quart. J. Math. Oxford Ser. (2), 45 (1994), pp. 479–497.
[12] N. Franco and J. González-Meneses, Conjugacy problem for braid groups and Garside groups, J. Algebra, 266 (2003), pp. 112–132.
[13] D. Garber, Braid group cryptography, in Braids, vol. 19 of Lect. Notes Ser. Inst. Math. Sci. Natl. Univ. Singap., World Sci. Publ., Hackensack, NJ, 2010, pp. 329–403.
[14] F. A. Garside, The braid group and other groups, Quart. J. Math. Oxford Ser. (2), 20 (1969), pp. 235–254.
[15] V. Gebhardt, A new approach to the conjugacy problem in Garside groups, J. Algebra, 292 (2005), pp. 282–302.
[16] V. Gebhardt and J. González-Meneses, Solving the conjugacy problem in Garside groups by cyclic sliding, J. Symbolic Comput., 45 (2010), pp. 629–656.
[17] C. Kassel and V. Turaev, Braid groups, vol. 247 of Graduate Texts in Mathematics, Springer, New York, 2008.
[18] K. H. Ko, S. J. Lee, J. H. Cheon, J. W. Han, J.-S. Kang, and C. Park, New public-key cryptosystem using braid groups, in Advances in cryptology—CRYPTO 2000 (Santa Barbara, CA), vol. 1880 of Lecture Notes in Comput. Sci., Springer, Berlin, 2000, pp. 166–183.
[19] N. Koblitz and A. J. Menezes, A survey of public-key cryptosystems, SIAM Rev., 46 (2004), pp. 599–634 (electronic).
[20] J. McCammond, An introduction to Garside structures. Retrieved Sep 18, 2013 from http://www.math.ucsb.edu/~mccammon/papers/intro-garside.pdf.
[21] M. Picantin, Petits Groupes Gaussiens, PhD thesis, Université de Caen, http://www.liafa.jussieu.fr/~picantin/publi.html, 2000.
[22] M. Prasolov, Small braids with large ultra summit set, Mat. Zametki, 89 (2011), pp. 577–588.
[23] V. Shpilrain and A. Ushakov, The conjugacy search problem in public key cryptography: unnecessary and insufficient, Appl. Algebra Engrg. Comm. Comput., 17 (2006), pp. 285–289.
[24] M. Sipser, Introduction to the Theory of Computation, PWS, Boston, MA, 1997.