Classic Markov Chains
Many probabilistic models are actually Markov chains. Recall that the simple random walk models two gamblers A and B, playing with initial assets a and b (the same letter is used for a player and its initial asset for convenience). Each time, A wins one dollar from B with probability p, and loses one dollar to B with probability q = 1 − p. Let X_t be A's asset at time t, and note that when 0 < k < a + b we have

ℙ{X_{t+1} = k + 1 | X_t = k} = p
ℙ{X_{t+1} = k − 1 | X_t = k} = q

In addition, when X_t = 0 we have ℙ{X_{t+1} = 0 | X_t = 0} = 1, and when X_t = a + b we have ℙ{X_{t+1} = a + b | X_t = a + b} = 1.
Clearly X_t satisfies the Markov property, so (X_t) is a Markov chain with state space being all possible amounts of asset A can have, i.e. X_t ∈ Ω = {0, 1, 2, …, a + b}, distributed according to the concentrated initial distribution μ s.t.

μ(k) = 1 if k = a, and μ(k) = 0 if k ≠ a,

and transition matrix P s.t.

P(i, j) = 1 if i = j = 0 or i = j = a + b
P(i, j) = p if i ≠ 0, a + b and j = i + 1
P(i, j) = q if i ≠ 0, a + b and j = i − 1
P(i, j) = 0 otherwise
This Markov chain is reducible (its graph is not strongly connected; for instance, no other state can be reached from the absorbing state 0). The recurrent absorbing states 0 and a + b are aperiodic thanks to their self-loops, while the transient interior states have period 2. An example chain with a = 3 and b = 2 and its transition matrix are illustrated below.
Figure 6 The graph representation of the Markov chain of simple random walk on states {0, 1, 2, 3, 4, 5}, with edge weights omitted for clarity.

P =
[ 1  0  0  0  0  0 ]
[ q  0  p  0  0  0 ]
[ 0  q  0  p  0  0 ]
[ 0  0  q  0  p  0 ]
[ 0  0  0  q  0  p ]
[ 0  0  0  0  0  1 ]
Note the game is over at time t when X_t = 0 (A has no asset to continue the game) or X_t = a + b (B has no asset to continue the game). In the Markov chain we use the absorbing states 0 and a + b to mean the game is over.
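A short simulation can make the absorbing behaviour concrete. The following minimal Python sketch (the values a = 3, b = 2, p = 0.55 and the number of trials are my own choices for illustration) runs the chain until it hits an absorbing state, estimates the probability that A ends up with all a + b dollars, and compares the estimate with the classical gambler's-ruin formula.

import random

def gamblers_ruin(a, b, p, trials=100_000, seed=0):
    # Simulate the simple random walk with absorbing barriers 0 and a+b.
    # Returns the fraction of trials in which A ends with all a+b dollars.
    rng = random.Random(seed)
    n, wins = a + b, 0
    for _ in range(trials):
        x = a                                # A's initial asset
        while 0 < x < n:                     # play until an absorbing state is hit
            x += 1 if rng.random() < p else -1
        wins += (x == n)
    return wins / trials

a, b, p = 3, 2, 0.55
q = 1 - p
exact = (1 - (q / p) ** a) / (1 - (q / p) ** (a + b))   # classical formula (valid for p != 1/2)
print("simulated:", gamblers_ruin(a, b, p), "exact:", round(exact, 4))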
Coupon Collection. One person wants to collect n types of coupons of a product from a daily (newspaper) to get a free one. Let X_t be the number of distinct coupons the person has at day t, so X_t ∈ {0, 1, …, n}. For simplicity, suppose the daily uniformly chooses a random type every day. We can see that if k distinct coupons have been collected at some day t, then the next day the daily publishes a coupon already collected with probability k/n, and publishes a coupon not yet collected with probability (n − k)/n, regardless of the values of X_0, …, X_{t−1}, i.e.

ℙ{X_{t+1} = k + 1 | X_t = k, X_{t−1} = x_{t−1}, …, X_0 = x_0} = ℙ{X_{t+1} = k + 1 | X_t = k} = (n − k)/n
ℙ{X_{t+1} = k | X_t = k, X_{t−1} = x_{t−1}, …, X_0 = x_0} = ℙ{X_{t+1} = k | X_t = k} = k/n
Clearly X_t satisfies the Markov property and it is a Markov chain with initial distribution concentrated at state 0, meaning no coupon has been collected at the beginning. The chain is reducible (not strongly connected; check state 0, which can never be re-entered) and aperiodic (due to the self-loops). Its transition matrix and graph representation are illustrated below when n = 4.
Figure 7 The graph representation of the Markov chain of coupon collection, with edge weights omitted for clarity.

P =
[ 0  1    0    0    0   ]
[ 0  1/4  3/4  0    0   ]
[ 0  0    2/4  2/4  0   ]
[ 0  0    0    3/4  1/4 ]
[ 0  0    0    0    1   ]
In coupon collection, the stopping time τ is defined as the time when all n types of coupons are collected. One interesting problem is the expected stopping time E[τ]. Let τ_k be a RV representing the time when the k-th distinct type of coupon is collected, so that τ = τ_n and τ_1 = 1, and observe that

τ = τ_1 + (τ_2 − τ_1) + (τ_3 − τ_2) + ⋯ + (τ_n − τ_{n−1}),

where each τ_{k+1} − τ_k for k ≥ 1 is the time between collecting the k-th distinct coupon and collecting the (k+1)-th distinct coupon. Note that τ_1 can be viewed as geometrically distributed with success probability 1, and each τ_{k+1} − τ_k, k = 1, 2, …, n − 1, is geometrically distributed with success probability (n − k)/n. Recall that if X ~ geometric(p) then E[X] = 1/p, so we have

E[τ] = E[τ_1] + E[τ_2 − τ_1] + ⋯ + E[τ_n − τ_{n−1}] = n/n + n/(n − 1) + ⋯ + n/1 = n ∑_{k=1}^{n} 1/k.
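The formula E[τ] = n ∑_{k=1}^{n} 1/k is easy to check empirically. Below is a small Python sketch (n = 10 and the number of trials are my own choices) that simulates the daily coupon process and compares the sample mean of τ with n·H_n.

import random

def collect_all(n, rng):
    # Return the day on which the n-th distinct coupon type is collected.
    seen, t = set(), 0
    while len(seen) < n:
        t += 1
        seen.add(rng.randrange(n))           # the daily publishes a uniformly random type
    return t

n, trials, rng = 10, 50_000, random.Random(1)
mean_tau = sum(collect_all(n, rng) for _ in range(trials)) / trials
harmonic = sum(1 / k for k in range(1, n + 1))
print("simulated E[tau]:", round(mean_tau, 3), "n * H_n:", round(n * harmonic, 3))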
Ehrenfest Urn Model & Lumped Markov Chain. Let A, B be two urns that contain n balls in total. Each time one ball is randomly chosen and then moved to the other urn, i.e. if the chosen ball is currently in urn A, it is moved to urn B, and likewise if it is currently in urn B. Let X_t be the number of balls urn A has at time t, and suppose the ball choice is uniform; then

ℙ{X_{t+1} = k + 1 | X_t = k, X_{t−1} = x_{t−1}, …, X_0 = x_0} = ℙ{X_{t+1} = k + 1 | X_t = k} = (n − k)/n
ℙ{X_{t+1} = k − 1 | X_t = k, X_{t−1} = x_{t−1}, …, X_0 = x_0} = ℙ{X_{t+1} = k − 1 | X_t = k} = k/n
which looks somewhat similar to coupon collection but is actually different. X_t satisfies the Markov property and hence it is a Markov chain. The transition matrix and graph representation are illustrated below when n = 4. This Markov chain is named the Ehrenfest urn and is intended to model the exchange of gas molecules between two containers.
Figure 8 The graph representation of the Markov chain of Ehrenfest urn model on states {0, 1, 2, 3, 4}, with edge weights omitted for clarity.

P =
[ 0    1    0    0    0   ]
[ 1/4  0    3/4  0    0   ]
[ 0    2/4  0    2/4  0   ]
[ 0    0    3/4  0    1/4 ]
[ 0    0    0    1    0   ]
This chain is irreducible since its graph representation is clearly strongly connected. However, it is not hard to see that it is periodic and every state has period 2, by observing that any cycle must contain the same number of “rightward” edges and “leftward” edges.
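The period-2 claim can also be verified numerically. The sketch below (n = 4, my own choice) builds the Ehrenfest transition matrix and prints, for t = 1, …, 8, whether all diagonal entries of P^t vanish; they do exactly for odd t, so a return to any state can only happen after an even number of steps.

import numpy as np

def ehrenfest_matrix(n):
    # Transition matrix of the Ehrenfest urn on states 0..n.
    P = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        if k < n:
            P[k, k + 1] = (n - k) / n        # a ball moves from urn B to urn A
        if k > 0:
            P[k, k - 1] = k / n              # a ball moves from urn A to urn B
    return P

n = 4
P = ehrenfest_matrix(n)
Pt = np.eye(n + 1)
for t in range(1, 9):
    Pt = Pt @ P
    print("t =", t, " all return probabilities zero:", bool(np.allclose(np.diag(Pt), 0)))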
We now introduce a seemingly different problem which turns out to be connected with the Ehrenfest urn by a technique called lumping or projection. Recall that the n-dimensional hypercube is a graph G = (V, E) s.t. V = {x = (x_1, …, x_n) : x_i = 0 or 1} and (x, y) ∈ E iff they differ in exactly one dimension, or equivalently are at distance 1. Check the following:
1) If two vertexes differ in two or more dimensions, it is easy to see their distance is at least √2 and they cannot be adjacent.
2) A vertex x in an n-dimensional hypercube is adjacent to n other vertexes, since we can flip the value of any one of the n coordinates of x to derive one of its adjacent vertexes.
The concept of a hypercube is a generalization of the 2-dimensional square and the 3-dimensional cube. A 2-dimensional or 3-dimensional hypercube is simply the unit square or unit cube located at the origin, and we can see the coordinates of adjacent vertexes differ in only one dimension. Hypercubes of dimension higher than 3 cannot be trivially visualized.
Figure 9 The 3-dimensional cube located at the origin is the 3-dimensional hypercube. Observe that every vertex is of degree three and the coordinates of adjacent vertexes differ in only one dimension. For example, (1,1,0) is adjacent to (0,1,0), (1,0,0), and (1,1,1), and it differs from these three neighbours in the first, second, and third dimension respectively.
Note G is bi-directional and n-regular (recall that a bi-directional graph is d-regular if each vertex is of degree d), and we can define a simple random walk on G by assuming the random walk has an equal chance to reach each of the adjacent vertexes no matter what the current vertex is. The random walk can be realized in a simple way: every step, one coordinate of the current vertex is randomly chosen with equal chance, and then its value is flipped. Take Figure 9 for example: suppose the current vertex is (1,1,0), then randomly choose one of the x, y, z coordinates. If the y-coordinate is chosen, then its value is flipped from 1 to 0 and the random walk arrives at the next vertex (1,0,0).
Now the analogy between the random walk on G and the Ehrenfest urn emerges. There are two buckets holding n balls in total, with all balls in the first bucket labelled 1 and all balls in the second bucket labelled 0, analogous to the coordinates with value 1 or 0. Each time a ball is chosen, analogous to one coordinate being chosen; then the ball is moved to the other bucket and re-labelled to be consistent with the new bucket, analogous to the value of the chosen coordinate being flipped. Let X_t be the total number of 1s at time t; then (X_t) is exactly the same Markov chain as the Ehrenfest urn.
However, if we look carefully, we will notice (X_t) is a different Markov chain from the random walk on G: the former has state space {0, 1, 2, …, n} while the latter has the vertexes as its state space. We now reveal their relation.
Theorem 30 Let ~ be an equivalence relation on the state space Ω, and let ℙ(x, [y]) = ℙ{X_{t+1} ∈ [y] | X_t = x}. If for any x, x′ s.t. x ~ x′ we have ℙ(x, [y]) = ℙ(x′, [y]), then ℙ([x], [y]) = ℙ(x, [y]), where ℙ([x], [y]) denotes ℙ{X_{t+1} ∈ [y] | X_t ∈ [x]}.

ℙ([x], [y]) = ℙ{X_{t+1} ∈ [y] | X_t ∈ [x]}
= ℙ{X_{t+1} ∈ [y], X_t ∈ [x]} / ℙ{X_t ∈ [x]}
= ∑_{z ∈ [x]} ℙ{X_{t+1} ∈ [y] | X_t = z} ℙ{X_t = z} / ∑_{z ∈ [x]} ℙ{X_t = z}
= ℙ{X_{t+1} ∈ [y] | X_t = x} ∑_{z ∈ [x]} ℙ{X_t = z} / ∑_{z ∈ [x]} ℙ{X_t = z}
= ℙ{X_{t+1} ∈ [y] | X_t = x}

In the calculation, the equality

ℙ{X_{t+1} ∈ [y], X_t ∈ [x]} = ∑_{z ∈ [x]} ℙ{X_{t+1} ∈ [y] | X_t = z} ℙ{X_t = z}

holds by the law of total probability, and the equality

∑_{z ∈ [x]} ℙ{X_{t+1} ∈ [y] | X_t = z} ℙ{X_t = z} = ℙ{X_{t+1} ∈ [y] | X_t = x} ∑_{z ∈ [x]} ℙ{X_t = z}

holds since ℙ{X_{t+1} ∈ [y] | X_t = z} = ℙ{X_{t+1} ∈ [y] | X_t = x} for all z ∈ [x].
Given a Markov chain characterized by initial distribution μ and transition matrix P, and an equivalence relation ~ defined on the state space Ω that induces equivalence classes Ω_1, Ω_2, …, Ω_m, we may define a new state space Ω^(~) = {Ω_1, Ω_2, …, Ω_m}, a new initial distribution

μ^(~) = (∑_{x ∈ Ω_1} μ(x), …, ∑_{x ∈ Ω_m} μ(x)),

and a new transition matrix P^(~) s.t.

P^(~)(Ω_i, Ω_j) = ∑_{y ∈ Ω_j} P(x, y), where x is any element of Ω_i,

and it is obvious that ∑_j P^(~)(Ω_i, Ω_j) = ∑_j ∑_{y ∈ Ω_j} P(x, y) = 1. Now Ω^(~), μ^(~), P^(~) altogether define a new Markov chain called the lumped Markov chain based on the original Markov chain defined by Ω, μ, P, with respect to the relation ~.
Now let's go back to the hypercube. Given an n-dimensional vertex x of the hypercube, define the Hamming weight as the number of 1s in the coordinates, i.e. h(x) = ∑_i x_i, and define the equivalence relation x ~ x′ iff h(x) = h(x′). It is not hard to see the Ehrenfest urn is the lumped Markov chain of the simple random walk on G with respect to ~: let Ω_0, Ω_1, …, Ω_n be the equivalence classes on the state space s.t. h(x) = k for any x ∈ Ω_k, where k = 0, 1, …, n. Then for any x ∈ Ω_k,

ℙ(x, Ω_{k+1}) = (n − h(x))/n = (n − k)/n,  ℙ(x, Ω_{k−1}) = h(x)/n = k/n,

which depends on x only through h(x) and therefore satisfies the above-discussed condition for lumping.
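The lumping can also be verified mechanically. The sketch below (n = 4, my own choice) builds the transition matrix of the simple random walk on {0,1}^n, lumps states by Hamming weight, and checks that the lumped matrix coincides with the Ehrenfest urn matrix.

import itertools
from math import comb
import numpy as np

def hypercube_walk(n):
    # Transition matrix of the simple random walk on {0,1}^n.
    verts = list(itertools.product((0, 1), repeat=n))
    index = {v: i for i, v in enumerate(verts)}
    P = np.zeros((2 ** n, 2 ** n))
    for v in verts:
        for d in range(n):                   # flip one uniformly chosen coordinate
            w = list(v)
            w[d] ^= 1
            P[index[v], index[tuple(w)]] = 1 / n
    return verts, P

def lump_by_weight(verts, P, n):
    # Lumped matrix: classes are Hamming weights 0..n; average over representatives.
    Q = np.zeros((n + 1, n + 1))
    for i, v in enumerate(verts):
        for j, w in enumerate(verts):
            Q[sum(v), sum(w)] += P[i, j]
    for k in range(n + 1):
        Q[k] /= comb(n, k)                   # each weight class has C(n, k) representatives
    return Q

n = 4
verts, P = hypercube_walk(n)
Q = lump_by_weight(verts, P, n)
E = np.zeros((n + 1, n + 1))                 # Ehrenfest urn matrix for comparison
for k in range(n + 1):
    if k < n:
        E[k, k + 1] = (n - k) / n
    if k > 0:
        E[k, k - 1] = k / n
print("lumped hypercube walk equals Ehrenfest urn:", bool(np.allclose(Q, E)))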
EX 16. Let P be the transition matrix for the Ehrenfest chain. Show that the binomial distribution with parameters n and 1/2 is the stationary distribution for this chain.

KEY. If π is binomial with parameters n and 1/2, then ∀ k ∈ Ω, π(k) = C(n, k)(1/2)^n. Let P be the transition matrix of an Ehrenfest urn.

For state 0, the only transition satisfying P(j, 0) > 0 is P(1, 0) = 1/n, so

∑_j π(j) P(j, 0) = π(1) P(1, 0) = C(n, 1)(1/2)^n · (1/n) = (1/2)^n = π(0).

Likewise, the only transition satisfying P(j, n) > 0 is P(n − 1, n) = 1/n, and

∑_j π(j) P(j, n) = π(n − 1) P(n − 1, n) = C(n, n − 1)(1/2)^n · (1/n) = (1/2)^n = π(n).

For the other states 0 < k < n,

∑_j π(j) P(j, k) = π(k − 1) P(k − 1, k) + π(k + 1) P(k + 1, k)
= C(n, k − 1)(1/2)^n · (n − k + 1)/n + C(n, k + 1)(1/2)^n · (k + 1)/n
= (1/2)^n [ n!/((k − 1)!(n − k + 1)!) · (n − k + 1)/n + n!/((k + 1)!(n − k − 1)!) · (k + 1)/n ]
= (1/2)^n [ (n − 1)!/((k − 1)!(n − k)!) + (n − 1)!/(k!(n − k − 1)!) ]
= (1/2)^n [ C(n − 1, k − 1) + C(n − 1, k) ]
= (1/2)^n C(n, k) = π(k).

Hence πP = π and the Binomial(n, 1/2) distribution is stationary.
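The algebra above can be double-checked numerically; the short sketch below (n = 6, my own choice) compares πP with π for π = Binomial(n, 1/2).

from math import comb
import numpy as np

n = 6
P = np.zeros((n + 1, n + 1))                 # Ehrenfest transition matrix on states 0..n
for k in range(n + 1):
    if k < n:
        P[k, k + 1] = (n - k) / n
    if k > 0:
        P[k, k - 1] = k / n

pi = np.array([comb(n, k) * 0.5 ** n for k in range(n + 1)])   # Binomial(n, 1/2)
print("pi P == pi:", bool(np.allclose(pi @ P, pi)))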
Polya's Urn. Given an urn that contains one black ball and one white ball at the beginning, each time one ball is drawn from the urn uniformly, and two balls of the same color as the drawn ball are put back. For example, for the first draw, if a white ball is chosen (the chance is 1/2), then two white balls are put back into the urn, and now the urn has one black ball and two white balls; for the second draw, if a black ball is chosen (the chance is 1/3), then two black balls are put back into the urn, and then the urn has two black balls and two white balls. Let X_t be the number of black balls in the urn at time t (after the draw & put-back); note the total number of balls in the urn is then t + 2. Given X_t = k, obviously a black ball has to be drawn in order for X_{t+1} = k + 1, and

ℙ(X_{t+1} = k + 1 | X_t = k) = k/(t + 2)
ℙ(X_{t+1} = k | X_t = k) = (t + 2 − k)/(t + 2)

Again, this is a Markov chain, and it is named Polya's urn. However, different from the previous Markov chains, 1) the state space ℕ is not finite; 2) the transition probabilities depend on t, making the chain inhomogeneous. The transition matrix and graph representation are illustrated below up to time t = 4.
Figure 10 The graph representation of the Markov chain of Polya's urn model, with edge weights omitted for clarity. This chain has countable state space and is inhomogeneous.

P_t =
[ (t+1)/(t+2)   1/(t+2)       0             0             0        … ]
[ 0             t/(t+2)       2/(t+2)       0             0        … ]
[ 0             0             (t−1)/(t+2)   3/(t+2)       0        … ]
[ 0             0             0             (t−2)/(t+2)   4/(t+2)  … ]
[ ⋮                                                                 ⋱ ]

Theorem 31 X_t is uniformly distributed on the set {1, 2, …, t + 1}.
Let U_0, U_1, …, U_t be random variables independently and uniformly distributed on the interval [0,1], forming an order U_{i_0} ≤ U_{i_1} ≤ ⋯ ≤ U_{i_t} for i_0, i_1, …, i_t ∈ {0, 1, …, t}, and let Y_t = |{i ∈ {0, 1, …, t} : U_i ≤ U_0}|, i.e. Y_t is the number of U_0, U_1, …, U_t which are on the left side of U_0 or coincide with U_0. For example, suppose t = 5 and

U_0 = 0.3, U_1 = 0.1, U_2 = 0.7, U_3 = 0.9, U_4 = 0.2, U_5 = 0.6.

The order is U_1 ≤ U_4 ≤ U_0 ≤ U_5 ≤ U_2 ≤ U_3, and we have

Y_0 = |{U_0}| = 1,  Y_1 = Y_2 = Y_3 = |{U_0, U_1}| = 2,  Y_4 = Y_5 = |{U_0, U_1, U_4}| = 3.
Note that no matter what the realization of U_1, …, U_t is, we always have Y_0 = |{U_0}| = 1. We now show, surprisingly, that X_t and Y_t are identically distributed for each t, using the fact that every ordering of U_0, …, U_t is equally likely to occur (a rigorous proof of this very intuitive fact is non-trivial and requires measure theory). This implies ℙ{Y_t = k} can be computed by combinatorics: treat each of U_0, U_1, …, U_t like a ball, first randomly order them, and then Y_t = k means U_0 is the k-th ball in the ordering. This is equivalent to putting t + 1 balls into t + 1 buckets with U_0 fixed in the k-th bucket. We have

ℙ{Y_t = k} = (number of orderings satisfying Y_t = k) / (number of all possible orderings) = t!/(t + 1)! = 1/(t + 1),

and notice ℙ{Y_t = k} is uniform and independent of k. Now consider

ℙ{Y_t = k, Y_{t+1} = k + 1} = (number of orderings of U_0, …, U_{t+1} satisfying Y_t = k, Y_{t+1} = k + 1) / (number of all possible orderings).

In order for an ordering U_{i_0} ≤ U_{i_1} ≤ ⋯ ≤ U_{i_{t+1}} to meet the condition Y_t = k, Y_{t+1} = k + 1, U_0 must be the (k + 1)-th in the ordering and U_{t+1} has to be one of the first k elements in the ordering. Using the bucket analogy, first fix U_0 at the (k + 1)-th bucket, then choose one of the first k buckets for U_{t+1} to reside in, and then put each remaining ball in one bucket. Clearly,

ℙ{Y_t = k, Y_{t+1} = k + 1} = k · t!/(t + 2)! = k/((t + 1)(t + 2)).

Then

ℙ{Y_{t+1} = k + 1 | Y_t = k} = ℙ{Y_t = k, Y_{t+1} = k + 1} / ℙ{Y_t = k} = [k/((t + 1)(t + 2))] / [1/(t + 1)] = k/(t + 2),

and hence ℙ{Y_{t+1} = k | Y_t = k} = 1 − k/(t + 2) = (t + 2 − k)/(t + 2), since Y_{t+1} can only take values in {k, k + 1} given Y_t = k. The conditional distribution ℙ{Y_{t+1} | Y_t = k} is therefore the same as the conditional distribution ℙ{X_{t+1} | X_t = k}.

Also verify that Y_1 has the same distribution as X_1, since

ℙ(X_1 = 1) = ℙ(Y_1 = 1) = 1/2,  ℙ(X_1 = 2) = ℙ(Y_1 = 2) = 1/2.

Note that (Y_t) is also a Markov chain, then

ℙ{Y_1 = y_1, Y_2 = y_2, …, Y_t = y_t} = ℙ(Y_1 = y_1) ℙ(Y_2 = y_2 | Y_1 = y_1) ⋯ ℙ(Y_t = y_t | Y_{t−1} = y_{t−1})
= ℙ(X_1 = y_1) ℙ(X_2 = y_2 | X_1 = y_1) ⋯ ℙ(X_t = y_t | X_{t−1} = y_{t−1})
= ℙ{X_1 = y_1, X_2 = y_2, …, X_t = y_t},

indicating that the joint distribution of (Y_1, …, Y_t) is the same as that of (X_1, …, X_t). The joints being the same, every marginal is the same. Then X_t has the same distribution as Y_t, namely uniform on {1, 2, …, t + 1}, which proves Theorem 31.
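Theorem 31 is also easy to test by simulation. The sketch below (t = 6 and the number of trials are my own choices) tallies the simulated values of X_t and compares the empirical frequencies with the uniform distribution on {1, …, t + 1}.

import random
from collections import Counter

def polya(t, rng):
    # Return X_t, the number of black balls after t draw-and-replace steps.
    black, total = 1, 2                      # start with one black and one white ball
    for _ in range(t):
        if rng.random() < black / total:
            black += 1                       # drew black: one extra black ball goes in
        total += 1                           # exactly one ball is added at every step
    return black

t, trials, rng = 6, 200_000, random.Random(2)
counts = Counter(polya(t, rng) for _ in range(trials))
for k in range(1, t + 2):
    print("P(X_t = %d) ~ %.4f (uniform: %.4f)" % (k, counts[k] / trials, 1 / (t + 1)))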
The Birth-And-Death Chains.
Random Walk on Groups. Given a group G = (Ω, +) and a probability distribution μ on Ω, define a Markov chain ℳ with state space Ω and transition matrix P s.t. P(h, h + g) = μ(g) for any g, h ∈ Ω; then ℳ is called a random walk on the group G, and μ is called the increment distribution. As the modifier “increment” suggests, μ(g) is the probability for state h to be increased to state h + g.
Let (V, E) be the graph representation of the random walk on G, with V = Ω. Let g, h be any two states; then (g, h) ∈ E iff there is a positive probability for the increment from g to h, namely h − g = h + (−g), i.e. iff μ(h − g) > 0. If there exists a g → h path of some length ℓ, then there have to be increments g_1, g_2, …, g_ℓ s.t. g + g_1 + g_2 + ⋯ + g_ℓ = h and μ(g_i) > 0 for i = 1, 2, …, ℓ, where the states along the path are g, g + g_1, g + g_1 + g_2, …, g + g_1 + ⋯ + g_ℓ = h. Note that

P(g + g_1 + ⋯ + g_{i−1}, g + g_1 + ⋯ + g_i) = μ(g_i)

has to be positive for every i; if μ(g_i) = 0 for some i, then there is no edge connecting g + g_1 + ⋯ + g_{i−1} and g + g_1 + ⋯ + g_i, and the above-mentioned path is not possible.
In this definition, “+” is used instead of the conventional “⋅” to denote the group operation, for a better understanding of the “increment distribution”. It is more common for a group to be represented as G = (Ω, ⋅), and the transition matrix for the random walk on G then satisfies P(h, hg) = μ(g) for any g, h ∈ Ω. Note there are other equivalent forms, like
1) P(h, g) = μ(h⁻¹g), since g = h(h⁻¹g);
2) P(g, h) = μ(g⁻¹h), since h = g(g⁻¹h).
For an example of a random walk on a group, the random walk on an n-cycle indexed by {0, 1, …, n − 1} is equivalent to a random walk on the cyclic group (ℤ_n = {0, 1, …, n − 1}, +_n), where g +_n h ≔ (g + h) mod n, with increment distribution

μ(h) = 1/2 if h = 1 or h = n − 1, and μ(h) = 0 otherwise.
Specifically, if n = 5, then the transition matrix would look like the following:

P =
[ 0    1/2  0    0    1/2 ]
[ 1/2  0    1/2  0    0   ]
[ 0    1/2  0    1/2  0   ]
[ 0    0    1/2  0    1/2 ]
[ 1/2  0    0    1/2  0   ]
Given an element k in ℤ_n, where k is also a state on the n-cycle, k +_n 1 = (k + 1) mod n is exactly the state next to k (clockwise) on the n-cycle, and k +_n (n − 1) = (k + n − 1) mod n = (k − 1) mod n is exactly the state prior to k (clockwise) on the n-cycle. The equivalence is hence quite obvious. Note that ℤ_n is an abelian group (satisfying commutativity), so this is a random walk on an abelian group.
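This equivalence is easy to check in code. The sketch below (n = 5, matching the example above) builds P(g, h) = μ((h − g) mod n) from the increment distribution and compares it with the transition matrix of the simple random walk on the n-cycle.

import numpy as np

n = 5
mu = np.zeros(n)
mu[1] = mu[n - 1] = 0.5                      # increment distribution on Z_n

# transition matrix of the group walk: P[g, h] = mu[(h - g) mod n]
P_group = np.array([[mu[(h - g) % n] for h in range(n)] for g in range(n)])

# transition matrix of the simple random walk on the n-cycle
P_cycle = np.zeros((n, n))
for k in range(n):
    P_cycle[k, (k + 1) % n] = 0.5
    P_cycle[k, (k - 1) % n] = 0.5

print("group walk equals cycle walk:", bool(np.allclose(P_group, P_cycle)))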
The RVs X_0, X_1, … of a Markov chain are not independent in general; however, the random walk on a group can be treated as independently choosing an increment at each time according to the increment distribution μ. Let ν be the initial distribution, and recall that

ℙ(X_0 = x_0, X_1 = x_1, …, X_t = x_t) = ν(x_0) P(x_0, x_1) ⋯ P(x_{t−1}, x_t).

If there exists a sequence of increments g_1, g_2, …, g_t with μ(g_i) > 0 s.t. x_1 = x_0 + g_1, x_2 = x_1 + g_2, …, x_t = x_{t−1} + g_t, then

ℙ(X_0 = x_0, X_1 = x_1, …, X_t = x_t) = ν(x_0) μ(g_1) μ(g_2) ⋯ μ(g_t) > 0

(provided ν(x_0) > 0). If there does not exist such a sequence of increments, i.e. μ(x_i − x_{i−1}) = 0 for some i, then we must have

ℙ(X_0 = x_0, X_1 = x_1, …, X_t = x_t) = ν(x_0) μ(g_1) μ(g_2) ⋯ μ(g_t) = 0.

Thus although X_0, X_1, … are not independent, the increments are independently chosen.
Theorem 32 The uniform distribution is stationary for any random walk on a group. Let π be the uniform distribution on the state space Ω, and simply check

(πP)(h) = ∑_{g ∈ Ω} π(g) P(g, h) = (1/|Ω|) ∑_{g ∈ Ω} μ(g⁻¹h) = (1/|Ω|) ∑_{k ∈ Ω} μ(k) = 1/|Ω| = π(h).

The reason that the index of the summation can be replaced, i.e. ∑_{g ∈ Ω} μ(g⁻¹h) = ∑_{k ∈ Ω} μ(k), is that {g⁻¹h : g ∈ Ω} = Ω for any fixed h ∈ Ω. See the remark below on the coverage of such products.
Let S = {g ∈ Ω : μ(g) > 0}, and let ℋ = (⟨S⟩, +) be the smallest subgroup of G that contains S; we say ℋ is the subgroup generated by S. Note every element in ⟨S⟩ can be written as a product of elements in S. A priori, every element in ⟨S⟩ could be the identity, a product of elements of S, or involve the inverse of some element of S. Since ⟨S⟩ is finite, the identity can be written as a positive power of any element in S (every element of a finite group is of finite order, see the remark below), and given s ∈ S the inverse s⁻¹ can also be written as a positive power of s; hence every element of ⟨S⟩ is indeed a product of elements of S.
Theorem 33 The random walk on group G is irreducible iff ℋ = G, i.e. S generates G.

Necessity. Suppose the random walk on G is irreducible. Let e be the identity and let g be an arbitrary state in Ω; then there is an e → g path of some length ℓ, meaning there are increments g_1, g_2, …, g_ℓ ∈ Ω s.t. g_1 g_2 ⋯ g_ℓ = g and μ(g_i) > 0 for i = 1, 2, …, ℓ. Note g_1, g_2, …, g_ℓ ∈ S since μ(g_i) > 0, hence g ∈ ⟨S⟩. Also g is arbitrary, then G = ⟨S⟩, i.e. G = ℋ.

Sufficiency. Suppose S generates G; then we show that for any two states g, h ∈ Ω there is a g → h path. Note g⁻¹h can be written as a product of elements of S, since every element in G can be written as a product of elements in S. Suppose g⁻¹h = s_1 s_2 ⋯ s_ℓ where s_1, s_2, …, s_ℓ ∈ S, and notice there is a path g, g s_1, g s_1 s_2, …, g s_1 s_2 ⋯ s_ℓ = h.
We would also like to discuss the reversibility of random walks on groups. An intuition is that the probability of some increment g should be the same as the probability of the “decrement” g⁻¹. Call the increment distribution μ symmetric if μ(g) = μ(g⁻¹) for any g ∈ Ω, and call the above-defined S symmetric as well if g ∈ S iff g⁻¹ ∈ S.
Theorem 34 If the increment distribution μ is symmetric, then the random walk on G is reversible with the uniform initial distribution π. Simply check

π(g) P(g, h) = (1/|Ω|) μ(g⁻¹h) = (1/|Ω|) μ(h⁻¹g) = π(h) P(h, g),

where μ(g⁻¹h) = μ(h⁻¹g) holds because (g⁻¹h)⁻¹ = h⁻¹g and μ is symmetric.

Theorem 35 If the random walk on G is reversible, then the increment distribution is symmetric. Let the random walk on G be equipped with some initial distribution π (not necessarily uniform). Let S = {g ∈ Ω : μ(g) > 0}, and let ⟨S⟩ be the subgroup of G generated by S. Constrain the random walk to ⟨S⟩, with π̂ and P̂ as the initial distribution and transition matrix of the constrained random walk. If the random walk on G is reversible, i.e.

π(g) P(g, h) = π(h) P(h, g) for any g, h ∈ Ω,

then clearly the random walk on ⟨S⟩ is also reversible, since

π̂(g) P̂(g, h) = π̂(h) P̂(h, g) for any g, h ∈ ⟨S⟩.

Since S generates ⟨S⟩, the random walk on ⟨S⟩ is irreducible by Theorem 33, so the stationary distribution for P̂ is unique. π̂ is stationary for P̂ by Theorem 24, and Theorem 32 implies the uniform distribution is stationary for P̂, so the only possibility is that π̂ is uniform on ⟨S⟩. Then

π̂(g) P̂(g, h) = π̂(h) P̂(h, g) ⇒ P̂(g, h) = P̂(h, g) ⇒ μ(g⁻¹h) = μ(h⁻¹g) = μ((g⁻¹h)⁻¹)

for any g, h ∈ ⟨S⟩. Since g⁻¹h can be any element in ⟨S⟩, this means μ(k) = μ(k⁻¹) for any k ∈ ⟨S⟩, implying μ(k) > 0 iff μ(k⁻¹) > 0 for any k ∈ ⟨S⟩, i.e. k ∈ S iff k⁻¹ ∈ S. For any k ∉ ⟨S⟩ we also have k⁻¹ ∉ ⟨S⟩ and μ(k) = μ(k⁻¹) = 0. As a result μ(k) = μ(k⁻¹) for any k ∈ Ω, and μ is symmetric by definition.
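Theorems 32 and 34 can be checked on a non-abelian example as well. The sketch below (my own choice: the symmetric group S_3 with the increment distribution uniform on the three transpositions, each of which is its own inverse) verifies that the uniform distribution is stationary and that detailed balance holds.

from itertools import permutations
import numpy as np

elems = list(permutations(range(3)))                # the symmetric group S_3
index = {g: i for i, g in enumerate(elems)}

def compose(g, h):
    # (g o h)(x) = g(h(x)); permutations stored as tuples (g(0), g(1), g(2)).
    return tuple(g[h[x]] for x in range(3))

def inverse(g):
    inv = [0, 0, 0]
    for x in range(3):
        inv[g[x]] = x
    return tuple(inv)

transpositions = [(1, 0, 2), (0, 2, 1), (2, 1, 0)]  # each transposition is self-inverse
mu = {g: (1 / 3 if g in transpositions else 0.0) for g in elems}

# P(g, h) = mu(g^{-1} h)
P = np.array([[mu[compose(inverse(g), h)] for h in elems] for g in elems])

pi = np.full(len(elems), 1 / len(elems))
print("uniform is stationary:", bool(np.allclose(pi @ P, pi)))
# with uniform pi, detailed balance pi(g)P(g,h) = pi(h)P(h,g) reduces to P being a symmetric matrix
print("detailed balance holds:", bool(np.allclose(P, P.T)))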
A generalization of a feature of random walks on groups is called transitivity. Examine an earlier example, the transition matrix of the random walk on a 5-cycle: it is not hard to see that every row is a re-index of the first row; for example, the second row (1/2, 0, 1/2, 0, 0) is a re-index of the first row (0, 1/2, 0, 0, 1/2).

Actually, let P be the transition matrix of any random walk on a group; then the row P(g, ⋅) can be re-indexed to the row P(e, ⋅), where e is the identity. That is, there is a g-dependent bijection σ_g : Ω → Ω s.t. P(g, h) = P(e, σ_g(h)), simply because P(g, h) = μ(g⁻¹h) = P(e, g⁻¹h) for any h ∈ Ω, so we may define σ_g(h) ≔ g⁻¹h (it is easy to verify that σ_g so defined is a bijection). Beware that in general σ_g(h) = g⁻¹h ≠ hg⁻¹ unless the group is abelian.

It is now sufficient to show that any two rows P(g, ⋅) and P(h, ⋅) of P are re-indexes of each other, i.e. there is a (g, h)-dependent bijection σ_{(g,h)} : Ω → Ω s.t. P(g, k) = P(h, σ_{(g,h)}(k)). Let σ_g, σ_h be the re-indexes of P(g, ⋅) and P(h, ⋅) to P(e, ⋅) respectively, and define

σ_{(g,h)} ≔ σ_h⁻¹ ∘ σ_g, i.e. σ_{(g,h)}(k) = σ_h⁻¹(σ_g(k)) = h(g⁻¹k) = hg⁻¹k.

Then check:
1) σ_{(g,h)} is a bijection, being a composition of two bijections;
2) σ_{(g,h)}(g) = hg⁻¹g = h;
3) P(g, k) = P(e, σ_g(k)) = P(h, σ_h⁻¹(σ_g(k))) = P(h, σ_{(g,h)}(k)), so P(g, ⋅) is indeed a re-index of P(h, ⋅);
4) for any i, j ∈ Ω, P(σ_{(g,h)}(i), σ_{(g,h)}(j)) = μ((hg⁻¹i)⁻¹(hg⁻¹j)) = μ(i⁻¹g h⁻¹h g⁻¹j) = μ(i⁻¹j) = P(i, j).
Intuitively, this means a random walk on a group “looks the same” from any state in the state space. For example, on the 5-cycle, the random walk starting at state 1 makes no difference from one starting at state 4, by the bijection

σ_{(1,4)}(k) = σ_4⁻¹(σ_1(k)) = (σ_1(k) + 4) mod 5 = ((k + 4) + 4) mod 5 = (k + 3) mod 5,

which basically means treating state 1 as 4, state 2 as 0, and so on. Now we generalize this into a new concept, transitivity. Let P be the transition matrix characterizing any Markov chain with state space Ω. If for any g, h ∈ Ω there exists a bijection σ_{(g,h)} : Ω → Ω s.t. σ_{(g,h)}(g) = h and for any i, j ∈ Ω we have P(σ_{(g,h)}(i), σ_{(g,h)}(j)) = P(i, j), then we say the Markov chain is transitive.
Clearly if P is transitive, then any two rows of P are re-indexes of each other. However, the converse is false: even if every two rows of P are re-indexes of each other, the chain is not necessarily transitive. Consider

P =
[ 0.1  0.3  0.6 ]
[ 0.1  0.6  0.3 ]
[ 0.6  0.1  0.3 ]

with state space Ω = {1, 2, 3}. A transition bijection w.r.t. (1, 3) has to satisfy σ_{(1,3)}(1) = 3; then there should be

P(1, 2) = P(σ_{(1,3)}(1), σ_{(1,3)}(2)) = P(3, σ_{(1,3)}(2)) = 0.3,

where the only possibility is σ_{(1,3)}(2) = 3 = σ_{(1,3)}(1), contradicting the requirement that σ_{(1,3)} is a bijection.

Theorem 36 Suppose P is transitive. Fix arbitrary states g, h, and let σ be the transition bijection w.r.t. g, h. Then P^t is transitive with σ as its transition bijection w.r.t. g, h. Proof by induction: the base case t = 1 holds. Assume P^t is transitive with σ as its transition bijection w.r.t. g, h; then

P^{t+1}(i, j) = ∑_{k ∈ Ω} P^t(i, k) P(k, j) = ∑_{k ∈ Ω} P^t(σ(i), σ(k)) P(σ(k), σ(j)) = ∑_{k′ ∈ Ω} P^t(σ(i), k′) P(k′, σ(j)) = P^{t+1}(σ(i), σ(j)),

where the summation index can be replaced because σ is a bijection on Ω. Since g, h are arbitrary, clearly P^{t+1} is also transitive, with the corresponding σ as its transition bijection w.r.t. every pair g, h. A corollary of this is that every two rows of P^t are re-indexes of each other.
Theorem 37 The uniform distribution is stationary for any transitive Markov chain. Fix a state g, let h be any state other than g, and let σ be the transition bijection w.r.t. g, h. Let π be the uniform initial distribution; then

(πP)(g) = ∑_{k ∈ Ω} π(k) P(k, g) = (1/|Ω|) ∑_{k ∈ Ω} P(σ(k), σ(g)) = (1/|Ω|) ∑_{k′ ∈ Ω} P(k′, h) = ∑_{k′ ∈ Ω} π(k′) P(k′, h) = (πP)(h).

This means every entry of πP has the same value as (πP)(g), and hence πP is still uniform, i.e. πP = π.
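For the 5-cycle walk the transition bijections have the explicit form σ_{(g,h)}(k) = (h − g + k) mod n, and both the transitivity condition and Theorem 37 can be verified directly, as in the minimal sketch below.

import numpy as np

n = 5
P = np.zeros((n, n))
for k in range(n):                           # simple random walk on the 5-cycle
    P[k, (k + 1) % n] = 0.5
    P[k, (k - 1) % n] = 0.5

def sigma(g, h, k):
    # Transition bijection w.r.t. (g, h): treat state g as state h.
    return (h - g + k) % n

transitive = all(
    sigma(g, h, g) == h and
    all(P[sigma(g, h, i), sigma(g, h, j)] == P[i, j] for i in range(n) for j in range(n))
    for g in range(n) for h in range(n)
)
pi = np.full(n, 1 / n)
print("the 5-cycle walk is transitive:", transitive)
print("uniform is stationary:", bool(np.allclose(pi @ P, pi)))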
REMARK: Coverage of gΩ.
Given a group G = (Ω, ⋅), fix some g ∈ Ω. Define gΩ = {gh : h ∈ Ω}; we show gΩ = Ω, i.e. the products gh cover all elements in Ω.
1) Since gh ∈ Ω for any h ∈ Ω, gΩ ⊆ Ω.
2) For any h ∈ Ω, h = g(g⁻¹h) ∈ gΩ since g⁻¹h ∈ Ω, so Ω ⊆ gΩ.
REMARK: Every element of a finite group is of finite order.
Given a group G = (Ω, ⋅) and g ∈ Ω, and letting e be the identity, if m is the smallest positive integer s.t. g^m = e, then m is said to be the order of g. For a finite group, every element is of finite order. Note that

g, g², g³, …, g^{|Ω|}, g^{|Ω|+1}, … ∈ Ω

and all those powers of g cannot be pairwise different. Clearly there must exist two numbers i ≠ j s.t. g^i = g^j; say i < j, then g^{j−i} = e. A corollary of this is that every element of a finite group, as well as its inverse, can be written as a positive power of that element. Obviously e can be written as a positive power of itself. For g ≠ e of order m, we have e = g^m = g g^{m−1} = g^{m−1} g, and g⁻¹ = g⁻¹ e = g⁻¹ g^m = g^{m−1}.
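This remark can be confirmed by brute force on a small group. The sketch below (the symmetric group S_3 again, my own choice of example) computes the order of every element and checks that its inverse equals a positive power of the element itself.

from itertools import permutations

elems = list(permutations(range(3)))                 # the symmetric group S_3
identity = (0, 1, 2)

def compose(g, h):
    return tuple(g[h[x]] for x in range(3))

def inverse(g):
    inv = [0, 0, 0]
    for x in range(3):
        inv[g[x]] = x
    return tuple(inv)

for g in elems:
    power, m = g, 1
    while power != identity:                         # every element of a finite group has finite order
        power = compose(power, g)
        m += 1
    gm1 = identity                                   # compute g^(m-1), which should equal g^{-1}
    for _ in range(m - 1):
        gm1 = compose(gm1, g)
    print(g, " order:", m, " g^(m-1) == g^{-1}:", gm1 == inverse(g))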