TRANSACTIONS OF THE
AMERICAN MATHEMATICAL SOCIETY
Volume 360, Number 7, July 2008, Pages 3687–3730
S 0002-9947(08)04371-7
Article electronically published on January 25, 2008
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
JASON FULMAN
Abstract. Stein’s method is used to prove limit theorems for random character ratios. Tools are developed for four types of structures: finite groups,
Gelfand pairs, twisted Gelfand pairs, and association schemes. As one example
an error term is obtained for a central limit theorem of Kerov on the spectrum
of the Cayley graph of the symmetric group generated by i-cycles. Other
main examples include an error term for a central limit theorem of Ivanov on
character ratios of random projective representations of the symmetric group,
and a new central limit theorem for the spectrum of certain random walks on
perfect matchings. The results are obtained with very little information: a
character formula for a single representation close to the trivial representation
and estimates on two step transition probabilities of a random walk. The limit
theorems stated in this paper are for normal approximation, but many of the
tools developed are applicable for arbitrary distributional approximation.
1. Introduction
Given a fixed $m \times m$ matrix, it is natural to study the distribution of its eigenvalues, where each eigenvalue is chosen with probability $\frac{1}{m}$. As a motivating example, consider the $n! \times n!$ transition matrix for random walk on the symmetric group $S_n$, where the generating set consists of all $i$-cycles. Diaconis and Shahshahani [10] proved that the eigenvalues of this matrix are the numbers $\frac{\chi^{\lambda}_{(i,1^{n-i})}}{\dim(\lambda)}$ occurring with multiplicity $\dim(\lambda)^2$. Here $\lambda$ parameterizes an irreducible representation of the symmetric group, $\chi^{\lambda}_{(i,1^{n-i})}$ is the corresponding character value on $i$-cycles, and $\dim(\lambda)$ is the dimension of the irreducible representation. Since $\sum_{|\lambda|=n} \dim(\lambda)^2 = n!$, the eigenvalue $\frac{\chi^{\lambda}_{(i,1^{n-i})}}{\dim(\lambda)}$ is chosen with probability $\frac{\dim(\lambda)^2}{n!}$.
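As a concrete illustration of these Plancherel weights (a numerical aside, not part of the paper), the following Python sketch computes $\dim(\lambda)$ for all partitions of a small $n$ via the standard hook length formula and checks the identity $\sum_{|\lambda|=n}\dim(\lambda)^2 = n!$ used above.

from math import factorial

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dim(la):
    """Dimension of the irreducible S_n representation labeled by la (hook length formula)."""
    n = sum(la)
    hook_product = 1
    for i, row in enumerate(la):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in la[i + 1:] if r > j)
            hook_product *= arm + leg + 1
    return factorial(n) // hook_product

n = 6
assert sum(dim(la) ** 2 for la in partitions(n)) == factorial(n)
plancherel = {la: dim(la) ** 2 / factorial(n) for la in partitions(n)}
print(plancherel)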
The probability measure on irreducible representations of $S_n$ which picks the representation corresponding to $\lambda$ with probability $\frac{\dim(\lambda)^2}{n!}$ is known as the Plancherel measure of the symmetric group. Kerov [26] proved that if $i \geq 2$ is fixed, and $\lambda$ is random from the Plancherel measure of the symmetric group, then the random variable $\frac{\sqrt{\binom{n}{i}(i-1)!}\,\chi^{\lambda}_{(i,1^{n-i})}}{\dim(\lambda)}$ is asymptotically normal as $n \to \infty$. Kerov used the method
of moments and difficult combinatorics; a beautiful exposition of his work is the
paper [24]. Hora [21] gave another proof of Kerov’s result, also using the method of
moments, but with somewhat simpler combinatorics. In very recent work, Sniady
Received by the editors August 16, 2005, and in revised form, May 13, 2006.
2000 Mathematics Subject Classification. Primary 05E10; Secondary 60C05.
Key words and phrases. Stein’s method, normal approximation, Gelfand pair, character ratio,
symmetric group, Plancherel measure, association scheme.
The author was supported by NSA grant H98230-05-1-0031 and NSF grant DMS-0503901.
[40] uses the genus expansion of random matrix theory to give another method of
moments proof of Kerov’s result.
A more probabilistic approach to Kerov’s result for the case i = 2 was given in
[14], where Stein’s method was used to obtain the first error term in Kerov’s central
limit theorem; an error term of O(n−1/4 ) was proved and an error term of O(n−1/2 )
was conjectured. This error term was later improved to O(n−s ) for any s < 1/2
using martingale theory [17]. More recently, a proof of the O(n−1/2 ) conjecture
appears in [39], using a new refinement of Stein’s method. See [16] for another
proof of the O(n−1/2 ) bound, using Bolthausen’s variation of Stein’s method. All
of these results, it should be emphasized, were only for the case i = 2. However
even in the simple setting of i = 2, random character ratios arise in work on the
moduli space of curves [12].
Before proceeding further, it should be mentioned that familiarity with Stein’s
method is not necessary to read this paper. Section 2 gives a brief introduction to
normal approximation by Stein’s method. It presents the bare minimum needed to
understand this paper, but gives a few pointers to the literature for further reading.
Section 3 of this paper generalizes the set-up of Kerov’s central limit theorem to
the case when G is a finite group and the generating set consists of a single conjugacy
class C. This elucidates our early work on this problem [14] which required different
information than the current treatment. As a new application, it is shown that for
any fixed i ≥ 2, one obtains an error term O(n−1/4 ) in Kerov’s central limit theorem.
The approach presented here uses only the most elementary ingredients, namely a
well known character formula for the irreducible representation parameterized by
λ = (n − 1, 1) on all elements of Sn , and estimates on the two step transition
probabilities of the random walk on Sn generated by C.
Section 4 uses Stein’s method to study random spherical functions of a Gelfand
pair. As an application, a new central limit theorem with error term O(n−1/4 ) is
obtained for the spectrum of certain random walks on the set of perfect matchings
of 2n symbols. Equivalently, a central limit theorem is obtained for certain statistics
under the Jack2 measure on partitions, which is an interesting object [33]. As in
the group case, only the simplest ingredients are needed in the proof.
Section 5 focuses on a specific example of a twisted Gelfand pair. An error
term is obtained for a central limit theorem of Ivanov [23] on character ratios of
random projective representations of the symmetric group. There is a close parallel
to earlier sections, but new ideas are required since if one tries to generalize the
approach used in the treatment of Gelfand pairs, one encounters a Markov chain
which can have negative transition probabilities. A main contribution of Section 5
is a combinatorial argument designed to overcome this obstacle.
Section 6 develops limit theorems for the spectrum of an adjacency matrix of
an association scheme. The arguments are analogous to those in previous sections.
This is not surprising since Gelfand pairs and association schemes are both generalizations of the finite group case. However the perspective and examples are
quite different. We treat the Hamming association scheme, obtaining a central
limit theorem for the spectrum of the Hamming graph, or equivalently for values
of q-Krawtchouk polynomials.
Having outlined the contents of this article, we mention further reasons why the
results are interesting. First, the construction of an exchangeable pair and certain
moment computations are applicable for distributional approximations other than
normal approximation. The paper [6] illustrates this in the context of exponential
approximation. Second, the spectrum of random walks on G/K, where (G, K) is a
Gelfand pair, is of ongoing interest. In particular if G, K are finite classical groups,
this leads to difficult questions in number theory [43], [44] and we are optimistic
that the exchangeable pair constructed in this paper will yield useful information.
Third, Plancherel measure (which arises in the group case), Jack measure (which
arises in the Gelfand pair case), and shifted Plancherel measure (which arises in
the twisted Gelfand pair case), are all objects of interest to researchers in random
matrix theory [1], [4], [3], [7], [25], [31], [32], [33], [45]. As is evident from these
papers, there are many interesting statistics under these measures, and the method
of constructing exchangeable pairs in this paper allows one to study these statistics
by Stein’s method. Fourth, the examples in this paper will be a useful testing
ground for results on Stein’s method. For example the refinement of Stein’s method
in [39] arose from trying to obtain an O(n−1/2 ) error term for the i = 2 case of
Kerov’s central limit theorem. In forthcoming work we obtain an O(n−1/2 ) bound
for Theorem 3.14, and a similar approach gives an O(n−1/2 ) bound for Theorems
4.19 and 5.2.
2. Stein’s method for normal approximation
In this section we briefly review Stein’s method for normal approximation, using
the method of exchangeable pairs [41]. One can also use couplings to prove normal
approximations by Stein’s method (see [34] for a survey), but the exchangeable pairs
approach is effective for our purposes. For a survey discussing both exchangeable
pairs and couplings, the paper [36] can be consulted.
Two random variables $W, W'$ on a state space $X$ are called exchangeable if for all $w_1, w_2$, $P(W = w_1, W' = w_2)$ is equal to $P(W = w_2, W' = w_1)$. As is typical in probability theory, let $E(A|B)$ denote the expected value of $A$ given $B$. The following result of Stein (which follows from page 35 of [41] and Lemma 2.3 below) uses an exchangeable pair $(W, W')$ to prove a central limit theorem for $W$.
Theorem 2.1 ([41]). Let $(W, W')$ be an exchangeable pair of real random variables such that $E(W^2) = 1$ and $E(W'|W) = (1 - a)W$ with $0 < a < 1$. Then for all real $x_0$,
$$\left| P(W \leq x_0) - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-x^2/2}\,dx \right| \leq \frac{\sqrt{\mathrm{Var}(E[(W'-W)^2|W])}}{a} + (2\pi)^{-1/4}\sqrt{\frac{E|W'-W|^3}{a}}.$$
In order to apply Theorem 2.1 to study a statistic $W$, one needs an exchangeable pair $(W, W')$. The usual way of doing this is to use Markov chain theory. A Markov chain $K$ (with the chance of going from $x$ to $y$ denoted by $K(x, y)$) on a finite set $X$ is called reversible with respect to a probability distribution $\pi$ if $\pi(x)K(x, y) = \pi(y)K(y, x)$ for all $x, y$. This condition implies that $\pi$ is a stationary distribution for $K$. It is straightforward to check that if $K$ is reversible with respect to $\pi$, then one obtains an exchangeable pair $(W, W')$ as follows: choose $x \in X$ from $\pi$, then obtain $x'$ by taking one step from $x$ according to $K$, and set $(W, W') = (W(x), W(x'))$.
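The recipe in the preceding paragraph is easy to test numerically. The following Python sketch (an illustration, not part of the paper; the 3-state chain and the statistic are arbitrary choices) checks that a chain satisfying detailed balance yields an exchangeable pair $(W, W')$.

import numpy as np

pi = np.array([0.5, 0.3, 0.2])
K = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.3, 0.2],
              [0.25, 0.3, 0.45]])
# detailed balance: pi(x) K(x, y) = pi(y) K(y, x)
assert np.allclose(pi[:, None] * K, (pi[:, None] * K).T)

W = np.array([1.0, -0.5, 2.0])          # any statistic on the state space
joint = pi[:, None] * K                 # P(X = x, X' = y) when X ~ pi and X' is one step of K
# exchangeability of (W(X), W(X')): the joint law is symmetric in its two arguments
for a in range(3):
    for b in range(3):
        p_ab = joint[(W[:, None] == W[a]) & (W[None, :] == W[b])].sum()
        p_ba = joint[(W[:, None] == W[b]) & (W[None, :] == W[a])].sum()
        assert abs(p_ab - p_ba) < 1e-12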
A drawback with Theorem 2.1 is that in many problems of interest, it gives a convergence rate of $O(n^{-1/4})$ rather than $O(n^{-1/2})$. When $|W' - W|$ is bounded, the following variation often gives the correct rate. A similar result is in [35].
Theorem 2.2 ([39]). Let $(W, W')$ be an exchangeable pair of real random variables such that $E(W^2) = 1$ and $E(W'|W) = (1 - a)W$ with $0 < a < 1$. Suppose that $|W' - W| \leq A$ for some constant $A$. Then for all real $x_0$,
$$\left| P(W \leq x_0) - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-x^2/2}\,dx \right| \leq \frac{\sqrt{\mathrm{Var}(E[(W'-W)^2|W])}}{2a} + 0.41\,\frac{A^3}{a} + 1.5A.$$
The following general lemma will also be helpful.

Lemma 2.3. Let $(W, W')$ be an exchangeable pair of random variables such that $E(W'|W) = (1 - a)W$ and $E(W^2) = 1$. Then $E(W' - W)^2 = 2a$.

Proof. Since $W$ and $W'$ have the same distribution,
$$E(W' - W)^2 = E(E((W' - W)^2|W)) = E((W')^2) + E(W^2) - 2E(W E(W'|W)) = 2E(W^2) - 2(1 - a)E(W^2) = 2a.$$
3. Finite groups
This section uses Stein's method to study the spectrum of random walk on a finite group $G$, where the generating set is a conjugacy class $C$ which satisfies $C = C^{-1}$. As will be explained in Subsection 3.1, by [10] this is equivalent to studying the distribution of the character ratio $\frac{\chi^\lambda(C)}{\dim(\lambda)}$ where $\lambda$ is chosen from the Plancherel measure of the group $G$.
The organization of this section is as follows. Subsection 3.1 recalls the necessary background from representation theory. Subsection 3.2 then defines a Markov
chain on the set of irreducible representations of G, and uses it to construct an
exchangeable pair. This leads to a general central limit theorem. Subsection 3.3
applies the theory to the symmetric group Sn with C the conjugacy class of i-cycles,
where i is fixed and n is large.
3.1. Background from representation theory. We recall facts from the representation theory of finite groups, referring the reader to Chapter 1 of [37] or to the first few chapters of [38] for more details. In what follows, $\chi$ denotes a character of the finite group $G$, $\dim(\chi)$ is the dimension of the corresponding representation, and $\mathrm{Irr}(G)$ is the set of all irreducible characters of $G$. Also $\overline{z}$ denotes the complex conjugate of a number $z$.
Lemma 3.1. Let $\chi$ be an irreducible character of $G$. Then $\chi(C^{-1}) = \overline{\chi(C)}$. Thus if $C = C^{-1}$, then $\chi(C)$ is real.
Next we recall the orthogonality relations for irreducible characters of $G$.

Lemma 3.2. Let $\nu$ and $\chi$ be irreducible characters of a finite group $G$. Then
$$\frac{1}{|G|}\sum_{g \in G} \nu(g)\overline{\chi(g)} = \delta_{\nu,\chi}.$$
Lemma 3.3. Let $C$ be the conjugacy class of $G$ containing the element $g$. Then for $h \in G$,
$$\sum_{\chi \in \mathrm{Irr}(G)} \chi(g)\overline{\chi(h)}$$
is equal to $\frac{|G|}{|C|}$ if $h, g$ are conjugate and is $0$ otherwise.
Lemma 3.4, while known, is perhaps not well known, and since analogous results will be needed in later sections, a proof along the lines of one in [19] is included.
Lemma 3.4. Let $G$ be a finite group with conjugacy classes $C_1, \cdots, C_t$. Let $C_k$ be the conjugacy class of an element $w \in G$. Then the number of $m$-tuples $(g_1, \cdots, g_m) \in G^m$ such that $g_j \in C_{i_j}$ and $g_1 \cdots g_m = w$ is
$$\prod_{j=1}^m |C_{i_j}| \sum_{\chi \in \mathrm{Irr}(G)} \frac{\dim(\chi)^2}{|G|}\, \frac{\chi(C_{i_1})}{\dim(\chi)} \cdots \frac{\chi(C_{i_m})}{\dim(\chi)}\, \frac{\overline{\chi(C_k)}}{\dim(\chi)}.$$
Proof. Identify each class $C_i$ with its corresponding class sum in the complex group algebra $\mathbb{C}G$. If $\chi^{(1)}, \cdots, \chi^{(t)}$ are the irreducible complex characters of $G$, then the elements
$$E_s = \frac{\dim(\chi^{(s)})}{|G|} \sum_{j=1}^t \overline{\chi^{(s)}_j}\, C_j \qquad (1 \leq s \leq t)$$
are a complete set of orthogonal idempotents for the center of $\mathbb{C}G$, where $\chi^{(s)}_j$ denotes the value of $\chi^{(s)}$ at any $g \in C_j$. Lemma 3.3 implies that
$$C_j = |C_j| \sum_{s=1}^t \frac{\chi^{(s)}_j}{\dim(\chi^{(s)})}\, E_s.$$
Since the $E_s$'s are orthogonal idempotents (i.e. $E_r E_s = \delta_{r,s} E_r$), it follows that
$$C_{i_1} C_{i_2} \cdots C_{i_m} = |C_{i_1}| \cdots |C_{i_m}| \sum_{s=1}^t \frac{\chi^{(s)}_{i_1} \cdots \chi^{(s)}_{i_m}}{\dim(\chi^{(s)})^m}\, E_s = \frac{|C_{i_1}| \cdots |C_{i_m}|}{|G|} \sum_{k=1}^t C_k \sum_{s=1}^t \frac{\chi^{(s)}_{i_1} \cdots \chi^{(s)}_{i_m}\, \overline{\chi^{(s)}_k}}{\dim(\chi^{(s)})^{m-1}},$$
as desired.
As a corollary one obtains the following result.

Corollary 3.5 ([10]). Suppose that $C$ is a conjugacy class satisfying $C = C^{-1}$. Then the eigenvalues of the random walk on $G$ with generating set $C$ are indexed by $\chi \in \mathrm{Irr}(G)$ and are the numbers $\frac{\chi(C)}{\dim(\chi)}$, occurring with multiplicity $\dim(\chi)^2$.

Proof. If $M$ is the $|G| \times |G|$ transition matrix for the random walk, the chance of being at the identity after $k$ steps is the trace of $M^k$ divided by $|G|$. Thus Lemma 3.4 implies that for all $k \geq 0$, the trace of $M^k$ is equal to
$$\sum_{\chi \in \mathrm{Irr}(G)} \dim(\chi)^2 \left( \frac{\chi(C)}{\dim(\chi)} \right)^k.$$
Since this holds for all $k \geq 0$, the result follows.
As mentioned in the Introduction, the Plancherel measure of $G$ is the probability measure on $\mathrm{Irr}(G)$ which chooses each $\chi$ with probability $\frac{\dim(\chi)^2}{|G|}$. So Corollary 3.5 says that the eigenvalues of the random walk on $G$ generated by $C$ are the "character ratios" $\frac{\chi(C)}{\dim(\chi)}$ occurring with multiplicity proportional to the Plancherel probability of $\chi$.
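As a sanity check of Corollary 3.5 on a tiny case (not from the paper), the following Python sketch builds the transition matrix of the transposition walk on $S_3$ and confirms that its eigenvalues are the character ratios $1, -1, 0$ with multiplicities $1, 1, 4$.

import itertools
import numpy as np

perms = list(itertools.permutations(range(3)))
index = {p: k for k, p in enumerate(perms)}
transpositions = [(0, 1), (0, 2), (1, 2)]

def apply_transposition(p, t):
    q = list(p)
    q[t[0]], q[t[1]] = q[t[1]], q[t[0]]
    return tuple(q)

M = np.zeros((6, 6))
for p in perms:
    for t in transpositions:
        M[index[p], index[apply_transposition(p, t)]] += 1 / 3

eigs = np.sort(np.linalg.eigvalsh(M))
print(np.round(eigs, 6))   # expect [-1, 0, 0, 0, 0, 1]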
3.2. Central limit theorems for character ratios. The goal of this subsection is to prove a central limit theorem for the random variable $W$ defined by $W(\lambda) = \frac{|C|^{1/2}\chi^\lambda(C)}{\dim(\lambda)}$, where $C$ is a fixed conjugacy class such that $C = C^{-1}$ and $\lambda$ is random from the Plancherel measure of $G$. From Lemmas 3.1 and 3.3 it follows that $E(W) = 0$ if $C$ is not the identity class, and that $E(W^2) = 1$.
To apply Stein's method it is useful to construct a Markov chain on the set of irreducible representations of $G$ as follows. First, fix an irreducible representation $\tau$ whose character is real valued. This gives a Markov chain $L_\tau$ by defining the probability of transitioning from $\lambda$ to $\rho$ as
$$L_\tau(\lambda, \rho) := \frac{\dim(\rho)}{\dim(\lambda)\dim(\tau)}\, \frac{1}{|G|} \sum_g \chi^\lambda(g)\chi^\tau(g)\overline{\chi^\rho(g)}.$$
Let us make some remarks about the chain $L_\tau$. First, note that the quantity $\frac{1}{|G|}\sum_g \chi^\lambda(g)\chi^\tau(g)\overline{\chi^\rho(g)}$ is the multiplicity of $\rho$ in the tensor product of $\lambda$ and $\tau$. (For probabilist readers who might be unfamiliar with such algebraic facts, we refer to pages 37 and 50 of [37].) Hence the chance of going from $\lambda$ to $\rho$ is determined by how the tensor product of $\lambda$ and $\tau$ decomposes into irreducibles. A closely related construction was used for Stein's method in the earlier work [14]. It should also be noted that the idea of using tensor products to define random walks on irreducible representations had been previously used in the context of compact Lie groups, where the state space is infinite (see for instance [13]).

Lemma 3.6 verifies that $L_\tau$ is a Markov chain which is reversible with respect to Plancherel measure.
Lemma 3.6. The transition probabilities of Lτ are real and non-negative and sum
to 1. Moreover the chain Lτ is reversible with respect to the Plancherel measure of
G.
Proof. The transition probabilities of $L_\tau$ are real and non-negative since as noted above,
$$\frac{1}{|G|} \sum_g \chi^\lambda(g)\chi^\tau(g)\overline{\chi^\rho(g)}$$
is the multiplicity of $\rho$ in the tensor product of $\lambda$ and $\tau$. Letting $id$ denote the identity, it follows from Lemma 3.3 that
$$\sum_\rho L_\tau(\lambda, \rho) = \frac{1}{\dim(\lambda)\dim(\tau)|G|} \sum_g \chi^\lambda(g)\chi^\tau(g) \sum_\rho \chi^\rho(id)\overline{\chi^\rho(g)} = 1.$$
For the reversibility assertion, the fact that $\chi^\tau$ and the transition probabilities of $L_\tau$ are both real valued implies that
$$\frac{\dim(\lambda)^2}{|G|}\, L_\tau(\lambda, \rho) = \frac{\dim(\rho)^2}{|G|}\, L_\tau(\rho, \lambda).$$
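For readers who want to see the chain $L_\tau$ concretely, the following Python sketch (an illustration, not from the paper) computes $L_\tau$ for $G = S_3$ with $\tau$ the 2-dimensional representation, using the hard-coded character table of $S_3$, and verifies the conclusions of Lemma 3.6.

import numpy as np

class_sizes = np.array([1, 3, 2])            # id, transpositions, 3-cycles
chars = np.array([[1, 1, 1],                 # trivial
                  [1, -1, 1],                # sign
                  [2, 0, -1]])               # standard (2-dimensional)
dims = chars[:, 0]
order_G = class_sizes.sum()                  # |G| = 6
tau = 2                                      # index of the 2-dimensional representation

L = np.zeros((3, 3))
for lam in range(3):
    for rho in range(3):
        # all S_3 characters are real, so no complex conjugation is needed here
        inner = (class_sizes * chars[lam] * chars[tau] * chars[rho]).sum() / order_G
        L[lam, rho] = dims[rho] * inner / (dims[lam] * dims[tau])

plancherel = dims ** 2 / order_G
assert np.allclose(L.sum(axis=1), 1) and (L >= 0).all()        # rows sum to 1, nonnegative
assert np.allclose(plancherel[:, None] * L, (plancherel[:, None] * L).T)   # reversibility
print(L)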
An exchangeable pair $(W, W')$ is now constructed from the chain $L_\tau$ in the standard way. First choose $\lambda$ from the Plancherel measure of $G$, then choose $\rho$ with probability $L_\tau(\lambda, \rho)$, and finally let $(W, W') = (W(\lambda), W(\rho))$. Note that if $\tau$ is the trivial representation, then $W' = W$. Since Stein's method is in the spirit of Taylor approximation, it is good for $W'$ to be close to $W$, so in examples $\tau$ should typically be chosen to be as close as possible to the trivial representation.

The remaining results in this subsection show that the exchangeable pair $(W, W')$ has desirable properties.

Lemma 3.7. $E(W'|W) = \frac{\chi^\tau(C)}{\dim(\tau)}\, W$.
Proof. From the definition of $W$,
$$E(W'|\lambda) = \sum_\rho \frac{|C|^{1/2}\chi^\rho(C)}{\dim(\rho)}\, \frac{\dim(\rho)}{\dim(\lambda)\dim(\tau)}\, \frac{1}{|G|} \sum_g \chi^\lambda(g)\chi^\tau(g)\overline{\chi^\rho(g)} = \frac{|C|^{1/2}}{|G|} \sum_g \frac{\chi^\lambda(g)}{\dim(\lambda)}\, \frac{\chi^\tau(g)}{\dim(\tau)} \sum_\rho \chi^\rho(C)\overline{\chi^\rho(g)} = \frac{\chi^\tau(C)}{\dim(\tau)}\, W(\lambda).$$
The second equality switched the order of summation and the last step is Lemma 3.3. The result follows since this depends on $\lambda$ only through $W$.
Corollary 3.8 is not needed in what follows, but is worth recording.

Corollary 3.8. The eigenvalues of $L_\tau$ are $\frac{\chi^\tau(C)}{\dim(\tau)}$ as $C$ ranges over conjugacy classes of $G$. The functions $\psi_C(\lambda) = \frac{|C|^{1/2}\chi^\lambda(C)}{\dim(\lambda)}$ are a basis of eigenvectors of $L_\tau$, orthonormal with respect to the inner product
$$\langle f_1, f_2 \rangle = \sum_\lambda f_1(\lambda)\overline{f_2(\lambda)}\, \frac{\dim(\lambda)^2}{|G|}.$$

Proof. The proof of Lemma 3.7 shows that $\psi_C$ is an eigenvector of $L_\tau$ with eigenvalue $\frac{\chi^\tau(C)}{\dim(\tau)}$. The orthonormality assertion follows from Lemma 3.3, and the basis assertion follows since the number of conjugacy classes of $G$ is equal to the number of irreducible representations of $G$.
Lemma 3.9. $E(W' - W)^2 = 2\left(1 - \frac{\chi^\tau(C)}{\dim(\tau)}\right)$.

Proof. This is immediate from Lemmas 2.3 and 3.7.
For the remainder of this subsection, if $K$ is a conjugacy class of $G$, $p_m(K)$ will denote the probability that the random walk generated by $C$ started at the identity is in $K$ after $m$ steps.

We remark that part (2) of Lemma 3.10 writes $\mathrm{Var}(E[(W' - W)^2|\lambda])$ as a sum of positive quantities.
Lemma 3.10.
(1)
$$E[(W' - W)^2|\lambda] = |C| \sum_K p_2(K) \left( \frac{\chi^\tau(K)}{\dim(\tau)} + 1 - \frac{2\chi^\tau(C)}{\dim(\tau)} \right) \frac{\chi^\lambda(K)}{\dim(\lambda)}$$
where the sum is over all conjugacy classes $K$ of $G$.
(2)
$$\mathrm{Var}(E[(W' - W)^2|\lambda]) = |C|^2 \sum_{K \neq id} \frac{p_2(K)^2}{|K|} \left( \frac{\chi^\tau(K)}{\dim(\tau)} + 1 - \frac{2\chi^\tau(C)}{\dim(\tau)} \right)^2$$
where $K$ ranges over all non-identity conjugacy classes of $G$.
Proof. Observe that
$$E((W')^2|\lambda) = \sum_\rho \left( \frac{|C|^{1/2}\chi^\rho(C)}{\dim(\rho)} \right)^2 \frac{\dim(\rho)}{\dim(\lambda)\dim(\tau)}\, \frac{1}{|G|} \sum_g \chi^\lambda(g)\chi^\tau(g)\overline{\chi^\rho(g)}$$
$$= |C|\, \frac{1}{|G|} \sum_g \frac{\chi^\lambda(g)}{\dim(\lambda)}\, \frac{\chi^\tau(g)}{\dim(\tau)} \sum_\rho \dim(\rho) \left( \frac{\chi^\rho(C)}{\dim(\rho)} \right)^2 \overline{\chi^\rho(g)} = |C| \sum_K p_2(K)\, \frac{\chi^\tau(K)}{\dim(\tau)}\, \frac{\chi^\lambda(K)}{\dim(\lambda)}.$$
The second equality switched the order of summation, and the final equality was Lemma 3.4.

Next we claim that $W^2 = |C| \sum_K p_2(K) \frac{\chi^\lambda(K)}{\dim(\lambda)}$. To see this note from Lemma 3.4 that
$$\frac{|K|}{|G|} \sum_\rho \frac{\chi^\rho(C)^2\, \overline{\chi^\rho(K)}}{\dim(\rho)} = p_2(K).$$
Multiplying both sides by $\frac{|C|\chi^\lambda(K)}{\dim(\lambda)}$ and summing over all $K$, the claimed expression for $W^2$ follows from Lemma 3.2.

Now note from Lemma 3.7 that
$$E[(W' - W)^2|\lambda] = E((W')^2|\lambda) - 2W E(W'|\lambda) + W^2 = E((W')^2|\lambda) + \left( 1 - \frac{2\chi^\tau(C)}{\dim(\tau)} \right) W^2.$$
The first part of the lemma follows from this equation and the first two paragraphs.

From the first part of the lemma and Lemma 3.3, it follows that
$$E\left( E[(W' - W)^2|\lambda]^2 \right) = |C|^2 \sum_K \frac{p_2(K)^2}{|K|} \left( \frac{\chi^\tau(K)}{\dim(\tau)} + 1 - \frac{2\chi^\tau(C)}{\dim(\tau)} \right)^2.$$
Note that if $K$ is the identity class, then $p_2(K) = \frac{1}{|C|}$, so that the contribution coming from the identity class is $4\left(1 - \frac{\chi^\tau(C)}{\dim(\tau)}\right)^2$. The second part of the lemma follows since by Lemma 3.9,
$$\mathrm{Var}(E[(W' - W)^2|\lambda]) = E\left( E[(W' - W)^2|\lambda]^2 \right) - 4\left( 1 - \frac{\chi^\tau(C)}{\dim(\tau)} \right)^2.$$
Lemma 3.11. Let $k$ be a positive integer.
(1) $E(W' - W)^k = |C|^{k/2} \sum_{m=0}^k (-1)^{k-m} \binom{k}{m} \sum_K \frac{\chi^\tau(K)}{\dim(\tau)}\, \frac{p_m(K)p_{k-m}(K)}{|K|}$.
(2) $E(W' - W)^4$ is equal to
$$|C|^2 \sum_K \left[ 8\left( 1 - \frac{\chi^\tau(C)}{\dim(\tau)} \right) - 6\left( 1 - \frac{\chi^\tau(K)}{\dim(\tau)} \right) \right] \frac{p_2(K)^2}{|K|}.$$
Proof. For the first assertion, note that
$$E((W' - W)^k|\lambda) = \frac{|C|^{k/2}}{\dim(\lambda)\dim(\tau)} \sum_\rho \frac{\dim(\rho)}{|G|} \sum_g \chi^\lambda(g)\chi^\tau(g)\overline{\chi^\rho(g)} \cdot \sum_{m=0}^k (-1)^{k-m} \binom{k}{m} \left( \frac{\chi^\rho(C)}{\dim(\rho)} \right)^m \left( \frac{\chi^\lambda(C)}{\dim(\lambda)} \right)^{k-m}$$
$$= \frac{|C|^{k/2}}{\dim(\lambda)\dim(\tau)} \sum_{m=0}^k (-1)^{k-m} \binom{k}{m} \left( \frac{\chi^\lambda(C)}{\dim(\lambda)} \right)^{k-m} \sum_g \chi^\tau(g)\chi^\lambda(g) \sum_\rho \frac{\dim(\rho)}{|G|} \left( \frac{\chi^\rho(C)}{\dim(\rho)} \right)^m \overline{\chi^\rho(g)}$$
$$= \frac{|C|^{k/2}}{\dim(\lambda)\dim(\tau)} \sum_{m=0}^k (-1)^{k-m} \binom{k}{m} \left( \frac{\chi^\lambda(C)}{\dim(\lambda)} \right)^{k-m} \sum_K \chi^\tau(K)\chi^\lambda(K)\, p_m(K).$$
The second step switched the order of summation and the final equality is by Lemma 3.4. Thus $E((W' - W)^k)$ is equal to
$$E(E((W' - W)^k|\lambda)) = |C|^{k/2} \sum_{m=0}^k (-1)^{k-m} \binom{k}{m} \sum_K \frac{\chi^\tau(K)}{\dim(\tau)}\, p_m(K) \sum_\lambda \frac{\dim(\lambda)^2}{|G|}\, \frac{\chi^\lambda(K)}{\dim(\lambda)} \left( \frac{\chi^\lambda(C)}{\dim(\lambda)} \right)^{k-m}.$$
The first assertion now follows from Lemma 3.4 and the fact that $\chi^\lambda(C)$ is real for all $\lambda$.

For the second assertion, note by the first assertion that
$$E(W' - W)^4 = |C|^2 \sum_{m=0}^4 (-1)^m \binom{4}{m} \sum_K \frac{\chi^\tau(K)}{\dim(\tau)}\, \frac{p_m(K)p_{4-m}(K)}{|K|}.$$
If $\tau$ is the trivial representation, then $W' = W$, which implies that
$$0 = |C|^2 \sum_{m=0}^4 (-1)^m \binom{4}{m} \sum_K \frac{p_m(K)p_{4-m}(K)}{|K|}.$$
Thus for general $\tau$,
$$E(W' - W)^4 = -|C|^2 \sum_{m=0}^4 (-1)^m \binom{4}{m} \sum_K \left( 1 - \frac{\chi^\tau(K)}{\dim(\tau)} \right) \frac{p_m(K)p_{4-m}(K)}{|K|}.$$
Observe that the $m = 0, 4$ terms in this sum vanish, since the only contribution could come from the identity, which contributes 0. The $m = 2$ term is
$$-6|C|^2 \sum_K \left( 1 - \frac{\chi^\tau(K)}{\dim(\tau)} \right) \frac{p_2(K)^2}{|K|}.$$
The $m = 1, 3$ terms are equal and together contribute
$$8|C|^2 \sum_K \left( 1 - \frac{\chi^\tau(K)}{\dim(\tau)} \right) \frac{p_1(K)p_3(K)}{|K|} = 8|C| \left( 1 - \frac{\chi^\tau(C)}{\dim(\tau)} \right) p_3(C) = 8|C|^2 \left( 1 - \frac{\chi^\tau(C)}{\dim(\tau)} \right) p_4(id) = 8|C|^2 \left( 1 - \frac{\chi^\tau(C)}{\dim(\tau)} \right) \sum_K \frac{p_2(K)^2}{|K|},$$
where $id$ is the identity and the last two equalities can be seen either directly or from Lemma 3.4. This completes the proof of the second assertion.
Putting the pieces together, one obtains the following theorem.
Theorem 3.12. Let $C$ be a conjugacy class of a finite group $G$ such that $C = C^{-1}$ and fix an irreducible representation $\tau$ of $G$ whose character is real valued, and such that $0 < \frac{\chi^\tau(C)}{\dim(\tau)} < 1$. Let $\lambda$ be a random irreducible representation, chosen from the Plancherel measure of $G$. Let $W = \frac{|C|^{1/2}\chi^\lambda(C)}{\dim(\lambda)}$. Then for all real $x_0$,
$$\left| P(W \leq x_0) - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-x^2/2}\,dx \right| \leq \frac{|C|}{a} \left( \sum_{K \neq id} \frac{p_2(K)^2}{|K|} \left( \frac{\chi^\tau(K)}{\dim(\tau)} + 2a - 1 \right)^2 \right)^{1/2} + \frac{1}{\pi^{1/4}} \left( |C|^2 \sum_K \left( 8 - \frac{6}{a}\left( 1 - \frac{\chi^\tau(K)}{\dim(\tau)} \right) \right) \frac{p_2(K)^2}{|K|} \right)^{1/4},$$
where $a = 1 - \frac{\chi^\tau(C)}{\dim(\tau)}$.
Proof. One applies Theorem 2.1 to the exchangeable pair $(W, W')$ of this subsection. To handle the first term in Theorem 2.1, note that the conditional version of Jensen's inequality (Section 4.1 of [11]) implies that
$$\mathrm{Var}(E[(W' - W)^2|W]) \leq \mathrm{Var}(E[(W' - W)^2|\lambda]).$$
Part (2) of Lemma 3.10 then gives the first term in the theorem. To upper bound the second term in Theorem 2.1, note by the Cauchy-Schwarz inequality that
$$E|W' - W|^3 \leq \sqrt{E(W' - W)^2\, E(W' - W)^4}.$$
Now use Lemma 3.9 and part (2) of Lemma 3.11.
3.3. Example: Cayley graphs of the symmetric group. This subsection applies the theory of Subsection 3.2 to the case where the group is $S_n$ and $C$ is the conjugacy class of $i$-cycles.

It is useful to recall some facts about the symmetric group. Since $K = K^{-1}$ for all conjugacy classes $K$ of $S_n$, Lemma 3.1 implies that all irreducible characters of $S_n$ are real valued. Also it is elementary that $|K| = \frac{n!}{\prod_j j^{m_j}\, m_j!}$, where $m_j$ is the number of cycles of length $j$ of an element of $K$.
In order to upper bound the error terms in Theorem 3.12, the following estimate on the two step transition probabilities of the random walk generated by $C$ will be useful.

Lemma 3.13. Let $C$ be the conjugacy class of cycles of length $i$ of the symmetric group $S_n$. Then for $i$ fixed and $n \geq 2i$, $\frac{p_2(K)^2}{|K|}$ is equal to
(1) $i^2 n^{-2i} + O(n^{-2i-1})$ if $K$ is the identity conjugacy class.
(2) $2i^2 n^{-2i} + O(n^{-2i-1})$ if $K$ is the conjugacy class consisting of two cycles of length $i$.
(3) $O(n^{-2i-1})$ otherwise.
Proof. The first assertion is clear since if $K$ is the identity class, $p_2(K) = \frac{1}{|C|}$. For the second assertion, note that $p_2(K) = 1 + O(n^{-1})$ since the only way that a product of two $i$-cycles is not in $K$ is if the $i$-cycles have a symbol in common.

For the third assertion, there are two cases. The first case is that $K$ has exactly $n - 2i$ fixed points. Then $|K|$ is at least $c_i n^{2i}$ where $c_i$ is a constant depending on $i$. Since $K$ is not the class consisting of exactly two $i$-cycles, $p_2(K) = O(n^{-1})$, proving the third assertion in this case. The second case is that $K$ has $n - 2i + r$ fixed points, where $1 \leq r < 2i$. Then $|K|$ is at least $c_i n^{2i-r}$ where $c_i$ is a constant depending on $i$. However $p_2(K) = O(n^{-r})$, since in the product of the two $i$-cycles, there are $r$ symbols moved by the first $i$-cycle each of which is mapped back to itself by the second $i$-cycle. Thus in this case also, the third assertion is proved.
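The estimates in Lemma 3.13 can be checked by brute force for small $n$. The following Python sketch (not from the paper) enumerates all products of two $i$-cycles in $S_6$ for $i = 2$ and tabulates $p_2$ by cycle type; in particular it confirms that $p_2$ of the identity class equals $1/|C|$.

import itertools
from collections import Counter

n, i = 6, 2

def cycle_type(p):
    seen, ct = set(), []
    for s in range(n):
        if s not in seen:
            length, t = 0, s
            while t not in seen:
                seen.add(t)
                t = p[t]
                length += 1
            ct.append(length)
    return tuple(sorted(ct, reverse=True))

def i_cycles():
    for support in itertools.combinations(range(n), i):
        for arrangement in itertools.permutations(support[1:]):
            cycle = (support[0],) + arrangement
            p = list(range(n))
            for a, b in zip(cycle, cycle[1:] + cycle[:1]):
                p[a] = b
            yield tuple(p)

C = list(i_cycles())
counts = Counter(cycle_type(tuple(c1[c2[x]] for x in range(n))) for c1 in C for c2 in C)
p2 = {mu: cnt / len(C) ** 2 for mu, cnt in counts.items()}
assert abs(p2[(1,) * n] - 1 / len(C)) < 1e-12     # p_2(identity class) = 1/|C|
print(len(C), p2)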
Theorem 3.14. Let $C$ be the conjugacy class of $i$-cycles in $S_n$, with $i \geq 2$. Choosing $\lambda$ from the Plancherel measure of the symmetric group, define a random variable $W = \frac{\sqrt{\binom{n}{i}(i-1)!}\,\chi^\lambda(C)}{\dim(\lambda)}$. Then there is a constant $A_i$ such that for all real $x_0$,
$$\left| P(W \leq x_0) - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-x^2/2}\,dx \right| \leq A_i n^{-1/4}.$$
Proof. One applies Theorem 3.12, choosing $\tau$ to be the irreducible representation corresponding to the partition $(n - 1, 1)$. Then $\chi^\tau(K)$ is the number of fixed points of $K$ minus 1, $\dim(\tau) = n - 1$, and $a = 1 - \frac{\chi^\tau(C)}{\dim(\tau)} = \frac{i}{n-1}$. Note that for any non-identity conjugacy class $K$,
$$\frac{p_2(K)^2}{|K|} \left( \frac{\chi^\tau(K)}{\dim(\tau)} + 2a - 1 \right)^2 = O(n^{-2i-3}).$$
This follows from Lemma 3.13 and the fact that $\frac{\chi^\tau(K)}{\dim(\tau)} + 2a - 1$ vanishes if $K$ has $n - 2i$ fixed points and is $O(n^{-1})$ for any other class $K$ with $p_2(K) \neq 0$, since such a class has at least $n - 2i$ fixed points. Since $|C| = \frac{n!}{(n-i)!\,i}$ and $a = \frac{i}{n-1}$, it follows that the first error term in Theorem 3.12 is at most $A_i n^{-1/2}$, where $A_i$ is a constant depending on $i$.

To bound the second error term in Theorem 3.12, observe that Lemma 3.13 implies that
$$\sum_K \left( 8 - \frac{6}{a}\left( 1 - \frac{\chi^\tau(K)}{\dim(\tau)} \right) \right) \frac{p_2(K)^2}{|K|}$$
is $O(n^{-2i-1})$. Indeed, the identity class contributes $8i^2 n^{-2i} + O(n^{-2i-1})$ and the class of two $i$-cycles contributes $-8i^2 n^{-2i} + O(n^{-2i-1})$. All other $K$ for which $p_2(K) \neq 0$ have at least $n - 2i$ fixed points, so $8 - \frac{6}{a}\left( 1 - \frac{\chi^\tau(K)}{\dim(\tau)} \right)$ is $O(1)$ and it follows from Lemma 3.13 that the contribution of any such $K$ is $O(n^{-2i-1})$. The same is true for the combined contribution of all such $K$ since the number of $K$ with at least $n - 2i$ fixed points is bounded by a constant depending on $i$. Hence the second error term in Theorem 3.12 is at most $A_i n^{-1/4}$ where $A_i$ is a constant depending only on $i$.
To conclude this example, we note that there are many conjugacy classes C of the
symmetric group for which W does not have a normal limit (see [21] for a detailed
discussion of the limit laws). In fact even if C has n − i fixed points (like the class
of i-cycles), it need not be the case that the distribution of W is approximately
normal.
4. Gelfand pairs
If $G$ is a finite group and $K$ is a subgroup of $G$ such that the induced representation $1_K^G$ is multiplicity free, then the pair $(G, K)$ is called a Gelfand pair. Gelfand pairs have applications in number theory (see [18] for a survey). Chapter 3 of [8]
pairs have applications in number theory (see [18] for a survey). Chapter 3 of [8]
shows that Gelfand pairs are useful for studying the convergence rate of random
walks on G/K. The purpose of this section is to show that the results of Section 3
extend to the setting of Gelfand pairs, and can be used to study the spectrum of
random walks on G/K.
Subsection 4.1 discusses the representation theory of Gelfand pairs. Subsection
4.2 derives a general central limit theorem for random spherical functions of a
Gelfand pair. Subsection 4.3 illustrates the theory on a toy example, proving a
central limit theorem for the spectrum of the hypercube. A more serious example
is considered in Subsection 4.4, which obtains a new central limit theorem for the
spectrum of certain random walks on the perfect matchings of 2n symbols. As
indicated there, this result can be stated in terms of Jack2 measure, which is of
interest to workers in random matrix theory.
4.1. Background from representation theory. We discuss some facts about the representation theory of Gelfand pairs. A useful reference is Chapter 7 of [29]. The induced representation $1_K^G$ decomposes in a multiplicity free way as $\bigoplus_{r=0}^s V_r$, where $V_0$ denotes the trivial representation of $G$. Let $d_r$ denote the dimension of $V_r$. For $0 \leq r \leq s$, let $\omega_r$ denote the corresponding spherical function on $G$, defined by
$$\omega_r(x) = \frac{1}{|K|} \sum_{k \in K} \chi^{(r)}(x^{-1}k)$$
where $\chi^{(r)}$ is the character of $V_r$. The functions $\omega_r$ are a basis of the space of functions on $G$ which are constant on the double cosets of $K$ in $G$. Let $K_0, \cdots, K_s$ denote the double cosets of $K$ in $G$ and let $g_0, \cdots, g_s$ be corresponding double coset representatives, so that $K_i = K g_i K$. It is convenient to take $g_0$ to be the identity element of $G$. From page 389 of [29], one has that $\omega_i(g_0) = 1$ for all $i$.
Lemma 4.1 ([29], page 389). Let $\omega$ be a spherical function of the Gelfand pair $(G, K)$. Then $\omega(x^{-1}) = \overline{\omega(x)}$.

The following two orthogonality relations are also useful.
Lemma 4.2 ([29], page 389). For $0 \leq i, j \leq s$,
$$\frac{d_i}{|G|} \sum_{r=0}^s |K_r|\, \omega_i(g_r)\overline{\omega_j(g_r)} = \delta_{i,j}.$$

Lemma 4.3. For $0 \leq r, t \leq s$,
$$\sum_{i=0}^s d_i\, \omega_i(g_r)\overline{\omega_i(g_t)} = \delta_{r,t}\, \frac{|G|}{|K_r|}.$$

Proof. Consider the $(s+1) \times (s+1)$ matrix whose entry in the $i$th column and $r$th row is $\sqrt{\frac{d_i|K_r|}{|G|}}\, \omega_i(g_r)$. By Lemma 4.2, the columns of this matrix are orthonormal. Hence so are its rows, proving the lemma.
If P is a K biinvariant probability on G (i.e. constant on double cosets of K in
G), let pm (Kr ) denote the probability that the m-fold convolution of P assigns to
the double coset Kgr K. Lemma 4.4 is an analog of Lemma 3.4 and could be proved
along similar lines, as in [19]. Instead, we use the language of Fourier analysis, as
developed on page 395 of [29].
Lemma 4.4. Let $P$ be the $K$ biinvariant probability on $G$ which associates mass 1 to the double coset $K_u$ and mass 0 to all other $K$ double cosets of $G$. Then for $0 \leq r \leq s$,
$$p_m(K_r) = |K_r| \sum_{i=0}^s \frac{d_i}{|G|}\, \omega_i(g_u)^m\, \omega_i(g_r).$$

Proof. If $f$ is a complex valued $K$ biinvariant function on $G$, define $\hat{f}(\omega_i)$ (the Fourier transform of $f$ at the spherical function $\omega_i$) by
$$\hat{f}(\omega_i) = \sum_{g \in G} f(g)\omega_i(g).$$
Then the Fourier inversion theorem gives that
$$f = \frac{1}{|G|} \sum_{i=0}^s \hat{f}(\omega_i)\, d_i\, \omega_i.$$
It is also true that the Fourier transform of the convolution of two $K$ biinvariant functions is the product of their Fourier transforms. Thus the $m$-fold convolution of $f$ is equal to
$$\frac{1}{|G|} \sum_{i=0}^s \hat{f}(\omega_i)^m\, d_i\, \omega_i.$$
The lemma now follows by taking $f = P$.
To connect the study of random spherical functions with spectral graph theory, suppose for convenience that $(G, K)$ is a symmetric Gelfand pair, which means that $KgK = Kg^{-1}K$ for all $g$. This condition holds for the examples in Subsections 4.3 and 4.4. Then fixing a double coset $K_u$, one can define a graph $H_u$ whose vertices are the right cosets of $K$ by connecting $Kh_1$ to $Kh_2$ if and only if $Kh_1h_2^{-1}K = K_u$. A more general construction appears in [28]. The graph $H_u$ is vertex transitive, since $G$ acts transitively on the right cosets of $K$ by sending $Kh$ to $Khg^{-1}$, and $Kh_1$ is connected to $Kh_2$ if and only if $Kh_1g^{-1}$ is connected to $Kh_2g^{-1}$. Lemma 4.5 determines the spectrum of random walk on $H_u$.
Lemma 4.5. Let $(G, K)$ be a symmetric Gelfand pair. Then for any double coset $K_u$, the random walk on the graph $H_u$ has eigenvalues $\omega_i(g_u)$ occurring with multiplicity $d_i$ for $0 \leq i \leq s$.

Proof. Let $M$ be the transition matrix for the random walk on $H_u$. Since $H_u$ is vertex transitive, the trace of $M^k$ is $|G|/|K|$ multiplied by the probability that the random walk on $H_u$ started at the right coset $K$ is at $K$ after $k$ steps. By Theorem 7.5 of [28], this return probability is
$$\frac{|K|}{|G|} \sum_{i=0}^s d_i\, \omega_i(g_u)^k,$$
so that the trace of $M^k$ is $\sum_{i=0}^s d_i\, \omega_i(g_u)^k$. Since this is true for all $k \geq 0$, the result follows.
Note that since $d_0 + \cdots + d_s = |G/K|$, one can define a probability measure on $\{0, \cdots, s\}$ (or equivalently on the set $\{\omega_0, \cdots, \omega_s\}$) by choosing $i$ with probability $\frac{d_i|K|}{|G|}$. This is the Gelfand pair analog of the Plancherel measure of a finite group, and in this paper it will be referred to as Plancherel measure. Lemma 4.5 showed that if $(G, K)$ is a symmetric Gelfand pair, then the eigenvalues of certain graphs on $G/K$ occur with multiplicity proportional to Plancherel measure.
4.2. Central limit theorem for spherical functions. The aim of this subsection is to prove a central limit theorem for the random variable $W$ defined by $W(i) = \frac{|K_u|^{1/2}}{|K|^{1/2}}\,\omega_i(g_u)$. Here $g_u$ is fixed satisfying $Kg_uK = Kg_u^{-1}K$ and $i$ is random from the Plancherel measure of the Gelfand pair $(G, K)$. Lemma 4.1 implies that $W$ is real valued. It is not assumed that the Gelfand pair $(G, K)$ is symmetric. Note that by Lemma 4.3, $E(W) = 0$ if $Kg_uK \neq K$, and $E(W^2) = 1$.

To construct an exchangeable pair to be used for Stein's method, it is helpful to define a Markov chain on the set of spherical functions of $(G, K)$. Motivated by the construction in Subsection 3.2, we proceed as follows. Fix $t$ with $0 \leq t \leq s$ such that the spherical function $\omega_t$ is real-valued. Let $L_t$ be the Markov chain on the set $\{\omega_0, \cdots, \omega_s\}$ which transitions from $\omega_i$ to $\omega_j$ with probability
$$L_t(\omega_i, \omega_j) := \frac{d_j}{|G|} \sum_{r=0}^s |K_r|\, \omega_i(g_r)\omega_t(g_r)\overline{\omega_j(g_r)}.$$
Lemma 4.6 verifies that Lt is a Markov chain which is reversible with respect to
Plancherel measure.
Lemma 4.6. Let t be such that the spherical function ωt is real-valued. Then the
transition probabilities of Lt are real and non-negative and sum to 1. Moreover the
chain Lt is reversible with respect to the Plancherel measure of the pair (G, K).
Proof. From page 396 of [29], if $a_{it}^k$ are defined by
$$\omega_i\omega_t = \sum_{k=0}^s a_{it}^k\, \omega_k$$
(where the notation $\omega_i\omega_t$ denotes the pointwise product), then the $a_{it}^k$ are real and non-negative. Hence Lemma 4.2 implies that $L_t(\omega_i, \omega_j)$ is real and non-negative. Since $\omega_j(g_0) = 1$ for all $j$, Lemma 4.3 gives that
$$\sum_{j=0}^s L_t(\omega_i, \omega_j) = \sum_{r=0}^s |K_r|\, \omega_i(g_r)\omega_t(g_r)\, \frac{1}{|G|} \sum_{j=0}^s d_j\, \omega_j(g_0)\overline{\omega_j(g_r)} = 1.$$
Reversibility of $L_t$ with respect to Plancherel measure is equivalent to showing that
$$\frac{d_i d_j |K|}{|G|^2} \sum_{r=0}^s |K_r|\, \omega_i(g_r)\omega_t(g_r)\overline{\omega_j(g_r)} = \frac{d_i d_j |K|}{|G|^2} \sum_{r=0}^s |K_r|\, \omega_j(g_r)\omega_t(g_r)\overline{\omega_i(g_r)}.$$
Both sides are real by the previous paragraph, so the result follows by the assumption that $\omega_t$ is real valued.
An exchangeable pair $(W, W')$ to be used in a Stein's method approach to studying $W$ can be constructed from the chain $L_t$ in the usual way. First choose $i$ from Plancherel measure, then choose $j$ with probability $L_t(\omega_i, \omega_j)$, and finally let $(W, W') = (W(\omega_i), W(\omega_j))$. As in the group case, it is useful to have $W'$ close to $W$, and thus one typically wants to choose $\omega_t$ as close as possible to the trivial spherical function.

Lemma 4.7. $E(W'|W) = \omega_t(g_u)W$.
Proof. From the definitions and Lemma 4.3,
$$E(W'|\omega_i) = \frac{|K_u|^{1/2}}{|K|^{1/2}} \sum_{j=0}^s \frac{d_j}{|G|} \sum_{r=0}^s |K_r|\, \omega_i(g_r)\omega_t(g_r)\overline{\omega_j(g_r)}\, \omega_j(g_u) = \frac{|K_u|^{1/2}}{|K|^{1/2}} \sum_{r=0}^s |K_r|\, \omega_i(g_r)\omega_t(g_r)\, \frac{1}{|G|} \sum_{j=0}^s d_j\, \omega_j(g_u)\overline{\omega_j(g_r)} = \omega_t(g_u)W(\omega_i).$$
The result follows since this depends on $\omega_i$ only through $W$.
Corollary 4.8 is not needed in the sequel but is interesting.

Corollary 4.8. The eigenvalues of $L_t$ are $\omega_t(g_r)$ for $0 \leq r \leq s$. The functions $\psi_r(\omega_i) = \frac{|K_r|^{1/2}}{|K|^{1/2}}\,\omega_i(g_r)$ are a basis of eigenvectors of $L_t$, orthonormal with respect to the inner product
$$\langle f_1, f_2 \rangle = \sum_{i=0}^s f_1(\omega_i)\overline{f_2(\omega_i)}\, \frac{d_i|K|}{|G|}.$$

Proof. By the proof of Lemma 4.7, $\psi_r$ is an eigenvector of $L_t$ with eigenvalue $\omega_t(g_r)$. The orthonormality assertion follows from Lemma 4.3, and the basis assertion follows since the number of spherical functions is equal to the number of $K$ double cosets of $G$.

Lemma 4.9. $E(W' - W)^2 = 2(1 - \omega_t(g_u))$.

Proof. This is immediate from Lemmas 2.3 and 4.7.
Lemma 4.10.
(1)
$$E[(W' - W)^2|\omega_i] = \frac{|K_u|}{|K|} \sum_{r=0}^s p_2(K_r)\, (\omega_t(g_r) + 1 - 2\omega_t(g_u))\, \omega_i(g_r).$$
(2)
$$\mathrm{Var}(E[(W' - W)^2|\omega_i]) = \frac{|K_u|^2}{|K|} \sum_{r=1}^s \frac{p_2(K_r)^2}{|K_r|}\, (\omega_t(g_r) + 1 - 2\omega_t(g_u))^2.$$
Note that the sum in the second part of the lemma begins at $r = 1$ and so excludes the $r = 0$ term, which corresponds to the identity double coset $K_0 = K$.
Proof. Observe that
$$E((W')^2|\omega_i) = \frac{|K_u|}{|K|} \sum_{j=0}^s \frac{d_j}{|G|} \sum_{r=0}^s |K_r|\, \omega_i(g_r)\omega_t(g_r)\overline{\omega_j(g_r)}\, \omega_j(g_u)^2 = \frac{|K_u|}{|K|} \sum_{r=0}^s |K_r|\, \omega_i(g_r)\omega_t(g_r)\, \frac{1}{|G|} \sum_{j=0}^s d_j\, \omega_j(g_u)^2\overline{\omega_j(g_r)} = \frac{|K_u|}{|K|} \sum_{r=0}^s \omega_i(g_r)\omega_t(g_r)\, p_2(K_r).$$
The second equality switched the order of summation, and the final equality was Lemma 4.4.

Next we claim that $W^2 = \frac{|K_u|}{|K|} \sum_{r=0}^s p_2(K_r)\,\omega_i(g_r)$. To see this note by Lemma 4.4 that
$$|K_r| \sum_{j=0}^s \frac{d_j}{|G|}\, \omega_j(g_u)^2\, \omega_j(g_r) = p_2(K_r).$$
Multiplying both sides by $\frac{|K_u|}{|K|}\,\omega_i(g_r)$ and summing over $r$, the claimed expression for $W^2$ follows from Lemma 4.2.

To prove the first part of the lemma, note that Lemma 4.7 gives that
$$E[(W' - W)^2|\omega_i] = E((W')^2|\omega_i) - 2W E(W'|\omega_i) + W^2 = E((W')^2|\omega_i) + (1 - 2\omega_t(g_u))\, W^2.$$
Now use the previous two paragraphs.

The first part of the lemma and Lemma 4.3 imply that
$$E\left( E[(W' - W)^2|\omega_i]^2 \right) = \frac{|K_u|^2}{|K|} \sum_{r=0}^s \frac{p_2(K_r)^2}{|K_r|}\, (\omega_t(g_r) + 1 - 2\omega_t(g_u))^2.$$
From Lemmas 4.4 and 4.3, $p_2(K_0) = \frac{|K|}{|K_u|}$, so the $r = 0$ term contributes $4(1 - \omega_t(g_u))^2$. The second part of the lemma follows since by Lemma 4.9,
$$\mathrm{Var}(E[(W' - W)^2|\omega_i]) = E(E[(W' - W)^2|\omega_i]^2) - 4(1 - \omega_t(g_u))^2.$$
Lemma 4.11. Let $k$ be a positive integer.
(1) $E(W' - W)^k$ is equal to
$$\left( \frac{|K_u|}{|K|} \right)^{k/2} \sum_{m=0}^k (-1)^{k-m} \binom{k}{m} \sum_{r=0}^s \frac{|K|}{|K_r|}\, \omega_t(g_r)\, p_m(K_r)p_{k-m}(K_r).$$
(2) $E(W' - W)^4 = \frac{|K_u|^2}{|K|} \sum_{r=0}^s \left[ 8(1 - \omega_t(g_u)) - 6(1 - \omega_t(g_r)) \right] \frac{p_2(K_r)^2}{|K_r|}$.
Proof. For the first assertion, note that $E((W' - W)^k|\omega_i)$ is equal to
$$\left( \frac{|K_u|}{|K|} \right)^{k/2} \sum_{j=0}^s \frac{d_j}{|G|} \sum_{r=0}^s |K_r|\, \omega_i(g_r)\omega_t(g_r)\overline{\omega_j(g_r)} \cdot \sum_{m=0}^k (-1)^{k-m} \binom{k}{m}\, \omega_j(g_u)^m\, \omega_i(g_u)^{k-m}$$
$$= \left( \frac{|K_u|}{|K|} \right)^{k/2} \sum_{m=0}^k (-1)^{k-m} \binom{k}{m}\, \omega_i(g_u)^{k-m} \sum_{r=0}^s |K_r|\, \omega_i(g_r)\omega_t(g_r)\, \frac{1}{|G|} \sum_{j=0}^s d_j\, \omega_j(g_u)^m\, \overline{\omega_j(g_r)}$$
$$= \left( \frac{|K_u|}{|K|} \right)^{k/2} \sum_{m=0}^k (-1)^{k-m} \binom{k}{m}\, \omega_i(g_u)^{k-m} \sum_{r=0}^s \omega_i(g_r)\omega_t(g_r)\, p_m(K_r).$$
The first equality switched the order of summation and the last equality is Lemma 4.4. Hence
$$E(W' - W)^k = E(E((W' - W)^k|\omega_i)) = \left( \frac{|K_u|}{|K|} \right)^{k/2} \sum_{m=0}^k (-1)^{k-m} \binom{k}{m} \sum_{r=0}^s \omega_t(g_r)\, p_m(K_r) \sum_{i=0}^s \frac{d_i|K|}{|G|}\, \omega_i(g_r)\omega_i(g_u)^{k-m}.$$
The first assertion follows from Lemma 4.4 and the fact that $\omega_i(g_u)$ is real.

For the second assertion, one knows from the first assertion that
$$E(W' - W)^4 = \left( \frac{|K_u|}{|K|} \right)^2 \sum_{m=0}^4 (-1)^m \binom{4}{m} \sum_{r=0}^s \frac{|K|}{|K_r|}\, \omega_t(g_r)\, p_m(K_r)p_{4-m}(K_r).$$
The special case $t = 0$ gives the equation
$$0 = \left( \frac{|K_u|}{|K|} \right)^2 \sum_{m=0}^4 (-1)^m \binom{4}{m} \sum_{r=0}^s \frac{|K|}{|K_r|}\, p_m(K_r)p_{4-m}(K_r).$$
Hence in general, $E(W' - W)^4$ is equal to
$$-\frac{|K_u|^2}{|K|^2} \sum_{m=0}^4 (-1)^m \binom{4}{m} \sum_{r=0}^s \frac{|K|}{|K_r|}\, (1 - \omega_t(g_r))\, p_m(K_r)p_{4-m}(K_r).$$
The $m = 0, 4$ terms both vanish since $p_0(K_r) = 0$ if $r \neq 0$. The $m = 2$ term is equal to
$$-\frac{6|K_u|^2}{|K|} \sum_{r=0}^s \frac{(1 - \omega_t(g_r))\, p_2(K_r)^2}{|K_r|}.$$
The $m = 1, 3$ terms are equal and together contribute
$$\frac{8|K_u|^2}{|K|} \sum_{r=0}^s \frac{(1 - \omega_t(g_r))\, p_1(K_r)p_3(K_r)}{|K_r|} = \frac{8|K_u|}{|K|}\, (1 - \omega_t(g_u))\, p_3(K_u) = \frac{8|K_u|^2}{|K|^2}\, (1 - \omega_t(g_u))\, p_4(K_0) = \frac{8|K_u|^2}{|K|}\, (1 - \omega_t(g_u)) \sum_{r=0}^s \frac{p_2(K_r)^2}{|K_r|}.$$
The last two equalities can be seen directly or from Lemma 4.4. This completes the proof of the second assertion.
Arguing as in the proof of Theorem 3.12, and using the above lemmas, one
obtains the following result.
Theorem 4.12. Let $(G, K)$ be a Gelfand pair, and fix a double coset $K_u = Kg_uK$ of $G$ satisfying $Kg_uK = Kg_u^{-1}K$. Let $\omega_t$ be a real valued spherical function such that $0 < \omega_t(g_u) < 1$. Choosing $\omega_i$ from the Plancherel measure of the pair $(G, K)$, let $W = \frac{|K_u|^{1/2}}{|K|^{1/2}}\,\omega_i(g_u)$. Then for all real $x_0$,
$$\left| P(W \leq x_0) - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-x^2/2}\,dx \right| \leq \frac{|K_u|}{a|K|} \left( \sum_{r=1}^s \frac{|K|\,p_2(K_r)^2}{|K_r|}\, (\omega_t(g_r) + 2a - 1)^2 \right)^{1/2} + \frac{1}{\pi^{1/4}}\, \frac{|K_u|^{1/2}}{|K|^{1/2}} \left( \sum_{r=0}^s \left( 8 - \frac{6(1 - \omega_t(g_r))}{a} \right) \frac{|K|\,p_2(K_r)^2}{|K_r|} \right)^{1/4},$$
where $a = 1 - \omega_t(g_u)$.
4.3. Example: The hypercube. The $n$ dimensional hypercube $\mathbb{Z}_2^n$ consists of $n$-tuples of 0's and 1's. Random walk on it proceeds by picking a random coordinate and changing it. The spectrum of this random walk is well known; for each $0 \leq i \leq n$ there is an eigenvalue $1 - \frac{2i}{n}$ occurring with multiplicity $\binom{n}{i}$ (see for instance page 28 of [8]). Thus by the usual central limit theorem for the binomial distribution, the spectrum of the hypercube is asymptotically normal with an error term $O(n^{-1/2})$. The purpose of this subsection is to revisit this classical result from the viewpoint of Gelfand pairs, illustrating the construction of Subsection 4.2.

To begin, note from the remarks on page 58 of [8] that the hypercube can be viewed as $G/K$ for a certain Gelfand pair $(G, K)$. Namely $G$ is the semidirect product of $\mathbb{Z}_2^n$ with $S_n$, where the group multiplication is $(x, \pi)(y, \tau) = (x + \pi(y), \pi\tau)$, where $\pi(y)$ permutes the coordinates of $y$. $K$ is the subgroup $\{(0, \pi) : \pi \in S_n\}$. The induced module $1_K^G$ decomposes as $\bigoplus_{r=0}^n V_r$ where $d_r = \binom{n}{r}$. Thus the Plancherel measure chooses $i \in \{0, \cdots, n\}$ with probability $\frac{\binom{n}{i}}{2^n}$. For $0 \leq r \leq n$, the double coset $K_r$ in $G$ consists of elements $(x, \pi)$ where $x$ has $r$ coordinates equal to 1. Thus $|K_r| = \binom{n}{r} n!$, and as usual let $g_r$ denote some element of $K_r$. The spherical function $\omega_i$ ($0 \leq i \leq n$) is given by
$$\omega_i(g_r) = \frac{1}{\binom{n}{i}} \sum_{m=0}^i (-1)^m \binom{r}{m}\binom{n - r}{i - m},$$
which is a Krawtchouk polynomial if one overlooks the $\binom{n}{i}$ in the denominator.
The following result emerges from Theorem 4.12. Since $\omega_i(g_1) = 1 - \frac{2i}{n}$ (or by Lemma 4.5), it can be seen as a central limit theorem for the spectrum of random walk on the hypercube.

Theorem 4.13. Let $W = \sqrt{n}\,\omega_i(g_1)$ where $i$ is chosen from Plancherel measure. Then for all real $x_0$,
$$\left| P(W \leq x_0) - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-x^2/2}\,dx \right| \leq \left( \frac{8}{\pi n} \right)^{1/4}.$$
Proof. Apply Theorem 4.12 with $u = 1$ and $t = 1$. Then $a = \frac{2}{n}$. Note that $p_2(K_r)$ is the chance that random walk on the hypercube started at the vertex with all coordinates 0 will have $r$ coordinates equal to 1 after 2 steps. Thus $p_2(K_0) = \frac{1}{n}$, $p_2(K_2) = 1 - \frac{1}{n}$, and $p_2(K_r) = 0$ for all $r \neq 0, 2$. Since $\omega_1(g_2) = 1 - \frac{4}{n}$, one has that $\omega_1(g_2) + 2a - 1 = 0$. Thus the first error term in Theorem 4.12 is 0. The second error term in Theorem 4.12 is computed to be $\left( \frac{8}{\pi n} \right)^{1/4}$, implying the result.
To get $O(n^{-1/2})$ bounds by Stein's method, it will be shown that $|W' - W|$ is bounded, so that one can use the version of Stein's method in Theorem 2.2 instead of that in Theorem 2.1. The boundedness of $|W' - W|$ will follow from Lemma 4.14, which proves that $L_1$ is in fact a birth-death chain.

Lemma 4.14. The chain $L_1$ on the set $\{0, \cdots, n\}$ is a birth-death chain with transition probabilities
$$L_1(i, j) = \begin{cases} \frac{i}{n} & \text{if } j = i - 1, \\ 1 - \frac{i}{n} & \text{if } j = i + 1. \end{cases}$$
Proof. From the three term recurrence for Krawtchouk polynomials on page 152 of [30], it follows that
$$(i + 1)\binom{n}{i+1}\omega_{i+1}(g_r) = (n - 2r)\binom{n}{i}\omega_i(g_r) - (n - i + 1)\binom{n}{i-1}\omega_{i-1}(g_r).$$
Since $n - 2r = n\,\omega_1(g_r)$, one obtains that
$$n\binom{n}{i}\omega_1(g_r)\omega_i(g_r) = (i + 1)\binom{n}{i+1}\omega_{i+1}(g_r) + (n - i + 1)\binom{n}{i-1}\omega_{i-1}(g_r).$$
Simplifying, one obtains the relation
$$\omega_1(g_r)\omega_i(g_r) = \left( 1 - \frac{i}{n} \right)\omega_{i+1}(g_r) + \frac{i}{n}\,\omega_{i-1}(g_r).$$
The result now follows from Lemma 4.2 and the definition of $L_1$.
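The birth-death description makes the key identities easy to verify in exact arithmetic. The following Python sketch (not from the paper) checks, for a fixed $n$, that $E(W'|i) = (1 - 2/n)W(i)$ as in Lemma 4.7 and that $E[(W' - W)^2|i]$ is constant in $i$, which is the fact used in the proof of Theorem 4.15 below.

from fractions import Fraction

n = 12
W = lambda i: Fraction(n - 2 * i)   # this is sqrt(n) times the W of Theorem 4.13

for i in range(n + 1):
    down, up = Fraction(i, n), Fraction(n - i, n)        # L_1(i, i-1) and L_1(i, i+1)
    mean_next = down * W(i - 1) + up * W(i + 1)
    assert mean_next == (1 - Fraction(2, n)) * W(i)       # matches E(W'|W) = (1 - 2/n) W
    second = down * (W(i - 1) - W(i)) ** 2 + up * (W(i + 1) - W(i)) ** 2
    assert second == 4   # constant in i; dividing by n recovers E[(W'-W)^2|i] = 4/n in the paper's scale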
Now an O(n−1/2 ) error term is established.
Theorem 4.15. Let $W = \sqrt{n}\,\omega_i(g_1)$ where $i$ is chosen from Plancherel measure. Then for all real $x_0$,
$$\left| P(W \leq x_0) - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-x^2/2}\,dx \right| \leq 5n^{-1/2}.$$

Proof. Apply the variation of Theorem 4.12 which would arise from using Theorem 2.2 instead of Theorem 2.1. Recall that $a = \frac{2}{n}$. Also $|W' - W| \leq A$ with $A = \frac{2}{\sqrt{n}}$ since $L_1$ is a birth-death chain. As explained in the proof of Theorem 4.13, $\mathrm{Var}(E[(W' - W)^2|W]) = 0$. The result follows.
As a final remark, Lemma 4.14 shows that L1 is closely related to the birth-death
chain used by Stein [41] in proving a central limit theorem for X1 + ... + Xn , where
the X’s are independent and each equal to 0 or 1 with probability 1/2. To form an
exchangeable pair Stein chose a random l ∈ {1, · · · , n} and then replaced Xl by a
new random variable which is also 0 or 1 with probability 1/2 and independent of
all of the X’s. The chain L1 would arise by picking a random index l and switching
the value of Xl .
4.4. Example: Random walk on perfect matchings. In this example $G = S_{2n}$ and $K$ is the hyperoctahedral group of signed permutations on $n$ symbols, of size $2^n n!$. This Gelfand pair is discussed at length in Chapter 7 of [29] and in Section 3 of [19], to which we refer the reader for proofs of facts in the next two paragraphs. Another useful reference is [9].

The induced representation $1_K^G$ decomposes as $\bigoplus_{|\lambda|=n} V_\lambda$, where $\lambda$ ranges over all partitions of size $n$. An explicit formula for the numbers $d_\lambda$ appears in [29] and is not needed in what follows. However it is worth remarking that the Plancherel measure of $(G, K)$, which chooses $\lambda$ with probability $\frac{2^n n!\, d_\lambda}{(2n)!}$, is the $\alpha = 2$ case of the so called Jack$_\alpha$ measure on partitions, which chooses $\lambda$ with probability
$$\frac{\alpha^n n!}{\prod_{s \in \lambda} (\alpha a(s) + l(s) + 1)(\alpha a(s) + l(s) + \alpha)}$$
where the product is over all boxes of the partition. Here $a(s)$ is the number of boxes in the same row of $s$ and to the right of $s$ (the "arm" of $s$), and $l(s)$ is the number of boxes in the same column of $s$ and below $s$ (the "leg" of $s$). For example, the partition $(3, 2)$ of 5 (whose diagram has rows of lengths 3 and 2) would have Jack$_\alpha$ measure
$$\frac{60\alpha^2}{(2\alpha + 2)(3\alpha + 1)(\alpha + 2)(2\alpha + 1)(\alpha + 1)}.$$
The Jack measure on partitions is of interest to researchers in random matrix theory
[4], [33], [27].
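As a check on the arm-leg formula (not from the paper), the following Python/sympy sketch evaluates the Jack$_\alpha$ measure of a partition and reproduces the displayed value for the partition $(3, 2)$ of 5.

from math import factorial
import sympy

alpha = sympy.symbols('alpha', positive=True)

def jack_measure(la):
    n = sum(la)
    denom = sympy.Integer(1)
    for i, row in enumerate(la):
        for j in range(row):
            a = row - j - 1                              # arm of the box in row i, column j
            l = sum(1 for r in la[i + 1:] if r > j)      # leg of the box in row i, column j
            denom *= (alpha * a + l + 1) * (alpha * a + l + alpha)
    return sympy.simplify(alpha ** n * factorial(n) / denom)

expected = 60 * alpha ** 2 / ((2*alpha + 2) * (3*alpha + 1) * (alpha + 2) * (2*alpha + 1) * (alpha + 1))
assert sympy.cancel(jack_measure((3, 2)) - expected) == 0
print(jack_measure((3, 2)))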
The double cosets $K_\mu$ of $K$ in $G$ are parameterized by partitions $\mu$ of size $n$ and have the following concrete description. A perfect matching of $\{1, \cdots, 2n\}$ can be regarded as a 1-regular graph with vertex set $\{1, \cdots, 2n\}$. Let $\epsilon$ be the "identity matching" in which $i$ is adjacent to $n + i$ for $1 \leq i \leq n$. Given a permutation $w$ in $S_{2n}$, let $\delta(w)$ be the perfect matching of $\{1, \cdots, 2n\}$ in which $i$ is adjacent to $j$ if and only if $|w(i) - w(j)| = n$. For example, $\delta(id) = \epsilon$. Note that the union $\delta_1 \cup \delta_2$ of two 1-regular graphs is a 2-regular graph, and thus a disjoint union of even length cycles. Let $\Lambda(\delta_1, \delta_2)$ be the partition of $n$ whose parts are half the cycle lengths of $\delta_1 \cup \delta_2$. Then $Kw_1K = Kw_2K$ if and only if $\Lambda(\epsilon, \delta(w_1)) = \Lambda(\epsilon, \delta(w_2))$. Thus $Kw^{-1}K = KwK$ for all $w$, so that the machinery of Subsection 4.2 is applicable. One also has that $Kw_1 = Kw_2$ if and only if $\delta(w_1) = \delta(w_2)$. It follows that $G/K$ is in bijection with the perfect matchings of $\{1, \cdots, 2n\}$ and that $\frac{|K_\mu|}{|K|}$ is equal to the number of perfect matchings $\delta$ with $\Lambda(\epsilon, \delta) = \mu$, which by elementary counting is $\frac{2^n n!}{2^{l(\mu)} \prod_j m_j(\mu)!\, j^{m_j(\mu)}}$ where $l(\mu)$ is the number of parts of $\mu$ and $m_j(\mu)$ is the number of parts of $\mu$ of size $j$.
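The counting formula for $|K_\mu|/|K|$ is easy to verify by exhaustive enumeration for small $n$. The following Python sketch (not from the paper) lists all perfect matchings of $2n$ points, computes $\Lambda$ against the identity matching, and compares the counts by type with the displayed formula.

from collections import Counter
from math import factorial, prod

n = 4
eps = {i: i + n for i in range(n)}
eps.update({i + n: i for i in range(n)})           # identity matching: i <-> n + i

def matchings(points):
    if not points:
        yield {}
        return
    a = points[0]
    for k in range(1, len(points)):
        b = points[k]
        rest = points[1:k] + points[k + 1:]
        for m in matchings(rest):
            m = dict(m)
            m[a], m[b] = b, a
            yield m

def Lambda(m1, m2):
    seen, parts = set(), []
    for start in m1:
        if start not in seen:
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                seen.add(m1[x])
                x = m2[m1[x]]
                length += 1
            parts.append(length)                   # half the length of each cycle of m1 union m2
    return tuple(sorted(parts, reverse=True))

counts = Counter(Lambda(eps, d) for d in matchings(tuple(range(2 * n))))
for mu, cnt in counts.items():
    mult = Counter(mu)
    predicted = 2 ** n * factorial(n) // (2 ** len(mu) * prod(factorial(m) * j ** m for j, m in mult.items()))
    assert cnt == predicted
print(dict(counts))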
The main purpose of this subsection is to prove a central limit theorem for the random variable $W = \sqrt{2^{i-1}\binom{n}{i}(i - 1)!}\,\omega_\lambda(g_{(i,1^{n-i})})$, where $i$ is fixed and $\lambda$ is chosen from the Plancherel measure of $(G, K)$. Then Lemma 4.5 gives a central limit theorem for the spectrum of random walk on the graph $H_{(i,1^{n-i})}$, whose vertices are the perfect matchings of $\{1, \cdots, 2n\}$, with an edge between matchings $\delta_1$ and $\delta_2$ if and only if $\Lambda(\delta_1, \delta_2) = (i, 1^{n-i})$. In the case $i = 2$, the spectrum of this graph was determined in [9], and was shown to be asymptotically normal in [15] (with error term $O(n^{-1/4})$) and in [16] (with error term $O(n^{-1/2})$). But the arguments in those papers used different information, and it was not clear that they could be pushed through to larger $i$. Theorem 4.12 will be used to deduce a central limit theorem for any fixed $i$ with error term $O(n^{-1/4})$.
To apply Theorem 4.12, $\omega_t$ will be taken to be the spherical function $\omega_{(n-1,1)}$. An explicit formula for $\omega_{(n-1,1)}$ is available.

Lemma 4.16 ([29], p. 411). Let $m_1(\mu)$ denote the number of parts of size 1 of $\mu$. Then
$$\omega_{(n-1,1)}(g_\mu) = \frac{(2n - 1)m_1(\mu) - n}{2n(n - 1)}$$
for all $\mu$ of size $n$.
Lemma 4.17 is helpful.

Lemma 4.17 ([19], Lemma 3.2). The coefficient of $K_\mu$ in $K_\tau K_{(i,1^{n-i})}$ is equal to $\frac{|K|^2}{|K_\mu|}$ multiplied by the number of pairs of perfect matchings $(\delta, \gamma)$ such that $\Lambda(\epsilon, \delta) = \tau$, $\Lambda(\delta, \gamma) = (i, 1^{n-i})$, and $\Lambda(\epsilon, \gamma) = \mu$.
The final combinatorial ingredient is an analog of Lemma 3.13.

Lemma 4.18. Consider the random walk on $G$ generated by $K_{(i,1^{n-i})}$. Then for $i$ fixed and $n \geq 2i$, $\frac{p_2(K_\mu)^2|K|}{|K_\mu|}$ is equal to
(1) $\frac{i^2}{4^{i-1}}\, n^{-2i} + O(n^{-2i-1})$ if $K_\mu = K_{(1^n)} = K$.
(2) $\frac{2i^2}{4^{i-1}}\, n^{-2i} + O(n^{-2i-1})$ if $K_\mu = K_{(i,i,1^{n-2i})}$.
(3) $O(n^{-2i-1})$ otherwise.

Proof. Lemma 4.17 implies that $\frac{p_2(K_\mu)|K_{(i,1^{n-i})}|^2}{|K|^2}$ is equal to the number of pairs of matchings $(\delta, \gamma)$ such that $\Lambda(\epsilon, \delta) = (i, 1^{n-i})$, $\Lambda(\delta, \gamma) = (i, 1^{n-i})$, and $\Lambda(\epsilon, \gamma) = \mu$.

For the first assertion, it follows either from the previous paragraph or from Lemmas 4.4 and 4.3 that $p_2(K) = \frac{1}{2^{i-1}\binom{n}{i}(i-1)!}$. For the second assertion, it is straightforward from the previous paragraph that $p_2(K_{(i,i,1^{n-2i})}) = 1 + O(n^{-1})$. Then use the formula for $\frac{|K_\mu|}{|K|}$ given earlier in this subsection.
For the third assertion, there are two cases. The first case is that $\mu$ has $n - 2i$ parts of size 1, but is not equal to $(i, i, 1^{n-2i})$. Then $p_2(K_\mu) = O(n^{-1})$ and the formula for $\frac{|K|}{|K_\mu|}$ shows it to be at most $c_i n^{-2i}$ where $c_i$ is a constant depending on $i$. So in the first case, the result is proved. The second case is that $\mu$ has $n - 2i + r$ parts of size 1, where $1 \leq r < 2i$. From the first paragraph of the proof, it is straightforward to see that $p_2(K_\mu) = O(n^{-r})$. Also the formula for $\frac{|K|}{|K_\mu|}$ shows it to be $O(n^{r-2i})$, proving the result.
Combining the ingredients, one deduces the following result.

Theorem 4.19. Choose $\lambda$ from the Plancherel measure of the Gelfand pair $(G, K)$, and define a random variable $W = \sqrt{2^{i-1}\binom{n}{i}(i - 1)!}\,\omega_\lambda(g_{(i,1^{n-i})})$. Then there is a constant $A_i$ such that for all real $x_0$,
$$\left| P(W \leq x_0) - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-x^2/2}\,dx \right| \leq A_i n^{-1/4}.$$
Proof. One applies Theorem 4.12, choosing $u = (i, 1^{n-i})$ and $\omega_t$ to be the spherical function $\omega_{(n-1,1)}$. By Lemma 4.16, $\omega_t$ is real valued and $a = \frac{i(2n-1)}{2n(n-1)}$. Observe that $\omega_t(g_\mu) + 2a - 1$ vanishes if $\mu = (i, i, 1^{n-2i})$ and by Lemma 4.16 is $O(n^{-1})$ for any other $\mu$ such that $p_2(K_\mu) \neq 0$, since any such $\mu$ has at least $n - 2i$ parts of size 1. Thus by Lemma 4.18,
$$\sum_{\mu \neq (1^n)} \frac{|K|\,p_2(K_\mu)^2}{|K_\mu|}\, (\omega_t(g_\mu) + 2a - 1)^2 = O(n^{-2i-3}).$$
Since $\frac{|K_u|}{|K|} = 2^{i-1}\binom{n}{i}(i-1)!$, the first error term in Theorem 4.12 is at most $A_i n^{-1/2}$, where $A_i$ is a constant depending on $i$.

To bound the second error term in Theorem 4.12, we claim that
$$\sum_\mu \left( 8 - \frac{6(1 - \omega_t(g_\mu))}{a} \right) \frac{|K|\,p_2(K_\mu)^2}{|K_\mu|} = O(n^{-2i-1}).$$
Indeed, by Lemmas 4.16 and 4.18, the term $\mu = (1^n)$ contributes $\frac{8i^2}{4^{i-1}}\,n^{-2i} + O(n^{-2i-1})$ and the term $\mu = (i, i, 1^{n-2i})$ contributes $-\frac{8i^2}{4^{i-1}}\,n^{-2i} + O(n^{-2i-1})$. These lemmas also show that all other terms coming from a $\mu$ with $p_2(K_\mu) \neq 0$ contribute $O(n^{-2i-1})$, and the number of such $\mu$ is bounded by a constant depending on $i$. It follows that the second error term is at most $A_i n^{-1/4}$, where $A_i$ is another constant depending only on $i$.
5. Twisted Gelfand pairs
The approach taken in this section is a bit different than that in the sections
on finite groups and Gelfand pairs. Due both to a lack of interesting examples of
twisted Gelfand pairs which are not already Gelfand pairs and to technical complications which did not arise in earlier sections of this paper (see the discussion
in Section 5.2 for details), a completely general theory is not developed. Instead,
we focus on one very interesting example: character values of random projective
representations of the symmetric group. It should however be noted that many of
the lemmas resemble those of earlier sections, and the calculations are organized in
a way which should generalize to other examples.
A partition $\lambda$ of $n$ is called strict if all of its parts are distinct, and is called odd if all of its parts are odd. It will be useful to have the notation that $DP(n)$ is the set of strict partitions of size $n$ and $OP(n)$ is the set of odd partitions of size $n$.

This section uses Stein's method to study the probability measure on strict partitions of $n$ which chooses $\lambda$ with probability
$$\frac{2^{n - l(\lambda)} g_\lambda^2}{n!},$$
where $l(\lambda)$ is the number of parts of a partition $\lambda$ and $g_\lambda$ is the number of standard shifted tableaux of shape $\lambda$ ([20], [29]). This measure on strict partitions is known as shifted Plancherel measure, and is of interest to researchers in random matrix theory [31], [45].

Ivanov [23] studied character values of random projective representations of the symmetric group. It is known (see [20] for a friendly exposition) that the character values in question are expressed in terms of the coefficients $X^\lambda_\mu$, where $\lambda$ is a strict partition of $n$ that parameterizes an irreducible character, and $\mu$ is an odd partition of $n$ that parameterizes a conjugacy class. One also has that $g_\lambda = X^\lambda_{(1^n)}$. Ivanov proved the following central limit theorem for these character values.
Theorem 5.1 ([23]). Fix $i \geq 1$. Let $\lambda$ be chosen from the shifted Plancherel measure on strict partitions of size $n$. Then as $n \to \infty$, the random variable
$$\frac{n^{\frac{2i+1}{2}}\, X^\lambda_{(2i+1,1^{n-2i-1})}}{2^i\sqrt{2i+1}\, g_\lambda}$$
converges in distribution to a normal random variable with mean 0 and variance 1.

This section refines the result of Ivanov, which was proved by the method of moments, so as to obtain an error term. In the statement of the result, recall that $z_\mu = \prod_{j \geq 1} j^{m_j(\mu)}\, m_j(\mu)!$, where $m_j(\mu)$ is the number of parts of $\mu$ of size $j$.
Theorem 5.2. Fix $i \geq 1$ and let $\mu = (2i + 1, 1^{n-2i-1})$. Choosing $\lambda$ from the shifted Plancherel measure on partitions of size $n$, define a random variable
$$W = \sqrt{\frac{n!}{z_\mu\, 2^{n - l(\mu)}}}\; \frac{X^\lambda_\mu}{g_\lambda}.$$
Then there is a constant $A_i$ such that for all real $x_0$,
$$\left| P(W \leq x_0) - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-x^2/2}\,dx \right| \leq A_i n^{-1/4}.$$
The organization of this section is as follows. Subsection 5.1 defines twisted
Gelfand pairs and collects and develops facts about their representation theory.
Subsection 5.2 defines and studies a Markov chain to be used in the construction
of an exchangeable pair for a Stein’s method proof of Theorem 5.2. This is more
subtle than the corresponding treatment in earlier sections and involves interesting combinatorics, since the “obvious” adaptation of the construction for Gelfand
pairs does not work. Subsection 5.3 studies the exchangeable pair arising from the
Markov chain in Section 5.2, and uses it to prove Theorem 5.2.
5.1. Background from representation theory. If G is a finite group, K a
subgroup of G, and φ a linear character of K such that IndG
K (φ) is multiplicity
free, the triple (G, K, φ) is called a twisted Gelfand pair (this terminology was
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3710
JASON FULMAN
introduced in [42]). The Hecke algebra of the triple (G, K, φ) is the CG subalgebra
eCGe, where e is the primitive idempotent of CK defined by
1 φ(k−1 )k.
e=
|K|
k∈K
One reason that twisted Gelfand pairs are interesting is that the Hecke algebra
of the triple (G, K, φ) is commutative. The paper [42] is a good reference for the
theory of twisted Gelfand pairs.
The twisted Gelfand pair of interest to us is the one studied in [42]. Thus G = S2n
and K is the hyperoctahedral group Bn of signed permutations, imbedded in G as
the centralizer of the involution (1, 2)(3, 4) · · · (2n − 1, 2n). To define φ, note that
Bn is the semidirect product of the groups Tn and Σn , where Tn is the subgroup
(isomorphic to Zn2 ) generated by (1, 2), · · · , (2n − 1, 2n) and Σn is the subgroup
(isomorphic to Sn ) generated by the “double transpositions” (2i − 1, 2j − 1)(2i, 2j)
for 1 ≤ i < j ≤ n. Then φ is the linear character of Bn whose restriction to Σn is
the sign character, and whose restriction to Tn is trivial.
Stembridge [42] defines twisted spherical functions for twisted Gelfand pairs (the
analog of spherical functions for Gelfand pairs). One of his main results is that for
(S2n , Bn , φ), their values are (aside from scalar multiples) equal to the Xµλ , where
λ is a strict partition and µ is an odd partition.
Next we record orthogonality relations for the quantities Xµλ . These orthogonality relations are a special case of more general orthogonality relations for coefficients
of power sum symmetric functions in Hall-Littlewood polynomials, where the parameter t in the Hall-Littlewood polynomial is −1. What this paper calls Xµλ is
written as Xµλ (−1) in Chapter 3 of [29].
Lemma 5.3 ([29], page 247). For λ, ρ ∈ DP (n),
µ∈OP (n)
2l(µ) λ ρ
X X = δλ,ρ 2l(λ) .
zµ µ µ
Lemma 5.4 ([29], page 247). For µ, σ ∈ OP (n),
1
zµ
X λ X λ = δµ,σ l(µ) .
2l(λ) µ σ
2
λ∈DP (n)
As in Subsection 4.4, the double cosets of Bn in S2n are indexed by partitions ν
of n. It is useful to specify representatives wν . For the case ν = (n), one defines
w(n) = (1, 2, · · · , 2n),
and for the general case ν = (ν1 , · · · , νl ), one defines
ων = w(ν1 ) ◦ · · · ◦ w(νl )
where the operation x ◦ y (for x ∈ S2i , y ∈ S2j ) denotes the embedding of S2i × S2j
in S2i+2j with S2i acting on {1, · · · , 2i} and S2j acting on {2i + 1, · · · , 2i + 2j}.
For ν ∈ OP (n), define
1
K̃ν =
φ(x1 x2 )x1 wν x2 .
2
|Bn |
x1 ,x2 ∈Bn
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
3711
Corollary 3.2 of [42] shows that if ν ∈ OP (n), then K̃ν = 0, and that {K̃ν : ν ∈
OP (n)} is a basis for the Hecke algebra of (S2n , Bn , φ). Thus for µ, ν ∈ OP (n), it
is natural to study the coefficient of K̃ν in (K̃µ )m . Lemma 5.5 gives a character
theoretic expression for this coefficient and is analogous to Lemmas 3.4 and 4.4.
Lemma 5.5. Suppose that µ1 , · · · , µm , ν ∈ OP (n). Then the coefficient of K̃ν in
K̃µ1 · · · K̃µm is equal to
2l(µ1 )−n · · · 2l(µm )−n
zν
2n−l(ρ) gρ2
ρ∈DP (n)
Xµρm
Xνρ Xµρ1
···
.
gρ gρ
gρ
In particular, the coefficient of K̃ν in (K̃µ )m is
ρ ρ m Xν Xµ
n!(2l(µ)−n )m
E
,
zν
gρ
gρ
where ρ is random from shifted Plancherel measure on partitions of size n.
Proof. The equality is immediate from the definition of shifted Plancherel measure,
so it is enough to establish the first expression. Stembridge [42] provides a basis
of orthogonal idempotents {Eρ : ρ ∈ DP (n)} of the Hecke algebra of (G, H, φ)
in terms of the projective characters of the symmetric group. More precisely, it
follows from Proposition 4.1 and Corollary 6.2 of [42] that one obtains such a basis
of orthogonal idempotents by defining
Eρ = 2n−l(ρ) gρ
Xµρ
K̃µ .
zµ
µ∈OP (n)
Multiplying both sides of this equation by 2l(µ)−n
DP (n), it follows from Lemma 5.4 that
K̃µ = 2l(µ)−n
ρ∈DP (n)
Thus
K̃µ1 · · · K̃µm = 2l(µ1 )−n · · · 2l(µm )−n
ρ
Xµ
gρ
and summing over all ρ ∈
Xµρ
Eρ .
gρ
ρ∈DP (n)
Xµρm
Xµρ1
···
Eρ .
gρ
gρ
The result now follows by the formula for Eρ in the previous paragraph.
(n−1,1)
Lemma 5.6 derives an explicit formula for Xµ
proach.
. This is crucial to our ap-
Lemma 5.6. Let m1 (µ) denote the number of parts of size 1 of µ. Suppose that
n ≥ 3, so that (n − 1, 1) is a strict partition. Then
Xµ(n−1,1) = m1 (µ) − 2
for all odd partitions µ of size n.
Proof. Recall the definition of power sum symmetric functions: for i ≥ 1, one sets
m (µ)
pi = j xij , and for µ a partition of n, one sets pµ = i pi i . The argument also
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3712
JASON FULMAN
uses Schur’s Q-functions ([29], Sec. 3.8). More precisely, equation 7.5 on page 247
of [29] shows that for µ ∈ OP (n) and λ ∈ DP (n), the coefficient of the power sum
2l(µ) X λ
symmetric function pµ in Qλ is equal to zµ µ .
To proceed one needs an expression for Q(n−1,1) . Using the notation that
[xn ]f (x) is the coefficient of xn in f (x), one has by page 253 of [29] that
Q(n−1,1)
=
[tn−1
t2 ]
1
=
[tn−1
]
1
1 − t2 /t1
1 + t2 /t1
2 ∞
1 + t i xj
1 − t i xj
i=1 j=1
∞
∞
∞
1 + t 1 xj
1 + t 2 xj
1 + t 1 xj
n
[t2 ]
− 2[t1 ]
1
−
t
x
1
−
t
x
1 − t 1 xj
1 j
2 j
j=1
j=1
j=1
= 2p1 Qn−1 − 2Qn .
By page 248 of [29], Xµn = 1 for all µ ∈ OP (n). It follows from the first paragraph
l(µ)
that for µ ∈ OP (n), the coefficient of pµ in Qn is 2zµ and (by considering separately
the cases that m1 (µ) = 0 and m1 (µ) = 0) that the coefficient of pµ in p1 Qn−1 is
m1 (µ)2l(µ)−1
.
zµ
Thus the coefficient of pµ in Q(n−1,1) is
first paragraph proves the result.
(m1 (µ)−2)2l(µ)
,
zµ
which by the
5.2. Markov chains on strict partitions. This subsection discusses a Markov
chain on DP (n) to be used in defining an exchangeable pair for the proof of Theorem
5.2.
Motivated by constructions in the group and Gelfand pair cases, it would be
natural, for τ a fixed element of DP (n), to define a “Markov chain” Jτ on DP (n)
with transition “probabilities”
Jτ (λ, ρ) :=
gρ
l(ρ)
2 gλ gτ
Using Lemma 5.4, one can see that
the reversibility condition
ρ
ν∈OP (n)
2l(ν) Xνλ Xνρ Xντ
.
zν
Jτ (λ, ρ) = 1 for all λ. Moreover Jτ satisfies
2n−l(ρ) gρ2
2n−l(λ) gλ2
Jτ (λ, ρ) =
Jτ (ρ, λ)
n!
n!
for all λ, ρ ∈ DP (n). Unfortunately, the quantity Jτ (λ, ρ) can be negative. For
instance one can check from Lemma 5.6 that J(2,1) ((2, 1), (2, 1)) = −1.
To deal with the complication raised in the previous paragraph, it is helpful to
introduce a genuine Markov chain L on DP (n), with transition probabilities
L(λ, ρ) =
2gρ
ngλ
2l(η)−l(ρ) .
η∈DP (n−1)
ηλ,ηρ
Here η λ means that η is obtained from λ by decreasing the size of some part
by exactly one.
Lemma 5.7 proves that L is a Markov chain which is reversible with respect to
shifted Plancherel measure. As is mentioned in the proof, the definition of L was
motivated by the theory of harmonic functions on Bratelli diagrams.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
3713
Lemma 5.7. The transition probabilities of L are real and non-negative and sum
to 1. Moreover the chain L is reversible with respect to shifted Plancherel measure.
Proof. This is a special case of a construction in Section 2 of [14]. To see this,
one takes the underlying Bratelli diagram to be the Schur graph, the properties
of which are discussed in Section 5 of [5]. The vertices of the Schur graph are all
partitions of all non-negative integers with distinct parts, and the edge multiplicity
between η ∈ DP (n − 1) and λ ∈ DP (n) is 1 if η λ, and is 0 otherwise. The
combinatorial dimension of a shape λ is gλ , and if π denotes shifted Plancherel
measure, the function π(λ)
gλ is harmonic on the Schur graph.
Proposition 5.9 will establish a fundamental relation between the Markov chain
L and the “Markov chain” J(n−1,1) . The argument involves symmetric function
theory, and first a lemma is needed about properties of a certain subring Γ of the
ring of symmetric functions, defined on page 252 of [29].
Recall from page 255 of [29] that there is an inner product on Γ which satisfies
the properties
(1) pλ , pµ = 2−l(λ) zλ δλ,µ if λ, µ ∈ OP (n).
(2) Pλ , Pµ = 2−l(λ) δλ,µ if λ, µ ∈ DP (n).
Here pλ is a power sum symmetric function and Pλ is a Hall-Littlewood polynomial
with the parameter t = −1. It is also helpful to recall, from page 247 of [29], that
for λ ∈ DP (n),
2l(ν)
Xνλ pν .
Pλ = 2−l(λ)
zν
ν∈OP (n)
Lemma 5.8. Let p⊥
1 be the adjoint in Γ of multiplication by p1 .
1 ∂
(1) ([29], page 265) p⊥
1 = 2 ∂p1 .
⊥
l(η)−l(λ)
Pη .
(2) p1 Pλ = η∈DP (n−1) 2
ηλ
Proof. Only the second assertion needs to be proved. Consider the coefficient of Pη
in p⊥
1 Pλ . It is
Pλ , p1 Pη p⊥
1 Pλ , Pη =
= 2l(η) Pλ , p1 Pη .
Pη , Pη Pη , Pη From Section 3.8 of [29],
p1 Pη =
Pλ ,
λ∈DP (n)
ηλ
which implies the result.
Proposition 5.9 can now be proved.
Proposition 5.9. Suppose that n ≥ 3, so that (n − 1, 1) is a strict partition. Then
L(λ, ρ) =
n−2
J(n−1,1) (λ, ρ)
n
for λ, ρ ∈ DP (n) such that λ = ρ.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3714
JASON FULMAN
Proof. By Lemma 5.6,
J(n−1,1) (λ, ρ) =
gρ
2l(ρ) gλ (n
− 2)
ν∈OP (n)
2l(ν) Xνλ Xνρ (m1 (ν) − 2)
.
zν
Since λ = ρ, Lemma 5.3 and part 1 of Lemma 5.8 imply that this is equal to
gρ
2l(ρ) gλ (n
− 2)
ν∈OP (n)
2l(ν) Xνλ Xνρ m1 (ν)
zν
=
gρ
2l(ρ) gλ (n − 2)
=
2l(λ) gρ
2p1 p⊥
1 Pλ , Pρ gλ (n − 2)
=
2l(λ)+1 gρ ⊥
p Pλ , p⊥
1 Pρ .
gλ (n − 2) 1
ν∈OP (n)
2l(ν) λ
m1 (ν)
Xν pν ,
zν
ν∈OP (n)
2l(ν) ρ
Xν pν
zν
By part (2) of Lemma 5.8, this is
2l(λ)+1 gρ
2l(η)−l(λ) Pη ,
2l(η)−l(ρ) Pη
gλ (n − 2) η∈DP (n−1)
η∈DP (n−1)
ηλ
2gρ
=
gλ (n − 2)
=
ηρ
l(η)−l(ρ)
2
η∈DP (n−1)
ηλ,ηρ
n
L(λ, ρ).
n−2
5.3. Central limit theorem
Plancherel measure. This subsection
for shifted
λ
Xµ
n!
studies the statistic W = zµ 2n−l(µ)
,
where
µ ∈ OP (n) is fixed and λ ∈ DP (n)
gλ
is chosen from shifted Plancherel measure. The main goal is a proof of Theorem
5.2. Note that Lemma 5.4 implies that E(W ) = 0 if µ = (1n ) and that E(W 2 ) = 1.
Using L, one constructs an exchangeable pair (W, W ) as follows. Choose λ
from the shifted Plancherel measure on partitions of size n. Then choose ρ with
probability L(λ, ρ) and let (W, W ) = (W (λ), W (ρ)). Using Jτ , one constructs
an “exchangeable pair” (W, W ∗ ) as follows. Choose λ from the shifted Plancherel
measure on partitions of size n. Then choose ρ with “probability” Jτ (λ, ρ) and let
(W, W ∗ ) = (W (λ), W (ρ)). The pair (W, W ) is a valid candidate for Stein’s method.
However the pair (W, W ∗ ) is much easier to work with, and by Proposition 5.9, when
τ = (n − 1, 1), this gives insight into the genuine exchangeable pair (W, W ). Even
though the transition probabilities of Jτ can be negative, for convenience the usual
language of probability theory (expected value, variance, etc.) will be used when
working with them.
τ
X
Lemma 5.10.
(1) E(W ∗ |W ) = gτµ W .
(2) E(W |W ) =
m1 (µ)
n W.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
3715
Proof. For the first assertion, the definition of J(n−1,1) and Lemma 5.4 imply that
2l(ν) X λ X ρ X τ Xµρ
n!
gρ
ν ν ν
E(W ∗ |λ) =
zν
gρ
zµ 2n−l(µ)
2l(ρ) gλ gτ
ρ∈DP (n)
ν∈OP (n)
2l(ν) X λ X τ Xνρ Xµρ
n!
1
ν ν
=
zν
zµ 2n−l(µ) gλ gτ
2l(ρ)
ν∈OP (n)
ρ∈DP (n)
Xµλ Xµτ
n!
=
zµ 2n−l(µ) gλ gτ
τ
Xµ
=
W (λ).
gτ
The first assertion follows since this depends on λ only through W .
For the second assertion, observe that by Proposition 5.9,
L(λ, ρ) (W (ρ) − W (λ))
E(W − W |λ) =
ρ
n−2 =
J(n−1,1) (λ, ρ) (W (ρ) − W (λ))
n
ρ
n−2
n−2 J(n−1,1) (λ, ρ)W (ρ).
= −
W (λ) +
n
n
ρ
By the first assertion and Lemma 5.6, this is
n−2
m1 (µ)
n − 2 m1 (µ) − 2
−
W (λ) =
− 1 W (λ),
W (λ) +
n
n
n−2
n
which implies the result.
Corollary 5.11 will not be needed but is worth recording.
Xτ
Corollary 5.11. The eigenvalues of Jτ are gτµ as µ ranges over OP (n). The
λ
Xµ
n!
are a basis of eigenvectors of Jτ , orthonormal
functions ψµ (λ) =
zµ 2n−l(µ) gλ
with respect to the inner product
f1 , f2 =
λ∈DP (n)
f1 (λ)f2 (λ)
2n−l(λ) gλ2
.
n!
Proof. The proof of part (1) of Lemma 5.10 shows that ψµ is an eigenvector of Jτ
Xτ
with eigenvalue gτµ . The orthonormality assertion follows from Lemma 5.4 and
the fact from [29] that all Xµλ are real valued. The basis assertion follows since
|DP (n)| = |OP (n)|.
Xτ
Lemma 5.12. E(W ∗ − W )2 = 2 1 − gτµ .
Proof. This follows from the proof of Lemma 2.3 (which does not require nonnegative transition probabilities) and part (1) of Lemma 5.10.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3716
JASON FULMAN
For the remainder of this subsection, pm (K̃ν ) will denote the coefficient of K̃ν
in (K̃µ )m . When m = 0 this is to be interpreted through Lemma 5.5, so that
p0 (K̃(1n ) ) = 1 and p0 (K̃ν ) = 0 for ν = (1n ). Due to the signs in the definition
so care must be taken in working with
of K̃ν , these numbers are not probabilities,
them. For instance it is not true that ν∈OP (n) pm (K̃ν ) = 1. However the following
three relations will be useful.
l(ν)−n
Lemma 5.13.
= 22[l(µ)−n] .
ν∈OP (n) p2 (K̃ν )2
Proof. Using Lemma 5.5, one has that
p2 (K̃ν )2l(ν) = 22l(µ)−n
ν∈OP (n)
2−l(ρ)
ρ∈DP (n)
By page 248 of [29],
Xνn
(Xµρ )2
gρ
ν∈OP (n)
2l(ν) ρ
Xν .
zν
= 1, so that Lemma 5.3 implies that
ν∈OP (n)
2l(ν) ρ n
Xν Xν = 2δρ,n .
zν
The result follows.
λ 4
l(ν)
X
Lemma 5.14. E gλµ
= (2n−l(µ) )4 ν∈OP (n) p2 (K̃ν )2 22n n!zν .
Proof. Consider the coefficient of K̃(1n ) in (K̃µ )4 . On one hand, by Lemma 5.5 it
λ 4
X
is equal to (2l(µ)−n )4 E gλµ . On the other hand,
⎡
(K̃µ )4 = (K̃µ )2 (K̃µ )2 = ⎣
⎤2
p2 (K̃ν )K̃ν ⎦ .
ν∈OP (n)
From Lemmas 5.5 and Lemma 5.4, it follows that the coefficient of K̃(1n ) in K̃ν1 K̃ν2
is 0 if ν1 = ν2 , and that the coefficient of K̃(1n ) in (K̃ν )2 is
coefficient of K̃(1n ) in (K̃µ )4 is equal to
ν∈OP (n)
p2 (K̃ν )2
2l(ν)−n zν
n!
. Thus the
2l(ν)−n zν
.
n!
Comparing the two expressions for the coefficient of K̃(1n ) in (K̃µ )4 proves the
result.
n−l(µ) 2 l(ν)−n
zν .
Lemma 5.15. p3 (K̃µ ) = 2 zµ
ν∈OP (n) p2 (K̃ν ) 2
Proof. By Lemma 5.5,
n!(2l(µ)−n )3
p3 (K̃µ ) =
E
zµ
Xµλ
gλ
!4
.
Now use Lemma 5.14.
The next lemmas are crucial.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
Lemma 5.16.
(1)
n!
E[(W − W ) |λ] =
zµ 2l(µ)
∗
3717
2
l(ν)
2
p2 (K̃ν )
ν∈OP (n)
2Xµτ
Xντ
+1−
gτ
gτ
Xνλ
.
gλ
(2)
V ar(E[(W ∗ − W )2 |λ]) =
n!2n
2
zµ 22l(µ)
2l(ν) p2 (K̃ν )2 zν
ν∈OP (n)
ν=(1n )
2Xµτ
Xντ
+1−
gτ
gτ
2
.
Proof. Note that E((W ∗ )2 |λ) is equal to
n!
zµ 2n−l(µ)
=
=
n!
zµ 2n−l(µ)
n!
zµ 2l(µ)
ρ∈DP (n)
ν∈OP (n)
gρ
l(ρ)
2 gλ gτ
ν∈OP (n)
⎛
Xνλ Xντ l(ν) ⎝ 1
2
gλ gτ
zν
Xνλ
ν∈OP (n)
Xντ
gλ gτ
2l(ν) Xνλ Xνρ Xντ
zν
ρ∈DP (n)
gρ2 Xνρ
2l(ρ) gρ
Xµρ
gρ
2
Xµρ
gρ
2
⎞
⎠
2l(ν) p2 (K̃ν ).
The first equality switched the order of summation and the second equality is
Lemma 5.5.
Xλ
Next we claim that W 2 = zµ 2n!l(µ) ν∈OP (n) 2l(ν) p2 (K̃ν ) gλν . To see this note by
Lemma 5.5 that
Xρ
22l(µ)−2n 2n−l(ρ) ν (Xµρ )2 = p2 (K̃ν ).
zν
gρ
ρ∈DP (n)
l(ν)
Xλ
Multiplying both sides by 22l(µ) zn!µ gλν , and summing over ν ∈ OP (n), the claimed
expression for W 2 follows from Lemma 5.3.
Now observe from Lemma 5.10 that
E[(W ∗ − W )2 |λ] = E((W ∗ )2 |λ) − 2W E(W ∗ |λ) + W 2
2Xµτ
= E((W ∗ )2 |λ) + 1 −
W 2.
gτ
The first part of the lemma follows from this and the previous two paragraphs.
From the first part of the lemma and Lemma 5.4, it follows that
τ
2Xµτ 2
Xν
n!2n
2l(ν) p2 (K̃ν )2 zν
+1−
.
E(E[(W ∗ − W )2 |λ]2 ) = 2 2l(µ)
gτ
gτ
zµ 2
ν∈OP (n)
Lemmas 5.5 and 5.4 imply that p2 (K̃(1n ) ) =
lemma now follows since by Lemma 5.12,
zµ
.
n!2n−l(µ)
The second part of the
Xµτ 2
.
V ar(E[(W − W ) |λ]) = E(E[(W − W ) |λ] ) − 4 1 −
gτ
∗
2
∗
2
2
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3718
JASON FULMAN
Lemma 5.17. Let k be a positive integer.
(1) E(W ∗ − W )k is equal to
n!2n−l(µ)
zµ
k/2 k
(−1)
k−m
m=0
k
m
ν∈OP (n)
Xντ zν
pm (K̃ν )pk−m (K̃ν ).
gτ n!2n−l(ν)
(2) E(W ∗ − W )4 is equal to
⎡
⎤
2
τ
2
Xµτ
z
n!2n−l(µ)
p
(
K̃
)
X
ν 2
ν
⎣
⎦.
8 1−
−6 1− ν
zµ
gτ
gτ
n!2n−l(ν)
ν∈OP (n)
Proof. To prove the first assertion, observe that
E((W ∗ − W )k |λ)
k/2 n!
=
zµ 2n−l(µ)
ρ∈DP (n)
gρ
l(ρ)
2 gλ gτ
ρ m
Xµ
k−m k
·
(−1)
m
gρ
m=0
k
Xµλ
gλ
ν∈OP (n)
2l(ν) Xνλ Xνρ Xντ
zν
!k−m
!k−m
k
λ
X
k
1 µ
=
(−1)k−m
m
gλ gτ m=0
gλ
zµ 2n−l(µ)
2l(ν) X λ X τ gρ2 Xνρ Xµρ m
ν ν
·
zν
gρ
2l(ρ) gρ
n!
k/2
ν∈OP (n)
=
n!
ρ∈DP (n)
k/2
k
1
k−m k
(−1)
(2n−l(µ) )m
m
2n gλ gτ m=0
zµ 2n−l(µ)
·
2l(ν) Xνλ Xντ pm (K̃ν ).
Xµλ
gλ
!k−m
ν∈OP (n)
The second equality switched the order of summation and the last equation is
Lemma 5.5. Hence E(W ∗ − W )k is equal to
E(E((W ∗ − W )k |λ))
k/2 n−l(µ) k k
n!
(2
)
k−m k
=
(−1)
m
2n
zµ 2n−l(µ)
m=0
⎡
!k−m ⎤
λ
τ
λ
X
X
p
(
K̃
)
X
m
ν
µ
⎦.
·
2l(ν) ν n−l(µ) k−m E ⎣ ν
gτ (2
gλ
gλ
)
ν∈OP (n)
The first assertion now follows from Lemma 5.5.
For the second assertion, the first assertion gives that E(W ∗ − W )4 is equal to
n!2n−l(µ)
zµ
2 4
(−1)m
m=0
4
m
ν∈OP (n)
zν
n−l(ν)
2
n!
Xντ
pm (K̃ν )p4−m (K̃ν ).
gτ
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
The special case τ = (n) gives that
n−l(µ) 2 4
n!2
m 4
(−1)
0=
m
zµ
m=0
ν∈OP (n)
Thus in general, E(W ∗ − W )4 is equal to
n−l(µ) 2 4
n!2
m 4
−
(−1)
m
zµ
m=0
ν∈OP (n)
3719
zν
pm (K̃ν )p4−m (K̃ν ).
n−l(ν)
2
n!
Xτ
1− ν
gτ
zν pm (K̃ν )p4−m (K̃ν )
.
2n−l(ν) n!
The m = 0, 4 terms vanish since only ν = (1n ) could contribute, but it contributes
0. The m = 2 term contributes
n−l(µ) 2 n!2
X τ zν p2 (K̃ν )2
.
−6
1− ν
zµ
gτ
n!2n−l(ν)
ν∈OP (n)
The m = 1, 3 terms are equal and together contribute
n−l(µ) Xµτ
n!2
8
1−
p3 (K̃µ ).
zµ
gτ
By Lemma 5.15, this is
n−l(µ) 2 Xµτ
n!2
8
1−
zµ
gτ
ν∈OP (n)
zν p2 (K̃ν )2
.
n!2n−l(ν)
Adding the terms together proves the second assertion.
The final combinatorial ingredient for the proof of Theorem 5.2 is an upper bound
for p2 (K̃ν ). Lemma 5.18 reduces this to a result obtained earlier in the section on
Gelfand pairs.
Lemma 5.18. Let p2 (K̃ν ) be as in this subsection and let p2 (Kν ) be as in Subsection 4.4. Then p2 (K̃ν ) ≤ p2 (Kν ) for all ν ∈ OP (n).
Proof. Clearly p2 (Kν ) is equal to |Bn ων Bn | multiplied by the coefficient of ων in
⎛
⎞2
⎝ 1
x1 ωµ x2 ⎠ .
|Bn |2
x1 ,x2 ∈Bn
Next consider p2 (K̃ν ). By definition it is the coefficient of K̃ν in (K̃µ )2 . Corollary
3.2 of [42] gives that the coefficient of ων in K̃ν is |Bn ω1ν Bn | . Hence p2 (K̃ν ) is
|Bn ων Bn | multiplied by the coefficient of ων in
⎛
⎞2
1
⎝
φ(x1 x2 )x1 ωµ x2 ⎠ ,
|Bn |2
x1 ,x2 ∈Bn
where φ is the linear character of Bn described in Subsection 4.1. Since φ(x1 x2 ) is
always ±1, the result follows.
Finally, we prove Theorem 5.2.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3720
JASON FULMAN
Proof of Theorem 5.2. One can assume that n ≥ 2(2i + 1). In the O notation
throughout the proof, i is fixed and n is growing.
One applies Theorem 2.1 to the pair (W, W ), where µ = (2i + 1, 1n−2i−1 ). Then
a = 1 − m1n(µ) by part (2) of Lemma 5.10. Proposition 5.9 implies that
E((W − W )2 |λ) =
for all λ. Thus
V ar(E[(W − W ) |λ]) =
2
n−2
E((W ∗ − W )2 |λ)
n
n−2
n
2
V ar(E[(W ∗ − W )2 |λ]).
Also note, as in the proof of Theorem 3.12, that
V ar(E[(W − W )2 |W ]) ≤ V ar(E[(W − W )2 |λ]).
Lemmas 5.16 and 5.6 give that the first error term in applying Theorem 2.1 to the
pair (W, W ) is at most
n−l(µ)
zν p2 (K̃ν )2 n + m1 (ν) − 2m1 (µ) 2
n!2
n−2
.
n
n−2
n!2n−l(ν)
1 − m1 (µ) z
ν∈OP (n)
µ
n
Observe that
n−2 n!2n−l(µ)
m (µ)
n
1− 1n
zµ
ν=(1n )
is O(n2i+2 ). In the sum over ν = (1n ), the term
coming from ν = (2i+1, 2i+1, 1n−4i−2 ) contributes 0. Since n ≥ 2(2i+1), Lemmas
2
ν p2 (K̃ν )
−4i−3
5.18 and 4.18 imply that zn!2
) for all other ν in the sum, and the
n−l(ν) = O(n
number of such ν is bounded by a constant depending on i but not n. Moreover if
& ν ) = 0, then p2 (Kν ) = 0, implying by Lemma 4.17 that m1 (ν) ≥ n − 4i − 2
p2 (K
1 (µ)
= O(n−1 ). Combining these observations gives us
and thus that n+m1 (ν)−2m
n−2
that the first term in the upper bound of Theorem 2.1 is O(n−1/2 ).
By the Cauchy-Schwarz inequality,
E|W − W |3 ≤ E(W − W )2 E(W − W )4 .
∗
k
Proposition 5.9 implies that E(W − W )k = n−2
n E(W − W ) for all k. Hence
Lemmas 5.12 and 5.6 imply that the second error term in Theorem 2.1 is at most
1/4
(n − 2) E(W ∗ − W )4
.
πn
(1 − m1 (µ) )
n
By part (2) of Lemma 5.17 and Lemma 5.6, this can be written as
⎤1/4
⎡
n−l(µ) 2
2
)
8m1 (µ) 6m1 (ν) zν p2 (K̃ν ) ⎦
⎣ 1 (n!2
+
.
2−
π (1 − m1 (µ) )(zµ )2
n
n
n!2n−l(ν)
n
The quantity
ν∈OP (n)
(n!2n−l(µ) )2
m (µ)
(1− 1n )(zµ )2
is O(n4i+3 ). The contribution from ν = (1n ) to the sum
3
is 8(2i+1)
p2 (K̃(1n ) )2 , which by Lemmas 5.5 and 5.4 is 8(2i+1)
n−4i−3 + O(n−4i−4 ).
n
24i
By Lemmas 5.18 and 4.18, the contribution to the sum from ν not equal to either (1n ) or (2i + 1, 2i + 1, 1n−4i−2 ) is O(n−4i−4 ), and the number of such ν
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
3721
with p2 (K̃ν ) = 0 is bounded by a constant depending on i. The contribution
zν p2 (K̃ν )2
to the sum from ν = (2i + 1, 2i + 1, 1n−4i−2 ) is − 4(2i+1)
. Lemma 5.13
n
n!2n−l(ν)
implies that p2 (K̃(2i+1,2i+1,1n−4i−2 ) ) = 1 + O(n−1 ), since p2 (K̃ν ) = O(n−1 ) for
ν = (2i + 1, 2i + 1, 1n−4i−2 ) by Lemma 5.18. Thus the contribution from ν =
3
(2i + 1, 2i + 1, 1n−4i−2 ) is − 8(2i+1)
n−4i−3 + O(n−4i−4 ). Hence there is useful
24i
cancellation and
8m1 (µ) 6m1 (ν) zν p2 (K̃ν )2
+
= O(n−4i−4 ).
2−
n
n
n!2n−l(ν)
ν∈OP (n)
It follows that the second term in the upper bound of Theorem 2.1 is O(n−1/4 ),
completing the proof.
6. Association schemes
This final section adapts techniques from earlier sections to study the spectrum
of an adjacency matrix of an association scheme. As is well known (see for instance
Chapter 3 of [2]) this includes the spectrum of a distance regular graph as a special
case.
Subsection 6.1 discusses needed facts about association schemes. Subsection 6.2
derives a general central limit theorem for the spectrum of an association scheme.
Subsection 6.3 illustrates the theory of Subsection 6.2 for the special case of the
Hamming scheme, where the result amounts to a central limit theorem for the
spectrum of the Hamming graph, or equivalently for values of q-Krawtchouk polynomials.
6.1. Background on association schemes. This subsection gives preliminaries
about association schemes, using notation from [30]. Another useful reference is
[2], and what we call association schemes some authors call symmetric association
schemes.
Definition. An association scheme with n classes consists of a finite set X with
n + 1 relations R0 , · · · , Rn defined on X which satisfy:
(1)
(2)
(3)
(4)
Each Ri is symmetric: (x, y) ∈ Ri ⇒ (y, x) ∈ Ri .
For every x, y ∈ X, one has that (x, y) ∈ Ri for exactly one i.
R0 = {(x, x) : x ∈ X} is the identity relation.
If (x, y) ∈ Rk , the number of z ∈ X such that (x, z) ∈ Ri and (y, z) ∈ Rj
is a constant cijk depending on i, j, k but not on the particular choice of x
and y.
The adjacency matrix Di corresponding to the relation Ri is the |X|×|X| matrix
with rows and columns labeled by the points of X defined by
1 if (x, y) ∈ Ri ,
(Di )x,y =
0 otherwise.
The
n Bose-Melner algebra is defined to be the vector space consisting of all matrices
i=0 ai Di with ai real. Since the matrices in the Bose-Melner algebra are symmetric and commute with each other (by parts (1) and (4) of the definition of an
association scheme), the Bose-Melner algebra is semisimple and has a distinguished
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3722
JASON FULMAN
basis of primitive idempotents J0 , · · · , Jn satisfying
(1) Ji2 = Ji , i = 0, · · · , n.
i = k.
(2) J
i Jk = 0,
n
(3)
i=0 Ji = I.
Here I is the identity matrix.
The D’s are also a basis of the Bose-Melner algebra, so one can write
Ds =
n
φs (i)Ji ,
s = 0, · · · , n,
i=0
where the φs (i) are real numbers. Since Ds Ji = φs (i)Ji , the φs (i) are the eigenvalues of Ds . For later use note that since D0 and ni=0 Ji are both the identity
matrix, φ0 (j) = 1 for all j.
Let µi be the rank of Ji . For all s, this is the multiplicity of the eigenvalue
φs (i) of Ds . We define the Plancherel measure of the association scheme to be the
µi
.
probability measure on {0, · · · , n} which chooses i with probability |X|
Lemmas 6.1 and 6.2 are the orthogonality relations for eigenvalues of association
schemes. To state them it is helpful to define vi =
cii0 , so that for any x, one has
that vi is the number of pairs (x, y) ∈ Ri . Clearly ni=0 vi = |X|.
Lemma 6.1 ([30], page 655). For 0 ≤ k, l ≤ n,
n
φr (k)φr (l)
r=0
vr
=
|X|
δk,l .
µk
Lemma 6.2 ([30], page 655). For 0 ≤ k, l ≤ n,
n
µi φk (i)φl (i) = |X|vk δk,l .
i=0
Lemma 6.3 will be crucial.
Lemma 6.3. The coefficient of Dl in Ds1 · · · Dsm is equal to
n
1 µi
φs (i) · · · φsm (i)φl (i).
vl i=0 |X| 1
In particular, the coefficient of Dl in (Ds )m is
1
E[φs (i)m φl (i)],
vl
where i is random from the Plancherel measure of the association scheme.
n
Proof. Since Ds = i=0 φs (i)Ji , one knows that
Ds1 · · · Dsm =
n
φs1 (i) · · · φsm (i)Ji .
i=0
By pages 654 and 655 of [30],
1 µi φl (i)
Dl .
|X|
vl
n
Ji =
l=0
The result follows.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
3723
Let pm (r) denote (vvsr)m multiplied by the coefficient of Dr in (Ds )m . It follows
from the definitions that pm (r) admits the following probabilistic interpretation.
Start from some point x0 ∈ X, move to a random x1 ∈ X such that (x0 , x1 ) ∈ Rs ,
then to a random x2 ∈ X such that (x1 , x2 ) ∈ Rs , and so on until one obtains xm .
Then pm (r) is the probability that (x0 , xm ) ∈ Rr .
The following fact will be useful.
2
n
Lemma 6.4.
(1) p4 (0) = r=0 p2v(r)
.
r
n p2 (r)2
(2) p3 (s) = vs r=0 vr .
Proof. The first assertion is clear from the probabilistic interpretation of pm (r).
For an analytic proof, note that by Lemma 6.3 (in the first and fourth equalities)
and Lemma 6.1 (in the second and third equalities)
!2
n
n
n
p2 (r)2
1
1 µi
2
φs (i) φr (i)
=
vr
v
(vs )2 i=0 |X|
r=0
r=0 r
=
n
n
1 1 (µi )2
φs (i)4 φr (i)2
(vs )4 r=0 vr i=0 |X|2
=
n
1 µi
φs (i)4
(vs )4 i=0 |X|
=
p4 (0).
The second assertion follows from the first assertion since p3 (s) = vs p4 (0), as can
be seen either by the probabilistic interpretation of pm (r) or by computing both
sides of the equation using Lemma 6.3.
6.2. Central limit theorem for the spectrum. Recall that the goal is to study
Plancherel measure of association schemes, which chooses i ∈ {0, · · · , n} with probµi
ability |X|
. More precisely, for s fixed, a central limit theorem is proved for the
. Note by Lemma 6.2 that
random variable W whose value at i ∈ {0, · · · , n} is φ√sv(i)
s
2
E(W ) = 0 if s = 0 and that E(W ) = 1.
Given t ∈ {0, · · · , n}, we define a Markov chain Lt on the set {0, · · · , n} which
moves from i to j with probability
n
µj φr (i)φr (t)φr (j)
.
Lt (i, j) :=
|X| r=0
(vr )2
Lemma 6.5. The transition probabilities of Lt are real and non-negative and sum
to 1. Moreover the chain Lt is reversible with respect to Plancherel measure.
Proof. By Theorems 3.6 and 3.8 of [2], the Lt (i, j) are non-negative real numbers.
Next, observe that
n
n
n
1 φr (i)φr (t) Lt (i, j) =
µj φr (j).
|X| r=0 (vr )2 j=0
j=0
Since φ0 (j) = 1 for all j, Lemma 6.2 implies that
n
n
φr (i)φr (t)
Lt (i, j) = v0
δr,0 = 1.
(vr )2
r=0
j=0
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3724
JASON FULMAN
For the reversibility assertion, it is clear from the definition of Lt that
µi
µj
Lt (i, j) =
Lt (j, i)
|X|
|X|
for all i, j.
One uses the chain Lt to construct an exchangeable pair (W, W ) in the usual
way. First choose i from the Plancherel measure, then choose j with probability
Lt (i, j), and finally let (W, W ) = (W (i), W (j)).
W.
Lemma 6.6. E(W |W ) = φsv(t)
s
Proof. By the definitions and Lemma 6.2, one has that
n
1 E(W |i) = √
Lt (i, j)φs (j)
vs j=0
1 1 φr (i)φr (t) µj φs (j)φr (j)
√
vs |X| r=0 (vr )2 j=0
φs (t)
=
W (i).
vs
n
n
=
The result follows since this depends on i only through W .
Corollary 6.7 will not be used but is worth recording.
Corollary 6.7. The eigenvalues of Lt are
φs (t)
vs
for 0 ≤ s ≤ n. The functions
φs (i)
√
vs
ψs (i) =
are a basis of eigenvectors of Lt , orthonormal with respect to the
inner product
n
µi
.
f1 (i)f2 (i)
f1 , f2 =
|X|
i=0
Proof. The proof of Lemma 6.6 shows that ψs is an eigenvector of Lt with eigenvalue
φs (t)
vs . The orthonormality assertion follows from Lemma 6.2, and the basis assertion
follows since n + 1 linearly independent eigenvectors have been given.
Lemma 6.8. E(W − W )2 = 2 1 − φsv(t)
.
s
Proof. This is immediate from Lemmas 2.3 and 6.6.
The next lemmas are quite helpful.
Lemma 6.9.
(1)
E[(W − W )2 |i] = vs
n
p2 (r)
r=0
(2)
V ar(E[(W − W )2 |i]) = (vs )2
φr (t)
2φs (t)
+1−
vr
vs
φr (i)
.
vr
2
n
p2 (r)2 φr (t)
2φs (t)
+1−
.
vr
vr
vs
r=1
Note that the sum in the second part of the lemma is over non-zero r.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
3725
Proof. First observe that
E((W )2 |i)
=
n
n
1 µj φr (i)φr (t)φr (j)
φs (j)2
vs j=0 |X| r=0
(vr )2
=
n
n
1 φr (i)φr (t) µj
φs (j)2 φr (j)
vs r=0 (vr )2 j=0 |X|
= vs
n
φr (i)φr (t)
r=0
(vr )2
p2 (r).
The second equality switched the order of summation, and the third equality was
Lemma 6.3.
n
. To see this note that Lemma 6.3
Next we claim that W 2 = vs r=0 p2 (r) φrv(i)
r
gives
n
1 µj
φs (j)2 φr (j) = p2 (r).
(vs )2 j=0 |X|
Then multiply both sides of the equation by vs φrv(i)
, sum over r, and apply Lemma
r
6.1.
Now note from Lemma 6.6 that
E[(W − W )2 |i]
= E((W )2 |i) − 2W E(W |i) + W 2
2φs (t)
= E((W )2 |i) + 1 −
W 2.
vs
The first part of the lemma follows from this and the previous two paragraphs.
From the first part of the lemma and Lemma 6.2, one has that
2
n
p2 (r)2 φr (t)
2φs (t)
E(E[(W − W ) |i] ) = (vs )
+1−
.
vr
vr
vs
r=0
Since p2 (0) =
2
2
2
1
vs ,
it follows that the contribution to the above expression coming
2
. The second part of the lemma follows since by Lemma
from r = 0 is 4 1 − φsv(t)
s
6.8,
2
φs (t)
.
V ar(E[(W − W )2 |i]) = E(E[(W − W )2 |i]2 ) − 4 1 −
vs
Lemma 6.10. Let k be a positive integer.
k n φr (t) pm (r)pk−m (r)
k
(1) E(W − W )k = (vs )k/2 m=0 (−1)k−m m
.
'
r=0 vr vr 2 (
n
φs (t)
φr (t)
p2 (r)
4
2
− 6 1 − vr
.
(2) E(W − W ) = vs
r=0 8 1 − vs
vr
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3726
JASON FULMAN
Proof. For the first assertion, note that E((W − W )k |i) is equal to
n
n
1
µj φr (i)φr (t)φr (j)
(φs (j) − φs (i))k
(vr )2
(vs )k/2 j=0 |X| r=0
=
k
n
φr (i)φr (t)
1
k−m k
k−m
(−1)
(i)
φ
s
(vr )2
m
(vs )k/2 m=0
r=0
·
=
n
µj
φs (j)m φr (j)
|X|
j=0
k
n
1
φr (i)φr (t)pm (r)
k−m k
k−m
m
(−1)
(vs )
,
φs (i)
m
(vr )2
(vs )k/2 m=0
r=0
where the last equality is Lemma 6.3. Consequently,
E(W − W )k
= E(E((W − W )k |i))
= (vs )k/2
k
(−1)k−m
m=0
·
n
φr (t)
r=0
(vr )
p (r)
2 m
k
m
n
µi φs (i)k−m φr (i)
.
|X|
(vs )k−m
i=0
The first assertion now follows from Lemma 6.3.
For the second part, the first assertion gives that
4
n
φr (t) pm (r)p4−m (r)
4
2
m 4
(−1)
.
E(W − W ) = (vs )
m r=0 vr
vr
m=0
By page 654 of [30], φr (0) = vr for all r. Thus specializing to t = 0 shows that
4
n
pm (r)p4−m (r)
2
m 4
0 = (vs )
(−1)
.
m
vr
m=0
r=0
So for general t, one has that
E(W − W )4 = −(vs )2
4
(−1)m
m=0
n 4
φr (t) pm (r)p4−m (r)
.
1−
m r=0
vr
vr
The contribution from the m = 0, 4 terms is 0, since p0 (r) = 0 if r = 0. The
contribution from the m = 2 term is
n φr (t) p2 (r)2
.
−6(vs )2
1−
vr
vr
r=0
The m = 1, 3 terms are equal and together contribute
n φr (t) p1 (r)p3 (r)
φs (t) p3 (s)
8(vs )2
= 8(vs )2 1 −
1−
vr
vr
vs
vs
r=0
n
p2 (r)2
φs (t)
= 8(vs )2 1 −
,
vs
vr
r=0
where the final equality is part (2) of Lemma 6.4. Adding the terms completes the
proof of the second assertion.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
3727
Arguing as in the proof of Theorem 3.12, and using the above lemmas, one
obtains the following result.
Theorem 6.11. Fix s ∈ {0, · · · , n}, and let W =
φs (i)
√
vs
where i is chosen from the
Plancherel measure of the association scheme. Fix t such that 0 <
for all real x0 ,
x0
2
− x2
P(W ≤ x0 ) − √1
e
dx
2π −∞
n
2
p2 (r)2 φr (t)
2φs (t)
vs +1−
≤
a r=1 vr
vr
vs
φs (t)
vs
< 1. Then
n 1/4
vs p2 (r)2
6
φr (t)
+
,
8−
1−
a
vr
vr
(π)1/4 r=0
√
where a = 1 −
φs (t)
vs .
6.3. Example: Hamming scheme. This subsection illustrates the theory of Subsection 6.2 for the Hamming scheme H(d, q), where d, q ≥ 2 are positive integers.
To begin we recall the definition of H(d, q) and its basic properties, referring the
reader to Chapter 3 of [2] for more details. The elements X of H(d, q) are d-tuples
of numbers chosen from {1, · · · , q}; clearly |X| = q d . A pair (x, y) is in Ri if x
and y differ in exactly i coordinates. For 0 ≤ i ≤ d, one has that vi = (q − 1)i di
and µi = (q − 1)i di . Thus the Plancherel measure on {0, · · · , d} chooses i with
(q−1)i (di)
probability
. The numbers φs (i) are equal to
qd
s
i
d−i
(−1)j (q − 1)s−j
.
j
s−j
j=0
The polynomial φs in the variable i is known as a q-Krawtchouk polynomial.
, where i is chosen from the Plancherel measure
In what follows W = φ√1v(i)
1
of the Hamming scheme H(d, q). The Hamming graph has vertex set X and an
edge between two vertices if they differ in one coordinate. Thus the eigenvalues of
the adjacency matrix of the Hamming graph are φ1 (i) with multiplicity µi , which
motivates the study of W . Hora [22] shows that if d → ∞ and q/d → 0, then W
converges in distribution to a normal random variable with mean 0 and variance 1.
(q−1)i (di)
µi
√
and |X|
=
, it is straightforward that W
In fact since W (i) = (q−1)d−qi
qd
(q−1)d
has the same distribution as
Y1 + · · · + Yd
,
(q − 1)d
where the Y ’s are independent random variables, each equal to q−1 with probability
1
1
[11] shows
q and to −1 with probability 1 − q . Hence the Berry-Esseen theorem
q
that W satisfies a central limit theorem with the error term C d where C is a
small explicit constant.
For comparison, let us study W using Theorem 6.11 with s = 1 and t = 1. Then
q
and from the probabilistic interpretation of pm (r) one computes that
a = (q−1)d
p2 (0) =
1
(q−1)d ,
p2 (1) =
(q−2)
(q−1)d ,
and p2 (2) = 1 − d1 . The first term in the error term
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3728
JASON FULMAN
which is less than dq . The
1/4
2q 2
,
second term in the error term of Theorem 6.11 is computed to equal π(q−1)d
2q 1/4
which is at most d
. Thus one obtains a central limit theorem for W with
q 2q 1/4
.
error term d + d
To close this section, it is shown how our exchangeable pair can be used to give
O(d−1/2 ) bounds for q fixed. The key is Lemma 6.12, which shows that the Markov
chain L1 is actually a birth-death chain.
of Theorem 6.11 is then computed to equal
(q−2)2
(q−1)d ,
Lemma 6.12. The Markov chain L1 on the set {0, · · · , d} is a birth-death chain
with transition probabilities
⎧
i
if j = i − 1,
⎪
⎨ d(q−1)
i
1
L1 (i, j) =
if j = i.
d 1 − q−1
⎪
⎩
1 − di
if j = i + 1.
µj n
φr (i)φr (1)φr (j)
Proof. Recall that L1 (i, j) = |X|
. From the formula for φr (l)
r=0
(vr )2
one sees that
d
φr (l) = (q − 1)r−l dr φl (r)
l
for any 0 ≤ r, l ≤ d. Thus
n d
µj
L1 (i, j) =
(q − 1)r φ1 (r)φi (r)φj (r).
dd
i+j+1
r
|X|d i j (q − 1)
r=0
From page 152 of [30], there is a recurrence relation
(i + 1)φi+1 (r) = [(d − i)(q − 1) + i − qr]φi (r) − (q − 1)(d − i + 1)φi−1 (r).
Since φ1 (r) = d(q − 1) − qr, the recurrence is equivalent to
φ1 (r)φi (r) = i(q − 2)φi (r) + (i + 1)φi+1 (r) + (q − 1)(d − i + 1)φi−1 (r).
Applying this to the expression for L1 (i, j) at the end of the previous paragraph,
the result follows from Lemma 6.2.
Lemma 6.12 implies that |W − W | ≤ A with A = √
q
.
(q−1)d
Thus one can apply
the version of Theorem 6.11 which would arise by using Theorem 2.2 instead
of
q
Theorem 2.1. Recalling that a = (q−1)d
, one obtains an error term of 12 dq +
2
.41q
√ +1.5q .
(q−1)d
For q fixed this goes as d−1/2 .
References
1. Aldous, D. and Diaconis, P., Longest increasing subsequences: from patience sorting to the
Baik-Deift-Johansson theorem, Bull. Amer. Math. Soc. 36 (1999), 413-432. MR1694204
(2000g:60013)
2. Bannai, E. and Ito, T., Algebraic combinatorics, I. Association schemes, Benjamin/Cummings Publishing Co., Inc., Menlo Park, CA, 1984. MR882540 (87m:05001)
3. Borodin, A., Okounkov, A., and Olshanski, G., Asymptotics of Plancherel measure for symmetric groups, J. Amer. Math. Soc. 13 (2000), 481-515. MR1758751 (2001g:05103)
4. Borodin, A. and Olshanski, G., Z-measures on partitions and their scaling limits, European
J. Combin. 26 (2005), 795-834. MR2143199 (2006d:60018)
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
STEIN’S METHOD AND RANDOM CHARACTER RATIOS
3729
5. Borodin, A. and Olshanski, G., Harmonic functions on multiplicative graphs and interpolation polynomials, Elecron. J. Combin. 7 (2000), Research paper 28, 39 pages. MR1758654
(2001f:05160)
6. Chatterjee, S. and Fulman, J., Exponential approximation by exchangeable pairs and spectral
graph theory, (2006), preprint math.PR/0605552 at http://xxx.lanl.gov.
7. Deift, P., Integrable systems and combinatorial theory, Notices Amer. Math. Soc. 47 (2000),
631-640. MR1764262 (2001g:05012)
8. Diaconis, P., Group representations in probability and statistics, Institute of Mathematical
Statistics Lecture Notes, Volume 11, 1988. MR964069 (90a:60001)
9. Diaconis, P. and Holmes, S., Random walks on trees and matchings, Elec. J. Probab. 7 (2002),
17 pages (electronic). MR1887626 (2002k:60025)
10. Diaconis, P. and Shahshahani, M., Generating a random permutation with random transpositions, Z. Wahr. Verw. Gebiete 57 (1981), 159-179. MR626813 (82h:60024)
11. Durrett, R., Probability: Theory and examples, Second edition, Duxbury Press, Belmont, CA,
1996. MR1609153 (98m:60001)
12. Eskin, A. and Okounkov, A., Asymptotics of branched coverings of a torus and volumes of
moduli spaces of holomorphic differentials, Invent. Math. 145 (2001), 59-103. MR1839286
(2002g:32018)
13. Eymard, P. and Roynette, B., Marches aléatoires sur le dual de SU(2), in Analyse harmonique sur les groupes de Lie, Springer Lecture Notes in Math, Volume 497 (1975), 108-152.
MR0423457 (54:11435)
14. Fulman, J., Stein’s method and Plancherel measure of the symmetric group, Transac. Amer.
Math. Soc. 357 (2005), 555-570. MR2095623 (2005e:05156)
15. Fulman, J., Stein’s method, Jack measure, and the Metropolis algorithm, J. Combin. Theory
Ser. A 108 (2004), 275-296. MR2098845 (2005h:60021)
16. Fulman, J., An inductive proof of the Berry-Esseen theorem for character ratios, Ann. Combin.
10 (2006), 319-332. MR2284273 (2007i:05189)
17. Fulman, J., Martingales and character ratios, Trans. Amer. Math. Soc. 358 (2006), 4533-4552.
MR2231387 (2007e:05178)
18. Gross, B., Some applications of Gelfand pairs to number theory, Bull. Amer. Math. Soc. 24
(1991), 277-301. MR1074028 (91i:11055)
19. Hanlon, P., Stanley, R., and Stembridge, J., Some combinatorial aspects of the spectra of
normally distributed random matrices, in Hypergeometric functions on domains of positivity,
Jack polynomials, and applications, Contemporary Mathematics, Volume 138 (1992), 151-174.
MR1199126 (93j:05164)
20. Hoffman, P. and Humphreys, J., Projective representations of the symmetric group, Oxford
University Press, New York, 1992. MR1205350 (94f:20027)
21. Hora, A., Central limit theorem for the adjacency operators on the infinite symmetric group,
Comm. Math. Phys. 195 (1998), 405-416. MR1637801 (99i:46058)
22. Hora, A., Central limit theorems and asymptotic spectral analysis in large graphs, Infin.
Dimens. Anal. Quantum Probab. Relat. Top. 1 (1998), 221-246. MR1628244 (99g:46089)
23. Ivanov, V., The Gaussian limit for projective characters of large symmetric groups, J. Math.
Sci. (N.Y.), 121 (2004), 2330-2344. Translated from Zap. Nauchn. Sem. POMI 283 (2001),
73-97. MR1879064 (2002k:60061)
24. Ivanov, V. and Olshanski, G., Kerov’s central limit theorem for the Plancherel measure on
Young diagrams, in Symmetric Functions 2001: Surveys of developments and perspectives,
Kluwer Academic Publishers, Dodrecht, 2002.
25. Johansson, K., Discrete orthogonal polynomial ensembles and the Plancherel measure, Ann.
of Math. 153 (2001), 259-296. MR1826414 (2002g:05188)
26. Kerov, S.V., Gaussian limit for the Plancherel measure of the symmetric group, Compt. Rend.
Acad. Sci. Paris, Serie I 316 (1993), 303-308. MR1204294 (93k:20106)
27. Kerov, S.V., Anisotropic Young diagrams and Jack symmetric functions, Funct. Anal. Appl.
34 (2000), 41-51. MR1756734 (2001f:05158)
28. Letac, G., Les fonctions sphériques d’un couple de Gelfand symétrique et les chaı̂nes de
Markov, Adv. Appl. Probab. 14 (1982), 272-294. MR650123 (83i:60078)
29. Macdonald, I., Symmetric functions and Hall polynomials, Second edition, Oxford University
Press, New York, 1995. MR1354144 (96h:05207)
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
3730
JASON FULMAN
30. Macwilliams, F. and Sloane, N., The theory of error-correcting codes, Elsevier Science B.V.,
Amsterdam, 1977.
31. Matsumoto, S., Correlation functions of the shifted Schur measure, J. Math. Soc. Japan 57
(2005), 619-637. MR2139724 (2006a:60013)
32. Okounkov, A., Random matrices and random permutations, Internat. Math. Res. Notices 20
(2000), 1043-1095. MR1802530 (2002c:15045)
33. Okounkov, A., The uses of random partitions, XIVth International Congress on Mathematical
Physics, 379-403, World Sci. Publ., Hackensack, NJ 2005. MR2227852 (2007c:05013)
34. Reinert, G., Couplings for normal approximations with Stein’s method, in Microsurveys in
discrete probability, DIMACS Ser. Discrete Math. Theoret. Comput. Sci., Volume 41 (1998),
193-207. MR1630415 (99h:60043)
35. Rinott, Y. and Rotar, V., On coupling constructions and rates in the CLT for dependent
summands with applications to the antivoter model and weighted U-statistics, Ann. of Appl.
Probab. 7 (1997), 1080-1105. MR1484798 (99g:60050)
36. Rinott, Y. and Rotar, V., Normal approximations by Stein’s method, Decis. Econ. Finance
23 (2000), 15-29. MR1780090 (2001h:60037)
37. Sagan, B., The symmetric group. Representations, combinatorial algorithms, and symmetric
functions, Second edition, Springer-Verlag, New York, 2001. MR1824028 (2001m:05261)
38. Serre, J-P., Linear representations of finite groups, Springer-Verlag, New York, 1977.
MR0450380 (56:8675)
39. Shao, Q., and Su, Z., The Berry-Esseen bound for character ratios, Proc. Amer. Math. Soc.
134 (2006), 2153-2159. MR2215787
40. Sniady, P., Gaussian fluctuations of characters of symmetric groups and of Young diagrams,
Probab. Theory Related Fields 136 (2006), 263-297. MR2240789 (2007d:20020)
41. Stein, C., Approximate computation of expectations, Institute of Mathematical Statistics Lecture Notes, Volume 7, 1986. MR882007 (88j:60055)
42. Stembridge, J., On Schur’s Q-functions and the primitive idempotents of a commutative Hecke
algebra, J. Algebraic Combin. 1 (1992), 71-95. MR1162642 (93c:05113)
43. Terras, A., Fourier analysis on finite groups and applications, London Math. Society Student
Texts 43, Cambridge University Press, Cambridge, 1999. MR1695775 (2000d:11003)
44. Terras, A., Survey of spectra of Laplacians on finite symmetric spaces, Experiment. Math. 5
(1996), 15-32. MR1412951 (97m:11154)
45. Tracy, C. and Widom, H., A limit theorem for shifted Schur measures, Duke Math. Journal
123 (2004), 171-208. MR2060026 (2006c:60028)
Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania
15260
Current address: Department of Mathematics, University of Southern California, Los Angeles,
California 90089-2532
E-mail address: [email protected]
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
© Copyright 2026 Paperzz