Asymptotic Sine Laws Arising from Alternating Random
Permutations and Sequences
by Gordon Simons and Yi-Ching Yao
Universities of North Carolina at Chapel Hill and Colorado State University
Summary
In the last century, Désiré André obtained many remarkable properties of the
numbers of alternating permutations, linking them to trigonometric functions
among other things. By considering the probability that a random permutation
is alternating and that a random sequence (from a uniform distribution) is
alternating, and by conditioning on the first element of the sequence, his
results are extended and illuminated. In particular, several "asymptotic sine
laws" are obtained, some with exponential rates of convergence.
1. Introduction. A finite sequence x_1, x_2, …, x_n of distinct real numbers is said to be alternating if either

(1.1)    x_1 > x_2 < x_3 > x_4 < ⋯   or   x_1 < x_2 > x_3 < x_4 > ⋯ .

A permutation π on {1,2,…,n} is alternating if the sequence π(1), π(2), …, π(n) is alternating. Denote the number of alternating permutations by 2M_n; the inequality π(1) > π(2) holds for exactly M_n of these. Some sample values are:

M_2 = 1, M_3 = 2, M_4 = 5, M_5 = 16, M_6 = 61, M_7 = 272, M_8 = 1385, M_9 = 7936.
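These sample values are easy to confirm by exhaustive enumeration. A minimal Python sketch (ours, purely illustrative):

```python
from itertools import permutations

def is_alternating(seq):
    """True when every interior term of seq is a strict local min or max."""
    return all((seq[i - 1] < seq[i]) != (seq[i] < seq[i + 1])
               for i in range(1, len(seq) - 1))

def M(n):
    """Number of alternating permutations of {1,...,n} with pi(1) > pi(2)."""
    return sum(1 for p in permutations(range(1, n + 1))
               if p[0] > p[1] and is_alternating(p))

print([M(n) for n in range(2, 9)])  # -> [1, 2, 5, 16, 61, 272, 1385]
```

Brute force is feasible only for small n; the recursion (1.15) below evaluates M_n far more efficiently.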
Let X_1, X_2, …, X_n be iid uniformly distributed random variables on [0,1], let A_n denote the event that X_1, X_2, …, X_n is alternating with X_1 > X_2, and set P_n := P(A_n), n ≥ 2. It is readily apparent that P_n = M_n/n!, i.e., P_n is the probability that a random permutation π on {1,2,…,n} is alternating with π(1) > π(2).
A great deal is known about the sequence of values P_n, due in large measure to the extensive study of the numbers M_n by Désiré André (1879, 1881, 1883, 1894, 1895). We shall be content with a brief history, described here in terms of the P_n's. In 1879, he established a remarkable link with the tangent and secant functions:

(1.2)    ∑_{n=0}^{∞} P_n x^n = sec x + tan x   (with, by convention, P_0 := P_1 := 1).

The terms with odd powers of x generate tan x, the others sec x. (For this reason, the M_n's with odd index are sometimes called tangent numbers, and those with even index, secant numbers.) This was supplemented in (1881) by many other trigonometric relationships such as

    sec x tan x = 2 P_2 x + 4 P_4 x³ + ⋯

and

    sec² x = P_1 + 3 P_3 x² + 5 P_5 x⁴ + ⋯ .
Moreover, he described various expansions of P_n, for fixed n ≥ 1, such as

(1.3)    P_n = 2 (2/π)^{n+1} { 1/1^{n+1} + (−1)^{n+1}/3^{n+1} + 1/5^{n+1} + (−1)^{n+1}/7^{n+1} + ⋯ }
            = 2 (2/π)^{n+1} ∑_{k=−∞}^{∞} 1/(4k+1)^{n+1}.
(This shows that the numbers P_n are, for some reason, linked to the well known Riemann zeta function.) In (1883), he concluded from (1.3) (or a similar result) that the ratio P_n/{2(2/π)^{n+1}} → 1 as n → ∞. In fact, it follows from (1.3) that the convergence is exponentially fast:

(1.4)    | P_n/{2(2/π)^{n+1}} − 1 | < 2 (1/3)^{n+1},   n ≥ 1.
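Both the expansion (1.3) and this exponential convergence are easy to check numerically. The following sketch (ours) truncates the doubly infinite sum in (1.3) at |k| ≤ 20000 and compares it with M_n/n! for the sample values of M_n listed earlier:

```python
from math import pi, factorial

# Sample values of M_n from Section 1.
M = {2: 1, 3: 2, 4: 5, 5: 16, 6: 61, 7: 272, 8: 1385, 9: 7936}

def P_series(n, K=20000):
    """Truncation of Andre's expansion (1.3):
    P_n = 2 (2/pi)^(n+1) * sum over integer k of (4k+1)^-(n+1)."""
    return 2 * (2 / pi) ** (n + 1) * sum((4 * k + 1) ** -(n + 1)
                                         for k in range(-K, K + 1))

# (1.3): the truncated series should reproduce M_n / n! very closely.
series_err = {n: abs(P_series(n) - M[n] / factorial(n)) for n in M}

# Exponential convergence of P_n / {2 (2/pi)^(n+1)} toward 1.
ratio_err = {n: abs(M[n] / factorial(n) / (2 * (2 / pi) ** (n + 1)) - 1)
             for n in M}
print(max(series_err.values()), ratio_err[9])
```

The asserted tolerance 2(1/3)^{n+1} in the test is the conservative bound we state in (1.4).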
To motivate the present note, we ask the question: How is the probability of A_n influenced once X_1 is observed? Clearly, A_n should be more likely when X_1 is large, and less likely when X_1 is small. A related question: How is the probability of A_n influenced after one is told the rank of X_1 (among X_1, X_2, …, X_n)? Again, the probability of A_n should be positively related to the rank of X_1. Very precise answers to these questions are possible when n is large: answers which expose a pair of "asymptotic sine laws."
Let p_n(u) denote the conditional probability of A_n given X_1 = u. Thus,

(1.5)    ∫_0^1 p_n(w) dw = P_n = M_n/n!.

Starting with

(1.6)    p_2(u) = u

and the recursion

(1.7)    p_n(u) = ∫_0^u p_{n−1}(1−w) dw,   n ≥ 3,

one obtains

    p_3(u) = u − u²/2,   p_4(u) = 1/3 − (1−u)²/2 + (1−u)³/6,

etc. While no clear pattern is revealed, these functions are closely related, as the following analog of (1.3) shows:
(1.8)    p_n(u) = 2 (2/π)^n ∑_{k=−∞}^{∞} sin{(4k+1)(π/2)u} / (4k+1)^n,   0 ≤ u ≤ 1, n ≥ 2.
Of course, André's formula (1.3) is implied by (1.5) and (1.8). In fact, the occurrence of the numbers (2/π)(4k+1)^{−1} in (1.3) is "explained" by (1.8): these are just the eigenvalues of the kernel function K(u,v) := 1{u+v > 1} arising in the derivation of (1.8).

What makes (1.8) particularly interesting, and useful, is that it immediately exposes an excellent inequality, our first "(asymptotic) sine law":
(1.9)    | p_n(u)/{2(2/π)^n} − sin{(π/2)u} | < (1/3)^{n−1},   0 ≤ u ≤ 1, n ≥ 2.
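Iterating the recursion (1.7) from (1.6) gives p_n as a polynomial of degree n − 1 with rational coefficients, so the sine law (1.9) can be checked directly. A short sketch (ours; the asserted tolerance is the bound 1/3^(n−1) of (1.9)):

```python
from fractions import Fraction
from math import comb, pi, sin

def next_p(c):
    """Given coefficients c of p_n (c[i] multiplies u**i), return those of
    p_{n+1}(u) = integral from 0 to u of p_n(1 - w) dw, as in (1.7)."""
    s = [Fraction(0)] * len(c)
    for i, ci in enumerate(c):              # expand p_n(1 - w) in powers of w
        for j in range(i + 1):
            s[j] += ci * comb(i, j) * (-1) ** j
    return [Fraction(0)] + [sj / (j + 1) for j, sj in enumerate(s)]  # integrate

c, n = [Fraction(0), Fraction(1)], 2        # p_2(u) = u
while n < 12:
    c, n = next_p(c), n + 1

norm = 2 * (2 / pi) ** n
err = max(abs(sum(float(ci) * u ** i for i, ci in enumerate(c)) / norm
              - sin(pi * u / 2))
          for u in (i / 100 for i in range(101)))
print(n, err)
```

Exact rational coefficients avoid any accumulation of rounding error in the recursion itself; only the final comparison is done in floating point.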
Observe that the ratio

(1.10)    f_n(u) := p_n(u)/P_n

is just the conditional density of X_1 given that the event A_n occurs. Combining (1.4), (1.9) and (1.10), we see that this first sine law can be expressed as:

(1.11)    f_n(u) → (π/2) sin{(π/2)u}   as n → ∞,

uniformly in u (0 ≤ u ≤ 1). The convergence is exponentially fast.
Now, let p_nr denote the conditional probability of A_n given that the rank of X_1 is r (among X_1, X_2, …, X_n) (1 ≤ r ≤ n). Is there an analog of (1.8) for the p_nr's? While we can't answer this question, we can obtain, by another approach, a somewhat weaker analog of (1.9), our second sine law:

(1.12)    p_nr/{2(2/π)^n} = sin{(π/2)u_nr} + O(1/n)   as n → ∞,

uniformly in r (r = 1, …, n), whenever u_nr is in the interval [(r−1)/n, r/n].
Observe that the ratio

(1.13)    f_nr := p_nr/(n P_n)

is the conditional probability that X_1 has rank r (among X_1, X_2, …, X_n) given that the event A_n occurs. Combining (1.4), (1.12) and (1.13), we see that this second sine law can be expressed as

(1.14)    f_nr = (π/(2n)) sin{(π/2)·(2r−1)/(2n)} + O(1/n²)   as n → ∞,

uniformly in r (r = 1, …, n).
Notice that (1.9) and (1.12) together indicate there is very little difference between being told "X_1 has rank r" and being told, for any u in the interval [(r−1)/n, r/n], that "X_1 = u".

It is tempting to speculate whether important roles exist for the p_n(u)'s and p_nr's that mirror the impressive role played by the P_n's in formula (1.2). Not that we know of. Nevertheless, Aubrey Kempner (1933) and R.C. Entringer (1966) have evidenced the mathematical usefulness of the integers

    m_n(r) := (n−1)! p_nr,   r = 1, …, n, n ≥ 2.

(It follows from the definition of p_nr that m_n(r) is just the number of alternating permutations π on {1,2,…,n} for which π(1) > π(2) and π(1) = r.)
Kempner suggested the linear recursion

(1.15)    m_2(1) = 0, m_2(2) = 1;   m_n(r) = ∑_{k=1}^{r−1} m_{n−1}(n−k),   1 ≤ r ≤ n, n ≥ 3,

and the obvious formula

(1.16)    M_n = ∑_{r=1}^{n} m_n(r)

as a simple way of evaluating the integers M_n; he needed these for his qualitative description of a certain class of polynomial functions. Entringer formally proved (1.15) and used it to establish an identity, involving a bivariate generating function, which, in turn, he used to link Euler and Bernoulli numbers to the M_n's.
Section 2 establishes (1.8) and, hence, the sine law shown in (1.9). A generalization of (1.11) for the ratio P(A_n | X_{k_i} = u_i, i = 1, …, r)/P(A_n) is also obtained, where the k_i's (depending on n) are far apart from one another.

Section 3 establishes the sine law described in (1.12), and correction terms of orders 1/n and 1/n² are found which improve the accuracy in (1.12) by two orders of magnitude.

We conclude this introduction with the observation that the distributional assumption imposed on the (iid) X_i's is merely a convenience; any continuous distribution yields similar but (according to a standard argument) equivalent results.
2. The continuous case. In this section, we shall (a) discuss the mathematics leading up to equation (1.8), and (b) discuss a generalization under which attention is focused upon the ratio P(A_n | X_{k_i} = u_i, i = 1, …, r)/P(A_n) for large n with the k_i's far apart. It will be recalled that equation (1.8) provides the basis for our first "sine law" described in (1.9). Included is a quick review of relevant notation, with some amplifications.

Let X_1, X_2, … be iid uniform random variables on [0,1]. Denote by A_n the event that

    X_1 > X_2 < X_3 > ⋯ <(>) X_n,

where the sense of the last inequality is either < or > according as n is odd or even, and denote the probability P(A_n) by P_n, n ≥ 2. Also, let B_n be the event that

    X_1 < X_2 > X_3 < ⋯ >(<) X_n,

with the sense of every inequality reversed. For n ≥ 2 and 0 ≤ u ≤ 1, define the functions

    p_n(u) := P(A_n | X_1 = u)   and   q_n(u) := P(B_n | X_1 = u),

which are the only continuous versions of the indicated conditional probabilities, the versions we shall use throughout the paper.
By considering the transform Y_i = 1 − X_i, i = 1, …, n, it is easy to see that

    q_n(u) = p_n(1−u),

so that

    p_n(u) = ∫_0^u q_{n−1}(w) dw = ∫_0^u p_{n−1}(1−w) dw,

which establishes (1.7).
THEOREM 1. The functions p_n(u), n ≥ 2, defined on [0,1], are expandable as shown in (1.8).

We remark that the infinite sum in (1.8) clearly converges absolutely and uniformly in u when n ≥ 2, and hence pointwise. The reader should interpret future infinite sums from this perspective unless otherwise stated. In contrast, it is also possible to interpret (1.8) in an L² sense:

(2.1)    ∫_0^1 { p_n(u) − 2 (2/π)^n ∑_{k=−T}^{T} sin{(4k+1)(π/2)u}/(4k+1)^n }² du → 0   as T → ∞.

In fact, (1.8) is valid in this sense for n = 1, with p_1(u) := 1.
PROOF OF THEOREM 1. Providing (1.8) holds for n = 2, a simple induction step completes the proof:

    p_{n+1}(u) = ∫_0^u p_n(1−w) dw
               = ∫_0^u { 2 (2/π)^n ∑_{k=−∞}^{∞} sin{(4k+1)(π/2)(1−w)}/(4k+1)^n } dw
               = 2 (2/π)^{n+1} ∑_{k=−∞}^{∞} sin{(4k+1)(π/2)u}/(4k+1)^{n+1}.

Establishing (1.8) for n = 2 seems to require an L² approach. To motivate this, observe that

    p_{n+1}(u) = ∫_{1−u}^1 p_n(w) dw = ∫_0^1 K(u,w) p_n(w) dw,

where K is a self-adjoint kernel given by

    K(u,w) = 1{u+w > 1},   0 ≤ u,w ≤ 1.

This kernel has eigenvalues

    λ_k = (2/π)(4k+1)^{−1}   (k = 0, ±1, ±2, …),

with corresponding eigenfunctions

    φ_k(u) = sin{(π/2)(4k+1)u}.

I.e.,

    ∫_0^1 K(u,w) sin{(π/2)(4k+1)w} dw = {(2/π)(4k+1)^{−1}} sin{(π/2)(4k+1)u},

for all integers k. Further, observe that (1.8) can be expressed as

    p_n(u) = 2 ∑_{k=−∞}^{∞} (λ_k)^n φ_k(u).

It is easy to show that

    K(u,v) = 2 ∑_{k=−∞}^{∞} λ_k φ_k(u) φ_k(v)

as elements of L²([0,1]×[0,1]), and that K has no eigenvalues equal to zero. Consequently, by a result of Hochstadt (1973, p. 61), the collection of functions

    { √2 sin{(π/2)(4k+1)u} : k = 0, ±1, ±2, … }

forms a complete orthonormal set for L²[0,1]. Thus from (1.6),

    ∫_0^1 p_2(u) {√2 sin{(π/2)(4k+1)u}} du = √2 ∫_0^1 u sin{(π/2)(4k+1)u} du = √2 (2/π)²/(4k+1)²,

so that

    p_2(u) = 2 (2/π)² ∑_{k=−∞}^{∞} sin{(π/2)(4k+1)u}/(4k+1)²,

which, for the current L² context, has the interpretation shown in (2.1). This can easily be extended to the pointwise interpretation envisioned in the statement of the theorem. □
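The eigenrelation for the kernel K can be checked numerically by discretizing the integral; a sketch (the midpoint rule and tolerances are our choices):

```python
from math import pi, sin

def K_apply(k, u, N=20000):
    """Midpoint-rule value of integral_0^1 1{u + w > 1} sin((4k+1)(pi/2) w) dw,
    i.e. the kernel K applied to the k-th eigenfunction, evaluated at u."""
    h = 1.0 / N
    return h * sum(sin((4 * k + 1) * (pi / 2) * ((j + 0.5) * h))
                   for j in range(N) if u + (j + 0.5) * h > 1)

for k in (-1, 0, 2):
    lam = (2 / pi) / (4 * k + 1)          # the eigenvalue lambda_k
    for u in (0.3, 0.7):
        target = lam * sin((4 * k + 1) * (pi / 2) * u)
        assert abs(K_apply(k, u) - target) < 1e-3
```

Note that negative k gives negative eigenvalues (e.g. λ_{−1} = −(2/π)/3), which the check covers as well.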
The variant of the first sine law shown in (1.11) can be expressed more precisely as

(2.2)    max_{0≤u≤1} | f_n(u) − (π/2) sin{(π/2)u} | = | f_n(1) − π/2 | = 2π/3^{n+1} + O(1/5^n)   as n → ∞.

The latter equality follows directly from (1.3), (1.8) and (1.10), while the first can be shown analytically for n ≥ 2. Briefly, this analytical argument uses
(2.3)    f_n(u) − (π/2) sin{(π/2)u} = − (2(2/π)^n/P_n) { sin{(3π/2)u}/(−3)^n + sin{(π/2)u}/(−3)^{n+1} − R_n(u) },

where

    R_n(u) := ∑_{k≠0,−1} { sin{(4k+1)(π/2)u}/(4k+1)^n − sin{(π/2)u}/(4k+1)^{n+1} }.

The absolute value of the right side of (2.3), without the remainder term R_n(u), is maximized at u = 1. This remainder term is negligible enough to rule out the smaller u as maximizing arguments, and its derivative is negligible enough to rule out the larger u < 1. The details of this argument are omitted.
Let

(2.4)    p_n(u,v) := P(A_n | X_1 = u, X_n = v)

(convenient notation that should not be confused with p_n(u)), to be interpreted here as

(2.5)    P(u > X_2 < X_3 > ⋯ <(>) X_{n−1} >(<) v).

The sense of the last inequality is either < or > according as n is odd or even. Note that this is the only continuous version of (2.4) for n ≥ 3. For example,

(2.6)    p_3(u,v) = P(u > X_2 < v) = min(u,v).

While a continuous version for n = 2 does not exist, the version

(2.7)    p_2(u,v) = 1{u > v}

seems a reasonable interpretation of (2.4).
The analog of (1.7) is

    p_n(u,v) = ∫_0^u p_{n−1}(1−w, 1−v) dw = ∫_{1−u}^1 p_{n−1}(w, 1−v) dw
             = ∫_0^1 K(u,w) p_{n−1}(w, 1−v) dw.
Thus it follows that

(2.8)    p_{n+2}(u,v) = ∫_0^1 K*(u,w) p_n(w,v) dw,   n ≥ 2,

where

(2.9)    K*(u,w) := ∫_0^1 K(u,s) K(s,w) ds = min(u,w),   0 ≤ u,w ≤ 1.

The initial cases, n = 2 and n = 3, are given in (2.7) and (2.6), respectively. With equation (2.8), the unique continuous versions of the conditional probabilities in (2.4) can be found recursively for n ≥ 4, versions that are in full agreement with (2.5). For example,

    p_4(u,v) = u − uv − (1/2)(u−v)² 1{u > v},
    p_5(u,v) = uv − (1/2) uv max(u,v) − (1/6) {min(u,v)}³,

etc.
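These closed forms can be checked against the recursion (2.8) by numerical integration; a sketch (ours):

```python
def p4(u, v):
    """Closed form for p_4(u, v) displayed above."""
    return u - u * v - 0.5 * (u - v) ** 2 * (u > v)

def p5(u, v):
    """Closed form for p_5(u, v) displayed above."""
    return u * v - 0.5 * u * v * max(u, v) - min(u, v) ** 3 / 6

def step2(p, u, v, N=4000):
    """(2.8): integral_0^1 min(u, w) p(w, v) dw, by the midpoint rule."""
    h = 1.0 / N
    return h * sum(min(u, (j + 0.5) * h) * p((j + 0.5) * h, v)
                   for j in range(N))

p2 = lambda u, v: 1.0 if u > v else 0.0       # (2.7)
p3 = min                                       # (2.6): p_3(u, v) = min(u, v)

for u, v in [(0.2, 0.9), (0.8, 0.3), (0.5, 0.5)]:
    assert abs(step2(p2, u, v) - p4(u, v)) < 1e-3
    assert abs(step2(p3, u, v) - p5(u, v)) < 1e-3
```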
THEOREM 2. For odd n ≥ 3,

(2.10)    p_n(u,v) = 2 (2/π)^{n−1} ∑_{k=−∞}^{∞} sin{(4k+1)(π/2)u} sin{(4k+1)(π/2)v} / (4k+1)^{n−1},   0 ≤ u,v ≤ 1.

For even n ≥ 4,

(2.11)    p_n(u,v) = 2 (2/π)^{n−1} ∑_{k=−∞}^{∞} sin{(4k+1)(π/2)u} cos{(4k+1)(π/2)v} / (4k+1)^{n−1},   0 ≤ u,v ≤ 1.

For n = 2, the same formula holds pointwise (with respect to symmetric partial sums) except on the line u = v, and, for all v in [0,1], as an L² limit in the variable u. I.e., for each v in the unit interval,

    ∫_0^1 { p_2(u,v) − (4/π) ∑_{k=−T}^{T} sin{(4k+1)(π/2)u} cos{(4k+1)(π/2)v} / (4k+1) }² du → 0   as T → ∞.
PROOF. The same induction argument used in Theorem 1 is applicable here, based on (2.8) rather than (1.7). The initial cases, n = 2 and n = 3, need to be argued separately.

The case n = 3 is straightforward: According to (2.6) and (2.9), p_3(u,v) = K*(u,v). Thus,

    ∫_0^1 p_3(u,v) {√2 sin{(π/2)(4k+1)u}} du = ∫_0^1 K*(u,v) {√2 sin{(π/2)(4k+1)u}} du
      = (λ_k)² {√2 sin{(π/2)(4k+1)v}} = {(2/π)(4k+1)^{−1}}² {√2 sin{(π/2)(4k+1)v}},

so that

    p_3(u,v) = ∑_{k=−∞}^{∞} {(2/π)(4k+1)^{−1}}² {√2 sin{(π/2)(4k+1)u}} {√2 sin{(π/2)(4k+1)v}}
             = 2 (2/π)² ∑_{k=−∞}^{∞} sin{(4k+1)(π/2)u} sin{(4k+1)(π/2)v} / (4k+1)².

This L² limit easily assumes the asserted pointwise interpretation as well.

According to (2.7), p_2(u,v) = 1{u>v} = K(u, 1−v). Thus

    ∫_0^1 p_2(u,v) {√2 sin{(π/2)(4k+1)u}} du = ∫_0^1 K(u, 1−v) {√2 sin{(π/2)(4k+1)u}} du
      = λ_k {√2 sin{(π/2)(4k+1)(1−v)}} = {(2/π)(4k+1)^{−1}} {√2 cos{(π/2)(4k+1)v}},

so that

    p_2(u,v) = ∑_{k=−∞}^{∞} {(2/π)(4k+1)^{−1}} {√2 sin{(π/2)(4k+1)u}} {√2 cos{(π/2)(4k+1)v}}
             = 2 (2/π) ∑_{k=−∞}^{∞} sin{(4k+1)(π/2)u} cos{(4k+1)(π/2)v} / (4k+1),

the asserted form for p_2(u,v) as an L² limit.

It can be shown for n = 2 that the right side of (2.11) equals 1 if u > v; equals 0 if u < v; equals 1/2 if 0 < u = v < 1; and equals 0 if u = v = 0 or 1. The details will be omitted. □
Theorem 2 yields excellent approximations, especially for large n:

(2.12)    | p_n(u,v)/{2(2/π)^{n−1}} − sin{(π/2)u} cos{(π/2)v} | < (1/3)^{n−2},   0 ≤ u,v ≤ 1, even n ≥ 2,

and

(2.13)    | p_n(u,v)/{2(2/π)^{n−1}} − sin{(π/2)u} sin{(π/2)v} | < (1/3)^{n−2},   0 ≤ u,v ≤ 1, odd n ≥ 3.
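For n = 5, where p_5(u,v) is known in closed form, the approximation (2.13) can be checked on a grid; a sketch (ours):

```python
from math import pi, sin

def p5(u, v):
    """Closed form for p_5(u, v) from earlier in this section."""
    return u * v - 0.5 * u * v * max(u, v) - min(u, v) ** 3 / 6

norm = 2 * (2 / pi) ** 4                 # 2 (2/pi)^(n-1) with n = 5
worst = max(abs(p5(i / 50, j / 50) / norm - sin(pi * i / 100) * sin(pi * j / 100))
            for i in range(51) for j in range(51))
print(worst)
```

The observed maximum deviation is comfortably below the bound 1/3^{n−2} = 1/27.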
We shall now derive a sine law for the conditional density of X_1 at u given A_n and X_n = v. A bicontinuous version of this density is described in the next lemma for n ≥ 4; at most one such version is possible, and it is necessarily regular.

LEMMA 1. For odd n ≥ 5, the function

(2.14)    f_n(u|v) := { p_n(u,v) / ∫_0^1 p_n(w,v) dw   for 0 < v ≤ 1,
                      { f_{n−2}(u)                       for v = 0,
          0 ≤ u ≤ 1,

and for even n ≥ 4, the function

(2.15)    f_n(u|v) := { p_n(u,v) / ∫_0^1 p_n(w,v) dw   for 0 ≤ v < 1,
                      { f_{n−2}(u)                       for v = 1,
          0 ≤ u ≤ 1,

is a regular version of the conditional density of X_1 given A_n and X_n = v. Both are bicontinuous in the closed unit square.
It is easily seen that the integrals appearing in (2.14) and (2.15) can be expressed in terms of p_n(·):

(2.16)    ∫_0^1 p_n(w,v) dw = { p_n(v)     for odd n,
                               { p_n(1−v)   for even n.

PROOF OF LEMMA 1. Observe that the upper and lower parts on the right sides of (2.14) and (2.15) are density functions in u on the unit interval. Moreover, the upper parts are just special cases of Bayes' theorem (since X_1 and X_n are independent and the unconditional density of X_1 is identically one on the unit interval). The asserted regularity follows from standard theorems.

Now, these upper parts are bicontinuous, where defined, since p_n(u,v) is bicontinuous. But they assume the indeterminate form 0/0 when v = 0 and v = 1, respectively. Bicontinuity can be extended to the entire closed unit square by replacing these upper parts by the conditional density f_{n−2}(u) (referred to in (1.10) and (1.11)), which is just the uniform limit of f_n(u|v) as v approaches 0 and 1, respectively. This claim follows from (1.8) and (2.10) and (2.11); it depends on the observation that p_{n−2}(u) is the (one-sided) partial derivative of p_n(u,v) with respect to v at v = 0 and v = 1, respectively. □

The form shown in (2.14) describes a regular version for n = 3 if we set f_1(u) := 1. But a bicontinuous version does not exist: When n = 3, the upper part on the right side of (2.14) assumes the form min(u,v)/(v − v²/2) (0 < v ≤ 1), and this misbehaves as (u,v) approaches the origin.
THEOREM 3. For some constant D > 0,

(2.17)    | f_n(u|v) − (π/2) sin{(π/2)u} | ≤ D/3^n,   n ≥ 4,

uniformly for (u,v) within the closed unit square.

PROOF. We shall argue this for odd n by using (2.10). The proof for even n requires (2.11) and a similar argument. From (2.10), we obtain

(2.18)    p_n(u,v)/{2(2/π)^{n−1}} = sin{(π/2)u} sin{(π/2)v} + J_n(u,v)

and

(2.19)    { ∫_0^1 p_n(w,v) dw } / {2(2/π)^n} = sin{(π/2)v} + J_n(v),

where

    J_n(u,v) := ∑_{k≠0} sin{(4k+1)(π/2)u} sin{(4k+1)(π/2)v} / (4k+1)^{n−1}

and

    J_n(v) := ∑_{k≠0} sin{(4k+1)(π/2)v} / (4k+1)^n,

where the indicated sums are over all nonzero integers k. It is easily seen for integers r that

    | sin{rθ} | ≤ |r| sin θ,   0 ≤ θ ≤ π/2.

Thus

(2.20)    | J_n(u,v) | ≤ sin{(π/2)v} ∑_{k≠0} |4k+1|^{−(n−2)} ≤ sin{(π/2)v} / 3^{n−3}

and

(2.21)    | J_n(v) | ≤ sin{(π/2)v} ∑_{k≠0} |4k+1|^{−(n−1)} ≤ sin{(π/2)v} / 3^{n−2}.

Since

    sin{(π/2)v} + J_n(v) ≥ sin{(π/2)v} (1 − 1/3^{n−2}) > 0   for 0 < v ≤ 1,

the desired conclusion follows from (the upper form in) (2.14), (2.18), (2.19), (2.20) and (2.21). (Because of the bicontinuity described in Lemma 1, it is enough to restrict attention to v > 0, as the upper form in (2.14) does.) □
An improvement in (2.17) akin to (2.2) seems feasible, but will not be pursued here.

For integers s ≥ 1 and t ≥ s+2, let

    p_{st}(u,v) := 2 (2/π)^{t−s} ∑_{k=−∞}^{∞} e_s((4k+1)(π/2)u) e_t((4k+1)(π/2)v) / (4k+1)^{t−s},   0 ≤ u,v ≤ 1,

where

    e_i(w) := { sin w   for odd integers i,
              { cos w   for even integers i.

This generalizes p_n(u,v) (see Theorem 2), which now can be written as p_{1n}(u,v).

Now let 1 = s_0 < s_1 < ⋯ < s_r = n with s_i ≥ s_{i−1} + 2, and consider the conditional probability

    p_{s_0 s_1 ⋯ s_r}(u_0, u_1, …, u_r) := P(A_n | X_{s_i} = u_i, 0 ≤ i ≤ r).

Then Theorem 2 has the following generalization: For all u_0, u_1, …, u_r,

    p_{s_0 s_1 ⋯ s_r}(u_0, u_1, …, u_r) = ∏_{i=1}^{r} p_{s_{i−1} s_i}(u_{i−1}, u_i).

This leads to analogues of (2.12) and (2.13) in which the approximation error is controlled by

    Δ_n := min{ s_i − s_{i−1} : i = 1, …, r }.

While this bound is not best possible, it has the advantage of being simple. Clearly, even when r increases with n, the bound can be kept small as long as Δ_n is sufficiently large.
3. The discrete case. In this section, we shall (a) derive the sine law described in (1.12), and (b) discuss second and third order terms. Included is a quick review of relevant notation.

The iid uniformly distributed random variables X_1, X_2, …, X_n provide a natural and convenient means of describing a random permutation π on {1,2,…,n}:

    π(i) := the rank of X_i among X_1, …, X_n   (1 ≤ i ≤ n).

In particular, the event A_n, that X_1, …, X_n is alternating with X_1 > X_2, can be described in terms of ranks as the event that π(1), …, π(n) is alternating with π(1) > π(2). Then

    M_n := number of these random permutations that are alternating with π(1) > π(2),
    m_n(r) := number of these that are alternating with π(1) > π(2) and π(1) = r,

and

    p_nr := P(A_n | π(1) = r) = m_n(r)/(n−1)!.

Further,

    M_n = ∑_{r=1}^{n} m_n(r),

so that
(3.1)    P_n = (1/n) ∑_{r=1}^{n} p_nr.

Finally, it is easily seen that the m_n(r)'s are linked together through the recursion

    m_2(1) = 0, m_2(2) = 1;   m_n(r) = ∑_{k=1}^{r−1} m_{n−1}(n−k),   1 ≤ r ≤ n, n ≥ 3,

so that

(3.2)    p_{n+1,r} = (1/n) ∑_{k=1}^{r−1} p_{n,n+1−k} = P_n − (1/n) ∑_{k=r}^{n} p_{n,n+1−k},   1 ≤ r ≤ n+1.

Here, and below, summations over empty index sets (when r = 1) are defined as zero.
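Both equalities in (3.2), together with (3.1), can be confirmed in exact rational arithmetic; a sketch (ours):

```python
from fractions import Fraction
from math import factorial

def m_row(n):
    """m_n(r) for r = 1..n, from the recursion displayed above."""
    row = [0, 1]
    for m in range(3, n + 1):
        row = [sum(row[m - k - 1] for k in range(1, r))
               for r in range(1, m + 1)]
    return row

def p_row(n):
    """p_nr = m_n(r)/(n-1)! for r = 1..n, as exact fractions."""
    f = factorial(n - 1)
    return [Fraction(m, f) for m in m_row(n)]

n = 7
p, pnext = p_row(n), p_row(n + 1)
Pn = sum(p) / n                                           # (3.1)
for r in range(1, n + 2):
    first = sum((p[n - k] for k in range(1, r)), Fraction(0)) / n
    second = Pn - sum((p[n - k] for k in range(r, n + 1)), Fraction(0)) / n
    assert pnext[r - 1] == first == second                # both sides of (3.2)
print(Pn)
```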
Our initial objective is to establish the (asymptotic) sine law described in (1.12) (under a slightly weaker assumption):

THEOREM 4.

(3.3)    p_nr/{2(2/π)^n} = sin{(π/2)u_nr} + O(1/n)   as n → ∞, uniformly in r (r = 1, …, n),

whenever

    max{ |u_nr − (2r−1)/(2n)| : r = 1, …, n } = O(1/n)   as n → ∞.
PROOF. Since

    | sin{(π/2)u_nr} − sin{(π/2)·(2r−1)/(2n)} | ≤ (π/2) | u_nr − (2r−1)/(2n) | = O(1/n),

it is sufficient to prove (3.3) for the special case u_nr = (2r−1)/(2n) (r = 1, …, n). To this end, let

    H_n := max{ |h_nr| : r = 1, …, n },

where

(3.4)    h_nr := p_nr/{2(2/π)^n} − sin{(π/2)·(2r−1)/(2n)}.

Lemma 2 below clearly implies that H_n is of order 1/n (since π/4 < 1), completing this proof. □

Lemma 2. There exists a constant C > 0 such that for all n ≥ 2,

(3.5)    H_{n+1} ≤ (π/4) H_n + C/n.

Proof. Separate arguments will show |h_{n+1,r}| ≤ (π/4) H_n + C/n for r−1 ≤ n/2, and for r−1 > n/2.
For r−1 ≤ n/2, (3.4) and the first equality in (3.2) yield

    h_{n+1,r} + sin{(π/2)·(2r−1)/(2(n+1))} = p_{n+1,r}/{2(2/π)^{n+1}}
      = (π/(2n)) ∑_{k=1}^{r−1} ( h_{n,n+1−k} + sin{(π/2)·(2(n+1−k)−1)/(2n)} )
      = (π/(2n)) ∑_{k=1}^{r−1} h_{n,n+1−k} + (π/(4n)) cos{(π/2)·(n+1−r)/n} / sin(π/(4n)).

Hence, noting that cos{(π/2)·(n+1−r)/n} = sin{(π/2)·(r−1)/n},

    |h_{n+1,r}| ≤ (π(r−1)/(2n)) H_n
                  + | (π/(4n)) sin{(π/2)·(r−1)/n} / sin(π/(4n)) − sin{(π/2)·(2r−1)/(2(n+1))} |
                ≤ (π/4) H_n + C/n,

for r−1 ≤ n/2 and some C > 0 (independent of n and r).

For r−1 > n/2, (3.4) and the second equality in (3.2) yield

    h_{n+1,r} + sin{(π/2)·(2r−1)/(2(n+1))} = p_{n+1,r}/{2(2/π)^{n+1}}
      = P_n/{2(2/π)^{n+1}} − (π/(2n)) ∑_{k=r}^{n} h_{n,n+1−k}
        − (π/(4n)) [ 1 − cos{(π/2)·(n+1−r)/n} ] / sin(π/(4n)).

Hence,

    |h_{n+1,r}| ≤ (π(n+1−r)/(2n)) H_n
                  + | (π/(4n)) sin{(π/2)·(r−1)/n} / sin(π/(4n)) − sin{(π/2)·(2r−1)/(2(n+1))} |
                  + | P_n/{2(2/π)^{n+1}} − 1 | + | 1 − (π/(4n))/sin(π/(4n)) |,

for r−1 > n/2 and some C > 0. Note that the latter two absolute values are of orders 3^{−n} (see (1.4)) and 1/n², respectively, and thus of smaller orders than 1/n. □
It is possible to improve upon the accuracy shown in (3.3). An examination of the proof of Lemma 2 reveals that

(3.6)    p_nr/{2(2/π)^n} = s^{(t)}_{nr} + O(1/n^t)   as n → ∞, uniformly in r (r = 1, …, n),

for some integer t ≥ 1 and a set of approximates {s^{(t)}_{nr} : r = 1, 2, …, n; n ≥ 2}, if

(3.7)    s^{(t)}_{n+1,r} = (π/(2n)) ∑_{k=1}^{r−1} s^{(t)}_{n,n+1−k} + O(1/n^t)   as n → ∞, uniformly in r, r−1 ≤ n/2,

and

(3.8)    s^{(t)}_{n+1,r} = 1 − (π/(2n)) ∑_{k=r}^{n} s^{(t)}_{n,n+1−k} + O(1/n^t)   as n → ∞, uniformly in r, r−1 > n/2.

Conversely, (3.6) (together with (1.4), (3.1), (3.2)) readily implies (3.7), (3.8), and also

(3.9)    (π/(2n)) ∑_{k=1}^{n} s^{(t)}_{n,k} = 1 + O(1/n^t)   as n → ∞.

Further, (3.7), (3.8) and (3.9) jointly imply

(3.10)    s^{(t)}_{n+1,r} = (π/(2n)) ∑_{k=1}^{r−1} s^{(t)}_{n,n+1−k} + O(1/n^t)   as n → ∞, uniformly in r (r = 1, …, n).

To summarize:

Lemma 3. For each fixed integer t ≥ 1, the following are equivalent:
(a)  (3.6),
(b)  (3.7) and (3.8),
(c)  (3.9) and (3.10).

Now suppose (3.6) holds for a set of approximates {s^{(t)}_{nr} : r = 1, 2, …, n; n ≥ 2} and some fixed t, so that (3.9) and (3.10) are valid statements. The objective is to find improved (easily computed) approximates {s^{(t+1)}_{nr} : r = 1, 2, …, n; n ≥ 2} which satisfy

(3.11)    p_nr/{2(2/π)^n} = s^{(t+1)}_{nr} + O(1/n^{t+1})   as n → ∞, uniformly in r (r = 1, …, n).
To this end, let

(3.12)    α^{(t)}_{nr} := n^t { p_nr/{2(2/π)^n} − s^{(t)}_{nr} }   (r = 1, 2, …, n; n ≥ 2),

(3.13)    β^{(t)}_{nr} := n^t { (π/(2n)) ∑_{k=1}^{r−1} s^{(t)}_{n,n+1−k} − s^{(t)}_{n+1,r} }   (r = 1, 2, …, n+1; n ≥ 2),

and

(3.14)    γ^{(t)}_n := n^t { 1 − (π/(2n)) ∑_{k=1}^{n} s^{(t)}_{n,k} }   (n ≥ 2).

Then (3.9) and (3.10) (because of (1.4) and (3.2)) are equivalent, respectively, to

(3.15)    (π/(2n)) ∑_{k=1}^{n} α^{(t)}_{n,k} = γ^{(t)}_n + O(n^t 3^{−n})   as n → ∞

and

(3.16)    {n/(n+1)}^t α^{(t)}_{n+1,r} − (π/(2n)) ∑_{k=1}^{r−1} α^{(t)}_{n,n+1−k} = β^{(t)}_{nr}.

Now suppose there exist a C^∞ function B^{(t)} on [0,1] and a constant c^{(t)} such that, as n → ∞,

(3.17)    β^{(t)}_{nr} = B^{(t)}(r/(n+1)) + O(1/n),   uniformly in r (1 ≤ r ≤ n+1),

and

(3.18)    γ^{(t)}_n = c^{(t)} + O(1/n).
Lemma 4 below shows that there exists a C^∞ function A^{(t)} on [0,1] satisfying

(3.19)    A^{(t)}(x) − (π/2) ∫_{1−x}^{1} A^{(t)}(w) dw = B^{(t)}(x),   0 ≤ x ≤ 1,

and

(3.20)    (π/2) ∫_0^1 A^{(t)}(w) dw = c^{(t)}.

By (3.15)–(3.20), it follows that, as n → ∞,

(3.21)    (π/(2n)) ∑_{k=1}^{n} [ α^{(t)}_{n,k} − A^{(t)}(k/n) ] = O(1/n)

and

(3.22)    [ α^{(t)}_{n+1,r} − A^{(t)}(r/(n+1)) ] − (π/(2n)) ∑_{k=1}^{r−1} [ α^{(t)}_{n,n+1−k} − A^{(t)}((n+1−k)/n) ] = O(1/n).

Let

    a_{n,r} := α^{(t)}_{n,r} − A^{(t)}(r/n).

From (3.21) and (3.22), it readily follows that

    |a_{n+1,r}| ≤ (π/4) max{ |a_{n,k}| : 1 ≤ k ≤ n } + C′/n

for some constant C′. Hence, max{ |a_{n,r}| : 1 ≤ r ≤ n } = O(1/n), and

    s^{(t+1)}_{nr} := s^{(t)}_{nr} + A^{(t)}(r/n)/n^t

meets the requirements of (3.11).
Lemma 4. Assume (3.17) and (3.18). Then there exists a unique C^∞ function A^{(t)} satisfying (3.19) and (3.20).

Proof. For ease of notation, we shall drop the superscript (t). Equation (3.19) is equivalent to

    A(x) − (π/2) ∫_0^1 K(x,y) A(y) dy = B(x),

where K(x,y) is the self-adjoint kernel defined in the proof of Theorem 1 of Section 2. If

(3.23)    ∫_0^1 B(x) sin{(π/2)x} dx = 0,

then as an element of L²[0,1],

    A(x) = ∑_{k=−∞}^{∞} c_k √2 sin{(π/2)(4k+1)x},

where

    c_k = ( 1 − (4k+1)^{−1} )^{−1} ∫_0^1 B(x) √2 sin{(π/2)(4k+1)x} dx,   k ≠ 0,

and c_0 is determined by (3.20). Clearly, as an element of L²[0,1],

    Ã(x) := B(x) + (π/2) ∫_{1−x}^{1} A(w) dw

also satisfies (3.19) and (3.20), since Ã = A in L²[0,1]. The fact that Ã is continuous implies Ã satisfies (3.19) pointwise, which, in turn, implies (via induction) that Ã is C^∞. The uniqueness of a C^∞ solution follows from the uniqueness of an L² solution to (3.19) and (3.20).
It remains to verify (3.23). Let

    μ := ∫_0^1 B(x) sin{(π/2)x} dx.

Then by (3.17),

(3.24)    b_n := (1/n) ∑_{k=1}^{n} β^{(t)}_{n,k} sin{(π/2)·k/(n+1)} → μ   as n → ∞.

Since α^{(t)}_{n,r} = O(1) (uniformly in r as n → ∞), we have

(3.25)    a_n := (1/n) ∑_{k=1}^{n} α^{(t)}_{n,k} sin{(π/2)·k/n} = O(1)   as n → ∞.

By (3.16),

    b_{1n} − b_{2n} = b_n,

where

    b_{1n} := {n/(n+1)}^t (1/n) ∑_{r=1}^{n} α^{(t)}_{n+1,r} sin{(π/2)·r/(n+1)}

and

    b_{2n} := (π/(2n)) (1/n) ∑_{r=1}^{n} ∑_{k=1}^{r−1} α^{(t)}_{n,n+1−k} sin{(π/2)·r/(n+1)}
            = (1/n) ∑_{k=1}^{n} α^{(t)}_{n,k} sin{(π/2)·k/n} + O(1/n).

Thus b_{1n} = a_{n+1} + O(1/n) and b_{2n} = a_n + O(1/n), so that

    a_{n+1} − a_n = b_n + O(1/n).

By (3.24), if μ ≠ 0, then a_n will be unbounded, contradicting (3.25). Thus μ = 0, completing the proof. □

We are now ready to improve the first order approximation
    p_nr/{2(2/π)^n} = s^{(1)}_{nr} + O(1/n)   as n → ∞,

for

    s^{(1)}_{nr} := sin{(π/2)·r/n}

(corresponding to u_nr = r/n in Theorem 4). By (3.13) and (3.14) with t = 1, we get

    β^{(1)}_{nr} = B^{(1)}(r/(n+1)) + O(1/n)   (uniformly in r, 1 ≤ r ≤ n+1)   and   γ^{(1)}_n = c^{(1)} + O(1/n),

where

    B^{(1)}(x) = (π/4){ 1 − cos{(π/2)x} } − (π/2)(1−x) cos{(π/2)x}   and   c^{(1)} = −π/4.

Solving (3.19) and (3.20) yields

(3.26)    A^{(1)}(x) = (π²/8)(x − x²) sin{(π/2)x} − (π/2)(1−x) cos{(π/2)x}.
Note that (3.19) and (3.20) can be reduced to the second order differential equation

    A^{(t)′′}(x) + (π²/4) A^{(t)}(x) = B^{(t)′′}(x) − (π/2) B^{(t)′}(1−x)

with initial conditions

    A^{(t)}(0) = B^{(t)}(0),   A^{(t)′}(0) = B^{(t)′}(0) + (π/2){ B^{(t)}(1) + c^{(t)} }.

To conclude, we have shown that

    p_nr/{2(2/π)^n} = sin{(π/2)·r/n} + (1/n) A^{(1)}(r/n) + O(1/n²)   as n → ∞,

where A^{(1)} is given in (3.26).

We can obtain the next order term by repeating the process. Letting t = 2 and inserting

    s^{(2)}_{nr} := sin{(π/2)·r/n} + (1/n) A^{(1)}(r/n)

into (3.13) and (3.14), we obtain

    β^{(2)}_{nr} = B^{(2)}(r/(n+1)) + O(1/n)   (uniformly in r, 1 ≤ r ≤ n+1)   and   γ^{(2)}_n = c^{(2)} + O(1/n),
where B^{(2)} and c^{(2)} are given by considerably lengthier expressions than B^{(1)} and c^{(1)}. Solving (3.19) and (3.20) then yields A^{(2)}, and thus the third order approximation

    s^{(3)}_{nr} := s^{(2)}_{nr} + (1/n²) A^{(2)}(r/n).

Since the computational burden in finding B^{(2)}, c^{(2)} and, in turn, A^{(2)} is substantial, we resorted to the use of a symbolic processing computer package (Macsyma). We do not know whether there is a closed-form representation of A^{(t)} for all t.
Table 1 below shows the accuracy of the three approximations of p_nr/{2(2/π)^n} by

    sin{(π/2)·r/n},    sin{(π/2)·r/n} + (1/n) A^{(1)}(r/n),    and
    sin{(π/2)·r/n} + (1/n) A^{(1)}(r/n) + (1/n²) A^{(2)}(r/n),

for sample values of n ≤ 100. In this table, Δ_ni denotes the maximum with respect to r, 1 ≤ r ≤ n, of the absolute error in the i-th approximation, i = 1, 2, 3. The quality of the second and especially the third approximations is apparent, even when n is fairly small. It seems a bit surprising that the accuracy improves with i at every value of n, even when n = 2. We have no reason to believe this monotonicity continues indefinitely with increasing orders of approximation, for fixed values of n.
TABLE 1

   n     Δ_n1      Δ_n2        Δ_n3
   2    0.707    0.53847    0.4197399
   3    0.500    0.24339    0.0926487
   4    0.383    0.13271    0.0857672
   5    0.309    0.08219    0.0311274
  10    0.156    0.02108    0.0036226
  15    0.105    0.00901    0.0009768
  20    0.078    0.00502    0.0004002
  30    0.052    0.00220    0.0001151
  40    0.039    0.00123    0.0000479
  50    0.031    0.00078    0.0000243
  60    0.026    0.00054    0.0000140
  70    0.022    0.00040    0.0000088
  80    0.020    0.00030    0.0000059
  90    0.017    0.00024    0.0000041
 100    0.016    0.00019    0.0000030
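The first two error columns can be reproduced from the exact p_nr and the function A^(1) of (3.26); a sketch (ours; reproducing Δ_n3 would additionally require A^(2), for which we have no closed form here):

```python
from math import cos, factorial, pi, sin

def m_row(n):
    """m_n(r), r = 1..n, via the recursion displayed in Section 3."""
    row = [0, 1]
    for m in range(3, n + 1):
        row = [sum(row[m - k - 1] for k in range(1, r))
               for r in range(1, m + 1)]
    return row

def A1(x):
    """The first correction function A^(1) of (3.26)."""
    return ((pi ** 2 / 8) * (x - x * x) * sin(pi * x / 2)
            - (pi / 2) * (1 - x) * cos(pi * x / 2))

def deltas(n):
    """Maximum absolute errors of the first and second approximations."""
    norm = 2 * (2 / pi) ** n
    p = [m / factorial(n - 1) / norm for m in m_row(n)]
    e1 = max(abs(p[r - 1] - sin(pi * r / (2 * n))) for r in range(1, n + 1))
    e2 = max(abs(p[r - 1] - sin(pi * r / (2 * n)) - A1(r / n) / n)
             for r in range(1, n + 1))
    return e1, e2

for n in (2, 3, 10):
    print(n, deltas(n))
```

The printed values agree with the Δ_n1 and Δ_n2 columns of Table 1 to the precision shown there.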
References

André, Désiré (1879). "Développements de sec x et de tang x", Comptes Rendus Heb. Vol. 88, pp. 965-967.

André, Désiré (1881). "Sur les permutations alternées", J. Math. Pures Appl. Vol. 7, pp. 167-184.

André, Désiré (1883). "Probabilité pour qu'une permutation donnée de n lettres soit une permutation alternée", Comptes Rendus Heb. Vol. 97, pp. 983-984.

André, Désiré (1894). "Sur les permutations quasi alternées", Comptes Rendus Heb. Vol. 119, pp. 947-949.

André, Désiré (1895). "Mémoire sur les permutations quasi-alternées", J. Math. Pures Appl. Vol. 1, pp. 315-350.

Entringer, R.C. (1966). "A combinatorial interpretation of the Euler and Bernoulli numbers", Nieuw Archief Wiskunde (Section 3) Vol. 14, pp. 241-246.

Hochstadt, Harry (1973). Integral Equations, Wiley, New York.

Kempner, Aubrey J. (1933). "On the shape of polynomial curves", Tôhoku Math. J. Vol. 37, pp. 347-362.