GENERALIZED CUMULATIVE DISTRIBUTION FUNCTIONS: II.
THE σ-LOWER FINITE CASE 1/

by

Gordon Simons 2/

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 888
August, 1973

1/ Part I discusses the general linearly ordered case (Simons 1972).
2/ This research was supported by the Air Force Office of Scientific Research under Grant AFOSR 72-2386.
ABSTRACT

A mass m(x) ≥ 0 is assigned to each point x of a partially ordered countable set X. It is further assumed that M(x) = Σ_{y≤x} m(y) < ∞ for each x ∈ X. M is called a distribution function. For certain sets X, it is shown that M need not determine m uniquely. For others, M determines m. A theory is presented for σ-lower finite spaces (sets), which are defined in the paper. Such spaces are locally finite, i.e., each interval [x,y] = {z ∈ X : x ≤ z ≤ y} has a finite number of points. Möbius functions, which have been defined for locally finite spaces, are used throughout. Distribution functions on a particular σ-lower finite space arise naturally from boundary crossing problems analyzed by Doob and Anderson. The theory is applied to this example and to another.

Key words and phrases: distribution function, Möbius function, partial ordering, boundary crossing
1. INTRODUCTION

Let X be a partially ordered countable set with each point x ∈ X possessing a non-negative mass m(x). We assume M(x) = Σ_{y≤x} m(y) < ∞, x ∈ X, and refer to the function M as the (cumulative) distribution function. Sometimes, we shall require the total mass m(X) = Σ_{x∈X} m(x) to be unity.1/ We shall concern ourselves with the following questions:

(i) When does the distribution function determine the individual masses m(x), x ∈ X?

(ii) How are they found when they are determined?

(iii) When is a function M(x), x ∈ X, actually a distribution function?

When X is finite, one has an inversion formula expressed in terms of the Möbius function μ(x,y) of X:

(1)  m(y) = Σ_{x≤y} M(x)μ(x,y),  y ∈ X.

Section 3 of Rota's (1964) fundamental paper on the theory of Möbius functions provides the relevant background. Thus, the distribution function always determines the individual masses whenever X is finite.

1/ The concept of a distribution function is fundamental in probability theory, an area where the requirement m(X) = 1 is most natural. However, we shall refrain from making this a requirement at the outset since its inclusion would tend to complicate the theory presented below.
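The inversion formula (1) is easy to experiment with directly. The sketch below (Python; the poset — divisors of 12 under divisibility — and the masses are invented purely for illustration) computes the Möbius function by the standard recursion and checks that (1) recovers the individual masses from the distribution function.

```python
# Illustrative only: a small finite poset (divisors of 12 under divisibility)
# with arbitrary non-negative masses.  mobius() implements the standard
# recursion, and the final comprehension applies the inversion formula (1).
points = [1, 2, 3, 4, 6, 12]
leq = lambda x, y: y % x == 0            # x <= y  iff  x divides y

m = {1: 0.1, 2: 0.2, 3: 0.05, 4: 0.15, 6: 0.3, 12: 0.2}         # masses m(x)
M = {x: sum(m[y] for y in points if leq(y, x)) for x in points}  # M(x) = sum_{y<=x} m(y)

def mobius(x, y, _cache={}):
    """mu(x,x) = 1; mu(x,y) = -sum_{x <= z < y} mu(x,z) for x < y; else 0."""
    if not leq(x, y):
        return 0
    if x == y:
        return 1
    if (x, y) not in _cache:
        _cache[(x, y)] = -sum(mobius(x, z) for z in points
                              if leq(x, z) and leq(z, y) and z != y)
    return _cache[(x, y)]

# Formula (1): m(y) = sum_{x <= y} M(x) mu(x,y).
recovered = {y: sum(M[x] * mobius(x, y) for x in points if leq(x, y))
             for y in points}
```

On this particular poset the recursion reproduces the familiar number-theoretic Möbius function of the ratio y/x.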
We shall be concerned with the more difficult situation in which X is infinite. The full range of possibilities is much more than we can cope with at this early stage, and we shall be content with a modest beginning. We shall confine all of our attention to locally finite spaces since it is for such spaces that a Möbius function is defined. That is, for each pair of points x, y ∈ X, we shall insist that the interval [x,y] = {z: x ≤ z ≤ y} be finite.
A rather uninteresting extension of the finite theory discussed in the second paragraph can be made when the number of summands in (1) is finite for each y ∈ X. Formula (1) still applies. We refer to such an X as lower finite. A lower finite space is necessarily locally finite.
An interesting example for which X is locally finite but not lower finite arises (apparently unnoticed) in a much cited paper by Doob (1949): Let X consist of a maximal point (0,1) and pairs of points (n,1), (n,2) for n ≥ 1. See Figure I. x = (n,j) < x' = (n',j') if and only if n > n', n' ≥ 0. Let {W(t), t ≥ 0} be a standard Wiener process (mean zero and variance t) and let U (the upper) and L (the lower) be two lines with U having positive slope and intercept, and L having negative slope and intercept. Let M(0,1) = 1 and, for n ≥ 1, let M(n,1) and M(n,2) denote the probability that W touches U and L alternately at n times 0 < t_1 < ⋯ < t_n, beginning with U and L, respectively. (Anything may happen before t_1, between times, and after t_n.) Then {M(x), x ∈ X} is a distribution function corresponding to individual masses such as

m(0,1) = P(W never touches U or L),
m(1,1) = P(W touches U but never touches L),
m(2,2) = P(W touches L before U, then touches U, then never touches L again).

Doob describes how to compute M(x), x ∈ X. His interest is in expressing, in terms of these, a probability such as m(0,1). In particular, he obtains a formula which, in our notation, becomes

(2)  m(0,1) = 1 - Σ_{n=1}^∞ (-1)^{n-1} {M(n,1) + M(n,2)}.

Anderson (1960) obtains a formula for P(W touches U before L). (W does not need to touch L.) This can be expressed in terms of the individual masses as Σ_{n=1}^∞ m(n,1). Anderson evaluates the probability as M(1,1) - M(2,2) + M(3,1) - M(4,2) + ⋯.
FIGURE I

[Hasse diagram of X: the maximal point (0,1), with the pairs (n,1), (n,2) for n = 1, 2, 3, 4, 5, … below it; x decreases as n increases.]

For the X of Figure I, we find that the distribution function always determines the individual masses. Besides equation (2), we have the related equations:
(3)  m(n,j) = M(n,j) - Σ_{k=n+1}^∞ (-1)^{k-n-1} {M(k,1) + M(k,2)},  n ≥ 1, j = 1,2.
These equations can be checked by direct substitution.
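Equations (2) and (3) can also be checked numerically. The sketch below (illustrative; the masses are randomly generated and supported on finitely many levels, so every sum is finite and the identities hold exactly up to rounding) builds a distribution function on the space of Figure I and verifies both formulas.

```python
import random

# Figure I, with masses placed on levels 1..N plus the maximal point (0,1).
# The partial order is (n,j) < (n',j') iff n > n', so M(n,j) equals the mass
# at (n,j) plus all mass on deeper levels, and M(0,1) is the total mass.
random.seed(0)
N = 12
m = {(n, j): random.random() for n in range(1, N + 1) for j in (1, 2)}
total_below = sum(m.values())
m[(0, 1)] = 1.0
scale = 1.0 / (total_below + 1.0)      # normalize so that m(X) = 1
m = {x: v * scale for x, v in m.items()}

def M(n, j):
    if (n, j) == (0, 1):
        return 1.0                     # total mass, by the normalization
    return m[(n, j)] + sum(m[(k, i)] for i in (1, 2) for k in range(n + 1, N + 1))

# Equation (2): m(0,1) = 1 - sum_{n>=1} (-1)^(n-1) {M(n,1) + M(n,2)}.
m01 = 1.0 - sum((-1) ** (n - 1) * (M(n, 1) + M(n, 2)) for n in range(1, N + 1))

# Equation (3): m(n,j) = M(n,j) - sum_{k>n} (-1)^(k-n-1) {M(k,1) + M(k,2)}.
def m_from_M(n, j):
    return M(n, j) - sum((-1) ** (k - n - 1) * (M(k, 1) + M(k, 2))
                         for k in range(n + 1, N + 1))
```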
Before we present some theory, we shall show that a locally
finite space can have distribution functions which fail to determine
the individual masses.
Our example is only slightly more complicated than that of Figure I. See Figure II. X consists of a maximal point (0,1) and triplets (n,1), (n,2), (n,3) for n ≥ 1. x = (n,j) < x' = (n',j') if and only if n > n', n' ≥ 0.
FIGURE II

[Hasse diagram of X: the maximal point (0,1), with the triplets (n,1), (n,2), (n,3) for n = 1, 2, 3, 4, … below it; x decreases as n increases.]

Example 1: M(0,1) = 1 and M(n,1) = M(n,2) = M(n,3) = 2^{-n}, n ≥ 1.
There are many possible values for the individual masses but all of the possibilities can be expressed as convex combinations of two extremal solutions:

Solution 1a: m(0,1) = 1/2; m(n,1) = m(n,2) = m(n,3) = 2^{-n-1} for n = 2,4,6,…; all other m(x) = 0.

Solution 1b: m(n,1) = m(n,2) = m(n,3) = 2^{-n-1} for n = 1,3,5,…; all other m(x) = 0.
Example 2: M(0,1) = 1 and M(n,1) = M(n,2) = M(n,3) = 3^{-n}, n ≥ 1. There is only one solution: m(0,1) = 2/5 and m(n,1) = m(n,2) = m(n,3) = (2/5)·3^{-n}, n ≥ 1.
These examples show that the issue of uniqueness for the individual masses depends on the actual distribution function as well as on the structure of X. We shall return to these examples later after we have some theory with which to justify our claims.
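The claims for Examples 1 and 2 can be verified exactly. The following sketch (exact rational arithmetic; the geometric tail sums over the deeper levels are evaluated in closed form, so no truncation error enters) checks that both extremal solutions of Example 1, and the unique solution of Example 2, reproduce the stated distribution functions M(n,j) on Figure II.

```python
from fractions import Fraction as F

def M_solution(n, m_top, mass_at, tail):
    """M(n,j) on Figure II: own mass plus all mass on deeper levels.
    mass_at(k) is the common mass at (k,1),(k,2),(k,3); tail(n) is the
    closed-form value of sum_{k>n} 3*mass_at(k)."""
    return (m_top if n == 0 else mass_at(n)) + tail(n)

# Solution 1a: m(0,1) = 1/2 and mass 2^(-k-1) on even levels k = 2,4,6,...
mass_1a = lambda k: F(1, 2 ** (k + 1)) if (k % 2 == 0 and k >= 2) else F(0)
tail_1a = lambda n: F(2) ** (1 - (n + 1 if n % 2 == 1 else n + 2))

# Solution 1b: mass 2^(-k-1) on odd levels k = 1,3,5,..., nothing at (0,1).
mass_1b = lambda k: F(1, 2 ** (k + 1)) if k % 2 == 1 else F(0)
tail_1b = lambda n: F(2) ** (1 - (n + 1 if n % 2 == 0 else n + 2))

# Example 2: m(0,1) = 2/5 and mass (2/5)*3^(-k) at every (k,j).
mass_2 = lambda k: F(2, 5) / F(3) ** k
tail_2 = lambda n: F(3, 5) / F(3) ** n   # sum_{k>n} 3*(2/5)*3^(-k) = (3/5)*3^(-n)

M_1a = lambda n: M_solution(n, F(1, 2), mass_1a, tail_1a)
M_1b = lambda n: M_solution(n, F(0), mass_1b, tail_1b)
M_2  = lambda n: M_solution(n, F(2, 5), mass_2, tail_2)
```

Both M_1a and M_1b evaluate to 2^{-n} for every n, exhibiting the non-uniqueness, while M_2 evaluates to 3^{-n}.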
While the spaces described in Figures I and II are not lower finite, they are what we shall refer to as σ-lower finite. That is, there exists a sequence of partitions X = A_k + B_k with A_k > B_k (i.e., each point of A_k exceeds each point of B_k), k ≥ 1, such that each A_k is lower finite (i.e., for each y ∈ A_k, {x ∈ A_k: x ≤ y} is finite) and B_k ↓ ∅ as k → ∞. A σ-lower finite space is necessarily locally finite. We shall give a definitive answer to questions (i), (ii) and (iii) for σ-lower finite spaces in Section 3. Although the σ-lower finite assumption is stronger than one might prefer, it permits a wide range of spaces. Departures from this assumption can be analyzed but the assumption permits us to develop a reasonably uncomplicated theory.
2. SOME PRELIMINARIES

Let X be locally finite. It will be recalled that the Möbius function can be defined recursively by

(4)  μ(x,y) = 1 for x = y,
     μ(x,y) = -Σ_{x≤z<y} μ(x,z) for x < y,
     μ(x,y) = 0 for x ≰ y.

Another useful formula is

(5)  μ(x,y) = -Σ_{x<z≤y} μ(z,y) for x < y.

Let A' denote the complement of A for each subset A of X.
Suppose X = A + B + C with A > C > B, where A, B or C may be the empty set ∅. Define, whenever the number of summands is finite:

(6)  μ(x,A) = -Σ_{x≤z<A} μ(x,z),  x ∈ A',
     μ(B,y) = -Σ_{B<z≤y} μ(z,y),  y ∈ B',
     μ(B,A) = -1 + Σ_{B<x≤y<A} μ(x,y).

It is easily checked that

(7)  μ(B,A) = -1 - Σ_{B<x<A} μ(x,A) = -1 - Σ_{B<y<A} μ(B,y).
It is most helpful to have an intuitive understanding of (6). μ(x,A), μ(B,y) and μ(B,A) are actual Möbius function values corresponding to various clustered versions of X. If one views A as a single point and the points of A' as individual entities, equation (4) with y = A yields the definition of μ(x,A). In a similar manner, μ(B,y) arises from (5). μ(B,A) arises from viewing A and B as points and the points in between as individual entities.
Proposition 1. Suppose X = A + B where A and B are not empty and A > B. Then A and B have at least one minimal and one maximal point, respectively. Moreover, μ(b,a_0) = μ(b,A) and μ(b_0,a) = μ(B,a) for each a ∈ A, b ∈ B, minimal point a_0 ∈ A and maximal point b_0 ∈ B.

Proof. Let a ∈ A and b ∈ B. [b,a] is a finite interval and must contain a minimal point in A and a maximal point in B. The equalities above easily follow from the definitions. □
Proposition 2. Suppose X = A + B where A > B. Then μ(b,a) = -μ(b,A)μ(B,a) for each a ∈ A and b ∈ B.

Proof. The desired equality follows from Proposition 1 when a is a minimal point of A or b is a maximal point of B. The remainder of the proof uses induction based on the total number of elements in [b,a]. The induction step is:

μ(b,a) = -Σ_{b<z≤a} μ(z,a) = μ(B,a) + Σ_{b<z<A} μ(z,A)μ(B,a) = -μ(b,A)μ(B,a),

where the middle equality uses (6) for the terms with z ∈ A and the induction hypothesis for the terms with z ∈ B, and the last equality uses (5) with A viewed as a single point. □
Proposition 3. Suppose X = A + B where A > B and B ≠ ∅. Let x_0 be a minimal point of A. Further, let x and y be points, distinct from x_0, satisfying x ≤ x_0 and y > x_0. Then

Σ_{B<z≤y, z≠x_0} μ(x,z) = 0.

Proof. Using (4):

0 = Σ_{x≤z≤y} μ(x,z) = Σ_{x≤z≤y, z∈B} μ(x,z) + μ(x,x_0) + Σ_{B<z≤y, z≠x_0} μ(x,z),

and the first two terms cancel, by (4) applied within [x,x_0]. □
We describe now, in terms of the notation used in the definition, the three possible types of σ-lower finite spaces:

Type I:   X is lower finite.
Type II:  X is not lower finite and μ(B_{k+1}, A_k) = 0 for infinitely many k.
Type III: X is not lower finite and μ(B_{k+1}, A_k) = 0 for only finitely many k.

Proposition 4. The distinction between types II and III is independent of the particular sequence of partitions.

Proof. By viewing B_{k+1} as a single point and applying Proposition 2 to the partition X = A_k + B_k (with the points of B_k − B_{k+1} as individual entities), it follows that

(8)  μ(B_{k+1}, y) = -μ(B_{k+1}, A_k) μ(B_k, y),  y ∈ A_k.

In turn, with A_k viewed as a single point, μ(B_{k+2}, A_k) = -μ(B_{k+2}, A_{k+1})μ(B_{k+1}, A_k), and, iterating,

(9)  μ(B_{k+ℓ}, A_k) = (-1)^{ℓ-1} Π_{j=0}^{ℓ-1} μ(B_{k+j+1}, A_{k+j}),  ℓ ≥ 2.

It follows that the type, II or III, is unaltered by adding or deleting partitions from a given sequence. □
3. SOME THEORY

We shall successively examine σ-lower finite spaces of types I, II and III.

We have already commented about type I spaces (lower finite spaces) in the introduction. They are so easily analyzed because each set {x: x ≤ y}, y ∈ X, is finite and the restriction of a distribution function (on X) to {x: x ≤ y} is a distribution function on that finite space. Therefore, questions (i), (ii) and (iii) (appearing in the introduction) are easily answered with the use of (1).
Suppose X is a σ-lower finite space of type II. X has the following important property:

Proposition 5. For a type II σ-lower finite space X, each set {x: μ(x,y) ≠ 0}, y ∈ X, is finite.

Proof. In view of (9), we may assume, without sacrificing generality, that μ(B_{k+1}, A_k) = 0 for each k ≥ 1. Suppose y ∈ A_k for some k. Since X is not lower finite, A_{k+1} must be a proper subset of X. Consequently, there exists a point x_0 ∈ B_{k+1}. Since [x_0, y] is a finite set, it suffices to show that μ(x,y) = 0 for each x ∉ [x_0, y]. We only need to consider x ∈ B_{k+1}. For such an x, we have, by Proposition 2 and (8),

μ(x,y) = -μ(x, A_{k+1})μ(B_{k+1}, y) = μ(x, A_{k+1})μ(B_{k+1}, A_k)μ(B_k, y) = 0. □
The following theorem tells us that (1) holds for type II spaces:

Theorem 1. For any locally finite space, (1) holds for a given y ∈ X whenever the number of non-zero summands in (1) is finite.

Proof. Under the assumption,

Σ_{x≤y} Σ_{z≤x} |m(z)μ(x,y)| ≤ M(y) Σ_{x≤y, μ(x,y)≠0} |μ(x,y)| < ∞.

Thus, Fubini's theorem applies. Then,

Σ_{x≤y} M(x)μ(x,y) = Σ_{x≤y} Σ_{z≤x} m(z)μ(x,y) = Σ_{z≤y} m(z) Σ_{z≤x≤y} μ(x,y) = m(y). □
11
While
can be used directly to answer question (iii)
(1)
for
type II spaces, it is easier to use the following theorem:
Theorem 2.
A funation
M on a type II
a - ZOIJ)er finite space
X is a distribution funation if and onZy if
the sum in
(b)
the set {x:x~y,IM(x)1 ~
and y
Proof.
ing
y
€
Assume
and an
(a)
~ ~
is non-negative for' eaah
€}
X, and
€
is finite for eaah
and
(b),
and fix y.
€
(1).
=0
z
~
Find an
y which is a minimal point of
for all
y
> 0
X.
in terms of M by
~(x,z)
(1)
(a)
~.
Except for a finite number of
Define m
x
X.
in any given finite subset of
contain-
€
X,
Thus, we
may interchange the order of summation below, and we have with the
aid of Proposition 3:
Letting
k +
~
leads to
distribution function.
and
(b)
Suppose
m(z)
Conversely, if
for the individual masses
1
Ez~y
m(x), x
€
= M(y).
is a
on the structure of
X, then (a) follows from Theorem
III.
Depending
there may be two distinct sets of individual
masses with the same distribution function.
•
o
X.
a - lower finite space of type
X,
M is a
M is a distribution function
follows from the local finiteness of
X
Thus,
type I and type II spaces where the
a~swers
This
contr.~·::. ~:::
-;dth
to questiOi.l (i) is
II
Always. II
/
Proposition 4 permits us to assume that A_1 ≠ ∅ and μ(B_k, A_1) ≠ 0 for each k ≥ 1. For each x ∈ X, choose an arbitrary A_k containing x and define

β(x) = -μ(B_k, x)/μ(B_k, A_1).

Proposition 6. β is well-defined in the sense that the value of β(x) for x ∈ X does not depend on how one chooses k. β(x) = μ(B_1, x) for x ∈ A_1. β is not identically zero.

Proof. For x ∈ A_k, μ(B_{k+1}, x) = -μ(B_{k+1}, A_k)μ(B_k, x) (c.f., (8)), and μ(B_{k+1}, A_1) = -μ(B_{k+1}, A_k)μ(B_k, A_1) (c.f., (8)). Thus β(x) is well-defined. If x ∈ A_1, then μ(B_1, A_1) = -1 and β(x) = μ(B_1, x). Finally, if x_k is a minimal point of A_k, then μ(B_k, x_k) = -1 and β(x_k) = μ(B_k, A_1)^{-1} ≠ 0. □

Let β+ and β− denote the positive and negative parts of β, respectively.
Theorem 3. For each y ∈ X,

(10)  Σ_{x≤y} β(x) = 0.

Thus, if Σ_{x≤y} |β(x)| < ∞ for each y ∈ X, {β+(x)} and {β−(x)} represent two distinct sets of individual masses with the same distribution function.

Proof. Fix y and choose, for each k, a minimal point x_k of A_k. Then Σ_{B_k<z≤y, z≠x_k} μ(B_k, z) = 0 (c.f., Proposition 3), and hence,

(11)  Σ_{B_k<z≤y, z≠x_k} β(z) = 0.

Then (10) follows by letting k → ∞. □
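The situation of Theorem 3 can be inspected concretely on Figure II, where one finds β(0,1) = -1 and β(n,j) = -(-2)^{-n} for n ≥ 1 (these are the values recorded in Table I). The sketch below checks numerically, truncating the levels at a depth where the remaining terms are negligible, that Σ_{x≤y} β(x) = 0 for each y and that Σ_{x≤(0,1)} |β(x)| = 4.

```python
# beta on Figure II: beta(0,1) = -1 and beta(n,j) = -(-2)^(-n), n >= 1.
# Each level n >= 1 carries three points (n,1),(n,2),(n,3) with equal beta.
def beta(n):
    return -1.0 if n == 0 else -((-2.0) ** (-n))

DEPTH = 60   # terms beyond this are of order 2^(-60), negligible here

def sum_beta_below(n):
    """sum of beta(x) over x <= (n,j): own value plus 3 per deeper level."""
    return beta(n) + sum(3 * beta(k) for k in range(n + 1, DEPTH))

abs_total = abs(beta(0)) + sum(3 * abs(beta(k)) for k in range(1, DEPTH))
```

Since Σ|β(x)| is finite here, (1/2)β+ and (1/2)β− are two genuine sets of masses with a common distribution function, which is exactly the non-uniqueness seen in Example 1.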
13
A space
X
will be called a dete~ining spaae if every distribu-
X corresponds -to-a unique set of individual masses.-
tion-function on
Theorem 4.
A type III a - Z01J)er' finite spaae X is a deter'-
mining spaae if and onl.y if t
If3 (x) I =
x s Yo
for some
00
y
X.
€
0
•
The "only if"
We shall defer the proof of the "i£" part until later.
part is immediate from Theorem 3.
Define ν(x,y) = μ(x,y) + β(y)μ(x,A_1) for x ∈ B_1, and ν(x,y) = μ(x,y) for x ∈ A_1. ν(x,y) = 0 whenever x ∈ B_k and y ∈ A_k for some k (c.f., (8)). Likewise, ν(x,y) = 0 whenever x ∈ A_1 and x ≰ y (c.f., (4)). Thus, the sum Σ_{x∈X} M(x)ν(x,y) has at most a finite number of non-zero summands for each y ∈ X.

Proposition 7. Let M be a distribution function corresponding to the individual masses {m(x)} and let α = m(B_1). Then M and α together determine m. In particular,

(12)  m(y) = Σ_{x∈X} M(x)ν(x,y) + α β(y),  y ∈ X.

Proof. α = m(B_1) ≤ M(y_0) < ∞, since there exists a point y_0 ∈ A_1 and every such point exceeds each point of B_1. Fix k ≥ 1 and view B_k as a point. Then (c.f., (1)),

(13)  m(y) = Σ_{x∈A_k} M(x)μ(x,y) + m(B_k)μ(B_k, y),  y ∈ A_k.

Next, view both A_1 and B_k as points. Then (c.f., (1)),

(14)  m(B_k)μ(B_k, A_1) = -α - Σ_{x∈B_1−B_k} M(x)μ(x, A_1).

Combining (13) and (14), we obtain for y ∈ A_k:

m(y) = Σ_{x∈A_k} M(x)μ(x,y) + Σ_{x∈B_1−B_k} M(x)μ(x,A_1)β(y) + αβ(y) = Σ_{x∈X} M(x)ν(x,y) + αβ(y). □
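Formula (12) can be exercised numerically on Figure I, where ν and β take the simple closed forms recorded in Table I: β(n,j) = (-1)^{n+1}, and ν((n,j),(n',j')) equals 1 at the top point, -1 for n = n' ≥ 1 with j ≠ j', (-1)^{n'-n-1} for 1 ≤ n < n', and 0 otherwise. In the sketch below (illustrative; the masses are random and supported on finitely many levels, so every sum is finite) (12) is checked pointwise.

```python
import random

# Figure I with random masses on levels 1..N and an arbitrary top mass.
random.seed(1)
N = 8
m = {(0, 1): 0.3}
m.update({(n, j): random.random() for n in range(1, N + 1) for j in (1, 2)})
points = list(m.keys())
alpha = sum(v for (n, j), v in m.items() if n >= 1)   # alpha = m(B_1)

def M(n, j):
    if n == 0:
        return sum(m.values())        # M(0,1) is the total mass
    return m[(n, j)] + sum(m[(k, i)] for i in (1, 2) for k in range(n + 1, N + 1))

def nu(x, y):
    """nu for Figure I, in the closed form given in Table I."""
    (n, j), (n2, j2) = x, y
    if n == n2 == 0:
        return 1.0
    if n == n2 >= 1 and j != j2:
        return -1.0
    if 1 <= n < n2:
        return (-1.0) ** (n2 - n - 1)
    return 0.0

def beta(n):
    return (-1.0) ** (n + 1)

def rhs(y):
    """Right hand side of (12): sum_x M(x) nu(x,y) + alpha * beta(y)."""
    n2, j2 = y
    return sum(M(n, j) * nu((n, j), y) for (n, j) in points) + alpha * beta(n2)
```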
Theorem 4 Proof ("if" part). Let {m_1(x)} and {m_2(x)} be distinct sets of individual masses which give rise to the distribution function M. From (12), we have Δm(x) = Δα β(x), where Δm = m_2 − m_1 and Δα = Δm(B_1). Since Δm(x) ≠ 0 for some x, Δα ≠ 0. Thus for any y ∈ X,

Σ_{x≤y} |β(x)| = |Δα|^{-1} Σ_{x≤y} |Δm(x)| ≤ 2|Δα|^{-1} M(y) < ∞,

which contradicts the assumption that Σ_{x≤y_0} |β(x)| = ∞ for some y_0 ∈ X. □

Theorem 5. A function M on a type III σ-lower finite space is a distribution function if and only if (a) there exists an α which makes the right hand side of (12) non-negative for each y ∈ X, and (b) the set {x: x ≤ y, |M(x)| ≥ ε} is finite for each ε > 0 and y ∈ X. Each such α corresponds to a unique set of individual masses {m(x)} defined by (12), and for this set, α = m(B_1).
Proof. The proof of the "if" part parallels that for Theorem 2. But here one defines m by (12), using the α predicated in assumption (a), instead of (1). The complications brought in by β (both directly and indirectly through ν(x,y)) are taken care of by using (11). One obtains M(y) = Σ_{x≤y} m(x), y ∈ X. If α were not equal to m(B_1), Proposition 7 would be contradicted. For the converse, (a) is a consequence of Proposition 7 and (b) is due to the local finiteness of X. □

Let M be a distribution function. It is easy to see that the set of possible values for α = m(B_1) is an interval [α_min, α_max], perhaps a point.
Proposition 8.

(15)  α_min = sup{ -Σ_{x∈X} M(x)ν(x,y)/β(y) : β(y) > 0 }.

(16)  α_max = inf{ -Σ_{x∈X} M(x)ν(x,y)/β(y) : β(y) < 0 }.

If Σ_{x≤z} |β(x)| = ∞ for some z (and necessarily for all z), then

(17)  α_min = α_max = lim_{k→∞} { Σ_{B_k<y<A_1} |Σ_{x∈X} M(x)ν(x,y)| / Σ_{B_k<y<A_1} |β(y)| }.

If m(B_k)μ(B_k, A_1) → 0 as k → ∞, then

(18)  α_min = α_max = -lim_{k→∞} Σ_{x∈B_1−B_k} M(x)μ(x, A_1).

Proof. (15) and (16) follow easily from Theorem 5. Suppose Σ_{x≤z} |β(x)| = ∞. We may suppose z ∈ A_1. Then from (12), we have for each k:

| α Σ_{B_k<y<A_1} |β(y)| − Σ_{B_k<y<A_1} |Σ_{x∈X} M(x)ν(x,y)| | ≤ Σ_{B_k<y<A_1} m(y) ≤ M(z) < ∞.

The first sum → ∞ as k → ∞, and (17) follows. (18) follows directly from (14). □

The condition for (18) is difficult to validate directly, since one is not likely to know the value of m(B_k) without knowing α. However, m(B_k) ≤ M(x), x ∈ A_k, k ≥ 1. So, an indirect validation is possible.
We now turn our attention toward the evaluation of the total mass m(X). A requirement such as m(X) = 1 sometimes determines a unique choice for the individual masses when a distribution function, by itself, does not. The most direct formula, of course, is m(X) = Σ_{x∈X} m(x). However, there are some advantages in securing formulas which primarily involve the values of the distribution function.

Let X* = X + {x*} be an augmented space satisfying x < x* for each x ∈ X. If m(x*) = 0, then M(x*) = m(X). If X* is locally finite, then μ*(x,x*) = μ(x,∅), x ∈ X, where μ* denotes the extension of μ to X*.4/

4/ We shall superscript an extended function with an * when it aids clarity. Here, μ(x,∅) (c.f., (6), with A = ∅) has a different meaning from μ*(x,∅); the latter is zero for all x ∈ X.
Proposition 9. Suppose X* is a σ-lower finite space. Then X is a σ-lower finite space of the same type as X*. If X* is of type I or type II, then

(19)  m(X) = -Σ_{x∈X} M(x)μ(x,∅),

a sum with only a finite number of non-zero summands. If X* is of type III, then A_1 is a finite set and

(20)  m(X) = -Σ_{x∈A_1} M(x)μ(x,∅) - α μ(B_1,∅),

where α = m(B_1).

Proof. Briefly, (20) follows from (13) applied in X* with y = x* (c.f., (1)). (19) is shown similarly. The details are left to the reader. □
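Formula (19) is easy to try on a small lower finite space. The sketch below (the divisor poset and the masses are invented for the illustration) augments the divisors of 36 by a top point x*, computes μ(x,∅) = μ*(x,x*) = -Σ_{z≥x} μ(x,z), and recovers the total mass.

```python
# Illustrative check of (19): X = divisors of 36 under divisibility,
# augmented by a top point x* with m(x*) = 0.
points = [d for d in range(1, 37) if 36 % d == 0]
leq = lambda x, y: y % x == 0
m = {d: float(i + 1) for i, d in enumerate(points)}   # arbitrary masses
M = {x: sum(m[y] for y in points if leq(y, x)) for x in points}

def mobius(x, y, _c={}):
    if not leq(x, y):
        return 0
    if x == y:
        return 1
    if (x, y) not in _c:
        _c[(x, y)] = -sum(mobius(x, z) for z in points
                          if leq(x, z) and leq(z, y) and z != y)
    return _c[(x, y)]

def mu_empty(x):
    """mu(x, empty) = mu*(x, x*) in the augmented space X* = X + {x*}."""
    return -sum(mobius(x, z) for z in points if leq(x, z))

# Formula (19): m(X) = -sum_x M(x) mu(x, empty).
total = -sum(M[x] * mu_empty(x) for x in points)
```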
Proposition 9 is still usable when X* is not σ-lower finite: Express X as {x_n: n ≥ 1} and define X_n = {z ∈ X: z ≤ x_j for some j = 1,…,n} and X_n* = X_n + {x*}, n ≥ 1. If X is σ-lower finite, then so is each X_n* (and of the same type as X). Furthermore, m(X_n) ↑ m(X) as n → ∞.
Theorem 6. Suppose X is a σ-lower finite space. If X is of type I or II, then

(21)  m(X) = lim_{n→∞} Σ_{x∈X_n} {M(x) Σ_{y∈X_n} μ(x,y)}.

If X is of type III, then

(22)  m(X) = α + lim_{n→∞} Σ_{x∈A_1} {(M(x) − α) Σ_{y∈X_n} μ(x,y)},

where α = m(B_1). Each sum has a finite number of non-zero summands.

Proof. The details easily follow from (6), (19), (20) and the foregoing discussion. □
4. APPLICATIONS

We shall confine our applications to the two spaces illustrated in Figures I and II. Both are σ-lower finite spaces of type III. The basic facts about these spaces are summarized in Table I. For Figure I,

α_min = α_max = Σ_{n=1}^∞ (-1)^{n-1} {M(n,1) + M(n,2)}  (c.f., (18)).

This is consistent with (2). For Figure II, define S_0 = M(0,1) and, for n ≥ 1,

S_n = Σ_{k=1}^n (-2)^{k-1} {M(k,1) + M(k,2) + M(k,3)} + (-2)^n min(M(n,1), M(n,2), M(n,3)).

Then α_min = sup{S_1, S_3, S_5, …} and α_max = inf{S_0, S_2, S_4, …} (c.f., (15) and (16)).

Finally, we turn our attention to Examples 1 and 2. We see in Example 1 that the first and second solutions correspond to m(x) = (1/2)β−(x) and m(x) = (1/2)β+(x) (x ∈ X), respectively. For the first solution α = α_min = 1/2, and for the second α = α_max = 1. For Example 2, (18) applies, and we obtain α = α_min = α_max = 3/5. With this α, the values of m(x) follow immediately from (12).
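For Example 1 the partial sums S_n are computable exactly, and they recover the two extremal values of α just mentioned. A short check in exact rational arithmetic (the truncation depth 40 is arbitrary; here S_n is constant on odd n and on even n ≥ 0):

```python
from fractions import Fraction as F

# Example 1 on Figure II: M(0,1) = 1 and M(n,j) = 2^(-n), n >= 1.
def Mval(n):
    return F(1, 2 ** n)

def S(n):
    """The partial sums S_n defined in the text (min over j is M(n,j) here,
    since the three values at each level coincide)."""
    if n == 0:
        return Mval(0)
    head = sum(F(-2) ** (k - 1) * 3 * Mval(k) for k in range(1, n + 1))
    return head + F(-2) ** n * Mval(n)

alpha_min = max(S(n) for n in range(1, 40, 2))   # sup over odd n
alpha_max = min(S(n) for n in range(0, 40, 2))   # inf over even n
```

The two values 1/2 and 1 are precisely the α's of Solutions 1a and 1b.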
TABLE I

                            Figure I                          Figure II

A_n                         {x: x > (n,1)}                    {x: x > (n,1)}

μ((n,j),(n',j'))            1 for n=n', j=j'                  1 for n=n', j=j'
                            (-1)^{n-n'} for n'<n              -(-2)^{n-n'-1} for n'<n
                            0 otherwise                       0 otherwise

μ(B_n,(n',j'))              (-1)^{n-n'} for n'<n              -(-2)^{n-n'-1} for n'<n

μ((n,j),A_{n'})             (-1)^{n-n'+1} for n'≤n            -(-2)^{n-n'} for n'≤n

μ(B_n,A_{n'})               (-1)^{n-n'+1} for n'≤n            -(-2)^{n-n'} for n'≤n

β(n,j)                      (-1)^{n+1}                        -(-2)^{-n}

Σ_{x≤(0,1)} |β(x)|          ∞                                 4

Σ_{x≤(n,j)} β+(x)           ∞                                 2^{1-n}

Σ_{x≤(n,j)} β−(x)           ∞                                 2^{1-n}

ν((n,j),(n',j'))            1 for n=n'=0, j=j'=1              1 for n=n'=0, j=j'=1
                            -1 for n=n'≥1, j≠j'               1/2 for n=n'≥1, j=j'
                            (-1)^{n-n'-1} for n'>n≥1          -1/2 for n=n'≥1, j≠j'
                            0 otherwise                       (-2)^{n-n'-1} for n'>n≥1
                                                              0 otherwise
REFERENCES

[1] Anderson, T.W. (1960). "A modification of the sequential probability ratio test to reduce the sample size." Ann. Math. Statist. 31, 165-197.

[2] Doob, J.L. (1949). "Heuristic approach to the Kolmogorov-Smirnov theorems." Ann. Math. Statist. 20, 393-403.

[3] Rota, Gian-Carlo (1964). "On the foundations of combinatorial theory I. Theory of Möbius functions." Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 2, 340-368.

[4] Simons, Gordon. "Generalized distribution functions: The linearly ordered case with applications to nonparametric statistics." To be published in Annals of Probability.