Nixon, D. E. (1965). "A study of the distributions of two test statistics for periodicity in variance."

A STUDY OF THE DISTRIBUTIONS OF TWO TEST
STATISTICS FOR PERIODICITY IN VARIANCE

David E. Nixon
Institute of Statistics
Mimeograph Series No. 434
May 1965
TABLE OF CONTENTS

LIST OF TABLES

1. INTRODUCTION
   1.1 Purpose and Motivation
   1.2 Related Literature
   1.3 Outline of Dissertation

2. EIGENVALUES
   2.1 Matrix Representation of T_p and T_p'
   2.2 Eigenvalues of C_1
       2.2.1 t even
       2.2.2 t odd, N odd
       2.2.3 t odd, N even
   2.3 Eigenvalues of C_p
   2.4 Eigenvalues of S_p
       2.4.1 p = 1
       2.4.2 p = 1, N = 4m
       2.4.3 p = 1, N = 2h, h odd
       2.4.4 p = 1, N odd
       2.4.5 p ≠ 1
   2.5 Preliminary Remarks Concerning the Distribution of T_p and T_p'

3. SUMMARY OF VON NEUMANN'S AND HART'S RESULTS WITH REMARKS ON TWO APPROXIMANTS TO T_1
   3.1 Distribution of T_1
   3.2 General Formulas
   3.3 Two Approximants to the Distribution of T_1
       3.3.1 Comparison with C_1
       3.3.2 Comparison with R_1

4. FORMULAS RELATED TO THE MOMENTS OF T_p AND T_p'
   4.1 Basic Formulas
   4.2 As Applied to T_1 and T_1'
   4.3 As Applied to T_p and T_p'
   4.4 Proof That the mth Moments of T_p and T_p' Are the Same as Those of T_1 for m < q

5. ASYMPTOTIC DISTRIBUTION WITH THE EXACT DISTRIBUTION OF T_p FOR N = 2p AND N = 3p
   5.1 An Asymptotic Distribution for T_p and T_p' for q → ∞
   5.2 Asymptotic Distribution of T_p for q fixed
   5.3 Exact Densities of T_p for N = 2p and N = 3p
   5.4 An Asymptotic Distribution of T_p² + T_p'²

6. HEURISTIC REMARKS SUPPORTING THE USE OF CERTAIN APPROXIMANTS TO THE DENSITIES OF T_p AND T_p'
   6.1 Preliminary Discussion
   6.2 Approximants to the Densities of T_p and T_p'

LIST OF REFERENCES

LIST OF TABLES

2.1 Eigenvalues (λ) of C_p
2.2 Eigenvalues (λ*) of S_p
3.1 Significance points for T_1
3.2 Comparison of T_1 with C_1
3.3 Comparison of T_1 with R_1
4.1 Values of F(m,q), G(m,q), and H(m,q) for m < 10
1.  INTRODUCTION

1.1  Purpose and Motivation

The purpose of this paper is to study the distribution of the statistics

(1.1)   T_p = \frac{\sum_{r=1}^{N} (x_r - \bar{x})^2 \cos \frac{2\pi r p}{N}}{\sum_{r=1}^{N} (x_r - \bar{x})^2}

and

(1.2)   T_p' = \frac{\sum_{r=1}^{N} (x_r - \bar{x})^2 \sin \frac{2\pi r p}{N}}{\sum_{r=1}^{N} (x_r - \bar{x})^2} ,

where x_1, x_2, . . . , x_N are N NID(μ, 1) random variables, p = 1, 2, . . . , [N/2], and \bar{x} is the mean of the x_r, r = 1, 2, . . . , N.

The statistics T_p and T_p' arise as sufficient test statistics (Herbst, 1962) for testing the hypothesis that the x_r have the same variance and are respectively directed against alternative hypotheses of the form

(1.3)   \sigma_r^2 = \alpha_0 + \alpha_p \cos \frac{2\pi p r}{N}

and

(1.4)   \sigma_r^2 = \beta_0 + \beta_p \sin \frac{2\pi p r}{N} .

While Herbst studied statistics of the form (1.1) and (1.2), he considered the case μ known. The present study was thus motivated by the need to investigate these statistics when removal of the mean is necessary.
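The computation in (1.1) and (1.2) can be sketched numerically as follows (a minimal illustration using numpy; the function name `T_stats` is ours, not part of the original):

```python
import numpy as np

def T_stats(x, p):
    """Return (T_p, T_p') of (1.1)-(1.2) for a sample x_1, ..., x_N."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = np.arange(1, N + 1)
    d2 = (x - x.mean()) ** 2                       # (x_r - xbar)^2
    Tp  = np.sum(d2 * np.cos(2 * np.pi * r * p / N)) / d2.sum()
    Tpp = np.sum(d2 * np.sin(2 * np.pi * r * p / N)) / d2.sum()
    return Tp, Tpp
```

Both statistics are weighted averages of cosines and sines with nonnegative weights summing to one, so they always lie in [-1, 1]; they are also unchanged when a constant is added to every x_r, which is the invariance that makes removal of the mean the essential issue.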
1.2  Related Literature

The work which relates to the present investigation has arisen as a result of the search for distributions of ratios of quadratic forms appearing in time series analysis. Von Neumann (1941) shed much light on the distribution of statistics of the type y = X'AX/X'X in his study of the distribution of the ratio

(1.5)   \eta = \frac{\frac{1}{N-1} \sum_{r=1}^{N-1} (x_{r+1} - x_r)^2}{\frac{1}{N} \sum_{r=1}^{N} (x_r - \bar{x})^2} .

Hart (1942) tabulated probabilities associated with η and also presented an approximation to its distribution. Koopmans (1942) complemented Von Neumann's studies and presented a method whereby the characteristic roots of A might be approximated by a continuous function in order that the characteristic function of y might be inverted.

R. L. Anderson (1942) gave the distribution of

(1.6)   R_L = \frac{\sum_{r=1}^{N} (x_r - \bar{x})(x_{r+L} - \bar{x})}{\sum_{r=1}^{N} (x_r - \bar{x})^2} ,

subscripts reduced modulo N, for L = 1 and N/L = 2, 3, 4. He also showed that, for (N, L) = 1, R_L has the same distribution as R_1. Much of Von Neumann's, Koopmans', and also Anderson's work is summarized in Dixon (1944).

Because of the so-called double-root result, Anderson is able to give the exact distribution of his R_L. Many authors, notably Durbin and Watson (1951) and Geisser (1957), have developed statistics or modified their statistics so as to be able to use the results of Anderson's work. The double-root result is explained in Hannan (1960) and is summarized by Durbin and Watson (1951) and Geisser (1957).

Other work on the distribution of ratios of quadratic forms has been done by R. L. Anderson and T. W. Anderson (1950), Gurland (1948), Madow (1945), Leipnik (1947), and Whittle (1951).
1.3  Outline of Dissertation

Since the distributions of T_p and T_p' are uniquely determined by the eigenvalues of the numerator quadratic form of each of these statistics, we concern ourselves in Chapter 2 with the determination of these eigenvalues. We show that T_1 and T_1' have the same distribution and furthermore that the distribution of T_p and T_p' is the same as that of T_1 when N and p are relatively prime. We also argue that T_{p_1} has the same distribution as T_{p_2}, and that T_{p_1}' has the same distribution as T_{p_2}', provided that (N, p_1) = (N, p_2), where (N, p) denotes the greatest common divisor of N and p.

In Chapter 3 we present some of Von Neumann's results as they apply to T_1 (T_1') and also note several of his results which are used in later chapters. In addition, we discuss two approximants to T_1.

Chapter 4 is concerned with the specification of formulas sufficient for the evaluation of the moments of T_p and T_p' for arbitrary p. We also show that T_p and T_p' are distributed as T_1 through the first m moments for m < q, where q is the integer defined by kq = N when (N, p) = k.

We develop asymptotic distributions in Chapter 5. We also compare T_p with Anderson's R_p, showing the direction of convergence of their respective distribution functions. In addition, we present the exact densities of T_p for N = 2p and N = 3p and remark on an asymptotic distribution of T_p² + T_p'².

In Chapter 6 we use one of Von Neumann's results in presenting a heuristic argument supporting the use of approximating functions to the densities of T_p and T_p'.
2.  EIGENVALUES

2.1  Matrix Representation of T_p and T_p'

Since T_p and T_p' are unchanged when each x_r is replaced by x_r - μ, we can consider the x_r in (1.1) and (1.2) as NID(0, 1) random variables. If we define u_{p,r} and v_{p,r} respectively by

(2.1)   u_{p,r} = \cos \frac{2\pi p r}{N} ,   p = 1, 2, . . . , [N/2],

(2.2)   v_{p,r} = \sin \frac{2\pi p r}{N} ,   p = 1, 2, . . . , [N/2],

we may rewrite (1.1) and (1.2) as

(2.3)   T_p = \frac{\sum_{r=1}^{N} (x_r - \bar{x})^2 u_{p,r}}{\sum_{r=1}^{N} (x_r - \bar{x})^2} ,   p = 1, 2, . . . , [N/2],

and

(2.4)   T_p' = \frac{\sum_{r=1}^{N} (x_r - \bar{x})^2 v_{p,r}}{\sum_{r=1}^{N} (x_r - \bar{x})^2} ,   p = 1, 2, . . . , [N/2],

where the x_r are regarded as N NID(0, 1) random variables.

It is well known that the denominator quadratic form of T_p and T_p' has eigenvalue 1 of multiplicity n = N - 1 and eigenvalue 0 of multiplicity 1; hence it reduces to Z'Z, where Z is a column vector of N - 1 NID(0, 1) random variables. Our first problem then is to determine the eigenvalues of the matrices of the numerator quadratic forms of T_p and T_p'. To this end we first give their matrix representations.
LEMMA 2.1   Let n(T_p) and n(T_p') denote the respective numerators of T_p and T_p', and let X be an N-dimensional column vector of the x_r; then

(2.5)   n(T_p) = X' C_p X = X' (c_{ij}) X,

where

        c_{ij} = u_{p,i} (1 - 2/N)              for i = j,
        c_{ij} = -(u_{p,i} + u_{p,j})/N         for i ≠ j.

Similarly,

(2.6)   n(T_p') = X' S_p X = X' (s_{ij}) X,

where

        s_{ij} = v_{p,i} (1 - 2/N)              for i = j,
        s_{ij} = -(v_{p,i} + v_{p,j})/N         for i ≠ j.

Proof:  We verify (2.5), noting forms of n(T_p) and n(T_p') which will prove useful later. Now

        n(T_p) = \sum_{r=1}^{N} (x_r - \bar{x})^2 u_{p,r}
               = \sum_{r=1}^{N} x_r^2 u_{p,r} - 2\bar{x} \sum_{r=1}^{N} x_r u_{p,r} + \bar{x}^2 \sum_{r=1}^{N} u_{p,r} .

Since \sum_{r=1}^{N} u_{p,r} = 0 for p ≤ [N/2], we can write

(2.7)   n(T_p) = \sum_{r=1}^{N} x_r^2 u_{p,r} - 2\bar{x} \sum_{r=1}^{N} x_r u_{p,r} .

The corresponding form for n(T_p') is

(2.8)   n(T_p') = \sum_{r=1}^{N} x_r^2 v_{p,r} - 2\bar{x} \sum_{r=1}^{N} x_r v_{p,r} ,

where we have used the fact that also \sum_{r=1}^{N} v_{p,r} = 0 for p ≤ [N/2]. In (2.7) we replace \bar{x} by \frac{1}{N} \sum_{r=1}^{N} x_r to obtain

        n(T_p) = \sum_{r=1}^{N} x_r^2 u_{p,r} - \frac{2}{N} \sum_{r=1}^{N} u_{p,r} x_r \sum_{t=1}^{N} x_t ,

from which follows (2.5).
It will be convenient to work with the matrices NC_p and NS_p whose elements are given respectively by (2.5) and (2.6) upon multiplying c_{ij} and s_{ij} by N. The eigenvalues of C_p and S_p will follow upon dividing by N. The dependence of c_{ij} and s_{ij} on N and p will be noted when needed from (2.5) and (2.6).

We start with the case p = 1, presenting in detail the method of determining the eigenvalues of C_1, as the results will be used in determining the eigenvalues of the other matrices involved.
2.2  Eigenvalues (λ) of C_1

In what follows we write u_{1,r} simply as u_r. The eigenvalues of NC_1 are given as roots λ of the characteristic equation |λI - NC_1| = 0, which written in full is

(2.9)   |R(\lambda)| = \begin{vmatrix}
        \lambda - (N-2)u_1 & u_1 + u_2 & \cdots & u_1 + u_N \\
        u_2 + u_1 & \lambda - (N-2)u_2 & \cdots & u_2 + u_N \\
        \vdots & \vdots & & \vdots \\
        u_N + u_1 & u_N + u_2 & \cdots & \lambda - (N-2)u_N
        \end{vmatrix} = 0,

where R(λ) denotes the characteristic matrix of NC_1.

Investigation of (2.9) when N = 3, 4, 5, and 6 indicates the solutions are λ = 0 and λ_t = N cos(πt/N), t = 1, 2, . . . , N-1. This we prove as a theorem.

Theorem 2.1:  The roots of (2.9) are λ = 0 and

(2.10)  \lambda_t = N \cos \frac{\pi t}{N} ,   t = 1, 2, . . . , N-1.

Proof:  The proof is based on elementary properties of determinants (see Browne 1958). That λ = 0 is a root follows from the fact that the first row of |R(λ)| becomes (λ, λ, . . . , λ) when replaced by the sum of all the rows.
2.2.1  t even

Next, λ_t (t even) as given by (2.10) can be shown to be a solution by the following considerations. Let

(2.11)  \xi_r = \big( u_r + u_1, \; \ldots, \; \lambda - (N-2)u_r, \; \ldots, \; u_r + u_N \big)'

denote the rth column of |R(λ)|, the entry λ - (N-2)u_r occupying the rth position. Since u_r = u_{N-r}, r = 1, 2, . . . , N-1, if in |R(λ)| we replace \xi_r by \xi_r - \xi_{N-r} for r = 1, 2, . . . , (N-1)/2 (N odd) and r = 1, 2, . . . , (N-2)/2 (N even), we obtain new rth columns

        \big( 0, \ldots, 0, \; \lambda - Nu_r, \; 0, \ldots, 0, \; -(\lambda - Nu_r), \; 0, \ldots, 0 \big)' ,   r = 1, 2, . . . , [(N-1)/2],

with the nonzero entries in positions r and N - r, which shows clearly that |R(λ)| = 0 for λ_t (t even) as given by (2.10).

To verify the other roots of (2.9) we consider two cases.
2.2.2  t odd, N odd

Since

        \lambda_{N-2k} = N \cos \frac{\pi (N-2k)}{N} = -N \cos \frac{2\pi k}{N} = -N u_k ,   k = 1, 2, . . . , \frac{N-1}{2} ,

if we show that -N u_k, k = 1, 2, . . . , (N-1)/2, satisfies (2.9), we show that λ_t (t odd) as given by (2.10) satisfies (2.9). We prove that |R(-Nu_k)| = 0 by showing that the columns of |R(-Nu_k)| are linearly dependent. Considering \xi_r as given by (2.11) with λ = -Nu_k, we exhibit a set of constants a_ℓ (ℓ = 1, 2, . . . , N), not all zero, which have the property that

        \sum_{\ell=1}^{N} a_\ell \xi_\ell = 0 ,

or in component form that

(2.12)  \sum_{\ell=1}^{N} a_\ell z_{s\ell} = 0 ,   s = 1, 2, . . . , N,

where z_{sℓ} denotes the sth component of \xi_\ell. Equation (2.12) implies

        \sum_{\ell \ne s} a_\ell z_{s\ell} = -a_s z_{ss}

or, after adding 2 a_s u_s to both sides,

(2.13)  \sum_{\ell=1}^{N} a_\ell (u_\ell + u_s) = N a_s (u_k + u_s) ,   s = 1, 2, . . . , N.

Investigation of the cases N = 3 and N = 5, in particular when k = 1, indicates that a_ℓ takes the form

(2.14)  a_\ell = \frac{1}{u_k + u_\ell} ,   ℓ = 1, 2, . . . , N.

Therefore (2.13) can be rewritten as

(2.15)  \sum_{\ell=1}^{N} \frac{u_\ell + u_s}{u_k + u_\ell} = N ,   s = 1, 2, . . . , N.

If (2.15) holds, then a_ℓ as given by (2.14) makes (2.12) hold. Clearly (2.15) is true for s = k and moreover for s = N - k. On the other hand, if (2.15) is true for s = s_1 and s = s_2 where u_{s_1} ≠ u_{s_2}, then we must have

(2.16)  \sum_{\ell=1}^{N} \frac{1}{u_k + u_\ell} = 0 .

Conversely the truth of (2.16) implies that (2.15) holds for s = 1, 2, . . . , N, because we can write u_\ell + u_s = (u_\ell + u_k) + (u_s - u_k). We prove (2.16) as a lemma.

LEMMA 2.2

        \sum_{\ell=1}^{N} \frac{1}{u_\ell + u_k} = 0 ,   k = 1, 2, . . . , \frac{N-1}{2} .

Proof:  In \sum_{\ell=1}^{N} 1/(u_\ell + u_k) replace u_k by x and define

        Q(x) = \sum_{\ell=1}^{N} \frac{1}{u_\ell + x} .

Letting P(x) = \prod_{\ell=1}^{N} (x + u_\ell), we can write

        Q(x) = \frac{P'(x)}{P(x)} ,

since P(x) ≠ 0 for x ≠ -u_\ell, ℓ = 1, 2, . . . , N. Therefore Q(u_k) = 0 iff P'(u_k) = 0. From Hobson (1928) we find that P(x) is a polynomial containing only odd degree powers of x apart from its constant term; therefore P'(x) = P'(-x). But P'(-u_k) = 0, since -u_k is a double root of P(x) = 0 (because u_k = u_{N-k}). Hence Q(u_k) = 0. Therefore (2.16) holds, and Theorem 2.1 is thus proven for N odd.
2.2.3  t odd, N even

For t odd, N even, we must use a slightly different approach, since λ_t (t odd) as given by (2.10) cannot be expressed in terms of any u_r. We must show directly that λ_t (t odd) satisfies (2.9). Again we exhibit a set of constants a_ℓ (ℓ = 1, 2, . . . , N), not all zero, having the property that

        \sum_{\ell=1}^{N} a_\ell \xi_\ell = 0 .

The equation corresponding to (2.13) is

(2.17)  \sum_{\ell=1}^{N} a_\ell (u_\ell + u_s) = N a_s (u_s - \lambda_t') ,   s = 1, 2, . . . , N,

where N λ_t' = λ_t (t odd) is given by (2.10). We assume, analogous with previous results, that

        a_\ell = \frac{1}{u_\ell - \lambda_t'} ,   ℓ = 1, 2, . . . , N,

whence (2.17) becomes

(2.18)  \sum_{\ell=1}^{N} \frac{u_\ell + u_s}{u_\ell - \lambda_t'} = N ,   s = 1, 2, . . . , N.

But (2.18) holds if

        \sum_{\ell=1}^{N} \frac{1}{u_\ell - \lambda_t'} = 0 .

This we prove as a lemma.

LEMMA 2.3   We have

        \sum_{\ell=1}^{N} \frac{1}{\lambda_t' - u_\ell} = 0

for λ_t' (t odd) as given by (2.10).

Proof:  In \sum_{\ell=1}^{N} 1/(\lambda_t' - u_\ell) replace λ_t' by x and define

        Q(x) = \sum_{\ell=1}^{N} \frac{1}{x - u_\ell}   and   P(x) = \prod_{\ell=1}^{N} (x - u_\ell) ,

so that Q(x) = P'(x)/P(x). From Hobson (1928) we see that if we let x = cos θ, which is 1-1 in [0, π], then

        P(\cos \theta) = \frac{\cos N\theta - 1}{2^{N-1}} .

By the chain rule for derivatives we obtain

        P'(\cos \theta) = \frac{1}{2^{N-1}} \cdot \frac{-N \sin N\theta}{-\sin \theta} = \frac{N \sin N\theta}{2^{N-1} \sin \theta} .

For x = λ_t' = cos(πt/N) (t an odd integer less than N), we have θ = πt/N, whence P'(λ_t') = 0 and Lemma 2.3 holds. Reversing our steps we see that Theorem 2.1 is also true for N even.
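Theorem 2.1 is easy to check numerically. The sketch below (numpy; the helper name `NC1` is ours) builds NC_1 from the entries of Lemma 2.1 and compares its spectrum with (2.10):

```python
import numpy as np

def NC1(N):
    """N*C_1 of Lemma 2.1: diagonal (N-2)u_i, off-diagonal -(u_i + u_j)."""
    u = np.cos(2 * np.pi * np.arange(1, N + 1) / N)
    M = -(u[:, None] + u[None, :])
    np.fill_diagonal(M, (N - 2) * u)
    return M

for N in range(3, 13):
    computed = np.sort(np.linalg.eigvalsh(NC1(N)))
    claimed = np.sort(np.r_[0.0, N * np.cos(np.pi * np.arange(1, N) / N)])
    assert np.allclose(computed, claimed, atol=1e-9)
```

The matrix is symmetric, so `eigvalsh` applies; the check covers both parities of N, matching the three cases of the proof.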
2.3  Eigenvalues of C_p

We now find the eigenvalues of C_p. We let R_p(λ) denote the characteristic matrix of NC_p and \xi_{p,\ell} denote the ℓth column of R_p(λ), so that

(2.19)  \xi_{p,\ell} = \big( u_{p,\ell} + u_{p,1}, \; \ldots, \; \lambda - (N-2)u_{p,\ell}, \; \ldots, \; u_{p,\ell} + u_{p,N} \big)' ,   ℓ = 1, 2, . . . , N,

the entry λ - (N-2)u_{p,ℓ} occupying the ℓth position. Letting (N, p) = k be the greatest common divisor of N and p, we consider two cases.

k = 1.  When k = 1 the sequence of numbers u_{p,1}, u_{p,2}, . . . , u_{p,N} is a permutation of the sequence u_1, u_2, . . . , u_N (see Anderson 1942). Thus by elementary operations, interchanging rows and columns, |R_p(λ)| can be transformed into |R(λ)| as given by (2.9). Therefore the eigenvalues are given by (2.10).

k ≠ 1.  For k ≠ 1 we let q be the integer defined by qk = N. Then, since k divides p, say kp' = p, we have

(2.20)  u_{p,q-r} = \cos \frac{2\pi p (q-r)}{N} = \cos \frac{2\pi p (q-r)}{kq} = \cos \Big[ 2\pi p' - \frac{2\pi p r}{N} \Big] = u_{p,r} ,   r = 1, 2, . . . , q-1.

Moreover,

(2.21)  u_{p,\ell+rq} = \cos \frac{2\pi p (\ell + rq)}{N} = \cos \frac{2\pi p r q + 2\pi p \ell}{kq} = \cos \Big[ 2\pi p' r + \frac{2\pi p \ell}{N} \Big] = u_{p,\ell} ,

ℓ = 1, 2, . . . , q;  r = 1, 2, . . . , k-1.
Using (2.21) in conjunction with (2.19), we see that |R_p(λ)| can be expressed as

(2.22)  |R_p(\lambda)| = \begin{vmatrix}
        A(\lambda) & A & \cdots & A \\
        A & A(\lambda) & \cdots & A \\
        \vdots & & & \vdots \\
        A & A & \cdots & A(\lambda)
        \end{vmatrix} ,

where A and A(λ) are the q × q matrices

(2.23)  A = \begin{pmatrix}
        2u_{p,1} & u_{p,1} + u_{p,2} & \cdots & u_{p,1} + u_{p,q} \\
        u_{p,2} + u_{p,1} & 2u_{p,2} & \cdots & u_{p,2} + u_{p,q} \\
        \vdots & & & \vdots \\
        u_{p,q} + u_{p,1} & u_{p,q} + u_{p,2} & \cdots & 2u_{p,q}
        \end{pmatrix}

and

(2.24)  A(\lambda) = \begin{pmatrix}
        \lambda - (N-2)u_{p,1} & u_{p,1} + u_{p,2} & \cdots & u_{p,1} + u_{p,q} \\
        u_{p,2} + u_{p,1} & \lambda - (N-2)u_{p,2} & \cdots & u_{p,2} + u_{p,q} \\
        \vdots & & & \vdots \\
        u_{p,q} + u_{p,1} & u_{p,q} + u_{p,2} & \cdots & \lambda - (N-2)u_{p,q}
        \end{pmatrix} .

After subtracting the first block column in (2.22) from all other block columns and adding block rows 2, 3, . . . , k to the first block row, we have

        |R_p(\lambda)| = \begin{vmatrix}
        A(\lambda) + (k-1)A & 0 & 0 & \cdots & 0 \\
        A & A(\lambda) - A & 0 & \cdots & 0 \\
        \vdots & & & & \vdots \\
        A & 0 & 0 & \cdots & A(\lambda) - A
        \end{vmatrix} .

Since

        A(\lambda) - A = \mathrm{diag}\big( \lambda - N u_{p,1}, \; \lambda - N u_{p,2}, \; \ldots, \; \lambda - N u_{p,q} \big) ,

we can expand |R_p(λ)| to obtain

(2.25)  |R_p(\lambda)| = \Big[ \prod_{r=1}^{q} (\lambda - N u_{p,r}) \Big]^{k-1} \, \big| A(\lambda) + (k-1)A \big| ,

where

        A(\lambda) + (k-1)A = k \begin{pmatrix}
        \frac{\lambda}{k} - (q-2)u_{p,1} & u_{p,1} + u_{p,2} & \cdots & u_{p,1} + u_{p,q} \\
        \vdots & & & \vdots \\
        u_{p,q} + u_{p,1} & u_{p,q} + u_{p,2} & \cdots & \frac{\lambda}{k} - (q-2)u_{p,q}
        \end{pmatrix} .

But we can write

        u_{p,r} = \cos \frac{2\pi p r}{N} = \cos \frac{2\pi p' r}{q} ,

where p' and q are relatively prime; therefore |A(λ) + (k-1)A| is, apart from a constant factor, the characteristic determinant (2.9) of order q evaluated at λ/k, and it follows from the results in section 2.2 and in this section for k = 1 that

(2.26)  |R_p(\lambda)| = C \, \lambda \Big[ \prod_{r=1}^{q} (\lambda - N u_{p,r}) \Big]^{k-1} \prod_{t=1}^{q-1} \Big( \lambda - N \cos \frac{\pi t}{q} \Big) ,

where C is a constant. From (2.26) we determine the eigenvalues of C_p and their multiplicities as given in the following table.
Table 2.1  Eigenvalues (λ) of C_p^a

q odd:

    Eigenvalue                                Multiplicity
    cos(πt/q)   (t = 1, 3, . . . , q-2)            1
    cos(πt/q)   (t = 2, 4, . . . , q-1)            2k-1
    1                                              k-1
    0                                              1

q even:

    Eigenvalue                                Multiplicity
    cos(πt/q)   (t = 1, 3, . . . , q-1)            1
    cos(πt/q)   (t = 2, 4, . . . , q-2)            2k-1
    1                                              k-1
    -1                                             k-1
    0                                              1

^a q = N/k, k = (N, p).
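Table 2.1 can be checked against a direct eigenvalue computation. The sketch below (numpy; the helper names `Cp` and `table_2_1` are ours) builds C_p from the entries of Lemma 2.1 and compares its spectrum with the table's entries and multiplicities:

```python
import numpy as np
from math import gcd

def Cp(N, p):
    """The matrix C_p of Lemma 2.1."""
    u = np.cos(2 * np.pi * p * np.arange(1, N + 1) / N)
    M = -(u[:, None] + u[None, :]) / N
    np.fill_diagonal(M, u * (1 - 2.0 / N))
    return M

def table_2_1(N, p):
    """Eigenvalues of C_p as listed in Table 2.1 (k = (N, p), q = N/k)."""
    k = gcd(N, p)
    q = N // k
    eigs = [0.0] + [1.0] * (k - 1)
    if q % 2 == 0:
        eigs += [-1.0] * (k - 1)
        odd_t, even_t = range(1, q, 2), range(2, q - 1, 2)
    else:
        odd_t, even_t = range(1, q - 1, 2), range(2, q, 2)
    eigs += [np.cos(np.pi * t / q) for t in odd_t]
    for t in even_t:
        eigs += [np.cos(np.pi * t / q)] * (2 * k - 1)
    return np.sort(eigs)

for N, p in [(12, 3), (12, 4), (15, 5), (10, 2), (14, 7), (9, 3)]:
    assert np.allclose(np.sort(np.linalg.eigvalsh(Cp(N, p))), table_2_1(N, p), atol=1e-9)
```

The cases exercise q odd, q even, and several values of k, including k = p.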
2.4  Eigenvalues (λ*) of S_p

2.4.1  p = 1

We consider the cases p = 1 and p ≠ 1 separately. Investigation of the cases N = 3, 4, 5, and 6 indicates that S_1 has the same eigenvalues as C_1. This we now prove. First we establish the following lemmas concerning v_{1,r} = v_r as given by (2.2).

LEMMA 2.4   If N is even, so that N/2 is an integer, we have

        v_{N/2 - r} = v_r ,   r = 1, 2, . . . , N/2 - 1,
        v_{3N/2 - r} = v_r ,   r = N/2 + 1, N/2 + 2, . . . , N - 1.

Proof:

        \sin \frac{2\pi}{N} \Big[ \frac{N}{2} - r \Big] = \sin \Big( \pi - \frac{2\pi r}{N} \Big) = v_r ,

        \sin \frac{2\pi}{N} \Big[ \frac{3N}{2} - r \Big] = \sin \Big( 3\pi - \frac{2\pi r}{N} \Big) = v_r .

LEMMA 2.5   If N = 2h, h odd, the sequence of numbers v_r, r = 1, 2, . . . , N, corresponds to the sequence cos(πt/N) as follows:

        2r < h :          t = h - 2r ;
        h < 2r < 3h :     t = 2r - h ;
        3h < 2r < 4h :    t = 5h - 2r ;
        r = h or 2h :     t = h .

Proof:

        \sin \frac{2\pi r}{N} = \cos \Big[ \frac{\pi}{2} - \frac{2\pi r}{N} \Big] = \cos \frac{\pi}{N} (h - 2r) .

Every value of r has a complementary value r' for which v_r = v_{r'} and for which cos(πt/N) = cos(πt'/N).

LEMMA 2.6   If N = 4m, the sequence of numbers v_r, r = 1, 2, . . . , N, is a permutation of the sequence u_r, r = 1, 2, . . . , N.

Proof:

        \sin \frac{2\pi r}{N} = \cos \Big[ \frac{\pi}{2} - \frac{2\pi r}{N} \Big] = \cos (m - r) \frac{2\pi}{N} .

Therefore we have

        v_r = u_{m-r} ,   r = 1, 2, . . . , m-1,
        v_r = u_{N+m-r} ,   r = m, m+1, . . . , N.

Now we let P(λ*) denote the characteristic matrix of NS_1. The rth column of P(λ*) is denoted by

(2.27)  \rho_r = \big( v_r + v_1, \; \ldots, \; \lambda^* - (N-2)v_r, \; \ldots, \; v_r + v_N \big)' ,   r = 1, 2, . . . , N,

the entry λ* - (N-2)v_r occupying the rth position.
2.4.2  p = 1, N = 4m

For N = 4m it follows, by application of Lemma 2.6 and the discussion for the case k = 1 given in section 2.3, that the eigenvalues of NS_1 are the same as those of NC_1.

2.4.3  p = 1, N = 2h (h odd)

In |P(λ*)| we replace \rho_r by \rho_r - \rho_{h-r} for r = 1, 2, . . . , (h-1)/2. Using Lemma 2.4 we see that the new rth column is

        \big( 0, \ldots, 0, \; \lambda^* - Nv_r, \; 0, \ldots, 0, \; -(\lambda^* - Nv_r), \; 0, \ldots, 0 \big)' ,   r = 1, 2, . . . , (h-1)/2,

with the nonzero entries in positions r and h - r. Next we replace \rho_h by \rho_h - \rho_N, whose only nonzero entries are λ* in position h and -λ* in position N (since v_h = v_N = 0). Finally we replace \rho_{h+r} by \rho_{h+r} - \rho_{N-r} for r = 1, 2, . . . , (h-1)/2 to obtain new (h+r)th columns

        \big( 0, \ldots, 0, \; \lambda^* - Nv_{h+r}, \; 0, \ldots, 0, \; -(\lambda^* - Nv_{h+r}), \; 0, \ldots, 0 \big)' ,

which can be written, since v_{h+r} = -v_r, as

        \big( 0, \ldots, 0, \; \lambda^* + Nv_r, \; 0, \ldots, 0, \; -(\lambda^* + Nv_r), \; 0, \ldots, 0 \big)' ,   r = 1, 2, . . . , (h-1)/2.

We thus see that h factors of |P(λ*)| are given by

(2.28)  \lambda^* \prod_{r=1}^{(h-1)/2} (\lambda^* - Nv_r)(\lambda^* + Nv_r) .

In order to show that the other h factors are given by

        \lambda^* \prod_{r=1}^{h-1} (\lambda^* - Nu_r) ,

we show that the columns of |P(0)| and |P(Nu_r)|, r = 1, 2, . . . , h-1, are linearly dependent by exhibiting a set of constants a_ℓ (ℓ = 1, 2, . . . , N), not all zero, such that

        \sum_{\ell=1}^{N} a_\ell \rho_\ell = 0 .

For λ* = 0 we take a_ℓ = 1 and use the fact that \sum_{\ell=1}^{N} v_\ell = 0. For λ* = Nu_r, r = 1, 2, . . . , h-1, we take

(2.29)  a_\ell = \frac{1}{u_r + v_\ell} ,   ℓ = 1, 2, . . . , N.

That a_ℓ as given by (2.29) has the desired property follows by applying Lemma 2.5 to v_ℓ in (2.29) and in |P(Nu_r)| and then using arguments similar to those in Section 2.2. By the above remarks we can write

(2.30)  |P(\lambda^*)| = C \, \lambda^{*2} \prod_{r=1}^{(h-1)/2} (\lambda^* - Nv_r)(\lambda^* + Nv_r) \prod_{r=1}^{h-1} (\lambda^* - Nu_r) ,

C a constant, from which we obtain the eigenvalues.
2.4.4
p = 1, N-odd
*
Noting the dependence of P(A * ) on N we write P(A * ) : PN(A).
consider N'
(2.31)
= 2N
*
(A )
We
so that by (2.30) we can write
I
N-l
*2 2
= C A
1\
r=l
*
*
(A -N'v') (A +N'v')
r
where v; and u; have N replaced by N'.
r
~N-l
11
r=l
On the other hand if inIPN/(A*)1
we (in the order given)
(i) interchange columns N - rand N + r for r
=
1,2, . . . , N;l;
(ii) interchange rows N - rand N + r for r = 1,2, . • . , N;l;
(iii) subtract column r from column N + r for r
(iv) add row N + r to row r for r = 1,2,
=
1,2, . . . , N;
• , N;
and then follow these elementary operations by expanding by Laplace's
expansion on the last N columns, we find
26
N-1
2
~
r=l
(2.32)
where C1 is a constant.
Comparing (2.32) with (2.31) we see that
N-1
("'A. * - N 'u ')
1\
= C "'A.*
r
r=l
or
N-1
= C1 "'A. *
~
("'A.
*
- N cos
r=l
where C and C are constants.
1
TTr
N)
•
Thus the eigenvalues are again given
by (2.10) •
Using Lemma 2.5 to change the sines to cosines in (2.30) we see that
in every case the
2.4.5
eigenvalues of N8
1
are given by (2.10) .
2.4.5  p ≠ 1

For p ≠ 1 we let (N, p) = k and q = N/k. When k = 1, it follows by arguments identical to those used in section 2.3 that the eigenvalues of S_p are the same as those of S_1. For k ≠ 1 we perform elementary operations identical to those performed on |R_p(λ)| to reduce |P_p(λ*)| to

(2.33)  |P_p(\lambda^*)| = C \Big[ \prod_{r=1}^{q} \Big( \lambda^* - N \sin \frac{2\pi r}{q} \Big) \Big]^{k-1} \Big[ \lambda^* \prod_{r=1}^{q-1} \Big( \lambda^* - N \cos \frac{\pi r}{q} \Big) \Big] ,

C a constant. Using (2.33) and the fact that sin(2πr/q) corresponds to cos(πt/2q) as follows,

        4r < q :          t = q - 4r ;
        q < 4r < 3q :     t = 4r - q ;
        3q < 4r < 4q :    t = 5q - 4r ;
        r = q :           t = q ;

we give the following table of eigenvalues of S_p.
Table 2.2  Eigenvalues (λ*) of S_p^a

q odd:

    Eigenvalue                                  Multiplicity
    cos(πt/2q)   (t = 1, 3, . . . , 2q-1)            k-1
    cos(πt/2q)   (t = 2, 4, . . . , 2q-2)            1
    0                                                1

q even (q ≠ 4m):

    Eigenvalue                                  Multiplicity
    cos(πt/q)    (t = 1, 3, . . . , q-1)             2k-1
    cos(πt/q)    (t = 2, 4, . . . , q-2)             1
    0                                                1

^a For q = 4m the eigenvalues of S_p are the same as those of C_p and thus are given by Table 2.1. We note that Table 2.1 and Table 2.2 both hold if k = 1.
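Table 2.2 can likewise be checked numerically; the sketch below (numpy; helper names `Sp` and `table_2_2` are ours) covers the q odd and q even (q ≠ 4m) cases, the q = 4m case having already been reduced to Table 2.1:

```python
import numpy as np
from math import gcd

def Sp(N, p):
    """The matrix S_p of Lemma 2.1."""
    v = np.sin(2 * np.pi * p * np.arange(1, N + 1) / N)
    M = -(v[:, None] + v[None, :]) / N
    np.fill_diagonal(M, v * (1 - 2.0 / N))
    return M

def table_2_2(N, p):
    """Eigenvalues of S_p as listed in Table 2.2 (valid for q not a multiple of 4)."""
    k = gcd(N, p)
    q = N // k
    assert q % 4 != 0                    # the q = 4m case falls back to Table 2.1
    eigs = [0.0]
    if q % 2 == 1:
        for t in range(1, 2 * q, 2):
            eigs += [np.cos(np.pi * t / (2 * q))] * (k - 1)
        eigs += [np.cos(np.pi * t / (2 * q)) for t in range(2, 2 * q - 1, 2)]
    else:
        for t in range(1, q, 2):
            eigs += [np.cos(np.pi * t / q)] * (2 * k - 1)
        eigs += [np.cos(np.pi * t / q) for t in range(2, q - 1, 2)]
    return np.sort(eigs)

for N, p in [(15, 5), (9, 3), (12, 2), (18, 3), (10, 5)]:
    assert np.allclose(np.sort(np.linalg.eigvalsh(Sp(N, p))), table_2_2(N, p), atol=1e-9)
```

The case (10, 5) gives q = 2 and an identically zero matrix, a degenerate check that the table still covers.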
2.5  Preliminary Remarks Concerning the Distribution of T_p and T_p'

From the theory of the orthogonal reduction of real quadratic forms (see Browne 1958) and the fact that x_1, x_2, . . . , x_N are NID(0, 1), it follows that T_p is distributed (omitting one zero eigenvalue which appears in the numerator and denominator) as

(2.34)  T_p = \frac{\sum_{t=1}^{n} \lambda_t Z_t^2}{\sum_{t=1}^{n} Z_t^2} ,

where the λ_t, t = 1, 2, . . . , n (= N-1), are given by Table 2.1. Without loss of generality these values can be considered ordered ordinates on y = cos x, x ∈ [0, π] (λ_1 ≥ λ_2 ≥ . . . ≥ λ_n). Also T_p' is distributed as

(2.35)  T_p' = \frac{\sum_{t=1}^{n} \lambda_t^* Z_t^2}{\sum_{t=1}^{n} Z_t^2} ,

where the λ_t*, t = 1, 2, . . . , n, given by Table 2.2, can be considered as ordered ordinates on y = cos x, x ∈ [0, π] (λ_1* ≥ λ_2* ≥ . . . ≥ λ_n*). In each of (2.34) and (2.35) the Z_t, t = 1, 2, . . . , n, are NID(0, 1).

Theorem 2.2:  T_1 and T_1' have identical distributions.

Proof:  The proof follows from the fact that we have shown (sections 2.2 and 2.4) that C_1 and S_1 have identical eigenvalues.

Theorem 2.3:  If (N, p) = 1, T_p and T_p' have the same distribution as T_1.

Proof:  The proof follows since for this situation the eigenvalues of C_p and S_p are the same as those of C_1.

Theorem 2.4:  If p_1 and p_2 are elements of E_k = {p | (N, p) = k}, the distribution of T_{p_1} is the same as that of T_{p_2}, and the distribution of T_{p_1}' is the same as that of T_{p_2}'.

Proof:  The proof follows on consulting Table 2.1 and Table 2.2 and noting that the eigenvalues of C_p and S_p depend only on k and q.

Alternate proofs of Theorems 2.3 and 2.4 follow on using equations (2.7) and (2.8) and making use of the fact that the distributions of T_p and T_p' are invariant under a permutation of the x_r (this corresponds to an orthogonal transformation) or, what amounts to the same thing, a permutation of the u_{p,r} or the v_{p,r}. For example, if (N, p) = 1 the sequence u_{p,r} is a permutation of the sequence u_r; whence

        \sum_{r=1}^{N} x_r^2 u_{p,r} - 2\bar{x} \sum_{r=1}^{N} x_r u_{p,r}

is carried by a permutation of the x_r into

        \sum_{r=1}^{N} x_r^2 u_r - 2\bar{x} \sum_{r=1}^{N} x_r u_r .

Since form is preserved, and since the joint distribution of the x_r, r = 1, 2, . . . , N, is multivariate normal, which is invariant under a permutation, the distribution is the same as that of T_1.
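Theorems 2.2 and 2.3 can be illustrated by simulation; the sketch below (numpy; helper names ours) draws Monte Carlo samples of T_1, T_3 (with (11, 3) = 1), and T_1' and compares quantiles, which should agree up to sampling error:

```python
import numpy as np

rng = np.random.default_rng(1)
N, draws = 11, 20000
r = np.arange(1, N + 1)

def sample(weights, size):
    """Monte Carlo draws of a ratio statistic with the given trigonometric weights."""
    x = rng.normal(size=(size, N))
    d2 = (x - x.mean(axis=1, keepdims=True)) ** 2
    return (d2 * weights).sum(axis=1) / d2.sum(axis=1)

T1  = sample(np.cos(2 * np.pi * r / N), draws)        # T_1
T3  = sample(np.cos(2 * np.pi * 3 * r / N), draws)    # T_3, (11, 3) = 1
T1p = sample(np.sin(2 * np.pi * r / N), draws)        # T_1'

q = [0.1, 0.5, 0.9]
assert np.allclose(np.quantile(T1, q), np.quantile(T3, q), atol=0.02)    # Theorem 2.3
assert np.allclose(np.quantile(T1, q), np.quantile(T1p, q), atol=0.02)   # Theorem 2.2
```

The tolerance 0.02 is several times the Monte Carlo standard error at these sample sizes, so the check is loose but meaningful.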
3.  SUMMARY OF VON NEUMANN'S AND HART'S RESULTS WITH REMARKS ON TWO APPROXIMANTS TO T_1

3.1  Distribution of T_1

Properties of the distribution of T_1 are given in Von Neumann (1941), where it is shown that η as given by (1.5) can be written as

(3.1)   \eta = \frac{2N}{N-1} (1 - T_1) ,

from which we have

(3.2)   T_1 = 1 - \frac{N-1}{2N} \, \eta .

In particular it is shown that T_1 = x is symmetrically distributed about x = 0 and that for N odd

(3.3)   \frac{d^{\frac{n}{2}-1} w(x)}{dx^{\frac{n}{2}-1}} =
        \begin{cases}
        0, & t \text{ even}, \\
        (-1)^{\frac{1}{2}(n-t-1)} \dfrac{(\frac{n}{2}-1)!}{\pi \sqrt{\big| \prod_{\ell=1}^{n} (\lambda_\ell - x) \big|}} , & t \text{ odd},
        \end{cases}
        \qquad \lambda_{t+1} < x < \lambda_t ,

where λ_t = cos(πt/N), n = N - 1, and w(x) is the density of T_1 = x. In view of (3.3) it follows by k successive integrations that

(3.4)   \frac{d^{\frac{n}{2}-1-k} w(x)}{dx^{\frac{n}{2}-1-k}} \sim C (\lambda_1 - x)^{k - \frac{1}{2}} ,

where C is a constant and ~ denotes "behaves like." For k = n/2 - 1 we have

        w(x) \sim C (\lambda_1 - x)^{\frac{n}{2} - \frac{3}{2}} .
This result and symmetry considerations led R. H. Kent to suggest the following approximation to w(x):

(3.5)   w(x) = \sum_{h=0}^{\infty} a_h (\lambda_1^2 - x^2)^{\frac{n-3}{2} + h} .

Hart (1942) used Von Neumann's results in giving the density of T_1 for N = 3, 5, 7 and used the first four terms of (3.5), determining the a_h (h = 0, 1, 2, 3) by the condition of normalization and fitting of the first three even moments, to tabulate probabilities associated with η up to N = 60. In particular Hart obtains
(3.6)   P(T_1 \le k') = \frac{(N-1)(N+1)(N+3)}{24} I_k\Big[\tfrac{N-2}{2}, \tfrac{N-2}{2}\Big] M_1^*
        + \frac{(N+1)(N+3)(N+5)}{24} I_k\Big[\tfrac{N}{2}, \tfrac{N}{2}\Big] M_2^*
        + \frac{(N+3)(N+5)(N+7)}{24} I_k\Big[\tfrac{N+2}{2}, \tfrac{N+2}{2}\Big] M_3^*
        + \frac{(N+5)(N+7)(N+9)}{24} I_k\Big[\tfrac{N+4}{2}, \tfrac{N+4}{2}\Big] M_4^* ,

for -cos(π/N) ≤ k' ≤ cos(π/N), where k = (k' + q)/2q, q = cos(π/N), and I_k denotes the Incomplete Beta-Function. The M_h* are as follows:

        M_1^* = 1 - \frac{M_2 (N+5)}{q^2} + \frac{M_4 (N+5)(N+7)}{3q^4} - \frac{M_6 (N+5)(N+7)(N+9)}{45q^6} ,

        M_2^* = 1 - \frac{M_2 (3N+13)}{3q^2} + \frac{M_4 (3N+11)(N+7)}{3q^4} - \frac{M_6 (N+3)(N+7)(N+9)}{15q^6} ,

        M_3^* = 1 + \frac{M_2 (3N+11)}{3q^2} - \frac{M_4 (3N+19)(N+3)}{3q^4} + \frac{M_6 (N+3)(N+5)(N+9)}{15q^6} ,

        M_4^* = \frac{1}{3} - \frac{M_2 (N+3)}{q^2} + \frac{M_4 (N+3)(N+5)}{3q^4} - \frac{M_6 (N+3)(N+5)(N+7)}{45q^6} ,

where M_R = E(T_1^R), R = 2, 4, 6.
The excellence of this approximation is supported by Hart's comparison of true and approximate eighth and tenth moments for N = 7, 8, and 9 [for example, when N = 7 the true eighth and tenth moments are .00413 and .00202 respectively, while (3.5) gives .00412 and .00201] and also by his comparison of true and approximate probabilities for N = 7 (see Hart 1942). We see that the approximation is extremely accurate in the tail of the distribution, where significance points are obtained.

By the use of (3.2) and Hart's (1942) table of significance points for η, we determine the following table of significance points for T_1.
Table 3.1  Significance Points for T_1

     N    5% Level    1% Level
     4     0.6198      0.6872
     5     0.5998      0.7310
     6     0.5549      0.7192
     7     0.5320      0.6930
     8     0.5088      0.6686
     9     0.4879      0.6456
    10     0.4689      0.6241
    11     0.4518      0.6043
    12     0.4362      0.5860
    13     0.4222      0.5691
    14     0.4092      0.5434
    15     0.3973      0.5389
    16     0.3863      0.5254
    17     0.3763      0.5128
    18     0.3670      0.5011
    19     0.3583      0.4900
    20     0.3502      0.4797
    21     0.3426      0.4699
    22     0.3355      0.4607
    23     0.3287      0.4521
    24     0.3224      0.4438
    25     0.3164      0.4361
    26     0.3107      0.4287
    27     0.3054      0.4216
    28     0.3004      0.4150
    29     0.2954      0.4085
    30     0.2909      0.4025
    31     0.2864      0.3966
    32     0.2923      0.3911
    33     0.2784      0.3859
    34     0.2744      0.3807
    35     0.2708      0.3758
    36     0.2672      0.3715
    37     0.2637      0.3663
    38     0.2604      0.3619
    39     0.2571      0.3575
    40     0.2539      0.3533
    45     0.2397      0.3341
    50     0.2282      0.3186
    55     0.2183      0.3051
    60     0.2092      0.2927
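As a sanity check, tabulated points of this kind can be compared with a straightforward simulation (our own sketch, not part of the original tabulation; it treats the table's entries as upper-tail points, which is the direction of the alternatives (1.3)):

```python
import numpy as np

rng = np.random.default_rng(7)
N, draws = 10, 40000
x = rng.normal(size=(draws, N))
d2 = (x - x.mean(axis=1, keepdims=True)) ** 2
r = np.arange(1, N + 1)
T1 = (d2 * np.cos(2 * np.pi * r / N)).sum(axis=1) / d2.sum(axis=1)

five_pct = np.quantile(T1, 0.95)   # Monte Carlo 5% significance point, N = 10
one_pct  = np.quantile(T1, 0.99)   # Monte Carlo 1% significance point, N = 10
assert abs(five_pct - 0.4689) < 0.02
assert abs(one_pct - 0.6241) < 0.03
```

The tolerances are generous relative to the Monte Carlo error at 40000 draws; the agreement supports both the table and the accuracy claim for Hart's approximation in the tail.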
3.2  General Formulas

In his study of η, Von Neumann (1941) gave several formulas which we use in later chapters. We thus record these formulas here, modifying Von Neumann's notation slightly. Letting

        y = \frac{\sum_{t=1}^{n} \lambda_t Z_t^2}{\sum_{t=1}^{n} Z_t^2} ,

where λ_1 ≥ λ_2 ≥ . . . ≥ λ_n, λ_1 ≠ λ_n, and the Z_t, t = 1, 2, . . . , n (= N-1), are NID(0, 1), and letting w(y) be the probability density of y, Von Neumann shows that the following equation holds for z complex (exclusive of the real half line z > λ_n):

(3.7)   \int \frac{w(y)\, dy}{(y - z)^{n/2}} = \frac{1}{\sqrt{\prod_{t=1}^{n} (\lambda_t - z)}} .

Using this equation he determines, for n even (N odd),

(3.8)   \frac{d^{\frac{n}{2}-1} w(y)}{dy^{\frac{n}{2}-1}} =
        \begin{cases}
        0, & t \text{ even}, \\
        (-1)^{\frac{1}{2}(n-t-1)} \dfrac{(\frac{n}{2}-1)!}{\pi \sqrt{\big| \prod_{\ell=1}^{n} (\lambda_\ell - y) \big|}} , & t \text{ odd},
        \end{cases}

where y satisfies λ_t > y > λ_{t+1}, t = 1, 2, . . . , n-1.
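Identity (3.7) can be spot-checked by simulation for real z below λ_n (a sketch under our own choice of eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(3)
lam = np.array([0.9, 0.3, -0.2, -0.8])        # lambda_1 > ... > lambda_n
n, z = len(lam), -2.0                          # z off the excluded half line

Z = rng.normal(size=(200000, n))
y = (Z ** 2 @ lam) / (Z ** 2).sum(axis=1)

lhs = np.mean((y - z) ** (-n / 2))             # E[(y - z)^(-n/2)]
rhs = 1.0 / np.sqrt(np.prod(lam - z))          # 1 / sqrt(prod_t (lambda_t - z))
assert abs(lhs - rhs) < 0.005
```

Since y is bounded between λ_n and λ_1, the integrand is bounded for such z and the Monte Carlo estimate converges quickly.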
3.3  Two Approximants to the Distribution of T_1

Since the distribution of T_1 given by Von Neumann and Hart is approximate, we compare it here with the distributions of two well-known statistics, R. L. Anderson's R_1 (1942) and Durbin and Watson's C_1 (1951). The comparison with C_1 is accomplished by showing that an approximant to T_1 is distributed as C_1, while the comparison with R_1 is numerical.

3.3.1  Comparison with C_1

Following a suggestion by Herbst, we define
(3.9)   D_1 = \frac{\sum_{r=1}^{N} (x_r - \bar{x})^2 \rho^r}{\sum_{r=1}^{N} (x_r - \bar{x})^2} ,

where ρ = e^{2πi/N}; hence D_1 = T_1 + i T_1'. Letting

(3.10)  J_s = \frac{1}{\sqrt{N}} \sum_{r=1}^{N} x_r \rho^{-rs}

and

(3.11)  x_r = \frac{1}{\sqrt{N}} \sum_{s=1}^{N} J_s \rho^{rs} ,

it is easily shown that

(3.12)  J_0 = \sqrt{N} \, \bar{x} .

Expanding (3.9) we see that D_1 can be written as

(3.13)  D_1 = \frac{\sum_{r=1}^{N} x_r^2 \rho^r - 2 J_0 J_1^*}{\sum_{r=1}^{N} x_r^2 - J_0^2} ,

where J_s^* denotes the complex conjugate of J_s. Using the fact that

        \sum_{s=1}^{N} J_s J_{s+1}^* = \sum_{r=1}^{N} x_r^2 \rho^r   and   \sum_{s=1}^{N} J_s J_s^* = \sum_{r=1}^{N} x_r^2 ,

we rewrite (3.13) as

(3.14)  D_1 = \frac{\sum_{r=1}^{N} J_r J_{r+1}^* - 2 J_0 J_1^*}{\sum_{r=1}^{N} J_r J_r^* - J_0^2} .

Several properties of the J_s follow from (3.10); for example we have J_{N+s} = J_s, J_{-s} = J_s^*, and J_{N-s} = J_s^*. Using these we rewrite (3.14) as

(3.15)  D_1 = \frac{\sum_{r=1}^{N-2} J_r J_{r+1}^*}{\sum_{r=1}^{N-1} J_r J_r^*} .

We see that D_1 is expressible free of J_0; therefore we may write, using (3.12),

(3.16)  D_1 = \frac{\sum_{r=1}^{N-2} \zeta_r \zeta_{r+1}^*}{\sum_{r=1}^{N-1} \zeta_r \zeta_r^*} ,

where ζ_s = J_s for s ≠ 0. Letting ζ_r = u_r^* + i v_r^*, we have

(3.17)  D_1 = \frac{\sum_{r=1}^{N-2} \big[ (u_r^* u_{r+1}^* + v_r^* v_{r+1}^*) + i (v_r^* u_{r+1}^* - u_r^* v_{r+1}^*) \big]}{\sum_{r=1}^{N-1} (u_r^{*2} + v_r^{*2})} .
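The development (3.9)-(3.15) amounts to rewriting D_1 in terms of the finite Fourier transform of the sample, and is easy to verify numerically (a sketch; numpy):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 12
x = rng.normal(size=N)
r = np.arange(1, N + 1)
d2 = (x - x.mean()) ** 2
D1 = np.sum(d2 * np.exp(2j * np.pi * r / N)) / d2.sum()             # (3.9)

# J_s of (3.10) for s = 0, 1, ..., N
J = np.array([(x * np.exp(-2j * np.pi * r * s / N)).sum()
              for s in range(N + 1)]) / np.sqrt(N)

assert np.isclose(J[0], np.sqrt(N) * x.mean())                      # (3.12)
num = (J[1:N - 1] * np.conj(J[2:N])).sum()                          # (3.15) numerator
den = (np.abs(J[1:N]) ** 2).sum()                                   # (3.15) denominator
assert np.allclose(num / den, D1)
```

The real and imaginary parts of `D1` are exactly T_1 and T_1' for this sample, so the same script confirms D_1 = T_1 + iT_1'.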
If N = 2m, √2 u_0* and √2 u_m* have variances of two, and √2 u_s* (1 ≤ s < N, s ≠ m) has a variance of unity; √2 v_0* and √2 v_m* have variance of zero, and √2 v_s* (1 ≤ s < N, s ≠ m) has variance of unity. If N = 2m + 1, √2 u_0* has variance of two, and √2 u_s* (1 ≤ s < N) has variance of unity; √2 v_0* has variance of zero, and √2 v_s* (1 ≤ s < N) has variance of unity. For any (r, r'), u_r* and v_{r'}* are statistically independent. For r ≠ r', u_r* and u_{r'}* are statistically independent if they are distinct; the same is true for v_r* and v_{r'}*.

Thus we see that as written D_1 contains correlated normal variables with differing variances. However, because of the periodicity in the u_r* and v_r*, it is possible to eliminate the correlated variables by expressing D_1 in terms of distinct u_r* and v_r*. We have u_r* = u_{N+r}* = u_{N-r}* and v_r* = v_{N+r}* = -v_{N-r}*; thus D_1 can be written as
        D_1 = \frac{2 \sum_{r=1}^{m-1} \big[ (u_r^* u_{r+1}^* + v_r^* v_{r+1}^*) + i (u_{r+1}^* v_r^* - u_r^* v_{r+1}^*) \big] + e_1 \big[ (u_m^* u_{m+1}^* + v_m^* v_{m+1}^*) + i (v_m^* u_{m+1}^* - u_m^* v_{m+1}^*) \big]}{2 \sum_{r=1}^{m-1} (u_r^{*2} + v_r^{*2}) + e_2 (u_m^{*2} + v_m^{*2})} ,

where e_1 = 1, e_2 = 2 if N = 2m + 1, and e_1 = 0, e_2 = 1 if N = 2m. We consider N = 2m and obtain

(3.18)  T_1 = \mathrm{Re}\, D_1 = \frac{\sum_{r=1}^{m-1} \big( \sqrt{2}\, u_r^* \sqrt{2}\, u_{r+1}^* + \sqrt{2}\, v_r^* \sqrt{2}\, v_{r+1}^* \big)}{\sum_{r=1}^{m-1} \big[ (\sqrt{2}\, u_r^*)^2 + (\sqrt{2}\, v_r^*)^2 \big] + u_m^{*2} + v_m^{*2}}

and

(3.19)  T_1' = \mathrm{Im}\, D_1 = \frac{\sum_{r=1}^{m-1} \big( \sqrt{2}\, v_r^* \sqrt{2}\, u_{r+1}^* - \sqrt{2}\, u_r^* \sqrt{2}\, v_{r+1}^* \big)}{\sum_{r=1}^{m-1} \big[ (\sqrt{2}\, u_r^*)^2 + (\sqrt{2}\, v_r^*)^2 \big] + u_m^{*2} + v_m^{*2}} .

If all the 2m variables appearing in (3.18) and (3.19) are regarded as NID(0, 1), we obtain approximations to T_1 and T_1'. For the variables so considered call the approximants T_1* and T_1'*. From its form we see that T_1* is distributed as Durbin and Watson's C_1 (1951). As a check on some of the previous work we now show that when the variables are so considered T_1'* has the same distribution. We write
(3.20)  T_1'^* = \frac{\sum_{r=1}^{m-1} (w_r z_{r+1} - z_r w_{r+1})}{\sum_{r=1}^{m} (w_r^2 + z_r^2)} ,

where we are regarding w_1, w_2, . . . , w_m, z_1, z_2, . . . , z_m as NID(0, 1). We define z_s = w_{m+s}, s = 1, 2, . . . , m, so that (3.20) may be written as

(3.21)  T_1'^* = \frac{\sum_{r=1}^{m-1} (w_r w_{m+r+1} - w_{m+r} w_{r+1})}{\sum_{r=1}^{2m} w_r^2} = \frac{w' B w}{w' w} .

Investigation shows that the matrix B for m = 2, 3, 4, and 5 has the same eigenvalues as the numerator quadratic form in C_1, which indicates the possibility that T_1'* is also distributed as C_1. That this is the case we now show by performing a sequence of orthogonal transformations to put T_1'* in the form of C_1.
We let

        w_i = \delta_i x_i ,   1 ≤ i ≤ m,

where δ_i = 1 except δ_i = -1 for i = 3 + 4k or i = 4 + 4k for some integer k. For i > m we write i = m + j and let

        w_{m+j} = \delta_{j-1} x_{m+j} .

The above orthogonal transformation carries the numerator of T_1'* into
(3.22)  \sum_{r=1}^{m-1} \big( x_r x_{m+r+1} + x_{m+r} x_{r+1} \big) .

Next we apply the transformation

        x_r^* = x_{m+r} \; (r \text{ odd}), \quad x_r^* = x_r \; (r \text{ even}),
        x_{m+r}^* = x_r \; (r \text{ odd}), \quad x_{m+r}^* = x_{m+r} \; (r \text{ even}),

so that (3.22) becomes

        \sum_{\text{odd } r} \big( x_{m+r} x_{m+r+1} + x_r x_{r+1} \big) + \sum_{\text{even } r} \big( x_r x_{r+1} + x_{m+r} x_{m+r+1} \big)

or

        \sum_{r=1}^{m-1} \big( x_r x_{r+1} + x_{m+r} x_{m+r+1} \big) ,

which is identical with the numerator of C_1. The following table compares the 5% points of Durbin and Watson's C_1 (1951) with those of T_1 taken from Table 3.1.
Table 3.2  Comparison of T_1 with C_1

     N      C_1       T_1
    10     0.426     0.469
    12     0.403     0.436
    14     0.382     0.409
    16     0.364     0.386
    18     0.348     0.367
    20     0.334     0.350
    22     0.321     0.335
    24       -       0.322

It appears that an even better comparison is obtained if we drop the terms u_m* and v_m* from T_1 and T_1' (actually v_m* = 0). This amounts to replacing m by m - 1, or N by N - 2, in C_1 before comparing with T_1. Table 3.2 shows the remarkable closeness of T_1 (for N) to C_1 (for N - 2).

3.3.2  Comparison with R_1

We present the following short table comparing Anderson's (1942) R_1 (uncorrected for the mean) with our T_1. The closeness of these two statistics can only be explained by the fact that in a sense the eigenvalues of the numerator of T_1 are an average of the eigenvalues of the numerator of R_1.
Table 3.3
Comparison of T_1 with R_1

          1% Level                      5% Level
     N      T_1      R_1          N      T_1      R_1
     5    0.731    0.823          5    0.590    0.622
    10    0.624    0.623         10    0.469    0.477
    15    0.539    0.543         15    0.397    0.400
    20    0.480    0.480         20    0.350    0.351
    25    0.436    0.437         25    0.316    0.317
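The "averaging" relation between the two eigenvalue sets can be illustrated numerically. Assuming the numerator eigenvalues cos(πt/N) for T_1 and cos(2πt/N) for the circular R_1 (the function names below are illustrative, not from the text), half of the T_1 eigenvalues coincide with R_1 eigenvalues and the rest fall at the intermediate angles:

```python
import math

def t1_eigs(N):
    # numerator eigenvalues of T_1: cos(pi t / N), t = 1..N-1
    return sorted(math.cos(math.pi * t / N) for t in range(1, N))

def r1_eigs(N):
    # numerator eigenvalues of circular R_1: cos(2 pi t / N), t = 1..N-1
    return sorted(math.cos(2 * math.pi * t / N) for t in range(1, N))

N = 12
t1, r1 = t1_eigs(N), r1_eigs(N)
# even-index angles coincide: cos(pi(2u)/N) = cos(2 pi u / N)
shared = [math.cos(2 * math.pi * u / N) for u in range(1, N // 2)]
for v in shared:
    assert any(abs(v - w) < 1e-12 for w in t1)
    assert any(abs(v - w) < 1e-12 for w in r1)
```

The odd-index T_1 angles bisect adjacent R_1 angles, which is the sense in which the T_1 eigenvalues average those of R_1.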
4.  FORMULAS RELATED TO THE MOMENTS OF T_p AND T_p'

4.1  Basic Formulas

Statistics of the type y = x'Ax/x'x can be shown to be independent (in a statistical sense) of their denominators (see Von Neumann 1941). Hence if we let, say, T = Σ_{t=1}^{n} z_t^2, then T_p and T are independent and T_p' and T are independent. Therefore it follows that

(4.1)    E[(T_p T)^m] = E(T_p^m) E(T^m),

E denoting the expectation operator, from which we obtain, since T_p T = Σ_{t=1}^{n} λ_t z_t^2,

(4.2)    E(T_p^m) = E[(Σ_{t=1}^{n} λ_t z_t^2)^m] / E(T^m).
For T_p' we have the same result with λ_t^* in place of λ_t, thus

(4.3)    E(T_p'^m) = E[(Σ_{t=1}^{n} λ_t^* z_t^2)^m] / E(T^m).

It is well known that

(4.4)    E(T^ν) = 2^ν (n/2)(n/2 + 1) ··· (n/2 + ν - 1),    for ν = 1, 2, 3, ...,

T being chi-square with n degrees of freedom.
Furthermore it may be shown by the use of the moment generating function ψ(z) = E[exp(z Σ_{t=1}^{n} λ_t z_t^2)] that

(4.5)    E[(Σ_{t=1}^{n} λ_t z_t^2)^m] = 2^m m! t_m,

where

(4.6)    t_j = Σ_{k=1}^{j} (1/k!) Σ w_{s_1} w_{s_2} ··· w_{s_k},

the inner sum taken over all s_1, s_2, ..., s_k such that each s_i ≥ 1, i = 1, 2, ..., k, and s_1 + s_2 + ··· + s_k = j, and

(4.7)    w_m = (1/(2m)) Σ_{t=1}^{n} λ_t^m,    for integer m.

From (4.6) we obtain

    t_1 = w_1

    t_2 = w_1^2/2 + w_2

    t_3 = w_1^3/6 + w_1 w_2 + w_3

    t_4 = w_1^4/24 + w_1^2 w_2 / 2 + w_2^2/2 + w_1 w_3 + w_4

    t_5 = w_1^5/120 + w_1^3 w_2 / 6 + w_1 w_2^2/2 + w_1^2 w_3 / 2 + w_2 w_3 + w_1 w_4 + w_5
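The expansion (4.6) is the usual exponential formula; equivalently (an observation not made in the text, but standard for exponentials of power series) the t_j satisfy the recursion j t_j = Σ_{k=1}^{j} k w_k t_{j-k}, which can be checked against the explicit expressions above:

```python
from fractions import Fraction

def t_coeffs(w, J):
    """t_j from exp(sum_m w_m z^m) = sum_j t_j z^j, via j*t_j = sum_k k*w_k*t_{j-k}."""
    t = [Fraction(1)]  # t_0 = 1
    for j in range(1, J + 1):
        s = sum(k * w.get(k, Fraction(0)) * t[j - k] for k in range(1, j + 1))
        t.append(s / j)
    return t

# arbitrary rational w's, chosen only to exercise the identities
w = {1: Fraction(2), 2: Fraction(1, 3), 3: Fraction(5), 4: Fraction(1, 7), 5: Fraction(1)}
t = t_coeffs(w, 5)
w1, w2, w3, w4, w5 = (w[m] for m in range(1, 6))
assert t[1] == w1
assert t[2] == w1**2 / 2 + w2
assert t[3] == w1**3 / 6 + w1 * w2 + w3
assert t[4] == w1**4 / 24 + w1**2 * w2 / 2 + w2**2 / 2 + w1 * w3 + w4
assert t[5] == (w1**5 / 120 + w1**3 * w2 / 6 + w1 * w2**2 / 2
                + w1**2 * w3 / 2 + w2 * w3 + w1 * w4 + w5)
```

The recursion is convenient for machine computation of moments of any order.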
4.2  As Applied to T_1 and T_1'

For T_1 and T_1' we have λ_t = λ_t^* = cos(πt/N), t = 1, 2, ..., n. Thus (4.7) becomes

(4.9)    w_m = (1/(2m)) Σ_{t=1}^{n} (cos(πt/N))^m.
From the fact that cos(π(N-t)/N) = -cos(πt/N) we see that for m odd, w_m = 0; and, since for j odd each term of t_j contains at least one w_m with m odd, it follows that all odd moments of T_1 and T_1' are zero. We thus consider m even. We put cos(πt/N) = (e^{iπt/N} + e^{-iπt/N})/2 to obtain

(4.10)    w_m = (1/(m 2^{m+1})) Σ_{t=1}^{n} (e^{iπt/N} + e^{-iπt/N})^m.

Using

    (x + y)^m = Σ_{s=0}^{m} C(m,s) x^s y^{m-s},

C(m,s) denoting the binomial coefficient, we rewrite (4.10) as

    w_m = (1/(m 2^{m+1})) Σ_{t=1}^{n} Σ_{s=0}^{m} C(m,s) e^{iπt(2s-m)/N},
or, interchanging the order of summation, as

(4.11)    w_m = (1/(m 2^{m+1})) Σ_{s=0}^{m} C(m,s) [e^{iπ(2s-m)} - e^{iπ(2s-m)/N}] / [e^{iπ(2s-m)/N} - 1],

if e^{iπ(2s-m)/N} ≠ 1, that is, (2s-m)/N ≠ 2j, j = ..., -2, -1, 0, 1, 2, .... Since m is even, e^{iπ(2s-m)} = 1, so that

    [e^{iπ(2s-m)} - e^{iπ(2s-m)/N}] / [e^{iπ(2s-m)/N} - 1] = -1.

Defining J = {..., -2, -1, 0, 1, 2, ...}, S = {0, 1, 2, ..., m}, and E_J to be the subset of S for which (2s-m)/N = 2j, j ∈ J, we can write (4.11) as

(4.12)    w_m = (1/(m 2^{m+1})) [n Σ_{s∈E_J} C(m,s) - Σ_{s∈S-E_J} C(m,s)] = (N/(m 2^{m+1})) Σ_{s∈E_J} C(m,s) - 1/(2m).
We find w_2 = (N-2)/8, w_4 = (3N-8)/64, and w_6 = (5N-16)/192, these holding for N > 3; and, putting these into (4.6) to get t_2, t_4, and t_6, we obtain, using (4.5) and (4.4) in conjunction with (4.2),

(4.13)    E(T_1^2) = (N-2)/(N^2-1),

          E(T_1^4) = 3(N^2 + 2N - 12) / [(N^2-1)(N+3)(N+5)],

          E(T_1^6) = 15(N^3 + 12N^2 + 8N - 168) / [(N^2-1)(N+3)(N+5)(N+7)(N+9)].

It is these moments which were used in the approximation (3.6).
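The closed forms in (4.13) can be checked numerically from the definitions alone. The sketch below (a modern check, not part of the original derivation) builds the w_m of (4.9), the t_j of (4.6) through the equivalent recursion j t_j = Σ k w_k t_{j-k}, and the chi-square moments (4.4):

```python
import math

def moment_T1(N, m):
    """E(T_1^m) via (4.2): E[(sum lam z^2)^m] / E(T^m), with lam_t = cos(pi t / N)."""
    n = N - 1
    lam = [math.cos(math.pi * t / N) for t in range(1, N)]
    w = {k: sum(l**k for l in lam) / (2 * k) for k in range(1, m + 1)}   # (4.7)
    t = [1.0]
    for j in range(1, m + 1):                                            # (4.6)
        t.append(sum(k * w[k] * t[j - k] for k in range(1, j + 1)) / j)
    num = 2**m * math.factorial(m) * t[m]                                # (4.5)
    den = 1.0
    for i in range(m):                                                   # (4.4)
        den *= n + 2 * i
    return num / den

for N in (8, 11, 20):
    assert abs(moment_T1(N, 2) - (N - 2) / (N**2 - 1)) < 1e-12
    assert abs(moment_T1(N, 4)
               - 3 * (N**2 + 2 * N - 12) / ((N**2 - 1) * (N + 3) * (N + 5))) < 1e-12
    assert abs(moment_T1(N, 6)
               - 15 * (N**3 + 12 * N**2 + 8 * N - 168)
               / ((N**2 - 1) * (N + 3) * (N + 5) * (N + 7) * (N + 9))) < 1e-12
```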
4.3  As Applied to T_p and T_p'

It follows from the formulas given in section 4.1 that the moments are obtainable once the t_j are known. These are determined by the w_m as given by (4.7). We thus present formulas for w_m for T_p and T_p'. For T_p we have
(4.14)    w_m = (1/(2m)) [Σ_{t=1}^{q-1} (cos(πt/q))^m + (k-1) Σ_{t=1}^{q} (cos(2πt/q))^m].

We note that (4.14) holds for k = 1 and that the evaluation of the first sum in (4.14) follows as in the previous section. For the second sum we have
(4.15)    Σ_{t=1}^{q} (cos(2πt/q))^m = Σ_{t=1}^{q-1} (cos(2πt/q))^m + 1

                                 = (1/2^m) Σ_{n=0}^{m} C(m,n) Σ_{t=1}^{q} e^{i2πtn/q} e^{-i2πt(m-n)/q}

                                 = (1/2^m) Σ_{n=0}^{m} C(m,n) Σ_{t=1}^{q} e^{i2πt(2n-m)/q}

                                 = (q/2^m) Σ_{n∈N_J} C(m,n),

where N_J is the subset of {0, 1, 2, ..., m} for which (2n-m)/q ∈ J. Putting (4.15) into (4.14) and noting that

    Σ_{t=1}^{q-1} (cos(πt/q))^m = 0,   if m is odd,

we can write (4.14) as
(4.16)    w_m = (q(k-1)/(m 2^{m+1})) Σ_{s∈N_J} C(m,s),   for m odd,

          w_m = (N/(m 2^{m+1})) Σ_{s∈N_J} C(m,s) - 1/(2m) + (q/(m 2^{m+1})) [Σ_{s∈E_J} C(m,s) - Σ_{s∈N_J} C(m,s)],   for m even,

E_J now being defined with q in place of N.
For T_p' we have

(4.17)    w_m = (1/(2m)) [Σ_{t=1}^{q-1} (cos(πt/q))^m + (k-1) Σ_{t=1}^{q} (sin(2πt/q))^m],

from which we see that w_m = 0 for m odd, since cos(π(q-t)/q) = -cos(πt/q) and sin(2π(q-t)/q) = -sin(2πt/q). Thus we consider m even in evaluating the second sum in (4.17).
We have

    Σ_{t=1}^{q} (sin(2πt/q))^m = Σ_{t=1}^{q} (1/(2i)^m) Σ_{n=0}^{m} C(m,n) (-1)^{m-n} e^{i2πtn/q} e^{-i2πt(m-n)/q}

                           = ((-1)^{m/2}/2^m) Σ_{n=0}^{m} C(m,n) (-1)^n Σ_{t=1}^{q} e^{i2πt(2n-m)/q}

                           = (-1)^{m/2} (q/2^m) Σ_{n∈N_J} C(m,n) (-1)^n.
Therefore (4.17) becomes

(4.18)    w_m = 0,   for m odd,

          w_m = (q/(m 2^{m+1})) Σ_{s∈E_J} C(m,s) - 1/(2m) + (q(k-1)(-1)^{m/2}/(m 2^{m+1})) Σ_{s∈N_J} C(m,s) (-1)^s,   for m even.
Defining

    F(m, q) = Σ_{s∈N_J} C(m,s),

    G(m, q) = Σ_{s∈E_J} C(m,s),

and

    H(m, q) = Σ_{s∈N_J} C(m,s) (-1)^s,

it is clear that we may rewrite (4.16) and (4.18) respectively as

(4.19)    w_m = (q(k-1)/(m 2^{m+1})) F(m, q),   for m odd,

          w_m = (N/(m 2^{m+1})) F(m, q) - 1/(2m) + (q/(m 2^{m+1})) [G(m, q) - F(m, q)],   for m even,

and

(4.20)    w_m = 0,   for m odd,

          w_m = (q/(m 2^{m+1})) G(m, q) - 1/(2m) + (q(k-1)(-1)^{m/2}/(m 2^{m+1})) H(m, q),   for m even.

We give the following table of values of F(m,q), G(m,q), and H(m,q), these being sufficient for evaluating w_m for m ≤ 10 (any q) and thus sufficient for calculating the first ten moments of any T_p or T_p'.
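Formula (4.19) can be checked directly against the eigenvalue sums it summarizes. The sketch below (assuming the eigenvalue sets implicit in (4.14): cos(πt/q), t = 1, ..., q-1, together with k-1 copies of cos(2πt/q), t = 1, ..., q) compares the two computations of w_m:

```python
import math
from math import comb

def F(m, q):
    # sum of C(m,s) over s with (2s - m)/q an integer
    return sum(comb(m, s) for s in range(m + 1) if (2 * s - m) % q == 0)

def G(m, q):
    # sum over s with (2s - m)/q an even integer
    return sum(comb(m, s) for s in range(m + 1)
               if (2 * s - m) % q == 0 and ((2 * s - m) // q) % 2 == 0)

def w_direct(m, q, k):
    lam = [math.cos(math.pi * t / q) for t in range(1, q)]
    lam += (k - 1) * [math.cos(2 * math.pi * t / q) for t in range(1, q + 1)]
    return sum(l**m for l in lam) / (2 * m)          # (4.7)

def w_formula(m, q, k):                               # (4.19)
    N = k * q
    if m % 2:
        return q * (k - 1) * F(m, q) / (m * 2**(m + 1))
    return (N * F(m, q) - 2**m + q * (G(m, q) - F(m, q))) / (m * 2**(m + 1))

for q in (3, 4, 5, 6):
    for k in (2, 3):
        for m in (2, 3, 4, 5, 6):
            assert abs(w_direct(m, q, k) - w_formula(m, q, k)) < 1e-9
```

Formula (4.20) admits the same kind of check with sin(2πt/q) in place of cos(2πt/q).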
Table 4.1
Values of F(m,q), G(m,q), and H(m,q) for m ≤ 10 (a)

F(m,q)
 m\q      2     3     4     5     6     7     8     9    10    11
  2       4     2     2     2     2     2     2     2     2     2
  3       0     2     0     0     0     0     0     0     0     0
  4      16     6     8     6     6     6     6     6     6     6
  5       0    10     0     2     0     0     0     0     0     0
  6      64    22    32    20    22    20    20    20    20    20
  7       0    42     0    14     0     2     0     0     0     0
  8     256    86   128    70    86    70    72    70    70    70
  9       0   170     0    72     0    18     0     2     0     0
 10    1024   342   512   254   342   252   272   252   254   252

G(m,q)
 m\q      2     3     4     5     6     7     8     9    10    11
  2       2     2     2     2     2     2     2     2     2     2
  3       0     0     0     0     0     0     0     0     0     0
  4       8     6     6     6     6     6     6     6     6     6
  5       0     0     0     0     0     0     0     0     0     0
  6      32    22    20    20    20    20    20    20    20    20
  7       0     0     0     0     0     0     0     0     0     0
  8     128    86    72    70    70    70    70    70    70    70
  9       0     0     0     0     0     0     0     0     0     0
 10     512   342   272   254   252   252   252   252   252   252

H(m,q)
 m\q      2     3     4     5     6     7     8     9    10    11
  2       0    -2    -2    -2    -2    -2    -2    -2    -2    -2
  3       0     0     0     0     0     0     0     0     0     0
  4       0     6     8     6     6     6     6     6     6     6
  5       0     0     0     0     0     0     0     0     0     0
  6       0   -18   -32   -20   -18   -20   -20   -20   -20   -20
  7       0     0     0     0     0     0     0     0     0     0
  8       0    54   128    70    54    70    72    70    70    70
  9       0     0     0     0     0     0     0     0     0     0
 10       0  -162  -512  -250  -162  -252  -272  -252  -250  -252

(a) The values for m = 1 are not included since w_1 = 0 for T_p and T_p'.
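The table entries follow mechanically from the definitions of F, G, and H; a short script reproduces representative cells:

```python
from math import comb

def F(m, q):
    # sum of C(m,s) over s in {0,...,m} with (2s - m)/q an integer
    return sum(comb(m, s) for s in range(m + 1) if (2 * s - m) % q == 0)

def G(m, q):
    # same, restricted to (2s - m)/q an even integer
    return sum(comb(m, s) for s in range(m + 1)
               if (2 * s - m) % q == 0 and ((2 * s - m) // q) % 2 == 0)

def H(m, q):
    # as F, with alternating signs (-1)^s
    return sum((-1)**s * comb(m, s) for s in range(m + 1) if (2 * s - m) % q == 0)

assert F(6, 3) == 22 and F(8, 3) == 86 and F(8, 8) == 72 and F(2, 2) == 4
assert G(8, 2) == 128 and G(10, 4) == 272 and G(6, 6) == 20
assert H(6, 4) == -32 and H(8, 6) == 54 and H(10, 4) == -512
```

For q > 11 and m ≤ 10 the values are constant in q, as is proved in the next section.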
4.4  Proof That the mth Moments of T_p and T_p' Are the Same as Those of T_1 for m < q

We prove that E(T_p^m) = E(T_p'^m) = E(T_1^m) for m < q by showing that w_m as given in (4.19) for T_p and (4.20) for T_p' reduces to w_m as given for T_1 by (4.12). We consider (4.19) first. For m < q and m odd we have F(m,q) = 0, since 2s - m is an odd integer satisfying -m ≤ 2s - m ≤ m and is never divisible by q. Thus w_m = 0 if m is odd and m < q. For m even, 2s - m is even and, since -m ≤ 2s - m ≤ m, there is only one value of s, namely s = m/2, for which 2s - m is divisible by q. Since G(m,q) = F(m,q) = C(m, m/2) for m < q, (4.19) becomes, for m < q,

    w_m = 0,   for m odd,

    w_m = (N/(m 2^{m+1})) C(m, m/2) - 1/(2m),   for m even,

which is the same as given by (4.12) since N ≥ q > m. In (4.20) we already have w_m = 0 for m odd, hence we consider m even and m < q. We obtain G(m,q) = C(m, m/2) and H(m,q) = C(m, m/2)(-1)^{m/2}, so that (4.20) becomes

    w_m = 0,   for m odd,

    w_m = (N/(m 2^{m+1})) C(m, m/2) - 1/(2m),   for m even,

the same as before.
It is because of the above results that the values tabulated in Table 4.1 are sufficient even for q > 11, the values for q > 11 being the same as those for q = 11. Since T_p and T_p' are thus distributed as T_1 through the first q - 1 moments, it seems that the study of the distributions of T_p and T_p' should be done in terms of q = N/p, a result analogous to that found by Anderson (1942) in his work on the serial correlation coefficient.
5.  ASYMPTOTIC DISTRIBUTIONS WITH THE EXACT DISTRIBUTION OF T_p FOR N = 2p AND N = 3p

5.1  An Asymptotic Distribution for T_p and T_p' for q → ∞

According to the remark ending the previous chapter it appears that T_p and T_p' are distributed as T_1 for q large (p fixed). That this is true we now show by exhibiting an asymptotic distribution which holds for T_p and T_p' for q → ∞. We start with equation (3.7), replacing z by -z, so that it becomes

(5.1)    ∫_{-1}^{1} (x + z)^{-n/2} w(x) dx = Π_{t=1}^{n} (λ_t + z)^{-1/2},

where we assume z > 1. Equation (5.1) thus determines the density function of T_p = x when the λ_t are given by Table 2.1 and the density function of T_p' = x when the λ_t are given by Table 2.2. For each case we now show that the right-hand side of (5.1) can be approximated by the same limiting expression for n (q) large. For T_p we have
(5.2)    Π_{t=1}^{n} (λ_t + z)^{-1/2} = exp{ -(1/2) [(k-1) Σ_{t=1}^{q} ln(cos(2πt/q) + z) + Σ_{t=1}^{q-1} ln(cos(πt/q) + z)] }.

Then, for example, we can write

    Σ_{t=1}^{q} ln(cos(2πt/q) + z) = (q/2π) Σ_{t=1}^{q} ln(cos x_t + z) Δx_t,

where x_t = 2πt/q and Δx_t = 2π/q. But from the definition of the Riemann integral,

    (q/2π) Σ_{t=1}^{q} ln(cos x_t + z) Δx_t ≈ (q/2π) ∫_0^{2π} ln(cos x + z) dx.
It thus follows that (5.2) can be approximated for q large as

    exp{ -(1/2) [(k-1)(q/2π) ∫_0^{2π} ln(cos x + z) dx + (q/π) ∫_0^{π} ln(cos x + z) dx] },

or, since ∫_0^{2π} ln(cos x + z) dx = 2 ∫_0^{π} ln(cos x + z) dx and (k-1)q + q = kq = N, by

(5.3)    exp{ -(N/2π) ∫_0^{π} ln(cos x + z) dx }.

We find

    ∫_0^{π} ln(cos x + z) dx = π ln[(z + √(z^2 - 1))/2],   for z ≥ 1.

Putting this into (5.3) we have that (5.2) can be approximated for large q by

(5.4)    [(z + √(z^2 - 1))/2]^{-N/2}.
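The closed form of the logarithmic integral used above is easily verified numerically (a check, not part of the original argument):

```python
import math

def log_integral(z, steps=100000):
    # midpoint rule for the integral of ln(cos x + z) over [0, pi]
    h = math.pi / steps
    return h * sum(math.log(math.cos((i + 0.5) * h) + z) for i in range(steps))

for z in (1.5, 2.0, 5.0):
    closed = math.pi * math.log((z + math.sqrt(z * z - 1)) / 2)
    assert abs(log_integral(z) - closed) < 1e-6
```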
For T_p' we have

    Π_{t=1}^{n} (λ_t^* + z)^{-1/2} = exp{ -(1/2) [(k-1) Σ_{t=1}^{q} ln(sin(2πt/q) + z) + Σ_{t=1}^{q-1} ln(cos(πt/q) + z)] },

which can be approximated by

    exp{ -(1/2) [(k-1)(q/2π) ∫_0^{2π} ln(sin x + z) dx + (q/π) ∫_0^{π} ln(cos x + z) dx] },

and, since

    ∫_0^{2π} ln(sin x + z) dx = 2 ∫_0^{π} ln(cos x + z) dx,

this also reduces to (5.4).
For N → ∞ we can then rewrite (5.1) as

(5.6)    ∫_{-1}^{1} (x + z)^{-N/2} w(x) dx = [(z + √(z^2 - 1))/2]^{-N/2},

which holds for T_p = x and T_p' = x. We now use (5.6) to obtain moments for large N and from these an approximate density function. We write (5.6) as

(5.7)    ∫_{-1}^{1} (1 + x/z)^{-N/2} w(x) dx = [(1 + √(1 - z^{-2}))/2]^{-N/2},

or, since |x/z| < 1, as

    ∫_{-1}^{1} { Σ_{ν=0}^{∞} (-1)^ν [(N/2)(N/2 + 1) ··· (N/2 + ν - 1)/ν!] (x/z)^ν } w(x) dx = [(1 + √(1 - z^{-2}))/2]^{-N/2}.

By an expansion due to Laplace (see Dixon 1944) we have

    [(1 + √(1 - z^{-2}))/2]^{-N/2} = 1 + (N/2)(1/2z)^2 + [(N/2)(N/2 + 3)/2](1/2z)^4 + ··· .

Putting this into (5.7) and equating coefficients of 1/z^ν, we see that all odd moments are zero. For ν = 2r we have

(5.8)    E(x^{2r}) = [1·3·5 ··· (2r-1)] / [(N+2)(N+4) ··· (N+2r)],

which are the same as those moments which may be obtained by considering the density

(5.9)    w(x) = [1/B(1/2, (N+1)/2)] (1 - x^2)^{(N-1)/2},   -1 ≤ x ≤ 1,

where

    B(x, y) = ∫_0^1 u^{x-1} (1 - u)^{y-1} du.
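That the density (5.9) has exactly the moments (5.8) can be verified by direct numerical integration (a check outside the original text):

```python
def mu(N, r, steps=100000):
    """2r-th moment of w(x) proportional to (1 - x^2)^((N-1)/2) on (-1, 1), midpoint rule."""
    h = 2.0 / steps
    num = den = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        wx = (1.0 - x * x) ** ((N - 1) / 2)
        num += x ** (2 * r) * wx
        den += wx
    return num / den

for N in (6, 11):
    for r in (1, 2, 3):
        target = 1.0
        for j in range(1, r + 1):
            target *= (2 * j - 1) / (N + 2 * j)     # (5.8)
        assert abs(mu(N, r) - target) < 1e-6
```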
5.2  Asymptotic Distribution of T_p for q Fixed

Preliminary to the results presented in this section we prove the following lemma.

LEMMA 5.1.  Let w' = (x_1, x_2, ..., x_n), where x_1, ..., x_n are NID(0,1); let

    X(w) = Σ_{t=1}^{n} λ_t x_t^2 / w'w,    Y(w) = Σ_{t=1}^{n} λ_t' x_t^2 / w'w,

and assume λ_t ≥ λ_t', t = 1, 2, ..., n. Then we have for each real number z

(5.10)    F_X(z) ≤ F_Y(z),

where F_X(z) and F_Y(z) are the respective distribution functions of X and Y.

Proof:  Let E = {w : X(w) ≤ z} and F = {w : Y(w) ≤ z}. Then if w_1 ∈ E we have X(w_1) ≤ z; and, since λ_t ≥ λ_t', we have Y(w_1) ≤ X(w_1), or Y(w_1) ≤ z. Therefore w_1 ∈ F, so we have E ⊆ F. Therefore it follows that P(E) ≤ P(F), or that F_X(z) ≤ F_Y(z).
As a consequence of Lemma 5.1 we also have that if z_1 satisfies P(X ≤ z_1) = α and z_1' satisfies P(Y ≤ z_1') = α, then z_1 ≥ z_1'. On the other hand, if z_2 satisfies P(X ≥ z_2) = α and z_2' satisfies P(Y ≥ z_2') = α, then z_2' ≤ z_2.

We also note that the lemma holds even if λ_t ≥ λ_t' does not hold for all t, if it is possible to rename the x's in X(w) so that the condition is true. For example, if

    X(w) = (3x_1^2 + 2x_2^2 + 4x_3^2) / w'w    and    Y(w) = (2x_1^2 + 3x_2^2 + x_3^2) / w'w,

then λ_t ≥ λ_t' does not hold for all t. However, since x_1, x_2, and x_3 are NID(0,1), obviously X(w) has the same distribution as

    (3x_1^2 + 4x_2^2 + 2x_3^2) / w'w,

and this statistic satisfies λ_t ≥ λ_t' for all t with respect to Y(w).
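The renaming device of the example can be illustrated directly: after relabeling, λ_t ≥ λ_t' holds termwise, so Y(w) ≤ X(w) for every realization w, which is the pointwise fact behind (5.10). A small sketch:

```python
import random

random.seed(0)

def ratio(lams, x):
    # quadratic-form ratio sum(lam_t x_t^2) / w'w
    s = sum(v * v for v in x)
    return sum(l * v * v for l, v in zip(lams, x)) / s

X_renamed = (3.0, 4.0, 2.0)   # X(w) after renaming x_2 and x_3
Y = (2.0, 3.0, 1.0)

for _ in range(1000):
    w = [random.gauss(0.0, 1.0) for _ in range(3)]
    # termwise lam_t >= lam_t' implies Y(w) <= X(w), hence F_X(z) <= F_Y(z)
    assert ratio(Y, w) <= ratio(X_renamed, w) + 1e-12
```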
It can be shown that T_p(w) as given by (2.34) and Anderson's R_p(w) (see Anderson 1942) satisfy

(5.11)    T_p(w) ≥ R_p(w),

and moreover that they differ by the statistic

(5.12)    C_q = Σ_{r=1}^{q/2} [cos(2πr/q) - cos((2r-1)π/q)] x_r^2 / w'w.

Using (5.11) along with Lemma 5.1 we have

(5.13)    F_{T_p}(z) ≤ F_{R_p}(z).

We also have that E(C_q) = O(1/N) and that E(C_q^2) = O(1/N^2), for fixed q, so that
plim C_q = 0. Since T_p + C_q = R_p, we have, by a convergence theorem in Cramer (1946), that T_p and R_p have the same limiting distribution, (5.13) showing that the approach is from below. For q = 2 we present the following comparison of 5% points.

     p      T_p      R_p
     3     .785     .729
     5     .612
     7     .536

The values of T_p were obtained using the exact density of T_p = x, say, which can be got for the cases N = 2p and N = 3p.
5.3  Exact Densities of T_p for N = 2p and N = 3p

The exact densities of T_p = x for N = 2p and N = 3p can be derived by Anderson's chi-square method. These are:

N = 2p:

(5.12)    f(x) = f_1(x) = ··· ,   for 0 ≤ x ≤ 1.

N = 3p:

(5.13)    f(x) = c_1 (1 + 2x)^{(p-2)/2} ∫_0^1 [4(1-x) - (1+2x)u]^{(2p-3)/2} (1-u)^{(p-3)/2} u^{-1/2} du,   for -1/2 ≤ x ≤ 1/2,

          f(x) = c_2 ∫_0^1 [(1+2x) - 4(1-x)u]^{(p-3)/2} (1-u)^{(2p-3)/2} u^{-1/2} du,   for 1/2 ≤ x ≤ 1,

the constants c_1 and c_2 involving 2^{2-p}, powers of 3, and ratios of gamma functions.
5.4  An Asymptotic Distribution of T_p^2 + T_p'^2

We first prove the following lemma.

LEMMA 5.2.  For T_p and T_p' as given by (2.3) and (2.4), we have

(5.14)    Cov(T_p, T_p') = 0,   for p < N/2.

Proof:  We have Cov(T_p, T_p') = E(T_p T_p') - E(T_p) E(T_p') = E(T_p T_p'), since we have already shown E(T_p) = E(T_p') = 0. We have

(5.15)    T_p T_p' = Σ_{r=1}^{N} Σ_{t=1}^{N} (x_r - x̄)^2 (x_t - x̄)^2 cos(2πpr/N) sin(2πpt/N) / Σ_{r=1}^{N} Σ_{t=1}^{N} (x_r - x̄)^2 (x_t - x̄)^2.
If we define

    Q = Σ_{r=1}^{N} Σ_{t=1}^{N} (x_r - x̄)^2 (x_t - x̄)^2

and define

(5.16)    f_r(x_1, x_2, ..., x_N) = Σ_{i=1}^{N} sin(2πpi/N) (x_i - x̄)^2 (x_r - x̄)^2 / Q,

and put (5.16) into (5.15), we obtain

(5.17)    T_p T_p' = Σ_{r=1}^{N} cos(2πpr/N) f_r(x_1, x_2, ..., x_N).

Since T_p and T_p' are bounded, E(T_p T_p') certainly is finite; moreover, (5.17) shows that

(5.18)    E(T_p T_p') = Σ_{r=1}^{N} cos(2πpr/N) E(f_r).

By symmetry considerations we obtain

(5.19)    E(f_r) = M_1 sin(2πpr/N) + M_2 Σ_{i≠r} sin(2πpi/N),

where

    M_1 = E[(x_i - x̄)^4 / Q],   for i = 1, 2, ..., N,

and

    M_2 = E[(x_i - x̄)^2 (x_j - x̄)^2 / Q],   for i ≠ j.

Putting (5.19) into (5.18) we obtain

    E(T_p T_p') = Σ_{r=1}^{N} cos(2πpr/N) [M_1 sin(2πpr/N) + M_2 Σ_{i≠r} sin(2πpi/N)]

              = [M_1 - M_2] Σ_{r=1}^{N} sin(2πpr/N) cos(2πpr/N)

              = [(M_1 - M_2)/2] Σ_{r=1}^{N} sin(4πpr/N)

              = 0,   for p < N/2.
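Lemma 5.2 can be checked by simulation. The sketch below takes the definitions implicit in (5.15), T_p = Σ (x_r - x̄)^2 cos(2πpr/N) / Σ (x_r - x̄)^2 and T_p' with sin in place of cos, verifies the trigonometric identity closing the proof, and estimates the covariance:

```python
import math
import random

random.seed(1)
N, p = 12, 2

def T_pair(x):
    xb = sum(x) / N
    d2 = [(v - xb) ** 2 for v in x]
    s = sum(d2)
    tp = sum(d * math.cos(2 * math.pi * p * (r + 1) / N) for r, d in enumerate(d2)) / s
    tpp = sum(d * math.sin(2 * math.pi * p * (r + 1) / N) for r, d in enumerate(d2)) / s
    return tp, tpp

# the trigonometric identity used in the last step of the proof (p < N/2)
assert abs(sum(math.sin(4 * math.pi * p * r / N) for r in range(1, N + 1))) < 1e-12

M = 20000
samples = [T_pair([random.gauss(0.0, 1.0) for _ in range(N)]) for _ in range(M)]
ma = sum(a for a, b in samples) / M
mb = sum(b for a, b in samples) / M
cov = sum((a - ma) * (b - mb) for a, b in samples) / M
assert abs(cov) < 0.01
```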
In the density (5.9), which approximates T_p = x and T_p' = x for q large, p fixed, we put √(N+2) x = u. The density of u is

(5.20)    g(u) = [1/(√(N+2) B(1/2, (N+1)/2))] (1 - u^2/(N+2))^{(N-1)/2},   |u| ≤ √(N+2).

It is easily shown that

    lim_{N→∞} g(u) = (1/√(2π)) e^{-u^2/2}.

Since T_p and T_p', by Lemma 5.2, are uncorrelated, and since we have shown that for p fixed each is asymptotically normal, we conclude that T_p and T_p' are asymptotically independent. Hence we see that

    Y = (N+2)(T_p^2 + T_p'^2)

can be treated as a chi-square random variable with 2 degrees of freedom.
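The chi-square approximation for Y can be examined by simulation. Using the same working definitions of T_p and T_p' as in the check of Lemma 5.2, the sample mean of Y should be close to 2, the mean of a chi-square variable with 2 degrees of freedom:

```python
import math
import random

random.seed(2)
N, p, M = 60, 1, 8000

def T_pair(x):
    xb = sum(x) / N
    d2 = [(v - xb) ** 2 for v in x]
    s = sum(d2)
    c = sum(d * math.cos(2 * math.pi * p * (r + 1) / N) for r, d in enumerate(d2)) / s
    sn = sum(d * math.sin(2 * math.pi * p * (r + 1) / N) for r, d in enumerate(d2)) / s
    return c, sn

ys = []
for _ in range(M):
    tp, tpp = T_pair([random.gauss(0.0, 1.0) for _ in range(N)])
    ys.append((N + 2) * (tp * tp + tpp * tpp))

mean = sum(ys) / M
assert abs(mean - 2.0) < 0.15       # chi-square(2) has mean 2
```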
6.  HEURISTIC REMARKS SUPPORTING THE USE OF CERTAIN APPROXIMANTS TO THE DENSITIES OF T_p AND T_p'

6.1  Preliminary Discussion

We use the forms

(6.1)    T_p = Σ_{t=1}^{n} λ_t z_t^2 / Σ_{t=1}^{n} z_t^2

and

(6.2)    T_p' = Σ_{t=1}^{n} λ_t^* z_t^2 / Σ_{t=1}^{n} z_t^2

in conjunction with equation (3.8), where the λ_t are given by Table 2.1 and the λ_t^* by Table 2.2. In order to use (3.8) we must consider n even, whence N = n + 1 must be odd. It can be shown, however, that for N odd the approximations we are to propose do not follow from (3.8), due to the multiplicities of the λ's involved. Our first approximation then is to consider N even but to use (3.8) as though N were odd. That this approximation makes sense follows from the fact that for N even at least two of the λ_t and λ_t^* equal zero; hence the numerators of (6.1) and (6.2) in actuality depend on at most n - 1 variables; and, if we omit from the denominators the variable which does not appear in the numerators, the denominators will depend on only n - 1 variables.

With the above approximation made, (3.8) holds for N even, since n - 1 is then even. We will consider N of the form N = qp, so that k = p in Table 2.1 and Table 2.2, and give the approximations in terms of N, p, and q.
6.2  Approximants to the Densities of T_p and T_p'

We present in detail the derivation of only one of the approximants, that one which approximates the density of T_p for N = qp (q even, p even). Equation (3.8) can be written

(6.3)    d^{(m/2)-1} w(x) / dx^{(m/2)-1} = 0,   for t even,

         d^{(m/2)-1} w(x) / dx^{(m/2)-1} = (-1)^{(m-t-1)/2} (1/π) Π_{j=1}^{m} |λ_j - x|^{-1/2},   for t odd,

for λ_t > x > λ_{t+1}, t = 1, 2, ..., m - 1, where m = n - 1 and the λ_t are given by Table 2.1, omitting two of the zero values.

Since p is even, p - 1 is odd; thus from Table 2.1 we see that λ_1 = ··· = λ_{p-1} = 1, and from the ordering λ_1 ≥ λ_2 ≥ ··· ≥ λ_m we see that for x satisfying 1 > x > λ_p = cos(π/q) we have t odd. Hence near x = 1 we see from (6.3) that the (m/2 - 1)st derivative of the density w(x) behaves like a constant multiple of

    (1 - x)^{-(p-1)/2}.
Performing t successive integrations we obtain

    d^{(m/2)-1-t} w(x) / dx^{(m/2)-1-t} = c_t (1 - x)^{t - (p-1)/2} + ··· ,

which, for t = m/2 - 1, gives

(6.4)    w(x) ≈ C (1 - x)^{(m/2) - 1 - (p-1)/2},

the C's being constants; with m = N - 2 this exponent is (N-p+1)/2 - 2. On the other hand, using results previously obtained concerning the moments of T_p, we see that all odd moments are zero for q even; therefore an even-function approximation is appropriate. These remarks in conjunction with (6.4) lead us to propose the following approximant to the density of T_p = x for N = qp (p even and q even):

(6.5)    w(x) = Σ_{r=0}^{∞} a_r (1 - x^2)^{(N-p+1)/2 - 2 + r},   -1 < x < 1.

The a_r are to be determined by the condition of normalization and fitting of the first few even moments as determined by formulas in chapter 4.
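The fitting step just described can be sketched numerically. Below, two coefficients a_0 and a_1 of (6.5) are determined from the normalization condition and one even-moment condition; the example values N = 12, p = 2 and the target second moment (here the T_1 value from (4.13), used purely as an illustrative stand-in for the exact T_p moment) are assumptions of this sketch, not taken from the text:

```python
def moment_integral(c, k, steps=100000):
    # midpoint rule for the integral of x^(2k) (1 - x^2)^c over (-1, 1)
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        total += x ** (2 * k) * (1.0 - x * x) ** c
    return h * total

N, p = 12, 2                       # hypothetical example (q = 6; p and q both even)
c = (N - p + 1) / 2 - 2            # leading exponent in (6.5)
mu2 = (N - 2) / (N**2 - 1)         # illustrative target second moment

# constraints:  a0*I(c,0) + a1*I(c+1,0) = 1   (normalization)
#               a0*I(c,1) + a1*I(c+1,1) = mu2 (second moment)
A = [[moment_integral(c, 0), moment_integral(c + 1, 0)],
     [moment_integral(c, 1), moment_integral(c + 1, 1)]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
a0 = (A[1][1] - mu2 * A[0][1]) / det
a1 = (A[0][0] * mu2 - A[1][0]) / det

assert abs(a0 * A[0][0] + a1 * A[0][1] - 1.0) < 1e-9
assert abs(a0 * A[1][0] + a1 * A[1][1] - mu2) < 1e-12
```

Further a_r would be fitted in the same way from higher even moments.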
For q odd, the fact that the extreme λ's are 1, of multiplicity p - 1, and -cos(π/q), of multiplicity 2p - 1, leads us to propose the following approximate density for T_p = x:

q odd:

(6.6)    w(x) = Σ_{r=0}^{∞} a_r (1 - x)^{(N-p)/2 - 1 + r} (x + cos(π/q))^{(N-2p)/2 - 1 + r},   for -cos(π/q) ≤ x < 1,

the a_r to be determined as before.
Consistent with the fact that all odd moments of T_p' are zero for every q, a consideration of the extreme λ's as given in Table 2.2 leads us to propose the following two approximants to the density of T_p' = x:

q odd:

(6.7)    w(x) = Σ_{r=0}^{∞} a_r (cos^2(π/2q) - x^2)^{(N-p)/2 - 1 + r},   for -cos(π/2q) ≤ x ≤ cos(π/2q);

q even:

(6.8)    w(x) = Σ_{r=0}^{∞} a_r (cos^2(π/q) - x^2)^{(N-2p)/2 - 1 + r},   for -cos(π/q) ≤ x ≤ cos(π/q).
We propose to investigate these approximations in further research. In particular we propose to check (6.5) against the exact density as given by (5.12) for q = 2, and to check (6.6) against the exact density as given by (5.13) for q = 3. We remark that (6.5) - (6.8) are each of the form of the approximation studied by Hart (1942) and given by (3.6). This approximation gives excellent results for T_1.
68
7.
LIST OF REFERENCES
Anderson, R. L. 1941. Serial Correlation in the Analysis of Time
Series, unpublished thesis, Iowa State College.
Anderson, R. L. 1942. "Distribution of the Serial correlation
coefficient," AMS, 11: 1~13.
Anderson, T. W. and R. L. Anderson. 1950. "Distribution of the
circular serial correlation coefficient for residuals from a
fitted fourier series," AMS,
59.
n:
Browne, E. T. 1958. Introduction to the Theory of Determinants and
Matrices, Univ. of North Carolina Press.
Cramer, H. 1946. Mathematical Methods of Statistics, Princeton
University Press.
Dixon, W. J. 1944. "Further contributions to the problem of serial
correlation," AMS, 11: 119~144
Durbin, J. and G. S. Watson. 1951. '~xact tests of serial correlation
using non-circular statistics," AMS, 22: 446-451.
Geisser, S. 1957. "The distribution of the ratios of certain
quadratic forms in time-series," AMS, 28: 724.
Gur1and, J. 1948. "Inversion formulae for the distribution of
ratios," AMS, 19: 228-237.
Hannan, E. J.
1960.
Time Series Analysis, Methuen, London.
Hart, B. I. and J. Von Neumann. 1942. "Tabulation of the
probabilities of the ratio of the mean square successive
difference to the variance," AMS, 13: 207-214.
Herbst, L. J. 1962. Periodic Variances, unpublished thesis,
Harvard University.
Hobson, E. W.
1928.
Plane Trigonometry, Cambridge University Press.
Koopmans, T. 1942. '~eria1 correlation and quadratic forms in
normal variables," AMS, 13: 14.
Leipnik, R. B. 1947. "Distribution of the serial. correlation
coefficient in a circularly correlated universe," AMS, 18:
Mad ow , W. G. 1945. "Note on the distribution of the serial
correlation coefficient," AMS, 16: 308-310.
80.
69
Von Neumann, J. 1941. "Distribution of the ratio of the mean square
successive difference to the variance," AMS, 11: 367-395.
Whittle, P. 1951. Hypothesis Testing in Time Series Analysis.
Almquist and Wikse11s.