ON THE THEORY OF ELLIPTICALLY CONTOURED DISTRIBUTIONS

by

Stamatis Cambanis¹, Steel Huang² and Gordon Simons³

University of North Carolina

¹The work of this author was supported by the Air Force Office of Scientific Research Grant AFOSR-75-2796.
²The work of this author was supported by the Air Force Office of Scientific Research Grant AFOSR-75-2796 and also by the Office of Naval Research Contract N00014-C-0809.
³The work of this author was supported by the National Science Foundation Grant MCS78-01240.
Abstract

The theory of elliptically contoured distributions is presented in an unrestricted setting (without reference to moment restrictions or assumptions of absolute continuity). These distributions are defined parametrically through their characteristic functions, and then studied primarily through the use of stochastic representations which naturally follow from the seminal work of Schoenberg on spherically symmetric distributions, appearing in 1938. It is shown that the conditional distributions of elliptically contoured distributions are elliptically contoured, and the conditional distributions are precisely identified. In addition, a number of the properties of normal distributions (which constitute a type of elliptically contoured distributions) are shown, in fact, to characterize normality. A by-product of the research is a new (and useful) characterization of certain classes of characteristic functions appearing in Schoenberg's work.

AMS 1970 Subject Classification: Primary 60E05

Key Words and Phrases: Elliptically contoured, multivariate, spherically symmetric, characteristic function, Laplace transform, conditional distribution, characterizations.
1. Introduction.

The elliptically contoured distributions on ℝⁿ (n-dimensional Euclidean space) are defined as follows. If X is an n-dimensional random (row) vector and, for some μ ∈ ℝⁿ and some n×n nonnegative definite matrix Σ, the characteristic function φ_{X-μ}(t) of X - μ is a function of the quadratic form tΣt', φ_{X-μ}(t) = φ(tΣt'), we say that X has an elliptically contoured distribution with parameters μ, Σ and φ, and we write X ~ EC_n(μ, Σ, φ). When φ(u) = exp(-u/2), EC_n(μ, Σ, φ) is the normal distribution N_n(μ, Σ); and when n = 1, the class of elliptically contoured distributions coincides with the class of one-dimensional symmetric distributions. The location and scale parameters μ and Σ can be any vector in ℝⁿ and any n×n nonnegative definite matrix, while the class of admissible functions φ will be discussed below. It is clear that for any c > 0, EC_n(μ, Σ, φ(·)) = EC_n(μ, cΣ, φ(c⁻¹·)). It is shown in Theorem 1 that this is the only redundancy in the parametric representation of a nondegenerate elliptically contoured distribution. The scale parameter Σ is proportional to the covariance matrix of X when the latter exists. Excluding the degenerate case Σ = 0, all of the components of X have finite second moments if and only if φ has a finite right-hand derivative φ'(0) at zero, and in this case the covariance matrix is -2φ'(0)Σ. (See Theorem 4.) When the components of X have finite first moments, it is clear that the location parameter μ is the mean of X.

Several properties of elliptically contoured distributions have been obtained by Kelker (1970) and by Das Gupta et al. (1972) when Σ is invertible and a density exists. Here, we consider the general case of elliptically contoured distributions and, by making use of a convenient stochastic representation which follows from the results of Schoenberg (1938), we show that their conditional distributions are also elliptically contoured and permit an intuitively appealing stochastic representation. From this work follow several new characterizations of the normal distributions. Section 2 deals with the theory of stochastic representations; Section 3 is concerned with the subject of conditional distributions; Section 4 discusses densities and the circumstances of their existence; and Section 5 is concerned with characterizations of normality.

In addition to the references already cited, there is an interesting discussion of a subclass of the elliptically contoured distributions by Dempster (1969).
2. Stochastic representations.

Our approach to defining elliptically contoured distributions by means of parametric triplets (μ, Σ, φ), besides providing greater generality than other approaches which require X to be absolutely continuous, has the advantage that the class of distributions is closed under linear transformations of X, with φ preserved under such transformations and with μ and Σ transformed in the same way as a mean vector and covariance matrix. Nevertheless, many of the properties of these distributions are more easily studied and described by means of stochastic representations of the form

(1)    X =ᵈ μ + R U^(ℓ) A ,

where U^(ℓ) is a random vector of dimension ℓ which is uniformly distributed on the unit sphere in ℝ^ℓ (ℓ ≥ 1), where R, independent of U^(ℓ), is a nonnegative random variable, and where μ and A are a nonstochastic vector and matrix of appropriate dimensions.

Denoting the characteristic function of U^(ℓ) by Ω_ℓ(‖t‖²), t ∈ ℝ^ℓ, and the distribution function of R by F, it easily follows that a random vector X ∈ ℝⁿ which is representable as in (1) is distributed as EC_n(μ, Σ, φ), where Σ = A'A and

(2)    φ(u) = ∫_[0,∞) Ω_ℓ(u r²) dF(r) ,    u ≥ 0 .

Thus every random vector X which is representable as in (1) is elliptically contoured.
Let Φ_ℓ, ℓ ≥ 1, denote the class of functions φ : [0,∞) → ℝ which are expressible as in (2) for some distribution function F on [0,∞); and let Φ_∞ denote the same kind of class when Ω_ℓ(r²u) is replaced by exp(-r²u/2). Schoenberg (1938) showed that for each ℓ ≥ 1, φ(‖t‖²), t ∈ ℝ^ℓ, is a characteristic function if and only if φ ∈ Φ_ℓ, and that Φ_ℓ ↓ Φ_∞ as ℓ → ∞.
Let X ~ EC_n(μ, Σ, φ) and the rank r(Σ) of Σ be k ≥ 1. Further let Σ = A'A be a "rank factorization" of Σ, i.e., a factorization such that A is k × n, necessarily of rank k. Then Y = (X - μ)A⁻, where A⁻ is a generalized inverse of A, has characteristic function

φ_Y(s) = φ(sA⁻'ΣA⁻s') = φ(sA⁻'A'AA⁻s') = φ(‖s‖²) ,

since AA⁻ equals the k × k identity matrix I_k, due to the fact that A is of full rank k. (Cf., Rao and Mitra (1971), page 23.) Thus Y ~ EC_k(0, I_k, φ) and φ(‖s‖²), s ∈ ℝ^k, is a characteristic function on ℝ^k. Consequently φ ∈ Φ_k, and it assumes the form (2) for ℓ = k and some distribution function F on [0,∞). Thus if R is independent of U^(k) and has the distribution function F, then μ + RU^(k)A ~ EC_n(μ, Σ, φ), and therefore

(3)    X =ᵈ μ + R U^(k) A .

In summary, X ~ EC_n(μ, Σ, φ) with r(Σ) = k if and only if X is representable as in (3), where R is a nonnegative random variable which is independent of U^(k) and Σ = A'A is a rank factorization of Σ (A: k × n, r(A) = k). The function φ and the distribution function F are related through (2) with ℓ = k. (When Σ is the zero matrix, k = 0 and X = μ a.s.) Also, it follows from (3) that the class of admissible φ in the parametric representation EC_n(μ, Σ, φ), when r(Σ) = k, is Φ_k.

We shall refer to the representation given in (3) as a canonical representation of X. (It is not unique.)
It is to be distinguished from the more general representation in (1), which might hold for ℓ > k = r(Σ). Observe, for instance, that if μ + RU^(k)A is a canonical representation of X and Y = XB + C is a linear transformation of X, then the representation (μB + C) + RU^(k)(AB) of Y is canonical if r(AB) = k, and noncanonical if r(AB) < k. For every index ℓ ≥ k for which φ ∈ Φ_ℓ, there is a representation of the type shown in (1). (See Corollary 2 below.) The distribution of R in (1) depends on ℓ, which is apparent from (2). There are precise relationships between the corresponding parts of any two representations of an elliptically contoured random vector; these are described in Theorem 3 below.
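The canonical representation (3) translates directly into a simulation recipe. The following Python sketch is illustrative only and is not part of the original paper: it builds a rank factorization Σ = A'A, draws U^(k) by normalizing a standard normal vector, and multiplies by an independent radius R supplied by the caller; taking R to be a chi variable with k degrees of freedom reproduces N_n(μ, Σ), while other choices of R give other elliptically contoured laws with the same μ and Σ.

```python
import numpy as np

def sample_ec(mu, Sigma, sample_R, size, rng):
    """Draw `size` rows X = mu + R * U^(k) A, per the canonical representation (3).

    Sigma = A'A comes from a rank factorization (here via eigendecomposition),
    U^(k) is uniform on the unit sphere of R^k, and R = sample_R(size) is an
    independent nonnegative radial variable chosen by the caller.
    """
    vals, vecs = np.linalg.eigh(Sigma)
    keep = vals > 1e-12 * vals.max()                        # drop numerically zero eigenvalues
    A = np.sqrt(vals[keep])[:, None] * vecs[:, keep].T      # k x n matrix with A'A = Sigma
    k = A.shape[0]
    Z = rng.standard_normal((size, k))
    U = Z / np.linalg.norm(Z, axis=1, keepdims=True)        # U^(k): uniform on the unit sphere
    R = sample_R(size)
    return mu + (R[:, None] * U) @ A

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 1.5]])
# Here k = 3; with R ~ chi_3 the recipe reproduces N_3(mu, Sigma).
X = sample_ec(mu, Sigma, lambda m: np.sqrt(rng.chisquare(3, m)), 50000, rng)
print(np.cov(X, rowvar=False))   # should be close to Sigma
```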
When Σ is of full rank n, then X ~ EC_n(μ, Σ, φ) has a canonical representation taking the form

(4)    X =ᵈ μ + R U^(n) Σ^{1/2} .

The distribution function F appearing in (2) for ℓ = k = r(Σ) will be called the canonical distribution function associated with X. Its special significance is made clear in the following lemma.
Lemma 1. If F is the canonical distribution function associated with X ~ EC_n(μ, Σ, φ) and k = r(Σ) ≥ 1, then the quadratic form

(5)    Q(X) = (X - μ) Σ⁻ (X - μ)' ,

where Σ⁻ is any generalized inverse of Σ, has the distribution function F(√·).

Proof. From the canonical representation (3), one obtains

Q(X) =ᵈ R² U^(k) A Σ⁻ A' U^(k)' .

By Rao (1973), page 26, (vii), A(A'A)⁻A' does not depend upon the particular generalized inverse chosen. Choosing the Moore-Penrose generalized inverse Σ⁺, for which (A'A)⁺ = A⁺A⁺' (cf., Boullion and Odell (1971), page 8), we have A(A'A)⁺A' = AA⁺A⁺'A' = I_k I_k = I_k. Thus Q(X) =ᵈ R², and the lemma follows. □

Remarks. The distribution function F and the quadratic form Q(X) are both defined with respect to the parameters (μ, Σ, φ) used to describe the distribution of X. As Theorem 1 below points out, other parameterizations are possible. If k = r(Σ) = 0, then X = μ a.s. and Q(X) = 0 a.s.
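Lemma 1 lends itself to a quick Monte Carlo check. The sketch below is illustrative only (not from the paper; the exponential radius is an arbitrary choice): the quadratic form Q(X) = (X-μ)Σ⁻(X-μ)' computed from simulated draws should have the same law as R².

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 1.5]])
A = np.linalg.cholesky(Sigma).T                      # rank factorization: A'A = Sigma (k = n = 3)
k, size = A.shape[0], 100000

Z = rng.standard_normal((size, k))
U = Z / np.linalg.norm(Z, axis=1, keepdims=True)     # U^(k) uniform on the unit sphere
R = rng.exponential(scale=1.0, size=size)            # any nonnegative radial law will do here
X = mu + (R[:, None] * U) @ A

Q = np.einsum('ij,jk,ik->i', X - mu, np.linalg.pinv(Sigma), X - mu)   # (X-mu) Sigma^- (X-mu)'
print(ks_2samp(Q, R**2))   # two-sample KS statistic should be small: Q(X) =d R^2 (Lemma 1)
```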
In the remainder of this section, we develop the theory germane to stochastic representations of elliptically contoured random vectors.
Theorem 1. If X ~ EC_n(μ, Σ, φ) and X ~ EC_n(μ₀, Σ₀, φ₀), then

(6)    μ₀ = μ .

Moreover, if X is nondegenerate, then there exists a c > 0 such that

(7)    Σ₀ = cΣ ,

(8)    φ₀(·) = φ(c⁻¹·) .

Proof. By examining characteristic functions, one can easily see that X - μ =ᵈ -(X - μ) and X - μ₀ =ᵈ -(X - μ₀); hence X - μ =ᵈ (μ - μ₀) + (X - μ₀) =ᵈ X - (2μ₀ - μ), so that the distribution of X is invariant under translation by 2(μ₀ - μ), from which (6) follows. Write Σ = (σ_ij), Σ₀ = (σ⁰_ij), μ = (μ₁, ..., μ_n). If X = (X₁, ..., X_n) is not degenerate, then one of its components X_j is not degenerate, and the characteristic function of X_j - μ_j is given by

φ(σ_jj u²) = φ₀(σ⁰_jj u²) ,    u ∈ ℝ ,

with σ_jj, σ⁰_jj > 0, which establishes (8) with c = σ⁰_jj/σ_jj. The hypotheses and (8) imply that the characteristic function of X satisfies

(9)    φ(tΣt') = φ₀(tΣ₀t') = φ(c⁻¹tΣ₀t') ,    t ∈ ℝⁿ .

Now, if (7) is not true, then for some t₀ ∈ ℝⁿ, t₀Σt₀' (= a²) ≠ c⁻¹t₀Σ₀t₀' (= b²). Substituting ut₀ (u ∈ ℝ) for t in (9) leads to

(10)    φ(a²u²) = φ(b²u²) ,    u ∈ ℝ ,

with a² ≠ b², say b² < a². It follows recursively from (10) that

φ(u²) = φ((b²/a²)ⁿ u²) ,    n = 1, 2, ... ,

which in the limit, as n → ∞, yields φ(u²) = 1, u ∈ ℝ. This contradicts the nondegeneracy of X, and hence (7) follows. □
Thus the distribution of an elliptically contoured random vector X does not uniquely determine the parameters (Σ, φ) in EC_n(μ, Σ, φ). This redundancy has its analogues in (1) and (3), where it is apparent that if R is multiplied by a positive constant and A is multiplied by the constant's reciprocal, then the distribution of X is unchanged. Except for this indeterminacy in scale, the distributions of R in (1) and (3) are determined by the distribution of X when X is not degenerate. This is a consequence of Corollary 2 below.
Theorem 2. A function φ : [0,∞) → ℝ belongs to Φ_ℓ if and only if φ is continuous and

(11)    ∫₀^∞ φ(2sv) g_ℓ(v) dv ,    s ≥ 0 ,

is the Laplace transform of a nonnegative random variable, where g_ℓ denotes the chi-squared density with ℓ degrees of freedom (1 ≤ ℓ < ∞).
Proof. The normal random vector X ~ N_ℓ(0, I_ℓ) is elliptically contoured, X ~ EC_ℓ(0, I_ℓ, φ) where φ(u) = exp(-u/2). This φ ∈ Φ_ℓ, and (2) yields the identity

exp(-u/2) = ∫₀^∞ Ω_ℓ(uv) g_ℓ(v) dv ,    u ≥ 0 ,

where g_ℓ is the density of the quadratic form Q(X) = XX' (see Lemma 1), or, what is more convenient for our purposes:

(12)    exp(-sr²) = ∫₀^∞ Ω_ℓ(2svr²) g_ℓ(v) dv ,    s ≥ 0 , r ∈ ℝ .

Now suppose φ is an arbitrary member of Φ_ℓ and F is the distribution function on [0,∞) associated with φ through (2). It follows immediately from (2) and (12) that (11) is the Laplace transform of R², where R has the distribution function F. Conversely, suppose (11) is the Laplace transform of a nonnegative random variable S and φ is continuous on [0,∞). Let F be the distribution function of S^{1/2}, and let φ₀ ∈ Φ_ℓ be the function defined in (2). We shall show that φ₀ = φ. From the first part of the proof we have

∫₀^∞ φ₀(2sv) g_ℓ(v) dv = ∫₀^∞ φ(2sv) g_ℓ(v) dv ,    s ≥ 0 ,

which yields

∫₀^∞ h(u) u^{ℓ/2 - 1} e^{-u/4s} du = 0 ,    s > 0 ,

where h = φ₀ - φ, and u = 2sv describes a change of variables. The latter says that the Laplace transform of h(u)u^{ℓ/2 - 1} in the variable 1/4s vanishes, and hence h(u) = 0 a.e. on [0,∞). Since φ is continuous by assumption, and φ₀ is continuous inasmuch as φ₀(u²) is a characteristic function, it follows that φ₀ = φ, and hence φ ∈ Φ_ℓ. □
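As a numerical illustration of Theorem 2 (not part of the paper): for the normal member φ(u) = exp(-u/2) of Φ_ℓ, expression (11) should equal (1+2s)^{-ℓ/2}, the Laplace transform of a chi-squared variable with ℓ degrees of freedom, in agreement with R² ~ χ²_ℓ in the canonical representation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import chi2

ell = 5
phi = lambda u: np.exp(-u / 2.0)                     # the normal member of Phi_ell

def expr_11(s):
    # (11): integral of phi(2 s v) against the chi-squared(ell) density in v
    val, _ = quad(lambda v: phi(2 * s * v) * chi2.pdf(v, ell), 0, np.inf)
    return val

for s in [0.1, 0.5, 1.0, 3.0]:
    print(s, expr_11(s), (1 + 2 * s) ** (-ell / 2))  # the two columns should agree
```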
Remarks. 1. It is apparent in the proof of Theorem 2 that for each fixed ℓ (1 ≤ ℓ < ∞), there is a one-to-one correspondence between functions φ ∈ Φ_ℓ and distribution functions F: Equation (2) defines φ in terms of F, and (11) describes the Laplace transform of the distribution function F(√·) in terms of φ.

2. According to Lemma 1, (11) is the Laplace transform of the quadratic form Q(X) = (X - μ)Σ⁻(X - μ)' when X is not degenerate and ℓ = k = r(Σ). It follows, in particular, that an elliptically contoured random vector X has a nondegenerate normal distribution if and only if Q(X) is a positive multiple of a chi-squared random variable with k = r(Σ) degrees of freedom. (The multiple is one if Σ is the covariance matrix of X. This corresponds to φ(u) = exp(-u/2), u ≥ 0.)

3. Theorem 2, together with Bernstein's theorem (cf., Feller (1971), page 439), can be used effectively to show when a candidate φ belongs to the class Φ_ℓ. For instance, it is easy to check that φ(·) = (1 - a·)exp(-·/2) ∈ Φ_ℓ if and only if 0 ≤ aℓ ≤ 1 (1 ≤ ℓ < ∞).

Corollary 2. Suppose X ~ EC_n(μ, Σ, φ) and X is nondegenerate. Then X is representable as in (1) if and only if ℓ ≥ r(Σ) and φ ∈ Φ_ℓ; equivalently, if and only if ℓ ≥ r(Σ) and (11) is the Laplace transform of a nonnegative random variable. If X is representable as in (1), then A'A is a positive multiple of Σ. Moreover, if A is scaled so as to make A'A = Σ, then the square of the random variable R appearing in (1) must have the Laplace transform given in (11).
Proof. We have already shown that (1) leads to X ~ EC_n(μ, Σ₀, φ₀), where Σ₀ = A'A and φ₀ is the function defined in (2). Since, by assumption, X ~ EC_n(μ, Σ, φ), it follows from Theorem 1 that φ ∈ Φ_ℓ and ℓ ≥ r(A) = r(Σ₀) = r(Σ). Conversely, if φ ∈ Φ_ℓ and R is a random variable, independent of U^(ℓ), whose distribution function is the F appearing in (2), then RU^(ℓ) ~ EC_ℓ(0, I_ℓ, φ) (a restatement of (2)), and hence μ + RU^(ℓ)A ~ EC_n(μ, Σ, φ), where A is ℓ × n and chosen so as to make A'A = Σ. That A can be so chosen is easily established when ℓ ≥ r(Σ). The remaining part of the first sentence, following the semicolon, in the statement of the corollary then becomes an immediate application of Theorem 2. Now suppose X is representable as in (1), so that X ~ EC_n(μ, Σ₀, φ₀) with Σ₀ = A'A. Then A'A is a positive multiple of Σ, and it is apparent that A can be rescaled so as to make A'A = Σ and φ₀ = φ (cf., Theorem 1). Then it follows from the first remark following the proof of Theorem 2 that the square of the random variable R appearing in (1) has the Laplace transform given in (11). □
Thus if a nondegenerate elliptically contoured random vector X has two representations X =ᵈ μ + RU^(ℓ)A and X =ᵈ μ₀ + R₀U^(ℓ)A₀, and A₀ = A (or A₀'A₀ = A'A), then μ₀ = μ and R₀ =ᵈ R. (If A₀'A₀ = cA'A, then R₀ =ᵈ c^{-1/2}R.)

Lemma 2. Suppose X =ᵈ RU^(n) ~ EC_n(0, I_n, φ) and P(X = 0) = 0. Then

‖X‖ =ᵈ R ,    X/‖X‖ =ᵈ U^(n) ,

and they are independent.

Proof. Since the mapping x → (‖x‖, x/‖x‖) is Borel measurable on ℝⁿ - {0}, it follows from the representation X =ᵈ RU^(n) that (‖X‖, X/‖X‖) =ᵈ (R, U^(n)), which proves the lemma. □

Write U^(n) = (U₁^(n), U₂^(n)), where U₁^(n) is m-dimensional (1 ≤ m < n).
Lemma 3. U^(n) =ᵈ (R_nm U^(m), (1 - R²_nm)^{1/2} U^(n-m)), where R_nm (≥ 0), U^(m) and U^(n-m) are independent and R²_nm ~ Beta(m/2, (n-m)/2); i.e., R_nm has the density function

[2Γ(n/2) / (Γ(m/2)Γ((n-m)/2))] r^{m-1} (1 - r²)^{(n-m)/2 - 1} ,    0 < r < 1 .

Proof. Let X = (X₁, X₂) ~ N_n(0, I_n), where the dimension of X₁ is m. Since X₁ and X₂ are independent, it follows from Lemma 2 that ‖X₁‖, ‖X₂‖, X₁/‖X₁‖ and X₂/‖X₂‖ are jointly independent, with X₁/‖X₁‖ =ᵈ U^(m) and X₂/‖X₂‖ =ᵈ U^(n-m). In what follows, we set U^(m) and U^(n-m) equal to X₁/‖X₁‖ and X₂/‖X₂‖, respectively, and define R_nm as ‖X₁‖/‖X‖ (= ‖X₁‖/(‖X₁‖² + ‖X₂‖²)^{1/2}). Clearly R_nm, U^(m) and U^(n-m) are independent, and R_nm has the required distribution since ‖X₁‖² and ‖X₂‖² are independent chi-squared variables with degrees of freedom m and n - m respectively. Finally, applying Lemma 2 again, we obtain

U^(n) =ᵈ X/‖X‖ = (R_nm U^(m), (1 - R²_nm)^{1/2} U^(n-m)) . □
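Lemma 3 is easy to verify by simulation (an illustrative sketch, not from the paper): for a standard normal vector in ℝⁿ split into blocks of dimensions m and n-m, the ratio ‖X₁‖²/‖X‖² plays the role of R²_nm and should follow a Beta(m/2, (n-m)/2) law.

```python
import numpy as np
from scipy.stats import beta, kstest

rng = np.random.default_rng(2)
n, m, size = 7, 3, 100000
X = rng.standard_normal((size, n))
ratio_sq = np.sum(X[:, :m]**2, axis=1) / np.sum(X**2, axis=1)   # R_nm^2 from the proof of Lemma 3
print(kstest(ratio_sq, beta(m / 2, (n - m) / 2).cdf))            # should not reject Beta(m/2, (n-m)/2)
```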
Remark. If φ ∈ Φ_N (N < ∞), then for each n, 1 ≤ n ≤ N, φ ∈ Φ_n and, according to (2),

φ(u) = ∫_[0,∞) Ω_n(u r²) dF_n(r) ,    u ≥ 0 ,

for some unique distribution function F_n on [0,∞). F_m and F_n (1 ≤ m < n ≤ N) are related in the following way: If R_m and R_n have distribution functions F_m and F_n, then

(13)    R_m =ᵈ R_n R_nm ,

where R_nm, distributed as in Lemma 3, is independent of R_n. Consequently (cf., Lemma 4 below), R_m has an atom of size P(R_n = 0) at zero if this probability is positive, and it is absolutely continuous on (0,∞) with the density

(14)    f_m(s) = [2Γ(n/2) / (Γ(m/2)Γ((n-m)/2))] s^{m-1} ∫_(s,∞) (r² - s²)^{(n-m)/2 - 1} r^{-(n-2)} dF_n(r) ,    0 < s < ∞ .

Thus F_m takes the form

F_m(ρ) = F_n(0) + ∫_(0,ρ] f_m(s) ds ,    ρ ≥ 0 .

To show (13), let X = (X₁, X₂) =ᵈ R_n U^(n) ~ EC_n(0, I_n, φ), where X₁ is of dimension m. Then X₁ =ᵈ R_m U^(m) ~ EC_m(0, I_m, φ). But, on the other hand, we have from Lemma 3,

X₁ =ᵈ R_n U₁^(n) =ᵈ R_n R_nm U^(m) ,

where R_n, R_nm and U^(m) are independent. Thus X₁ has two canonical representations, R_m U^(m) and R_n R_nm U^(m), and (13) follows from the remark following Corollary 2. □
On the basis of this remark and the foregoing discussion, we are
able to easily justify the following theorem and its corollaries.
Theorem 3. Suppose the nondegenerate elliptically contoured random vector X has two representations X =ᵈ μ + RU^(ℓ)A and X =ᵈ μ₀ + R₀U^(ℓ₀)A₀, where ℓ ≥ ℓ₀ ≥ 1. Then

(i) μ₀ = μ ;
(ii) A₀'A₀ = cA'A for some c > 0 ;
(iii) R₀ =ᵈ c^{-1/2} R R_{ℓℓ₀}, where R_{ℓℓ₀} is independent of R and R²_{ℓℓ₀} ~ Beta(ℓ₀/2, (ℓ-ℓ₀)/2). (R_{ℓℓ₀} ≡ 1 if ℓ₀ = ℓ.)

Corollary 3a. If the elliptically contoured random vector X has the representation (1) and X is not degenerate, then X has a canonical representation μ + R R_{ℓk} U^(k) A₀, where k = r(A'A), A₀'A₀ = A'A is a rank factorization of A'A, the stochastic entities R, R_{ℓk} and U^(k) are independent, and R²_{ℓk} ~ Beta(k/2, (ℓ-k)/2). (R_{ℓk} ≡ 1 if ℓ = k.)
Corollary 3b. If X ~ EC_n(μ, Σ, φ) has the representation (1) with A'A = Σ and k = r(Σ) ≥ 1, then the quadratic form Q(X) = (X - μ)Σ⁻(X - μ)' =ᵈ R²R²_{ℓk}, where Σ⁻ is any generalized inverse of Σ, R² and R²_{ℓk} are independent, and R²_{ℓk} ~ Beta(k/2, (ℓ-k)/2). (R_{ℓk} ≡ 1 if ℓ = k.)
Suppose X ~ EC_n(μ, Σ, φ) is nondegenerate and, for convenience, R and A in (3) are scaled (see the remark following Theorem 1) so that A'A = Σ and R has the distribution function F appearing in (2) when ℓ = k = r(Σ). Since, from (3),

(X - μ)'(X - μ) =ᵈ R² A' U^(k)' U^(k) A ,

it follows that the covariance matrix of X, denoted Σ₀, exists if and only if ER² < ∞. In this case EX = μ and

(15)    Σ₀ = k⁻¹ E(R²) Σ = cΣ    (say) .

Because the distribution function of R (i.e., F) cannot be readily expressed in terms of φ, it is desirable to relate the existence of Σ₀ and the constant c to φ directly.

Theorem 4.
Let X ~ EC_n(μ, Σ, φ) with X nondegenerate. The covariance matrix Σ₀ of X exists if and only if the right-hand derivative of φ at u = 0, denoted φ'(0), exists and is finite. When it exists and is finite,

Σ₀ = -2φ'(0)Σ .

Proof. We continue with the setting and facts established in the paragraph preceding the theorem. Clearly Σ₀ exists if and only if ER² < ∞, i.e., if and only if E(RU₁^(k))² < ∞, where U₁^(k) denotes the first component of U^(k). Since RU^(k) has the characteristic function φ(‖t‖²), t ∈ ℝ^k, RU₁^(k) has the characteristic function

φ_{RU₁^(k)}(u) = φ(u²) ,    u ∈ ℝ .

If Σ₀ exists, then φ_{RU₁^(k)} is twice differentiable and

-½ φ''_{RU₁^(k)}(0) = -lim_{h→0} [φ(h²) - 2φ(0) + φ((-h)²)] / (2h²) = -lim_{h→0} [φ(h²) - φ(0)] / h² = -φ'(0) .

Thus the existence of Σ₀ implies the existence and finiteness of φ'(0), and, moreover, -2φ'(0) = k⁻¹ER², so that (cf., (15)) Σ₀ = -2φ'(0)Σ. Conversely, if φ'(0) exists and is finite, then for h ≠ 0,

-[φ(h²) - φ(0)] / h² = ∫_{-∞}^{∞} (1 - cos hx) / h² dH(x) ,

where H is the distribution function of RU₁^(k). Then by Fatou's Lemma,

E(RU₁^(k))² = 2 ∫_{-∞}^{∞} lim_{h→0} (1 - cos hx)/h² dH(x) ≤ 2 lim_{h→0} ∫_{-∞}^{∞} (1 - cos hx)/h² dH(x) = -2φ'(0) < ∞ . □
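Within the Φ_∞ subclass of scale mixtures of normals, Theorem 4 is easy to illustrate numerically: there φ(u) = E exp(-uW²/2) for a nonnegative mixing variable W, so -2φ'(0) = EW² and the covariance matrix should equal EW²·Σ. The sketch below is illustrative only (the lognormal mixing law is an arbitrary assumption, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])
A = np.linalg.cholesky(Sigma).T                        # A'A = Sigma
size = 200000

W = rng.lognormal(mean=0.0, sigma=0.3, size=size)      # mixing variable: phi(u) = E exp(-u W^2 / 2)
Z = rng.standard_normal((size, 2))
X = (W[:, None] * Z) @ A                               # X ~ EC_2(0, Sigma, phi) with phi in Phi_infinity

print(np.cov(X, rowvar=False))                         # empirical covariance matrix
print(np.mean(W**2) * Sigma)                           # -2 phi'(0) Sigma = E(W^2) Sigma (Theorem 4)
```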
When the covariance matrix Σ₀ of X ~ EC_n(μ, Σ, φ) exists, choosing Σ to be Σ₀ has special appeal. This occurs (automatically) if and only if φ is chosen so as to make -2φ'(0) = 1. This happens, for instance, in the case of normality when, among the possibilities {exp(-cu), c > 0}, one chooses φ(u) = exp(-u/2), u ≥ 0.

3. Conditional distributions.
In this section, it is shown that if two random vectors have a joint elliptically contoured distribution, then the conditional distribution of one given the other is also elliptically contoured. The location and scale parameters of the conditional distribution do not depend upon the third parameter φ of the joint distribution and, consequently, the formulas which apply in the normal case apply in this more general setting as well. The situation with the third parameter of the conditional distribution is much more complicated, unfortunately, and needs further discussion. The case in which the scale parameter Σ of the joint distribution is an identity matrix is considered first in Theorem 5, and the general case is handled in Corollary 5. The statement of each includes parametric and stochastic descriptions of all distributions, the latter being made in terms of canonical representations.

The following lemma, whose proof is straightforward and therefore omitted, is needed in the proof of Theorem 5.
Lemma 4. Suppose R is a nonnegative random variable with (right continuous) distribution function F, and S is an independent random variable which is nonnegative and absolutely continuous with density g. Then the product T = RS has an atom of size F(0) at zero if F(0) > 0, and it is absolutely continuous on (0,∞) with density h given by

h(t) = ∫_(0,∞) r⁻¹ g(t/r) dF(r) .

Moreover, a regular version of the conditional distribution of R given T = t is expressible as

(16)    P(R ≤ ρ | T = t) = 0    for ρ < 0 ;
        P(R ≤ ρ | T = t) = 1    for t = 0, or t > 0 with h(t) = 0, and ρ ≥ 0 ;
        P(R ≤ ρ | T = t) = (h(t))⁻¹ ∫_(0,ρ] r⁻¹ g(t/r) dF(r)    for t > 0 with h(t) ≠ 0, and ρ ≥ 0 .
Theorem 5. Let X =ᵈ RU^(n) ~ EC_n(0, I_n, φ) and X = (X₁, X₂), where X₁ is m-dimensional, 1 ≤ m < n. Then a regular conditional distribution of X₁ given X₂ = x₂ is given by

(17)    (X₁ | X₂ = x₂) ~ EC_m(0, I_m, φ_{‖x₂‖²})

with canonical representation

(18)    (X₁ | X₂ = x₂) =ᵈ R_{‖x₂‖²} U^(m) ,

where for each a ≥ 0, R_{a²} and U^(m) are independent, the distribution of R_{a²} is given by (21) (below),

(19)    R_{‖x₂‖²} =ᵈ ( (R² - ‖x₂‖²)^{1/2} | R(1 - R²_nm)^{1/2} U^(n-m) = x₂ ) ,

and the function φ_{‖x₂‖²} is given by (2) with ℓ = m and F replaced by the distribution function of R_{‖x₂‖²}.
Proof. From Lemma 3,

X =ᵈ RU^(n) =ᵈ (R R_nm U^(m), R(1 - R²_nm)^{1/2} U^(n-m)) ,

where R, R_nm, U^(m) and U^(n-m) are independent. Observe that RR_nm = (R² - ‖x₂‖²)^{1/2} when R(1 - R²_nm)^{1/2} U^(n-m) = x₂. Thus

(X₁ | X₂ = x₂) =ᵈ ( (R² - ‖x₂‖²)^{1/2} U^(m) | R(1 - R²_nm)^{1/2} U^(n-m) = x₂ ) =ᵈ R_{‖x₂‖²} U^(m) ,

which is the canonical representation described in (18), where R_{‖x₂‖²} is distributed as expressed in (19). That R_{‖x₂‖²} depends upon x₂ only through the value of ‖x₂‖², as the notation indicates, is obvious when x₂ = 0. When x₂ ≠ 0, this follows from

(20)    ‖X₂‖ = R(1 - R²_nm)^{1/2} ,

since U^(n-m) is independent of R and R_nm. Clearly (17) follows from (18) with φ_{‖x₂‖²} defined in the manner described. Finally, from (19), (20), and Lemma 4 with the density of (1 - R²_nm)^{1/2} given by (cf., Lemma 3)

g(t) = [2Γ(n/2) / (Γ(m/2)Γ((n-m)/2))] t^{n-m-1} (1 - t²)^{m/2 - 1} ,    0 < t < 1 ,

(zero elsewhere) and the distribution function of R given by F, one obtains

(21)    R_{a²} = 0 a.s. when a = 0 or F(a) = 1, and

        P(R_{a²} ≤ ρ) = ∫_(a,(a²+ρ²)^{1/2}] (r² - a²)^{m/2 - 1} r^{-(n-2)} dF(r) / ∫_(a,∞) (r² - a²)^{m/2 - 1} r^{-(n-2)} dF(r) ,    ρ ≥ 0 , a > 0 and F(a) < 1 . □
Remarks. 1. There is no simple way (that we know of) to explicitly express φ_{a²} in terms of φ (together with m, n and a) when a > 0. The relationship arises implicitly as follows: (i) φ determines the Laplace transform of R² (see the first remark following Theorem 2), which implicitly determines the distribution function F; (ii) F determines F_{a²}, the distribution function of R_{a²}, by means of (21); (iii) F_{a²} determines φ_{a²} by means of (2) (letting ℓ = m and replacing F by F_{a²}). One complication is that the values of F on [0,a) influence all of the values of φ (apparent from (2) with ℓ = n), while these same values of F have no effect on F_{a²} (see (21)) and, therefore, no effect on φ_{a²}. Thus the mapping φ → φ_{a²} is many-to-one in a very complicated manner. In contrast, the description of (X₁ | X₂ = x₂) through (19) and the canonical representation given in (18) is quite explicit and straightforward, as far as it goes.

2. While (21) shows that the distribution function F_{a²} (of R_{a²}) is determined by F (together with m, n and a), in the converse direction we have the following, which is germane to Theorem 7 below: Denoting the denominator in (21) by C_{a²}, we obtain from (21):

(22)    1 - F_{a²}(r) = C_{a²}⁻¹ ∫_((r²+a²)^{1/2},∞) (s² - a²)^{m/2 - 1} s^{-(n-2)} dF(s) ,    r ≥ 0 ,

which is valid for all a > 0. Thus, when a > 0, F_{a²} determines F on the interval [a,∞) up to the unknown multiplicative factor C_{a²} > 0, and, of course, it contains no information about the values of F on [0,a). If F(r) is known for some r ≥ a and F(r) < 1, then F_{a²} determines F uniquely on [a,∞) by means of (22).
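Formula (21) can be sanity-checked against the normal case (a numerical sketch, not part of the paper): when F is the distribution function of a chi variable with n degrees of freedom, the conditional radius R_{a²} should again be a chi variable, now with m degrees of freedom, whatever the value of a.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import chi

n, m, a = 6, 2, 1.3          # dimensions and the conditioning value a = ||x_2||

def weight(r):
    # integrand of (21) with dF(r) taken to be the chi_n density (the normal case)
    return (r**2 - a**2) ** (m / 2 - 1) * r ** (-(n - 2)) * chi.pdf(r, n)

denom, _ = quad(weight, a, np.inf)

def F_a2(rho):
    # F_{a^2}(rho) as in (21)
    num, _ = quad(weight, a, np.sqrt(a**2 + rho**2))
    return num / denom

for rho in [0.5, 1.0, 2.0, 3.0]:
    print(rho, F_a2(rho), chi.cdf(rho, m))   # the two columns should agree in the normal case
```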
Corollary 5. Suppose X ~ EC_n(μ, Σ, φ) has the canonical representation (3), so that A'A = Σ and r(A) = r(Σ) = k ≥ 1. Further, let

X = (X₁, X₂) ,    μ = (μ₁, μ₂) ,    Σ = [ Σ₁₁  Σ₁₂ ; Σ₂₁  Σ₂₂ ] ,

where the dimensions of X₁, μ₁ and Σ₁₁ are, respectively, m, m and m × m, and assume k₂ = r(Σ₂₂) ≥ 1 and k₁ = k - k₂ ≥ 1. Finally let S denote the row space of Σ₂₂. Then a regular conditional distribution of X₁ given X₂ = x₂ is given by

(23a)    (X₁ | X₂ = x₂) ~ EC_m(μ₁ + (x₂ - μ₂)Σ₂₂⁻Σ₂₁, Σ*, φ_{q(x₂)}) ,    x₂ ∈ μ₂ + S ,

(23b)    (X₁ | X₂ = x₂) ~ EC_m(μ₁, Σ*, φ_{q(x₂)}) ,    x₂ ∉ μ₂ + S ,

with a canonical representation

(X₁ | X₂ = x₂) =ᵈ μ₁ + (x₂ - μ₂)Σ₂₂⁻Σ₂₁ + R_{q(x₂)} U^(k₁) A* ,    x₂ ∈ μ₂ + S ,

where, with Σ₂₂⁻ denoting any generalized inverse of Σ₂₂,

q(x₂) = (x₂ - μ₂)Σ₂₂⁻(x₂ - μ₂)' ,    Σ* = Σ₁₁ - Σ₁₂Σ₂₂⁻Σ₂₁ ,

and A* is a k₁ × m matrix of full rank k₁ satisfying A*'A* = Σ*. Moreover, for each a ≥ 0, R_{a²} is independent of U^(k₁) and its distribution is given by (21) with n = k and m = k₁; and the function φ_{a²} is given by (2) with ℓ = k₁ and F replaced by the distribution function of R_{a²}.
Remarks. 1. The description of (X₁ | X₂ = x₂) given in (23a) is of primary interest since X₂ ∈ μ₂ + S a.s. (apparent from the proof below).

2. The excluded cases k₂ = 0 and k₁ = 0 are trivial and largely uninteresting: If k₂ = 0, then X₂ = μ₂ a.s., and the conditional distribution of X₁ given X₂ is that of X₁. If k₁ = 0, then the conditional distribution of X₁ given X₂ is degenerate a.s.
Proof of corollary. Write RU^(k) = (Z₁, Z₂), where Z₁ is k₁-dimensional, and let Σ₂₂ = A₂'A₂ be a rank factorization of Σ₂₂, so that A₂ is k₂ × (n-m) and r(A₂) = k₂. It is easily checked through their characteristic functions that

(X₁, X₂) =ᵈ (μ₁ + Z₁A* + Z₂A₂⁻'Σ₂₁ , μ₂ + Z₂A₂) .

Thus

(X₁ | X₂ = x₂) =ᵈ (μ₁ + Z₁A* + Z₂A₂⁻'Σ₂₁ | Z₂A₂ = x₂ - μ₂) .

Since A₂'A₂ = Σ₂₂, the row space of A₂ is also S, and it follows that the equation z₂A₂ = x₂ - μ₂ admits a solution for z₂ if and only if x₂ ∈ μ₂ + S. Moreover, when a solution exists, it follows from Theorems 2.2.1 and 2.3.1 of Rao and Mitra (1971), pages 23-24, that it is unique and assumes the form z₂ = (x₂ - μ₂)A₂⁻, where A₂⁻ is any generalized inverse of A₂. Thus

(X₁ | X₂ = x₂) =ᵈ μ₁ + (x₂ - μ₂)A₂⁻A₂⁻'Σ₂₁ + (Z₁ | Z₂ = z₂)A* = μ₁ + (x₂ - μ₂)Σ₂₂⁻Σ₂₁ + (Z₁ | Z₂ = z₂)A* .

But by Theorem 5,

(Z₁ | Z₂ = z₂) =ᵈ R_{‖z₂‖²} U^(k₁) ,

where for each a ≥ 0, R_{a²} is independent of U^(k₁) and its distribution is given by (21) with n = k, m = k₁. Also, for x₂ ∈ μ₂ + S, x₂ - μ₂ = z₂A₂ for some z₂, and hence

q(x₂) = (x₂ - μ₂)Σ₂₂⁻(x₂ - μ₂)' = z₂ A₂Σ₂₂⁻A₂' z₂' = z₂z₂' = ‖z₂‖² ,

since A₂⁻A₂⁻' is a generalized inverse of Σ₂₂, and the value of the product A₂Σ₂₂⁻A₂' is independent of which generalized inverse Σ₂₂⁻ one chooses. (Cf., Rao (1973), page 26, (vii).) Thus the claims for x₂ ∈ μ₂ + S are true. Since P(X₂ ∈ μ₂ + S) = P(Z₂A₂ ∈ S) = 1, the conditional distribution of X₁ given X₂ = x₂ may be defined arbitrarily on the complement of S. In particular, definition (23b) leads to a regular version of the conditional distribution. □
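The location and scale parameters in (23a) are the familiar normal-theory expressions. The sketch below (illustrative, not from the paper) evaluates μ₁ + (x₂ - μ₂)Σ₂₂⁻Σ₂₁, Σ* = Σ₁₁ - Σ₁₂Σ₂₂⁻Σ₂₁ and q(x₂) with a Moore-Penrose generalized inverse, which is one admissible choice of Σ₂₂⁻.

```python
import numpy as np

def conditional_parameters(mu, Sigma, m, x2):
    """Location, scale Sigma* and q(x2) for (X1 | X2 = x2) as in (23a), with X1 of dimension m."""
    mu1, mu2 = mu[:m], mu[m:]
    S11, S12 = Sigma[:m, :m], Sigma[:m, m:]
    S21, S22 = Sigma[m:, :m], Sigma[m:, m:]
    S22_pinv = np.linalg.pinv(S22)                 # Moore-Penrose generalized inverse of Sigma_22
    loc = mu1 + (x2 - mu2) @ S22_pinv @ S21
    Sstar = S11 - S12 @ S22_pinv @ S21
    q = (x2 - mu2) @ S22_pinv @ (x2 - mu2)
    return loc, Sstar, q

mu = np.array([0.0, 1.0, -1.0, 2.0])
Sigma = np.array([[2.0, 0.3, 0.2, 0.0],
                  [0.3, 1.0, 0.1, 0.4],
                  [0.2, 0.1, 1.5, 0.2],
                  [0.0, 0.4, 0.2, 1.0]])
print(conditional_parameters(mu, Sigma, 2, x2=np.array([0.5, 1.5])))
```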
4. Densities.

Here, we discuss the densities of elliptically contoured distributions with an emphasis on their relationship to the random variable R appearing in their canonical representations. Much of this material appears elsewhere, such as in Kelker (1970), with a somewhat different emphasis. It is included here because we shall have occasion to refer to it, and, in part, for the sake of completeness.

If X ~ EC_n(0, I_n, φ) is absolutely continuous, its density is invariant under all orthogonal transformations and thus is expressible in terms of a function g_n : [0,∞) → [0,∞) as

(24)    g_n(x₁² + ··· + x_n²) .

Moreover X has a canonical representation of the form R_n U^(n), where R_n is also absolutely continuous, and its density f_n is expressible as

(25)    f_n(r) = s_n r^{n-1} g_n(r²) ,    r ≥ 0 ,

where s_n denotes the surface area of the unit sphere in ℝⁿ (s₁ = 2). Thus g_n must satisfy the integrability condition ∫₀^∞ r^{n-1} g_n(r²) dr < ∞. (The special case g_n(u) = (2π)^{-n/2} exp(-u/2) corresponds to normality and, as we have noted for this case, f_n is a chi-density with n degrees of freedom.) Conversely, if R_n is absolutely continuous, then so is X and their densities are related through (25).
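Relation (25) can be illustrated in the normal case (a numerical sketch, not from the paper): with g_n(u) = (2π)^{-n/2}exp(-u/2), the right side of (25) should reproduce the chi density with n degrees of freedom.

```python
import numpy as np
from scipy.stats import chi
from scipy.special import gamma

n = 4
s_n = 2 * np.pi ** (n / 2) / gamma(n / 2)            # surface area of the unit sphere in R^n
g_n = lambda u: (2 * np.pi) ** (-n / 2) * np.exp(-u / 2)   # normal functional form

r = np.linspace(0.1, 4.0, 5)
print(s_n * r ** (n - 1) * g_n(r**2))                 # right side of (25)
print(chi.pdf(r, n))                                  # chi density with n degrees of freedom
```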
If an absolutely continuous random vector X = (X₁, ..., X_n) has a density of the form shown in (24), then f_n, defined by (25), is a density of a nonnegative random variable R, and X ~ EC_n(0, I_n, φ), where φ is defined by (2) with ℓ = n and dF(x) = f_n(x)dx.

Let Y ~ EC_m(0, I_m, φ) denote an arbitrary marginal of X of dimension m (1 ≤ m < n). Y has the canonical representation R_n R_nm U^(m), where R_n, R_nm and U^(m) are independent, and R²_nm ~ Beta(m/2, (n-m)/2) (see (13) and the adjacent discussion). Consequently, Y is absolutely continuous if and only if P(R_n = 0) = 0. When this occurs, the density f_m of R_m is given in (14), where F_n denotes the distribution function of R_n, and thus the density of Y takes the form

(26)    (s_{n-m}/s_n) ∫_(‖y‖,∞) (r² - ‖y‖²)^{(n-m)/2 - 1} r^{-(n-2)} dF_n(r) ,    y ∈ ℝ^m .
When, moreover, R_n is absolutely continuous, this simplifies to

(27)    g_m(‖y‖²) = (s_{n-m}/2) ∫_{‖y‖²}^∞ (u - ‖y‖²)^{(n-m)/2 - 1} g_n(u) du ,    y ∈ ℝ^m .

More generally, if X ~ EC_n(μ, Σ, φ) with Σ of full rank n, then (X - μ)Σ^{-1/2} ~ EC_n(0, I_n, φ). Thus the density of X exists and assumes the form

(28)    |Σ|^{-1/2} g_n((x - μ)Σ⁻¹(x - μ)') ,    x ∈ ℝⁿ ,

if and only if R, appearing in the canonical representation μ + RU^(n)Σ^{1/2}, is absolutely continuous with the density f_n defined in (25). Since R² =ᵈ Q(X) = (X - μ)Σ⁻¹(X - μ)', it is seen that X is absolutely continuous if and only if its quadratic form is, and that when they are absolutely continuous, the quadratic form Q(X) has a density of the form

(1/2) s_n q^{n/2 - 1} g_n(q) ,    q ≥ 0 .
The occurrence of absolute continuity for Y, when Y is a marginal of X ~ EC_n(μ, Σ, φ), is as one would expect, and it follows that, in terms of the notation of Corollary 5, the conditional density of X₁ given X₂ = x₂ is given by

|Σ*|^{-1/2} g_n(q(x₂) + (x₁ - μ₁ - (x₂ - μ₂)Σ₂₂⁻¹Σ₂₁) Σ*⁻¹ (x₁ - μ₁ - (x₂ - μ₂)Σ₂₂⁻¹Σ₂₁)') / g_{n-m}(q(x₂))

when Σ is of rank n and X is absolutely continuous.
Finally, suppose X ~ EC_n(μ, Σ, φ) with r(Σ) = n and φ ∈ Φ_∞. By the definition of Φ_∞, there exists a distribution function F_∞ such that

φ(u) = ∫_[0,∞) exp(-ur²/2) dF_∞(r) ,    u ≥ 0 .

Since φ(tΣt'), t ∈ ℝⁿ, is the characteristic function of X - μ, X is absolutely continuous and has the density

(29)    (2π)^{-n/2} |Σ|^{-1/2} ∫_(0,∞) r^{-n} exp[-(x - μ)Σ⁻¹(x - μ)'/(2r²)] dF_∞(r) ,    x ∈ ℝⁿ ,

if and only if F_∞(0) = 0. Otherwise X is absolutely continuous away from μ with the density given in (29), and it has an atom of size F_∞(0) at μ.
5. Characterizations of normality.

In this section we focus attention on several properties of the normal distributions which do not extend to other elliptically contoured distributions. We have already discussed one of these in the second remark following Theorem 2, appearing in Section 2.

(a) When X = (X₁, X₂) is normally distributed, then, of course, the conditional distribution of X₁ given X₂ is normally distributed and the function φ_{q(x₂)} in Corollary 5 assumes the form φ_{q(x₂)}(u) = exp(-cu/2), where c ≥ 0 is independent of x₂. The failure of φ_{q(X₂)} to depend upon the value of q(X₂) characterizes normality:
Theorem 6. Assume the rank values k₁ and k₂ appearing in Corollary 5 are strictly positive. Then φ_{q(X₂)}(u) is degenerate for each u ≥ 0 if and only if X is normally distributed.

Proof. In view of the foregoing discussion, it is only necessary to show the "only if" part. Hence, putting, for each u ≥ 0, φ_{q(X₂)}(u) = Ψ(u) a.s., it follows that the characteristic function of X - μ satisfies

φ(tΣt') = Ψ(t₁Σ*t₁') φ(s₂Σ₂₂s₂') ,    t = (t₁, t₂) ∈ ℝⁿ ,    s₂ = t₂ + t₁Σ₁₂Σ₂₂⁻ .

And since

tΣt' = t₁Σ*t₁' + s₂Σ₂₂s₂' ,

where t₁Σ*t₁' and s₂Σ₂₂s₂' assume, as t₁ and t₂ vary, all pairs of values u, v ≥ 0, it follows that Ψ(u)φ(v) = φ(u + v), u, v ≥ 0. (Here, one makes use of the assumption k₁, k₂ ≥ 1.) Setting v = 0 yields φ = Ψ, and thus φ(u + v) = φ(u)φ(v), u, v ≥ 0. Since φ is continuous with φ(0) = 1 and |φ(u)| ≤ 1, it follows that φ(u) = exp(-cu/2) for some c ≥ 0, and thus X ~ N_n(μ, cΣ). □
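Theorem 6 can be seen at work in a simulation (an illustrative sketch, not from the paper): for a spherical multivariate t distribution the conditional spread of X₁ grows with ‖x₂‖, whereas for the normal it does not; only in the normal case is φ_{q(X₂)} free of q(X₂).

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, size = 4, 1, 400000

def conditional_spread(X, edges):
    """Sample variance of the first coordinate within bins of ||X_2||."""
    norms = np.linalg.norm(X[:, m:], axis=1)
    return [X[(norms >= lo) & (norms < hi), 0].var() for lo, hi in zip(edges[:-1], edges[1:])]

edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
X_normal = rng.standard_normal((size, n))
X_t = rng.standard_normal((size, n)) / np.sqrt(rng.chisquare(5, size) / 5)[:, None]   # spherical t_5

print(conditional_spread(X_normal, edges))   # roughly constant across bins (normal case)
print(conditional_spread(X_t, edges))        # increases with ||x_2|| (non-normal EC case)
```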
(b) It is apparent from (21) that the normality of (X₁ | X₂ = x₂) does not entail the normality of X, for the distribution function F_{‖x₂‖²} can be that of a chi-variable without F being that of a chi-variable, or a multiple of a chi-variable. (See the remarks following Theorem 2.) Nevertheless, the following is true:

Theorem 7. Assume X = (X₁, X₂) has a nondegenerate elliptically contoured distribution and the rank values k₁ and k₂, appearing in Corollary 5, are both positive. Then X is normally distributed if and only if, with probability one, the conditional distribution of X₁ given X₂ is nondegenerate normal.

Proof. The "only if" part states a well-known fact, and so we will only show the "if" part. We shall refer freely to the notation and results appearing in the statement and proof of Corollary 5. For X₂ ∈ μ₂ + S, q(X₂) = ‖(X₂ - μ₂)A₂⁻‖², and, consequently,

(30)    q(X₂) =ᵈ ‖Z₂‖² ,

where Z₂ denotes the vector consisting of the last k₂ components of RU^(k). Thus P(q(X₂) = 0) ≤ P(R = 0) ≤ P((X₁ | X₂) is degenerate) = 0, and hence q(X₂) > 0 a.s. Also, since (X₁ | X₂) is nondegenerate normal a.s., the function φ_{q(X₂)}(·) must assume the form

(31)    φ_{q(X₂)}(u) = exp(-c(q(X₂))u/2) ,    u ≥ 0 ,    a.s.

for some function c : (0,∞) → (0,∞). If it can be shown that c(q(X₂)) is a degenerate random variable, then the normality of X follows from Theorem 6. Let A be the set of all a > 0 such that

φ_{a²}(u) = exp(-c(a²)u/2) ,    u ≥ 0 .

Note P(q(X₂)^{1/2} ∈ A) = 1. Now (31) implies that the distribution function F_{a²} of R_{a²}, a ∈ A, is that of c(a²)^{1/2} times a chi-variable with k₁ degrees of freedom. Combining this with (22), when n = k and m = k₁, we have for a ∈ A

1 - F(r) = Const. ∫_r^∞ s^{k-1} exp(-s²/(2c(a²))) ds ,    r ≥ a ,

i.e.,

(32)    F'(r) = Const. r^{k-1} e^{-r²/(2c(a²))} ,    r ≥ a .

It is obvious from (32) that c(a²) is a constant for all a ∈ A, and the degeneracy of c(q(X₂)) follows. □
Remarks. 1. An alternative proof of the "if" part of Theorem 7, without using Theorem 6, can be given as follows. (32) implies

(33)    F'(r) = const. r^{k-1} e^{-r²/(2c)} ,    r > a₀ ,

where c is some positive constant and a₀ is the infimum of A. Since, on account of (30), P(0 < q(X₂)^{1/2} < ε) > 0 for every ε > 0, we have A ∩ (0,ε) ≠ ∅ for every ε > 0 and, therefore, a₀ = 0. Now it follows from (33) and the fact F(0) = P(R = 0) = 0, as noted in the previous proof, that F is the distribution function of c^{1/2} times a chi-variable with k degrees of freedom. Thus X must have a normal distribution. (See the second remark following Theorem 2.)

2. The reasoning in the previous remark can be used under weaker assumptions: X is normal if X₁ given X₂ = x₂ is nondegenerate normal for all x₂ ∈ μ₂ + S for which ‖x₂ - μ₂‖ < ε for some ε > 0.

3. The restriction in the statement of Theorem 7 that the conditional distribution of X₁ given X₂ be nondegenerate is necessary: Suppose R in (3) has a mass F(0) at zero with 0 < F(0) < 1 and a density of the form (33) on (0,∞). Then the conditional distribution of X₁ given X₂ is normal with probability one, but X is not normally distributed. (Of course, if F(0) = 1, then X is normally distributed, but degenerate.)

4. Theorem 7 was obtained by Kelker (1970) under the assumptions that Σ is nonsingular and X has a density.
(c) We shall refer to the function g_n(·), appearing in (28), as the functional form of the density of EC_n(μ, Σ, φ) whenever it exists. Naturally its specification is arbitrary on a Lebesgue null-subset of [0,∞). In addition, it is apparent from Theorem 1 that for any c > 0, c^{n/2}g(c·) is also a functional form, corresponding to a different parameterization of the same distribution.

Theorem 8. EC_n(μ, Σ, φ) with r(Σ) ≥ 2 is a normal distribution if and only if two marginal densities of different dimensions exist and have functional forms which agree up to a positive multiple.

Proof. The "only if" part is obvious, so we shall only show the "if" part. Suppose X ~ EC_n(μ, Σ, φ) has marginals of dimensions p and p + q with functional forms g_p and g_{p+q}, and

(34)    g_{p+q}(u) = Const. g_p(u) ,    u ≥ 0 .

(Here and below, "Const." stands, generically, for a positive constant, not always the same.) Without loss of generality we assume that r(Σ) = n and, in fact, that μ = 0 and Σ = I_n. (Otherwise, we could consider Y = (X - μ)A⁻ ~ EC_k(0, I_k, φ), where k = r(Σ) and Σ = A'A is a rank factorization of Σ.) Then

g_p(x₁² + ··· + x_p²) = ∫_{ℝ^q} g_{p+q}(x₁² + ··· + x²_{p+q}) dx_{p+1} ··· dx_{p+q} = Const. ∫_{ℝ^q} g_p(x₁² + ··· + x²_{p+q}) dx_{p+1} ··· dx_{p+q} ,

which implies

g_p(u) = Const. ∫_{ℝ^q} g_p(u + z₁² + ··· + z_q²) dz₁ ··· dz_q ,    u ≥ 0 .

It follows that

g_{p+q}(x₁² + ··· + x²_{p+q}) = Const. ∫_{ℝ^q} g_p(x₁² + ··· + x²_{p+2q}) dx_{p+q+1} ··· dx_{p+2q} .

Thus a multiple of g_p(x₁² + ··· + x²_{p+2q}) is a density of an elliptically contoured random vector Z ~ EC_{p+2q}(0, I_{p+2q}, ψ) (see the third paragraph of Section 4), and Z has the (p+q)-dimensional marginal density g_{p+q}(x₁² + ··· + x²_{p+q}). Consequently, φ = ψ ∈ Φ_{p+2q}. Similarly it follows that φ ∈ Φ_{p+jq} for all j = 1, 2, .... Hence φ ∈ Φ_∞, and there exists a distribution function F_∞ on [0,∞) such that (cf., (29))

g_j(u) = Const. ∫_(0,∞) r^{-j} exp[-u/(2r²)] dF_∞(r) ,    u ≥ 0 ,    j = p, p+q .

By the uniqueness of Laplace transforms and (34), we obtain

r^{-p} dF_∞(r) = Const. r^{-(p+q)} dF_∞(r) ,

from which it follows that F_∞ is degenerate at some point σ > 0. Thus g_p(u) = Const. exp[-u/(2σ²)] and X is normally distributed. (Observe that in (34) the constant must be unity.) □
(d) Because of the one-to-one correspondence between functions φ and distribution functions F described in the remark following Theorem 2, it is easily seen from Theorem 6 that, under the assumptions of Theorem 6, the conditional distribution of R_{q(X₂)} (defined in Corollary 5) given X₂ is independent of X₂ if and only if X is normally distributed. Thus if X is normally distributed, then every conditional moment of positive order of R_{q(X₂)} given X₂ is nonstochastic (degenerate). This invites the following characterization of normality:

Corollary 8a. Suppose X = (X₁, X₂) ~ EC_n(μ, Σ, φ) and the ranks k₁ and k₂ appearing in Corollary 5 are strictly positive. Then, for any fixed positive integer p, E((R_{q(X₂)})^p | X₂) is finite and degenerate (i.e., independent of X₂) if and only if X is normally distributed.

Proof. Just the "only if" part requires proof. From the proof of Corollary 5, it is apparent the assumption "E((R_{q(X₂)})^p | X₂) is finite and degenerate" is equivalent to "E(‖Z₁‖^p | Z₂) is finite and degenerate", where Z = (Z₁, Z₂) ~ EC_k(0, I_k, φ) with Z₁ and Z₂ of dimensions k₁ and k₂ respectively. Moreover, if Z is normally distributed, so is X. Thus, without loss of generality, we may assume X = (X₁, X₂) ~ EC_n(0, I_n, φ), where X₁ is of dimension m (1 ≤ m < n), as in Theorem 5, and show that the normality of X is implied by the finiteness and degeneracy of E(‖X₁‖^p | X₂). The latter permits two cases: P(X = 0) = 1 and P(X = 0) = 0. The first case is trivial: X is degenerate normal. In the second case, P(‖X₂‖ = 0) = 0, and according to (18) and (21),

E(‖X₁‖^p | X₂) = ∫_(‖X₂‖,∞) (r² - ‖X₂‖²)^{(m+p)/2 - 1} r^{-(n-2)} dF(r) / ∫_(‖X₂‖,∞) (r² - ‖X₂‖²)^{m/2 - 1} r^{-(n-2)} dF(r)    when F(‖X₂‖) < 1 ,
              = 0    when F(‖X₂‖) = 1 ,

where F is the distribution function of R in the canonical representation X =ᵈ RU^(n). Thus

∫_(‖X₂‖,∞) (r² - ‖X₂‖²)^{(m+p)/2 - 1} r^{-(n-2)} dF(r) = Const. ∫_(‖X₂‖,∞) (r² - ‖X₂‖²)^{m/2 - 1} r^{-(n-2)} dF(r)    a.s. ,

which implies

∫_(u,∞) (r² - u²)^{(m+p)/2 - 1} r^{-(n-2)} dF(r) = Const. ∫_(u,∞) (r² - u²)^{m/2 - 1} r^{-(n-2)} dF(r)

for almost every u > 0 (Leb.), since ‖X₂‖ is absolutely continuous on (0,∞) with (according to (14)) a positive density at every u > 0 for which F(u) < 1. It follows that

(35)    ∫_(u,∞) (r² - u²)^{(m+p)/2 - 1} r^{-(n+p-2)} dF_{n+p}(r) = Const. ∫_(u,∞) (r² - u²)^{m/2 - 1} r^{-(n+p-2)} dF_{n+p}(r)

for almost every u > 0, where F_{n+p}(ρ) = ∫_(0,ρ] r^p dF(r) / ∫_(0,∞) r^p dF(r) defines the distribution function F_{n+p}. (That ∫_(0,∞) r^p dF(r) > 0 is because P(X = 0) = 0; that it is finite is because E‖X₁‖^p = E[E(‖X₁‖^p | X₂)] = E(‖X₁‖^p | X₂) a.s., and hence is finite.) It is apparent from (26) that the left and right integrals of (35) are functional forms for marginal densities of Z ~ EC_{n+p}(0, I_{n+p}, ψ) of dimensions n - m and n - m + p respectively, where ψ(u) is defined by the right side of (2) with ℓ = n + p and F = F_{n+p}. Hence by Theorem 8, Z is normally distributed. Thus, F_{n+p} is the distribution function of a multiple of a chi-variable with n + p degrees of freedom. This implies F is the distribution function of a multiple of a chi-variable with n degrees of freedom, and, in turn, implies X is normally distributed. □
Remarks. We suspect that the corollary would hold for any real p > 0, not necessarily an integer. The size and nature of the collection of functions h : [0,∞) → ℝ which guarantee that "E(h(‖X₁‖) | X₂) is finite and degenerate" implies "X is normal" is unknown.
(e) For X = (X₁, X₂) ~ N_n(μ, Σ), it is well known that the conditional central moments of all orders of X₁ given X₂ are finite and independent of X₂. This fact characterizes normality:

Corollary 8b. Suppose X = (X₁, X₂) ~ EC_n(μ, Σ, φ), the ranks k₁ and k₂ appearing in Corollary 5 are strictly positive, and p is a positive integer. Then a p-th order conditional central moment of X₁ given X₂ is finite and degenerate (i.e., independent of X₂) if and only if X is normally distributed.

Proof. Just the "only if" part requires proof. From Corollary 5 we have the canonical representation

(X₁ | X₂ = x₂) =ᵈ μ₁ + (x₂ - μ₂)Σ₂₂⁻Σ₂₁ + R_{q(x₂)} U^(k₁) A* ,

valid for all x₂ in a set μ₂ + S for which P(X₂ ∈ μ₂ + S) = 1. Putting X₁ = (X₁₁, ..., X₁ₘ), μ₁ + (x₂ - μ₂)Σ₂₂⁻Σ₂₁ = (ν₁, ..., νₘ) and A* = (a₁, ..., aₘ), where aᵢ is the i-th column of A*, we have for p₁ + ··· + pₘ = p,

E[ (X₁₁ - ν₁)^{p₁} ··· (X₁ₘ - νₘ)^{pₘ} | X₂ = x₂ ] = E[(R_{q(x₂)})^p] E[ (U^(k₁)a₁)^{p₁} ··· (U^(k₁)aₘ)^{pₘ} ] .

Since R_{q(X₂)} and U^(k₁) are independent, the normality of X is immediate from Corollary 8a. □
(f) The following, while not a characterization of normality, is an interesting extension of Theorem 6.

Theorem 9. Let X = (X₁, X₂) ~ EC_n(μ, Σ, φ) and assume the ranks k₁ and k₂ appearing in Corollary 5 are strictly positive. Then

(36)    φ(u) = ∫_[0,∞) exp(-ur²/2) dG(r) ,    u ≥ 0 ,

for some distribution function G on [0,∞) if and only if

(37)    φ_{q(X₂)}(u) = ∫_[0,∞) exp(-ur²/2) dG_{q(X₂)}(r) ,    u ≥ 0 ,    a.s. ,

where G_{a²} is a distribution function on [0,∞) given by

(38)    G_{a²}(ρ) = ∫_(0,ρ] r^{-k₂} exp[-a²/(2r²)] dG(r) / ∫_(0,∞) r^{-k₂} exp[-a²/(2r²)] dG(r) ,    ρ ≥ 0 , a > 0 .

Proof. By Lemma 5 below, the distribution function G is related to the distribution function F of R, appearing in the canonical representation X =ᵈ μ + RU^(k)A with A'A = Σ, through the relationships

(39)    F(0) = G(0) ,    f(r) = [r^{k-1} / (2^{k/2 - 1} Γ(k/2))] ∫_(0,∞) ρ^{-k} exp[-r²/(2ρ²)] dG(ρ) ,    r > 0 ,

where f is the density of F on (0,∞), on which it is absolutely continuous. If F_{a²} denotes the distribution function of R_{a²} appearing in Corollary 5, then it follows from (21) that

f_{a²}(r) = Const. r^{k₁-1} ∫_(0,∞) ρ^{-k} exp[-(r² + a²)/(2ρ²)] dG(ρ) ,    a > 0 , r ≥ 0 ,

where f_{a²} denotes the density of F_{a²} (which is absolutely continuous when a > 0) and the constant depends on a. Again by Lemma 5, the latter is equivalent to (37) and (38). □

Lemma 5. Suppose X ~ EC_n(μ, Σ, φ) has the canonical representation X =ᵈ μ + RU^(k)A with A'A = Σ. Then φ is given by (36) if and only if G is related to the distribution function F of R as described in (39).

Proof. If φ is given by (36), then RU^(k) =ᵈ YZ, where Y and Z are independent, Y has distribution function G, and Z ~ N_k(0, I_k). Setting W = ‖Z‖, we have R =ᵈ YW, which implies (cf., Lemma 4)

F(0) = G(0) ,    f(r) = ∫_(0,∞) ρ⁻¹ f_W(r/ρ) dG(ρ) ,    r > 0 ,

where f_W is the density of W, and the result follows from

f_W(r) = r^{k-1} exp(-r²/2) / (2^{k/2 - 1} Γ(k/2)) ,    r > 0 . □
References

[1] Boullion, T.L. and Odell, P.L. (1971). Generalized Inverse Matrices. Wiley, New York.

[2] Das Gupta, S., Eaton, M.L., Olkin, I., Perlman, M., Savage, L.J. and Sobel, M. (1972). Inequalities on the probability content of convex regions for elliptically contoured distributions. Proc. Sixth Berkeley Symp. Math. Statist. Prob. 2, 241-264. Univ. of California Press, Berkeley.

[3] Dempster, A.P. (1969). Elements of Continuous Multivariate Analysis. Addison-Wesley, Reading, Mass.

[4] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2, 2nd ed. Wiley, New York.

[5] Kelker, D. (1970). Distribution theory of spherical distributions and a location-scale parameter generalization. Sankhyā Ser. A 32, 419-430.

[6] Rao, C.R. (1973). Linear Statistical Inference and Its Applications, 2nd ed. Wiley, New York.

[7] Rao, C.R. and Mitra, S.K. (1971). Generalized Inverse of Matrices and Its Applications. Wiley, New York.

[8] Schoenberg, I.J. (1938). Metric spaces and completely monotone functions. Ann. Math. 39, 811-841.