Basu, Sujit and Sen, Pranab Kumar (1982). "Asymptotically Efficient Estimator for the Index of a Stable Distribution."
ASYMPTOTICALLY EFFICIENT ESTIMATOR FOR THE INDEX
OF A STABLE DISTRIBUTION
by
Sujit Basu
and
Pranab Kumar Sen
Department of Biostatistics
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 1413
August 1982
ASYMPTOTICALLY EFFICIENT ESTIMATOR FOR THE INDEX OF A STABLE DISTRIBUTION

Sujit Basu* and Pranab Kumar Sen**
Indian Institute of Management, Calcutta, India 700027, and
University of North Carolina, Chapel Hill, NC 27514, USA

Summary.
Based on the sample characteristic function, a method of estimating the index parameter of a stable distribution is considered. Weak convergence of the empirical characteristic function (process) is incorporated in the choice of an asymptotically optimal estimator and in the study of its asymptotic properties.
AMS Subject Classification: 60F17, 62E20, 62L99
Key Words & Phrases: Asymptotic normality, convex function, iterative procedure, sample characteristic function, weak convergence.
1. Introduction.
Although recent years have witnessed some developments in the area of inference concerning parameters of the stable laws, the efforts have hardly matched their importance in many areas of applications, including astronomy, business and economics. The attempts have been mostly piecemeal, in the sense that they often lack generality and apply only to specific situations. The main reason for such a state of affairs lies in the fact that while the probability density function (p.d.f.) of a stable distribution function (d.f.) always exists, it may not always be expressible in a closed form. However, the characteristic function (c.f.) of a stable d.f. F is representable as
* The work of this author was partially carried out in the Department of Statistics, University of North Carolina, Chapel Hill, NC 27514, USA.
** Work of this author was partially supported by the National Heart, Lung and Blood Institute, Contract NIH-NHLBI-71-2243-L.
φ(t) = exp{ita - |tσ|^α [1 + iβ(sgn t) w(t,α)]},                    (1.1)

where t is real, the index (characteristic exponent) α satisfies 0 < α < 2, |β| ≤ 1, σ > 0, -∞ < a < ∞, and

w(t,α) = tan(πα/2)  or  (2/π) log|t|,  according as α ≠ 1 or α = 1.

Press (1972) exploited this canonical representation of φ(t) to suggest suitable estimators for the parameters a, β, σ and α; in particular, for F symmetric, these estimators were shown to have asymptotically a (multi-)normal distribution. His estimators are combinations of estimators of φ(t) at four fixed (but arbitrary) t-points, and hence, the asymptotic dispersion matrix (as well as the efficiency) may depend on the choice of these t-points.
In a follow-up, Press (1975) has given a chronological account of the efforts in this area, while, for the positive stable laws, some alternative (and efficient) methods have been considered by Brockwell and Brown (1979, 1981). For a general stable law, de Haan and Resnick (1980) proposed an estimator of the index parameter α based on the order statistics. Unfortunately, their estimator is not generally asymptotically efficient.
The object of the present investigation is to have a deeper look into the method of Press (1972) and to locate the optimal t-points (leading to asymptotically efficient estimators of the parameters). In this context, the weak convergence of the sample characteristic function, studied earlier by Feuerverger and Mureika (1977) and Csörgő (1981), has been incorporated to provide valid asymptotic solutions under minimal regularity conditions. An algorithm has also been provided. In fact, for the sake of simplicity and for the purpose of getting a straightforward insight into the issues involved, we have considered here the case of a stable d.f. F symmetric about the origin (so that a = β = 0) with σ = 1. The problem of simultaneous estimation of α, σ in such a (symmetric) case can be treated similarly at the expense of a bit more algebra, while the general case of a, β, σ and α requires a more complicated algorithm (but no new tools).
2. Empirical Characteristic Function Based Estimator of α

We are concerned with the estimation of the index parameter α (0 < α < 2) of a symmetric stable law for which, in (1.1), σ = 1 (and a = β = 0). Thus, (1.1) reduces to

φ(t) = exp(-|t|^α),  for all t ∈ (-∞,∞).                    (2.1)
Let X_1, ..., X_n be a set of n independent and identically distributed random variables (i.i.d.r.v.) with a d.f. F whose c.f. φ is given by (2.1). The empirical characteristic function (e.c.f.) is defined as

φ_n(t) = n^{-1} Σ_{j=1}^{n} exp{itX_j},  t ∈ (-∞,∞).                    (2.2)
Note that by (2.1),

α = (log|t|)^{-1} log(-log φ(t)),  ∀ real t.                    (2.3)
As such, following Press's (1972) moment-estimation method, for an arbitrary t, we consider the estimator

α̂_n = α̂_n(t) = (log|t|)^{-1} log(-log U_n(t)),                    (2.4)

where

U_n(t) = Re(φ_n(t)) = real part of φ_n(t).                    (2.5)
Note that (2.4) is properly defined for all real t, excepting t = 0 and ±1. Since α̂_n(-t) = α̂_n(t), ∀ real t, in the sequel we shall consider only positive values of t (≠ 1).
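For concreteness, a minimal numerical sketch of (2.2), (2.4) and (2.5) is given below (NumPy-based; the helper names and the Chambers-Mallows-Stuck generator for the symmetric stable variates are our own illustrative choices, not part of the paper):

```python
import numpy as np

def ecf_real_part(x, t):
    """U_n(t) = Re(phi_n(t)) = n^{-1} sum_j cos(t X_j); cf. (2.2) and (2.5)."""
    return np.mean(np.cos(t * np.asarray(x)))

def alpha_hat(x, t):
    """Moment-type estimator (2.4): (log t)^{-1} log(-log U_n(t)), for t > 0, t != 1."""
    u = ecf_real_part(x, t)
    if not 0.0 < u < 1.0:
        raise ValueError("U_n(t) not in (0,1); pick another t or a larger sample")
    return np.log(-np.log(u)) / np.log(t)

def rsym_stable(alpha, size, rng):
    """Symmetric alpha-stable variates with c.f. exp(-|t|^alpha) (Chambers-Mallows-Stuck)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
x = rsym_stable(1.5, 10_000, rng)
print(alpha_hat(x, t=0.5))   # should land near the true index 1.5
```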
Now, by Theorem 2.1 of Feuerverger and Mureika (1977), for every T (< ∞),

sup{|φ_n(t) - φ(t)|: 0 ≤ t ≤ T} → 0  a.s., as n → ∞,                    (2.6)

so that the real part of φ_n(t) (= U_n(t)) converges a.s. to φ(t), and hence, by (2.3) and (2.4), for every fixed T (< ∞),

sup{|α̂_n(t) - α|: 0 < t(≠1) < T} → 0  a.s., as n → ∞.                    (2.7)
Having established this (strong) consistency of α̂_n(t), one would naturally like to have an optimal choice of t. In this respect, we proceed to study first the asymptotic behaviour of {√n(α̂_n(t) - α); 0 < t < T} and incorporate the same in our desired solution. Let then

Y_n(t) = √n {U_n(t) - φ(t)},  0 ≤ t ≤ T.                    (2.8)
Note that as F is a stable distribution and, in (2.1), α > 0,

∫_{-∞}^{∞} |x|^p dF(x) < ∞,  ∀ 0 ≤ p < α,                    (2.9)

so that by Theorem 3.1 of Feuerverger and Mureika (1977) [and an improved and generalized version of this theorem by Csörgő (1981)], as n → ∞, for every fixed T < ∞,

Y_n = {Y_n(t), 0 ≤ t ≤ T} →_D Y = {Y(t), 0 ≤ t ≤ T},                    (2.10)
where Y is Gaussian with 0 drift and

E Y(s)Y(t) = ½[φ(s+t) + φ(s-t)] - φ(s)φ(t),                    (2.11)

for every (s,t) ∈ [0,T]². Further, (2.10) ensures that

sup{|Y_n(t)|: 0 ≤ t ≤ T} = O_p(1).                    (2.12)

Therefore, writing U_n(t) = φ(t) + n^{-1/2} Y_n(t) and using (2.3), (2.4) and (2.12), we obtain that for every fixed T < ∞,

-log U_n(t) = -log φ(t) [1 - n^{-1/2} Y_n(t) {-φ(t) log φ(t)}^{-1} + o_p(n^{-1/2})],  ∀ t ∈ (0,T)\{1},                    (2.13)
so that by some routine steps,

√n(α̂_n(t) - α) = (log t)^{-1} {φ(t) log φ(t)}^{-1} Y_n(t) + o_p(1),  ∀ t ∈ (0,T)\{1}.                    (2.14)

As such, by (2.11), (2.12) and (2.14), for every fixed t (≠ 0 or 1),

√n(α̂_n(t) - α) →_D N(0, σ_α²(t)),                    (2.15)

where

σ_α²(t) = ½ α² h_α(t*);  t* = t^α,  t > 0,                    (2.16)

and

h_α(s) = e^{2s} (-s log s)^{-2} (1 - 2e^{-2s} + e^{-2^α s}),  s > 0.                    (2.17)

Note that for every α ∈ (0,2), h_α(s) → +∞ as s ↓ 0, as s → 1 or as s → ∞. Further, we shall show in the Appendix that for every fixed α (∈ (0,2)), there exists a unique s_0 ∈ (0, e^{-2}) such that

h_α(s_0) = inf{h_α(s): s > 0},                    (2.18)

where s_0 (= s_{0α}) may depend on α.
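Although s_{0α} has no closed form, it is easy to evaluate numerically. The following sketch (our own illustration, using SciPy; the function names are not from the paper) evaluates h_α of (2.17) and locates its minimizer on (0, e^{-2}), together with the corresponding t-point s_{0α}^{1/α} used in Section 3:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def h(s, alpha):
    """h_alpha(s) of (2.17): e^{2s} (-s log s)^{-2} (1 - 2 e^{-2s} + e^{-2^alpha s})."""
    return (np.exp(2 * s) * (1 - 2 * np.exp(-2 * s) + np.exp(-(2.0 ** alpha) * s))
            / (s * np.log(s)) ** 2)

def s0(alpha):
    """Minimizer s_{0,alpha} of h_alpha over (0, e^{-2}); cf. (2.18) and Lemma 4.2."""
    res = minimize_scalar(h, args=(alpha,), bounds=(1e-8, np.exp(-2)), method="bounded")
    return res.x

for a in (0.5, 1.0, 1.5, 1.9):
    s = s0(a)
    print(a, s, s ** (1 / a))   # s_{0,alpha} and the corresponding optimal t-point s_{0,alpha}^{1/alpha}
```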
Let then t_{0α} be the solution of the equation t_{0α}^α = s_{0α}. Then, from (2.15) through (2.18), we conclude that within the class {α̂_n(t), 0 < t(≠1) < ∞} of estimators of α, an asymptotically optimal estimator is α̂_n(t_{0α}). Unfortunately, s_{0α} as well as t_{0α} may generally depend on the unknown parameter α (as h_α(s) does), and hence, we may not be in a position to compute α̂_n(t_{0α}). For this reason, in the next section, we proceed to construct an iterative procedure yielding an alternative asymptotically efficient estimator.
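As a quick sanity check on (2.15)-(2.17) (our own illustration, not part of the paper), one may compare the Monte Carlo variance of √n(α̂_n(t) - α) with ½α²h_α(t^α), reusing the alpha_hat, rsym_stable and h helpers sketched above:

```python
import numpy as np

alpha, t, n, reps = 1.5, 0.5, 2000, 500
rng = np.random.default_rng(2)
est = np.array([alpha_hat(rsym_stable(alpha, n, rng), t) for _ in range(reps)])
print(n * est.var())                             # Monte Carlo variance of sqrt(n)(alpha_hat_n(t) - alpha)
print(0.5 * alpha ** 2 * h(t ** alpha, alpha))   # asymptotic variance from (2.16)-(2.17)
```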
3. An Asymptotically Optimal Iterative Estimator.
Basically, we consider an iterative method of estimating t_{0α} and incorporate this in the formulation of the estimator α̂_n. The suggested procedure depends implicitly on the convergence results in (2.6)-(2.7). Note that by virtue of (2.10)-(2.11), for every ε > 0 and η > 0, there exist a δ > 0 and an n_0 (= n_0(ε,η)), such that for every n ≥ n_0,

P{sup[|Y_n(s) - Y_n(t)|: |s - t| ≤ δ, 0 ≤ s, t ≤ T] > ε} < η.                    (3.1)
Hence, using (2.16) and (3.1), we conclude that for every ε' > 0 and η' > 0 (and fixed α ∈ (0,2)), there exist a δ > 0 and an n_0' (= n_0'(α, ε', η')), such that for every n ≥ n_0',

P{sup[√n|α̂_n(t) - α̂_n(t_{0α})|: t: |t - t_{0α}| ≤ δ] > ε'} < η'.                    (3.2)
As a result, to find an asymptotically optimal estimator (within the class {α̂_n(t), t > 0}), it suffices to consider a (stochastic) sequence {t̂_{nm}, m ≥ 0} of t-points such that t̂_{nm} stochastically converges to t_{0α} as n → ∞, and then to use α̂_n* = α̂_n(t̂_{nm}), for some appropriate m, as the desired estimator. Towards this objective, we consider the following iterative procedure.

At the initial stage, we choose an arbitrary t̂_{n0} ∈ (0,1), preferably close to e^{-2} (as will be seen in the Appendix, s_{0α} always lies below e^{-2} for every α: 0 < α < 2), and compute α̂_n^{(0)} = α̂_n(t̂_{n0}) and ŝ_{0α}^{(0)} = s_0(α̂_n^{(0)}), by (2.4) and (2.18), respectively. In the next step, we consider t̂_{n1} = (ŝ_{0α}^{(0)})^{1/α̂_n^{(0)}} and compute α̂_n^{(1)} and ŝ_{0α}^{(1)}, and so on as m increases. In this way, at the m-th step, we define

t̂_{nm} = (ŝ_{0α}^{(m-1)})^{1/α̂_n^{(m-1)}},  m ≥ 1,                    (3.3)

and

α̂_n^{(m)} = α̂_n(t̂_{nm})  and  ŝ_{0α}^{(m)} = s_0(α̂_n^{(m)}),  m ≥ 1.                    (3.4)
The stopping number M (= M_{nε}), corresponding to some preassigned ε > 0, is defined by

M_{nε} = min{m ≥ 1: |α̂_n^{(m)} - α̂_n^{(m-1)}| ≤ ε}.                    (3.5)
In the Appendix, we shall show that s_{0α} is a continuous function of α: 0 < α < 2, and hence, by (2.6), (2.7) and the definition of t̂_{nm}, t̂_{nm} → t_{0α} a.s., as n → ∞ (for every m ≥ 1), and hence, M (= M_n) is a.s. finite. Actually, a few iterations will lead to the desired estimator

α̂_{nM} = α̂_n(t̂_{nM}).                    (3.6)

Since t̂_{nM} stochastically converges to t_{0α}, by virtue of (3.2), (3.5), (3.6) and (2.15)-(2.17), we conclude that

√n(α̂_{nM} - α) →_D N(0, ½ α² h_α(s_{0α})),                    (3.7)

which, by (2.18), reveals the desired asymptotic optimality property.
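A compact sketch of the iteration (3.3)-(3.5) (our own rendering, reusing the alpha_hat, rsym_stable and s0 helpers sketched earlier; the clipping of the iterates to (0,2) is an extra numerical safeguard, not part of the paper) could read:

```python
import numpy as np

def adaptive_alpha(x, t0=np.exp(-2), eps=1e-3, max_iter=25):
    """Iterative estimator of Section 3: start at t_{n0} = t0, update t by (3.3)-(3.4),
    and stop when successive alpha-iterates change by at most eps, cf. (3.5)."""
    clip = lambda a: float(np.clip(a, 0.05, 1.95))   # keep iterates inside (0,2), as the theory assumes
    a_old = clip(alpha_hat(x, t0))
    for _ in range(max_iter):
        t_new = s0(a_old) ** (1.0 / a_old)           # (3.3): t_{nm} = (s_0(alpha^{(m-1)}))^{1/alpha^{(m-1)}}
        a_new = clip(alpha_hat(x, t_new))            # (3.4): alpha^{(m)} = alpha_hat_n(t_{nm})
        if abs(a_new - a_old) <= eps:                # stopping rule (3.5)
            break
        a_old = a_new
    return a_new

x = rsym_stable(1.5, 10_000, np.random.default_rng(1))
print(adaptive_alpha(x))   # typically settles after a few iterations, near the true index 1.5
```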
It may be noted that if, as in Press (1972), we consider an arbitrary t and α̂_n(t), then by (2.15) and (3.7), the asymptotic relative efficiency (A.R.E.) of α̂_n(t) with respect to α̂_{nM} is given by

e(α̂_n(t), α̂_{nM}) = h_α(s_{0α}) / h_α(t^α) ≤ 1,  ∀ t,                    (3.8)

where the equality sign holds when t = t_{0α}. Actually, (3.8) may be quite below 1 depending on t, t_{0α} and α. This explains the utility of the iterative procedure considered here. We may term α̂_{nM} an adaptive estimator of α also, since t̂_{nM} is adapted from the data set.
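To get a feel for how much efficiency an arbitrary t can lose, the ratio in (3.8) can be tabulated directly (a small illustration using the h and s0 helpers sketched in Section 2; the chosen t-values are arbitrary):

```python
alpha = 1.5
s_opt = s0(alpha)
for t in (0.1, 0.25, 0.5, 1.5, 2.0):
    are = h(s_opt, alpha) / h(t ** alpha, alpha)   # A.R.E. of alpha_hat_n(t) w.r.t. the adaptive estimator, cf. (3.8)
    print(t, round(float(are), 3))
```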
4. Appendix.

The function h_α(x) in (2.17) plays a fundamental role in the procedure described in Section 3. We define s_0 = s_{0α} as in (2.18). Then, we have the following.
Lemma 4.1. For every (fixed) α: 0 < α < 2,

0 < s_{0α} < e^{-2}.                    (4.1)
Proof. Consider first the case of h_α(x), 0 < x < 1. Note that for every α (0 < α ≤ 2), e^{2x}(1 - 2e^{-2x} + e^{-2^α x}) is nondecreasing in x (≥ 0), while (-x log x)^{-2} is a convex function on (0,1) with values at 0 and 1 equal to +∞ and a unique minimum at x = e^{-1}. Hence, by (2.17), h_α(x) is nondecreasing on [e^{-1}, 1], so that for 0 < x < 1, the minimum value of h_α(x) occurs at some point below e^{-1}. Note that

h_α(e^{-1}) = e²(e^{2/e} + e^{γ/e} - 2) ≤ e²(e^{2/e} + e^{1/e} - 2) ≈ 11.32,  for all α: 0 < α < 2  (as -2 < γ < 1, where γ = 2 - 2^α).                    (4.2)
Next, observe that for every x > 1,

h_α(x) = (x log x)^{-2} e^{2x} (1 - 2e^{-2x} + e^{-2^α x})
       ≥ (x log x)^{-2} e^{2x} (1 - 2e^{-2x} + e^{-4x})      (∀ 0 < α ≤ 2)
       = (x log x)^{-2} e^{2x} (1 - e^{-2x})²
       = {(e^x - e^{-x})/(x log x)}² = g(x),  say.                    (4.3)
Also, note that

(d/dx) log g(x) = 2[(e^x + e^{-x})/(e^x - e^{-x}) - x^{-1} - (x log x)^{-1}],                    (4.4)

where the first term on the right hand side of (4.4) exceeds 1 and converges to 1 as x → ∞, while the other two go to 0 as x → ∞. As such, if x_0 is defined by x_0^{-1} + (x_0 log x_0)^{-1} = 1 (x_0 ≈ 2.24), then (4.4) is positive for all x > x_0, so that g'(x) is strictly positive for every x ≥ x_0. Hence, the minimum of g(x) on (1,∞) does not occur at any x > x_0. Let us then consider the interval (1, x_0], and denote x_1 = e^{1/4}, x_2 = 1.5, x_3 = 1.75, x_4 = 2.00 and x_5 ≈ x_0. Then, for 1 < x ≤ x_1, g(x) ≥ e²(e - e^{-1})² = e⁴(1 - e^{-2})² > 40 > h_α(e^{-1}). Also, for x_1 < x ≤ x_2, g(x) ≥ (x_2 log x_2)^{-2} (e^{x_1} - e^{-x_1})² ≥ 20 ≥ h_α(e^{-1}), and a similar case holds for x_2 < x ≤ x_3, as well as for x_3 ≤ x < x_4 and x_4 ≤ x ≤ x_0. Thus, by (4.3),

inf{h_α(x): x > 1} > h_α(e^{-1}),  ∀ α ∈ (0,2].

Therefore, the global minimum of h_α(x) (0 < x < ∞) occurs at a value of x < e^{-1}.
Note that by definition, for every x > 0,

h_α(x) = Σ_{r=1}^{∞} c_r x^{r-2} (-log x)^{-2},                    (4.5)

where the c_r = (2^r + γ^r)/r! (γ = 2 - 2^α) are all non-negative, ∀ r ≥ 1. Thus, for every x > 0,

(d/dx) h_α(x) = Σ_{r=1}^{∞} c_r {(r-2) + 2(-log x)^{-1}} x^{r-3} (-log x)^{-2},                    (4.6)

so that for every x within the interval (e^{-2}, e^{-1}) (where 1 < -log x < 2),

(r-2) + 2(-log x)^{-1} > r - 1 ≥ 0,  ∀ r ≥ 1,

and hence (d/dx)h_α(x) > 0. Therefore, the minima of h_α(x) do not occur within the interval (e^{-2}, e^{-1}). Hence, the minima of h_α(x), no matter whether unique or not, occur at a point lying in the interval (0, e^{-2}). Hence, (4.2), (4.3) and (4.6) provide a convenient tool for proving (4.1). Q.E.D.

Note that though h_α(x) is the product of two convex functions, it is not itself convex everywhere.
It may be remarked that for α = 2, h_2(x) is equal to Σ_{r=1}^{∞} {2^{2r+1}/(2r)!} x^{2(r-1)} (-log x)^{-2}, x > 0, and hence is increasing in x on (0,1), so that s_{02} = 0. On the other hand, for 0 < α < 2, we like to show that 0 < s_{0α} < e^{-2} and that it is unique. Towards this, we have the following.

Lemma 4.2. For every (fixed) α: 0 < α < 2, h_α(x) has a unique minimum on (0, e^{-2}).

Proof. Note that by (4.6) and the positivity of x^{-2}(-log x)^{-3} on (0, e^{-2}), (d/dx)h_α(x) and

g_α(x) = x²(-log x)³ (d/dx)h_α(x) = Σ_{r=1}^{∞} c_r x^{r-1} {(r-2)(-log x) + 2}                    (4.7)

have the same sign. Thus, it suffices to show that g_α(x) has a unique root in (0, e^{-2}). Note that g_α(0+) = -∞ and g_α(e^{-2}) > 0, and hence, it suffices to show that (d/dx)g_α(x) > 0, ∀ x ∈ (0, e^{-2}). Towards this, we have by (4.7),

(d/dx) g_α(x) = c_1 x^{-1} + 2c_2 + Σ_{r=3}^{∞} c_r {(r-1) x^{r-2} [(r-2)(-log x) + 2] + x^{r-1} (r-2)(-x^{-1})}
             = c_1 x^{-1} + 2c_2 + Σ_{r=3}^{∞} c_r x^{r-2} [(r-1)(r-2)(-log x) + 2(r-1) - (r-2)]
             = c_1 x^{-1} + 2c_2 + Σ_{r=3}^{∞} c_r x^{r-2} [(r-1)(r-2)(-log x) + r] > 0,                    (4.8)

for every x ∈ (0,1). Hence, g_α(x) has a unique root (s_{0α}) in (0, e^{-2}). Q.E.D.
Note that the c_r are polynomial functions of γ (= 2 - 2^α), and hence, are continuous and differentiable functions of α. Hence, by (4.7) and (4.8), we conclude that

s_{0α} is a continuous function of α: 0 < α < 2.                    (4.9)

We conclude this section with the remark that for α = 2, s_{02} = 0, and that for α ↑ 2, s_{0α} → 0. Hence, looking at (2.4), (2.14), (3.3) and (3.4), we may argue that for α = 2, or very, very close to 2, the iterative procedure may not work out well. However, in such a case, if we let

(4.10)

where A (0 < A < ∞) is some prefixed positive number, then the procedure works out -- though we may not have the asymptotically optimal α̂_{nM}.
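The uniqueness claim of Lemma 4.2 is also easy to probe numerically. The sketch below (our own check, not part of the paper) evaluates a truncated version of the series (4.7) and counts sign changes of g_α on (0, e^{-2}); by (4.8) there should be exactly one:

```python
import numpy as np
from math import factorial, log

def g_series(x, alpha, terms=60):
    """Truncated series (4.7): sum_r c_r x^{r-1} [(r-2)(-log x) + 2], with c_r = (2^r + gamma^r)/r!, gamma = 2 - 2^alpha."""
    gamma = 2.0 - 2.0 ** alpha
    mlog = -log(x)
    return sum((2.0 ** r + gamma ** r) / factorial(r) * x ** (r - 1) * ((r - 2) * mlog + 2.0)
               for r in range(1, terms + 1))

for alpha in (0.5, 1.0, 1.5):
    xs = np.linspace(1e-6, np.exp(-2), 2000)
    signs = np.sign([g_series(x, alpha) for x in xs])
    print(alpha, int(np.sum(signs[1:] != signs[:-1])))   # number of sign changes on (0, e^{-2}); expect exactly 1
```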
REFERENCES

[1] BROCKWELL, P.J. and BROWN, B.M. (1979). Estimation for the positive stable laws, I. Austral. J. Statist. 21, 139-148.
[2] BROCKWELL, P.J. and BROWN, B.M. (1981). High-efficiency estimation for the positive stable laws. J.A.S.A. 76, 626-631.
[3] CSÖRGŐ, S. (1981). Limit behaviour of the empirical characteristic function. Ann. Probability 9, 130-144.
[4] DE HAAN, L. and RESNICK, S.I. (1980). A simple asymptotic estimate for the index of a stable distribution. J.R. Statist. Soc. B 42, 83-87.
[5] FEUERVERGER, A. and MUREIKA, R.A. (1977). The empirical characteristic function and its applications. Ann. Statist. 5, 88-97.
[6] PRESS, S.J. (1972). Estimation in univariate and multivariate stable distributions. J.A.S.A. 67, 842-846.
[7] PRESS, S.J. (1975). Stable distributions: Probability, inference and applications in finance -- a survey, and a review of recent results. A Modern Course on Statistical Distributions in Scientific Work (eds. G.P. Patil, S. Kotz and J.K. Ord). D. Reidel Publishing Co., Boston.