No. 1792
8/1989
EMPIRICAL LAPLACE TRANSFORM AND APPROXIMATION OF
COMPOUND DISTRIBUTIONS
by
Sándor Csörgő¹, Szeged University, Hungary
Jef L. Teugels, Katholieke Universiteit Leuven, Belgium
SUMMARY
Let $\{X_i\}$ be i.i.d. non-negative r.v.'s with d.f. $F$ and Laplace transform $L$. Let $N$ be integer valued and independent of $\{X_i\}$. In many applications one knows that for $y \to \infty$ and a function $\varphi$
\[
P\Big\{\sum_{i=1}^{N} X_i > y\Big\} \sim \varphi(y,\tau,L(\tau),L'(\tau),\ldots)
\]
where in turn $\tau$ is the solution of an equation
\[
\psi(\tau,L(\tau),\ldots) = 0\,.
\]
On the basis of a sample of size $n$ we derive an estimator $\tau_n$ for $\tau$ by solving $\psi(\tau_n,L_n(\tau_n),L_n'(\tau_n),\ldots) = 0$, where $L_n$ is the empirical version of $L$. This estimator is then used to derive the asymptotic behaviour of $\varphi(y,\tau_n,L_n(\tau_n),L_n'(\tau_n),\ldots)$. We include five examples, some of which are taken from insurance mathematics.
Keywords: Empirical Laplace transform, compound distribution, probability of ruin.
MOS classification: 60E10, 62F10, 62P05
¹Work partially supported by the Hungarian National Foundation for Scientific Research, Grants 1808/86 and 457/88.
1. GENERAL PROCEDURE
Let $X$ be a non-degenerate random variable with d.f. $F$ on $[0,\infty)$ and with Laplace transform
\[
(1) \qquad L(s) = \int_0^\infty e^{-sx}\,dF(x)\,.
\]
We always assume that $L(s)$ exists in an open neighbourhood of the origin; let $-\sigma$ be the abscissa of convergence of $L(s)$ and put $I = (-\sigma,\infty)$ or $I = [-\sigma,\infty)$ according as $L(s)$ diverges or converges at $-\sigma$. The function $L(\cdot)$ is arbitrarily many times differentiable in the interior of $I$ and, since $X \geq 0$, it is a decreasing convex function on $I$. Let $\operatorname{int} I$ be the interior of $I$.
Many applications are concerned with compound distributions for which an approximation is needed. This approximation contains ingredients such as $L(s)$ and derivatives at a point $s = \tau$, where $\tau$ is a proper solution of an equation involving in turn $L(s)$ and/or some of its derivatives. Saddle-point type approximations for non-negative r.v.'s provide a good general example.
In practical situations one often doesn't know the precise form of $L(\cdot)$; one only has a sample version $L_n(\cdot)$ based on a sample of independent observations $X_1,X_2,\ldots,X_n$ of $X$. Hence
\[
(2) \qquad L_n(s) = \frac{1}{n}\sum_{j=1}^{n} e^{-sX_j} = \int_0^\infty e^{-sx}\,dF_n(x)
\]
where $F_n(x) = \frac{1}{n}\#\{1 \leq j \leq n : X_j \leq x\}$ is the empirical distribution function of the sample. This empirical Laplace transform $L_n(s)$ is a random analytic function for all (complex) values of $s$. A natural sample-based estimator of the approximation will be obtained by replacing the derivatives of $L$ by those of $L_n$ both in the approximation and in the equation defining $\tau$; the latter will then automatically define an estimator $\tau_n$ for $\tau$.
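The empirical transform (2) and its derivatives are immediate to compute from a sample; a minimal sketch in Python (the function name and interface are ours, not the paper's):

```python
import numpy as np

def empirical_laplace(sample, s, k=0):
    """k-th derivative of the empirical Laplace transform at s.

    By (2), L_n(s) = (1/n) * sum_j exp(-s * X_j); differentiating k
    times under the sum gives
    L_n^(k)(s) = (-1)^k * mean(X^k * exp(-s * X)).
    """
    x = np.asarray(sample, dtype=float)
    return (-1.0) ** k * np.mean(x**k * np.exp(-s * x))
```

Since each summand is an entire function of $s$, the same expression can be evaluated at negative arguments, which is exactly what the procedure below requires.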
An adaptation of the proof of the Proposition in Csörgő (1982) easily yields the following auxiliary result, where $L^{(k)}$ is the $k$-th derivative of $L$, $k = 0,1,2,\ldots$.

Proposition 1. If $J$ is any closed interval contained in $I$, then almost surely as $n \to \infty$, for any bounded $k \in \mathbb{N}$,
\[
\sup_{s \in J}\,|L_n^{(k)}(s) - L^{(k)}(s)| \to 0\,.
\]

This proposition and some elementary analysis imply a crucial consistency result in that $\tau_n \to \tau$ almost surely whenever $\tau \in \operatorname{int} I$.
A more elaborate analysis, based on the defining relation for $\tau$ and $\tau_n$, will provide us with a limit in distribution for $\sqrt{n}(\tau_n - \tau)$. The limit in distribution - whenever it exists - will depend on the limits in distribution of some auxiliary r.v.'s that we introduce here for later reference.
For any $s \in \operatorname{int} I$ we define
\[
(3) \qquad W_{k,n}(s) := \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\big\{X_j^k e^{-sX_j} - E(X^k e^{-sX})\big\}\,.
\]
Then it follows from the central limit theorem that $W_{k,n}(s) \xrightarrow{\mathcal{D}} \mathcal{N}(0,\sigma_k^2(s))$, where $\xrightarrow{\mathcal{D}}$ denotes convergence in distribution and
\[
(4) \qquad \sigma_k^2(s) = \operatorname{Var}(X^k e^{-sX}) = E[X^{2k}e^{-2sX}] - E^2(X^k e^{-sX}) = (-1)^{2k}L^{(2k)}(2s) - (L^{(k)}(s))^2\,.
\]
The latter result only holds if $2s \in \operatorname{int} I$. For later use we quote the next result.
Proposition 2. If $2s \in \operatorname{int} I$ then the r.v.'s defined by (3) satisfy
\[
(5) \qquad W_{k,n}(s) \xrightarrow{\mathcal{D}} \mathcal{N}(0,\sigma_k^2(s))
\]
where $\sigma_k^2(s)$ is given by (4).
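The variance identity in (4) can be checked by simulation. The sketch below (our choice of distribution and parameters, purely for illustration) uses standard exponential observations, for which $L(s) = 1/(1+s)$, so that $L^{(2)}(2s) = 2/(1+2s)^3$ and $L^{(1)}(s) = -1/(1+s)^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=200_000)  # L(s) = 1/(1+s), sigma = 1

s, k = 0.25, 1
# sample variance of X^k * exp(-s X)
sample_var = np.var(x**k * np.exp(-s * x))
# right hand side of (4): L^(2k)(2s) - (L^(k)(s))^2 for k = 1
rhs = 2.0 / (1.0 + 2.0 * s) ** 3 - (1.0 / (1.0 + s) ** 2) ** 2
print(sample_var, rhs)  # both close to 0.183
```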
In many applications we need variables of the type (3) but with $s$ replaced by a random sequence $s_n$ converging almost surely to $s$. A one term Taylor expansion of the type
\[
(6) \qquad e^{-s_n X_j} = e^{-sX_j} - (s_n - s)\,X_j\,e^{-s_n(j)X_j}
\]
introduces random quantities $s_n(j)$ satisfying the inequalities
\[
(7) \qquad \min(s,s_n) \leq s_n(j) \leq \max(s,s_n)\,.
\]
We will use the following abbreviations
\[
(8) \qquad S_{k,n}(s) = \frac{1}{n}\sum_{j=1}^{n} X_j^k e^{-sX_j} = (-1)^k L_n^{(k)}(s)\,, \qquad
S_{k,n}(s_n|s) = \frac{1}{n}\sum_{j=1}^{n} X_j^k e^{-s_n(j)X_j}\,,
\]
where $s_n(j)$ is determined by (6) and (7).
Combining a strong law of large numbers with proposition 1 we obtain a useful result.

Proposition 3. Let $s \in \operatorname{int} I$; let $s_n \to s$ almost surely. Then for any bounded $k \in \mathbb{N}$, almost surely as $n \to \infty$,
(i) $S_{k,n}(s) \to E[X^k e^{-sX}]$
(ii) $S_{k,n}(s_n) \to E[X^k e^{-sX}]$
(iii) $S_{k,n}(s_n|s) \to E[X^k e^{-sX}]$
For easy reference we quote some Slutsky-type results (Serfling (1980, p. 19)), where $c$ is a constant.

Proposition 4. (i) $X_n \xrightarrow{\mathcal{D}} X$, $Y_n \xrightarrow{P} c$ $\Rightarrow$ $X_n + Y_n \xrightarrow{\mathcal{D}} X + c$
(ii) $X_n \xrightarrow{\mathcal{D}} X$, $Y_n \xrightarrow{P} c$ $\Rightarrow$ $X_n Y_n \xrightarrow{\mathcal{D}} Xc$.
We now apply the above procedure to a number of specific examples. After briefly introducing the example we subsequently give
(i) general observations concerning the solution $\tau$,
(ii) the asymptotic behaviour of $\sqrt{n}(\tau_n - \tau)$,
(iii) the asymptotic behaviour of the estimator for the required approximation.
We always assume $\sigma > 0$, and $2\tau \in \operatorname{int} I$ whenever necessary.
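Each of the examples below reduces to solving a one-dimensional defining equation $\psi(\tau_n, L_n(\tau_n),\ldots) = 0$ for $\tau_n$. Since $L_n$ and its derivatives are smooth monotone functions on the relevant range, a simple bracketing method is adequate; a generic sketch (the helper and its interface are ours):

```python
def solve_tau(phi, lo, hi, tol=1e-12, max_iter=200):
    """Bisection for the defining equation phi(tau) = 0 on [lo, hi].

    Assumes phi is continuous and changes sign on the bracket; in our
    setting phi is built from the empirical transform L_n, so the root
    returned is the estimator tau_n.
    """
    f_lo = phi(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = phi(mid)
        if hi - lo < tol:
            break
        if f_lo * f_mid <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)
```

Bisection is deliberately chosen over Newton-Raphson here: it cannot overshoot the bracket towards the singularity of $L_n$, a point taken up again in the concluding remarks.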
2. COMPOUND PASCAL DISTRIBUTION
Assume $Y = \sum_{i=1}^{N} X_i$ where $P[N = n] = \binom{r+n-1}{n}p^r q^n$ and where $N$ and $\{X_i\}$ are independent; further, $r \in \mathbb{N}$, $0 < p < 1$, $q = 1-p$. Then $Y$ has a compound Pascal distribution
\[
P[Y \leq y] = \sum_{n=0}^{\infty} P[N = n]\,F^{*n}(y)
\]
where $F^{*n}$ is the $n$-fold convolution of $F$.

It has been known for a long time that under our assumptions on $L(s)$, i.e. $\sigma > 0$, the tail $P[Y > y]$ has a gamma type approximation. See Beekman (1974, p. 66). In Teugels (1985) and Embrechts e.a. (1985) it is shown that more precisely
\[
(9) \qquad P[Y > y] \sim \frac{p^r\,y^{r-1}}{\Gamma(r)\,q^r\,|\tau|\,|L'(\tau)|^r}\,e^{\tau y}\,, \qquad y \to \infty\,,
\]
where $L(\tau) = 1/q$. See also Sundt (1982).

(i) First note that as $q < 1$, $\tau < 0$. To have a proper solution within $\operatorname{int} I$ we need $L(-\sigma) > 1/q$.
(ii) We will estimate $\tau$ by solving $L_n(\tau_n) = 1/q$. Now $\tau_n \to \tau$ a.s. Furthermore
\[
0 = n\{L_n(\tau_n) - L(\tau)\} = \sum_{j=1}^{n}\big(e^{-\tau_n X_j} - E(e^{-\tau X})\big)
\]
and by (6) and (7)
\[
= \sum_{j=1}^{n}\big(e^{-\tau X_j} - E(e^{-\tau X})\big) - (\tau_n - \tau)\sum_{j=1}^{n} X_j e^{-\tau_n(j)X_j}\,.
\]
Hence
\[
(10) \qquad \sqrt{n}(\tau_n - \tau) = \frac{W_{0,n}(\tau)}{S_{1,n}(\tau_n|\tau)}\,.
\]
By prop. 2 and 3(iii), $\sqrt{n}(\tau_n - \tau) \xrightarrow{\mathcal{D}} \mathcal{N}(0,\gamma^2)$, where $\gamma^2 = \sigma_0^2(\tau)/E^2[Xe^{-\tau X}]$ if $2\tau \in \operatorname{int} I$.

The quantity $\gamma$ can be consistently estimated by
\[
\gamma_n = \frac{\{L_n(2\tau_n) - L_n^2(\tau_n)\}^{1/2}}{|L_n'(\tau_n)|}
\]
in view of proposition 3; hence almost surely $\gamma_n \to \gamma$.

With $\Phi$ denoting the standard normal distribution function, let $z_{\alpha/2}$ be the percentile point for which $\Phi(z_{\alpha/2}) = 1 - \frac{\alpha}{2}$, $0 < \alpha < 1$. Then by the above construction we get an asymptotic confidence interval for $\tau$ if $2\tau \in \operatorname{int} I$:
\[
\lim_{n\to\infty} P\Big\{\tau_n - z_{\alpha/2}\,\frac{\gamma_n}{\sqrt{n}} \leq \tau \leq \tau_n + z_{\alpha/2}\,\frac{\gamma_n}{\sqrt{n}}\Big\} = 1 - \alpha\,.
\]
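As a numerical illustration of (ii), take standard exponential claims, $L(s) = 1/(1+s)$, and $q = 0.8$; then $\tau$ solves $1/(1+\tau) = 1/q$, i.e. $\tau = -0.2$, and $2\tau \in \operatorname{int} I$. The distribution, the sample size and the plug-in form of $\gamma_n$ are our choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(1.0, size=5000)   # L(s) = 1/(1+s), I = (-1, inf)
q = 0.8                               # solve L_n(tau_n) = 1/q; true tau = -0.2

def L_n(s, k=0):
    return (-1.0) ** k * np.mean(x**k * np.exp(-s * x))

# bisection: L_n is decreasing with L_n(-0.9) > 1/q > L_n(0) = 1
lo, hi = -0.9, 0.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if L_n(mid) > 1.0 / q:
        lo = mid
    else:
        hi = mid
tau_n = 0.5 * (lo + hi)

# plug-in estimate of gamma = sigma_0(tau) / E[X exp(-tau X)]
gamma_n = np.sqrt(L_n(2 * tau_n) - L_n(tau_n) ** 2) / abs(L_n(tau_n, k=1))
ci = (tau_n - 1.96 * gamma_n / np.sqrt(len(x)),
      tau_n + 1.96 * gamma_n / np.sqrt(len(x)))
print(tau_n, ci)  # tau_n close to -0.2
```

Note that the argument of the square root is an empirical variance and hence never negative.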
(iii) The approximation (9) of $P[Y > y]$ depends upon $C := e^{\tau y}/(|\tau|\,|L'(\tau)|^r)$, which is itself estimated by
\[
C_n := e^{\tau_n y}/(|\tau_n|\,|L_n'(\tau_n)|^r)\,.
\]
From proposition 1 and standard arguments $C_n \to C$ almost surely. We should like to find the limiting distribution of $\sqrt{n}(C_n - C)$. Now
\[
C_n - C = \frac{e^{\tau_n y}\,|\tau|\,|L'(\tau)|^r - e^{\tau y}\,|\tau_n|\,|L_n'(\tau_n)|^r}{|\tau_n|\,|L_n'(\tau_n)|^r\,|\tau|\,|L'(\tau)|^r}
\]
or
\[
(11) \qquad e^{\tau_n y}\,|\tau|\,|L'(\tau)|^r - e^{\tau y}\,|\tau_n|\,|L_n'(\tau_n)|^r = (C_n - C)\,|\tau_n|\,|L_n'(\tau_n)|^r\,|\tau|\,|L'(\tau)|^r\,.
\]
Abbreviate the left hand side by $I_n$. By a one term Taylor expansion we can write
\[
(12) \qquad e^{\tau_n y} = e^{\tau y}\,\{1 + (\tau_n - \tau)\,y\,e^{\theta_n y}\}
\]
with $\theta_n$ between $0$ and $\tau_n - \tau$, so that
\[
(13) \qquad I_n = e^{\tau y}\,\big\{(\tau_n - \tau)\,y\,e^{\theta_n y}\,|\tau|\,|L'(\tau)|^r - \big(|\tau_n|\,|L_n'(\tau_n)|^r - |\tau|\,|L'(\tau)|^r\big)\big\}\,.
\]
However
\[
(14) \qquad \{-L_n'(\tau_n)\}^r - \{-L'(\tau)\}^r = \{-L_n'(\tau_n) + L'(\tau)\}\Big[\sum_{t=0}^{r-1}(-L_n'(\tau_n))^t\,(-L'(\tau))^{r-1-t}\Big]
\]
where
\[
R_n := \sum_{t=0}^{r-1}(-L'(\tau))^{r-1-t}\,S_{1,n}^t(\tau_n) \to r\,(-L'(\tau))^{r-1} =: R
\]
almost surely. In turn
\[
n\{L'(\tau) - L_n'(\tau_n)\} = \sum_{j=1}^{n}\big(X_j e^{-\tau_n X_j} - E(Xe^{-\tau X})\big)
= \sum_{j=1}^{n}\big(X_j e^{-\tau X_j} - E(Xe^{-\tau X})\big) - (\tau_n - \tau)\sum_{j=1}^{n} X_j^2 e^{-\tau_n(j)X_j}
\]
by relying on (6). Hence by (3) and (8)
\[
(15) \qquad \sqrt{n}\,[L'(\tau) - L_n'(\tau_n)] = W_{1,n}(\tau) - \sqrt{n}(\tau_n - \tau)\,S_{2,n}(\tau_n|\tau)\,.
\]
Combination of (13), (14) and (15) expresses $\sqrt{n}\,I_n$ in terms of $W_{0,n}(\tau)$ and $W_{1,n}(\tau)$; we also replace $\sqrt{n}(\tau_n - \tau)$ by its value given by (10). It follows that
\[
(16) \qquad \sqrt{n}\,I_n = U_n\,W_{0,n}(\tau) + V_n\,W_{1,n}(\tau)
\]
where $V_n = \tau_n R_n$. By proposition 3, almost surely $U_n \to U$ and $V_n \to V := \tau R$, where $R = r(-L'(\tau))^{r-1}$. Rewrite (16) in the form
\[
\sqrt{n}\,I_n = n^{-1/2}\sum_{j=1}^{n}\Big[U\big(e^{-\tau X_j} - E(e^{-\tau X})\big) + V\big(X_j e^{-\tau X_j} - E(Xe^{-\tau X})\big)\Big]\,;
\]
then proposition 4 can be applied a number of times. We ultimately find that $\sqrt{n}\,I_n$ is asymptotically normal; returning to $C_n - C$ we obtain the limiting normal distribution of $\sqrt{n}(C_n - C)$ by another application of proposition 4.
3. CHERNOFF BOUNDS
Assume $\sigma > 0$. Let $s < 0$. Then for $y \geq 0$,
\[
L(s) = \int_0^\infty e^{-sx}\,dF(x) \geq \int_y^\infty e^{-sx}\,dF(x) \geq e^{-sy}\,P[X > y]\,.
\]
Hence we get the Chernoff bound
\[
P[X > y] \leq \inf_{-\sigma < s < 0}\,e^{sy}\,L(s)\,.
\]
This means that
\[
(17) \qquad P[X > y] \leq e^{\tau y}\,L(\tau)
\]
where
\[
(18) \qquad y\,L(\tau) + L'(\tau) = 0\,.
\]
(i) The complete monotonicity of $L$ implies that $-L'(s)/L(s)$ is decreasing for $s > -\sigma$. Hence
\[
-L'(\tau)/L(\tau) > -L'(0) =: \mu \qquad \text{if } \tau < 0\,.
\]
Hence we restrict attention in (17) to $y > \mu$.

(ii) We estimate $\tau$ by solving $y\,L_n(\tau_n) + L_n'(\tau_n) = 0$. Now, as before, $\tau_n \to \tau$ a.s. Furthermore
\[
0 = n\{y\,L_n(\tau_n) + L_n'(\tau_n)\} - n\{y\,L(\tau) + L'(\tau)\}
= y\sum_{j=1}^{n}\big(e^{-\tau_n X_j} - E(e^{-\tau X})\big) - \sum_{j=1}^{n}\big(X_j e^{-\tau_n X_j} - E(Xe^{-\tau X})\big)\,.
\]
By (6) and a little algebra we obtain
\[
(19) \qquad \sqrt{n}(\tau_n - \tau) = \frac{y\,W_{0,n}(\tau) - W_{1,n}(\tau)}{y\,S_{1,n}(\tau_n|\tau) - S_{2,n}(\tau_n|\tau)}\,.
\]
Hence
\[
\sqrt{n}(\tau_n - \tau) \xrightarrow{\mathcal{D}} \mathcal{N}(0,\gamma_1^2)
\qquad\text{where}\qquad
\gamma_1^2 = \frac{\operatorname{Var}\big(y\,e^{-\tau X} - X\,e^{-\tau X}\big)}{\big\{y\,E[Xe^{-\tau X}] - E[X^2 e^{-\tau X}]\big\}^2}\,.
\]
As before, a consistent estimator for $\gamma_1$ and an asymptotic confidence interval for $\tau$ can be constructed.
(iii) We estimate the Chernoff bound $C := e^{\tau y}L(\tau)$ by the sample statistic $C_n := e^{\tau_n y}L_n(\tau_n)$. As in (11), using (12),
\[
\sqrt{n}\,e^{-\tau y}(C_n - C) = \sqrt{n}\,\big(L_n(\tau_n) - L(\tau)\big) + \sqrt{n}(\tau_n - \tau)\,y\,e^{\theta_n y}\,L_n(\tau_n)\,.
\]
Now $\sqrt{n}\,(L_n(\tau_n) - L(\tau)) = W_{0,n}(\tau) - \sqrt{n}(\tau_n - \tau)\,S_{1,n}(\tau_n|\tau)$, so that
\[
\sqrt{n}\,e^{-\tau y}(C_n - C) = \sqrt{n}(\tau_n - \tau)\,\big\{y\,e^{\theta_n y}\,L_n(\tau_n) - S_{1,n}(\tau_n|\tau)\big\} + W_{0,n}(\tau)\,.
\]
Introducing (19) into this expression yields after easy algebra
\[
\sqrt{n}\,(C_n - C) = U_n\,W_{0,n}(\tau) + V_n\,W_{1,n}(\tau)
\]
where
\[
U_n = e^{\tau y}\Big[1 + y\,\frac{y\,e^{\theta_n y}\,L_n(\tau_n) - S_{1,n}(\tau_n|\tau)}{y\,S_{1,n}(\tau_n|\tau) - S_{2,n}(\tau_n|\tau)}\Big]
\qquad\text{and}\qquad
V_n = -\,e^{\tau y}\,\frac{y\,e^{\theta_n y}\,L_n(\tau_n) - S_{1,n}(\tau_n|\tau)}{y\,S_{1,n}(\tau_n|\tau) - S_{2,n}(\tau_n|\tau)}\,.
\]
Note however that by (18) $y^2 L(\tau) = -\,y\,L'(\tau)$, so that $U_n \to U = e^{\tau y}$ while $V_n \to V = 0$. Hence
\[
\sqrt{n}\,(C_n - C) \xrightarrow{\mathcal{D}} \mathcal{N}\big(0,\,e^{2\tau y}\,\sigma_0^2(\tau)\big)\,.
\]
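The exponential test case of section 7, where $\tau = y^{-1} - 1$ and $C = y\,e^{1-y}$ in closed form, makes this construction easy to try out. A sketch with $y = 1.5$, so that $\tau = -1/3$ and $2\tau \in \operatorname{int} I$ (sample size and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(1.0, size=5000)   # L(s) = 1/(1+s)
y = 1.5                               # true tau = 1/y - 1 = -1/3

def L_n(s, k=0):
    return (-1.0) ** k * np.mean(x**k * np.exp(-s * x))

# solve y * L_n(tau) + L_n'(tau) = 0 by bisection; the function
# g(s) = y L_n(s) + L_n'(s) is negative near -1 and positive near 0
g = lambda s: y * L_n(s) + L_n(s, k=1)
lo, hi = -0.9, -1e-8
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0.0:
        lo = mid
    else:
        hi = mid
tau_n = 0.5 * (lo + hi)

C_n = np.exp(tau_n * y) * L_n(tau_n)  # empirical Chernoff bound
C = y * np.exp(1.0 - y)               # exact bound for this case
print(tau_n, C_n, C)
```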
4. THE CLASSICAL RUIN PROBLEM
Assume that $F$ is the distribution of claim sizes $\{X_i\}$ with $EX = \mu$ in an insurance context where claims arrive according to a Poisson process $\{N(t),\,t \geq 0\}$ with intensity $\lambda$. Starting with initial reserve $x \geq 0$ and with incoming payments in the time interval $[0,t]$ equal to $t$, the company accumulates the risk reserve
\[
Y(t) = x + t - \sum_{i=1}^{N(t)} X_i\,.
\]
The probability of non-ruin with initial reserve $x$ is then
\[
W(x) := P\{\inf_{t>0} Y(t) > 0\}\,.
\]
It can be shown (see Takács (1967, p. 150), Feller (1971, p. 377) or Bühlmann (1970, p. 144)) that, if the expected claims paid per unit time $\rho := \lambda\mu < 1$, then
\[
W(x) = \sum_{n=0}^{\infty}(1 - \rho)\,\rho^n\,\tilde F^{*n}(x)
\]
where $\tilde F$ is the equilibrium distribution corresponding to $F$, i.e.
\[
\tilde F(x) = \frac{1}{\mu}\int_0^x [1 - F(y)]\,dy\,.
\]
Clearly
\[
\tilde L(s) = \{1 - L(s)\}/(\mu s)\,.
\]
The famous Lundberg-Cramér ruin estimate is given by
\[
(20) \qquad 1 - W(x) \sim \frac{1-\rho}{\rho\,|\tau|\,|\tilde L'(\tau)|}\,e^{\tau x}\,, \qquad x \to \infty\,,
\]
where $\tilde L(\tau) = 1/\rho$.

(i) Note first that $\tau < 0$ since $\rho < 1$; the existence of $\tau$ clearly depends on the condition $\tilde L(-\sigma) > 1/\rho$, or $L(-\sigma) > 1 + \sigma/\lambda$. We rewrite (20) somewhat. Consider $h(s) := sL'(s) - L(s)$; then $h'(s) = sL''(s)$, which is negative for $s < 0$. Hence for $\tau < 0$,
\[
h(\tau) = \tau L'(\tau) - L(\tau) > h(0) = -1\,.
\]
Now in (20), $\tilde L'(\tau) = \{-\tau L'(\tau) - 1 + L(\tau)\}/(\mu\tau^2) < 0$, so that (20) is replaced by
\[
(21) \qquad 1 - W(x) \sim \frac{(1-\rho)/\lambda}{-L'(\tau) - \frac{1}{\lambda}}\,e^{\tau x}\,,
\]
where we used $\tilde L(\tau) = 1/\rho$, or $L(\tau) = 1 - \frac{\tau}{\lambda}$.
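For exponential claims with mean $\mu = 1$ and intensity $\lambda = 0.7$ (so $\rho = 0.7 < 1$), the equation $L(\tau) = 1 - \tau/\lambda$ has the closed-form solution $\tau = -(1-\lambda) = -0.3$, which makes the empirical version easy to test. A sketch (distribution, $\lambda$ and sample size are our choices; the bracket deliberately excludes the trivial root $\tau = 0$):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(1.0, size=5000)   # claim sizes, L(s) = 1/(1+s)
lam = 0.7                             # Poisson intensity, rho = lam * mean = 0.7

def L_n(s):
    return np.mean(np.exp(-s * x))

# solve L_n(tau) = 1 - tau/lam on a bracket excluding the root tau = 0;
# g is positive near -1 and negative just left of 0
g = lambda s: L_n(s) - 1.0 + s / lam
lo, hi = -0.9, -0.05
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0.0:
        lo = mid
    else:
        hi = mid
tau_n = 0.5 * (lo + hi)
print(tau_n)  # close to the true value -0.3
```

Here $-\tau_n$ estimates the classical adjustment (Lundberg) coefficient.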
(ii) We estimate $\tau$ by $\tau_n$, the solution of $L_n(\tau_n) = 1 - \frac{\tau_n}{\lambda}$, or, with (6) and the usual algebra,
\[
(22) \qquad \sqrt{n}(\tau_n - \tau) = \frac{W_{0,n}(\tau)}{S_{1,n}(\tau_n|\tau) - \frac{1}{\lambda}}\,.
\]
Hence
\[
\sqrt{n}(\tau_n - \tau) \xrightarrow{\mathcal{D}} \mathcal{N}\Big(0,\ \frac{\sigma_0^2(\tau)}{\big(-L'(\tau) - \frac{1}{\lambda}\big)^2}\Big)\,.
\]
(iii) To estimate the probability of non-ruin we put
\[
C = \frac{e^{\tau x}}{-L'(\tau) - \frac{1}{\lambda}}
\qquad\text{and}\qquad
C_n = \frac{e^{\tau_n x}}{-L_n'(\tau_n) - \frac{1}{\lambda}}\,.
\]
Then $C_n - C$ can be rewritten by (12) in the form
\[
I_n = L_n'(\tau_n) - L'(\tau) - (\tau_n - \tau)\Big(L'(\tau) + \frac{1}{\lambda}\Big)\,x\,e^{\theta_n x}\,;
\]
with the usual operations and (22) we obtain
\[
\sqrt{n}\,I_n = U_n\,W_{0,n}(\tau) - W_{1,n}(\tau)
\]
where
\[
U_n = \frac{S_{2,n}(\tau_n|\tau) - \big(L'(\tau) + \frac{1}{\lambda}\big)\,x\,e^{\theta_n x}}{S_{1,n}(\tau_n|\tau) - \frac{1}{\lambda}}\,.
\]
Hence $\sqrt{n}(C_n - C)$ is again asymptotically normal, with limiting variance obtained as before.
5. THE COMPOUND POISSON DISTRIBUTION
Starting with a similar setup as in the previous example one can derive an asymptotic expression for the distribution of the totality of claims incurred up to time $t$, i.e. $X(t) = \sum_{i=1}^{N(t)} X_i$. Under a number of weak conditions it was shown in Embrechts, Jensen e.a. (1985) that
\[
(23) \qquad P\{X(t) > y\} \sim \frac{e^{-\lambda t[1 - L(\tau)]}\,e^{\tau y}}{|\tau|\,\sqrt{2\pi\lambda t\,L''(\tau)}}\,, \qquad y \to \infty\,,
\]
where
\[
(24) \qquad -\lambda t\,L'(\tau) = y\,.
\]
(i) It is clear that we require $y > \mu\lambda t$ to obtain $\tau < 0$. We also write $z = y/(\lambda t)$ for brevity.

(ii) Again $\tau_n$ is estimated by the solution of $L_n'(\tau_n) = -z$. As before
\[
\sqrt{n}(\tau_n - \tau) = \frac{W_{1,n}(\tau)}{S_{2,n}(\tau_n|\tau)} \xrightarrow{\mathcal{D}} \mathcal{N}\Big(0,\ \frac{\sigma_1^2(\tau)}{(L''(\tau))^2}\Big)\,.
\]
We use the identity $\sqrt{a} - \sqrt{b} = (a - b)/(\sqrt{a} + \sqrt{b})$ and a further Taylor expansion
\[
e^{\lambda t L_n(\tau_n)} = e^{\lambda t L(\tau)} + \lambda t\,[L_n(\tau_n) - L(\tau)]\,e^{\lambda t\theta_n}
\]
where $\theta_n$ lies between $L(\tau)$ and $L_n(\tau_n)$. After tedious calculations we arrived at an expression for $\sqrt{n}(C_n - C)$ in which
\[
U_n := -\tau_n\lambda t\,e^{\lambda t\theta_n}\,\sqrt{L''(\tau_n)} \to -\tau\lambda t\,e^{\lambda t L(\tau)}\,\sqrt{L''(\tau)} =: U\,,
\]
\[
V_n := \{S_{2,n}(\tau_n|\tau)\}^{-1}\Big\{\sqrt{L''(\tau)}\,\big[(e^{\lambda t L(\tau)} - 1)(1 - \tau y\,e^{y\theta_n}) + \tau\lambda t\,e^{\lambda t\theta_n}\,S_{1,n}(\tau_n|\tau)\big]
- \tau\lambda t\,y\,e^{y\theta_n + \lambda t\theta_n}\,\frac{W_{0,n}(\tau) - \sqrt{n}(\tau_n - \tau)\,S_{1,n}(\tau_n|\tau)}{\sqrt{n}}\Big\}
\]
using (24), and
\[
W_n := \tau_n\,\frac{e^{\lambda t L(\tau)} - 1}{\sqrt{L_n''(\tau_n)} + \sqrt{L''(\tau)}} \xrightarrow{\text{a.s.}} \tau\,\frac{e^{\lambda t L(\tau)} - 1}{2\sqrt{L''(\tau)}} =: W\,.
\]
As before we arrive at the limiting normal distribution of $\sqrt{n}(C_n - C)$.
6. THE COMPOUND PÓLYA DISTRIBUTION
In Embrechts, Jensen e.a. (1985) the compound Pólya distribution was approximated as well; here the number of claims $N(t)$ has a negative binomial (Pólya) distribution with exponent $k$, where $k > 1$. Under weak conditions it was shown that a saddle-point approximation (25) holds, where now $\tau$ is determined by
\[
-tL'(\tau) = y\,\Big\{1 + \frac{bt}{y}\,[1 - L(\tau)]\Big\}\,.
\]

(i) Since $-L'(0) = \mu$, we can assume that $y > \mu t$, so that the increasing function $1 + \frac{bt}{y}[1 - L(s)]$ (with value 1 at $s = 0$) intersects the decreasing function $-\frac{t}{y}L'(s)$ (with value $\mu t/y$ at $s = 0$) at some value $\tau < 0$. We simplify the notation somewhat by writing $z = y/t$.

(ii) Again $\tau_n$ is defined by $-L_n'(\tau_n) = z + b[1 - L_n(\tau_n)]$, where $-L'(\tau) = z + b[1 - L(\tau)]$ defines $\tau$. Hence elimination of $z$ yields
\[
-L_n'(\tau_n) + b\,L_n(\tau_n) = -L'(\tau) + b\,L(\tau)\,.
\]
By now the calculations are routine; we obtain
\[
\sqrt{n}(\tau_n - \tau) = \frac{W_{1,n}(\tau) + b\,W_{0,n}(\tau)}{S_{2,n}(\tau_n|\tau) + b\,S_{1,n}(\tau_n|\tau)} \xrightarrow{\mathcal{D}} \mathcal{N}(0,\kappa^2)
\]
where
\[
\kappa^2 = \frac{\operatorname{Var}\big((X+b)\,e^{-\tau X}\big)}{E^2\big[X(X+b)\,e^{-\tau X}\big]}\,.
\]
Again, very much like in the Pascal case, we find
\[
\sqrt{n}\,I_n = U_n\,W_{0,n}(\tau) + V_n\,W_{1,n}(\tau)\,,
\]
where
\[
U_n \xrightarrow{\text{a.s.}} (1 - y\tau)\,\frac{(-z)^k - a}{L''(\tau)} - \frac{k\tau z^k\,L''(\tau)}{[-L'(\tau)]^{k+1}} =: U\,E[X(X+b)e^{-\tau X}]
\]
and
\[
V_n := \frac{-\,b\,S_{1,n}(\tau_n|\tau)\,(-z)^k}{[L'(\tau)\,L'(\tau_n)]^k}\,\sum_{m=0}^{k-1}(L'(\tau_n))^m\,(L'(\tau))^{k-1-m}
\xrightarrow{\text{a.s.}} (1 - y\tau)\,\frac{(-z)^k - a}{L''(\tau)} + \frac{k\,b\,\tau\,z^k}{[-L'(\tau)]^k} =: V\,E[X(X+b)e^{-\tau X}]\,.
\]
Hence the limiting normal distribution of $\sqrt{n}(C_n - C)$ follows as in the Pascal case.
7. REMARKS
As mentioned in the summary, the relationship
\[
(25) \qquad P\Big\{\sum_{i=1}^{N} X_i > y\Big\} \sim \varphi(y,\tau,L(\tau),L'(\tau),\ldots) =: \varphi\,,
\]
where $\psi(\tau,L(\tau),\ldots) = 0$, holds for $y \to \infty$. On top of this first approximation we replaced the right hand side of (25) by
\[
\varphi_n := \varphi(y,\tau_n,L_n(\tau_n),L_n'(\tau_n),\ldots)
\]
where $L_n$ was the empirical version of $L$ and where $\tau_n$ was the sample version of the solution of the equation $\psi(\tau_n,L_n(\tau_n),\ldots) = 0$. The reader could wonder how accurate the superimposed approximations will be.
To evaluate the accuracy of (25) in itself is an important but hard problem. In some cases the proof of (25) is easy, as for the Chernoff bounds; more typically, however, the proof depends on deep theorems from the theory of stochastic processes or on intricate procedures from asymptotic analysis. Hence a second order term or even a series expansion would be desirable. Let us remark that for the compound Pólya and for the compound Poisson case, second and higher order terms are available in Embrechts, Jensen e.a. (1985).

A systematic study of the accuracy of the empirical version $\varphi_n$ in evaluating $\varphi$ is an entirely different issue, depending heavily upon simulation studies. As a simple test case we evaluated the Chernoff bound for the exponential distribution with mean 1. Then $L(s) = (1+s)^{-1}$; given $y > 1$, $\tau = y^{-1} - 1$ and $C = e^{\tau y}L(\tau) = y\,e^{1-y}$; also $I = (-1,\infty)$. After substantial simulation we could only conclude that the problem is far from trivial. For small $y$-values the accuracy was always found to be sufficient even though we encountered systematic bias; for large values of $y$, however, the accuracy is far less satisfying.
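The closed forms used in this test case are quickly verified numerically; the small script below (ours) also shows that the bound exceeds the exact tail $P[X > y] = e^{-y}$ by the factor $e\,y$:

```python
import numpy as np

for y in [1.25, 2.0, 4.0]:
    tau = 1.0 / y - 1.0                    # minimizer of exp(s*y) / (1+s)
    C = np.exp(tau * y) / (1.0 + tau)      # = y * exp(1 - y)
    assert np.isclose(C, y * np.exp(1.0 - y))
    tail = np.exp(-y)                      # exact P[X > y]
    print(y, C, tail, C / tail)            # ratio is e * y > 1: C is a valid bound
```

For $y = 4$ one already has $2\tau = -1.5 < -\sigma = -1$, so proposition 2 no longer applies; this is precisely the breakdown recorded in conclusion (ii) below.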
Here are some of our temporary conclusions:
(i) In many examples $L(s)$ and its derivatives have a vertical asymptote at $s = -\sigma$; as a result the solution of the implicit equation $\psi(\tau_n,L_n(\tau_n),\ldots) = 0$ for $\tau_n$ induces systematic bias. Also the numerical procedure (bisection, Newton-Raphson, ...) influences the accuracy, but to a lesser extent.
(ii) In all of our examples we made heavy use of proposition 2; a requirement for its applicability was however that $2\tau > -\sigma$. In our simulation example, $y$ values greater than 2 caused large fluctuations in the estimation of $\tau$ and $C$. Moreover, the larger $y$, the closer we get to the singularity of the function $L(s)$.
(iii) Presumably the most important reason for the eventual poor behavior of the sample version $\tau_n$ is this: the empirical functions $L_n(s)$ are determined by the positive sample values $X_1,X_2,\ldots,X_n$; in all our formulae, however, we use the extrapolated values of $L_n(s)$ for $s < 0$; moreover, the latter function is entire for every $n$, while $L(s)$ has a singularity at $-\sigma < 0$.
ACKNOWLEDGEMENT
We take pleasure in thanking C. Coleman for extensive simulation work that unfortunately could not be incorporated in this paper. Also thanks are due to W. Van Assche and to a referee for some helpful comments.
Part of this work was done while the second author was on sabbatical leave at the
Statistics and Applied Probability Program at the University of California, Santa
Barbara. The author extends his gratitude to J. Gani and UCSB for the kind
hospitality.
REFERENCES
BEEKMAN, J.A. (1974) : Two Stochastic Processes, Almquist & Wiksell, Stockholm.
BÜHLMANN, H. (1970) : Mathematical Methods of Risk Theory, Springer Verlag, Berlin.
CSÖRGŐ, S. (1982) : The empirical moment generating function. Coll. Math. Soc. J. Bolyai, 32, Nonparametric Statistical Inference, B.V. Gnedenko, M.L. Puri & I. Vincze eds., Amsterdam, 139-150.
EMBRECHTS, P., MAEJIMA, M. & TEUGELS, J.L. (1985) : Asymptotic behaviour of compound distributions, Astin Bulletin, 15, 45-48.
EMBRECHTS, P., JENSEN, J.L., MAEJIMA, M. & TEUGELS, J.L. (1985) : Approximations for compound Poisson and Pólya processes, Adv. Appl. Probability, 17, 623-637.
FELLER, W. (1971) : An Introduction to Probability Theory and its Applications, Vol. II, 2nd ed., J. Wiley & Sons, New York.
SERFLING, R.J. (1980) : Approximation Theorems of Mathematical Statistics, J. Wiley & Sons, New York.
SUNDT, B. (1982) : Asymptotic behaviour of compound distributions and stop loss premiums, Astin Bulletin, 13, 89-98.
TAKÁCS, L. (1967) : Combinatorial Methods in the Theory of Stochastic Processes, J. Wiley & Sons, New York.
TEUGELS, J.L. (1985) : Approximation and estimation of some compound distributions, Insurance: Math. & Econ., 4, 143-153.