* This research was partially supported by the Office of Naval Research under Contract No. N00014-67-A-0321-0002.

** University of North Carolina, Chapel Hill, and University of Lund, Sweden.
DISCRETE WAVE-ANALYSIS OF CONTINUOUS STOCHASTIC PROCESSES*

by

Georg Lindgren**

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 761

June, 1971
DISCRETE WAVE-ANALYSIS OF CONTINUOUS STOCHASTIC PROCESSES*

By

Georg Lindgren,
University of North Carolina, Chapel Hill,
and University of Lund, Sweden
1. Introduction.
In the theory of stationary stochastic processes, much effort has been devoted to the problem of finding the statistical distribution of wave-characteristics such as the time between successive crossings of a level and the time and the vertical distance between a local maximum and the following minimum, which are all important for the physical appearance of the process paths. In a very few cases, exact solutions are known; see Slepian [9] and Wong [12]. In others, one has to rely on series approximations whose accuracy is highly dependent on the covariance structure of the process; see Longuet-Higgins [7] and Lindgren [6].
One fruitful approach, founded on Kac and Slepian's horizontal window concept [3], was introduced by Slepian [10], who derived an explicit expression for the conditional process in which a zero upcrossing occurs at time zero. The same technique is used by Lindgren [4] to describe the process after a local maximum with a prescribed height u. In both cases, the choice of conditions has to be justified by delicate ergodic arguments.

The aim of this paper is to derive similar representations considering only a sampled version of the process. It is also shown that, if the continuous process is sufficiently regular, then these discrete representations and those in [4] and [10] coincide in the limit as the sampling interval tends to zero.
Numerical illustrations are given in Section 4. Previously, Tick and Shaman [11] have shown that the expected numbers of crossings, maxima, etc., per time unit are approximately the same for the continuous and the discrete version if the process is known to be of the highly regular band-pass spectral type. One minor difference is that extremely high maxima are likely to be overlooked, which causes an excess of maxima of moderate height in the sampled process.
In general, the discrete approach has two main advantages. Firstly, regardless of the regularity of the process, it is closer to the practical situation in which crossings, maxima, etc., are identified in a sampled process. Secondly, and more important in the non-regular case, it contains no reference to the "infinitesimal" properties of the random process. With a continuous approach, the covariance derivatives r(0), r''(0), r^IV(0), etc., or equivalently the spectral moments λ_0, λ_2, λ_4, ..., play a heavy role. These quantities are often difficult to estimate from a record of a realization of the process which is either sampled or affected by the impact of a filtering device. In both cases, one has to make far-reaching assumptions in order to extrapolate to the behaviour of the covariance function at the origin or the spectral function at infinity.
This vague statement can be exemplified as follows. Suppose the continuous process ξ(t) has mean zero and the covariance function

    r(t) = E(ξ(t)·ξ(0))

with the spectral density

    f(λ) = (1/2π) ∫_{-∞}^{∞} r(t) exp(-iλt) dt.

Then, sampling at discrete time points {kd, k=0,±1,±2,...} yields accurate estimates of the discrete covariance function {r(kd), k=0,±1,±2,...}, while a linear filter yields estimates of f(λ)·|g(λ)|², where g is the "gain" function of the filter.
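The first case can be illustrated concretely: the discrete covariance function {r(kd)} is estimated directly from a sampled record by lag products. The sketch below is an illustration only, not part of the original computations; it uses the band-pass covariance r(t) = sin(√3 t)/(√3 t) of Example 3 as a stand-in, and the random-phase cosine sum is an assumed generator for the process.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in covariance (Example 3): r(t) = sin(sqrt(3) t)/(sqrt(3) t),
# i.e. a spectrum uniform on (-sqrt(3), sqrt(3)).
def r(t):
    return np.sinc(np.sqrt(3) * np.asarray(t) / np.pi)  # sinc(x) = sin(pi x)/(pi x)

# Approximate spectral representation with random frequencies and phases;
# one long path of this sum behaves like the stationary process.
m   = 1000
lam = np.sqrt(3) * rng.random(m)           # frequencies in (0, sqrt(3))
phi = 2 * np.pi * rng.random(m)

d  = 0.25                                  # sampling interval
ts = np.arange(5000) * d
x  = np.sqrt(2.0 / m) * np.cos(np.outer(ts, lam) + phi).sum(axis=1)

# Lag-k mean products estimate the discrete covariance function r(kd).
ks    = np.arange(6)
r_hat = np.array([np.mean(x[: len(x) - k] * x[k:]) for k in ks])
```

The lag-product estimates agree with r(kd) up to sampling error, while nothing in the record determines the behaviour of r between the lattice points.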
In both cases, it is possible to extrapolate to the values of

    r^(2n)(0)   or   λ_{2n} = ∫ λ^{2n} f(λ) dλ

only if one imposes severe constraints on r or f. To use these quantities to make probability statements about the number of crossings, the wave-height, etc., for the continuous process can be practically meaningless. In the irregular case when r is not differentiable at the origin (λ_2 = ∞) the continuous approach breaks down. The sampling technique can still yield useful results, as is illustrated in Example 4 in Section 4.
2. Discrete and continuous conditional processes. Let {ξ(t), t ∈ R} be a stationary, zero-mean, Gaussian process with the covariance function r, i.e.,

    r(t) = Cov(ξ(s), ξ(s+t))   for all s,

and let {ξ_k, k=0,±1,±2,...} be the same process observed at the discrete time points ..., -2d, -d, 0, d, 2d, ... with a sampling interval d > 0; thus ξ_k = ξ(kd). Also write

    r_k = r(kd) = Cov(ξ_i, ξ_{i+k}).
Then we say that a zero upcrossing occurs in the sampled process at time k if ξ_{k-1} < 0 < ξ_k. Similarly, we say that a maximum with height in [u, u+h] occurs if

    ξ_{k-1} < ξ_k,   ξ_{k+1} < ξ_k,   u ≤ ξ_k ≤ u+h.

Then, for any set of integers k_1,...,k_n, we can compute the conditional distributions of (ξ_k, k = k_1,...,k_n) given, say, a zero upcrossing or a maximum with height in [u, u+h] for k = 0. Considering the limiting case h ↓ 0, we say that we have the conditional distributions given a maximum with height u at time 0.
The representations in the following two theorems describe explicitly what happens with the process ξ_k after a zero upcrossing or a maximum.

THEOREM 1. The conditional process {ξ_k | zero upcrossing for k = 0} has the same distribution functions as the process

(2.1)    ξ*_k = ξ^d (r_k + r_{k+1})/(1 + r_1) + η^d (d/2)(r_k - r_{k+1})/(1 - r_1) + κ_k,   k = 0, ±1, ±2, ...,

where {κ_k, k=0,±1,±2,...} is a nonstationary Gaussian sequence with mean zero and the covariance function

    R_{i,j} = Cov(κ_i, κ_j) = r_{i-j} - (1/(1-r_1²)){r_i r_j - r_1 r_{i+1} r_j - r_1 r_i r_{j+1} + r_{i+1} r_{j+1}},

and ξ^d and η^d are random variables, independent of the process κ and with the joint density

    f^d_{ξ,η}(x,y) = (d/(√(1-r_1²) arccos r_1)) exp{-x²/(1+r_1) - d²y²/(4(1-r_1))}   for 2|x| < dy,
                   = 0   otherwise.
PROOF. Define ξ = (ξ_0 + ξ_{-1})/2 and η = (ξ_0 - ξ_{-1})/d. Then the random variable (ξ, η, ξ_{k_1}, ..., ξ_{k_n}) is normal with mean zero and the covariances

    Cov(ξ, η) = 0,   V(ξ) = (1/2)(1 + r_1),   V(η) = 2d^{-2}(1 - r_1),
    Cov(ξ, ξ_{k_i}) = (1/2)(r_{k_i} + r_{k_i+1}),   Cov(η, ξ_{k_i}) = d^{-1}(r_{k_i} - r_{k_i+1}),
    Cov(ξ_{k_i}, ξ_{k_j}) = r_{k_i - k_j}.

Let us first find out what happens if ξ and η assume the values x and y. Then the conditional n-variate normal distribution of (ξ_{k_1}, ..., ξ_{k_n}) has mean and covariances (see Rao [8], p. 441)

    E(ξ_{k_i} | ξ = x, η = y) = x (r_{k_i} + r_{k_i+1})/(1 + r_1) + y (d/2)(r_{k_i} - r_{k_i+1})/(1 - r_1),
    Cov(ξ_{k_i}, ξ_{k_j} | ξ = x, η = y) = R_{k_i,k_j}.

Thus, if κ_k is a sequence with the covariance function R, then (ξ_k, k = k_1,...,k_n | ξ = x, η = y) has the same distribution as

    x (r_k + r_{k+1})/(1 + r_1) + y (d/2)(r_k - r_{k+1})/(1 - r_1) + κ_k.

Since ξ_{-1} < 0 < ξ_0 if and only if 2|ξ| < dη, we have now only to give x and y the conditional distribution of (ξ, η | 2|ξ| < dη), which has the density f^d_{ξ,η}(x,y), where the constant by integration is found to be d/(√(1-r_1²) arccos r_1).
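The representation (2.1) lends itself directly to simulation: draw (ξ^d, η^d) from the truncated normal density by rejection, draw the residual sequence κ from its covariance R, and combine. The sketch below is an illustration under stated assumptions: the covariance r_3 of Example 3 is used as a stand-in, and the rejection step and the diagonal jitter are implementation choices, not part of the theorem.

```python
import numpy as np

rng = np.random.default_rng(2)

def r(t):
    # stand-in covariance (Example 3): sin(sqrt(3) t)/(sqrt(3) t)
    return np.sinc(np.sqrt(3) * np.asarray(t) / np.pi)

d  = 0.25
ks = np.arange(-1, 9)                  # simulate xi*_k for k = -1, 0, ..., 8
rk = lambda k: r(np.asarray(k) * d)
r1 = float(rk(1))

def R(i, j):
    # covariance of the residual sequence kappa in Theorem 1
    return (rk(i - j)
            - (rk(i) * rk(j) - r1 * rk(i + 1) * rk(j)
               - r1 * rk(i) * rk(j + 1) + rk(i + 1) * rk(j + 1))
              / (1 - r1 ** 2))

Rmat = np.array([[R(i, j) for j in ks] for i in ks])
Rmat += 1e-10 * np.eye(len(ks))        # jitter: Rmat is singular at k = -1, 0

def draw_xi_eta(n):
    # (xi, eta) are independent N(0,(1+r1)/2) and N(0,2(1-r1)/d**2),
    # conditioned on 2|xi| < d*eta (i.e. on xi_{-1} < 0 < xi_0)
    out = []
    while len(out) < n:
        x = rng.normal(0.0, np.sqrt((1 + r1) / 2))
        y = rng.normal(0.0, np.sqrt(2 * (1 - r1)) / d)
        if 2 * abs(x) < d * y:
            out.append((x, y))
    return np.array(out)

n     = 500
xe    = draw_xi_eta(n)
kappa = rng.multivariate_normal(np.zeros(len(ks)), Rmat, size=n)
alpha = (rk(ks) + rk(ks + 1)) / (1 + r1)
beta  = (d / 2) * (rk(ks) - rk(ks + 1)) / (1 - r1)
paths = xe[:, :1] * alpha + xe[:, 1:] * beta + kappa   # realizations of (2.1)
```

Every simulated path then satisfies ξ*_{-1} < 0 < ξ*_0, i.e., a zero upcrossing at time k = 0, as the theorem requires.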
Before we can state a similar theorem for the maximum case, we need some definitions. Let

    Σ_11 = [ 1                 0                   2d^{-2}(1-r_1)      ]
           [ 0                 (1/2)d^{-2}(1-r_2)  0                   ]
           [ 2d^{-2}(1-r_1)    0                   d^{-4}(6-8r_1+2r_2) ],

    Σ_12 = [ r_{k_1}                                 ...  r_{k_n}                                 ]
           [ (1/2)d^{-1}(r_{k_1-1} - r_{k_1+1})      ...  (1/2)d^{-1}(r_{k_n-1} - r_{k_n+1})      ]
           [ d^{-2}(2r_{k_1} - r_{k_1-1} - r_{k_1+1}) ... d^{-2}(2r_{k_n} - r_{k_n-1} - r_{k_n+1}) ],

    Σ_22 = (r_{k_i - k_j})_{i,j=1,...,n},

and write Σ_21 = Σ_12', Σ_{2·1} = Σ_22 - Σ_21 Σ_11^{-1} Σ_12, and Γ = Σ_11^{-1} Σ_12, with rows Γ_1, Γ_2, Γ_3. The i:th elements of the three rows of Γ are explicit functions of r_{k_i-1}, r_{k_i}, r_{k_i+1}. If we denote the i:th element of Γ_1 by A_{k_i}, then (A_k)_{k=-∞}^{∞} is a real sequence such that (Γ_1)_i = A_{k_i}. In the same way, we can define sequences (B_k^y)_{k=-∞}^{∞}, (B_k^z)_{k=-∞}^{∞} and (C_{i,j})_{i,j=-∞}^{∞} such that (Γ_2)_i = B_{k_i}^y, (Γ_3)_i = -B_{k_i}^z, (Σ_{2·1})_{ij} = C_{k_i,k_j}. The reason why we chose the minus sign for B^z will be apparent later on. Also write

    m(u) = 2u d^{-2}(1-r_1),   σ_1² = (1/2)d^{-2}(1-r_2),   σ_2² = d^{-4}(6-8r_1+2r_2) - 4d^{-4}(1-r_1)².
THEOREM 2. The conditional process {ξ_k | maximum with height u for k = 0} has the same distribution functions as the process

(2.2)    ξ_{u,k} = u A_k + η_u^d B_k^y - ζ_u^d B_k^z + δ_k,   k = 0, ±1, ±2, ...,

where {δ_k, k = 0,±1,±2,...} is a non-stationary, zero mean, Gaussian sequence with the covariance function Cov(δ_i, δ_j) = C_{i,j}, and η_u^d and ζ_u^d are random variables, independent of the process δ and with the joint density

    f^d_{η_u,ζ_u}(y,z) = exp{-(1/2)y²/σ_1² - (1/2)(z - m(u))²/σ_2²}/k_u^d   for 2|y| ≤ dz,
                       = 0   otherwise,

where

    k_u^d = 2π σ_1 ∫_0^∞ {2Φ(dz/(2σ_1)) - 1} φ((z - m(u))/σ_2) dz,

and φ and Φ are the standardized normal density and distribution functions.
PROOF. If we define

    ξ' = ξ_0,   η' = (ξ_1 - ξ_{-1})/(2d),   ζ' = (2ξ_0 - ξ_{-1} - ξ_1)/d²,

then the matrix

    Σ = [ Σ_11  Σ_12 ]
        [ Σ_21  Σ_22 ]

is the covariance matrix of the (n+3)-variate normal variable (ξ', η', ζ', ξ_{k_1}, ..., ξ_{k_n}), whereas Σ_21 Σ_11^{-1} (u,y,z)' and Σ_{2·1} = Σ_22 - Σ_21 Σ_11^{-1} Σ_12 are the conditional mean and covariance matrices, respectively, for ξ_k, k = k_1,...,k_n, given

(2.3)    ξ' = u,   η' = y,   ζ' = z.

But the i:th element of the conditional mean is u A_{k_i} + y B_{k_i}^y - z B_{k_i}^z, and we can proceed as in Theorem 1 and define the sequence δ_k with the covariance function C. To find the proper distribution of the coefficients in (2.3), we observe that the event ξ_{-1} < ξ_0, ξ_1 < ξ_0, ξ_0 = u is equivalent to ξ' = u, 2|η'| < dζ'. Since the conditional distributions of η' and ζ' given ξ' = u are independent and normal with means 0 and m(u) and variances σ_1² and σ_2², we get that the distribution of (η', ζ' | ξ' = u, 2|η'| < dζ') has just the density

    constant · exp{-(1/2)y²/σ_1² - (1/2)(z - m(u))²/σ_2²}   for 2|y| < dz,
             = 0   otherwise,

and the theorem is proved.
It is convenient to introduce the processes ξ*^d and ξ_u^d, so that ξ*^d(t) and ξ_u^d(t) are defined by (2.1) and (2.2) respectively for t = dk, k = 0,±1,±2,..., and by linear approximation for other t-values. To stick to the discrete approach, we still define crossings, maxima, etc., as before, i.e., if ξ*^d(d(k-1)) < 0 < ξ*^d(dk) then ξ*^d is said to have a zero upcrossing at time dk, although the crossing actually occurs somewhere in the interval (d(k-1), dk).
We shall now present the results of Slepian and Lindgren concerning conditional continuous processes. Consider the continuous time process {ξ(t), t ∈ R}. We say that ξ has a zero upcrossing at t_0 if, for some ε > 0, ξ(t) ≤ 0 for t ∈ (t_0 - ε, t_0) and ξ(t) ≥ 0 for t ∈ (t_0, t_0 + ε). We also have that ξ has a local maximum in (0, h') with height in [u, u+h] if the derivative ξ' has a zero downcrossing at some t ∈ (0, h') for which ξ(t) ∈ [u, u+h]. As in the discrete case we can then, for any set of times t_1,...,t_n, define the conditional distribution of (ξ(t_i), i=1,...,n) given, say, a zero upcrossing in (0, h') (resp. maximum in (0, h') with height in [u, u+h]). Again considering the limiting cases h, h' ↓ 0, we arrive at "the conditional distributions of the process at times t_1,...,t_n given a zero upcrossing (resp. local maximum with height u) for t = 0".
THEOREM 3. (Slepian). If -r''(0) = λ_2 < ∞, then the conditional process {ξ(t) | zero upcrossing for t = 0} has the same distribution functions as the process

(2.4)    ξ*(t) = -η r'(t)/λ_2 + κ(t),

where {κ(t), t ∈ R} is a non-stationary, zero-mean, Gaussian process with the covariance function

    R(s,t) = Cov(κ(s), κ(t)) = r(s-t) - r(s)r(t) - r'(s)r'(t)/λ_2,

and η is a random variable, independent of the process κ and with the probability density

    f_η(y) = (y/λ_2) exp{-y²/(2λ_2)},   y > 0.
For a similar maximum theorem, assume -r''(0) = λ_2, r^IV(0) = λ_4 < ∞ and define

    A(t) = (λ_4 r(t) + λ_2 r''(t))/(λ_4 - λ_2²),
    B(t) = (λ_2 r(t) + r''(t))/(λ_4 - λ_2²),
    C(s,t) = r(s-t) - [λ_2(λ_4 - λ_2²)]^{-1} {λ_2 λ_4 r(s)r(t) + λ_2² r(s)r''(t) + λ_2² r''(s)r(t) + λ_2 r''(s)r''(t) + (λ_4 - λ_2²) r'(s)r'(t)}.

THEOREM 4. If -r''(0) = λ_2, r^IV(0) = λ_4 < ∞, then the conditional process {ξ(t) | maximum with height u for t = 0} has the same distribution functions as the process

(2.5)    ξ_u(t) = u A(t) - ζ_u B(t) + Δ(t),

where {Δ(t), t ∈ R} is a non-stationary, zero-mean, Gaussian process with the covariance function C(s,t) = Cov(Δ(s), Δ(t)), and ζ_u is a random variable, independent of the process Δ, with a probability density f_{ζ_u}(z), z > 0, which, together with its norming constant k(u), can be expressed in terms of φ and Φ.
3. Discrete versus continuous approach. As we have seen, the discrete conditional processes ξ*^d and ξ_u^d are of elementary character and their distributions are well suited for quite obvious frequency interpretations (see Remark 3 below). Their logical simplicity is, however, coupled with a fairly complex probabilistic structure, including two dependent random variables and a random sequence, and this makes them less suitable for numerical work. This drawback does not apply to the continuous processes ξ* and ξ_u, which contain only one random variable and a stochastic process. The real meaning of ξ* and ξ_u, defined by (2.4) and (2.5), is, however, not clear from their definition, and, therefore, it is desirable to derive them directly from ξ*^d and ξ_u^d. This is done in the following two theorems; see also the Appendix. Thus ξ* and ξ_u can be regarded as limiting cases of the easily comprehensible processes ξ*^d and ξ_u^d.

THEOREM 5. If -r''(0) = λ_2 < ∞ and if dk_i → t_i, i = 1,...,n, as d → 0, then

    (ξ*^d(dk_i), i=1,...,n) →^L (ξ*(t_i), i=1,...,n).
PROOF. Let →^L denote convergence in law. By expanding in Taylor series, it is directly shown that, as d → 0, R_{k_i,k_j} → R(t_i, t_j), which in turn implies that

    (κ_{k_i}, i=1,...,n) →^L (κ(t_i), i=1,...,n).

Also, the coefficients of ξ^d and η^d in (2.1) tend to r(t_i) and -r'(t_i)/λ_2, respectively. The marginal distribution of η^d has the density

    f^d(y) = ∫_{-dy/2}^{dy/2} f^d_{ξ,η}(x,y) dx ~ (d²y/(√(1-r_1²) arccos r_1)) exp{-d²y²/(4(1-r_1))} → f_η(y)

as d ↓ 0. The convergence is easily seen to be dominated, so that actually η^d →^L η. Since |ξ^d| ≤ (1/2)dη^d, we also get that ξ^d → 0 in probability. The theorem is proved if we apply the n-variate Cramér-Slutsky theorem.
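The limit η^d →^L η can be checked numerically: for small d, rejection draws of η^d should already follow the Rayleigh-type density f_η(y) = (y/λ_2) exp(-y²/(2λ_2)). The sketch below is an illustration only, with the band-pass covariance r_3 of Example 3 (for which λ_2 = 1) as a stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)

def r(t):
    return np.sinc(np.sqrt(3) * np.asarray(t) / np.pi)   # r3; lambda_2 = 1

def draw_eta(n, d):
    # vectorized rejection sampling of eta^d: independent normals
    # (xi, eta), kept when 2|xi| < d*eta
    r1 = float(r(d))
    kept, total = [], 0
    while total < n:
        x = rng.normal(0.0, np.sqrt((1 + r1) / 2), size=50_000)
        y = rng.normal(0.0, np.sqrt(2 * (1 - r1)) / d, size=50_000)
        acc = y[2 * np.abs(x) < d * y]
        kept.append(acc)
        total += acc.size
    return np.concatenate(kept)[:n]

eta = draw_eta(20_000, d=0.05)

# f_eta has mean sqrt(pi*lambda_2/2) and second moment 2*lambda_2.
mean_eta = eta.mean()
m2_eta   = np.mean(eta ** 2)
```

The sample mean should be near √(π λ_2 / 2) and the second moment near 2λ_2, the moments of f_η.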
REMARK 1. The density f_η(y), y > 0, can be regarded as "the density of the derivative at a randomly chosen zero upcrossing".

REMARK 2. The normed average d^{-1} ξ^d has the limiting density (2π/λ_2)^{1/2} {1 - Φ(2|x|/λ_2^{1/2})}.
REMARK 3. There are two possible frequency interpretations of P(ξ*^d(dk_i) ≤ x_i, i=1,...,n). Firstly, it is equal to a certain relative frequency in the classical situation in which one, from several independent realizations of ξ, picks out those for which ξ_{-1} < 0 < ξ_0, and looks at their values at dk_1,...,dk_n. The second interpretation is just as natural, namely when one has one single very long realization and picks out all zero upcrossings of ξ_k and looks at the values of the process after additional k_1,...,k_n sampling steps; the process is supposed to be ergodic.
THEOREM 6. If -r''(0) = λ_2, r^IV(0) = λ_4 < ∞ and if dk_i → t_i, i = 1,...,n, as d → 0, then

    (ξ_u^d(dk_i), i=1,...,n) →^L (ξ_u(t_i), i=1,...,n).

The proof proceeds exactly as the previous one and is omitted.
4. Numerical illustrations. In this section, we give some examples of how the sampled processes ξ*^d and ξ_u^d can be used to describe the behaviour of the continuous processes ξ* and ξ_u, and vice versa, with respect to the zero distance T* and the wave-length T_u:

    T* = the time for the first zero downcrossing by ξ* (resp. ξ*^d),
    T_u = the time for the first local minimum of ξ_u (resp. ξ_u^d).

We use the technique with moment approximations which has proved useful in many cases (cf. [7] and [6]). Let us start with T* and define

    N_k^d = the number of zero downcrossings by ξ*^d(dj) for j = 1,...,k,
    N_x = the number of zero downcrossings by ξ*(t) for 0 < t ≤ x.

Also define X_k^d = 1 if ξ*^d(d(k-1)) > 0 > ξ*^d(dk), and p*^d(k) = P(X_k^d = 1).
Then

    P(T*^d ≤ dk) = P(N_k^d ≥ 1) ≤ Σ_{j=1}^k p*^d(j).

We see that p*^d(k) is a good approximation for P(T*^d = dk) as long as (1/2)E(N_k^d(N_k^d - 1)) remains small (which also means that the probability of more than one downcrossing in (0, dk) is small). The probability p*^d(k) can be easily calculated by the formula

    p*^d(k) = ∫∫_{2|x| ≤ dy} P(α_{k-1} x + β_{k-1} y + κ_{k-1} > 0 > α_k x + β_k y + κ_k) f^d_{ξ,η}(x,y) dx dy,

where α_k and β_k are the factors for ξ^d and η^d in ξ*_k.
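The probabilities p*^d(k) also have a direct Monte Carlo counterpart via the second frequency interpretation of Remark 3: pick all zero upcrossings of one long sampled path and tabulate the number of steps to the first following downcrossing. The sketch below is illustrative only; the band-pass covariance r_3 of Example 3 and the random-phase generator are stand-ins, not the computations actually used for the figures.

```python
import numpy as np

rng = np.random.default_rng(4)

# Long sampled path of an (approximately) Gaussian band-pass process,
# generated as a random-phase cosine sum (implementation stand-in).
m   = 1000
lam = np.sqrt(3) * rng.random(m)
phi = 2 * np.pi * rng.random(m)
d   = 0.25
ts  = np.arange(60_000) * d

x = np.empty(ts.size)
for i in range(0, ts.size, 4096):              # chunked to limit memory use
    blk = ts[i:i + 4096]
    x[i:i + 4096] = np.sqrt(2.0 / m) * np.cos(np.outer(blk, lam) + phi).sum(axis=1)

# Zero upcrossing at step i+1 when x[i] < 0 < x[i+1]; downcrossing likewise.
up = np.flatnonzero((x[:-1] < 0) & (x[1:] > 0))
dn = np.flatnonzero((x[:-1] > 0) & (x[1:] < 0))

# Steps from each upcrossing to the first following downcrossing.
j     = np.searchsorted(dn, up)
ok    = j < dn.size
steps = dn[j[ok]] - up[ok]
p_hat = np.bincount(steps) / steps.size        # empirical analog of p*^d(k)
```

The histogram p_hat plays the role of the probabilities p*^d(k) in the diagrams.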
Similar relations hold in the continuous case:

    P(T* ≤ x) = P(N_x ≥ 1) ≤ E(N_x) = ∫_0^x f_{1*}(t) dt,
    P(T* ≤ x) ≥ E(N_x) - (1/2)E(N_x(N_x - 1)) = ∫_0^x f_{2*}(t) dt,

say. Here f_{1*} and f_{2*} can be used to approximate the density f* of T*, and they are also easy to compute. We recapitulate the formulas for them, given by Longuet-Higgins [7]; let σ_0², σ_t², and ρ_t be the conditional variances and correlation coefficient for ξ'(0), ξ'(t) given that ξ(0) = ξ(t) = 0. Then

    f_{1*}(t) = (σ_0 σ_t / (2π λ_2^{1/2} (1 - r²(t))^{1/2})) {(1 - ρ_t²)^{1/2} - ρ_t arccos ρ_t}.

Similarly, let Σ_{τt} be the covariance matrix of ξ(0), ξ(τ), ξ(t), and let μ_0², μ_τ², μ_t² and the conditional correlation matrix be those of ξ'(0), ξ'(τ), ξ'(t) given that ξ(0) = ξ(τ) = ξ(t) = 0. Also let s_1 = arccos of the corresponding conditional correlation, and define s_2, s_3, a_2, a_3 by cyclical permutation of indices; f_{2*}(t) is then obtained as an integral over τ ∈ (0, t) of an explicit expression in μ_0, μ_τ, μ_t, det Σ_{τt}, and s_1, s_2, s_3, for which we refer to [7].

REMARK 4. The discrete probability p*^d(k) is the analog of the integral of f* over dk ± (1/2)d. To facilitate comparisons, p*^d is represented in the diagrams as the area of a rectangle with height p*^d(k)/d.

EXAMPLE 1. In Figure 1, we compare p*^d(k) with f_{1*} and f_{2*} for different d-values for the covariance function r_1. The agreement between the two approaches is striking even for such big sampling intervals as d = 0.5.
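The approximation f_{1*} can be evaluated numerically for any sufficiently smooth covariance by straightforward normal conditioning. The sketch below does this for the band-pass covariance r_3(t) = sin(√3 t)/(√3 t) of Example 3 (for which λ_2 = 1), since the formula for r_1 is not reproduced here; the derivative expressions are particular to this covariance, and the whole block is an illustration, not the original computation.

```python
import numpy as np

a    = np.sqrt(3.0)
lam2 = 1.0                                   # lambda_2 = -r''(0) = a**2/3

def r(t):   return np.sin(a*t) / (a*t)
def rp(t):  return np.cos(a*t)/t - np.sin(a*t)/(a*t**2)
def rpp(t): return -a*np.sin(a*t)/t - 2*np.cos(a*t)/t**2 + 2*np.sin(a*t)/(a*t**3)

def f1_star(t):
    # conditional moments of (xi'(0), xi'(t)) given xi(0) = xi(t) = 0
    rv, rd, rdd = r(t), rp(t), rpp(t)
    Svv = np.array([[1.0, rv], [rv, 1.0]])           # values
    Sdv = np.array([[0.0, -rd], [rd, 0.0]])          # derivative-value
    Sdd = np.array([[lam2, -rdd], [-rdd, lam2]])     # derivatives
    C   = Sdd - Sdv @ np.linalg.solve(Svv, Sdv.T)
    s0, st = np.sqrt(C[0, 0]), np.sqrt(C[1, 1])
    rho = np.clip(C[0, 1] / (s0 * st), -1.0, 1.0)
    return (s0 * st
            * (np.sqrt(1.0 - rho**2) - rho * np.arccos(rho))
            / (2.0 * np.pi * np.sqrt(lam2 * (1.0 - rv**2))))

ts = np.linspace(0.05, 6.0, 240)
f1 = np.array([f1_star(t) for t in ts])
```

As expected, f_{1*} vanishes near t = 0 (an immediate downcrossing after an upcrossing is essentially impossible) and settles near 1/2π for large t.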
EXAMPLE 2. Figure 2 illustrates the results for the covariance function r_2. The characteristic angle in f_{1*} for t ≈ 0.3 is clearly seen for d = 0.25. (See Figures 1 and 2 at the end of this section.)
We now turn to the wave-lengths T_u and T_u^d after a maximum with height u. If

    p_u^d(k) = P(ξ_u^d(d(k-1)) > ξ_u^d(dk), ξ_u^d(d(k+1)) > ξ_u^d(dk))

and K_k^d (resp. M_x) is the number of local minima of ξ_u^d(dj) for j = 1,...,k (resp. of ξ_u(t) for 0 < t ≤ x), then, as before,

    P(T_u^d ≤ dk) ≤ Σ_{j=1}^k p_u^d(j),
    P(T_u^d ≤ dk) ≥ Σ_{j=1}^k p_u^d(j) - S_k,

where S_k = (1/2)E(K_k^d(K_k^d - 1)). Thus p_u^d(k) is a good approximation for P(T_u^d = dk) as long as S_k is small. Similarly,

    P(T_u ≤ x) ≤ ∫_0^x f_{1u}(t) dt,   P(T_u ≤ x) ≥ ∫_0^x f_{2u}(t) dt,

which approximate the density f_u of T_u. The functions f_{1u} and f_{2u} cannot be as explicitly expressed as f_{1*} and f_{2*}, and we refer the reader to [6], in which defining integrals can be found.
EXAMPLE 3. In Figure 3a-c, we compare p_u^d with f_{1u} (and f_{2u}) for the covariance function

    r_3(t) = sin(√3 t)/(√3 t),

representing a band-pass spectrum over (0, √3). For the moderate maximum height u = 1, a sampling distance of d = 0.25 seems quite sufficient, but for higher u-values (u = 3) a smaller d is needed to catch the narrow distribution of T_u. (See Figures 3a, 3b, 3c at the end of this section.)
The covariance functions so far have all been of the regular type with finite λ_2 and λ_4 respectively, and we have seen a good agreement between the continuous and the sampled version. We now turn to the non-regular case in which the covariance function fails to be differentiable at the origin.
EXAMPLE 4. The covariance function

    r_4(t) = (1-c) sin(√3 t)/(√3 t) + c exp(-|t|)

represents a process of the band-pass type with an independent superposed Markov process. For small c, the function r_4 is hardly distinguishable from r_3, at least when t is bounded away from zero. Since r_4 has an infinite second derivative at the origin, the expected number of level-crossings per time unit is infinite. In every neighbourhood of a time t at which ξ(t) = u, the process ξ crosses the level u infinitely often, and thus both the zero-crossing distance and the wave-length degenerate to zero in the continuous approach.

In the sampled version, however, the concepts of zero-crossing distance and wave-length still make sense, and we can calculate the probabilities p_u^d as before. Figure 4 shows p_u^d for c = 0.02, u = 1, and it is also compared with f_{1u} (f_{2u}) for the pure band-pass process with covariance function r_3. The effect of the "disturbing" Markov process is easily seen for d = 0.25 but is hardly visible for bigger sampling distances.

The evident conclusion is that in the presence of an irregular disturbance over a regular process one can "filter out" the disturbance by sampling with an appropriate sampling interval and still get a good idea of the appearance of the "pure" process.
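The "filtering by sampling" effect can be seen on the covariance functions themselves: at the sampling lattice the two covariances differ only by terms of order c, while the finite-difference estimate 2(1 - r(d))/d² of λ_2 converges for r_3 but diverges like 2c/d for r_4. A short illustrative sketch:

```python
import numpy as np

c = 0.02

def r3(t):
    t = np.asarray(t, dtype=float)
    return np.sinc(np.sqrt(3) * t / np.pi)     # sin(sqrt(3) t)/(sqrt(3) t)

def r4(t):
    t = np.asarray(t, dtype=float)
    return (1 - c) * r3(t) + c * np.exp(-np.abs(t))

# At the sampling distances used in the figures the two covariances
# are practically indistinguishable:
ds       = np.array([1.0, 0.5, 0.25])
max_diff = np.max(np.abs(r4(ds) - r3(ds)))     # of order c

# The finite-difference estimate of lambda_2 behaves very differently:
def lam2_est(r, d):
    return 2.0 * (1.0 - r(d)) / d**2

est3_fine = lam2_est(r3, 1e-4)   # tends to lambda_2 = 1 for the regular r3
est4_fine = lam2_est(r4, 1e-4)   # blows up like 2c/d for the non-regular r4
```

This is the extrapolation problem of Section 1 in miniature: the lattice values determine r_3 and r_4 equally well, but not the behaviour at the origin.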
This again emphasizes the need for careful appraisement of the model when dealing with practical crossing problems and estimated covariance functions.
[Figure 1. Discrete and continuous zero-distance for r_1: p*^d (d = 0.25, 0.50, 1.00) compared with f_{1*} and f_{2*}.]

[Figure 2. Discrete and continuous zero-distance for r_2: p*^d (d = 0.25, 0.50, 1.00) compared with f_{1*} and f_{2*}.]

[Figure 3a. Discrete and continuous wave-length for r_3, max-height u = 1: p_u^d (d = 0.25, 0.50, 1.00) compared with f_{1u} and f_{2u}.]

[Figure 3b. Discrete and continuous wave-length for r_3, max-height u = 1: p_u^d compared with f_{1u} and f_{2u}.]

[Figure 3c. Discrete and continuous wave-length for r_3, max-height u = 3: p_u^d (d = 0.125, 0.25) compared with f_{1u} and f_{2u}.]

[Figure 4. Discrete wave-length p_u^d for r_4 (c = 0.02), max-height u = 1, compared with the continuous wave-length densities for r_3.]
APPENDIX: Weak convergence of probability measures.

In Section 3, we proved by elementary methods the convergence of the finite dimensional distributions of the sampled processes ξ*^d and ξ_u^d to those of ξ* and ξ_u. Now we will show some more sophisticated statements about weak convergence of probability measures which will enable us to prove the convergence in law of the zero-crossing distance T*^d and the wave-length T_u^d to T* and T_u.

We start with ξ*^d and T*^d. Let C be the metric space of continuous functions on the interval [0,T] with the distance d(ω, ω̃) = sup_{[0,T]} |ω(t) - ω̃(t)|, and let 𝒞 be the smallest σ-algebra that contains all open sets. Since ξ*^d obviously has continuous sample paths with probability one, and the same is true for ξ* under the given assumptions (they are actually continuously differentiable, cf. Lemma 1.1 in [5]), we can define unique probability measures μ*^d and μ* on {C, 𝒞} such that

    μ*^d{ω ∈ C; ω(t_i) ≤ x_i, i=1,...,n} = P(ξ*^d(t_i) ≤ x_i, i=1,...,n),

with a similar relation holding for μ*.
A set A in 𝒞 is called μ*-continuous if μ*(∂A) = 0, where ∂A is the boundary of A. Then the definition of weak convergence on {C, 𝒞} can be stated as follows: μ*^d ⟹ μ* (μ*^d converges weakly to μ*) as d → 0 if μ*^d(A) → μ*(A) for every μ*-continuous set A. When μ*^d and μ* are measures for stochastic processes ξ*^d and ξ*, we have the following sufficient conditions for weak convergence (see Billingsley [1, p. 95]):

1) The finite dimensional distributions of ξ*^d converge to those of ξ*.

2a) sup_d E|ξ*^d(0)|² < ∞.

2b) There is a K such that for any t, t+h ∈ [0,T],

    E|ξ*^d(t) - ξ*^d(t+h)|² ≤ Kh².
THEOREM A1. If -r''(0) = λ_2 < ∞, then μ*^d ⟹ μ* on {C, 𝒞} as d → 0.

PROOF. 1) Since all involved functions are continuous, Theorem 5 gives the convergence of the finite dimensional distributions.

2a) Since E(κ_0) = 0 and V(κ_0) = R_{0,0} = 0, we have κ_0 = 0 almost surely, so that ξ*^d(0) = ξ^d + η^d d/2 almost surely, and 2a) is fulfilled.

2b) It suffices to show that 2b) holds for t, t+h of the form t = dk, t+h = d(k+i). To see this, we observe that if dk ≤ t < d(k+1) and d(k+i) ≤ t+h < d(k+i+1), then the difference |ξ*^d(t) - ξ*^d(t+h)| is bounded by the maximum of four differences of the type |ξ*^d(dk) - ξ*^d(d(k+i))|, and, for any random variables ζ_1,...,ζ_4, it holds that E(max(ζ_1²,...,ζ_4²)) ≤ 4 max(E(ζ_1²),...,E(ζ_4²)). But for t, t+h of the said type we have, if we write α_k and β_k for the coefficients of ξ^d and η^d in (2.1),

    E|ξ*^d(t) - ξ*^d(t+h)|² = V + E² = V(ξ^d(α_k - α_{k+i}) + η^d(β_k - β_{k+i})) + V(κ_k - κ_{k+i}) + E²(η^d)·(β_k - β_{k+i})² = A + B + C, say.

Each of A, B, and C can then be bounded by a constant times h²; these estimates are all based on the fact that if -r''(0) < ∞ then r''(t) exists for all t. Thus 2b) holds and the theorem is proved.
Now we can apply the weak convergence theorem to several important random variables, e.g., the first hitting time of a level u: inf{t ≥ 0; ξ*^d(t) ≥ u}. However, if we try to use it on the probability

    P(T*^d > x) = P(ξ*^d(t) > 0 for 0 < t ≤ x),

then we have to consider the set

    A_x = {ω ∈ C; ω(t) > 0 for 0 < t ≤ x},

which has the boundary ∂A_x = {ω ∈ C; inf_{0≤t≤x} ω(t) = 0}. This set has a positive μ*-measure, and thus A_x is not μ*-continuous. We, therefore, introduce a new concept for which the convergence still holds.
DEFINITION. A set A in 𝒞 is uniformly almost μ*-continuous (u.a. μ*-continuous), relative {μ*^d}, if, for each ε > 0, there is a μ*-continuous set A_ε such that

    μ*(A Δ A_ε) ≤ ε   and   lim sup_{d→0} μ*^d(A Δ A_ε) ≤ ε,

where Δ denotes the symmetric difference: A Δ A_ε = (A - A_ε) ∪ (A_ε - A).

LEMMA A1. If μ*^d ⟹ μ* and A is u.a. μ*-continuous, then μ*^d(A) → μ*(A) as d → 0.

PROOF. Take ε > 0 and an approximating μ*-continuous set A_ε. Then

    μ*^d(A) ≤ μ*^d(A_ε) + μ*^d(A - A_ε),

so that lim sup_d μ*^d(A) ≤ μ*(A_ε) + ε ≤ μ*(A) + 2ε, and in the same way lim inf_d μ*^d(A) ≥ μ*(A) - 2ε. Since ε is arbitrary, the lemma is proved.

LEMMA A2. If -r''(t) = λ_2 - O(|log|t||^{-a}) as t → 0 for some a > 1, then the set A_x = {ω ∈ C; ω(t) > 0 for 0 < t ≤ x} is u.a. μ*-continuous.

PROOF. We show that for small δ > 0 the set

    A_{δx} = {ω ∈ C; ω(t) > 0 for δ ≤ t ≤ x}

fulfills the requirements for an approximating set.

a) A_{δx} is μ*-continuous: its boundary set is {ω ∈ C; inf_{δ≤t≤x} ω(t) = 0}, and by a theorem by Ylvisaker [13] this has μ*-measure zero. Note that
V(ξ*(t)) > 0 for t > 0, but V(ξ*(0)) = 0.

b) A_{δx} is uniformly close to A_x for small δ: the difference A_x Δ A_{δx} differs from {ω ∈ C; ω(t) ≤ 0 for some t ∈ (0,δ)} only by a set of μ*-measure zero, so that

    μ*(A_x Δ A_{δx}) ≤ 1 - P(ξ*(t) > 0 for 0 < t ≤ δ) = 1 - P(κ(t) > η r'(t)/λ_2 for 0 < t ≤ δ).

But if -r''(t) = λ_2 - O(|log|t||^{-a}) for a > 1, then κ has (or can be redefined to have) continuously differentiable sample paths (cf. Lemma 1.1 in [5]). Since P(κ(0) = 0) = 1, we therefore have

    P(κ(t) > η r'(t)/λ_2 for 0 < t ≤ δ) ≥ ∫_0^∞ f_η(y) P(inf_{0<t≤δ} κ'(t) > y sup_{0<t≤δ} r''(t)/λ_2) dy.

Since E(κ'(0)) = 0 and V(κ'(0)) = -∂²R(s,t)/∂s∂t|_{s=t=0} = 0, we have P(κ'(0) = 0) = 1, and, as sup_{0<t≤δ} r''(t)/λ_2 → -1 as δ ↓ 0,

    P(inf_{0<t≤δ} κ'(t) > -(1/2)y) → 1 as δ ↓ 0   for y > 0.

This gives that P(κ(t) > η r'(t)/λ_2 for 0 < t ≤ δ) → 1, and it follows that μ*(A_x Δ A_{δx}) → 0 as δ → 0.
To get a similar relation for μ*^d, let K_δ = [d^{-1}δ]. Then

    P(ξ*^d(t) > 0 for 0 < t ≤ δ) ≥ P(ξ*^d(t) > 0 for 0 < t ≤ d(K_δ+1)).

If we recall the definition of ξ*^d as a conditional process, we see that this probability is nothing but

(A1)    P(ξ_{-1} < 0 < ξ_0, ξ_k > 0 for k = 1,...,K_δ+1) / P(ξ_{-1} < 0 < ξ_0)
        = 1 - P(ξ_{-1} < 0 < ξ_0, ξ_k > 0 > ξ_{k+1} for some k = 0,1,...,K_δ) / P(ξ_{-1} < 0 < ξ_0).
If N_d and N_{δd} denote the numbers of times ξ crosses the zero level in the intervals [-d, 0] and [0, d(K_δ+1)] respectively, then the event in the numerator in (A1) implies that N_d ≥ 1, N_{δd} ≥ 1, and the numerator is less than or equal to P(N_d ≥ 1, N_{δd} ≥ 1) ≤ E(N_d N_{δd}). Now

    E(N_d N_{δd}) = ∫_{-d}^0 ∫_0^{δ+2d} ψ(τ - t) dτ dt ≤ d ∫_0^{δ+2d} ψ(τ) dτ,

where ψ(τ) is the two-point intensity of zero crossings at mutual distance τ. For the denominator in (A1), we have

    P(ξ_{-1} < 0 < ξ_0) = (2π)^{-1} arccos r_1 ~ d√λ_2/(2π)   as d → 0.

In total, we see that there is an M such that

    μ*^d(A_x Δ A_{δx}) ≤ M d^{-1}·d ∫_0^{δ+2d} ψ(τ) dτ = M ∫_0^{δ+2d} ψ(τ) dτ.

Now the condition -r''(t) = λ_2 - O(|log|t||^{-a}) implies that ψ is integrable over an interval near zero (cf. [2], p. 210). Therefore,

    lim sup_{d→0} μ*^d(A_x Δ A_{δx}) ≤ M ∫_0^δ ψ(τ) dτ → 0   as δ → 0,

which finally proves that A_{δx} is uniformly approximating.
By combining the two lemmas, we get the desired theorem for T*^d.

THEOREM A2. If -r''(t) = λ_2 - O(|log|t||^{-a}) as t → 0 for some a > 1, then

    P(T*^d > x) → P(T* > x)   as d → 0.
We now turn to ξ_u^d and T_u^d. One possibility is to proceed as for ξ*^d and show weak convergence on {C, 𝒞}, which will give us the convergence in law for all random variables depending continuously on uniformly small changes of the sample paths. However, the wave-length T_u depends on the derivative of ξ_u(t) and can be greatly affected even by small changes of ξ_u. The remedy for this is to look at, not the process ξ_u^d, but its "derivative" ξ̇_u^d, which is defined by

    ξ̇_u^d(t) = d^{-1}(ξ_u^d(t) - ξ_u^d(t-d))   for t = dk, k = 0,±1,...,

and by linear approximation for other t-values. The derivative

    ξ̇_u(t) = u A'(t) - ζ_u B'(t) + Δ'(t)

exists and is continuous under the assumed conditions, and T_u and T_u^d can be defined as the times for the first zero upcrossing by ξ̇_u and ξ̇_u^d, respectively.
Now we can proceed as before and define measures μ̇_u^d and μ̇_u on {C, 𝒞} for the processes ξ̇_u^d and ξ̇_u, and finally arrive at the following theorems, the proofs of which are left out.

THEOREM A3. If r^IV(0) = λ_4 < ∞, then μ̇_u^d ⟹ μ̇_u on {C, 𝒞} as d → 0.

THEOREM A4. If r^IV(t) = λ_4 - O(|log|t||^{-a}) as t → 0 for some a > 1, then

    P(T_u^d > x) → P(T_u > x)   as d → 0.

Theorem A3 can be restated as follows. Let D be the metric space of continuously differentiable functions ω on [0,T] with ω(0) = 0 and with the distance d(ω, ω̃) = sup_{[0,T]} |ω'(t) - ω̃'(t)|, and define an isometry ω → o(ω) = ω' ∈ C between D and C. If we define μ_u^d(A) = μ̇_u^d(o(A)) and μ_u(A) = μ̇_u(o(A)) for A in the corresponding σ-algebra 𝒟, then we have:

THEOREM A5. If r^IV(0) = λ_4 < ∞, then μ_u^d ⟹ μ_u on {D, 𝒟} as d → 0.
REFERENCES

[1] P. Billingsley, Convergence of Probability Measures, Wiley, New York, 1968.

[2] H. Cramér and M. R. Leadbetter, Stationary and Related Stochastic Processes, Wiley, New York, 1967.

[3] M. Kac and D. Slepian, "Large excursions of Gaussian processes," Ann. Math. Statist. 30 (1959), pp. 1215-1228.

[4] G. Lindgren, "Some properties of a normal process near a local maximum," Ann. Math. Statist. 41 (1970), pp. 1870-1883.

[5] G. Lindgren, "Extreme values of stationary normal processes," Z. Wahrscheinlichkeitstheorie und verw. Gebiete 17 (1971), pp. 39-47.

[6] G. Lindgren, "Wave-length and amplitude in Gaussian noise," to appear in Advances in Appl. Probability (1972).

[7] M. S. Longuet-Higgins, "The distribution of intervals between zeros of a stationary random function," Philos. Trans. Roy. Soc. London Ser. A 254 (1962), pp. 557-599.

[8] C. R. Rao, Linear Statistical Inference and its Applications, Wiley, New York, 1965.

[9] D. Slepian, "The one-sided barrier problem for Gaussian noise," Bell System Tech. J. 41 (1962), pp. 463-501.

[10] D. Slepian, "On the zeros of Gaussian noise," Time Series Analysis, M. Rosenblatt ed., Wiley, New York, 1962, pp. 104-115.

[11] L. J. Tick and P. Shaman, "Sampling rates and appearance of stationary Gaussian processes," Technometrics 8 (1966), pp. 91-106.

[12] E. Wong, "Some results concerning zero-crossings in Gaussian noise," SIAM J. Appl. Math. 14 (1966), pp. 1246-1254.

[13] N. D. Ylvisaker, "A note on the absence of tangencies in Gaussian sample paths," Ann. Math. Statist. 39 (1968), pp. 261-262.