
Z. Wahrscheinlichkeitstheorie verw. Geb. 16, 250-260 (1970)
© by Springer-Verlag 1970
Conditions for Absolute Continuity
between a Certain Pair of Probability Measures
T. T. Kadota and L. A. Shepp
1. Theorems on Discrete-Parameter Processes
Let $\eta = (\eta_1, \eta_2, \ldots)$ be a sequence of independent standard normal variables and $\xi = (\xi_1, \xi_2, \ldots)$ a sequence of arbitrary random variables. Denote by $P_\eta$ and $P_{\xi+\eta}$ the measures induced by $\eta$ and $\xi + \eta$ respectively.
Theorem 1. Assume $\eta_n$ is independent of $\eta_{n-1}, \ldots, \eta_1, \xi_n, \ldots, \xi_1$ for every $n$. Then $P_{\xi+\eta} \ll P_\eta$ if
$$\sum_{j=1}^{\infty} \xi_j^2 < \infty \quad \text{a.s.} \tag{1}$$
The assumption in the theorem is the precise version of "$\xi$ being independent of the future of $\eta$". Note that the dependence of $\xi$ on the past of $\eta$ is not specified. That is, $\eta_n$ may or may not depend on $\xi_{n+1}, \xi_{n+2}, \ldots$. Suppose it does not. Then the condition (1) implies absolute continuity in the other direction also. That is,

Theorem 2. Assume the two sequences $\xi$ and $\eta$ are mutually independent. Then the condition (1) implies $P_{\xi+\eta} \sim P_\eta$.

If $\xi$ is a deterministic sequence, the condition (1) is also necessary. In general, however, it is not, as illustrated by the following example.
Example 1. Let $\eta' = (\eta_1', \eta_2', \ldots)$ be another standard normal sequence independent of $\eta$ and set $\xi_j = c_j \eta_j'$, where the $c_j$'s are known constants. Then through a straightforward calculation the Hellinger integral [1] of $\xi_1+\eta_1, \ldots, \xi_n+\eta_n$ with respect to $\eta_1, \ldots, \eta_n$ becomes
$$\prod_{j=1}^{n} \int_{-\infty}^{\infty} \Big[(2\pi)^{-\frac12} e^{-\zeta^2/2} \cdot (2\pi)^{-\frac12} (1+c_j^2)^{-\frac12} e^{-\zeta^2/2(1+c_j^2)}\Big]^{\frac12} d\zeta = \prod_{j=1}^{n} \bigg[\frac{2(1+c_j^2)^{\frac12}}{2+c_j^2}\bigg]^{\frac12},$$
which is bounded away from zero iff $\sum c_j^4 < \infty$. Since both $\xi+\eta$ and $\eta$ are Gaussian, we conclude that $P_{\xi+\eta} \sim P_\eta$ iff $\sum c_j^4 < \infty$ [2]. On the other hand, it is easy to show that the condition (1), $\sum c_j^2 \eta_j'^2 < \infty$ a.s., holds iff $\sum c_j^2 < \infty$. Hence the condition (1) is not necessary.
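The closed form of the per-coordinate Hellinger factor above, and the gap between $\sum c_j^4 < \infty$ and $\sum c_j^2 < \infty$, can be checked numerically. The sketch below (an illustration, not part of the argument) verifies the factor by quadrature and uses the hypothetical choice $c_j = j^{-1/3}$, for which $\sum c_j^4$ converges while $\sum c_j^2$ diverges.

```python
import numpy as np

# Hellinger affinity between N(0,1) and N(0,1+c^2), by midpoint quadrature.
def hellinger_numeric(c, lo=-30.0, hi=30.0, n=200000):
    dx = (hi - lo) / n
    x = lo + (np.arange(n) + 0.5) * dx
    s2 = 1.0 + c * c
    p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)              # N(0,1) density
    q = np.exp(-x**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)  # N(0,1+c^2) density
    return np.sum(np.sqrt(p * q)) * dx

# Closed form [2 (1+c^2)^(1/2) / (2+c^2)]^(1/2) from the display above.
def hellinger_closed(c):
    s2 = 1.0 + c * c
    return np.sqrt(2 * np.sqrt(s2) / (1 + s2))

for c in (0.3, 1.0, 2.5):
    assert abs(hellinger_numeric(c) - hellinger_closed(c)) < 1e-6

# c_j = j^(-1/3): sum c_j^4 = sum j^(-4/3) converges (equivalence holds),
# while sum c_j^2 = sum j^(-2/3) diverges (condition (1) fails).
sum4 = sum(j ** (-4.0 / 3.0) for j in range(1, 200001))
sum2 = sum(j ** (-2.0 / 3.0) for j in range(1, 200001))
```

Here `sum4` stays below $\zeta(4/3) \approx 3.6$ while `sum2` grows roughly like $3N^{1/3}$, exhibiting the announced gap.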
Instead of $\xi$ being independent of the past of $\eta$, suppose on the other extreme it is completely specified by the past of $\eta$. Then the condition (1) becomes necessary.

Theorem 3. Assume $\xi_1 = \varphi_0$ (a constant) and
$$\xi_j(\xi+\eta) = \varphi_{j-1}(\xi_1+\eta_1, \ldots, \xi_{j-1}+\eta_{j-1}), \qquad j = 2, 3, \ldots,$$
where the $\varphi$'s are measurable functions. Thus,
$$(\xi+\eta)_j = \varphi_{j-1}(\xi_1+\eta_1, \ldots, \xi_{j-1}+\eta_{j-1}) + \eta_j, \qquad j = 1, 2, \ldots. \tag{2}$$
Then
$$P_{\xi+\eta} \ll P_\eta \quad \text{iff} \quad \sum_{j=1}^{\infty} \varphi_{j-1}^2(\xi+\eta) < \infty \quad \text{a.s.,} \tag{3}$$
$$P_\eta \ll P_{\xi+\eta} \quad \text{iff} \quad \sum_{j=1}^{\infty} \varphi_{j-1}^2(\eta) < \infty \quad \text{a.s.,} \tag{4}$$
$$E \lim_{n\to\infty} \exp\bigg[\sum_{j=1}^{n} \varphi_{j-1}(\eta)\,\eta_j - \frac12 \sum_{j=1}^{n} \varphi_{j-1}^2(\eta)\bigg] = P\bigg\{\sum_{j=1}^{\infty} \varphi_{j-1}^2(\xi+\eta) < \infty\bigg\}, \tag{5}$$
where the limit is shown to exist and is finite a.s.
Neither the condition in (3) nor that in (4) alone is sufficient for $P_{\xi+\eta} \sim P_\eta$, as illustrated by the following example.
Example 2. Let $\varphi_{n-1}(\zeta) = 0$ if $\zeta_1 + \cdots + \zeta_{j-1} < j-1$ for some $j \le n$, and let $\varphi_{n-1}(\zeta) = 2$ otherwise. Then
$$P\Big\{\sum \varphi_{j-1}^2(\eta) < \infty\Big\} = P\{\eta_1 + \cdots + \eta_{n-1} < n-1 \text{ for some } n\} = 1,$$
and so, by Theorem 3, $P_\eta \ll P_{\xi+\eta}$. On the other hand,
$$\xi_n + \eta_n = 2 + \eta_n \quad \text{unless} \quad (\xi_1+\eta_1) + \cdots + (\xi_{j-1}+\eta_{j-1}) < j-1 \text{ for some } j \le n,$$
and since each surviving $\xi_k$ equals $2$, the latter condition reads $\eta_1 + \cdots + \eta_{j-1} < -(j-1)$. Hence
$$P\Big\{\sum \varphi_{j-1}^2(\xi+\eta) < \infty\Big\} = P\{\eta_1 + \cdots + \eta_{n-1} < -(n-1) \text{ for some } n\} < 1.$$
Hence, by Theorem 3, $P_{\xi+\eta} \not\ll P_\eta$.

An example with $P_{\xi+\eta} \ll P_\eta$ but $P_\eta \not\ll P_{\xi+\eta}$ is given by taking $\varphi_{n-1}(\zeta) = 0$ if $\zeta_1 + \cdots + \zeta_{j-1} > j-1$ for some $j \le n$ and $\varphi_{n-1}(\zeta) = 2$ otherwise. The proof is similar.
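The two hitting probabilities that drive Example 2 can be estimated by simulation. The sketch below (illustrative only, with a fixed seed and a finite horizon) compares the event that the partial sums $S_m = \eta_1 + \cdots + \eta_m$ ever fall below the rising line $+m$, which happens with probability one, against the event that they fall below the falling line $-m$, which has probability strictly less than one.

```python
import numpy as np

# Monte Carlo estimate of the two boundary-crossing events in Example 2.
rng = np.random.default_rng(0)
paths, horizon = 4000, 400
steps = rng.standard_normal((paths, horizon))
S = np.cumsum(steps, axis=1)               # S_m = eta_1 + ... + eta_m
m = np.arange(1, horizon + 1)              # m = n - 1
hit_upper = np.any(S < m, axis=1).mean()   # {S_m < m for some m}: prob -> 1
hit_lower = np.any(S < -m, axis=1).mean()  # {S_m < -m for some m}: prob < 1
```

Because the line $+m$ recedes from the random walk's typical range, virtually every path crosses it, while crossing $-m$ requires an immediate large excursion downward and happens only on a fraction of paths.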
In proving Theorem 3 we make use of the fact that $\sum \varphi_{j-1}(\eta)\,\eta_j$ converges a.s. on the set where $\sum \varphi_{j-1}^2(\eta) < \infty$. This has already been established in a more general framework [3]. However, we prove this by imbedding the likelihood ratio in a Wiener process. It is an interesting technique in itself and the proof is considerably simpler. It seems likely that $\sum \varphi_{j-1}(\eta)\,\eta_j$ diverges a.s. (with the partial sums having $-\infty$ as lim inf and $+\infty$ as lim sup) on the set where $\sum \varphi_{j-1}^2(\eta) = \infty$. However, we have not been able to prove this. We note that it is not sufficient that the partial sums form a zero-mean martingale, as the following example shows.
Example 3. Let $w = \{w(t),\ t \ge 0\}$ be a standard Wiener process and $\tau(a)$ the first time $w$ reaches $a$. Define a sequence of random variables $\theta = (\theta_1, \theta_2, \ldots)$ by
$$\theta_j = w(\min[\tau(-a_j), \tau(b_j)]), \qquad j = 1, 2, \ldots,$$
where $0 < a_1 < a_2 < \cdots$ and $0 < b_1 < b_2 < \cdots$. Then $\theta$ is a martingale and $\theta_j = -a_j$ or $b_j$. Hence, since $E\,\theta_j = 0$, we have $P\{\theta_j = -a_j\} = b_j/(a_j + b_j)$. Now put $a_j = 2^j$ and $b_j = j$. Then $\sum P\{\theta_j = -a_j\} = \sum j/(2^j + j) < \infty$. Hence, from the Borel-Cantelli lemma, $\lim_{n\to\infty} \theta_n = \infty$ a.s.
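Both ingredients of Example 3 can be checked numerically. The sketch below (illustrative, with hypothetical small barriers $a = 2$, $b = 3$) estimates the exit probability $P\{\text{hit } -a \text{ before } +b\} = b/(a+b)$ for a symmetric simple random walk, which obeys the same gambler's-ruin formula as the Wiener process, and verifies that the partial sums of $\sum_j j/(2^j + j)$ are bounded, so that Borel-Cantelli applies.

```python
import numpy as np

# (i) exit probabilities of a symmetric walk from the interval (-a, b)
rng = np.random.default_rng(1)
a, b, trials = 2, 3, 4000
hits_low = 0
for _ in range(trials):
    pos = 0
    while -a < pos < b:
        pos += 1 if rng.random() < 0.5 else -1
    hits_low += (pos == -a)
est = hits_low / trials            # should be near b/(a+b) = 0.6

# (ii) the series sum_j j/(2^j + j) converges (partial sums bounded by 2)
tail = sum(j / (2**j + j) for j in range(1, 200))
```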
2. Theorems on Continuous-Parameter Processes
The analogous problem in the case of continuous-parameter processes is the question of absolute continuity between two measures $P_w$ and $P_y$ corresponding to a standard Wiener process $w = \{w(t),\ 0 \le t \le 1\}$ and another process $y = \{y(t),\ 0 \le t \le 1\}$, where $y(t)$ is defined by
$$y(t) = \int_0^t z(s)\,ds + w(t) \tag{6}$$
and $z = \{z(t),\ 0 \le t \le 1\}$ is an integrable process. The continuous-parameter analogue of Theorem 1 is the following.

Theorem 1'. Assume $w(t) - w(s)$ is independent of $\{w(t_j) - w(s_j),\ z(u_j),\ j = 1, \ldots, n\}$ for arbitrary $s_j < t_j \le s < t$, $u_j \le s$ and $n$. Then
$$P_y \ll P_w \quad \text{if} \quad \int_0^1 z^2(t)\,dt < \infty \quad \text{a.s.} \tag{7}$$
The assumption in Theorem 1' is usually referred to as "$z$ being independent of the future increments of $w$". The theorem was proved previously [4] under a slightly stronger independence condition, namely, that the independence hold for arbitrary $s_j < t_j \le s < t$, $u_j \le s + \varepsilon$ and $n$ and some $\varepsilon > 0$. Here we give a more direct proof for the case $\varepsilon = 0$. It was further proved that if $E \int_0^1 z^2(t)\,dt < \infty$ in addition to $\varepsilon > 0$, then the Radon-Nikodym derivative at $x = \{x(t),\ 0 \le t \le 1\}$ of $P_y$ with respect to $P_w$ is given by
$$\frac{dP_y}{dP_w}(x) = \exp\bigg[\int_0^1 f(t, x)\,dx(t) - \frac12 \int_0^1 f^2(t, x)\,dt\bigg] \tag{8}$$
where
$$f(t, x) = E\{z(t) \mid y(s),\ 0 \le s \le t\}\big|_{y=x}.$$
One simple sufficient condition for $P_y \sim P_w$ is that $z$ be uniformly bounded. We note that $z$ being bounded a.s. is not sufficient, as the following example shows.

Example 4. Let $z(t) = 1/(t-1)$ until $y$ first crosses $-1$ and $z(t) = 0$ thereafter. Then $z$ is bounded a.s. since $P\{y(t) > -1,\ 0 \le t \le 1\} = 0$. However $P\{w(t) > -1,\ 0 \le t \le 1\} > 0$. Hence $P_w \not\ll P_y$.
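The positive probability invoked in Example 4 can be estimated by simulation. By the reflection principle, $P\{w(t) > -1,\ 0 \le t \le 1\} = 2\Phi(1) - 1 \approx 0.683$; the sketch below (illustrative, on a discrete grid that slightly inflates the estimate) confirms the value is comfortably positive.

```python
import numpy as np

# Fraction of discretized Wiener paths on [0,1] staying above -1.
rng = np.random.default_rng(2)
paths, n = 20000, 1000
dw = rng.standard_normal((paths, n)) * np.sqrt(1.0 / n)  # increments
w = np.cumsum(dw, axis=1)
stay_above = (w.min(axis=1) > -1.0).mean()
```

By contrast, every path of $y(t) = \log(1-t) + w(t)$ produced by this $z$ is dragged below $-1$ before $t = 1$, which is exactly why $P_w \not\ll P_y$.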
The continuous-parameter analogue of Theorem 2 is obvious.

Theorem 2'. Assume the two processes $w$ and $z$ are mutually independent. Then the condition (7) implies $P_y \sim P_w$.

This theorem is essentially a special case of Theorem 1', and $P_y \ll P_w$ instead of $P_y \sim P_w$, together with (8) under $E \int z^2(t)\,dt < \infty$, was established earlier [5, 6]. Here we give a more elementary proof for $P_y \sim P_w$ based on Theorem 2, though our proof does not extend to prove (8).
The continuous-parameter analogue of Theorem 3 has still not been established. Unlike the discrete case, $y$ cannot be defined explicitly in terms of $\varphi$ and $w$. Instead it is given as a solution of an integral equation
$$y(t) = \int_0^t \varphi(s, y_s)\,ds + w(t), \tag{9}$$
and existence and uniqueness of the solution become an additional problem. Unfortunately (9) has been shown to have a unique solution only under very restrictive Lipschitz conditions on $\varphi$. Indeed a solution need not exist due to "explosion" of $y$ in finite time.
Finally we provide a proof for a theorem which has been known but never been explicitly proved [7].

Theorem 4. Let $v = \{v(t),\ 0 \le t \le 1\}$ be a measurable zero-mean Gaussian process with a continuous and strictly positive-definite covariance $R(t, s)$, and $z = \{z(t),\ 0 \le t \le 1\}$ another measurable process (not necessarily Gaussian). Denote by $P_v$ and $P_{z+v}$ the probability measures induced by $v$ and $z + v$, and by $R$ and $R^{\frac12}$ the integral operator with the kernel $R(t, s)$ and its positive square root. If $v$ and $z$ are mutually independent processes and if almost every sample function of $z$ is in the range of $R^{\frac12}$, then $P_{z+v} \sim P_v$.

This is a generalization of a classical theorem where $z$ is a degenerate process, i.e., a deterministic function [8].
3. Proof of Theorems
Proof of Theorem 1. First, assume
$$\sum_{j=1}^{\infty} E\,\xi_j^2 < \infty. \tag{10}$$
Let $p_n(\zeta)$ and $q_n(\zeta)$ be the probability densities at $\zeta = (\zeta_1, \zeta_2, \ldots)$ of $\eta_1, \ldots, \eta_n$ and $\xi_1+\eta_1, \ldots, \xi_n+\eta_n$. Then $q_n(\zeta)$ exists and the likelihood ratio $L_n(\zeta) = q_n(\zeta)/p_n(\zeta)$ is calculated to be
$$L_n(\zeta) = \prod_{j=1}^{n} \int_{-\infty}^{\infty} \exp(u\zeta_j - \tfrac12 u^2)\,dP(\xi_j < u \mid \xi_1+\eta_1 = \zeta_1, \ldots, \xi_{j-1}+\eta_{j-1} = \zeta_{j-1}).$$
Define
$$\lambda(t) = \xi_j \quad \text{if} \quad j-1 \le t < j,$$
$$\tilde y(t) = \int_0^t \lambda(s)\,ds + w(t),$$
where $\{w(t),\ t \ge 0\}$ is a standard Wiener process with $w(j+1) - w(j)$ being independent of $\{w(k) - w(k-1),\ \xi_{k+1},\ k = 1, \ldots, j\}$. Then
$$E \log L_n(\xi+\eta) = \sum_{j=1}^{n} E \log \int_{-\infty}^{\infty} \exp\{u[\tilde y(j) - \tilde y(j-1)] - \tfrac12 u^2\}\,dP[\xi_j < u \mid \tilde y(1), \ldots, \tilde y(j-1)].$$
Put
$$\psi_j(t, v, u) = u[v - \tilde y(j-1)] - \tfrac12 u^2[t - (j-1)],$$
$$g_j(t, v) = \log \int_{-\infty}^{\infty} \exp[\psi_j(t, v, u)]\,dP[\xi_j < u \mid \tilde y(1), \ldots, \tilde y(j-1)],$$
and apply Ito's differential rule [9]:
$$g_j[j, \tilde y(j)] = \int_{j-1}^{j} \frac{\partial g_j(t, v)}{\partial v}\bigg|_{v=\tilde y(t)} d\tilde y(t) - \frac12 \int_{j-1}^{j} \bigg[\frac{\partial g_j(t, v)}{\partial v}\bigg|_{v=\tilde y(t)}\bigg]^2 dt$$
(the remaining $dt$-terms cancel because $\int \exp[\psi_j]\,dP$ satisfies the backward heat equation), and use a formula for conditional expectations [4]:
$$E\{\xi_j \mid \tilde y(1), \ldots, \tilde y(j-1), \tilde y(t)\} = \frac{\displaystyle\int_{-\infty}^{\infty} u \exp[\psi_j(t, \tilde y(t), u)]\,dP[\xi_j < u \mid \tilde y(1), \ldots, \tilde y(j-1)]}{\displaystyle\int_{-\infty}^{\infty} \exp[\psi_j(t, \tilde y(t), u)]\,dP[\xi_j < u \mid \tilde y(1), \ldots, \tilde y(j-1)]}, \qquad t > j-1.$$
Then, since $d\tilde y(t) = \xi_j\,dt + dw(t)$ on $(j-1, j)$ and $\partial g_j/\partial v$ evaluated along $\tilde y$ is the above conditional expectation,
$$E \log L_n(\xi+\eta) = \sum_{j=1}^{n} E \int_{j-1}^{j} \big[E\{\xi_j \mid \tilde y(1), \ldots, \tilde y(j-1), \tilde y(t)\}\,\xi_j - \tfrac12 E^2\{\xi_j \mid \tilde y(1), \ldots, \tilde y(j-1), \tilde y(t)\}\big]\,dt + \sum_{j=1}^{n} E \int_{j-1}^{j} E\{\xi_j \mid \tilde y(1), \ldots, \tilde y(j-1), \tilde y(t)\}\,dw(t).$$
By using Jensen's inequality and a property of stochastic integrals,
$$E \log L_n(\xi+\eta) \le \frac12 \sum_{j=1}^{n} E\,\xi_j^2,$$
thus $\lim_{n\to\infty} E \log L_n(\xi+\eta) < \infty$ from the assumption. Hence, it follows from a theorem on entropy and absolute continuity [10] that $P_{\xi+\eta} \ll P_\eta$.
Next replace $\sum E\,\xi_j^2 < \infty$ by $\sum \xi_j^2 < \infty$ a.s. Define a sequence $\xi_M = (\xi_{M1}, \xi_{M2}, \ldots)$ by
$$\xi_{Mj} = \begin{cases} \xi_j & \text{if } \sum_{k=1}^{j} \xi_k^2 < M, \\ 0 & \text{otherwise,} \end{cases}$$
and define $P_{\xi_M+\eta}$ as the measure induced by $\xi_M + \eta$. Let $\tilde P_\eta$ and $\tilde P_{\xi+\eta}$ be the measures given by
$$\tilde P_\eta\{\zeta_1 < a_1, \ldots, \zeta_n < a_n\} = P\Big\{\eta_1 < a_1, \ldots, \eta_n < a_n,\ \sum \xi_j^2 < M\Big\}$$
and
$$\tilde P_{\xi+\eta}\{\zeta_1 < a_1, \ldots, \zeta_n < a_n\} = P\Big\{\xi_1+\eta_1 < a_1, \ldots, \xi_n+\eta_n < a_n,\ \sum \xi_j^2 < M\Big\}$$
for arbitrary $a$'s and $n$. Then it follows from what has just been proved that $P_{\xi_M+\eta} \ll P_\eta$. Hence $\tilde P_{\xi+\eta} \ll \tilde P_\eta$. Now suppose $P_{\xi+\eta} \not\ll P_\eta$, namely, there exists a set $F$ such that $P_\eta(F) = 0$ and $P_{\xi+\eta}(F) > \varepsilon$ for some $\varepsilon > 0$. Then, for a sufficiently large $M$, $\tilde P_{\xi+\eta}(F) > \varepsilon_1$ for some $\varepsilon_1$ ($0 < \varepsilon_1 \le \varepsilon$) since $\lim_{M\to\infty} P\{\sum \xi_j^2 < M\} = 1$, while $\tilde P_\eta(F) = 0$. Hence $\tilde P_{\xi+\eta} \not\ll \tilde P_\eta$ for a sufficiently large $M$, which is a contradiction. Therefore $P_{\xi+\eta} \ll P_\eta$ if $\sum \xi_j^2 < \infty$ a.s.
Proof of Theorem 2. Since the condition of Theorem 2 implies that of Theorem 1, we have $P_{\xi+\eta} \ll P_\eta$.

Because of independence between $\eta$ and $\xi$, the likelihood ratio in this case becomes
$$L_n(\zeta) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \exp\bigg[\sum_{j=1}^{n} u_j \zeta_j - \frac12 \sum_{j=1}^{n} u_j^2\bigg]\,dP(\xi_1 < u_1, \ldots, \xi_n < u_n).$$
First, assume $\sum E\,\xi_j^2 < \infty$. Then, by using Jensen's inequality,
$$-E \log L_n(\eta) \le \frac12 \sum_{j=1}^{n} E\,\xi_j^2.$$
Hence $\lim_{n\to\infty} E \log[1/L_n(\eta)] < \infty$, implying [10] $P_\eta \ll P_{\xi+\eta}$.

The argument for replacing $\sum E\,\xi_j^2 < \infty$ by $\sum \xi_j^2 < \infty$ a.s. is similar to the last half of the proof of Theorem 1.
Proof of Theorem 3. Because $\xi_n = \varphi_{n-1}$ is a measurable function of $\zeta_1, \ldots, \zeta_{n-1}$, the likelihood ratio in this case becomes
$$L_n(\zeta) = \exp\bigg[\sum_{j=1}^{n} \varphi_{j-1}(\zeta)\,\zeta_j - \frac12 \sum_{j=1}^{n} \varphi_{j-1}^2(\zeta)\bigg].$$
Since $\{L_n\}$ is a martingale with respect to $P_\eta$ and $E[L_n(\eta)] = 1$, $L_\infty(\eta)$ exists a.s. and $E[L_\infty(\eta)] \le 1$ [11]. Then it is easy to verify that $P_{\xi+\eta}$ has the following Lebesgue decomposition [12] with respect to $P_\eta$:
$$P_{\xi+\eta}(A) = \int_A L_\infty\,dP_\eta + P_{\xi+\eta}(A \cap N) \tag{11}$$
where $N = \{L_\infty = \infty\}$. Hence
$$P_{\xi+\eta} \ll P_\eta \quad \text{iff} \quad P_{\xi+\eta}(N) = 0, \quad \text{i.e.,} \quad P\{L_\infty(\xi+\eta) < \infty\} = 1.$$
By reversing the roles of $P_{\xi+\eta}$ and $P_\eta$ in the preceding argument, we can conclude also that
$$P_\eta \ll P_{\xi+\eta} \quad \text{iff} \quad P\{1/L_\infty(\eta) < \infty\} = 1, \quad \text{i.e.,} \quad P\{L_\infty(\eta) > 0\} = 1.$$
Put $A = \Omega$ (the whole space) in (11). Then
$$E\,L_\infty(\eta) = 1 - P\{L_\infty(\xi+\eta) = \infty\} = P\{L_\infty(\xi+\eta) < \infty\}.$$
Thus, in order to establish (3), (4) and (5), it suffices to show
$$\{L_\infty(\xi+\eta) < \infty\} = \Big\{\sum \varphi_{j-1}^2(\xi+\eta) < \infty\Big\}, \tag{12}$$
$$\{L_\infty(\eta) > 0\} = \Big\{\sum \varphi_{j-1}^2(\eta) < \infty\Big\}. \tag{13}$$
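The martingale property $E\,L_n(\eta) = 1$ underlying this decomposition is easy to observe numerically. The sketch below (illustrative, with the hypothetical bounded choices $\varphi_0 = 0.5$ and $\varphi_j(\zeta) = \sin \zeta_j$, chosen only so that each $\varphi_{j-1}$ is a measurable function of the past) estimates the mean of the likelihood ratio under $P_\eta$.

```python
import numpy as np

# L_n(eta) = exp(sum phi_{j-1}(eta) eta_j - 0.5 sum phi_{j-1}(eta)^2)
# is a mean-one martingale under P_eta.
rng = np.random.default_rng(3)
paths, n = 200000, 5
eta = rng.standard_normal((paths, n))
log_L = np.zeros(paths)
phi = np.full(paths, 0.5)          # phi_0, a constant
for j in range(n):
    log_L += phi * eta[:, j] - 0.5 * phi**2
    phi = np.sin(eta[:, j])        # phi_j depends on the past only
mean_L = np.exp(log_L).mean()      # should be close to 1
```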
Define recursively random variables $\theta = (\theta_1, \theta_2, \ldots)$ by
$$\theta_j = \frac{w(T_j(\theta)) - w(T_{j-1}(\theta))}{\varphi_{j-1}(\theta)} \tag{14}$$
if $\varphi_{j-1}(\theta)$ is nonzero, and let $\theta_j$ be independent of $\theta_1, \ldots, \theta_{j-1}$ and standard normal if $\varphi_{j-1}(\theta) = 0$, where
$$T_n(\theta) = \sum_{j=1}^{n} \varphi_{j-1}^2(\theta).$$
It is easy to check that $\theta$ is a standard normal sequence; indeed from (14), for any $a_1, \ldots, a_n$, we have
$$E \exp\bigg(i \sum_{j=1}^{n} a_j \theta_j\bigg) = E\bigg[E\{e^{i a_n \theta_n} \mid \theta_1, \ldots, \theta_{n-1}\} \exp\bigg(i \sum_{j=1}^{n-1} a_j \theta_j\bigg)\bigg] = \exp\big(-\tfrac12 a_n^2\big)\,E \exp\bigg(i \sum_{j=1}^{n-1} a_j \theta_j\bigg) = \cdots = \exp\bigg(-\frac12 \sum_{j=1}^{n} a_j^2\bigg).$$
Again from (14),
$$\sum_{j=1}^{n} \varphi_{j-1}(\theta)\,\theta_j = w(T_n(\theta))$$
and
$$L_n(\theta) = \exp\big[w(T_n(\theta)) - \tfrac12 T_n(\theta)\big]. \tag{15}$$
Since $\exp[w(t) - \tfrac12 t]$ is continuous in $t$, nonzero for finite $t$, and zero in the limit $t \to \infty$, we have from (15)
$$\{L_\infty(\theta) > 0\} = \{T_\infty(\theta) < \infty\}. \tag{16}$$
Since $\theta$ and $\eta$ have the same distributions, this proves (13).
To prove (12) we first observe that the transformation (2) from $\eta$ to $\xi+\eta$ is one to one. Thus there exist $\psi_0, \psi_1, \ldots$ which satisfy
$$\psi_{j-1}(\eta_1, \ldots, \eta_{j-1}) = \varphi_{j-1}(\xi_1+\eta_1, \ldots, \xi_{j-1}+\eta_{j-1}). \tag{17}$$
We can therefore write, using (2),
$$L_n(\xi+\eta) = \exp\bigg[\sum_{j=1}^{n} \psi_{j-1}(\eta)\big[\eta_j + \psi_{j-1}(\eta)\big] - \frac12 \sum_{j=1}^{n} \psi_{j-1}^2(\eta)\bigg] = \exp\bigg[\sum_{j=1}^{n} \psi_{j-1}(\eta)\,\eta_j + \frac12 \sum_{j=1}^{n} \psi_{j-1}^2(\eta)\bigg].$$
Taking the reciprocals we have
$$1/L_n(\xi+\eta) = \exp\bigg[\sum_{j=1}^{n} \big[-\psi_{j-1}(\eta)\big]\eta_j - \frac12 \sum_{j=1}^{n} \psi_{j-1}^2(\eta)\bigg].$$
Comparing with (16) with $\varphi$ replaced by $-\psi$ we immediately obtain
$$\{1/L_\infty(\xi+\eta) > 0\} = \Big\{\sum \psi_{j-1}^2(\eta) < \infty\Big\}.$$
This together with (17) gives (12).
Proof of Theorem 1'. First, assume $E \int_0^1 z^2(t)\,dt < \infty$. Then there exists a sequence of step-function processes $z_m = \{z_m(t),\ 0 \le t \le 1\}$ such that $w(t) - w(s)$ is independent of $\{w(t_i) - w(s_i),\ z_m(u_i),\ i = 1, \ldots, k\}$ for arbitrary $s_i < t_i \le s < t$, $[2^m u_i]\,2^{-m} \le s$ and $k$, where $[x]$ is the integer part of $x$, and for a given $\varepsilon > 0$
$$E \int_0^1 |z(t) - z_m(t)|^2\,dt < \varepsilon \tag{18}$$
for a sufficiently large $m$ [11]. Hence $\int_0^t z_m(s)\,ds \to \int_0^t z(s)\,ds$ in probability and $y_m(t) \to y(t)$ in probability also, where $y_m(t) = \int_0^t z_m(s)\,ds + w(t)$.

Let $0 = t_0^{(n)} < t_1^{(n)} < \cdots < t_n^{(n)} = 1$ be a densely refined partition of $[0, 1]$ with $\max_{1 \le j \le n} (t_j^{(n)} - t_{j-1}^{(n)}) \le 2^{-m}$. Put $\Delta t_j = t_j^{(n)} - t_{j-1}^{(n)}$ and denote the corresponding $j$-th increments of $w(t)$, $y_m(t)$ and $\int_0^t z_m(s)\,ds$ by $\Delta w_j$, $\Delta y_{mj}$ and $\Delta z_{mj}$ respectively. Let $p_n(x)$ and $q_n^{(m)}(x)$ be the probability densities at $x = \{x(t),\ 0 \le t \le 1\}$ of $\{w(t_j^{(n)}),\ 1 \le j \le n\}$ and $\{y_m(t_j^{(n)}),\ 1 \le j \le n\}$ respectively. Then $q_n^{(m)}$ exists and the likelihood ratio $L_n^{(m)}(x) = q_n^{(m)}(x)/p_n(x)$ is calculated to be
$$L_n^{(m)}(x) = \prod_{j=1}^{n} \int_{-\infty}^{\infty} \exp(u\,\Delta x_j - \tfrac12 u^2\,\Delta t_j)\,dP\bigg(\frac{\Delta z_{mj}}{\Delta t_j} < u \,\bigg|\, \Delta y_{mk} = \Delta x_k,\ k = 1, \ldots, j-1\bigg)$$
where $\Delta x_j$ is the $j$-th increment of $x(t)$. By using Ito's differential rule and the formula for conditional expectations as in the proof of Theorem 1,
$$\log L_n^{(m)}(y_m) = \sum_{j=1}^{n} \bigg\{\int_{t_{j-1}^{(n)}}^{t_j^{(n)}} \big[h_j(t)\,z_m(t) - \tfrac12 h_j^2(t)\big]\,dt + \int_{t_{j-1}^{(n)}}^{t_j^{(n)}} h_j(t)\,dw(t)\bigg\}$$
where
$$h_j(t) = E\bigg\{\frac{\Delta z_{mj}}{\Delta t_j} \,\bigg|\, \Delta y_{m1}, \ldots, \Delta y_{m,j-1},\ y_m(t) - y_m(t_{j-1}^{(n)})\bigg\}.$$
Thus, by using Schwarz's and Jensen's inequalities and a property of stochastic integrals,
$$E \log L_n^{(m)}(y_m) \le E \int_0^1 z_m^2(t)\,dt.$$
Hence from (18)
$$\varlimsup_{n} E \log L_n^{(m)}(y_m) \le E \int_0^1 z^2(t)\,dt + \varepsilon < \infty. \tag{19}$$
Therefore, if we denote by $P_{y_m}$ the measure induced by $y_m = \{y_m(t),\ 0 \le t \le 1\}$, it follows from (19) and continuity of $w$ and $y_m$ that $P_{y_m} \ll P_w$ [10]. Also
$$\sup_m E \log (dP_{y_m}/dP_w)(y_m) < \infty.$$
Then since $y_m(t) \to y(t)$ in distribution, Theorem A in the Appendix asserts that $P_y \ll P_w$.

Finally, by using a contradiction argument similar to the last half of the proof of Theorem 1, we can replace the assumption $E \int_0^1 z^2(t)\,dt < \infty$ by $\int_0^1 z^2(t)\,dt < \infty$ a.s. This completes the proof.
Proof of Theorem 2'. Let $\{f_j\}$ be an orthonormal basis of the space of all square-integrable functions on $[0, 1]$. Then $\int_0^1 z^2(t)\,dt < \infty$ a.s. implies
$$\lim_{n\to\infty} \int_0^1 \bigg|z(t) - \sum_{j=1}^{n} \xi_j f_j(t)\bigg|^2 dt = 0 \quad \text{a.s.}$$
where
$$\xi_j = \int_0^1 z(t) f_j(t)\,dt$$
and
$$\sum_{j=1}^{\infty} \xi_j^2 < \infty \quad \text{a.s.} \tag{20}$$
Furthermore,
$$w(t) = \sum_{j=1}^{\infty} \eta_j \int_0^t f_j(s)\,ds \quad \text{a.s.} \tag{21}$$
where $\eta_j = \int_0^1 f_j(t)\,dw(t)$, $j = 1, 2, \ldots$, and $\eta = (\eta_1, \eta_2, \ldots)$ is a standard normal sequence [13]. Thus
$$y(t) = \int_0^t z(s)\,ds + w(t) = \sum_{j=1}^{\infty} (\xi_j + \eta_j) \int_0^t f_j(s)\,ds \quad \text{a.s.} \tag{22}$$
Now let $x = \{x(t),\ 0 \le t \le 1\}$ be a separable and measurable process with associated measures $P_w$ and $P_y$ and let $\mathfrak{B}_x$ be the $\sigma$-field generated by $x$. Denote by $\bar P$ the completion of $\tfrac12(P_w + P_y)$ on $\mathfrak{B}_x$. Then from (21) and (22)
$$x(t) = \sum_{j=1}^{\infty} \zeta_j \int_0^t f_j(s)\,ds \quad \text{a.s. } [\bar P] \tag{23}$$
where $\zeta_j = \int_0^1 f_j(t)\,dx(t)$ a.s. $[\bar P]$, $j = 1, 2, \ldots$. Let $\mathfrak{B}_\zeta$ be the $\sigma$-field generated by $\zeta = (\zeta_1, \zeta_2, \ldots)$ and $\bar{\mathfrak{B}}_\zeta$ the $\sigma$-field of the sets which are either in $\mathfrak{B}_\zeta$ or sets of $\bar P$-measure zero. Then (23) implies $\bar{\mathfrak{B}}_\zeta = \bar{\mathfrak{B}}_x$ [14]. Therefore it suffices to show $\bar P_y \sim \bar P_w$ where $\bar P_y$ and $\bar P_w$ are restrictions of $P_y$ and $P_w$ on $\mathfrak{B}_\zeta$. But $\bar P_y$ and $\bar P_w$ are identical with $P_{\xi+\eta}$ and $P_\eta$, which are the measures induced on $\mathfrak{B}_\zeta$ by $\xi + \eta$ and $\eta$. Hence, through Theorem 2, (20) and mutual independence between $\eta$ and $\xi$ imply $P_y \sim P_w$.
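The reduction above rests on the fact that the stochastic integrals $\eta_j = \int_0^1 f_j(t)\,dw(t)$ of an orthonormal basis against a Wiener process form a standard normal sequence. The sketch below (illustrative, using the cosine basis $f_j(t) = \sqrt{2}\cos(j\pi t)$ as an arbitrary orthonormal choice) checks this on a discrete grid by comparing the sample covariance of the first three coordinates with the identity.

```python
import numpy as np

# eta_j = int f_j dw, approximated by sum_k f_j(t_k) * dw_k on a grid.
rng = np.random.default_rng(4)
paths, n = 20000, 2000
t = (np.arange(n) + 0.5) / n
f = np.stack([np.sqrt(2) * np.cos(np.pi * (j + 1) * t) for j in range(3)])
dw = rng.standard_normal((paths, n)) * np.sqrt(1.0 / n)  # Wiener increments
eta = dw @ f.T                     # shape (paths, 3)
cov = np.cov(eta.T)                # should be close to the 3x3 identity
```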
Proof of Theorem 4. First, note that $v$ is continuous in probability and almost every sample function of it is square-integrable since $v$ is a Gaussian process with a continuous covariance. Note also that almost every sample function of $z$ is continuous since it belongs to the range of $R^{\frac12}$ [15]. Hence $z + v$ is continuous in probability and almost every sample function of it is square-integrable.

Again let $x = \{x(t),\ 0 \le t \le 1\}$ be a separable and measurable process with associated measures $P_{z+v}$ and $P_v$, $\mathfrak{B}_x$ the $\sigma$-field generated by $x$, and $\bar P$ the completion of $\tfrac12(P_{z+v} + P_v)$ on $\mathfrak{B}_x$. Then
$$\lim_{n\to\infty} \int_0^1 \bigg|x(t) - \sum_{j=1}^{n} \lambda_j^{\frac12} \zeta_j \varphi_j(t)\bigg|^2 dt = 0 \quad \text{a.s. } [\bar P] \tag{24}$$
where $\zeta_j = \lambda_j^{-\frac12} \int_0^1 \varphi_j(t)\,x(t)\,dt$ a.s. $[\bar P]$, and $\lambda_j$ and $\varphi_j$, $j = 1, 2, \ldots$, are the eigenvalues and orthonormal eigenfunctions of $R$. Define $\mathfrak{B}_\zeta$ and $\bar{\mathfrak{B}}_\zeta$ as before. Then (24) and continuity in probability $[\bar P]$ of $x$ imply $\bar{\mathfrak{B}}_\zeta = \bar{\mathfrak{B}}_x$ [14]. Therefore it suffices to show equivalence of the restrictions of $P_{z+v}$ and $P_v$ on $\mathfrak{B}_\zeta$, which is equivalent to showing $P_{\xi+\eta} \sim P_\eta$, where $\eta_j = \lambda_j^{-\frac12} \int_0^1 \varphi_j(t)\,v(t)\,dt$ and $\xi_j = \lambda_j^{-\frac12} \int_0^1 \varphi_j(t)\,z(t)\,dt$, $j = 1, 2, \ldots$. Note $\eta$ is a standard normal sequence according to the Karhunen-Loève theorem [16] and $\sum \xi_j^2 < \infty$ a.s. since almost every sample function of $z$ is in the range of $R^{\frac12}$. Hence, according to Theorem 2, mutual independence between $\eta$ and $\xi$ implies $P_{z+v} \sim P_v$.
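The eigenvalues and eigenfunctions of $R$ used in this proof can be computed numerically. The sketch below (illustrative, for the particular Wiener covariance $R(t, s) = \min(t, s)$, whose spectrum is known in closed form: $\lambda_k = 4/((2k-1)^2\pi^2)$) discretizes the integral operator and compares its leading eigenvalues with the exact values.

```python
import numpy as np

# Nystrom discretization of the integral operator with kernel min(t,s).
n = 1000
t = (np.arange(n) + 0.5) / n
K = np.minimum.outer(t, t) / n       # (Kf)(t) ~ sum_k min(t,t_k) f(t_k)/n
evals = np.linalg.eigvalsh(K)        # ascending order
lam1, lam2 = evals[-1], evals[-2]    # compare with 4/pi^2 and 4/(9 pi^2)
```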
Acknowledgments. The idea of studying random variables by imbedding them in the Wiener
process is apparently due to A. V. Skorokhod [17]. Our use of the above idea is closely related to a
technique shown to us by H. P. McKean.
Appendix
Theorem A. Let $x_n = \{x_n(t),\ t \in T\}$ be a sequence of stochastic processes converging in distribution to $x = \{x(t),\ t \in T\}$. Denote by $\mu_n$ and $\mu$ the measures induced by $x_n$ and $x$ respectively. Let $\nu$ be another probability measure. Then $\mu \ll \nu$ if $\mu_n \ll \nu$ for every $n$ and $\sup_n E_{\mu_n} \log \dfrac{d\mu_n}{d\nu} < \infty$, where $E_{\mu_n}$ denotes the expectation with respect to $\mu_n$.
Proof. Put
$$\lambda = \nu + \mu + \sum_{n=1}^{\infty} 2^{-n} \mu_n.$$
Then it suffices to show
$$-\infty < E_\lambda \frac{d\mu}{d\lambda} \log \frac{d\nu}{d\lambda}.$$
Put $u_n = d\mu_n/d\lambda$, $u = d\mu/d\lambda$ and $v = \log(d\nu/d\lambda)$. Let $a = t_0^{(m)} < t_1^{(m)} < \cdots < t_m^{(m)} = 0$ be a densely refining partition of $[a, 0]$ for any $a < 0$, and set $A = \{a < v \le 0\}$ and $A_j^{(m)} = \{t_{j-1}^{(m)} < v \le t_j^{(m)}\}$. Then
$$A = \bigcup_{j=1}^{m} A_j^{(m)}.$$
Since $x_n(t) \to x(t)$ in distribution, we have $\mu_n(A_j^{(m)}) \to \mu(A_j^{(m)})$, thus
$$\lim_{n\to\infty} \int_{A_j^{(m)}} u_n\,d\lambda = \int_{A_j^{(m)}} u\,d\lambda.$$
Hence
$$\varlimsup_{n\to\infty} \int_A u_n v\,d\lambda \le \sum_{j=1}^{m} t_j^{(m)} \int_{A_j^{(m)}} u\,d\lambda$$
for every $m$. Therefore
$$\varlimsup_{n\to\infty} \int_A u_n v\,d\lambda \le \lim_{m\to\infty} \sum_{j=1}^{m} t_j^{(m)} \int_{A_j^{(m)}} u\,d\lambda = \int_A u v\,d\lambda. \tag{25}$$
It follows from $\sup_n E_{\mu_n} \log(d\mu_n/d\nu) < \infty$ that $\sup_n (E_\lambda u_n \log u_n - E_\lambda u_n v) < \infty$, which implies
$$-\infty < \varliminf_{n} E_\lambda u_n v \tag{26}$$
since $-e^{-1} \le u_n \log u_n < \infty$ and $u_n v \le 0$. By combining (25) and (26) and by letting $a \to -\infty$,
$$-\infty < \varliminf_{n} E_\lambda u_n v \le \varlimsup_{n} E_\lambda u_n v \le E_\lambda u v,$$
which completes the proof.
References
1. Kakutani, S.: On equivalence of infinite product measures. Ann. of Math. II. Ser. 49, 214-224 (1948).
2. Shepp, L. A.: Gaussian measures in function space. Pacific J. Math. 17, 1, 167-173 (1966).
3. Gundy, R. F.: The martingale version of a theorem of Marcinkiewicz and Zygmund. Ann. math. Statistics 38, 3, 725-734 (1967).
4. Kadota, T. T.: Nonsingular detection and likelihood ratio for random signals in white Gaussian noise. IEEE Trans. Inform. Theory IT-16, 3, 291-298 (1970).
5. Wong, E.: Likelihood ratio for random signals in Gaussian white noise. (Internal Memorandum of Bell Telephone Laboratories.)
6. Kailath, T.: A general likelihood ratio formula for random signals in Gaussian noise. IEEE Trans. Inform. Theory IT-15, 3, 350-361 (1969).
7. Pitcher, T. S.: On the sample functions of processes which can be added to a Gaussian process. Ann. math. Statistics 34, 1, 329-333 (1963).
8. Grenander, U.: Stochastic processes and statistical inference. Ark. Mat. 1, 195-277 (1950).
9. Wonham, W. M.: Lecture notes on stochastic control, part I. Center for Dynamical Systems, Div. Appl. Math., Brown University (1967).
10. Rozanov, Yu. A.: On the density of one Gaussian measure with respect to another. Theor. Probab. Appl. 7, 1, 82-87 (1962).
11. Doob, J. L.: Stochastic processes, p. 319-321, 440-441. New York: John Wiley 1953.
12. Halmos, P. R.: Measure theory, p. 134-135. Princeton, New Jersey: Van Nostrand 1950.
13. Shepp, L. A.: Radon-Nikodym derivatives of Gaussian measures. Ann. math. Statistics 37, 2, 321-354 (1966).
14. Bharucha, B. H., Kadota, T. T.: On the representation of continuous-parameter processes by a sequence of random variables. IEEE Trans. Inform. Theory IT-16, 2, 139-141 (1970).
15. Kadota, T. T.: Simultaneous diagonalization of two covariance kernels and application to second order stochastic processes. J. Soc. industr. appl. Math. 15, 6, 1470-1480 (1967).
16. Loève, M.: Probability theory, 2nd ed., p. 478-479. Princeton, New Jersey: Van Nostrand 1960.
17. Skorokhod, A. V.: Studies in the theory of random processes. Reading, Massachusetts: Addison-Wesley 1965.
T. T. Kadota
Bell Telephone Laboratories, Inc.
Murray Hill, New Jersey 07974, USA
(Received September 30, 1969)