No. 1770
ASYMPTOTIC NORMALITY OF KOUL-SUSARLA-VAN RYZIN ESTIMATOR USING
COUNTING PROCESS
by
MAI ZHOU
Department of Statistics
University of North Carolina
Chapel Hill, NC 27599-3260
USA
Abstract

To estimate the covariates in a linear model based on censored survival data, Koul, Susarla & Van Ryzin (1981) proposed a generalization of the ordinary least squares estimator. This paper uses counting process and martingale techniques to provide a new proof of the asymptotic normality of the estimator. We represent the estimator as a martingale plus a high order term. This approach allows us to relax some assumptions made by Koul, Susarla & Van Ryzin. We also find that the asymptotic variance is better than they thought it was.

Key Words and Phrases: Censored data, linear regression, counting processes, central limit theorem, asymptotic variance.
Section 0. Introduction

Suppose the patients' survival times (or their logarithms) $T_i$ are random variables, independent of each other, which follow the linear model

$$T_i = \alpha + \beta x_i + \epsilon_i, \qquad i = 1, 2, \dots, n, \qquad (0.1)$$

where the $x_i$'s are known covariates (e.g. age, indicator of treatment, sex, etc.), the $\epsilon_i$'s are i.i.d. random variables with zero mean and finite variance, and $(\alpha, \beta)$ are the parameters to be estimated. Let $F_i(t) = P(T_i < t)$. We only observe the pairs

$$Z_i = \min\{T_i, Y_i\}, \qquad \delta_i = I[T_i \le Y_i],$$

where the $Y_i$'s are i.i.d. censoring times, random variables independent of the $\epsilon_i$'s, with $P(Y_i < t) = G(t)$.
Koul, Susarla & Van Ryzin (1981) suggested that one use the following to estimate $(\alpha, \beta)$:

$$\hat\beta = \sum_i b_{ni}\, \frac{\delta_i Z_i}{1 - \hat G(Z_i)}\, I[Z_i \le M_n], \qquad (0.2)$$

$$\hat\alpha = \sum_i a_{ni}\, \frac{\delta_i Z_i}{1 - \hat G(Z_i)}\, I[Z_i \le M_n], \qquad (0.3)$$

where $a_{ni} = \frac{1}{n} - \bar x\, b_{ni}$, $\bar x = n^{-1} \sum_i x_i$, and $\hat G$ is a Kaplan-Meier (1958) product-limit type estimator of the distribution $G(t)$ of the $Y$'s. $M_n$ is a sequence of constants ($\to \infty$) satisfying certain conditions. This estimator is particularly easy to implement on a computer: it can use an existing least squares package with a slight modification.
Notice that if $\delta_i \equiv 1$, $\hat G \equiv 0$ (no censoring) and $M_n = \infty$, the estimators (0.2), (0.3) reduce to the usual least squares estimators.
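As a concrete illustration of (0.2) and (0.3), the following sketch simulates data from model (0.1), estimates the censoring distribution $G$ by the product-limit method, and runs ordinary least squares on the synthetic responses $\delta_i Z_i/(1-\hat G(Z_i-))$ (with no truncation, i.e. $M_n = \infty$). All distributional settings here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta = 5000, 1.0, 2.0
x = rng.uniform(0.0, 1.0, n)
T = alpha + beta * x + rng.normal(0.0, 0.5, n)   # survival times, model (0.1)
Y = rng.uniform(0.0, 8.0, n)                     # censoring times with cdf G
Z = np.minimum(T, Y)
delta = (T <= Y).astype(float)

# Product-limit (Kaplan-Meier) estimate of G: a censored observation
# (delta == 0) is an "event" for the censoring variable Y.
order = np.argsort(Z)
d_cens = 1.0 - delta[order]
at_risk = n - np.arange(n)
surv = np.cumprod(1.0 - d_cens / at_risk)        # 1 - Ghat(Z_(i))
surv_left = np.concatenate(([1.0], surv[:-1]))   # 1 - Ghat(Z_(i)-)

# Synthetic responses delta_i Z_i / (1 - Ghat(Z_i-)), as in (0.2)-(0.3)
Ystar = np.zeros(n)
Ystar[order] = delta[order] * Z[order] / surv_left

# Ordinary least squares of Ystar on x gives (alpha_hat, beta_hat)
X = np.column_stack([np.ones(n), x])
alpha_hat, beta_hat = np.linalg.lstsq(X, Ystar, rcond=None)[0]
print(alpha_hat, beta_hat)
```

With these settings the fitted intercept and slope should come out near $(\alpha, \beta) = (1, 2)$: the synthetic responses have conditional mean $\alpha + \beta x_i$ because the weighting by $1-\hat G(Z_i-)$ compensates for censoring, provided $1-G$ stays positive on the support of $T$.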
In order to make inference on $\alpha, \beta$, we need not only a consistent estimate but also their distributions or asymptotic distributions. Thus a central limit theorem for $\hat\alpha, \hat\beta$ is important. For the proof of asymptotic normality in their (1981) paper, Koul, Susarla & Van Ryzin used a U-statistic argument, which boils down to approximating the estimator by a U-statistic and then applying Hoeffding (1948).
Our approach uses the martingale structure of the counting processes associated with the estimator and represents the estimator as a martingale plus a term tending to zero. This approach naturally involves time and can be adapted for sequential analysis of the model. The truncation sequence $M_n$ in (0.2), (0.3), which were "constants depending on the unknown $F_i(t)$ and $G(t)$ in a complicated way" and for which "further guidelines and experience are needed in the proper choice of $M_n$" (Miller & Halpern (1982), Gill (1983)), are replaced here by some (observable) order statistic of the $Z_i$'s. In addition, our discussion of the asymptotic variance reveals that Koul, Susarla & Van Ryzin's (1981) formula (3.7) needs an extra $n$ factor in its second term. Thus their asymptotic variance estimator needs to be fixed. Besides, this fact suggests some interesting consequences that can be explored to improve the estimation of the parameters $(\alpha, \beta)$.
Section 1 below contains additional notation and some counting process results that are useful later.

Section 2 contains the asymptotic normality theorem and its proof, but the treatment of the high order term is deferred to Section 4, so that one can better concentrate on the martingale representation part of the proof. Extension to multiple regression is also discussed.

Section 3 takes a closer look at the asymptotic variance derived in Section 2, points out a correction to a formula in Koul, Susarla & Van Ryzin (1981), and remarks on some consequences thereof.
Section 1. Notation and Counting Processes.

We now introduce some additional notation and establish some simple facts that will be useful later. For $i = 1, 2, \dots, n$, let $H_i(t) = P(Z_i \le t)$, $\hat H_i(t) = I[Z_i \le t]$, and $\Lambda_i^u(t) = \int_{-\infty}^t dF_i(s)/(1 - F_i(s-))$. And let

$$Y^+(t) = \sum_{i=1}^n I[Z_i \ge t] = \sum_i \bigl(1 - \hat H_i(t-)\bigr); \qquad \hat\tau = \max_i \{Z_i\};$$

$$\Lambda^c(t) = \int_{[0,t]} \frac{dG(s)}{1 - G(s-)}\,. \qquad (1.4)$$

It is well known that the three processes

$$M_i(t) = I[Z_i \le t] - \int_0^t I[Z_i \ge s]\, d\Lambda_i(s), \qquad (1.5)$$

$$M_i^u(t) = I[Z_i \le t,\ \delta_i = 1] - \int_0^t I[Z_i \ge s]\, d\Lambda_i^u(s), \qquad (1.6)$$

and

$$M_i^c(t) = I[Z_i \le t,\ \delta_i = 0] - \int_0^t I[Z_i \ge s]\, d\Lambda^c(s) \qquad (1.7)$$

are all square integrable martingales with respect to the natural filtration, and

$$\langle M_i^c \rangle(t) = \int_0^t I[Z_i \ge s]\, d\Lambda^c(s). \qquad (1.8)$$

Clearly $M_i = M_i^u + M_i^c$, since $\Lambda_i = \Lambda_i^u + \Lambda^c$. And we define $M_+^c = \sum_i M_i^c$.

The Kaplan-Meier estimator $\hat G(t)$ can be defined as

$$\hat G(t) = 1 - \prod_{s \le t} \Bigl(1 - \frac{\Delta N_+^c(s)}{Y^+(s)}\Bigr),$$

where

$$N_+^c(s) = \sum_{i=1}^n I[Z_i \le s,\ \delta_i = 0].$$

Thus the processes

$$\frac{\hat G(t) - G(t)}{1 - G(t)} = \int_0^{t \wedge \hat\tau} \frac{1 - \hat G_-}{1 - G}\, \frac{1}{Y^+(s)}\, dM_+^c(s), \qquad t \in [0, \hat\tau], \qquad (1.9)$$

and

$$\frac{\hat H_i(t) - H_i(t)}{1 - H_i(t)} = \int_0^t \frac{1}{1 - H_i(s)}\, dM_i(s), \qquad \text{for } t \text{ such that } 1 - H_i(t) > 0, \qquad (1.10)$$

are martingales. Notice that the right hand side of (1.9) is well defined even for $t > \hat\tau$, as a constant equal to its value at $\hat\tau$; we shall use this definition to extend the domain of the martingale, so that both martingales are now defined on $[0, \infty)$, which brings some convenience later.
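The product-limit construction of $\hat G$ from the jumps of the censoring counting process $N_+^c$ and the at-risk process $Y^+$ can be sketched directly in a few lines. The distributions below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
T = rng.exponential(2.0, n)      # survival times
Y = rng.exponential(3.0, n)      # censoring times
Z = np.minimum(T, Y)
delta = T <= Y

# 1 - Ghat(t) = prod over jump times s of N_+^c of (1 - dN_+^c(s) / Y^+(s))
surv = 1.0
for s in np.unique(Z[~delta]):           # jump times of N_+^c
    dNc = np.sum((Z == s) & ~delta)      # jump size of N_+^c at s
    Yplus = np.sum(Z >= s)               # at-risk process Y^+(s)
    surv *= 1.0 - dNc / Yplus
Ghat_tau = 1.0 - surv                    # Ghat evaluated at tau_hat = max Z_i
print(Ghat_tau)
```

This loop is the counting-process reading of the product-limit formula: each censoring event removes the fraction $\Delta N_+^c(s)/Y^+(s)$ of the remaining mass of $1-\hat G$.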
Lemma 0.1. The cross variation process of the martingales (1.6) and (1.7) is zero, i.e. $\langle M_i^u, M_i^c \rangle = 0$.

Proof: Notice that $M_i^u + M_i^c$ is also a martingale: a compensated counting process with intensity $\int_0^t I[Z_i \ge s]\, d(\Lambda_i^u + \Lambda^c)$. Hence

$$\hat M = (M_i^u + M_i^c)^2 - \int_0^t I[Z_i \ge s]\, d(\Lambda_i^u + \Lambda^c)$$

is a martingale. With a little algebra, we see that

$$2 M_i^u M_i^c = \hat M - \Bigl[(M_i^u)^2 - \int_0^t I[Z_i \ge s]\, d\Lambda_i^u\Bigr] - \Bigl[(M_i^c)^2 - \int_0^t I[Z_i \ge s]\, d\Lambda^c\Bigr],$$

which shows, in view of (1.8) and its analogue for $M_i^u$, that $M_i^u M_i^c$ is a martingale, an equivalent statement that their cross variation is zero. ∎
Section 2. Asymptotic Normality

Since the estimates (0.2), (0.3) are truncated at $M_n$, it is natural to expect that the best we can do is to estimate $\alpha, \beta$ up to $M_n$, and the correct centering quantities are thus

$$\alpha^* = \sum_i a_{ni} \int_{-\infty}^{M_n} t\, dF_i(t), \qquad \beta^* = \sum_i b_{ni} \int_{-\infty}^{M_n} t\, dF_i(t); \qquad (2.0)$$

notice if we replace $M_n$ by $\infty$, we get exactly $\alpha$ and $\beta$.

Theorem 2.1. Under the conditions that will be specified at the end of this paper, we have

$$\sqrt n \begin{pmatrix} \hat\alpha - \alpha^* \\ \hat\beta - \beta^* \end{pmatrix} \to N(\underline 0, \Sigma),$$

where $\Sigma = (\sigma_{ij})$:

$\sigma_{11} = \sigma_\alpha^2 = $ (2.5) below but with $b_{ni} b_{nj}$ replaced by $a_{ni} a_{nj}$;

$\sigma_{22} = \sigma_\beta^2 = $ (2.5) below;

and $\sigma_{12} = \sigma_{21} = $ (2.5) below but with $b_{ni} b_{nj}$ replaced by $a_{ni} b_{nj}$, i.e. with leading term

$$\lim n \sum_i a_{ni} b_{ni} \int \Bigl[\frac{t}{1 - G(t-)} - \frac{h_i(t)}{1 - H_i(t)}\Bigr]^2 [1 - H_i(t)]\, \frac{dF_i}{1 - F_i}\,,$$

where $h_i(t) = \int_t^\infty s\, dF_i(s)$.
Proof: We only illustrate the proof for $\hat\beta$ alone, since the proofs for $\hat\alpha$ and for the joint normality of $(\hat\alpha, \hat\beta)$ are similar. Observe that

$$Y_i^* = \frac{Z_i\, \delta_i}{1 - \hat G(Z_i-)}\,, \qquad (2.1)$$

so the KSV estimator (0.2) of $\beta$ can be written as (take $\hat G$ to be the Kaplan-Meier estimator)

$$\hat\beta = \sum_i b_{ni}\, \frac{Z_i\, \delta_i}{1 - \hat G(Z_i-)} = \sum_i b_{ni} \int \frac{t}{1 - \hat G(t-)}\; d\, I[Z_i \le t,\ \delta_i = 1].$$

If we need to truncate at $M_n$, as KSV did, we can simply set the upper limit of the integral to $M_n$. Since $\beta^* = \sum_i b_{ni} \int^{M_n} t\, dF_i$, we have

$$\hat\beta - \beta^* = \sum_i b_{ni} \Bigl\{ \int \frac{t}{1 - \hat G(t-)}\; d\, I[Z_i \le t,\ \delta_i = 1] - \int t\, dF_i \Bigr\}.$$

Now plus and minus

$$\int \frac{t}{1 - \hat G(t-)}\, \frac{I[Z_i \ge t]}{1 - F_i(t)}\, dF_i$$

in the $\{\dots\}$, and recall $1 - \hat H_i(t) = I[Z_i > t]$ and $1 - H_i(t) = (1 - F_i(t))(1 - G(t))$; we get

$$\hat\beta - \beta^* = \sum_i b_{ni} \int \frac{t}{1 - \hat G(t-)}\, dM_i^u(t) + \sum_i b_{ni} \int \Bigl[\frac{1 - \hat H_i(t)}{1 - H_i(t)}\, \frac{1 - G(t-)}{1 - \hat G(t-)} - 1\Bigr]\, t\, dF_i(t). \qquad (2.2)$$
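The reason the synthetic response (2.1) centers correctly is the identity $E[\delta Z/(1-G(Z-))] = \int t\, dF$ when the true $G$ is plugged in. A quick Monte Carlo check, with illustrative exponential distributions not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
T = rng.exponential(1.0, n)            # E[T] = 1
Y = rng.exponential(2.0, n)            # 1 - G(t) = exp(-t/2)
Z = np.minimum(T, Y)
delta = T <= Y

# Y*_i = delta_i Z_i / (1 - G(Z_i-)) with the *true* G plugged in
Ystar = delta * Z / np.exp(-Z / 2.0)
print(Ystar.mean())                    # should be close to E[T] = 1
```

Conditioning on $T$, $E[\delta \mid T] = P(Y \ge T \mid T) = 1 - G(T-)$, so the weighting cancels exactly; with an estimated $\hat G$ the same identity holds only asymptotically, which is what the martingale representation quantifies.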
Now, because

$$\frac{1 - \hat H_i}{1 - H_i} = 1 + \frac{H_i - \hat H_i}{1 - H_i} \qquad \text{and} \qquad \frac{1 - G(t-)}{1 - \hat G(t-)} = 1 + \frac{\hat G(t-) - G(t-)}{1 - \hat G(t-)}\,,$$

it is easily seen that

$$\Bigl[\frac{1 - \hat H_i}{1 - H_i}\, \frac{1 - G(t-)}{1 - \hat G(t-)}\Bigr] - 1 = \frac{H_i - \hat H_i}{1 - H_i} + \frac{\hat G(t-) - G(t-)}{1 - \hat G(t-)} + \text{(a high order term)}$$

$$= \frac{H_i - \hat H_i}{1 - H_i} + \frac{\hat G(t-) - G(t-)}{1 - G(t-)} + \text{(two high order terms)}. \qquad (2.3)$$

Now plug this into (2.2) and ignore the high order terms; we get

$$\hat\beta - \beta^* \approx \sum_i b_{ni} \int \frac{t}{1 - \hat G(t-)}\, dM_i^u(t) + \sum_i b_{ni} \int \Bigl[\frac{H_i - \hat H_i}{1 - H_i} + \frac{\hat G(t-) - G(t-)}{1 - G(t-)}\Bigr]\, t\, dF_i(t).$$

If we let $h_i(t) = \int_t^\infty s\, dF_i(s)$, we have $dh_i(t) = -t\, dF_i(t)$, and integration by parts in the second term above shows that

$$\int \frac{H_i - \hat H_i}{1 - H_i}\, t\, dF_i = \int \frac{\hat H_i - H_i}{1 - H_i}\, dh_i = \Bigl[h_i(t)\, \frac{\hat H_i - H_i}{1 - H_i}\Bigr]_{-\infty}^{+\infty} - \int h_i\; d\, \frac{\hat H_i - H_i}{1 - H_i} = -\int \frac{h_i}{1 - H_i}\, dM_i\,,$$

by (1.10). Similarly,

$$\int \frac{\hat G(t-) - G(t-)}{1 - G(t-)}\, t\, dF_i = -\Bigl[h_i(t)\, \frac{\hat G - G}{1 - G}\Bigr]_{-\infty}^{+\infty} + \int h_i(t)\; d\, \frac{\hat G - G}{1 - G} = \int h_i(t)\; d\, \frac{\hat G - G}{1 - G}\,.$$
So, using (1.9) and (1.10), we have

$$\hat\beta - \beta^* \approx \sum_i b_{ni} \int \frac{t}{1 - \hat G(t-)}\, dM_i^u(t) - \sum_i b_{ni} \int \frac{h_i}{1 - H_i}\, dM_i(t) + \int \Bigl(\sum_j b_{nj} h_j(t)\Bigr)\, \frac{1 - \hat G_-}{1 - G}\, \frac{1}{Y^+(t)}\, dM_+^c(t)$$

$$= \sum_{i=1}^n b_{ni} \int \frac{t}{1 - G(t-)}\, dM_i^u(t) - \sum_{i=1}^n b_{ni} \int \frac{h_i}{1 - H_i}\, dM_i(t) + \int \Bigl(\sum_j b_{nj} h_j(t)\Bigr)\, \frac{1 - \hat G_-}{1 - G}\, \frac{1}{Y^+(t)}\, dM_+^c(t), \qquad (2.4)$$

absorbing the difference between $1 - \hat G(t-)$ and $1 - G(t-)$ in the first integral into the high order terms.

The quadratic variation process of (2.4) ($\approx \langle \hat\beta - \beta^* \rangle$) can easily be computed:

$$\langle \hat\beta - \beta^* \rangle = \sum_i b_{ni}^2 \int \Bigl[\frac{t}{1 - G(t-)} - \frac{h_i(t)}{1 - H_i(t)}\Bigr]^2 I[Z_i \ge t]\, \frac{dF_i(t)}{1 - F_i(t)} + \sum_i \int \Bigl[\Bigl(\sum_j b_{nj} h_j(t)\Bigr)\, \frac{1 - \hat G_-}{1 - G}\, \frac{1}{Y^+(t)} - \frac{b_{ni} h_i(t)}{1 - H_i(t)}\Bigr]^2 I[Z_i \ge t]\, d\Lambda^c(t).$$

The limit of the above quadratic variation process is the asymptotic variance.
Of course we need to check the Lindeberg condition here to assure the asymptotic normality, see e.g. Andersen & Borgan (1985); it can be done similarly to Zhou (1988). Also we need to show the high order terms are negligible under certain mild tail conditions; for details see Section 4. And so we have a central limit theorem, and the asymptotic variance is

$$\lim n \langle \hat\beta - \beta^* \rangle = \lim n \sum_i b_{ni}^2 \int \Bigl[\frac{t}{1 - G(t-)} - \frac{h_i(t)}{1 - H_i(t)}\Bigr]^2 [1 - H_i(t)]\, \frac{dF_i}{1 - F_i} + \lim n \sum_i \int \Bigl[\frac{\sum_j b_{nj} h_j(t)}{\sum_j (1 - H_j(t))} - \frac{b_{ni} h_i(t)}{1 - H_i(t)}\Bigr]^2 [1 - H_i(t)]\, \frac{dG}{1 - G}\,. \qquad (2.5)$$

There is no difficulty in extending Theorem 2.1 to the multiple regression case, since the $i$th component of the estimator, $\hat\beta_i = \sum_j u_{ij} Y_j^*$, is again a linear combination of the $Y_j^*$ defined in (2.1); so the same proof also works.
Section 3. Asymptotic Variance

To write the asymptotic variance in another form, we develop the big square in the second term of (2.5), $(a - b)^2 = a^2 + b^2 - 2ab$; the resulting pieces combine to give

$$\lim n \langle \hat\beta - \beta^* \rangle = \lim n \sum_{i=1}^n b_{ni}^2 \int \Bigl[\frac{t}{1 - G(t-)} - \frac{h_i(t)}{1 - H_i(t)}\Bigr]^2 [1 - H_i]\, \frac{dF_i}{1 - F_i} + \lim n \sum_{i=1}^n b_{ni}^2 \int \frac{h_i^2(t)}{[1 - H_i(t)]^2}\, (1 - F_i)\, dG - \lim n \int \frac{\bigl[\sum_j b_{nj} h_j(t)\bigr]^2}{\sum_j (1 - H_j)}\, \frac{dG}{1 - G}\,. \qquad (3.1)$$

The first two positive terms of (3.1) are just the variance of

$$\sqrt n \sum_i b_{ni}\, \frac{\delta_i T_i}{1 - G(T_i)}\,,$$

as the next Lemma shows. Notice the third term is negative! It is a little different from Koul, Susarla & Van Ryzin's formulas, as they had an excess $1/n$ term there.
Lemma 3.1:

$$\mathrm{Var}\Bigl[\frac{\delta_i Z_i}{1 - G(Z_i)}\Bigr] = \int \Bigl[\frac{t}{1 - G(t-)} - \frac{h_i(t)}{1 - H_i(t)}\Bigr]^2 (1 - G)\, dF_i + \int \frac{h_i^2(t)}{[1 - H_i(t)]^2}\, (1 - F_i)\, dG.$$

Proof: We first observe that the mean of the quantity is $\int t\, dF_i$, and so

$$\mathrm{Var}\Bigl[\frac{\delta_i Z_i}{1 - G(Z_i)}\Bigr] = E\Bigl[\frac{\delta_i Z_i}{1 - G(Z_i)} - \int t\, dF_i\Bigr]^2,$$

but

$$\frac{\delta_i Z_i}{1 - G(Z_i)} = \int \frac{t}{1 - G(t-)}\; d\, I[Z_i \le t,\ \delta_i = 1], \qquad \text{see (2.1)}.$$

Thus

$$\frac{\delta_i Z_i}{1 - G(Z_i)} - \int t\, dF_i = \int \frac{t}{1 - G(t-)}\; d\, I[Z_i \le t,\ \delta_i = 1] - \int t\, dF_i \qquad (3.2)$$

$$= \int \frac{t}{1 - G(t-)}\, dM_i^u + \int \Bigl[\frac{I[Z_i > t]}{(1 - G)(1 - F_i)} - 1\Bigr]\, t\, dF_i = \int \frac{t}{1 - G(t-)}\, dM_i^u + \int \Bigl[\frac{1 - \hat H_i}{1 - H_i} - 1\Bigr]\, t\, dF_i. \qquad (3.3)$$

If we let

$$h_i(t) = \int_t^\infty s\, dF_i(s), \qquad \text{then} \qquad dh_i = -t\, dF_i(t)$$

and (3.3) becomes

$$= \int \frac{t}{1 - G(t-)}\, dM_i^u + \int \frac{\hat H_i - H_i}{1 - H_i}\, dh_i(t). \qquad (3.4)$$

Integration by parts on the second term of (3.4), we have

$$\int \frac{\hat H_i - H_i}{1 - H_i}\, dh_i(t) = \Bigl[h_i(t)\, \frac{\hat H_i - H_i}{1 - H_i}\Bigr]_{-\infty}^{+\infty} - \int h_i(t)\; d\, \frac{\hat H_i - H_i}{1 - H_i} = -\int h_i(t)\, \frac{1}{1 - H_i(t)}\, dM_i(t),$$

in view of (1.10). Plugging this into (3.4), we have

$$(3.4) = \int \frac{t}{1 - G(t-)}\, dM_i^u - \int \frac{h_i(t)}{1 - H_i(t)}\, dM_i(t) = \int \Bigl(\frac{t}{1 - G(t-)} - \frac{h_i}{1 - H_i}\Bigr)\, dM_i^u(t) - \int \frac{h_i}{1 - H_i}\, dM_i^c(t). \qquad (3.5)$$

The quadratic variation process of the above martingale is now easy to compute, and the expected value of the quadratic variation process gives the desired variance:

$$\mathrm{Var} = E\langle(3.5)\rangle = \int \Bigl(\frac{t}{1 - G} - \frac{h_i}{1 - H_i}\Bigr)^2 (1 - H_i)\, \frac{dF_i}{1 - F_i} + \int \frac{h_i^2}{[1 - H_i]^2}\, (1 - H_i)\, \frac{dG}{1 - G}$$

$$= \int \Bigl(\frac{t}{1 - G} - \frac{h_i}{1 - H_i}\Bigr)^2 (1 - G)\, dF_i + \int \frac{h_i^2}{[1 - H_i]^2}\, (1 - F_i)\, dG. \qquad \text{∎}$$
The negative term in (3.1) is, in general, nonzero. To see this, notice that $\bigl[\sum_j b_{nj} h_j(-\infty)\bigr]^2 = \beta^2$, and $\bigl[\sum_j b_{nj} h_j(t)\bigr]^2$ is a continuous nonnegative function of $t$, while the other part of the integral satisfies

$$\int \frac{n\, dG(t)}{\sum_j (1 - H_j)\, (1 - G(t))} \ge \int \frac{dG}{1 - G} = -\log\bigl(1 - G(\infty)\bigr).$$

Unless $G(t)$ is completely flat at places where $\bigl[\sum_j b_{nj} h_j(t)\bigr]^2$ is positive, the term is going to be nonzero. Thus the variance estimator proposed by Koul, Susarla & Van Ryzin (1981) (4.8) needs to include an extra negative term. We suggest using

$$-n \int \frac{\bigl[\sum_j b_{nj} \hat h_j(t)\bigr]^2}{[Y^+(t) - 1]\, Y^+(t)}\; dN_+^c(t),$$

where $\hat h_j$ is an estimate of $h_j$, to estimate the negative part of the asymptotic variance (3.1).
REMARK: The first two positive terms in the asymptotic variance (3.1) can be thought of as the asymptotic variance of an estimator with known censoring distribution (Lemma 3.1), i.e. (0.2) with the true $G(t)$ instead of $\hat G(t)$ therein. Estimation of $G(t)$ actually makes the estimator of $\beta$ better. Using this fact to modify the estimators (0.2), (0.3) so as to produce a larger negative part is possible and interesting. For some discussions see Fygenson & Zhou (1988).
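The remark that estimating $G$ makes the estimator of $\beta$ better can be probed by simulation: the sketch below compares the sampling variance of $\sum_i b_{ni}\, \delta_i Z_i/(1-\hat G(Z_i-))$ using the product-limit $\hat G$ against the same statistic with the true $G$ plugged in. All distributional settings are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, beta = 300, 1000, 2.0

def km_surv_left(Z, delta):
    """1 - Ghat(Z_i-) at each Z_i, product-limit estimate for censoring."""
    m = len(Z)
    order = np.argsort(Z)
    d = 1.0 - delta[order]                    # censoring events
    at_risk = m - np.arange(m)
    surv = np.cumprod(1.0 - d / at_risk)
    left = np.concatenate(([1.0], surv[:-1]))
    out = np.empty(m)
    out[order] = left
    return out

est_hatG, est_trueG = [], []
for _ in range(reps):
    x = rng.uniform(0.0, 1.0, n)
    T = 1.0 + beta * x + rng.normal(0.0, 0.5, n)
    Y = rng.uniform(0.0, 8.0, n)              # true G uniform on (0, 8)
    Z, delta = np.minimum(T, Y), (T <= Y).astype(float)
    b = (x - x.mean()) / np.sum((x - x.mean()) ** 2)    # b_ni weights
    est_hatG.append(np.sum(b * delta * Z / km_surv_left(Z, delta)))
    est_trueG.append(np.sum(b * delta * Z / (1.0 - Z / 8.0)))

v_hat, v_true = np.var(est_hatG), np.var(est_trueG)
print(v_hat, v_true)
```

If the negative term in (3.1) is appreciable for the chosen design, the empirical variance with $\hat G$ should come out no larger than the one with the true $G$, mirroring the REMARK above.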
Section 4. High Order Terms

We show in this section that the high order terms in the proof of Theorem 2.1 are negligible. Recall from Section 2, (2.3), that the high order term is

$$\sum_i b_{ni} \int_{-\infty}^{M_n} \Bigl[\frac{H_i - \hat H_i}{1 - H_i}\, \frac{\hat G(t-) - G(t)}{1 - G(t-)} + \frac{\hat G(t-) - G(t)}{1 - G(t-)}\, \frac{\hat G(t-) - G(t-)}{1 - G(t-)}\Bigr]\, t\, dF_i(t) \qquad (4.1)$$

$$= \int_{-\infty}^{M_n} \Bigl[\sum_i b_{ni}\, \frac{H_i - \hat H_i}{1 - H_i}\, f_i(t)\Bigr]\, \frac{\hat G(t-) - G(t)}{1 - G(t-)}\, t\, dt + \int_{-\infty}^{M_n} \frac{\hat G(t-) - G(t)}{1 - G(t-)}\, \frac{\hat G(t-) - G(t-)}{1 - G(t-)}\, t\; d\Bigl(\sum_i b_{ni} F_i(t)\Bigr), \qquad (4.2)$$

where $f_i$ is the density of $F_i$. We need to show $\sqrt n \times (4.2) = o_p(1)$. The following Lemmas are useful.
Lemma 4.1: If $X_1, X_2, \dots, X_n$ are independent random variables with $P(X_i < t) = F_i(t)$, and the $f_{ni}$ are constants such that $\sum f_{ni}^2 \to 0$ as $n \to \infty$, and $\varepsilon > 0$, then for those $n$ such that $\sum f_{ni}^2 \le 2\varepsilon^2$ we have

$$P\Bigl(\sup_t \Bigl|\sum f_{ni}\bigl[I[X_i < t] - F_i(t)\bigr]\Bigr| > \varepsilon\Bigr) \le 8(n + 1) \exp\Bigl(-\frac{\varepsilon^2}{32 \sum f_{ni}^2}\Bigr).$$

Furthermore, if the $f_{ni}$ are functions of $t$, $f_{ni}(t)$, we have the following: if the $f_{ni}(t)$ are of bounded variation with $V_{-\infty}^{\infty} f_{ni}(t) \le K < \infty$, and $\sup_t \sum f_{ni}^2(t) \to 0$ as $n \to \infty$, then for those $n$ such that $\sup_t \sum f_{ni}^2(t) \le 2\varepsilon^2$ a similar exponential bound holds, with the factor $8(n + 1)$ replaced by $C_\varepsilon(n) = \frac{16K}{\varepsilon}\, n^2 + n + 1$.

Proof: See Zhou (1988).
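Lemma 4.1 says a weighted empirical process is uniformly small once $\sum f_{ni}^2 \to 0$. With the illustrative choice $f_{ni} = 1/n$ (so $\sum f_{ni}^2 = 1/n$) the supremum is just the Kolmogorov distance of the empirical cdf, and its decay can be seen numerically; this is an illustration of the phenomenon, not the proof:

```python
import numpy as np

rng = np.random.default_rng(3)
sups = []
for n in (100, 1_000, 10_000):
    X = np.sort(rng.uniform(0.0, 1.0, n))      # F_i(t) = t on [0, 1]
    # sup_t | sum_i f_ni (1[X_i < t] - F_i(t)) |  with  f_ni = 1/n:
    # check just below and just above each order statistic.
    below = np.abs(np.arange(n) / n - X)        # value (k-1)/n at t = X_(k)
    above = np.abs((np.arange(n) + 1) / n - X)  # value k/n just above X_(k)
    sups.append(max(below.max(), above.max()))
print(sups)
```

The lemma's exponential bound $8(n+1)\exp(-\varepsilon^2/(32 \sum f_{ni}^2))$ is of Dvoretzky-Kiefer-Wolfowitz type, adapted to non-identically distributed summands and to weight functions of bounded variation.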
Lemma 4.2:

$$\sup_{t \le M_n} \frac{|\hat G(t) - G(t)|}{1 - G(t)} = o_p(1), \qquad \sup_{t \le M_n} \frac{|\hat G(t) - G(t)|}{1 - \hat G(t)} = o_p(1),$$

provided any one of the following conditions is satisfied:

(a) $1 - G(\infty) > 0$;

(b) the design is random, i.e. the $x_i$ are i.i.d. $\sim D(t)$, independent of $T$ and $\epsilon_i$, with $\mathrm{Var}(x_i) < \infty$;

(c) $M_n = Z_{(k_n)}$, an order statistic of the $Z_i$'s, such that

$$\int_{-\infty}^{M_n} \frac{d\Lambda^c(t)}{\frac{1}{n} \sum_i (1 - H_i(t))} = O_p(n).$$

Proof: See Zhou (1988).
Lemma 4.3: If $h(t)$ is a real function such that $h(t) = h_1(t) - h_2(t)$, where the $h_i(t)$ are nonnegative nonincreasing functions such that $h(t)\, B(C(t)) \to 0$ a.s. as $t \to \tau_G$, then the processes

$$h(t \wedge \hat\tau_n)\, Z_n(t \wedge \hat\tau_n); \qquad \int^{t \wedge \hat\tau_n} h(s)\, dZ_n(s); \qquad \int^{t \wedge \hat\tau_n} Z_n(s)\, dh(s)$$

converge jointly in $D[0, \tau_G]$ in distribution to

$$h(t)\, B(C(t)); \qquad \int_0^t h(s)\, dB(C(s)); \qquad \int^t B(C(s))\, dh(s),$$

where

$$Z_n(t) = \sqrt n\; \frac{\hat G(t) - G(t)}{1 - G(t)}\,,$$

$B(\cdot)$ is a standard Brownian motion, and

$$C(t) = \int_{-\infty}^t \frac{dG}{(1 - G)^2 (1 - F)}\,.$$

Proof: This is Gill (1983) Theorem 2.1 with a generalization on the $h$ function. For details see Zhou (1986). ∎
It is easy to see that, for a fixed finite $T > 0$, the two integrals in (4.2) restricted to $(-\infty, T]$, i.e. $\bigl[\sqrt n \int_{-\infty}^{T} (\ast)\, d(\ast)\bigr]$, are $o_p(1)$ under the assumptions made. To show that the tail parts of the integrals are $o_p(1)$ is more involved. Let us look at the second term of (4.2) first. If we assume

$$\lim_{T \to \infty} \limsup_n \int_T^{\infty} \frac{\bigl[t \sum_i b_{ni} f_i(t)\bigr]^2}{\frac{1}{n} \sum_i (1 - F_i)}\, \frac{dG}{(1 - G)^2} = 0, \qquad (4.3)$$

then by Lemma 4.3, for every $\varepsilon > 0$ we can choose $T$ large to make the tail arbitrarily small; in view of Lemma 4.2, this proves the upper tail is $o_p(1)$. The lower tail is easier; we need only to assume

$$\int_{-\infty}^{T} \Bigl|t \sum_i b_{ni} f_i(t)\Bigr|\, dt < \infty. \qquad (4.4)$$
-16-
As
to the first
term of (4.2),
similar to Zhou (1988),
but here in
addition to
~lb2. ~ b~i
~
Hi (s)[l-H i (s)]
K(s,t); s
<t
(4.5)
nl
(4.6)
we also assume
f.(t)
t
1
for each i and all t,
1-F.(t)
1
then,
stretch out Browning bridge.
'"
H. (t )-H. (t)
1
(4.5), (4.6) imply that
J~
b
(4.7)
~
2
bi
1
1
1-G(t)
converge weakly to a
i
Now (4.7), Lemma 4.2 guarantees that
(4.8)
thus the whole term is
0
p
(1).
•
Conditions:

(a1) $0 < m \le n \sum_i b_{ni}^2 \le M < \infty$ for large $n$.

(a2) The variance-covariances $\sigma_{ij}$ are well defined, non-infinite and non-zero.

(a3) $\max_i n\, b_{ni}^2 \to 0$ and $\max_i n\, a_{ni}^2 \to 0$, as $n \to \infty$.

(a4) $\sup_{i, t} E[\epsilon_i - t \mid \epsilon_i > t] < \infty$.

(b1) $M_n$ are stopping times such that $\lim M_n = \lim \hat\tau_n$, and one of the conditions of Lemma 4.2 holds.

(b2) (4.3), (4.4), (4.5), (4.6), (4.7) hold.
REFERENCES

AALEN, O. (1978). Nonparametric inference for a family of counting processes. Ann. Statist. 6, 701-726.

ANDERSEN, P. K. and BORGAN, O. (1985). Counting process models for life history data: a review (with discussion). Scand. J. Statist. 12, 97-158.

ANDERSEN, P. K., BORGAN, O., GILL, R. and KEIDING, N. (1982). Linear nonparametric tests for comparison of counting processes, with applications to censored survival data. Int. Statist. Rev. 50, 219-258.

ANDERSEN, P. K. and GILL, R. (1982). Cox's regression model for counting processes: a large sample study. Ann. Statist. 10, 1100-1120.

BILLINGSLEY, P. (1968). Weak Convergence of Probability Measures. Wiley, New York.

BUCKLEY, J. and JAMES, I. (1979). Linear regression with censored data. Biometrika 66, 429-436.

CHATTERJEE, S. and McLEISH, D. L. (1986). Fitting linear regression models to censored data by least squares and maximum likelihood methods. Commun. Statist. Theory Meth. 15(11), 3227-3243.

COX, D. R. (1972). Regression models and life tables (with discussion). J. R. Statist. Soc. B 34, 187-220.

FYGENSON, M. and ZHOU, M. (1988). Analysis of linear regression models with censoring. Preprint.

GILL, R. (1980). Censoring and Stochastic Integrals. Mathematical Centre Tracts 124. Mathematisch Centrum, Amsterdam.

GILL, R. (1983). Large sample behavior of the product-limit estimator on the whole line. Ann. Statist. 11, 49-58.

HOEFFDING, W. (1948). A class of statistics with asymptotically normal distribution. Ann. Math. Statist. 19, 293-325.

JACOBSEN, M. (1982). Statistical Analysis of Counting Processes. Lecture Notes in Statistics. Springer, New York.

KAPLAN, E. and MEIER, P. (1958). Non-parametric estimation from incomplete observations. J. Amer. Statist. Assoc. 53, 457-481.

KOUL, H., SUSARLA, V. and VAN RYZIN, J. (1981). Regression analysis with randomly right-censored data. Ann. Statist. 9, 1276-1288.

LEURGANS, S. (1987). Linear models, random censoring and synthetic data. Biometrika 74, 301-309.

MILLER, R. G. and HALPERN, J. (1982). Regression with censored data. Biometrika 69, 521-531.

POLLARD, D. (1984). Convergence of Stochastic Processes. Springer, New York.

SHORACK, G. R. and WELLNER, J. (1986). Empirical Processes with Applications to Statistics. Wiley, New York.

SUSARLA, V., TSAI, W. Y. and VAN RYZIN, J. (1984). A Buckley-James-type estimator for the mean with censored data. Biometrika 71, 624-625.

SUSARLA, V. and VAN RYZIN, J. (1980). Large sample theory for an estimator of the mean survival time from censored samples. Ann. Statist. 8, 1002-1016.

WANG, J. G. (1987). A note on the uniform consistency of the Kaplan-Meier estimator. Ann. Statist. 15, 1313-1316.

ZHENG, Z. (1984). Regression with randomly censored data. Ph.D. thesis, Columbia University.

ZHOU, M. (1986). Some nonparametric two sample tests with censored data. Ph.D. thesis, Columbia University.

ZHOU, M. and VAN RYZIN, J. (1987). Difference of means test with censored data. Submitted to J. Amer. Statist. Assoc.

ZHOU, M. (1988). Asymptotic normality of 'synthetic data' regression estimator for censored survival data. Preprint.