Ko, Wen-Jene (1991). "A Sequential Clinical Trials Model for Determining the Best Among Three Treatments with Normal Responses."

A SEQUENTIAL CLINICAL TRIALS MODEL
FOR DETERMINING THE BEST AMONG THREE TREATMENTS
WITH
NORMAL RESPONSES
by
Wen-Jene Ko
A dissertation submitted to the faculty of The University of North
Carolina at Chapel Hill in partial fulfillment of the requirements for
the degree of Doctor of Philosophy in the Department of Statistics.
Chapel Hill
1991
Approved by:
Advisor
Reader
WEN-JENE KO. A Sequential Clinical Trials Model for Determining the
Best among Three Treatments with Normal Responses.
(Under the direction of GORDON D. SIMONS)

ABSTRACT
Patients are frequently involved in clinical trials which are
designed to compare two or more treatments. For ethical reasons, any
unnecessary use of inferior treatments should be avoided. This concern
invites the use of a sequential clinical trial. The focus of attention
here is upon the case of three treatments, and the approach is
Bayesian, with the adaptation of a decision-theoretic model proposed by
Anscombe (1963). The main feature of his approach is the use of a
testing stage within which patients are assigned three at a time to
the three treatments, followed by a post-testing stage within which
all remaining patients are assigned to what (by this time) appears to
be the best of the three treatments. The main statistical problem is
to find an optimal stopping rule for stopping the testing stage.
It is assumed that both the treatment responses and prior
parameters are normally distributed. As with earlier work by Chernoff
and Petkau (1981) for two treatments, a "continuous time"
approximation is introduced, which, in the present case, leads to the
consideration of two independent Brownian motions, and a heat equation
in two spatial variables. The problem of finding an optimal stopping
time becomes that of finding the solution of a free boundary problem.
The emphasis here is on finding this free boundary numerically by
approximating the two Brownian motions by a two-dimensional simple
random walk. The approach mirrors an approach developed by Chernoff
and Petkau for statistical problems involving a single spatial
variable. Their approach requires an approximating one-dimensional
random walk. Its implementation in two dimensions is much more
demanding.
ACKNOWLEDGEMENTS
First and foremost, I would like to express my great appreciation
to my advisor Dr. Gordon Simons. His expert guidance and continuous
encouragement have helped me immensely throughout the past few years.
My committee members (Drs. C. Ji, G. Kallianpur, V.G. Kulkarni and
P.K. Sen) are to be thanked in addition for their careful reading of
the manuscript.
I would also like to thank the entire faculty and staff of the
Department of Statistics for their support and assistance throughout
my graduate studies.
With heartfelt gratitude, I thank my mother for her love and
constant devotion.
Finally, I want to thank my loving husband and son for their
constant support.
TABLE OF CONTENTS

LIST OF TABLES

LIST OF FIGURES

Chapter

I.   INTRODUCTION
     1.1 Introduction
     1.2 Literature Review
         1.2.1 Sequential Clinical Trials
         1.2.2 Treatments with Dichotomous Responses
         1.2.3 Treatments with Normal Responses
         1.2.4 Testing for the Sign of a Normal Mean
         1.2.5 Numerical Methods

II.  ESTABLISHING THE MODEL
     2.1 Introduction
     2.2 Two Independent Gaussian Processes
     2.3 The Stopping Risk
     2.4 Continuous Time Version
     2.5 Parameter-Free Problem
     2.6 A 2-Dimensional Random Walk

III. A 2-DIMENSIONAL RANDOM WALK
     3.1 Introduction
     3.2 The Properties of the Set of the Last Continuation Points
     3.3 The Numerical Techniques
         3.3.1 Programming Techniques
         3.3.2 Programming with Huge Matrices in C
         3.3.3 Interpolation
     3.4 An Example

IV.  THE ASYMPTOTICALLY EXPECTED OVERJUMP OF A 2-DIMENSIONAL RANDOM WALK
     4.1 Introduction
     4.2 The Asymptotically Expected Overjump
     4.3 Numerical Results

APPENDIX A. PROGRAM LISTING

BIBLIOGRAPHY
LIST OF TABLES

Table 3.3.1.1: The CPU Time Required for the Solution at Time 1+(641)Δ, with Δ = 0.125
Table 3.3.3.1: Estimates of the Risk at the Last Continuation Points: 1 Step after Changing the Step Size
Table 3.3.3.2: Estimates of the Risk at the Last Continuation Points: 2 Steps after Changing the Step Size
Table 3.3.3.3: Estimates of the Risk at the Last Continuation Points: 5 Steps after Changing the Step Size
Table 3.3.3.4: Estimates of the Risk at the Last Continuation Points: 10 Steps after Changing the Step Size
Table 3.3.3.5: Estimates of the Risk at the Last Continuation Points: 20 Steps after Changing the Step Size
Table 3.3.3.6: Estimates of the Risk at the Last Continuation Points: 40 Steps after Changing the Step Size
Table 4.3.1: Asymptotically Expected Overjumps
LIST OF FIGURES

Figure 2.3.1: Regions T0, T1 and T2
Figure 2.5.1: The Scales of the Time Variable s*
Figure 3.2.1: The Last Continuation Points at s = 3 (Considering Different Time Variable Spacings)
Figure 3.3.1.1: The Last Continuation Points at s = {2, 5, 10, 20, 35, 50, 65, 80, 100}
Figure 3.3.1.2: The Last Continuation Points at s = {40, 50, 65, 80, 100}
Figure 3.3.1.3: The Optimal Continuation Region at s = 100
Figure 3.3.3.1: Sample of Grid Levels (Δx = Δy = Δ)
Figure 3.3.3.2: Sample of Grid Levels (Δx = Δ = Δy/2)
Figure 3.3.3.3: 0 Steps after Changing the Step Size (s = 15)
Figure 3.3.3.4: 1 Step after Changing the Step Size (s = 15.03125)
Figure 3.3.3.5: 2 Steps after Changing the Step Size (s = 15.0625)
Figure 3.3.3.6: 5 Steps after Changing the Step Size (s = 15.15625)
Figure 3.3.3.7: 10 Steps after Changing the Step Size (s = 15.3125)
Figure 3.3.3.8: 20 Steps after Changing the Step Size (s = 15.625)
Figure 3.3.3.9: 40 Steps after Changing the Step Size (s = 16.25)
Figure 4.2.1: Boundary: X + 2Y = 8 or -8
Figure 4.3.1: Perpendicular Expected Overjumps vs Slopes
Figure 4.3.2: Horizontal Expected Overjumps vs Slopes
Figure 4.3.3: Vertical Expected Overjumps vs Slopes
CHAPTER I
INTRODUCTION AND LITERATURE REVIEW
1.1 Introduction
This dissertation develops a sequential clinical trials model to
select the best among two new treatments, called A and B, and a
standard treatment by treating a specified number N of patients. Of
particular interest are treatments with normal responses and with
normally distributed prior parameters.
Patients are frequently involved in clinical trials which are
designed to compare two or more treatments. For ethical reasons, any
unnecessary use of inferior treatments should be avoided. This concern
motivates the use of a sequential clinical trial. Armitage (1961)
discusses some sequential stopping rules for trials comparing two
treatments. Compared to any fixed sample size design, these rules
reduce the expected sample size while maintaining the same error
probabilities.
Many studies in the literature address trials having two
treatments. Some of them are discussed in Sections 1.2.2 and 1.2.3.
However, for any illness, there might be numerous potential new
treatments. Pocock (1977) describes some methods for three
treatments, run as three pairwise ongoing trials analyzed by so-called
"group sequential" methods. This is further discussed by McPherson
(1984).
The trial that this dissertation research concerns consists of the
allocation of treatments to n triplets of patients. A conclusion is
made as to which treatment appears to be best, so that the remaining
N-3n patients will be given the apparently best treatment based on the
results of the first n triplets. The main feature of this model is the
use of a testing stage followed by a post-testing stage. The
statistical problem is to determine the number of triplets to be
considered for the testing phase.
The strategy, from three treatments to one, is not a fully
satisfactory model for ethical reasons. It is difficult to justify a
clinical trial which continues sampling by triplets while a treatment
is performing relatively very poorly. This, however, can occur since
the model is constrained to make a single switch to a unique
treatment. Our work is useful for developing a more complicated model
in the future. Palmer (1988) discusses a model for a clinical trial
involving three treatments with dichotomous responses. The
computations involving treatments with continuous responses are much
more difficult. The model discussed by Palmer begins with an initial
testing stage, called stage I. This continues until a decision is
reached to eliminate the single worst-appearing treatment, leaving two
treatments in contention. Secondly, one decides after stage II when to
switch all remaining patients to the better of these two treatments.
He demonstrates that the model is more applicable and useful than two
pairwise trials, and likewise better than the model which jumps
directly from three treatments to one.
Chapter II discusses how the model is developed and introduces a
risk function when a total of N patients are to be treated with one of
three medical treatments. The risk function is constructed with but
one risk: the consequence of treating a patient with the best, second
best or the worst of the three treatments. A Bayesian approach is
described which is based on a decision-theoretic model proposed by
Anscombe (1963). In the approach, triplets of patients are assigned to
the three treatments sequentially in the testing stage. When to stop
allocating patients depends on the stopping risk. Let the optimal
stopping risk be the expected stopping risk associated with the
optimal stopping rule. If the stopping risk is smaller than the
optimal stopping risk, the testing stage is stopped; otherwise, the
testing stage continues. Deciding the best treatment after the testing
stage is terminated should be based on the information from the
treated patients. This concern leads to the consideration of two
independent Gaussian processes. The optimal stopping risk can then be
solved by a backward induction. Each stopping rule can be represented
by a continuation region and its complement, a stopping region. The
problem of finding an optimal stopping rule becomes that of finding
the solution of a free boundary problem. A simple 2-dimensional random
walk is introduced to solve for the boundary of the optimal
continuation region numerically.
Chapter III focuses on the solution of the optimal continuation
region for the 2-dimensional random walk. Several properties of the
optimal continuation region are described in this chapter. With these
results, the problem can be solved efficiently. Programming techniques
using the properties are discussed. A numerical method, interpolation,
is presented which can be employed to obtain smooth estimates of the
boundary of the continuation region while using less computer time.
Chapter IV discusses the asymptotically expected excess over the
boundary by a 2-dimensional random walk. The boundary is formed by two
parallel lines. The results here are useful in investigating the
relation between the solution of the optimal stopping rule for the
2-dimensional random walk and that of the continuous version of our
problem.
The remainder of this chapter is a literature review of previous
research on sequential clinical trials.
1.2 Literature Review
The literature review may be roughly divided into five parts. The
first section describes two opposing viewpoints among theoreticians of
sequential clinical trials. The second section discusses sequential
clinical trials models for treatments with dichotomous responses. The
third section is devoted to the description of a sequential clinical
trial model for two treatments with normal responses. A Bayesian
strategy for the testing of a normal mean is discussed in Section
1.2.4. Lastly, numerical methods for obtaining the optimal stopping
rule for the problem described in Section 1.2.4 are discussed in
Section 1.2.5. Sections 1.2.4 and 1.2.5 do not seem directly related
to our topic, but their results will be referred to in our research.
1.2.1 Sequential Clinical Trials
For the comparison of two treatments, Armitage (1960) considers
three hypotheses: that treatment A is preferable to treatment B, or
vice versa, or that A and B show no difference in their effects. His
approach is to control the error probabilities of reaching an
incorrect conclusion. This is accomplished by determining stopping
boundaries for a sequential trial. Colton (1963) asserts that in many
cases it is indeed difficult for the medical experimenter to select a
difference and to state with what probability he wants to detect this
difference.
On the other hand, Anscombe (1963), in his paper reviewing
Armitage's book, suggests having only two possible conclusions (not
the third), and uses a two-stage, decision-theoretic approach. The
suggested approach begins with the use of both treatments and finishes
with all remaining patients assigned to the most promising treatment.
The reason for adopting a two-decision formulation is that a better
basis is provided for determining the sequential stopping rule. His
suggested two-stage approach is motivated by the fact that in the
improvement of knowledge some patients will be ill-treated, but if
knowledge is not improved all patients may be ill-treated. He assumes
that there is a fixed number of patients, N, and the difference
between the effects of treatments A and B is θ (the effect of A minus
the effect of B). If n pairs of patients have been tested in the first
stage, let y be the sum of the n response differences (the response of
the patient treated by A minus the response of the patient treated by
B). The remaining N-2n patients will be given A or B according to
whether y is positive or negative. Let |θ| be the cost attributed to
each patient receiving the inferior treatment. Then, a total regret at
the end of the trial, which is based on the cost of treating patients
suffering from a serious illness by what appears to be an inferior
treatment, can be assessed by

    nE(|θ|) + (N-2n)E[max(0, -θ sgn(y))].
Anscombe also finds an intuitively reasonable sequential stopping
rule such that the first stage is stopped as soon as further
continuance is expected to lead to an increase in the total regret.
Further, Anscombe attempts to argue against the formal
methodology of hypothesis testing, which focuses attention on the
significance level and power function and ignores the cumulative
benefits to patients.
Colton (1963) uses Anscombe's approach in the continuous variable
case.
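As a concrete illustration of the regret criterion above, the following sketch estimates Anscombe's total regret by Monte Carlo, assuming a normal prior for the treatment difference. All numerical values (the prior and sampling variances, N, and the grid of n values) are illustrative choices, not taken from the papers discussed.

```python
import math
import random

def expected_regret(n, N, tau=1.0, sigma=1.0, reps=20000, seed=1):
    """Monte Carlo estimate of Anscombe's total regret
    n*E|theta| + (N-2n)*E[max(0, -theta*sgn(y))], assuming theta ~ N(0, tau^2)
    and y | theta ~ N(n*theta, n*sigma^2) (the sum of n response differences)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        theta = rng.gauss(0.0, tau)
        y = rng.gauss(n * theta, math.sqrt(n) * sigma)
        sgn = 1.0 if y >= 0 else -1.0
        total += n * abs(theta) + (N - 2 * n) * max(0.0, -theta * sgn)
    return total / reps

# A very small first stage selects the inferior treatment too often, while a
# very large one spends too many patients on testing; a moderate n does best.
for n in (1, 5, 20, 60):
    print(n, round(expected_regret(n, N=200), 2))
```

This is exactly the trade-off Anscombe's stopping rule navigates: continue the first stage only while further triplets (here, pairs) are expected to reduce the total regret.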
1.2.2 Treatments with Dichotomous Responses
For some trials, the treatments display dichotomous responses; a
treatment either works, or it does not. One sometimes speaks of
Bernoulli responses of "successes" and "failures" instead of working
or not. In such cases, the primary question becomes a matter of
identifying the treatment with the maximal success probability.
Hoel, Sobel and Weiss (1972) investigate a two-stage procedure
for choosing the better of two binomial populations. They show that a
two-stage procedure can take advantage of the difference between the
Vector-at-a-Time rule (Bechhofer, Kiefer and Sobel, 1968) and the
Play-the-Winner rule (Robbins, 1956; Zelen, 1969) for assigning
treatments. The first of these assigns two treatments in alternation.
In the second, a success on a treatment generates a further test with
that treatment while a failure generates a test on the other
treatment.
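The difference between the two allocation rules is easy to see in simulation. The sketch below (success probabilities are illustrative, not from the papers cited) implements Play-the-Winner for two Bernoulli treatments; strict alternation (Vector-at-a-Time) would give each treatment exactly half of the patients.

```python
import random

def play_the_winner(p, patients, rng):
    """Allocate patients to two Bernoulli treatments with success
    probabilities p[0] and p[1]: stay on a treatment after a success,
    switch after a failure.  Returns the patient count for each treatment."""
    counts = [0, 0]
    arm = 0
    for _ in range(patients):
        counts[arm] += 1
        if rng.random() >= p[arm]:   # failure: test the other treatment next
            arm = 1 - arm
    return counts

rng = random.Random(7)
counts = play_the_winner([0.8, 0.4], 10000, rng)
# Run lengths on treatment j are geometric with mean 1/(1 - p[j]), so the
# better treatment receives about (1/0.2)/(1/0.2 + 1/0.6) = 75% of patients.
print(counts)
```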
Bather & Simons (1985) and Simons (1986) are concerned with two
treatments which successfully treat the illness with unknown
probabilities p1 and p2, respectively. Both papers seek to find
suitable stopping rules for terminating the first stage under
Anscombe's clinical trials design. Bather & Simons (1985) find a
stopping rule which is minimax within the class of two-stage
procedures. Simons (1986) considers Bayes rules and obtains a stopping
rule which is best amongst admissible rules for symmetric priors. A
clinical trials model for deciding the best of three treatments with
dichotomous responses is discussed in Palmer (1988). He extends the
work of Simons (1986) to three treatments and develops the methodology
for a simple, well-performing, ethical, pragmatic and potentially
useful model.
1.2.3 Treatments with Normal Responses
Some work on comparing two treatments with normal responses has
been done under Anscombe's formulation. The Bayes rule for deciding
when to stop the first stage is discussed in Chernoff and Petkau
(1981) in the context of a normal prior. They use many of the
techniques which are described in Chernoff (1972). Let Z_i denote the
difference in response between the patient receiving treatment A and
the patient receiving treatment B in the i-th trial pair. Assume Z_1,
Z_2, ... are independent N(μ, σ²) variables, with σ² known. Further,
suppose the parameter μ has a normal prior distribution: N(μ_0, σ_0²).
The posterior distribution of μ given Z_1, Z_2, ..., Z_n is
N(Y*_n, s*_n), where

    Y*_n = (σ_0^{-2} μ_0 + σ^{-2} Σ_{i=1}^n Z_i) / (σ_0^{-2} + nσ^{-2})

and

    s*_n = (σ_0^{-2} + nσ^{-2})^{-1}.

Since the treatment for the remaining (N-2n) patients is indicated by
the sign of Y*_n, the posterior risk and the expected risk associated
with stopping after treating n pairs of patients can be written down
directly. Simple calculations convert this posterior risk into
d_1(Y*_n, s*_n), where

    d_1(y,s) = N s^{1/2} ψ(y s^{-1/2}) - σ²(σ_0^{-2} + (1/2)Nσ^{-2} - s^{-1}) |y|,

with

    ψ(u) = φ(u) + u{Φ(u) - 1/2},

and φ and Φ the standard normal density and cumulative distribution
function, respectively.
The distribution of Y*_n - Y*_m, where n ≥ m ≥ 0, is
N(0, s*_m - s*_n), and Y*_n - Y*_m is independent of Y*_m. The
distribution of Y*_m is N(0, σ_0² - s*_m). Denote by T the set of
integers between 0 and N/2. Then, the process (Y*_n, s*_n; n ∈ T) is a
Gaussian stochastic process with independent increments, starting from
(y*_0, s*_0).
Chernoff and Petkau (1981) solve the problem of selecting the
best sequential procedure for terminating the experimental phase by
finding the stopping time T which minimizes the expected risk, i.e.,
the expectation of d_1(Y*_T, s*_T). Since s*_n is decreasing in n, it
is equivalent to find a stopping time s*_T to minimize
E[d_1(Y*_T, s*_T)]. The possible stopping times are
{s*_0, s*_1, ..., s*_M}, where M = ⌊N/2⌋. Since
s*_0 ≥ s*_1 ≥ ... ≥ s*_M, i.e., s*_n goes backward in time, Chernoff
says this process is in the -s scale. The continuous version of this
process is one in which the stopping time may be taken to be any value
between s*_0 and s*_M. The process Y*_n is replaced by Y*(s) starting
at Y*(s*_0), where Y*(s) has the N(0, σ_0² - s) distribution for
s*_0 ≥ s ≥ s*_M, and Y*(s_q) - Y*(s_p) has the N(0, s_p - s_q)
distribution for s*_0 ≥ s_p ≥ s_q ≥ s*_M. Thus, Y*(s) is a Wiener
process in the -s scale for s*_0 ≥ s ≥ s*_M, starting at Y*(s*_0),
with

    E(dY*(s)) = 0  and  Var(dY*(s)) = -ds.
Here, s is decreasing in time, so -ds may be thought of as positive.
The results of Chernoff and Petkau on the Bayes risk show that
Anscombe's (1963) rule is remarkably close to optimal, but the
procedure obtained by Begg and Mehta (1979) is a relatively poor
competitor. Begg and Mehta propose stopping the first stage as soon as
no fixed size continuation of that stage does as well as stopping.
Colton (1963) proposes to use horizontal stopping boundaries based on
a truncated sequential probability ratio test. Lai, Robbins and
Siegmund (1983) study the asymptotic properties of these rules.
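The recursion for (Y*_n, s*_n) is a standard conjugate-normal update and takes only a few lines to implement. The sketch below uses made-up numbers for the differences Z_i and for the prior; only the formulas come from the discussion above.

```python
def posterior(z, mu0, var0, var):
    """Posterior mean and variance of mu from pairwise differences z_1..z_n,
    assuming Z_i ~ N(mu, var) and the prior mu ~ N(mu0, var0):
        Y*_n = (mu0/var0 + sum(z)/var) / (1/var0 + n/var),
        s*_n = 1 / (1/var0 + n/var)."""
    n = len(z)
    precision = 1.0 / var0 + n / var
    y = (mu0 / var0 + sum(z) / var) / precision
    return y, 1.0 / precision

y3, s3 = posterior([0.4, -0.1, 0.7], mu0=0.0, var0=1.0, var=1.0)
# Here s*_3 = 1/(1 + 3) = 0.25 and Y*_3 = (0.4 - 0.1 + 0.7)/4 = 0.25.
print(y3, s3)
```

As required for the stopping analysis, s*_n is strictly decreasing in n, so the pair (Y*_n, s*_n) moves along the -s scale as pairs accrue.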
1.2.4 Testing for the Sign of a Normal Mean
The Bayes sequential test for the sign of a normal mean is
discussed in Chernoff (1972). Assume X_1, X_2, ... are independent,
identically normally distributed with unknown mean μ and known
variance σ². If the parameter μ has the N(μ_0, σ_0²) prior
distribution, denote by Y_n and s_n the posterior mean and variance of
μ, respectively. Then, the process (Y_n, s_n) is a Gaussian process
like the process (Y*_n, s*_n) in the last section.
Chernoff considers that the cost of a wrong decision is

    r(μ) = k|μ|

and the cost of observing n X's is cn, with k > 0 and c > 0. Then, the
posterior risk (including the risk of accepting the hypothesis H_1 or
H_2 and the cost of sampling) associated with stopping at the n-th
observation in the sequential analysis problem is d(Y_n, s_n), where

    d(y,s) = k s^{1/2} ψ(y s^{-1/2}) + cσ² s^{-1} - cσ² σ_0^{-2}

and

    ψ(u) = φ(u) - u[1 - Φ(u)]  for u ≥ 0,
    ψ(u) = φ(u) + uΦ(u)        for u < 0,

with φ the standard normal density and Φ the standard normal c.d.f. He
attempts to find a stopping rule (a random variable N) to minimize the
expected risk E[d(Y_N, s_N)].
Discrete time stopping problems with infinitely many possible
stopping times lead to substantial difficulties, which have been
discussed in Chow, Robbins and Siegmund (1971). Chernoff asserts that
the resulting problem can be attacked by the backward induction
method if one truncates the problem so that stopping must happen by
the n_1-st observation for some n_1. The nontruncated sequential
problem can be treated as a limit of truncated problems.
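The function ψ in the stopping risk above has a direct probabilistic meaning: if the posterior of μ is N(y, s) and one accepts the sign of y, then the expected loss k E[max(0, -μ sgn(y))] equals k s^{1/2} ψ(y s^{-1/2}). The sketch below checks this identity by Monte Carlo; the particular values of y and s are arbitrary.

```python
import math
import random

def psi(u):
    """Chernoff's psi: normalized expected loss of deciding the sign now."""
    phi = math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(u / math.sqrt(2)))
    return phi - u * (1 - Phi) if u >= 0 else phi + u * Phi

def mc_decision_loss(y, s, reps=200000, seed=3):
    """Monte Carlo estimate of E[max(0, -mu*sgn(y))] for mu ~ N(y, s)."""
    rng = random.Random(seed)
    sgn = 1.0 if y >= 0 else -1.0
    total = 0.0
    for _ in range(reps):
        mu = rng.gauss(y, math.sqrt(s))
        total += max(0.0, -mu * sgn)
    return total / reps

y, s = 0.3, 0.5
exact = math.sqrt(s) * psi(y / math.sqrt(s))
print(exact, mc_decision_loss(y, s))  # the two values agree closely
```

Note also that the two-branch definition makes ψ symmetric, ψ(u) = ψ(-u), which is why the continuation region of this problem is symmetric about y = 0.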
The continuous version of this problem is related to the Wiener
process. Chernoff (1972) points out that the solution of the
continuous time stopping problem involving a Wiener process is related
to the heat equation defined below. There is a continuation region,
which is an open set, and a stopping region, which is the complement
of this open set, corresponding to a particular stopping rule for a
continuous time stopping problem. So, the solution of the problem can
be expressed in terms of a stopping set and a continuation set in the
(y,s) plane if the process Y(s) is considered. Suppose Y(s) is a
Wiener process in the -s scale. Let d(y,s) be the stopping cost. If C
and S are an open continuation set and the stopping set corresponding
to a particular stopping rule, and b(y,s) is the expected cost
associated with this stopping rule, the following equations are true:

    (1/2) b_yy(y,s) = b_s(y,s),   (y,s) ∈ C   (heat equation)
    b(y,s) = d(y,s),              (y,s) ∈ S.

The heat equation here can be justified by expanding E[b(y+W√δ, s)] in
a Taylor series:

    b(y, s+δ) = E[b(Y(s), s) | Y(s+δ) = y]
              = E[b(y+W√δ, s)] + o(δ)   for (y,s) ∈ C.

Here, W is a generic N(0,1) random variable. More detail is provided
by Chernoff (1972).
Considering the class of all stopping rules, let

    ρ(y_0, s_0) = inf r(y_0, s_0),

the infimum being over all stopping rules, where r(y_0, s_0) is the
expected cost associated with a particular stopping rule. The
procedure "stop as soon as ρ(y,s) = d(y,s)" is an optimal procedure
(under regularity conditions sufficient to imply the existence of an
optimal procedure). Denote by C_0 the optimal continuation region and
by S_0 the optimal stopping region. Then C_0 is the set
{(y,s) : ρ(y,s) < d(y,s)} and S_0 is the set
{(y,s) : ρ(y,s) = d(y,s)}. Chernoff obtains:

    (1/2) ρ_yy(y,s) = ρ_s(y,s),   (y,s) ∈ C_0,
    ρ(y,s) = d(y,s),              (y,s) ∈ S_0,
    ρ_y(y,s) = d_y(y,s)           on the boundary of C_0.

The optimal stopping rule is characterized by the optimal
continuation region and its complement, the stopping region. In the
preface of "Free Boundary Problems: Theory and Applications," Hoffmann
(1990) says: "Free boundaries, that is, singular surfaces with a
priori unknown location in space that separate spatial regions with
different physical characteristics, occur as a natural feature in the
evolution of a great variety of very important scientific and
technological processes; ...". So, the problem of finding the optimal
stopping rule for the continuous time problem becomes a free boundary
problem.
Bather (1962) exploits very effective techniques to decide
whether a specified point (y_0, s_0) is a continuation point for the
optimal procedure. Chernoff (1972) also provides methods to yield
inner and outer bounds of C_0. All of these techniques require the
ability to generate convenient solutions of the heat equation.
1.2.5 Numerical Methods
Denote by Ỹ the upper boundary of the optimal continuation region
for the continuous time problem described in the last section. Let Ỹ_δ
be the upper boundary for a discrete version of the problem that
results when stopping is restricted to a lattice with adjacent points
a distance δ apart. According to Chernoff (1965), the relation between
Ỹ and Ỹ_δ is expressed by

    Ỹ(s) = Ỹ_δ(s) + z̃√δ + o(√δ),   z̃ = 0.582.

The key step in the proof of this is to introduce an auxiliary problem
in which a Wiener process is started at a point (z,t), t < 0; no
payoff is made if stopping occurs before time 0, and at t = 0 stopping
is enforced and a payoff z² I(z<0) is received. Stopping is permitted
only at times t = 0, -1, ..., and each observation costs a dollar. For
this problem the optimal stopping boundary can be shown to be
increasing and contained in [-1, 0], and therefore it has a limit as
t → -∞, which turns out to be z̃.
Let S_n = Σ_{i=1}^n X_i be a random walk with ES_1 = 0, ES_1² = 1,
and E|S_1|⁴ < ∞. Let T_a = inf{n : S_n > a} and T_+ = T_0. For the
normal random walk, Siegmund (1985) remarks that z̃ = E(R), where R is
a random variable whose distribution is the same as that of the
asymptotic excess over the boundary as a → ∞. Following the work of
Siegmund, Hogan (1986) uses another method to obtain the amount z̃.
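The constant z̃ can be checked numerically. By renewal theory, the mean of the limiting excess distribution is E(H²)/(2E(H)), where H is the first strictly positive value (first ladder height) of the walk; for the normal random walk this is about 0.583. The sketch below estimates it by simulation; the replication count and the step cap are arbitrary choices.

```python
import random

def ladder_height_constant(reps=4000, cap=50000, seed=11):
    """Estimate E[H^2] / (2 E[H]) for H = S_{T+}, the first strictly
    positive value of a N(0,1) random walk.  Walks still nonpositive after
    `cap` steps are skipped (a slight bias, tolerable for a rough check)."""
    rng = random.Random(seed)
    h1 = h2 = 0.0
    used = 0
    for _ in range(reps):
        s = 0.0
        for _ in range(cap):
            s += rng.gauss(0.0, 1.0)
            if s > 0:
                h1 += s
                h2 += s * s
                used += 1
                break
    return h2 / (2.0 * h1), used

z, used = ladder_height_constant()
print(round(z, 3), used)  # the estimate should fall near 0.583
```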
Chernoff & Petkau (1986) attack numerically the problem described
in the last section, which is about finding the boundary of the
optimal continuation region for the testing of a normal mean. The
numerical methods are investigated in great detail in Chernoff &
Petkau (1984). The basic idea is: the process Y(s), which is a Wiener
process in the -s scale, is approximated by a discrete time process;
backward induction is used to solve the optimal stopping problem for
this new process. Consider the discrete time problem where one is
permitted to stop only on the discrete set of possible values of s,
{s_0 + iΔ, i = 0, 1, 2, ...}, to make the backward induction easier to
carry out. While the value of s decreases by Δ between these
successive possible stopping times, the process Y(s) changes by a
normal deviate with mean 0 and variance Δ. The optimal risk d̂ is
defined recursively by

    d̂(y,s) = min(d(y,s), E[d̂(Y(s-Δ), s-Δ) | Y(s) = y])
            = min(d(y,s), E[d̂(y + z√Δ, s-Δ)])   for s > s_0,

where z represents a standard normal deviate. If stopping is enforced
when s = s_0, then

    d̂(y, s_0) = d(y, s_0).

The optimal risk can be solved by using the backward induction
algorithm. The boundary of the optimal continuation region for this
discrete time problem is determined by the "break-even" points Y_Δ(s)
at which

    d(y,s) = E[d̂(y + z√Δ, s-Δ)].

The numerical integration needed to evaluate the risk at each point
(y, s_0 + iΔ) could be quite time consuming. Consequently, the
discrete time process with normal increments is replaced by the simple
random walk process where

    Y(s-Δ) = Y(s) ± √Δ,   each with probability 1/2.
The first two moments of the Bernoulli random variable are chosen
to match those of the increment it is replacing; the higher even
moments do not match. Chernoff & Petkau (1986) have investigated the
following relation of this discrete time simple random walk problem to
the original continuous time problem:

    Y(s) = Y_Δ(s) + 0.5√Δ + o(√Δ).   (1.2.5.1)

It is not easy to calculate Y_Δ(s) directly, so another approximation
is proposed by Chernoff and Petkau (1984). Derive Y_0(s) and Y_1(s),
the continuation points on the grid which are closest and second
closest to the stopping region at the stopping time s. Let

    D_0 = d(Y_0(s), s) - d̂(Y_0(s), s),
    D_1 = d(Y_1(s), s) - d̂(Y_1(s), s).

Then, the following continuation correction can be applied to get the
optimal stopping boundary, Y(s), for the original problem:

    Y(s) = Y_0(s) + [D_1 / (4D_0 - 2D_1)] √Δ.   (1.2.5.2)

Considering the boundary of the optimal continuation region for
nonnegative y is enough, since d(y,s) is symmetric. Also, given that
y_0 and y_0 + √Δ are stopping points at s = s_w - Δ, it can be shown
that y_0 + √Δ will be a stopping point at s = s_w (at least for small
values of Δ). Based on the above results, Chernoff & Petkau (1984)
write a Fortran program which solves the optimal stopping time problem
very efficiently. Two approximations of Y(s) can be calculated
directly from the program. The first simply uses the continuity
correction to Y_0(s) shown in (1.2.5.1), i.e.,

    Y(s) = Y_0(s) + 0.5√Δ.

The second uses the continuity correction shown in (1.2.5.2). This
approach is not well adapted for very precise results but is
surprisingly effective for rough approximation.
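The whole scheme fits in a short program. The sketch below applies the random-walk backward induction to the sign-testing risk d(y,s) of Section 1.2.4 with illustrative constants k = c = σ = σ_0 = 1; the grid choices (s_0, s_max, Δ and the y-range) are arbitrary, and the code reports the largest break-even y at each level rather than applying the continuity corrections.

```python
import math

def psi(u):
    """Expected-loss function for the sign test (Section 1.2.4)."""
    phi = math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(u / math.sqrt(2)))
    return phi - u * (1 - Phi) if u >= 0 else phi + u * Phi

def d(y, s):
    """Stopping risk with k = c = sigma = sigma_0 = 1 (illustrative)."""
    return math.sqrt(s) * psi(y / math.sqrt(s)) + 1.0 / s - 1.0

def boundary_by_backward_induction(s0=0.02, smax=0.5, delta=0.001, ymax=3.0):
    """Backward induction for the walk Y(s - delta) = Y(s) +/- sqrt(delta).
    Returns (s, largest continuation y) for each grid level above s0."""
    step = math.sqrt(delta)
    half = int(ymax / step)
    ys = [i * step for i in range(-half, half + 1)]
    risk = [d(y, s0) for y in ys]          # stopping is enforced at s = s0
    bound, s = [], s0
    while s < smax - 1e-12:
        s += delta
        new, best = [], 0.0
        for i, y in enumerate(ys):
            stop = d(y, s)
            if 0 < i < len(ys) - 1:
                cont = 0.5 * (risk[i - 1] + risk[i + 1])  # E over +/- sqrt(delta)
                if cont < stop - 1e-12 and y > best:
                    best = y                              # continuation point
                new.append(min(stop, cont))
            else:
                new.append(stop)                          # stop at the grid edges
        risk = new
        bound.append((s, best))
    return bound

b = boundary_by_backward_induction()
print(b[0], b[-1])  # no continuation strip at first; a positive strip later
```

Chapter III applies the same idea with a two-dimensional walk, which is what makes the bookkeeping (and the interpolation near the boundary) so much more demanding.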
CHAPTER II
ESTABLISHING THE MODEL
2.1 Introduction
This chapter presents a model to test two new treatments against
a standard treatment with the two-stage approach described by Anscombe
(1963). The treatments considered have continuous responses which are
normally distributed. The number of patients available for the trial
is fixed and denoted by N. A stopping rule is discussed which
indicates when to discontinue allocating triplets of patients and
begin giving the apparently best treatment to all remaining patients.
The decision rule for determining the best treatment when the testing
stage is stopped is also discussed. The following is an example of the
problem that we are interested in:
Suppose Bactrim, Pentamidine and Pentamidine with Steroids are
the treatments for Pneumocystis carinii pneumonia (PCP) and 297
patients are suffering from PCP. We would like to know which treatment
is best for treating the 297 patients. PaO2 denotes the partial
pressure of oxygen in the blood. A patient with a higher PaO2 is
recovering better from PCP. So, PaO2 can serve as a quantitative
measure of response to indicate the effect of a treatment. By
Anscombe's approach, triplets of patients are assigned sequentially to
the three treatments until the testing stage is terminated, and then
the remaining patients are assigned to the apparently best treatment.
When to stop the testing stage and how to decide the best treatment by
the PaO2's of treated patients will be discussed in Section 3.4 of the
next chapter.
Section 2.2 relates the problem of finding the stopping rule to
two independent Gaussian processes. Section 2.3 defines the stopping
risk if the first stage is stopped after n triplets of patients have
been allocated.
In Section 2.4, we discuss a continuous time version of the
problem described in Sections 2.2 and 2.3. The continuous time problem
becomes a free boundary problem. Section 2.5 is devoted to the
transformation of the problem of finding an optimal stopping rule to a
parameter-free problem.
Section 2.6 describes an approach for solving for the boundary of
the optimal continuation region, which characterizes the optimal
stopping rule, by means of a 2-dimensional random walk.
2.2 Two Independent Gaussian Processes
We shall assume that the treatment responses are independently
normally distributed. Let X_j1, X_j2, ... denote the responses to the
j-th treatment, where j = 0 refers to the standard treatment, and
j = 1 and 2, respectively, refer to new treatments A and B. The
assumption is that the X_ji's (j = 0, 1, 2; i = 1, 2, 3, ...) are
independent N(θ_j, σ²) variables, where σ² is known. For convenience
in considering the time variable later, the variances of the responses
of the treatments are set to be equal. Further, suppose that the θ_j's
are independent and have prior densities N(μ_j, σ*²). If n triplets of
patients have been assigned to the three treatments, the posterior
distribution is given by the following lemma:
Lemma 2.2.1. The posterior distribution of θ = (θ_0, θ_1, θ_2)' given X_ji, j = 0, 1, 2, i = 1, 2, ..., n, is normal with mean vector whose j-th component is

(nσ^{-2} X̄_nj + σ_*^{-2} μ_j)/(nσ^{-2} + σ_*^{-2}),  j = 0, 1, 2,

and covariance matrix (nσ^{-2} + σ_*^{-2})^{-1} I_3, where X̄_nj = Σ_{i=1}^{n} X_ji / n and I_3 is the identity matrix of order 3.

Proof: Let X̄_nj = Σ_{i=1}^{n} X_ji / n, j = 0, 1, 2. Then the X̄_nj's are independently normally distributed with mean θ_j and variance σ²/n. Consider the representation

X̄_n0 = μ_0 + Z_00 σ_* + Z_01 σ/√n,
X̄_n1 = μ_1 + Z_10 σ_* + Z_11 σ/√n,
X̄_n2 = μ_2 + Z_20 σ_* + Z_21 σ/√n,

where the Z_jk's are independent standard normal random variables. Then the joint distribution of (θ_0, θ_1, θ_2, X̄_n0, X̄_n1, X̄_n2) is easily computed to be normal with mean (μ_0, μ_1, μ_2, μ_0, μ_1, μ_2)'. By Theorem 2.5.1 of Anderson (1958), the posterior distribution of θ given the X̄_nj's is the normal distribution displayed above. Since (X̄_n0, X̄_n1, X̄_n2) is sufficient for (θ_0, θ_1, θ_2) (Anderson (1958), p. 56), the posterior distribution of θ given X_ji, j = 0, 1, 2, i = 1, 2, ..., n, is equivalent to the posterior distribution of θ given X̄_nj, j = 0, 1, 2. This proves the Lemma. □
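The posterior update in Lemma 2.2.1 acts arm by arm as a conjugate normal update. A minimal sketch in C (the language of the programs in Chapter III); the function and type names are ours, not the dissertation's:

```c
#include <assert.h>
#include <math.h>

typedef struct { double mean; double var; } posterior_t;

/* Conjugate normal update for one treatment arm: prior N(mu, sstar2),
   n responses N(theta, s2) with known s2 and sample mean xbar. */
posterior_t posterior(double mu, double sstar2, double s2, int n, double xbar) {
    double prec = n / s2 + 1.0 / sstar2;   /* posterior precision n*sigma^-2 + sigma_*^-2 */
    posterior_t p;
    p.var  = 1.0 / prec;                   /* s_n = (n*sigma^-2 + sigma_*^-2)^-1 */
    p.mean = p.var * (n * xbar / s2 + mu / sstar2);
    return p;
}
```

With no data (n = 0) the routine returns the prior; as n grows, the posterior mean moves toward the sample mean and the variance toward zero.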
If n triplets of patients have been observed, the best appearing treatment can be decided by comparing the posterior means of θ. The posterior means of the differences of the response parameters, θ_1 − θ_0 and θ_2 − θ_0, seem to be the obvious contrasts for selecting the best treatment. It will prove more convenient, however, to study the orthogonal contrasts (θ_2 − θ_1)/√2 and (θ_1 + θ_2 − 2θ_0)/√6. Specifically, let

W_n1 = E[(θ_2 − θ_1)/√2 | X_ji, j = 0, 1, 2, i = 1, 2, ..., n],
W_n2 = E[(θ_1 + θ_2 − 2θ_0)/√6 | X_ji, j = 0, 1, 2, i = 1, 2, ..., n],     (2.2.1)

and

s_n = (nσ^{-2} + σ_*^{-2})^{-1}.     (2.2.2)
Following Chernoff (1972), the next step is to consider the distribution of (W_m', (W_n − W_m)')', where W_n = (W_n1, W_n2)', for n ≥ m ≥ 0; it is obtained in Lemma 2.2.2.

Lemma 2.2.2. For n ≥ m ≥ 0, the distribution of (W_m', (W_n − W_m)')' is

N( ((μ_2 − μ_1)/√2, (μ_1 + μ_2 − 2μ_0)/√6, 0, 0)' ,
   diag(σ_*² − s_m, σ_*² − s_m, s_m − s_n, s_m − s_n) ).
Proof: By an argument similar to the argument about the distribution of (θ_0, θ_1, θ_2, X̄_n0, X̄_n1, X̄_n2) in Lemma 2.2.1, the distribution of (θ_0, X_01, ..., X_0n, θ_1, X_11, ..., X_1n, θ_2, X_21, ..., X_2n)' is normal. Since W_n1, W_n2, W_m1 and W_m2 are linear functions of these variables, the vector (W_n1, W_n2, W_m1, W_m2)' has a normal distribution, and so does

(W_m', (W_n − W_m)')' = A (W_n1, W_n2, W_m1, W_m2)',  where  A = [ 0  0  1  0
                                                                   0  0  0  1
                                                                   1  0 −1  0
                                                                   0  1  0 −1 ].

It remains to compute the means and covariances. Unconditionally, E(W_mj) equals the prior mean of the corresponding contrast, namely (μ_2 − μ_1)/√2 for j = 1 and (μ_1 + μ_2 − 2μ_0)/√6 for j = 2, and E(W_n − W_m) = 0. Because the posterior variance of each contrast at time m is s_m, Var(W_mj) = σ_*² − s_m; because W_nj is the conditional expectation of the same contrast given the larger σ-field, Cov(W_mj, W_nj) = Var(W_mj), so that Var(W_nj − W_mj) = (σ_*² − s_n) − (σ_*² − s_m) = s_m − s_n and Cov(W_mj, W_nj − W_mj) = 0. Finally, the two coordinates are uncorrelated because the contrasts (θ_2 − θ_1)/√2 and (θ_1 + θ_2 − 2θ_0)/√6 are orthogonal. □
Lemmas 2.2.1 and 2.2.2 indicate that as evidence accumulates, the posterior means of the chosen contrasts (referred to above) of θ = (θ_0, θ_1, θ_2)' behave like two independent Gaussian stochastic processes, each with independent increments. The vectors W_0, W_1, ... form a Markov process starting from W_0 = ((μ_2 − μ_1)/√2, (μ_1 + μ_2 − 2μ_0)/√6)'. Note here that the obvious contrasts give posterior means that behave like dependent processes.
The following lemma is used in the proof of Proposition 2.2.1.
Lemma 2.2.3. Let X and Y be two independent random variables. If G_1(X) and G_2(Y) are sufficient statistics for ψ_1 and ψ_2, respectively, then (G_1(X), G_2(Y)) is sufficient for (ψ_1, ψ_2).

Proof: By Theorem 2.2.1 of Bickel and Doksum (1977), the distribution of X is g_1(G_1(x), ψ_1)h_1(x) for some functions g_1 and h_1, and the distribution of Y is g_2(G_2(y), ψ_2)h_2(y) for some functions g_2 and h_2. Since X and Y are independent, the joint distribution of X and Y is

[g_1(G_1(x), ψ_1)h_1(x)][g_2(G_2(y), ψ_2)h_2(y)] = [g_1(G_1(x), ψ_1)g_2(G_2(y), ψ_2)][h_1(x)h_2(y)],

and the factorization theorem gives the result. □
The next result shows the sufficiency of the statistic ((X̄_n2 − X̄_n1)/√2, (X̄_n1 + X̄_n2 − 2X̄_n0)/√6).

Proposition 2.2.1. The statistic ((X̄_n2 − X̄_n1)/√2, (X̄_n1 + X̄_n2 − 2X̄_n0)/√6) is sufficient for ((θ_2 − θ_1)/√2, (θ_1 + θ_2 − 2θ_0)/√6), where X̄_n0, X̄_n1 and X̄_n2 are as defined above.

Proof: It is known that (X̄_n0, X̄_n1, X̄_n2)' is normally distributed with mean θ and covariance matrix (σ²/n)I_3. Let B_1 = (0, −1/√2, 1/√2)' and B_2 = (−2/√6, 1/√6, 1/√6)'. Then (X̄_n2 − X̄_n1)/√2 = B_1'(X̄_n0, X̄_n1, X̄_n2)' is normally distributed with mean (θ_2 − θ_1)/√2 and variance σ²/n, and (X̄_n1 + X̄_n2 − 2X̄_n0)/√6 = B_2'(X̄_n0, X̄_n1, X̄_n2)' is normally distributed with mean (θ_1 + θ_2 − 2θ_0)/√6 and variance σ²/n. Hence (X̄_n2 − X̄_n1)/√2 is a sufficient statistic for (θ_2 − θ_1)/√2, and (X̄_n1 + X̄_n2 − 2X̄_n0)/√6 is a sufficient statistic for (θ_1 + θ_2 − 2θ_0)/√6. Since B_1 and B_2 are orthogonal, by Theorem 3.3.1 of Anderson (1958), B_1'(X̄_n0, X̄_n1, X̄_n2)' and B_2'(X̄_n0, X̄_n1, X̄_n2)' are independent. The result then follows from Lemma 2.2.3. □
2.3 The Stopping Risk
Suppose the number of patients available is N, where N is a multiple of 3. Let

T_0 = {(w_1, w_2) : w_1 + √3 w_2 ≤ 0 and w_1 − √3 w_2 ≥ 0},
T_1 = {(w_1, w_2) : w_1 ≤ 0 and w_1 − √3 w_2 < 0},     (2.3.1)
T_2 = {(w_1, w_2) : w_1 ≥ 0 and w_1 + √3 w_2 > 0}.

Figure 2.3.1 gives a picture of the regions T_0, T_1 and T_2 in the (w_1, w_2) plane. That (W_n1, W_n2) defined in (2.2.1) is in region T_0 is equivalent to the assertion that the posterior mean of θ_0 is larger than the posterior means of θ_1 and θ_2. Thus, after n triplets of patients are allocated, we will select the standard treatment for the remaining (N − 3n) patients if the observed (W_n1, W_n2) is in T_0. Similarly, we will select treatment A if the observed (W_n1, W_n2) is in T_1, select treatment B if the observed (W_n1, W_n2) is in T_2, select either treatment A or B if the observed (W_n1, W_n2) is on the boundary of T_1 and T_2, select either the standard treatment or treatment A if the observed (W_n1, W_n2) is on the boundary of T_0 and T_1, select either the standard treatment or treatment B if the observed (W_n1, W_n2) is on the boundary of T_0 and T_2, and select any one of the standard treatment, treatment A and treatment B if (W_n1, W_n2) = (0, 0).
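The region tests above translate directly into a small routine. This is an illustrative C sketch with names of our own, breaking boundary ties in a fixed way (which the text permits):

```c
#include <assert.h>
#include <math.h>

/* Decision rule of Section 2.3: returns 0 (standard), 1 (A), 2 (B),
   given the posterior means (w1, w2) of the two chosen contrasts. */
int select_treatment(double w1, double w2) {
    double r3 = sqrt(3.0);
    if (w1 + r3 * w2 <= 0.0 && w1 - r3 * w2 >= 0.0) return 0;  /* T0 */
    if (w1 <= 0.0) return 1;                                   /* T1 */
    return 2;                                                  /* T2 */
}
```

For example, a negative w2 with |w1| small points at the standard treatment, while a positive w1 (treatment B ahead of A) with w1 + √3 w2 > 0 points at B.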
For the ethical considerations mentioned in the last chapter, the stopping risk associated with the above decision rule should be considered over the entire patient horizon. Here, "stopping" means switching from all three treatments to the best appearing treatment for the remaining patients in the trial. Of course, the ideal procedure assigns all N patients to the best treatment. The posterior Bayes risk at "time" n is the expected total difference in response between the ideal procedure and the procedure that stops the testing stage after n triplets of patients have been allocated and gives the remaining N − 3n patients the treatment selected by the rule described above. Let ℱ_n be the σ-field generated by {X_ji : j = 0, 1, 2, i = 1, 2, ..., n}, and E_n(·) = E(· | ℱ_n). The risk associated with stopping patient allocation at "time" n is:

E_n[N max(θ_0, θ_1, θ_2) − n(θ_0 + θ_1 + θ_2) − (N − 3n)θ_a],     (2.3.2)

where θ_a = θ_j if treatment j is selected, j = 0, 1, 2. This can be rewritten as

E_n[N max(θ_0, θ_1, θ_2) − (N/3)(θ_0 + θ_1 + θ_2)] − (N − 3n)E_n[θ_a − (θ_0 + θ_1 + θ_2)/3].     (2.3.3)

The objective is to find a stopping rule (a random variable M that can only assume the values 0, 1, 2, ..., N/3) to minimize

E{E_M[N max(θ_0, θ_1, θ_2) − M(θ_0 + θ_1 + θ_2) − (N − 3M)θ_a]}.     (2.3.4)
Since ℱ_n is nondecreasing in n, E_n[N max(θ_0, θ_1, θ_2) − (N/3)(θ_0 + θ_1 + θ_2)] is a martingale in n. By combining this and the fact that stopping is enforced when n = N/3, it can be proven that

E{E_M[N max(θ_0, θ_1, θ_2) − (N/3)(θ_0 + θ_1 + θ_2)]} = E[N max(θ_0, θ_1, θ_2) − (N/3)(θ_0 + θ_1 + θ_2)],

where M is a random variable defining the stopping rule and M ∈ {0, 1, 2, ..., N/3}. Hence, the first term of (2.3.3) has no bearing on our optimal stopping problem.
The second term of (2.3.3) should be rewritten in terms of W_n1 and W_n2, since the posterior means of the two chosen contrasts, (W_n1, W_n2), are considered after n triplets of patients have been allocated. Let I_{T_j} be the indicator of the set T_j, j = 0, 1, 2, where the T_j's are defined in (2.3.1). Then,

−(N − 3n)E_n[θ_a − (θ_0 + θ_1 + θ_2)/3]
= −((N − 3n)/3)E_n[(2θ_0 − θ_1 − θ_2)I_{T_0} + (2θ_1 − θ_0 − θ_2)I_{T_1} + (2θ_2 − θ_0 − θ_1)I_{T_2}].     (2.3.5)

By (2.2.1), W_n1 and W_n2 are ℱ_n-measurable. It follows that I_{T_0}, I_{T_1} and I_{T_2}, evaluated at (W_n1, W_n2), are ℱ_n-measurable, too. Thus, the expectation in (2.3.5) can be rewritten as f(W_n1, W_n2), where

f(W_n1, W_n2) = −√6 W_n2 I_{T_0}(W_n1, W_n2) + ((√6 W_n2 − 3√2 W_n1)/2)I_{T_1}(W_n1, W_n2)
+ ((√6 W_n2 + 3√2 W_n1)/2)I_{T_2}(W_n1, W_n2),     (2.3.6)

so that the second term of (2.3.3) equals −((N − 3n)/3)f(W_n1, W_n2).
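The loss (2.3.6) is piecewise linear over the three regions. A direct C transcription (an illustrative sketch, not the dissertation's program) is:

```c
#include <assert.h>
#include <math.h>

/* f of (2.3.6): expected selection loss contrast as a function of the
   posterior contrast means (w1, w2); nonnegative everywhere. */
double f_stop(double w1, double w2) {
    double r3 = sqrt(3.0), r2 = sqrt(2.0), r6 = sqrt(6.0);
    if (w1 + r3 * w2 <= 0.0 && w1 - r3 * w2 >= 0.0)   /* T0: standard best */
        return -r6 * w2;
    if (w1 <= 0.0)                                    /* T1: treatment A best */
        return (r6 * w2 - 3.0 * r2 * w1) / 2.0;
    return (r6 * w2 + 3.0 * r2 * w1) / 2.0;           /* T2: treatment B best */
}
```

The three linear pieces agree on the region boundaries, so f is continuous.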
The next theorem shows that the decision rule discussed at the beginning of this section for determining the best appearing treatment at time "n" is optimal.

Theorem 2.3.1. Let T_0, T_1, and T_2 be as defined in (2.3.1). The following decision rule is optimal for deciding the treatment for the patients in the post testing stage if the testing stage is terminated after n triplets of patients have been allocated:
(1) Select the standard treatment if the observed (W_n1, W_n2) is in T_0.
(2) Select new treatment A if the observed (W_n1, W_n2) is in T_1.
(3) Select new treatment B if the observed (W_n1, W_n2) is in T_2.

Proof: By (2.3.3) and (2.3.5), the stopping risk at time "n" is:

−(N − 3n)E_n(θ_a − (θ_0 + θ_1 + θ_2)/3)
= −(N − 3n)E_n((2θ_0 − θ_1 − θ_2)/3) if the standard treatment is selected,
  −(N − 3n)E_n((2θ_1 − θ_0 − θ_2)/3) if treatment A is selected,
  −(N − 3n)E_n((2θ_2 − θ_0 − θ_1)/3) if treatment B is selected.

If the observed (W_n1, W_n2) is in T_0, we know that E_n(θ_0) ≥ E_n(θ_1) and E_n(θ_0) ≥ E_n(θ_2), which imply

−E_n((2θ_0 − θ_1 − θ_2)/3) ≤ −E_n((2θ_1 − θ_0 − θ_2)/3) and
−E_n((2θ_0 − θ_1 − θ_2)/3) ≤ −E_n((2θ_2 − θ_0 − θ_1)/3).

Hence, it is optimal to select the standard treatment if the testing stage is stopped at time "n" and the observed (W_n1, W_n2) ∈ T_0. Similarly, we can prove that it is optimal to select treatment A (B) if the observed (W_n1, W_n2) is in T_1 (T_2). □
From (2.2.2), it can be seen that s_n is a strictly decreasing function of n. Thus, we consider the process (W_n1, W_n2, s_n) instead of the process (W_n1, W_n2, n). By Lemma 2.2.2, the decrement of the time variable s_n is the variance of the increments of the spatial variables W_n1 and W_n2. To introduce s_n into the stopping risk, it can be proven that

−(N − 3n)/3 = σ²(s_n^{-1} − s_t^{-1}),

where s_t = (σ_*^{-2} + (N/3)σ^{-2})^{-1}. Hence, the second term of (2.3.3) can be rewritten as r_1(W_n1, W_n2, s_n):

r_1(W_n1, W_n2, s_n) = σ²(s_n^{-1} − s_t^{-1})f(W_n1, W_n2).

The problem of optimal stopping can be recast in terms of this risk. It is obvious that stopping is enforced when there is no patient available; that is, stopping is enforced when the time variable s is equal to s_t. Now, the objective is to find a stopping rule (or a random variable S that can only assume the values s_0, s_1, ..., s_t) to minimize

E[r_1(W_S1, W_S2, S)].
Figure 2.3.1 : Regions T_0, T_1, and T_2
T_0: Standard treatment is the best appearing treatment
T_1: Treatment A is the best appearing treatment
T_2: Treatment B is the best appearing treatment
2.4 Continuous Time Version
Consider a continuous time version of our problem, in which stopping may occur at any point in the interval [s_t, s_0]. The clinical trial model may be regarded as a special case of the continuous time problem with the restriction that stopping may take place only in a limited subset of the (w_1, w_2, s) space, and hence the optimal risks and the related stopping sets are larger. Moreover, the discrete time solution converges monotonically to the continuous time solution if the set of possible stopping times {s_0, s_1, s_2, ...} increases and sup_i |s_i − s_{i+1}| → 0 (Chernoff and Petkau (1986)).
Following Chernoff (1972), discussed in Chapter I, a limiting form of our sequential problem is a special case, for G = [s_t, s_0], of the following stopping problem.

STOPPING PROBLEM. Given two independent Gaussian processes {W_1(s), s ∈ G} and {W_2(s), s ∈ G} of independent increments in the −s scale, with

E[dW_1(s), dW_2(s)]' = [0, 0]' and Var[dW_1(s), dW_2(s)]' = diag(−ds, −ds),

starting at W_1(s_0) = w_01, W_2(s_0) = w_02, where s_0 = σ_*², find a stopping time S to minimize E[r_1(W_1(S), W_2(S), S)]. Stopping is enforced when S = s_t.
Let P(w_1, w_2, s) be the expected stopping risk associated with the optimal stopping rule (under regularity conditions sufficient to imply the existence of an optimal rule). Then, it pays to continue taking observations iff

P(w_1, w_2, s) < r_1(w_1, w_2, s).     (2.4.1)

A point which satisfies the above inequality is called an optimal continuation point. The set of all optimal continuation points in the (w_1, w_2, s) space is the optimal continuation region 𝒞_0 corresponding to (2.4.1). The optimal stopping rule is characterized by the continuation region 𝒞_0. We shall see that the continuous time stopping problem can be expressed in terms of a free boundary problem involving the heat equation. The stopping problem can then be attacked using the analytic tools of partial differential equations. The following theorem gives some necessary conditions for the optimal continuation region 𝒞_0.
Theorem 2.4.1.
(i) For (w_1, w_2, s) ∉ 𝒞_0, P(w_1, w_2, s) = r_1(w_1, w_2, s).
(ii) For (w_1, w_2, s) ∈ 𝒞_0,

(P_{w_1 w_1}(w_1, w_2, s) + P_{w_2 w_2}(w_1, w_2, s))/2 = P_s(w_1, w_2, s).

(iii) At points on the boundary of 𝒞_0,

P_{w_1}(w_1, w_2, s) = (r_1)_{w_1}(w_1, w_2, s),     (2.4.2)
P_{w_2}(w_1, w_2, s) = (r_1)_{w_2}(w_1, w_2, s).     (2.4.3)
Proof:
(i) is obvious.
(ii) Suppose (w_1, w_2, s) ∈ 𝒞_0. Then the probability of stopping between s + δ and s is o(δ), and (W_1, W_2) changes from (W_1(s+δ), W_2(s+δ)) to (W_1(s), W_2(s)). Consequently,

P(w_1, w_2, s+δ) = E{P(W_1(s), W_2(s), s) | W_1(s+δ) = w_1, W_2(s+δ) = w_2} + o(δ)
= E{P(w_1 + Z_1√δ, w_2 + Z_2√δ, s)} + o(δ),

where Z_1, Z_2 are independent generic N(0,1) random variables. By Taylor's formula,

P(w_1, w_2, s+δ) = E{P(w_1, w_2, s) + Z_1√δ P_{w_1}(w_1, w_2, s) + Z_2√δ P_{w_2}(w_1, w_2, s)
+ Z_1²(δ/2)P_{w_1 w_1}(w_1, w_2, s) + Z_2²(δ/2)P_{w_2 w_2}(w_1, w_2, s)
+ Z_1 Z_2 δ P_{w_1 w_2}(w_1, w_2, s)} + o(δ)
= P(w_1, w_2, s) + (δ/2)P_{w_1 w_1}(w_1, w_2, s) + (δ/2)P_{w_2 w_2}(w_1, w_2, s) + o(δ).

Thus,

(P_{w_1 w_1}(w_1, w_2, s) + P_{w_2 w_2}(w_1, w_2, s))/2 = P_s(w_1, w_2, s).
(iii) At a specified time s_*, suppose (w_1*, w_2*, s_*) is on the boundary of 𝒞_0, with points (w_1, w_2*, s_*), w_1** < w_1 < w_1*, belonging to 𝒞_0 and points (w_1, w_2*, s_*), w_1 ≥ w_1*, outside 𝒞_0. Since P = r_1 in the stopping region, the right-hand derivative of P with respect to w_1 at (w_1*, w_2*, s_*) equals (r_1)_{w_1}(w_1*, w_2*, s_*). On the other side, E[P(w_1* + Z√δ, w_2*, s_*)], where Z is a generic N(0,1) random variable, corresponds to the risk of the suboptimal procedure in which one insists on not stopping between s_* + δ and s_*, but proceeds optimally thereafter. So

P(w_1*, w_2*, s_* + δ) ≤ E[P(w_1* + Z√δ, w_2*, s_*)] + o(δ).

Expanding the right-hand side separately over the events Z > 0 and Z < 0 and letting δ → 0, it follows that the left-hand derivative of P with respect to w_1 must agree with (r_1)_{w_1}(w_1*, w_2*, s_*), which establishes (2.4.2). Similarly, we can verify expression (2.4.3). □
The partial differential equations in Theorem 2.4.1 might not have a unique solution. Shepp (1969) discusses a stopping problem and obtains two solutions of the partial differential equations which are necessary conditions for the optimal rule; he shows that one solution is better than the other. Perhaps under some additional conditions the partial differential equations have unique solutions. Thus, a solution based on the differential equations in the theorem might not be a solution of our optimal stopping problem. The boundary of the optimal continuation region is calculated numerically in Chapter III without using the result of Theorem 2.4.1.
2.5 Parameter-Free Problem
The transformation

W_1*(s*) = aW_1(s),  W_2*(s*) = aW_2(s),  s* = a²s

converts W_1(s) and W_2(s) to W_1*(s*) and W_2*(s*), which are also two independent Wiener processes of independent increments in the −s* scale, with

E[dW_1*(s*), dW_2*(s*)]' = [0, 0]' and Var[dW_1*(s*), dW_2*(s*)]' = diag(−a²ds, −a²ds) = diag(−ds*, −ds*).

Since the function f is homogeneous of degree 1, if "a" is such that a²s_t = 1, the stopping risk r_1(w_1, w_2, s) will be transformed to b_1(w_1*, w_2*, s*), where

b_1(w_1*, w_2*, s*) = σ²a((s*)^{-1} − 1)f(w_1*, w_2*).

Hence, the original problem is equivalent to the parameter-free problem for which the stopping risk corresponding to stopping at (w_1*, w_2*, s*) is

b(w_1*, w_2*, s*) = ((s*)^{-1} − 1)f(w_1*, w_2*).

Note that

(s_n*)^{-1} = (a²s_n)^{-1} = (σ²σ_*^{-2} + n)/(σ²σ_*^{-2} + N/3)

means the currently available proportion of the total potential information.
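The parameter-free time scale is a one-line computation. A small illustrative helper in C (the name is ours):

```c
#include <assert.h>
#include <math.h>

/* s*_n of (2.5.3): c = sigma^2 * sigma_*^{-2} summarizes the prior weight
   as an equivalent per-arm sample size. */
double s_star(double sigma2, double sstar2, int n, int N) {
    double c = sigma2 / sstar2;
    return (c + N / 3.0) / (c + n);
}
```

The reciprocal 1/s*_n is the currently available proportion of the total potential information; it equals 1 exactly when n = N/3, i.e., when every patient has been allocated in the testing stage.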
The advantage of the parameter-free approach is the enforcement of stopping when s* = 1; this point is not affected by the total number of patients N. Since the risk function b(w_1*, w_2*, s*) only depends on the time variable and the two spatial variables, the solution can be applied to any case with specified parameters σ, σ_*, and N.
The relationships between the new variables W_n1*, W_n2*, s_n* and the original variables X_ji (j = 0, 1, 2; i = 1, 2, 3, ..., n) are summarized as follows:

W_n1* = a s_n[nσ^{-2}(X̄_n2 − X̄_n1) + σ_*^{-2}(μ_2 − μ_1)]/√2,     (2.5.1)
W_n2* = a s_n[nσ^{-2}(X̄_n1 + X̄_n2 − 2X̄_n0) + σ_*^{-2}(μ_1 + μ_2 − 2μ_0)]/√6,     (2.5.2)
s_n* = (σ²σ_*^{-2} + N/3)/(σ²σ_*^{-2} + n),     (2.5.3)

where a = s_t^{-1/2} = (σ_*^{-2} + (N/3)σ^{-2})^{1/2} and s_n = (σ_*^{-2} + nσ^{-2})^{-1}.
The second part of the stopping risk at time "n" in (2.3.3), −((N − 3n)/3)f(W_n1, W_n2), is equal to σ²a((s_n*)^{-1} − 1)f(W_n1*, W_n2*).
Since stopping is enforced at s* = 1, backward induction can be applied to solve for the optimal stopping time of a discrete version of the problem. If a good approximation to the difference between the solution of the continuous time problem and that of a discrete version is available, one discrete version could be solved numerically. Its solution, properly adjusted, can be used to approximate the solution of the continuous problem as well as those of other discrete versions.
In summary, the set of possible stopping times of the original problem is {s_0*, s_1*, ..., s_t* = 1}, in which |s_i* − s_{i+1}*| decreases in i. For the continuous time problem, stopping may occur at any point in the interval [1, s_0*]. A natural approximation to the results of these problems arises by allowing stopping on a discrete set of possible values of s*,

{1 + iΔ, i = 0, 1, ...},

so that the backward induction is more straightforward.
Figure 2.5.1 gives an example of the scales of the time variable s* for all the versions of the problem discussed above.
Figure 2.5.1 : The Scales of the Time Variable s*
(a) The original problem
(b) The continuous version
(c) The discrete version with the possible stopping times equally spaced
2.6 A 2-Dimensional Random Walk
An important factor in solving a problem numerically is the amount of computing time it will consume. Based on the fact that a simple random walk converges in distribution to a corresponding Brownian motion (Theorem 16.1 of Billingsley (1968)), a 2-dimensional random walk process is used to replace the original two Gaussian processes. This results in a simpler backward induction algorithm. The increments of the random walk are chosen to have the same first two moments as those of the increments they are replacing. If the discrete version of the original problem, in which the spacing of s* is Δ, is considered, then the following is the chosen simple random walk process.
Let (V_1(s), V_2(s)) be a random walk in two dimensions starting at (0, 0), where

(V_1(s − Δ) − V_1(s), V_2(s − Δ) − V_2(s)) = (±√(2Δ), 0) or (0, ±√(2Δ)),

each with probability 1/4. Stopping is permitted on the discrete set of possible values of s, {1 + iΔ, i = 0, 1, 2, ...}. The risk associated with stopping at (v_1, v_2, s) is

b(v_1, v_2, s) = (s^{-1} − 1)f(v_1, v_2),     (2.6.1)

where the function f is defined by (2.3.6). Define

b̂(v_1, v_2, s) = min{b(v_1, v_2, s), [b̂(v_1 − √(2Δ), v_2, s − Δ) + b̂(v_1 + √(2Δ), v_2, s − Δ)
+ b̂(v_1, v_2 − √(2Δ), s − Δ) + b̂(v_1, v_2 + √(2Δ), s − Δ)]/4}.     (2.6.2)

Then (v_1, v_2, s) is an optimal continuation point if

b̂(v_1, v_2, s) < b(v_1, v_2, s);

otherwise, it is a stopping point.
Denote by C_s and T_s the sets of continuation points and stopping points, respectively, for a fixed time s. Following the definition of the last continuation point in Chernoff and Petkau (1984), we call (v_1, v_2, s) a last continuation point if it is a continuation point and one of (v_1 − √(2Δ), v_2, s) and (v_1 + √(2Δ), v_2, s) is a stopping point. Let L_s be the set of last continuation points for a fixed time s. Then L_s characterizes C_s and T_s. The next chapter discusses a numerical method for finding L_s for a specified value of s.
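The recursion (2.6.2) can be run with two matrices, sweeping upward in s from the enforced stopping time s = 1. The following C sketch tests whether a grid point is a continuation point; the window size and Δ are illustrative assumptions, and off-window neighbours are simply treated as stopping points, so this is a toy, not programs D or F of Chapter III:

```c
#include <assert.h>
#include <math.h>

#define DELTA 0.125
#define HALF  40                 /* window: indices 0..2*HALF, origin at HALF */
#define DIM   (2 * HALF + 1)

static double f_stop(double v1, double v2) {            /* (2.3.6) */
    double r3 = sqrt(3.0), r2 = sqrt(2.0), r6 = sqrt(6.0);
    if (v1 + r3 * v2 <= 0.0 && v1 - r3 * v2 >= 0.0) return -r6 * v2;
    if (v1 <= 0.0) return (r6 * v2 - 3.0 * r2 * v1) / 2.0;
    return (r6 * v2 + 3.0 * r2 * v1) / 2.0;
}

static double b_stop(double v1, double v2, double s) {  /* (2.6.1) */
    return (1.0 / s - 1.0) * f_stop(v1, v2);
}

/* Is grid point (i0, j0) a continuation point at s = 1 + n*DELTA, n >= 1? */
int is_continuation(int n, int i0, int j0) {
    static double prev[DIM][DIM], cur[DIM][DIM];
    const double h = sqrt(2.0 * DELTA);                 /* spatial step */
    int i, j, m;
    for (i = 0; i < DIM; i++)                           /* at s = 1, b(., 1) = 0 */
        for (j = 0; j < DIM; j++)
            prev[i][j] = 0.0;
    for (m = 1; m < n; m++) {                           /* fill bhat at s = 1 + m*DELTA */
        double s = 1.0 + m * DELTA;
        for (i = 0; i < DIM; i++)
            for (j = 0; j < DIM; j++) {
                double v1 = (i - HALF) * h, v2 = (j - HALF) * h;
                double stop = b_stop(v1, v2, s);
                if (i > 0 && i < DIM - 1 && j > 0 && j < DIM - 1) {
                    double cont = (prev[i-1][j] + prev[i+1][j]
                                 + prev[i][j-1] + prev[i][j+1]) / 4.0;
                    cur[i][j] = cont < stop ? cont : stop;
                } else {
                    cur[i][j] = stop;                   /* window edge: assume stopping */
                }
            }
        for (i = 0; i < DIM; i++)
            for (j = 0; j < DIM; j++)
                prev[i][j] = cur[i][j];
    }
    {
        double s = 1.0 + n * DELTA;
        double v1 = (i0 - HALF) * h, v2 = (j0 - HALF) * h;
        double cont = (prev[i0-1][j0] + prev[i0+1][j0]
                     + prev[i0][j0-1] + prev[i0][j0+1]) / 4.0;
        return cont < b_stop(v1, v2, s);                /* continuation iff bhat < b */
    }
}
```

One sanity check built into the model: at s = 1 + Δ every point stops (the continuation value is 0 while the stopping risk is nonpositive), whereas at s = 1 + 2Δ the origin, where f vanishes, is a continuation point.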
CHAPTER III
NUMERICAL SOLUTIONS OF A 2-DIMENSIONAL RANDOM WALK

3.1 Introduction
The main result of this chapter is a numerical solution for the set of last optimal continuation points at a fixed time s, denoted by L_s, for a simple 2-dimensional random walk. The 2-dimensional random walk considered here, the risk associated with stopping at (v_1, v_2, s), and the optimal stopping rule are defined in the previous chapter. A grid point (v_1, v_2, s) is called a last continuation point if it is a continuation point and satisfies one of the following conditions:
(1) The grid point (v_1 − √(2Δ), v_2, s) is a continuation point and (v_1 + √(2Δ), v_2, s) is a stopping point.
(2) The grid point (v_1 + √(2Δ), v_2, s) is a continuation point and (v_1 − √(2Δ), v_2, s) is a stopping point.
The direct solution involves checking all the grid points in the (V_1, V_2) plane, {(v_1, v_2) : v_1 = 0 ± i√(2Δ), v_2 = 0 ± i√(2Δ), i = 1, 2, ...}, individually. This is very computationally intensive. If the number of grid points that must be considered can be reduced, we can save computing time. Section 3.2 is devoted to describing the properties of L_s which might be applied to reduce computation time.
The numerical techniques to determine L_s for a fixed time s will be discussed in Section 3.3. Section 3.4 gives an example to illustrate the use of this sequential clinical trials model.
3.2 The Properties of the Set of the Last Continuation Points
The symmetry of L_s stated in Proposition 3.2.1 can be derived directly from the symmetry of the stopping risk b(v_1, v_2, s), which is defined by (2.6.1).

Proposition 3.2.1. For a fixed time s, (v_1, v_2) is an optimal continuation point (an optimal stopping point) iff (−v_1, v_2) is an optimal continuation point (an optimal stopping point).

Proof: The proof follows directly from

b(v_1, v_2, s) = b(−v_1, v_2, s) and b̂(v_1, v_2, s) = b̂(−v_1, v_2, s)

for all real v_1, v_2, and s > 0. □
More symmetry can be obtained for our original problem, in which two independent Gaussian processes are involved. Denote by C_M the optimal continuation region for that problem. Let (w_1, w_2, s) be a specific point of (W_1(s), W_2(s), s). If (w_1, w_2, s) is on the boundary of C_M, there are five other points on the boundary of C_M which can be determined by simply switching the roles of the three treatments. The reason is that all three treatments are on the same footing in the established model. In other words, at a fixed time s there are six possible conclusions. The plane (W_1, W_2) can be divided into six parts, each corresponding to an appropriate (Bayesian) conclusion if the testing stage is stopped at time s. For example, the region G_1 = {(w_1, w_2) : w_1 − √3 w_2 ≤ 0 and w_1 ≥ 0} corresponds to the conclusion that treatment B is the best, A is the second best, and the standard treatment is the worst. Therefore, if the boundary of C_M in the region G_1 is determined, the boundary of C_M in the remaining regions can be solved by interchanging the three treatments. The following result describes the symmetry for the original problem.
Proposition 3.2.2. Let (W_1(s), W_2(s)) be two independent Gaussian processes, and let the stopping risk associated with stopping at (w_1, w_2, s) be b(w_1, w_2, s). Then, at a fixed time s, if (w_1, w_2) is an optimal continuation (stopping) point then so too are (−w_1, w_2), (w_1/2 + √3 w_2/2, √3 w_1/2 − w_2/2), (−w_1/2 − √3 w_2/2, √3 w_1/2 − w_2/2), (−w_1/2 + √3 w_2/2, −√3 w_1/2 − w_2/2), and (w_1/2 − √3 w_2/2, −√3 w_1/2 − w_2/2).

Proof: Let

G_1 = {(w_1, w_2) : w_1 − √3 w_2 ≤ 0 and w_1 ≥ 0};
G_2 = {(w_1, w_2) : w_1 − √3 w_2 ≥ 0 and w_1 + √3 w_2 ≥ 0};
G_3 = {(w_1, w_2) : w_1 + √3 w_2 ≤ 0 and w_1 ≥ 0};
G_4 = {(w_1, w_2) : w_1 + √3 w_2 ≥ 0 and w_1 ≤ 0};
G_5 = {(w_1, w_2) : w_1 − √3 w_2 ≤ 0 and w_1 + √3 w_2 ≤ 0};
G_6 = {(w_1, w_2) : w_1 − √3 w_2 ≥ 0 and w_1 ≤ 0}.
The role of treatment B in G_2 is the same as the role of treatment B in G_1; the role of treatment A in G_2 is the role of the standard treatment in G_1; and the role of the standard treatment in G_2 is the role of treatment A in G_1.
Assume (W_1(s_n), W_2(s_n)) = (w_n1, w_n2) is an optimal continuation (stopping) point in G_1. Then, by interchanging the standard treatment and treatment A, there is a corresponding optimal continuation (stopping) point in G_2 given by

E_n[(θ_2 − θ_0)/√2, (θ_2 + θ_0 − 2θ_1)/√6] = (w_n1/2 + √3 w_n2/2, √3 w_n1/2 − w_n2/2).

Similarly, there is a corresponding optimal continuation (stopping) point in G_3, which is

E_n[(θ_1 − θ_0)/√2, (θ_1 + θ_0 − 2θ_2)/√6] = (−w_n1/2 + √3 w_n2/2, −√3 w_n1/2 − w_n2/2);

a corresponding optimal continuation (stopping) point in G_4, obtained by interchanging treatments A and B, which is

E_n[(θ_1 − θ_2)/√2, (θ_1 + θ_2 − 2θ_0)/√6] = (−w_n1, w_n2);

a corresponding optimal continuation (stopping) point in G_5 which is

E_n[(θ_0 − θ_2)/√2, (θ_2 + θ_0 − 2θ_1)/√6] = (−w_n1/2 − √3 w_n2/2, √3 w_n1/2 − w_n2/2);

and a corresponding optimal continuation (stopping) point in G_6 which is

E_n[(θ_0 − θ_1)/√2, (θ_1 + θ_0 − 2θ_2)/√6] = (w_n1/2 − √3 w_n2/2, −√3 w_n1/2 − w_n2/2). □
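The six points of Proposition 3.2.2 are the orbit of (w_1, w_2) under the symmetry group of the three treatments, so all of them lie at the same distance from the origin. A quick illustrative check in C (names of our own):

```c
#include <assert.h>
#include <math.h>

/* The identity image plus the five symmetric images of Proposition 3.2.2. */
void six_images(double w1, double w2, double out[6][2]) {
    double r3 = sqrt(3.0);
    out[0][0] = w1;                   out[0][1] = w2;
    out[1][0] = -w1;                  out[1][1] = w2;
    out[2][0] =  w1/2 + r3*w2/2;      out[2][1] =  r3*w1/2 - w2/2;
    out[3][0] = -w1/2 - r3*w2/2;      out[3][1] =  r3*w1/2 - w2/2;
    out[4][0] = -w1/2 + r3*w2/2;      out[4][1] = -r3*w1/2 - w2/2;
    out[5][0] =  w1/2 - r3*w2/2;      out[5][1] = -r3*w1/2 - w2/2;
}
```

Each image is an orthogonal transform of (w_1, w_2) (a rotation by a multiple of 120 degrees or a reflection), which is why the norm is preserved.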
This result does not apply to the case involving the 2-dimensional random walk, since the grid points considered by the random walk do not reflect the symmetry. The next two results, Proposition 3.2.3 and Proposition 3.2.4, are very useful for computing last continuation points.
The rest of this section concerns only the case involving the 2-dimensional random walk. In Section 2.6, the optimal stopping rule is defined, and (v_1, v_2, s) is defined to be an optimal continuation point if the following inequality is true:

b̂(v_1, v_2, s) < b(v_1, v_2, s),

where b(v_1, v_2, s) and b̂(v_1, v_2, s) are defined by (2.6.1) and (2.6.2), respectively.
To verify Proposition 3.2.3, we make the following transformation. Let g(s_n) = 1 − s_n^{-1}, where s_n = 1 + nΔ, and define

e(v_1, v_2, s_n) = −f(v_1, v_2)           if n = 0,
e(v_1, v_2, s_n) = b̂(v_1, v_2, s_n)/g(s_n)  otherwise.

Note that

e(v_1, v_2, s_n) = min{−f(v_1, v_2), α_n[e(v_1 − √(2Δ), v_2, s_{n−1}) + e(v_1 + √(2Δ), v_2, s_{n−1})
+ e(v_1, v_2 − √(2Δ), s_{n−1}) + e(v_1, v_2 + √(2Δ), s_{n−1})]/4},

where α_n = g(s_{n−1})/g(s_n) is nonnegative and nondecreasing in n. This transformation converts the stopping rule to: stop as soon as e(V_1(s), V_2(s), s) = −f(V_1(s), V_2(s)).
The next lemma leads to the result of Proposition 3.2.3.

Lemma 3.2.1. α_{n+1} e(v_1, v_2, s_n) is nonincreasing in n, for n = 1, 2, ....
Proof: The proof proceeds by induction on n. First, the following inequality must be proved:

α_3 e(v_1, v_2, s_2) ≤ α_2 e(v_1, v_2, s_1) = α_2[−f(v_1, v_2)].     (3.2.1)

Since stopping is enforced at s_0 = 1, α_1 = g(s_0)/g(s_1) = 0, and so e(v_1, v_2, s_1) = min{−f(v_1, v_2), 0} = −f(v_1, v_2). By the definition of e(v_1, v_2, s_n),

e(v_1, v_2, s_2) ≤ −f(v_1, v_2) = e(v_1, v_2, s_1) ≤ 0.     (3.2.2)

Since α_n is nonnegative and nondecreasing in n, α_3 e(v_1, v_2, s_2) ≤ α_2 e(v_1, v_2, s_2) ≤ α_2 e(v_1, v_2, s_1). So the lemma is true for n = 1.
For the induction step, the following must be proved: if α_{n+1} e(v_1, v_2, s_n) ≤ α_n e(v_1, v_2, s_{n−1}) for n = 1, 2, ..., k−1, then α_{k+1} e(v_1, v_2, s_k) ≤ α_k e(v_1, v_2, s_{k−1}). There are four cases, according to which term achieves the minimum in the recursion for e at times s_{k−1} and s_k.

Case 1: e(v_1, v_2, s_n) = −f(v_1, v_2) for n = k−1, k. It is obvious that

α_{k+1} e(v_1, v_2, s_k) = α_{k+1}[−f(v_1, v_2)] ≤ α_k[−f(v_1, v_2)] = α_k e(v_1, v_2, s_{k−1}),

since α_n is nondecreasing in n and −f(v_1, v_2) ≤ 0.

Case 2: the minimum is achieved by the continuation term at both s_{k−1} and s_k. Since

α_k e(u_1, u_2, s_{k−1}) ≤ α_{k−1} e(u_1, u_2, s_{k−2}) for all u_1, u_2,

and α_n is nondecreasing in n,

α_{k+1} α_k e(u_1, u_2, s_{k−1}) ≤ α_k α_{k−1} e(u_1, u_2, s_{k−2}) for all u_1, u_2.

Averaging over the four neighboring grid points gives α_{k+1} e(v_1, v_2, s_k) ≤ α_k e(v_1, v_2, s_{k−1}).

Case 3: e(v_1, v_2, s_{k−1}) is achieved by the continuation term,

e(v_1, v_2, s_{k−1}) = α_{k−1}[e(v_1 − √(2Δ), v_2, s_{k−2}) + e(v_1 + √(2Δ), v_2, s_{k−2})
+ e(v_1, v_2 − √(2Δ), s_{k−2}) + e(v_1, v_2 + √(2Δ), s_{k−2})]/4,

and e(v_1, v_2, s_k) = −f(v_1, v_2). By the definition of e(v_1, v_2, s_n), we know e(v_1, v_2, s_{k−1}) ≤ −f(v_1, v_2). Together with the arguments in Case 2, we get α_{k+1} e(v_1, v_2, s_k) ≤ α_k e(v_1, v_2, s_{k−1}).

Case 4: e(v_1, v_2, s_{k−1}) = −f(v_1, v_2), and e(v_1, v_2, s_k) is achieved by the continuation term. It is obvious that e(v_1, v_2, s_k) ≤ −f(v_1, v_2). Since α_{k+1} ≥ α_k ≥ 0, we have

α_{k+1} e(v_1, v_2, s_k) ≤ α_{k+1}[−f(v_1, v_2)] ≤ α_k[−f(v_1, v_2)] = α_k e(v_1, v_2, s_{k−1}). □
c
By the optimal stopping rule, (v_1, v_2, s_n) is an optimal continuation point if

e(v_1, v_2, s_n) < −f(v_1, v_2).

By the result of Lemma 3.2.1 and the recursion defining e, if

e(v_1, v_2, s_{n−1}) < −f(v_1, v_2),

then

e(v_1, v_2, s_n) < −f(v_1, v_2).

In other words, if (v_1, v_2) is an optimal continuation point at time s = s_{n−1}, then (v_1, v_2) is an optimal continuation point at time s = s_n. Hence, we establish the following result.

Proposition 3.2.3. C_n is nondecreasing in n, where C_n is the set of the optimal continuation points when s = s_n.

Proof: The proof follows directly from the above argument. □
In general, the above result is true for a sequential stopping problem involving the 2-dimensional random walk process described above, if the stopping risk associated with a grid point, (u_1, u_2, t) say, can be expressed as −h(t)q(u_1, u_2) such that:
(1) h(t) is nonnegative and log concave;
(2) q(u_1, u_2) is nonnegative and "informative".
(For example, if q(u_1, u_2) is a constant function, it is not "informative".)
By Proposition 3.2.3, all the points (v_1, v_2) which are continuation points at time s_n are continuation points at time s_{n+1}. Thus, the number of points which must be examined, to determine whether they are continuation points or stopping points, is reduced.
Recall the decision rule discussed in Section 2.3. The standard treatment will be used on the remaining patients if we decide to stop allocating patients at time s_n and the observed posterior mean of the chosen contrasts, (W_1*(s_n), W_2*(s_n)), is in the region T_0. Similarly, new treatment A (B) will be selected if the observed (W_1*(s_n), W_2*(s_n)) is in the region T_1 (T_2), where

T_0 = {(x, y) : x + √3 y ≤ 0 and x − √3 y ≥ 0},
T_1 = {(x, y) : x ≤ 0 and x − √3 y ≤ 0}, and     (3.2.3)
T_2 = {(x, y) : x ≥ 0 and x + √3 y ≥ 0}.

Here, the inverse of the time s_n, 1/s_n, means the proportion of patients that have been allocated.
The following property tells us that a grid point (v_1, v_2) is a stopping point at time s_n if all four of its adjacent grid points are stopping points at the previous step and lie in the same region T_i.

Proposition 3.2.4. If (v_1 ± √(2Δ), v_2) and (v_1, v_2 ± √(2Δ)) are stopping points at time s_{n−1} and lie in the region T_i, for some i = 0, 1, 2, then (v_1, v_2) is a stopping point at s_n.

Proof: The points (v_1 ± √(2Δ), v_2) and (v_1, v_2 ± √(2Δ)) are stopping points at time s_{n−1}, so e equals −f at each of them. Since f is linear within each region T_i, the average of f over the four neighboring points equals f(v_1, v_2). Then

e(v_1, v_2, s_n) = min{−f(v_1, v_2), −α_n f(v_1, v_2)}.

We know that 0 ≤ α_n < 1 for all n and f(v_1, v_2) ≥ 0 for all v_1, v_2. Therefore −f(v_1, v_2) ≤ −α_n f(v_1, v_2), and (v_1, v_2) is a stopping point at time s_n. □
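Proposition 3.2.4 leans on the fact that f is linear within each region T_i, so a symmetric four-point average reproduces the center value. A small illustrative check in C, at a point deep inside T_0 with a hypothetical step h:

```c
#include <assert.h>
#include <math.h>

static double f_stop(double v1, double v2) {            /* (2.3.6) */
    double r3 = sqrt(3.0), r2 = sqrt(2.0), r6 = sqrt(6.0);
    if (v1 + r3 * v2 <= 0.0 && v1 - r3 * v2 >= 0.0) return -r6 * v2;
    if (v1 <= 0.0) return (r6 * v2 - 3.0 * r2 * v1) / 2.0;
    return (r6 * v2 + 3.0 * r2 * v1) / 2.0;
}

/* Average of f over the four neighbours at distance h. */
static double four_point_avg(double v1, double v2, double h) {
    return (f_stop(v1 - h, v2) + f_stop(v1 + h, v2)
          + f_stop(v1, v2 - h) + f_stop(v1, v2 + h)) / 4.0;
}
```

At (0, −5) with h = 0.5, all four neighbours stay inside T_0, so the average equals f(0, −5) = 5√6 exactly; near a region boundary, where some neighbours fall in a different linear piece, the identity fails, which is why the proposition requires all four neighbours to be in the same region.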
Suppose that from the current information it is known that we will stop at the next step and make a decision, and that the decision is the same as the one we would make at the current step if stopping were chosen. Then we should stop at the current step, because there is no need to go one step further. This is what Proposition 3.2.4 states. The next result is strongly supported by computer-generated numerical evidence, and by intuition, but remains unproven.

Conjecture 3.2.1. If at time s_n, n = 0, 1, 2, ..., (v_1, v_2) is an optimal stopping point in the region T_2, then the points (v_1 + √(2Δ), v_2) and (v_1, v_2 + √(2Δ)) are stopping points, too.
Proposition 3.2.5. Assume the distance between the time steps, Δ, is greater than (2√3 − 3)/6. Then points of the form (0, v_2) sufficiently far from the origin are optimal stopping points at time s_n, n = 2, 3, ....

Proof: At time s_2, such points are optimal stopping points if

b(0, v_2, s_2) ≤ E[b̂(R_1√(2Δ), v_2 + R_2√(2Δ), s_1)],

where (R_1, R_2) = (0, ±1) or (±1, 0), each with probability 1/4. Since

b̂(v_1, v_2, s_1) = b(v_1, v_2, s_1) for all v_1, v_2,

(0, v_2) is an optimal stopping point when this inequality holds with b in place of b̂, which is the case for Δ ≥ (2√3 − 3)/6.
Now assume (0, v_2) is an optimal stopping point at s_k, k = 3, 4, ..., m−1, where Δ ≥ (2√3 − 3)/6. Then, at time s_m, such points are stopping points if

b(0, v_2, s_m) ≤ E[b̂(R_1√(2Δ), v_2 + R_2√(2Δ), s_{m−1})] = E[b(R_1√(2Δ), v_2 + R_2√(2Δ), s_{m−1})],     (3.2.4)

where (R_1, R_2) = (0, ±1) or (±1, 0), each with probability 1/4. The inequality (3.2.4) reduces to a condition on Δ that is satisfied whenever Δ ≥ (2√3 − 3)/6. Hence, the proof is complete. □
Proposition 3.2.5 provides the growth rate in s of the continuation region in the V_2 direction when the distance between the time steps, Δ, is large enough. The growth rate is a factor of the square of the number of time steps between s = 1 and the current step. Note that W_n2 is the posterior mean of (θ_1 + θ_2 − 2θ_0)/√6. Thus, the inferiority of the standard treatment to one of the new treatments is quantified by the magnitude of W_n2. Specifically, if W_n1 is 0, i.e., if based on the available information the two new treatments have the same effects, then the magnitude of W_n2 can tell how much the standard treatment is worse than both of the two new treatments. Because of this, the continuation region grows quickly in s in the V_2 direction. Equivalently, the continuation region shrinks very fast in the V_2 direction in 1/s, which means the currently available proportion of total potential information.
The next proposition describes the relationship between the optimal continuation regions which are obtained by using different spacings of the time variable s.

Proposition 3.2.6. Let C_{s,i} be the set of the optimal continuation points at time s corresponding to the grid spacing (the step size) Δ_i. Then at any fixed time s,

C_{s,1} ⊂ C_{s,2} ⊂ C_{s,3} ⊂ ...,

if Δ_i = Δ_1/4^{i−1}, i = 1, 2, 3, ....

Proof: The proof is immediate, because the optimal stopping rule corresponding to the grid spacing Δ_i is a suboptimal stopping rule corresponding to the grid spacing Δ_{i+1}. □

Figure 3.2.1 illustrates the result of Proposition 3.2.6 using Δ_1 = 0.125 and s = 3.
Figure 3.2.1 : The Last Continuation Points at s = 3
(Considering Different Time Variable Spacings)
Δ = 0.125; Δ = 0.125/4; Δ = 0.125/16; Δ = 0.125/64
3.3 Ihe NUmerical Techniques
This section describes the techniques employed in obtaining numerical descriptions of the last optimal continuation points. The results established in Section 3.2 allow the computational effort to be substantially reduced.

Section 3.3.1 discusses how those results are applied in programming. Section 3.3.2 discusses some programming techniques for handling huge matrices, and an alternative algorithm to the one described in Section 3.3.1. An interpolation method, which reduces the computer time required for small grid spacings, is described in Section 3.3.3.
3.3.1 Programming Techniques
First recall the stopping risk and the Bayes risk associated with a grid point. Let

    f(v1,v2) = √2 v2 I_{T0}(v1,v2) + ((√6 v2 − 3√2 v1)/2) I_{T1}(v1,v2)
               + ((√6 v2 + 3√2 v1)/2) I_{T2}(v1,v2),

where I_T is the indicator of the set T, and T0, T1 and T2 are defined by (3.2.3). Then the stopping risk at (v1,v2,s) is

(3.3.1.1)    b(v1,v2,s) = (1/s − 1) f(v1,v2).
The Bayes risk associated with the optimal stopping rule at (v1,v2,s) is:

(3.3.1.2)    b̂(v1,v2,s) = min{ b(v1,v2,s), E[ b̂(v1+R1√(2Δ), v2+R2√(2Δ), s−Δ) ] },

where (R1,R2) = (±1,0) or (0,±1), each with probability 1/4.
As discussed in Section 2.6, (v1,v2,s) is an optimal continuation point if

(3.3.1.3)    b̂(v1,v2,s) < b(v1,v2,s);

otherwise, it is an optimal stopping point.
The backward induction algorithm yields the optimal solution to the stopping problem, since stopping is enforced when s = 1. Actually, the testing stage will not continue after s = 1+Δ, since

(3.3.1.4)    b(v1,v2,1+Δ) < 0, and hence b̂(v1,v2,1+Δ) = b(v1,v2,1+Δ).
Two programs, say D (direct) and F (fast), are coded in the computer language C with the backward induction algorithm to solve for the last continuation points numerically. Conjecture 3.2.1 is applied in both programs. A direct algorithm, i.e., checking every grid point until Conjecture 3.2.1 can be applied, is used in program D. Program F is more efficient because it also applies Propositions 3.2.3 and 3.2.4.
The basic features of programs D and F are:
(a) two matrices store the risk, one for the previous step and one for the current step;
(b) updating the risk occurs only at the grid points which are in the optimal continuation region;
(c) grid points are chosen symmetrically with respect to the line V1 = 0;
(d) at each time step, the number of grid points for which the inequality (3.3.1.3) must be checked, before they can be classified as continuation points or stopping points, is decided in advance.
By (3.3.1.2), the Bayes risk at a specified point can be computed directly by a recursive method, but this requires heavy expenditures of memory and time. To calculate b̂(v1,v2,1+iΔ), it is necessary to know b̂(v1±√(2Δ),v2,1+(i−1)Δ), b̂(v1,v2±√(2Δ),1+(i−1)Δ) and b(v1,v2,1+iΔ). Thus, in order to compute b̂(v1,v2,1+nΔ), one must first calculate and store a very large number of numerical values.
To avoid the demands on computer space and time of using recursion, two matrices are used in both programs, one to store the risk for the previous step and one for the current step. The risk corresponding to a specific grid point at a specific time step can be calculated using the two matrices, because the risk at the current step depends only on the risk at the previous step. To calculate b̂(v1,v2,1+iΔ), it is only necessary to compute b(v1,v2,1+iΔ); the numerical values of b̂(v1±√(2Δ),v2,1+(i−1)Δ) and b̂(v1,v2±√(2Δ),1+(i−1)Δ) are already stored in the matrix. Therefore, compared to the recursive method, the use of two matrices saves computer time and memory.
It is obvious, when s = 1, that all the grid points are stopping points and the stopping risk at each grid point is 0. For convenience, let s_i = 1+iΔ. Then (3.3.1.4) implies that at s = s_1,

    b̂(v1,v2,s_1) = b(v1,v2,1+Δ).

In other words, when s = s_1, all the grid points are stopping points, too.
Denote by PS and CS the matrices for storing the risks of the previous step and the current step, respectively. At time s = s_2, each cell of the matrix PS is determined by the function b(v1,v2,1+Δ). Naturally, the matrix CS should be determined by:

    b̂(v1,v2,s_2) = min{ b(v1,v2,1+2Δ), E[ b(v1+R1√(2Δ), v2+R2√(2Δ), 1+Δ) ] },

where (R1,R2) = (±1,0) or (0,±1), each with probability 1/4. Instead, the matrix CS is determined by:

    b̂_α(v1,v2,s_2) = min{ b(v1,v2,1+Δ), α E[ b(v1+R1√(2Δ), v2+R2√(2Δ), 1+Δ) ] },

where α = (1/s_1 − 1)/(1/s_2 − 1) and (R1,R2) = (±1,0) or (0,±1), each with probability 1/4. Hence, to decide the continuation region at s = s_3, we need only update the risk stored in the matrix PS at the grid points which are continuation points at s = s_2; for a stopping point (v1,v2), the value b(v1,v2,1+Δ) is already stored in the matrix PS corresponding to that grid point. Thus, it is not necessary to update the risk in the stopping region. Similar settings can be applied to the other time steps.
In summary, with this setting, at time s = s_i, if (v1,v2) is a continuation point, the risk stored in the corresponding cell of the matrix CS is determined by:

    b̂_{α_i}(v1,v2,s_i) = α_i b̂(v1,v2,s_i),

where α_i = (1/s_1 − 1)/(1/s_i − 1). Otherwise, if (v1,v2) is a stopping point, the risk stored is b(v1,v2,1+Δ). Note that

    b̂_{α_i}(v1,v2,s_i) = min{ b(v1,v2,1+Δ), (α_i/α_{i−1}) E[ b̂_{α_{i−1}}(v1+R1√(2Δ), v2+R2√(2Δ), s_{i−1}) ] },

where (R1,R2) = (±1,0) or (0,±1), each with probability 1/4. This setting does not change the fact that the risk at the current step depends only on the risks at the previous step. However, with this setting it is unnecessary to update the risk from one time step to the next for the stopping points. Hence, the program can be made more efficient with this result.
Due to the symmetry of the function b̂(v1,v2,s),

    b̂(v1,v2,s) = b̂(−v1,v2,s).

Hence, using a grid which is symmetric about V1 = 0, the continuation region on the negative half-plane is known if it is known on the positive half-plane. In programs D and F, we use the grid:

    {(v1,v2,s): s = 1+iΔ, v1 = (−1+j)√(2Δ), v2 = k√(2Δ),
     i = 0, 1, ..., ns; j = 0, 1, ..., nv1; k = 0, ±1, ..., ±nv2},

where ns, nv1 and nv2 are specified by the input of the programs.
Figures 3.3.1.1 and 3.3.1.2 picture the last continuation points for several possible values of the time variable s. The data for the figures are computational results of program F; the spacing of the time variable used is Δ = 0.125. Figure 3.3.1.3 shows the optimal continuation region at s = 100.
The main difference between programs D (direct) and F (fast) is that program F utilizes the results of Propositions 3.2.3 and 3.2.4 in addition to Conjecture 3.2.1. To apply Propositions 3.2.3 and 3.2.4, the algorithm for program F is more complicated than the algorithm for program D, and programming the solution of the last continuation points with these propositions is relatively difficult. With the help of the results produced by program D, we were able to code program F correctly.
For each grid level in V2, program D examines every grid level in V1 until Conjecture 3.2.1 is applicable. To locate the last continuation point corresponding to a grid level in V2, v_{2M} say, program D sequentially tests the grid points:

    {(i√(2Δ), v_{2M}): i = 0, 1, 2, ..., nv1},

where nv1 is the number of grid levels considered in V1 (specified in the program). The testing is terminated if one of the following conditions is true:
(i) all grid levels in V1 have been tested;
(ii) a grid point, (v_{1*}, v_{2M}) say, is determined to be a stopping point such that (v_{1*}−√(2Δ), v_{2M}) is a continuation point.
The computing is very time consuming if a large number of grid levels is considered.
By Proposition 3.2.3, if a grid point is a continuation point at time s_i, it is a continuation point at time s_{i+1}. Thus, at the current step it is not necessary to examine the grid points which are continuation points at the previous step. Also, by Proposition 3.2.4, a grid point is a stopping point at time s_i if all four adjacent grid points are stopping points at time s_{i-1} and are in the region T2 (or T1, or T0). These results allow the consideration of many fewer grid points than when Conjecture 3.2.1 is applied alone.
Using Propositions 3.2.3 and 3.2.4 and Conjecture 3.2.1, some grid points can be determined to be continuation points (or stopping points) with just the information about the continuation region of the previous step and with no computation. In program F, vectors are used to store the locations of the last continuation points at the previous step. Based on the locations of these continuation points, program F determines whether a computation is necessary to classify a grid point as a continuation point or a stopping point. The number of computations necessary is, therefore, much smaller than the total number of grid points.
The following is a simple example to illustrate tracking the locations of the last continuation points and determining whether a grid point must be examined in program F. Let

    V1 store the levels of the grid in V1: V1[1] = −√(2Δ), V1[2] = 0, V1[3] = √(2Δ), etc.;
    V2 store the levels of the grid in V2: V2[1] = 0, V2[2] = √(2Δ), V2[3] = 2√(2Δ), etc.;
    INDP store the locations of the last continuation points at the previous step:
         INDP[2] = 3 means that (V1[3], V2[2]) is a last continuation point at
         the previous step.

Suppose currently INDP[2] = 6, INDP[3] = 6 and INDP[4] = 8. For this example, fewer grid points must be examined to solve the last continuation point corresponding to the grid level V2[3] using program F than using program D.
Program D will sequentially examine the grid points:

    {(V1[i], V2[3]): i = 2, 3, 4, ..., nv1},

where nv1 is the number of grid levels considered in V1, until a grid point, (V1[q], V2[3]) say, is determined to be a stopping point such that (V1[q−1], V2[3]) is a continuation point. By Proposition 3.2.3, {(V1[i], V2[3]): i = 2, 3, 4, 5, 6} are continuation points, since they are continuation points at the previous step. Thus, currently the first possible stopping point corresponding to the grid level V2[3] is (V1[7], V2[3]). So, with the algorithm of program D, at least 6 grid points must be examined.
Using the algorithm of program F, the maximum number of grid points which must be examined is only 3. By Proposition 3.2.3 it is not necessary to compute at the grid points {(V1[i], V2[3]): i = 2, 3, 4, 5, 6}. By Proposition 3.2.4, {(V1[i], V2[3]): i ≥ 9} are stopping points at the current step, so it is not necessary to consider them either. Therefore, program F will examine only the grid points {(V1[i], V2[3]): i = 6, 7, 8}, in increasing order of i. If one of these grid points is determined to be a stopping point, say (V1[k], V2[3]), the examination is terminated and (V1[k−1], V2[3]) becomes the last continuation point. If none of them is a stopping point, (V1[8], V2[3]) is the last continuation point.
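The way Propositions 3.2.3 and 3.2.4 bound the search in this example can be sketched as a small helper. The array indp mirrors INDP above (one entry per V2 grid level); the function and its layout are illustrative assumptions, not code from Appendix A.

```c
/* Given the last-continuation indices indp[] of the previous step, return
   (via *lo and *hi) the range of V1 indices that can need examination for
   row j at the current step:
   - by Proposition 3.2.3, the point indp[j] is still a continuation point,
     so the examination can start there;
   - by Proposition 3.2.4, any index beyond max(indp[j]+1, indp[j-1],
     indp[j+1]) has all four neighbours in the previous stopping region,
     hence is a stopping point and need not be examined. */
static void candidate_range(const int indp[], int j, int *lo, int *hi)
{
    int h = indp[j] + 1;
    if (indp[j - 1] > h) h = indp[j - 1];
    if (indp[j + 1] > h) h = indp[j + 1];
    *lo = indp[j];
    *hi = h;
}
```

For INDP[2] = 6, INDP[3] = 6 and INDP[4] = 8 this yields the range 6..8, i.e. at most three grid points to examine for row 3, matching the count in the example above.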
In the above example, the only grid points considered have grid level in V1 equal to −√(2Δ) or 0, or have all four adjacent grid points in the region T2. It is simpler to decide how many grid points should be examined in this case. The case in which the grid points are not all in the region T2, or in which some of the four adjacent points are not in T2, is much more difficult.
For example, solving the last continuation point corresponding to a negative grid level in V2 is more complicated. Because of the shape of the continuation region (refer to Figure 3.3.1.1), corresponding to a positive grid level in V2 we need only solve for one last continuation point, called the "right-last continuation point", from the positive grid levels in V1. But corresponding to a negative grid level in V2 there might be two last continuation points, called the "left-last continuation point" and the "right-last continuation point", in the positive grid levels in V1.

To solve the last continuation point corresponding to a positive grid level in V2, the first grid level in V1 which must be examined is the grid point right next to the last continuation point at the previous step; this follows from Proposition 3.2.3. But it is not always true when solving last continuation points for negative grid levels in V2, for the reason discussed above.
Program F handles the case involving a negative grid level in V2 by the following procedure:
{1} Decide the number of last continuation points in positive V1 grid levels corresponding to the considered V2 grid level, based on the continuation region at the previous step. (The answer can be obtained by Proposition 3.2.3.)
{2} If the answer of step {1} is "one", the examining procedure is the same as that for a positive V2 grid level. If the answer of step {1} is possibly "two", go to the next step.
{3} From the locations of the "left-last (right-last) continuation points" at the previous steps, decide the grid levels in V1 which must be examined to determine the "left-last (right-last) continuation points". (The answer of step {3} can be obtained by Conjecture 3.2.1, Proposition 3.2.3 and Proposition 3.2.4.)
Hence, program F uses another vector to store the locations of the "left-last continuation points" corresponding to negative grid levels in V2.
The above case is only one of many cases which need more detailed consideration. All the required considerations are given in program F, which is listed in Appendix A. The program illustrates how Propositions 3.2.3 and 3.2.4 and Conjecture 3.2.1 can be utilized to decide how many grid points should be examined to determine the last continuation points.
Both programs D and F are compiled on the CONVEX C240 supercomputer at the University of North Carolina at Chapel Hill. In the case that 2000 grid levels are considered in both V1 and V2, Table 3.3.1.1 gives the CPU time required to compute the last continuation points at 1+(641)Δ, where Δ is the spacing of the time variable s.
Table 3.3.1.1: The CPU Time Required for the Solution at Time 1+(641)Δ, with Δ = 0.125

    Program used    CPU time required
    D               11095.14 seconds
    F                1777.74 seconds
As shown in Table 3.3.1.1, program F is much more efficient than program D. The techniques we have discussed do reduce the labor involved in carrying out the backward induction algorithm to solve our problem.
Figure 3.3.1.1: The Last Continuation Points at s = {2, 5, 10, 20, 35, 50, 65, 80, 100} (Δ = 0.125). [Plot in the (V1, V2) plane. Note: the area inside the two lines marked 100 is the optimal continuation region at s = 100, etc.]
Figure 3.3.1.2: The Last Continuation Points at s = {40, 50, 65, 80, 100} (Δ = 0.125). [Plot in the (V1, V2) plane. Note: the area inside the two lines marked 100 is the optimal continuation region at s = 100, etc.]
Figure 3.3.1.3: The Continuation Region at s = 100 (Δ = 0.125). [Plot in the (V1, V2) plane.]
3.3.2 Programming with Huge Matrices in C
The computer language C was devised originally for systems programming work, not for scientific computing. Relative to other high-level programming languages, C puts the programmer "very close to the machine". Therefore, declaring a huge matrix in C is not as simple as it is in Pascal or Fortran.

A huge matrix, PS say, can be declared in Pascal (on the Vax 6330 at the University of North Carolina at Chapel Hill) with a simple "var" statement. For example, to declare a 2000x2000 matrix of reals one can simply say:

    var
      PS: array[1..2000, 1..2000] of "variable-type";

where "variable-type" can be "real", "single", "double" or "extended". Any similar statement in C (on the Convex C240) would result in a compiler error. Instead, one needs to use something like the following:
(1) Declare PS as an array of row pointers, and i as an integer:

    double *PS[2000];
    int i;

("double" is the variable type; it can be "float", too; "int" means integer.)

(2) Run a loop to allocate the rows of PS:

    for (i = 0; i < 2000; i++) {
        PS[i] = (double *) malloc(2000 * sizeof(double));
        if (PS[i] == NULL) {
            fprintf(stderr, "Memory allocation fails: PS[%d]\n", i);
            exit(-1);
        }
    }
This method, and some other programming conventions in C, are discussed by Press, Flannery, Teukolsky and Vetterling (1988).
Another aspect of computing efficiency concerns how a matrix is allocated memory space in the computer. For example, on the Convex C240, Fortran allocates space by columns while C allocates space by rows.
For most computers, a block of variables stored in adjacent locations is brought into a "quickly retrievable state" all at once, as a "page". A program involving a huge matrix, with size larger than a "page", should take advantage of this. The "do loop" related to the computation of the matrix should be set up with the computing sequence in accordance with how the matrix is stored (which can be either by columns or by rows). Failing to do this places a heavy burden on memory retrieval; in other words, the computer has to spend more time "swapping pages". This can increase a program's execution time very substantially.
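The paging point can be seen in the loop order itself. The two routines below compute the same sum over a matrix allocated row by row (as in the malloc loop of the previous subsection); the first visits memory sequentially, while the second jumps a whole row between consecutive accesses. The sizes match the 2000x2000 case discussed above and are used purely as an illustration.

```c
#include <stdlib.h>

#define NR 2000
#define NC 2000

/* Allocate an NR x NC matrix row by row (error handling omitted here;
   see the checked malloc loop in the previous subsection). */
static double **alloc_matrix(void)
{
    double **m = (double **) malloc(NR * sizeof(double *));
    for (int i = 0; i < NR; i++)
        m[i] = (double *) malloc(NC * sizeof(double));
    return m;
}

/* Row-order traversal: consecutive accesses are adjacent in memory,
   so each "page" is fully consumed before being swapped out. */
static double sum_by_rows(double **m)
{
    double s = 0.0;
    for (int i = 0; i < NR; i++)
        for (int j = 0; j < NC; j++)
            s += m[i][j];
    return s;
}

/* Column-order traversal: successive accesses land in different rows,
   which forces far more page swapping when the matrix exceeds a page. */
static double sum_by_cols(double **m)
{
    double s = 0.0;
    for (int j = 0; j < NC; j++)
        for (int i = 0; i < NR; i++)
            s += m[i][j];
    return s;
}
```

Both functions return the same value; only the traversal order, and hence the paging behavior, differs.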
Finally, there is an alternative programming algorithm for this problem which avoids the use of two matrices; only one is needed. Because two huge matrices demand a large amount of computer memory, the sizes of these matrices are limited.
There is a programming technique called "Sweep" (Knott, personal communication) which permits one to work with a single large matrix and some variables. It can be motivated by considering the formula for the optimal stopping risk defined by (3.3.1.2). Let

    nv1 be the number of grid levels in V1;
    nv2 be the number of grid levels in V2;
    V1[i] = (−2+i)√(2Δ), i = 1, 2, ..., nv1;
    V2[j] = (j − ⌊nv2/2⌋)√(2Δ), j = 1, 2, ..., nv2.

Then, for a fixed p,

    {b̂(V1[i], V2[p], s_q): i = 1, 2, ..., nv1−1}

can be computed if

    {b̂(V1[i], V2[p−1], s_{q−1}): i = 1, 2, ..., nv1},
    {b̂(V1[i], V2[p], s_{q−1}): i = 1, 2, ..., nv1} and
    {b̂(V1[i], V2[p+1], s_{q−1}): i = 1, 2, ..., nv1}

are known. Therefore, instead of using two matrices to store the risk, "Sweep" uses three long vectors.
Programs D and F consider all the grid levels at a single time step before moving to the next time step. In contrast, "Sweep" works with part of the grid levels at several time steps simultaneously. Naturally, this requires a heavy "bookkeeping burden" to guarantee that information is produced in a timely manner and properly sequenced.
The basis for "Sweep" is that, in the backward induction algorithm, {b̂(v1[i], v2[p], s_q): i = 1, 2, ..., nv1} are useless in the computation of the risk at the times s_j, j ≥ q+1, after {b̂(v1[i], v2[p+1], s_{q+1}): i = 1, 2, ..., nv1−1} have been calculated. Assume nv1 is large enough so that {(v1[nv1], v2[j]): j = 1, 2, ..., nv2} are stopping points at all time steps. To get {b̂(v1[i], v2[j], s_3): i = 1, 2, ..., nv1; j = 3, 4}, the calculation algorithm of "Sweep" is as follows:

Step {1}: Calculate and store {b̂(v1[i], v2[j], s_1): i = 1, 2, ..., nv1; j = 1, 2, 3}.
Step {2}: Calculate and store {b̂(v1[i], v2[2], s_2): i = 1, 2, ..., nv1}.
Step {3}: For j = 4, calculate and store {b̂(v1[i], v2[j], s_1): i = 1, 2, ..., nv1} in the locations where {b̂(v1[i], v2[j−3], s_1): i = 1, 2, ..., nv1} was stored.
Step {4}: For j = 3, calculate and store {b̂(v1[i], v2[j], s_2): i = 1, 2, ..., nv1}.
Step {5}: Repeat Step {3} with j = 5 and then repeat Step {4} with j = 4.
Step {6}: For j = 3, calculate and store {b̂(v1[i], v2[j], s_3): i = 1, 2, ..., nv1}.
Step {7}: Repeat Step {3} with j = 6.
Step {8}: For j = 5, calculate and store {b̂(v1[i], v2[j], s_2): i = 1, 2, ..., nv1} in the locations where {b̂(v1[i], v2[j−3], s_2): i = 1, 2, ..., nv1} was stored.
Step {9}: Repeat Step {6} with j = 4.
It can be seen from Steps {3}, {5} and {8} that risk values which are not needed to compute the risk at the coming time steps are not stored. Thus, "Sweep" is not as memory-demanding as programs D and F. Let nv1 be the number of grid levels in V1, nv2 the number of grid levels in V2, and ns the total number of time steps. Using two matrices requires the memory space to store 2(nv1)x(nv2) numerical values; "Sweep" needs only the space to store 3(nv1)x(ns) numerical values. The amount of memory required by "Sweep" is less than three eighths of that required by using two matrices, since ns should be less than nv2/4.
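The storage comparison above can be checked with a line of arithmetic; the grid sizes used in the usage note are arbitrary illustrations satisfying ns < nv2/4.

```c
/* Numbers of risk values held in memory by the two storage schemes:
   two full (nv1 x nv2) matrices, versus the three nv1-long vectors of
   "Sweep" carried across ns time steps. */
static long two_matrix_cells(long nv1, long nv2) { return 2L * nv1 * nv2; }
static long sweep_cells(long nv1, long ns)       { return 3L * nv1 * ns; }
```

For example, with nv1 = nv2 = 2000 and ns = 400, the two-matrix scheme holds 8,000,000 values while "Sweep" holds 2,400,000, which is indeed less than three eighths of the former.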
It is difficult to apply the properties of the last continuation points described in Section 3.2 with the calculation algorithm of "Sweep"; tracking the locations of the last continuation points is especially hard. Hence, this calculation algorithm was not used in programming.
3.3.3 Interpolation
This section discusses a numerical method which can be employed to obtain smooth estimates of the boundary of the continuation region for the continuous time problem. The use of a large grid spacing produces estimates of the boundary of the continuation region which are not smooth. On the other hand, the use of a small grid spacing in a backward induction designed to obtain estimates for large values of s, say as large as s = 10000, would require an exorbitant amount of computer time.
By carrying out a single backward induction that incorporates a changing step size as it proceeds, smooth estimates of the boundary can be obtained without consuming a large amount of computer time. The use of various grid levels applies to most areas of numerical computation. Briggs and McCormick (1987) introduce some essential principles of multigrid methods designed for partial differential equations; one of the very basic principles of multigrid techniques is the use of various discretization levels to resolve different components of the error in the approximation.
The first phase of the backward induction, which incorporates a changing grid spacing as it proceeds, might execute M_1 steps corresponding to a large grid spacing Δ_1, from the initial value s = 1 to s = 1 + M_1Δ_1 = s_i, say, and the second phase might execute M_2 steps corresponding to a smaller grid spacing Δ_2, from the initial value for this phase, s_i, to s_i + M_2Δ_2, say. The backward induction can be continued for as many phases as desired if
(1) the factor for decreasing Δ from phase to phase is appropriate;
(2) at the first step of each phase (except the first phase), estimates of the risk at all the new grid levels are precise.
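The resulting schedule of phases can be sketched numerically. The routine below assumes, as in the next paragraph, that the spacing is reduced by a factor of 4 from phase to phase and that every phase executes the same number of steps; the particular values in the usage note are chosen to match the example of Figures 3.3.3.3-9, where the step size changes at s = 15.

```c
/* Value of s reached at the end of phase p (p = 1, 2, ...) when the
   backward induction starts at s = 1, every phase executes m steps,
   and phase p uses the spacing delta1 / 4^(p-1). */
static double phase_end(double delta1, int m, int p)
{
    double s = 1.0, d = delta1;
    for (int i = 0; i < p; i++) {
        s += m * d;      /* m steps at the current spacing */
        d /= 4.0;        /* refine by the factor 4 for the next phase */
    }
    return s;
}
```

With Δ_1 = 0.125 and m = 112 steps per phase, phase 1 ends at s = 15 and phase 2 at s = 18.5.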
We select a factor of 4 to decrease Δ from each phase to the next. To perform the interpolation, the following equality should be true:

    v2dim = 4(ns) + 1,

where v2dim is the number of grid levels in V2 and ns is the number of steps in each interpolation phase; each phase executes the same number of time steps. Since the spacing of the spatial variables V1 and V2 is √(2Δ), use of the factor 4 from phase to phase implies that the grid at the current phase is a refinement of the grid at the previous phase. Consequently, the estimates of the risk at the new grid levels, at the value of s corresponding to the first step of any phase, are not provided by the previous phase: interpolation is necessary. Figures 3.3.3.1 and 3.3.3.2 give pictures of the grid levels of a phase and its next phase on part of the plane (V1, V2).

Actually, interpolation is not necessary for the new grid levels which are located in the stopping region at the first step of each phase. The stopping region at the first step of each phase is provided by the previous phase, since the value of s corresponding to the last step of the previous phase is carried over to the first step of the current phase. Therefore, the function which defines the stopping risk, b(v1,v2,s), provides the risk for the new grid levels located in the stopping region.
The following is the procedure used to compute the estimates of the risk at new grid levels which are located in the continuation region at the first step of each phase (except the first phase). Assume the current time step size is Δ*, and the value of s corresponding to the first step of the current phase is s'.

Step 1: Compute the estimate of the risk at a new grid level, (v1',v2) say, such that (v1'−√(2Δ*), v2) and (v1'+√(2Δ*), v2) are two grid levels of the previous phase. The estimate of the risk at (v1',v2) is

    [ b̂(v1'−√(2Δ*), v2, s') + b̂(v1'+√(2Δ*), v2, s') ]/2,

where b̂(v1'−√(2Δ*), v2, s') and b̂(v1'+√(2Δ*), v2, s') are provided by the previous phase.

Step 2: Compute the estimate of the risk at a new grid level, (v1,v2') say, such that (v1, v2'−√(2Δ*)) and (v1, v2'+√(2Δ*)) are two grid levels of the previous phase. The estimate of the risk at (v1,v2') is

    [ b̂(v1, v2'−√(2Δ*), s') + b̂(v1, v2'+√(2Δ*), s') ]/2,

where b̂(v1, v2'−√(2Δ*), s') and b̂(v1, v2'+√(2Δ*), s') are provided by the previous phase.

Step 3: Compute the estimate of the risk at a new grid level, (v1',v2') say, such that (v1'−√(2Δ*), v2'+√(2Δ*)), (v1'+√(2Δ*), v2'+√(2Δ*)), (v1'+√(2Δ*), v2'−√(2Δ*)) and (v1'−√(2Δ*), v2'−√(2Δ*)) are four grid levels of the previous phase. The estimate of the risk at (v1',v2') is

    [ b̂(v1'−√(2Δ*), v2'+√(2Δ*), s') + b̂(v1'+√(2Δ*), v2'+√(2Δ*), s')
    + b̂(v1'+√(2Δ*), v2'−√(2Δ*), s') + b̂(v1'−√(2Δ*), v2'−√(2Δ*), s') ]/4,

where these four values are provided by the previous phase.
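Steps 1-3 amount to averaging on a refined grid and can be sketched as one routine. The layout here (previous-phase values at the even indices of the new grid, stored row-major with V2 as the row index) is an illustrative assumption; in the actual program, the averages are of course only needed at new levels inside the continuation region.

```c
/* Fill a (2n-1) x (2n-1) refined grid nw from the n x n grid old of the
   previous phase.  Points shared with the old grid are copied; a new
   point between two old neighbours gets their average (Steps 1 and 2);
   a new point at the centre of an old cell gets the average of the four
   corners (Step 3). */
static void refine(int n, const double *old, double *nw)
{
    int m = 2 * n - 1;
    for (int i = 0; i < m; i++)
        for (int j = 0; j < m; j++) {
            if (i % 2 == 0 && j % 2 == 0)          /* old grid level */
                nw[i * m + j] = old[(i / 2) * n + j / 2];
            else if (i % 2 == 0)                   /* Step 1: V1 neighbours */
                nw[i * m + j] = 0.5 * (old[(i / 2) * n + (j - 1) / 2] +
                                       old[(i / 2) * n + (j + 1) / 2]);
            else if (j % 2 == 0)                   /* Step 2: V2 neighbours */
                nw[i * m + j] = 0.5 * (old[((i - 1) / 2) * n + j / 2] +
                                       old[((i + 1) / 2) * n + j / 2]);
            else                                   /* Step 3: four corners */
                nw[i * m + j] = 0.25 * (old[((i - 1) / 2) * n + (j - 1) / 2] +
                                        old[((i - 1) / 2) * n + (j + 1) / 2] +
                                        old[((i + 1) / 2) * n + (j - 1) / 2] +
                                        old[((i + 1) / 2) * n + (j + 1) / 2]);
        }
}
```

Since the factor 4 in Δ halves the spatial spacing √(2Δ), every old grid level survives on the new grid, which is why only the midpoint and cell-centre values need interpolation.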
Figures 3.3.3.3-9 present the performance of the above interpolation estimates of the risk in an example. The interpolation used to get the data for the figures starts with the time variable spacing Δ = 0.125 and uses the time variable spacing 0.125/4 in the second phase. By comparing the results obtained by the interpolation with those obtained by starting with the smaller grid spacing Δ = 0.03125, it can be seen that the approximation is better when the corresponding value of s is farther away from the first step of the corresponding phase.
Tables 3.3.3.1-6 give the difference between the estimates of the risk by the above two methods. Ten steps after the step size has been changed, the difference is only one out of six digits. The tables and figures suggest that, to get a good approximation by interpolation of the continuation region corresponding to a large value of s, s_lg say, the interpolation should be set so that s_lg is an adequate number of steps away from the first step of the corresponding phase.
Figure 3.3.3.1: Sample of Grid Levels (Δ = Δ_1; Δv = √(2Δ_1)). [Grid in the (V1, V2) plane with spacing Δv.]

Figure 3.3.3.2: Sample of Grid Levels (Δ = Δ_1/4; Δv' = √(2Δ) = Δv/2). [Refined grid in the (V1, V2) plane; circles mark the new grid levels.]
Figure 3.3.3.3: 0 Steps after Changing the Step Size, s = 15. [Boundaries in the (V1, V2) plane: changing the step size at s = 15 vs. starting with the fine grid.]

Figure 3.3.3.4: 1 Step after Changing the Step Size, s = 15.03125. [Interpolation vs. starting with the fine grid.]

Figure 3.3.3.5: 2 Steps after Changing the Step Size, s = 15.0625. [Interpolation vs. starting with the fine grid.]

Figure 3.3.3.6: 5 Steps after Changing the Step Size, s = 15.15625. [Interpolation vs. starting with the fine grid.]

Figure 3.3.3.7: 10 Steps after Changing the Step Size, s = 15.3125. [Interpolation vs. starting with the fine grid.]

Figure 3.3.3.8: 20 Steps after Changing the Step Size, s = 15.625. [Interpolation vs. starting with the fine grid.]

Figure 3.3.3.9: 40 Steps after Changing the Step Size, s = 16.25. [Interpolation vs. starting with the fine grid.]
Table 3.3.3.1: Estimates of the Risk at the Last Continuation Points:
1 Step after Changing the Step Size (s = 15.03125 for every row)
Note: (1) starting with Δ = 0.03125; (2) interpolating at s = 15.

          Last Continuation Points          Bayes Risk
    V2     V1 by (1)  V1 by (2)      by (1)        by (2)
   -5.00     16.25      16.25     -26.465902    -26.467167
   -4.75     16.00      16.00     -26.253857    -26.255383
   -4.50     15.50      15.75     -25.553062    -26.043539
   -4.25     15.25      15.25     -25.340994    -25.342718
   -4.00     14.75      14.75     -24.640369    -24.641386
   -3.75     14.50      14.50     -24.428253    -24.429691
   -3.50     14.25      14.25     -24.216757    -24.217882
   -3.25     13.75      13.75     -23.515690    -23.517607
   -3.00     13.50      13.50     -23.304102    -23.303364
   -2.75     13.00      13.00     -22.603315    -22.604528
   -2.50     12.75      12.75     -22.391630    -22.392738
   -2.25     12.50      12.50     -22.180462    -22.181658
   -2.00     12.00      12.00     -21.479391    -21.480476
   -1.75     11.75      11.75     -21.268087    -21.269445
   -1.50     11.25      11.25     -20.567413    -20.568213
   -1.25     11.00      11.00     -20.355995    -20.357134
   -1.00     10.75      10.75     -20.144978    -20.145952
   -0.75     10.25      10.50     -19.444265    -19.935314
   -0.50     10.00      10.00     -19.233128    -19.231411
   -0.25      9.75       9.50     -19.022327    -18.533632
    0.00      9.50       9.25     -18.811892    -18.322475
    0.25      9.00       9.00     -18.110943    -18.111856
    0.50      8.75       8.75     -17.900345    -17.901188
    0.75      8.50       8.50     -17.690029    -17.690931
    1.00      8.25       8.25     -17.480009    -17.480745
    1.25      7.75       8.00     -16.779810    -17.270845
    1.50      7.50       7.50     -16.569904    -16.570822
    1.75      7.25       7.25     -16.360300    -16.361126
    2.00      7.00       7.00     -16.151030    -16.151848
    2.25      6.75       6.75     -15.942143    -15.942892
    2.50      6.50       6.50     -15.733726    -15.734859
    2.75      6.25       6.50     -15.526164    -16.015518
    3.00      6.25       6.25     -15.807142    -15.807614
    3.25      6.00       6.00     -15.599866    -15.599711
    3.50      5.75       5.75     -15.394222    -15.393500
    3.75      5.75       5.75     -15.675647    -15.674094
    4.00      5.50       5.50     -15.471072    -15.471933
    4.25      5.50       5.50     -15.752951    -15.752053
    4.50      5.50       5.25     -16.036310    -15.548253
    4.75      5.25       5.25     -15.832254    -15.829201
    5.00      5.25       5.25     -16.115185    -16.113358
Table 3.3.3.2: Estimates of the Risk at the Last Continuation Points:
2 Steps after Changing the Step Size (s = 15.06250 for every row)
Note: (1) starting with Δ = 0.03125; (2) interpolating at s = 15.

          Last Continuation Points          Bayes Risk
    V2     V1 by (1)  V1 by (2)      by (1)        by (2)
   -5.00     16.25      16.25     -26.470263    -26.471504
   -4.75     16.00      16.00     -26.258106    -26.259377
   -4.50     15.50      15.75     -25.557293    -26.047716
   -4.25     15.25      15.25     -25.345114    -25.346516
   -4.00     15.00      15.00     -25.133596    -25.134453
   -3.75     14.50      14.50     -24.432253    -24.433422
   -3.50     14.25      14.25     -24.220644    -24.221806
   -3.25     13.75      13.75     -23.519571    -23.521189
   -3.00     13.50      13.50     -23.307886    -23.308424
   -2.75     13.00      13.25     -22.607079    -23.097178
   -2.50     12.75      12.75     -22.395298    -22.396404
   -2.25     12.50      12.50     -22.184027    -22.185074
   -2.00     12.00      12.25     -21.482950    -21.974089
   -1.75     11.75      11.75     -21.271551    -21.272617
   -1.50     11.50      11.50     -21.060617    -21.061308
   -1.25     11.00      11.00     -20.359364    -20.360294
   -1.00     10.75      10.75     -20.148256    -20.149248
   -0.75     10.50      10.50     -19.937567    -19.938421
   -0.50     10.00      10.00     -19.236315    -19.236259
   -0.25      9.75       9.75     -19.025429    -19.025328
    0.00      9.50       9.50     -18.814899    -18.815161
    0.25      9.00       9.00     -18.113968    -18.114727
    0.50      8.75       8.75     -17.903307    -17.904152
    0.75      8.50       8.50     -17.692917    -17.693756
    1.00      8.25       8.25     -17.482813    -17.483599
    1.25      7.75       8.00     -16.782644    -17.273619
    1.50      7.50       7.50     -16.572699    -16.573574
    1.75      7.25       7.25     -16.363058    -16.363903
    2.00      7.00       7.00     -16.153757    -16.154499
    2.25      6.75       6.75     -15.944845    -15.945591
    2.50      6.50       6.75     -15.736416    -16.226887
    2.75      6.50       6.50     -16.017805    -16.018314
    3.00      6.25       6.25     -15.809752    -15.810262
    3.25      6.00       6.00     -15.602511    -15.602716
    3.50      6.00       5.75     -15.885031    -15.396419
    3.75      5.75       5.75     -15.678270    -15.678128
    4.00      5.50       5.50     -15.473865    -15.473703
    4.25      5.50       5.50     -15.755659    -15.755364
    4.50      5.50       5.50     -16.038935    -16.038170
    4.75      5.25       5.25     -15.835067    -15.833919
    5.00      5.25       5.25     -16.117945    -16.116978
Table 3.3.3.3  Estimates of the Risk at the Last Continuation Points: 5 Steps after Changing the Step Size
Note: (1) starting with Δ = 0.03125; (2) interpolating at s = 15.
Columns of the table: s = 15.15625 throughout; V2 from -5.00 to 5.00 in steps of 0.25; V1 by (1); V1 by (2); Bayes risk by (1); Bayes risk by (2).  [The 41 numeric rows are not reproduced here.]
Table 3.3.3.4  Estimates of the Risk at the Last Continuation Points: 10 Steps after Changing the Step Size
Note: (1) starting with Δ = 0.03125; (2) interpolating at s = 15.
Columns of the table: s = 15.31250 throughout; V2 from -5.00 to 5.00 in steps of 0.25; V1 by (1); V1 by (2); Bayes risk by (1); Bayes risk by (2).  [The 41 numeric rows are not reproduced here.]
Table 3.3.3.5  Estimates of the Risk at the Last Continuation Points: 20 Steps after Changing the Step Size
Note: (1) starting with Δ = 0.03125; (2) interpolating at s = 15.
Columns of the table: s = 15.62500 throughout; V2 from -5.00 to 5.00 in steps of 0.25; V1 by (1); V1 by (2); Bayes risk by (1); Bayes risk by (2).  [The 41 numeric rows are not reproduced here.]
Table 3.3.3.6  Estimates of the Risk at the Last Continuation Points: 40 Steps after Changing the Step Size
Note: (1) starting with Δ = 0.03125; (2) interpolating at s = 15.
Columns of the table: s = 16.25000 throughout; V2 from -5.00 to 5.00 in steps of 0.25; V1 by (1); V1 by (2); Bayes risk by (1); Bayes risk by (2).  [The 41 numeric rows are not reproduced here.]
3.4 An Example

This section uses the example of comparing treatments for PCP, which is described in Section 2.1 of the last chapter, to illustrate how to apply the solution of the optimal stopping rule. Let X0i be the PaO2 of a patient treated by Bactrim, X1i be the PaO2 of a patient treated by Pentamidine, and X2i be the PaO2 of a patient treated by Pentamidine with Steroid.

Assume the Xji's (j = 0, 1, 2; i = 1, 2, ...) are independent N(θj, 1) random variables. Further, suppose the θj's (j = 0, 1, 2) are independent, θ0 has prior density N(86.8, 1), θ1 has prior density N(87, 1), and θ2 has prior density N(87.5, 1).

If we get X01 = 84.8, X11 = 85 and X21 = 86 from the first three treated patients, then W11 = 5.3033 and W12 = 4.6949 by (2.5.1), (2.5.2) and (2.5.3). Since (5.3033, 4.6949) is in the optimal continuation region in Figure 3.3.1.1, we continue by allocating the next three patients.

Suppose the testing stage is not stopped either after 6 patients are treated or after 9 patients are treated, and the observed (W*41, W*42) is (11, 8). In Figure 3.3.1.1, (11, 8) is not in the optimal continuation region at s = 20; therefore, the testing stage is terminated after 12 patients are allocated. Since (W*41, W*42) = (11, 8) is in the region T2, the treatment Pentamidine with Steroid is selected for the remaining 285 patients.
CHAPTER IV
THE ASYMPTOTICALLY EXPECTED OVERJUMP
OF A 2-DIMENSIONAL RANDOM WALK
4.1 Introduction

This chapter presents the solution for the asymptotically expected excess over the boundary by a 2-dimensional random walk. This solution can be used to approximate the difference between the solution of the continuous version of the stopping problem discussed in Chapter II and that of the discrete version.
The random walk considered in this chapter is defined as follows. Let (Xn, Yn) be a random walk in two dimensions, where

(Xn, Yn) = (Xn-1 ± 1, Yn-1)  or  (Xn-1, Yn-1 ± 1),

each with probability 1/4. The boundary considered is the pair of parallel lines

aX + bY = ±c,

for some integers a, b, and a positive integer c.
In Section 4.2 the distance between (Xm, Ym) and the boundary is studied for specified a and b, where m is the first time the random walk crosses the boundary. This distance is called the "overjump". The solution for the asymptotically expected overjump for 0 < a ≤ b is given by Theorem 4.2.1.

In Section 4.3, some numerical results of Theorem 4.2.1 are computed, and the application of the numerical results is discussed.
4.2 The Asymptotically Expected Overjump
The region inside the boundary is called the continuation region and is denoted by C. The complement of C is called the stopping region and is denoted by S. The random walk begins at a random point in the continuation region C and "walks" until it reaches the stopping region S, at which point it stops. Let P_S(x,y) be the probability that a random walk starting at (x,y) eventually hits the stopping region S. Then,

(4.2.1)  P_S(x,y) = Σ_{(u,v) in S} P[(x,y),(u,v)] + Σ_{(u,v) in C} P[(x,y),(u,v)] P_S(u,v),

where P[(x,y),(u,v)] is the conditional probability that (X1, Y1) is equal to (u,v) given that (X0, Y0) is equal to (x,y) (Hoel, Port and Stone (1972), p.25).
Partition the continuation region into strips by the lines aX + bY = -c+1, aX + bY = -c+2, ..., aX + bY = c-2, aX + bY = c-1, in the case that the boundary is given by aX + bY = ±c for some specified a, b and c. The i-th closest strip to the line aX + bY = c is called the i-th strip. Figure 4.2.1 shows this partition for the specific case a = 1, b = 2 and c = 8.

The next result is used in the proof of Theorem 4.2.1.
Lemma 4.2.1.  Let the boundary of the walk and the overjump be as defined in Section 4.1.
(i) If the random walk begins at a random point (x0, y0) in the i-th strip for 1 ≤ i ≤ a, and (X1, Y1) = (x0+1, y0), then the overjump in the X-direction is uniformly distributed on the interval ((a-i)/a, (a-i+1)/a).
(ii) If the random walk begins at a random point (x0, y0) in the i-th strip for 1 ≤ i ≤ b, and (X1, Y1) = (x0, y0+1), then the overjump in the X-direction is uniformly distributed on the interval ((b-i)/a, (b-i+1)/a).

Proof: (i) The continuation region is partitioned into 2c strips by the lines aX + bY = c-j, where j = 1, 2, ..., 2c-1. The distance in the X-direction between the lines aX + bY = c-j and aX + bY = c-j-1, where j = 0, 1, 2, ..., 2c-1, is 1/a. Hence, if the random walk begins at a point in the k-th strip for k > a, it will not reach the stopping region by walking just one unit in the X-direction. On the other hand, if the random walk begins at a point in the k-th strip for 1 ≤ k ≤ a and walks one unit in the X-direction, it reaches the stopping region and the overjump is between (a-k)/a and (a-k+1)/a. Where in this interval the overjump falls depends on where the random walk begins in the strip. The result then follows from the assumption that the walk begins at a uniformly random point in the strip.
(ii) The proof is similar to (i).  □
Let p_i be the probability that a 2-dimensional random walk starting at a random point in the i-th strip eventually reaches the stopping region S. Then, if a ≤ b < 2c-b, (4.2.1) implies

(4.2.2)  p_i = (1 + 1 + p_{i+b} + p_{i+a})/4,              1 ≤ i ≤ a,
         p_i = (1 + p_{i-a} + p_{i+b} + p_{i+a})/4,        a < i ≤ b,
         p_i = (p_{i-b} + p_{i-a} + p_{i+b} + p_{i+a})/4,  b < i ≤ 2c-b.
The following notation will be used in Theorem 4.2.1 and Corollary 4.2.1, which follow. For integers a, b and a positive integer c, let

T_c = inf{n : |aX_n + bY_n| ≥ c},

D_c = (|aX_{T_c} + bY_{T_c}| - c)/(a^2 + b^2)^{1/2},

H_c = |X_{T_c} - (c - bY_{T_c})/a|    if aX_{T_c} + bY_{T_c} > c,
    = |X_{T_c} - (-c - bY_{T_c})/a|   if aX_{T_c} + bY_{T_c} < -c,

V_c = |Y_{T_c} - (c - aX_{T_c})/b|    if aX_{T_c} + bY_{T_c} > c,
    = |Y_{T_c} - (-c - aX_{T_c})/b|   if aX_{T_c} + bY_{T_c} < -c.

Then D_c is the overjump perpendicular to the boundary, H_c is the overjump in the X-direction and V_c is the overjump in the Y-direction.
The limits lim_{c→∞} E D_c, lim_{c→∞} E H_c and lim_{c→∞} E V_c exist by renewal theory (Hogan (1986)). Let

(4.2.3)  ED = lim_{c→∞} E D_c,

(4.2.4)  EH = lim_{c→∞} E H_c,

(4.2.5)  EV = lim_{c→∞} E V_c.
Theorem 4.2.1.  Let ED and EH be defined as above. For integers a and b with 0 < a ≤ b, if (r_1, r_2, ..., r_b) is a solution of the following set of equations:

r_1 + r_2 w_1^i + r_3 w_2^i + ... + r_b w_{b-1}^i = q_i,    i = -(b-1), -(b-2), ..., -1, 0,

where

q_i = (1 - 2i)/(2a)

and w_1, w_2, ..., w_{b-1} are the roots with magnitude less than 1 of the equation

1 + z^{b-a} - 4z^b + z^{b+a} + z^{2b} = 0,

then

EH = r_1    and    ED = a r_1/(a^2 + b^2)^{1/2}.
Proof: Let e_i be the expected overjump in the X-direction of a random walk beginning at a random point in the i-th strip when the boundary is aX + bY = ±c for some specific positive integers a, b and c such that a ≤ b. Then, by Lemma 4.2.1 and (4.2.2), we have

(4.2.6)  e_i = [(2(b-i)+1)/(2a) + (2(a-i)+1)/(2a) + e_{i+b} + e_{i+a}]/4,  1 ≤ i ≤ a,
         e_i = [(2(b-i)+1)/(2a) + e_{i-a} + e_{i+b} + e_{i+a}]/4,          a < i ≤ b,
         e_i = [e_{i-b} + e_{i-a} + e_{i+b} + e_{i+a}]/4,                  b < i ≤ 2c-b.

Actually, (4.2.6) can be rewritten as

(4.2.7)  e_i = (e_{i-b} + e_{i-a} + e_{i+b} + e_{i+a})/4,

where i = 1, 2, ..., 2c-b, with the convention

e_i = (1 - 2i)/(2a),    i = -b+1, -b+2, ..., -1, 0.

It is clear that (4.2.7) is a homogeneous linear difference equation. A solution of this linear difference equation is of the form

e_i = r_1 + r_2 w_1^i + r_3 w_2^i + ... + r_b w_{b-1}^i,

where w_1, w_2, ..., w_{b-1} are the roots with magnitude less than 1 of the equation

1 + z^{b-a} - 4z^b + z^{b+a} + z^{2b} = 0.

By the boundary conditions of the linear difference equation, we can get a solution (r_1, r_2, ..., r_b). Since |w_i| < 1 for i = 1, 2, 3, ..., b-1, the terms r_{i+1} w_i^j vanish as j grows, and we get e_j → r_1. In other words, the expected overjump in the X-direction tends to r_1 as c goes to infinity. Therefore, the expected overjump in the Y-direction and the expected overjump perpendicular to the boundary tend to

a r_1/b    and    a r_1/(a^2 + b^2)^{1/2},

respectively, as c goes to infinity.  □
The following are analytic solutions for some special cases of Theorem 4.2.1:

(1) if a = 1 and b = 1, ED = 1/(2√2);

(2) if a = 1 and b = 2, ED = √5/5 - 1/10;

(3) if a = 1 and b = 3,
    ED = √5/(2√2) - (1/10)(6√10 - 2(10√5 - 20)^{1/2})/(5 + √5 - (√5 - 2)^{1/2} - (√5 + 2)^{1/2}).
The asymptotically expected overjump in the X-direction when the boundary has slope -a/b is the same as the asymptotically expected overjump in the Y-direction when the boundary has slope -b/a. Thus, Corollary 4.2.1 follows.
Corollary 4.2.1.  For integers a and b where 0 < b ≤ a, if (r_1, r_2, ..., r_a) is a solution of the following set of equations:

r_1 + r_2 w_1^i + r_3 w_2^i + ... + r_a w_{a-1}^i = q_i,    i = -(a-1), -(a-2), ..., -1, 0,

where

q_i = (1 - 2i)/(2b)

and w_1, w_2, ..., w_{a-1} are the roots with magnitude less than 1 of the equation

1 + z^{a-b} - 4z^a + z^{a+b} + z^{2a} = 0,

then

EH = b r_1/a    and    ED = b r_1/(a^2 + b^2)^{1/2}.
A program in Turbo Pascal was coded to simulate the asymptotically expected overjump. The numerical evidence generated by the program strongly supports the theorem.
Figure 4.2.1  Boundary: X + 2Y = 8 or -8
[Figure not reproduced: the partition of the continuation region between the two boundary lines into strips, for a = 1, b = 2 and c = 8.]
4.3 Numerical Results

Because of symmetry, if the boundary has slope -p/q for positive integers p and q, the asymptotically expected overjumps can be obtained from Theorem 4.2.1. Hence, the asymptotically expected overjump perpendicular to the boundary is the same for a boundary with slope -a/b, -b/a, a/b or b/a, for any positive integers a and b. Therefore, we only compute the asymptotically expected overjumps for boundaries with rational slope -a/b, where a and b are relatively prime positive integers and a < b.

A program coded in Gauss provides the numerical results. Figures 4.3.1-4.3.3 illustrate the numerical results. Table 4.3.1 provides the asymptotically expected overjumps for boundaries with some selected slopes. The largest value of b considered is 150.
Finally, Theorem 4.2.1 can be applied as follows. Consider the discrete version of the stopping problem defined in Section 2.6, in which the spacing of the time variable is Δ. Assume that part of the boundary of the optimal continuation region for this problem can be described by the equation

aX + bY = c

for some positive integers a, b and c. By Theorem 4.2.1, the suggested approximation for the solution of the continuous time problem is obtained by shifting this boundary by the expected overjump ED defined in (4.2.3), with the sign of the shift determined so as to make the continuation region for the continuous time problem larger.
Figure 4.3.1  Perpendicular Expected Overjumps vs Slopes
[Plot not reproduced: perpendicular expected overjump, roughly 0.32 to 0.48, against slopes from -1.00 to 0.00.]

Figure 4.3.2  Horizontal Expected Overjumps vs Slopes
[Plot not reproduced: horizontal expected overjump, roughly 0 to 24, against slopes from -1.00 to 0.00.]

Figure 4.3.3  Vertical Expected Overjumps vs Slopes
[Plot not reproduced: vertical expected overjump, roughly 0.36 to 0.50, against slopes from -1.00 to 0.00.]
Table 4.3.1  Asymptotically Expected Overjumps
Columns of the table: a; b; slope = -a/b; and the perpendicular, horizontal and vertical asymptotically expected overjumps. The slopes run from -1.0000 (a = b = 1: overjumps 0.35355, 0.50000, 0.50000) through -0.5000 (a = 1, b = 2: overjumps 0.34721, 0.77639, 0.38820) down to -0.0303 (a = 1, b = 33: overjumps 0.46612, 15.38900, 0.46633). As the slope approaches 0 the horizontal overjump grows without bound while the perpendicular and vertical overjumps stay below 0.5.  [The remaining numeric rows are not reproduced here.]
APPENDIX A
PROGRAM LISTING

/*===============================================================*/
/*                                                               */
/*  THIS CODE IS FOR SOLVING THE LAST CONTINUATION POINTS        */
/*  INVOLVING A 2-DIMENSIONAL RANDOM WALK.                       */
/*                                                               */
/*===============================================================*/
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define out   "t.int"       /*                                    */
#define right "rg.int"      /*  define the names of output files  */
#define left  "lf.int"      /*                                    */
#define w1dim 445           /* the number of grid levels in V1 */
#define w2dim 445           /* the number of grid levels in V2 */
#define negdim 100          /* the number of negative grid levels in V2 */
#define leftdim no_steps+negdim+10
                            /* the size of the vector storing the
                               locations of the "left-last" continuation
                               points for negative grid levels in V2 */
#define starting_S 1
#define starting_dels 0.125 /* the spacing of the time variable in the
                               first phase */
#define spacing 1
#define no_stages 2         /* the number of phases */
#define no_steps 111        /* the number of steps in each phase */
double b();

main()
{
/*===============================================================*/
/*                                                               */
/*  VARIABLE DECLARATIONS                                        */
/*                                                               */
/*===============================================================*/
  double w2[w2dim];      /* stores the grid levels in V2 */
  double w1[w1dim];      /* stores the grid levels in V1 */
  double neww2[w2dim];   /* stores the grid levels in V2 for the next phase */
  double neww1[w1dim];   /* stores the grid levels in V1 for the next phase */
  double *r[w2dim],      /* stores the risk for the current step */
         *rr[w2dim];     /* stores the risk for the previous step */
  int  iid[2],           /* indicators for applying Proposition 3.2.4 */
       rindp[w2dim],     /* locations of the "right-last" continuation
                            points at the previous step */
       rind[w2dim],      /* locations of the "right-last" continuation
                            points at the current step */
       lindp[leftdim],   /* locations of the "left-last" continuation
                            points at the previous step */
       lind[leftdim],    /* locations of the "left-last" continuation
                            points at the current step */
       flend,            /* location of the "left-last" continuation point
                            corresponding to the smallest V2 grid level
                            having "left-last" continuation points at the
                            current step */
       flid,             /* indicates if the value of flend is updated */
       pflend,           /* same as flend, but at the previous step */
       ilend,            /* indicator for applying Proposition 3.2.4 */
       irend,            /* indicator for applying Proposition 3.2.4 */
       out0,             /* indicates if (0,v2) is written in the output
                            file graphleft */
       nlow,             /* the number of negative grid levels in V2 in
                            the output file */
       larger,           /* indicates if the matrices r and rr are large
                            enough */
       rend,             /* the rightmost grid level in V1 to be checked
                            for the current V2 grid level */
       lend,             /* the leftmost grid level in V1 to be checked
                            for the current V2 grid level */
       nleft,            /* the size of the vectors lind and lindp */
       c,                /* the factor to decrease the s spacing from
                            each phase to the next */
       ng,               /* output indicator: 1 means the result of the
                            current step will be output */
       nstage,           /* the number of phases */
       ns,               /* the number of steps */
       nw1,              /* the number of grid levels in V1 */
       nw2,              /* the number of grid levels in V2 */
       w2ind,            /* indicates whether the interpolation is
                            appropriate */
       w2id, is, beg, bed, tw1kk,      /* index variables  */
       i, j, ii, jj, id, w1k, w1kk,    /* for computations */
       w1kkk, w2k, w2kk, w2kkk;
  double s,              /* the time variable */
         s0;             /* the initial value of the time variable */
  double dels,           /* the time variable spacing */
         delw,           /* the spacing of V1 and V2 */
         td, d,          /* stores the stopping risk */
         ted, ed,        /* stores the bayes risk */
         aed, a;         /* for updating the risk in the continuation
                            region only */
  double sqt3;           /* stores sqrt(3) */
  FILE *graph, *graphleft, *outfile;
/*===============================================================*/
/*  ALLOCATING TWO 2-DIMENSIONAL MATRICES                        */
/*===============================================================*/
  for (i=0; i<w2dim; i++)
  {
    r[i]=(double *)malloc((w1dim)*sizeof(double));
    if (r[i]==NULL)
    {
      fprintf(stderr, "Memory allocation fails: r[%d]\n", i);
      exit(-1);
    }
    rr[i]=(double *)malloc((w1dim)*sizeof(double));
    if (rr[i]==NULL)
    {
      fprintf(stderr, "Memory allocation fails: rr[%d]\n", i);
      exit(-1);
    }
  }

/*===============================================================*/
/*  OPEN THREE OUTPUT FILES                                      */
/*===============================================================*/
  if ((outfile=fopen(out, "wt"))==NULL)
  { printf("Cannot open file outfile.\n");
    exit(1);
  }
  if ((graph=fopen(right, "wt"))==NULL)
  { printf("Cannot open file right.\n");
    exit(1);
  }
  if ((graphleft=fopen(left, "wt"))==NULL)
  { printf("Cannot open file left.\n");
    exit(1);
  }

/*  SET THE INITIAL VALUES FOR SOME VARIABLES. */
  s0=starting_S;
  c=spacing;
  dels=starting_dels/(double)c;
  larger=0;
  sqt3=sqrt((double)3.0);
  ng=0;
/*  DECIDE THE NUMBERS OF GRID LEVELS IN V1 AND V2
    AND THE NUMBERS OF STEPS AND PHASES. */
  nw2=w2dim-1;
  nw1=w1dim-1;
  nlow=negdim;
  nleft=leftdim;
  nstage=no_stages;
  ns=no_steps;
  if (s0<=0)
  {
    fprintf(outfile, "ERROR-----INITIAL S <= 0!  s0 = %f\n", s0);
    goto finish;
  }
  else
  {
    id=1;

/*  LOOP FOR DIFFERENT GRID SPACINGS */
    do
    {
      delw=sqrt((double)2.0*dels);
      if ( id>1 )
      {
        for (j=0; j<=nw2; j++)
          neww2[j]=w2[ns]+j*delw;
        for (j=0; j<=nw1; j++)
          neww1[j]=(-1+j)*delw;
        for (j=0; j<=nw2; j+=2)
        {
          w2ind=ns+j/2;
          if (w2ind>(nw2-ns))
            fprintf (outfile, "ERROR----interpolating too many");
          if (neww2[j]!=w2[w2ind])
            fprintf (outfile, "error in rescaling");
          if (neww2[j]<=0)
          {
            for (i=0; i<=nw1; i++)
            {
              if ( (neww1[i]<w1[lindp[w2ind]]) ||
                   (neww1[i]>w1[rindp[w2ind]]) )
                r[j][i]=b(neww1[i],neww2[j],s,s0)*s0*s/(s-s0);
              else
              {
                if ((i%2)==0)
                  r[j][i]=(rr[w2ind][i/2]+rr[w2ind][(i+2)/2])/2;
                else
                  r[j][i]=rr[w2ind][(i+1)/2];
              }
            }
          }
          else
          {
            for (i=0; i<=nw1; i++)
            {
              if ( neww1[i]>w1[rindp[w2ind]] )
                r[j][i]=b(neww1[i],neww2[j],s,s0)*s0*s/(s-s0);
              else
              {
                if ((i%2)==0)
                  r[j][i]=(rr[w2ind][i/2]+rr[w2ind][(i+2)/2])/2;
                else
                  r[j][i]=rr[w2ind][(i+1)/2];
              }
            }
          }
        }
        for (j=1; j<nw2; j+=2)
        {
          for (i=0; i<=nw1; i++)
            r[j][i]=(r[j-1][i]+r[j+1][i])/2;
        }
      }
      fprintf (outfile, "==================================================\n");
      fprintf (outfile, "STAGE %3d\n", id);
      if (id==1) s=s0;
      fprintf (outfile, "INITIAL VALUE OF S : %8.6f\n", s);
      fprintf (outfile, "SPACING OF S       : %8.6f\n", dels);
      fprintf (outfile, "SPACING OF V1 & V2 : %8.6f\n", delw);
      fprintf (outfile, "\n");
      fprintf (outfile, "      s        V2        V1            ed\n");

/*  SET THE INITIAL VALUES OF THE VECTORS STORING THE LOCATIONS
    OF THE LAST CONTINUATION POINTS */
      flend=0;
      for (i=0; i<=nw2; i++)
      {
        rindp[i]=0;
        rind[i]=0;
      }
      for (i=0; i<nleft; i++)   /* i<nleft: lind and lindp hold nleft entries */
      {
        lindp[i]=0;
        lind[i]=0;
      }
      if (id==1)
      {
        s=s0+dels;

/*  SET THE GRID LEVELS OF V1 AND V2 */
        for (i=0; i<=nw2; i++) w2[i]=(-1*nlow-1*ns+i)*delw;
        for (i=0; i<=nw1; i++) w1[i]=(-1+i)*delw;
/*  EVALUATE THE STOPPING RISK WHEN S=S0+DELS */
        for (j=0; j<=nw2; j++)
          for (i=0; i<=nw1; i++)
            rr[j][i]=b(w1[i],w2[j],s,s0)*s0*s/(s-s0);
      }
      else
      {
/*  SET THE GRID LEVELS OF V1 AND V2 FOR INTERPOLATION */
        for (i=0; i<=nw2; i++) w2[i]=neww2[i];
        for (i=0; i<=nw1; i++) w1[i]=neww1[i];

/*  EVALUATE THE STOPPING RISK WHEN S=STARTING_S AT THIS PHASE */
        for (j=0; j<=nw2; j++)
          for (i=0; i<=nw1; i++)
            rr[j][i]=r[j][i];
      }

/*  AT EACH STEP, FOR EACH GRID LEVEL IN V2, V20 SAY, FIND THE
    CORRESPONDING V1, V10 SAY, TO MAKE (V10, V20) A LAST
    CONTINUATION POINT */

/*  LOOP FOR EACH STEP, S=S0+IS*DELS, IS=2, 3, ... */
      for (is=1; is<=ns; is++)
      {
        out0=0;
        flid=0;
        flend=0;
        ng=0;
        if ( (is==5) || (is==10) || (is==20) || (is==40) )
        {
          if (id==nstage)
            ng=1;
        }
        s=s+dels;
        w2k=is-1;
        a=s0*s/(s-s0);                        /* a=1/(1/s0-1/s) */
        aed=(s-dels-s0)*s/((s-dels)*(s-s0)); /* aed=(1/(s-dels)-1/s0)/(1/s-1/s0) */

/*  AT TIME S2, DETERMINE THE LAST CONTINUATION POINT */
        if ( is==1 )
        {
/*  FOR EACH GRID LEVEL IN V2, V20 SAY, FIND THE CORRESPONDING
    V1, V10 SAY, TO MAKE (V10, V20) A LAST CONTINUATION POINT */
          while ( w2k<=(nw2-2) )
          {
            beg=0;
            bed=0;
            w1k=0;
            w2kk=w2k+1;
            w2kkk=w2kk+1;
            tw1kk=0;
            td=0;
            ted=0;
            if ( w2[w2kk]>0 )
            {
              while ( w1k<=(nw1-2) )
              {
                w1kk=w1k+1;
                w1kkk=w1kk+1;
                d=a*b(w1[w1kk],w2[w2kk],s,s0);
                ed=aed*(rr[w2kk][w1k]+rr[w2kk][w1kkk]+
                        rr[w2k][w1kk]+rr[w2kkk][w1kk])/4;
                if ( ed<d )
                {
                  if ( w1kk==(nw1-1) )
                  {
                    larger=1;
                    goto finish;
                  }
                  r[w2kk][w1kk]=ed;
                  if ( w1kk==2 )
                    r[w2kk][0]=r[w2kk][2];
                  bed=1;
                  td=d;
                  ted=ed;
                }
                else if ( bed==1 )
                {
                  rind[w2kk]=w1k;
                  if ( (w2kk>=ns) && (w2kk<=(nw2-ns)) && (ng==1) )
                  {
                    td=td/a;
                    ted=ted/a;
                    fprintf(outfile, " %8.5f %8.2f %8.2f %14.6f\n",
                            s, w2[w2kk], w1[w1k], ted);
                    fprintf(graph, " %8.6f %14.6f %14.6f\n",
                            s, w1[w1k], w2[w2kk]);
                  }
                  w1k=nw1;
                }
                else
                {
                  w2k=nw2;
                  w1k=nw1;
                }
                w1k=w1k+1;
              }
            }
            else
            {
                while ( w1k<=(nw1-2) )
                {
                    w1kk=w1k+1;
                    w1kkk=w1kk+1;
                    d=a*b(w1[w1kk],w2[w2kk],s,s0);
                    ed=aed*
                        (rr[w2kk][w1k]+rr[w2kk][w1kkk]+
                         rr[w2k][w1kk]+rr[w2kkk][w1kk])/4;
                    if ( ed<d )
                    {
                        if (flid==0)
                        {
                            flend=w1kk;
                            flid=1;
                        }
                        if (w1kk==nw1-1)
                        {
                            larger=1;
                            goto finish;
                        }
                        r[w2kk][w1kk]=ed;
                        if ( w1kk==2 )
                            r[w2kk][0]=r[w2kk][2];
                        if ( beg==0 )
                        {
                            beg=1;
                            lind[w2kk]=w1kk;
                            if ((out0==0) && (w1kk==1))
                            {
                                out0=1;
                                if ( (w2kk>=ns) && (w2kk<=(nw2-ns))
                                     && (ng==1) )
                                {
                                    d=d/a;
                                    ed=ed/a;
                                    fprintf(outfile,
                                        "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                        s, w2[w2kk], w1[w1kk], ed);
                                    fprintf(graphleft,"  %8.6f  %14.6f  %14.6f\n",
                                        s, w1[w1kk], w2[w2kk]);
                                }
                            }
                            else if ( (w1kk>1) && (w2kk>=ns)
                                      && (w2kk<=(nw2-ns)) && (ng==1) )
                            {
                                d=d/a;
                                ed=ed/a;
                                fprintf(outfile,
                                    "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                    s, w2[w2kk], w1[w1kk], ed);
                                fprintf(graphleft,"  %8.6f  %14.6f  %14.6f\n",
                                    s, w1[w1kk], w2[w2kk]);
                            }
                        }
                        else
                        {
                            bed=1;
                            tw1kk=w1kk;
                            td=d;
                            ted=ed;
                        }
                    }
                    else if ( (beg==1) || (bed==1) )
                    {
                        w1k=nw1;
                    }
                    w1k=w1k+1;
                }
                /* DECIDE THE VALUES OF VECTORS STORING THE LOCATIONS
                   OF THE LAST CONTINUATION POINTS */
                if (beg==0)
                {
                    lind[w2kk]=0;
                    rind[w2kk]=0;
                }
                else if (bed==0)
                {
                    rind[w2kk]=lind[w2kk];
                }
                else
                {
                    rind[w2kk]=tw1kk;
                    if ( (ng==1) && (w2kk>=ns) && (w2kk<=(nw2-ns)) )
                    {
                        td=td/a;
                        ted=ted/a;
                        fprintf(outfile,
                            "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                            s, w2[w2kk], w1[tw1kk], ted);
                        fprintf(graph,"  %8.6f  %14.6f  %14.6f\n",
                            s, w1[tw1kk], w2[w2kk]);
                    }
                }
            }
            w2k=w2k+1;
        }
    }
}
else
{
        /* APPLY THE PROPERTIES OF THE LAST CONTINUATION POINTS */
        /* AT TIME S3, S4, S5, ...,
           FOR EACH GRID LEVEL OF V2, V20 SAY, FIND THE CORRESPONDING
           V1, V10 SAY, TO MAKE (V10, V20) A LAST CONTINUATION POINT */
        while ( w2k<=(nw2-is-1) )
        {
            beg=0;
            bed=0;
            w2kk=w2k+1;
            w2kkk=w2kk+1;
            tw1kk=0;
            td=0;
            ted=0;
            if ( w2[w2kk]>=0 )
            {
                /* DECIDE THE NUMBER OF GRID LEVELS IN V1 WHICH HAVE TO
                   BE CHECKED FOR THE CURRENT GRID LEVEL OF V2 */
                if ( rindp[w2kk]==0 )
                {
                    if ( (rindp[w2k]>0) || (rindp[w2kkk]>0) )
                    {
                        if (rindp[w2k]>=rindp[w2kkk])
                            rend=rindp[w2k];
                        else
                            rend=rindp[w2kkk];
                    }
                    else
                        rend=1;
                }
                else if ( (rindp[w2k]>rindp[w2kk]) ||
                          (rindp[w2kkk]>rindp[w2kk]) )
                {
                    if (rindp[w2k]>=rindp[w2kkk])
                        rend=rindp[w2k];
                    else
                        rend=rindp[w2kkk];
                }
                else
                {
                    rend=rindp[w2kk]+1;
                }
                w1k=rindp[w2kk];
                while ( w1k<rend )
                {
                    w1kk=w1k+1;
                    w1kkk=w1kk+1;
                    d=a*b(w1[w1kk],w2[w2kk],s,s0);
                    ed=aed*
                        (rr[w2kk][w1k]+rr[w2kk][w1kkk]+
                         rr[w2k][w1kk]+rr[w2kkk][w1kk])/4;
                    if ( ed<d )
                    {
                        if ( w1kk==nw1-1 )
                        {
                            larger=1;
                            goto finish;
                        }
                        bed=1;
                        tw1kk=w1kk;
                        td=d;
                        ted=ed;
                    }
                    else
                    {
                        w1k=nw1;
                    }
                    w1k=w1k+1;
                }
                if ( bed==1 )
                {
                    rind[w2kk]=tw1kk;
                    if ( (w2kk>=ns) && (w2kk<=(nw2-ns)) && (ng==1) )
                    {
                        td=td/a;
                        ted=ted/a;
                        fprintf(outfile,
                            "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                            s, w2[w2kk], w1[tw1kk], ted);
                        fprintf(graph,"  %8.6f  %14.6f  %14.6f\n",
                            s, w1[tw1kk], w2[w2kk]);
                    }
                }
                else
                {
                    rind[w2kk]=rindp[w2kk];
                    if ( rind[w2kk]>0 )
                    {
                        d=b(w1[rind[w2kk]],w2[w2kk],s,s0);
                        ed=aed*
                            (rr[w2kk][rind[w2kk]-1]+rr[w2kk][rind[w2kk]+1]+
                             rr[w2k][rind[w2kk]]+rr[w2kkk][rind[w2kk]])/(4*a);
                        if ( (w2kk>=ns) && (w2kk<=(nw2-ns)) && (ng==1) )
                        {
                            fprintf(outfile,
                                "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                s, w2[w2kk], w1[rind[w2kk]], ed);
                            fprintf(graph,"  %8.6f  %14.6f  %14.6f\n",
                                s, w1[rind[w2kk]], w2[w2kk]);
                        }
                    }
                }
                if ( rind[w2kk]==0 )
                    w2k=nw2;
            }
            else
            {
                if ( (lindp[w2kk]==0) && (rindp[w2kk]==0) &&
                     (lindp[w2k]==0) && (rindp[w2k]==0) &&
                     (lindp[w2kkk]==0) && (rindp[w2kkk]==0) )
                {
                    lend=pflend;
                    ilend=0;
                    while ( (ilend<1) && (lend>0) )
                    {
                        for (i=0; i<=1; i++) iid[i]=0;
                        if ( ((w1[lend+1]+sqt3*w2[w2kk]) <= 0) &&
                             ((w1[lend+1]-sqt3*w2[w2kk]) >= 0) ) iid[1]=1;
                        if ( ((w1[lend]+sqt3*w2[w2kk+1]) <= 0) &&
                             ((w1[lend]-sqt3*w2[w2kk+1]) >= 0) ) iid[0]=1;
                        ilend=iid[0]*iid[1];
                        if (ilend==0) lend=lend-1;
                    }
                    w1k=lend;
while ( wlk<=(nwl-2) }
{
wlkk:wlk+1;
wlkkk--wlkk+ 1 ;
.
d=aMb(wl[wlkk].w2[w2kk].s.sO);
ed=aedM
(rr[w2kk] [wlk]+rr[w2kk][wlkkk]+
rr[w2k][wlkk]+rr[w2kkk][wlkk])/4;
if ( ed<d )
{
if (fltd:=O)
{
flend=wlkk;
flid=l;
}
1f ( wlkk==(nwl-l) )
{
larger=l;
goto finish;
}
if ( beg==O )
{
beg=l;
lind[w2kk]:wlkk;
if «outO==O) && (wlkk==l»
{
outO=l;
1f ( (w2kk>=ns) && (w2kk<=(nw2-ns»
(ng==l) )
{
d:d/a;
ed=ed/a;
-125-
&&
£print£{out£ile •
•• %8.5£ %8.2£ %8.2£ %14.6£\n",
s. w2[w2kk]. w1[w1kk]. ed}:
£print£{graphle£t.·· %8.6£ %14.6£ %14. 6£\n" ,
s. w1[w1kk]. w2[w2kk]}:
}
}
else if { {w2kk>=ns} 8& (w2kk<={nw2-ns»
(w1kk>1) 8& {ng=1} }
8&
{
d=d/a:
ed=ed/a:
£print£{out£ile.
" %8.5£ %8.2£ %8.2£ %14.6£\n".
s. w2[w2kk]. w1[w1kk]. ed}:
fprint£{graphleft," %8.6£ %14.6£ %14.6£\n",
s, w1[w1kk], w2[w2kk]};
}
}
else
{
bed=1;
tw1kk=wlkk:
td=d:
ted=ed:
}
}
else if { {beg==1} :: {bed=1} }
{
w1k=nw1:
}
                        else
                        {
                            irend=0;
                            for (i=0; i<=1; i++) iid[i]=0;
                            if ( ((w1[w1k]+sqt3*w2[w2kk]) >= 0) &&
                                 ((w1[w1k] >= 0)) )
                                iid[0]=1;
                            if ( ((w1[w1kk]+sqt3*w2[w2kk-1]) >= 0) &&
                                 ((w1[w1kk] >= 0)) )
                                iid[1]=1;
                            irend=iid[0]*iid[1];
                            if (irend==1) w1k=nw1;
                        }
                        w1k=w1k+1;
                    }
                    if ( (beg==0) )
                    {
                        lind[w2kk]=0;
                        rind[w2kk]=0;
                    }
                    else if ( (bed==0) )
                    {
                        rind[w2kk]=lind[w2kk];
                    }
                    else
                    {
                        rind[w2kk]=tw1kk;
                        if ( (ng==1) && (w2kk>=ns) &&
                             (w2kk<=(nw2-ns)) )
                        {
                            td=td/a;
                            ted=ted/a;
                            fprintf(outfile,
                                "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                s, w2[w2kk], w1[tw1kk], ted);
                            fprintf(graph,"  %8.6f  %14.6f  %14.6f\n",
                                s, w1[tw1kk], w2[w2kk]);
                        }
                    }
                }
                else
                {
                    /* DECIDE THE NUMBER OF GRID POINTS WHICH HAVE TO BE CHECKED
                       FOR THE CURRENT GRID LEVEL OF V2 */
                    if ( (rindp[w2k]>rindp[w2kk]) ||
                         (rindp[w2kkk]>rindp[w2kk]) )
                    {
                        if (rindp[w2k]>=rindp[w2kkk])
                            rend=rindp[w2k];
                        else
                            rend=rindp[w2kkk];
                    }
                    else
                        rend=rindp[w2kk]+1;
                    /* LOOP FOR APPLYING PROPOSITION 3.2.4 */
                    irend=0;
                    while (irend < 1)
                    {
                        for (i=0; i<=1; i++) iid[i]=0;
                        if ( ((w1[rend-1]+sqt3*w2[w2kk]) >= 0) &&
                             ((w1[rend-1] >= 0)) )
                            iid[0]=1;
                        if ( ((w1[rend]+sqt3*w2[w2kk-1]) >= 0) &&
                             ((w1[rend] >= 0)) )
                            iid[1]=1;
                        irend=iid[0]*iid[1];
                        if (irend==0) rend=rend+1;
                    }
                    if (lindp[w2kk]>2)
                    {
                        if ( ((lindp[w2k]<lindp[w2kk]) && (lindp[w2k]>1))
                             ||
                             ((lindp[w2kkk]<lindp[w2kk]) && (lindp[w2kkk]>1)) )
                        {
                            if ( (lindp[w2kkk]<=lindp[w2k]) &&
                                 (lindp[w2kkk]>1) )
                                lend=lindp[w2kkk]-1;
                            else if (lindp[w2k]>1)
                                lend=lindp[w2k]-1;
                            else
                                lend=lindp[w2kk]-2;
                        }
                        else
                            lend=lindp[w2kk]-2;
                    }
                    else
                        lend=0;
                    /* LOOP TO APPLY PROPOSITION 3.2.4 */
                    ilend=0;
                    while ( (ilend<1) && (lend>0) )
                    {
                        for (i=0; i<=1; i++) iid[i]=0;
                        if ( ((w1[lend+1]+sqt3*w2[w2kk]) <= 0) &&
                             ((w1[lend+1]-sqt3*w2[w2kk]) >= 0) ) iid[1]=1;
                        if ( ((w1[lend]+sqt3*w2[w2kk+1]) <= 0) &&
                             ((w1[lend]-sqt3*w2[w2kk+1]) >= 0) ) iid[0]=1;
                        ilend=iid[0]*iid[1];
                        if (ilend==0) lend=lend-1;
                    }
                    w1k=lend;
                    if ( (lindp[w2kk]==0) || (rindp[w2kk]==0) )
                    {
                        while ( w1k<=rend )
                        {
                            w1kk=w1k+1;
                            w1kkk=w1kk+1;
                            d=a*b(w1[w1kk],w2[w2kk],s,s0);
                            ed=aed*
                                (rr[w2kk][w1k]+rr[w2kk][w1kkk]+
                                 rr[w2k][w1kk]+rr[w2kkk][w1kk])/4;
                            if ( ed<d )
                            {
                                if (flid==0)
                                {
                                    flend=w1kk;
                                    flid=1;
                                }
                                if ( w1kk==(nw1-1) )
                                {
                                    larger=1;
                                    goto finish;
                                }
                                if ( beg==0 )
                                {
                                    beg=1;
                                    lind[w2kk]=w1kk;
                                    if ((out0==0) && (w1kk==1))
                                    {
                                        out0=1;
                                        if ( (w2kk>=ns) && (w2kk<=(nw2-ns)) &&
                                             (ng==1) )
                                        {
                                            d=d/a;
                                            ed=ed/a;
                                            fprintf(outfile,
                                                "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                                s, w2[w2kk], w1[w1kk], ed);
                                            fprintf(graphleft,
                                                "  %8.6f  %14.6f  %14.6f\n",
                                                s, w1[w1kk], w2[w2kk]);
                                        }
                                    }
                                    else if ( (w1kk>1) && (w2kk>=ns)
                                              && (w2kk<=(nw2-ns)) && (ng==1) )
                                    {
                                        d=d/a;
                                        ed=ed/a;
                                        fprintf(outfile,
                                            "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                            s, w2[w2kk], w1[w1kk], ed);
                                        fprintf(graphleft,"  %8.6f  %14.6f  %14.6f\n",
                                            s, w1[w1kk], w2[w2kk]);
                                    }
                                }
                                else
                                {
                                    bed=1;
                                    tw1kk=w1kk;
                                    td=d;
                                    ted=ed;
                                }
                            }
                            else if ( (beg==1) || (bed==1) )
                            {
                                w1k=nw1;
                            }
                            w1k=w1k+1;
                        }
                        if ( (beg==0) )
                        {
                            lind[w2kk]=0;
                            rind[w2kk]=0;
                        }
                        else if ( (bed==0) )
                        {
                            rind[w2kk]=lind[w2kk];
                        }
                        else
                        {
                            rind[w2kk]=tw1kk;
                            if ( (ng==1) && (w2kk>=ns) && (w2kk<=(nw2-ns)) )
                            {
                                td=td/a;
                                ted=ted/a;
                                fprintf(outfile,
                                    "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                    s, w2[w2kk], w1[tw1kk], ted);
                                fprintf(graph,"  %8.6f  %14.6f  %14.6f\n",
                                    s, w1[tw1kk], w2[w2kk]);
                            }
                        }
                    }
                    else
                    {
                        while (w1k < rend)
                        {
                            if ( (w1k>=lend) && (w1k<=lindp[w2kk]-2) )
                            {
                                w1kk=w1k+1;
                                w1kkk=w1kk+1;
                                d=a*b(w1[w1kk],w2[w2kk],s,s0);
                                ed=aed*
                                    (rr[w2kk][w1k]+rr[w2kk][w1kkk]+
                                     rr[w2k][w1kk]+rr[w2kkk][w1kk])/4;
                                if ( ed<d )
                                {
                                    if (flid==0)
                                    {
                                        flend=w1kk;
                                        flid=1;
                                    }
                                    if ( beg==0 )
                                    {
                                        beg=1;
                                        lind[w2kk]=w1kk;
                                        if ( (out0==0) && (w1kk==1) )
                                        {
                                            out0=1;
                                            if ( (w2kk>=ns) && (w2kk<=(nw2-ns)) &&
                                                 (ng==1) )
                                            {
                                                d=d/a;
                                                ed=ed/a;
                                                fprintf(outfile,
                                                    "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                                    s, w2[w2kk], w1[w1kk], ed);
                                                fprintf(graphleft,
                                                    "  %8.6f  %14.6f  %14.6f\n",
                                                    s, w1[w1kk], w2[w2kk]);
                                            }
                                        }
                                        else if ( (w2kk>=ns) && (w2kk<=(nw2-ns)) &&
                                                  (w1kk>1) && (ng==1) )
                                        {
                                            d=d/a;
                                            ed=ed/a;
                                            fprintf(outfile,
                                                "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                                s, w2[w2kk], w1[w1kk], ed);
                                            fprintf(graphleft,
                                                "  %8.6f  %14.6f  %14.6f\n",
                                                s, w1[w1kk], w2[w2kk]);
                                        }
                                        w1k=rindp[w2kk]-1;
                                    }
                                }
                            }
                            else if ( (w1k>=rindp[w2kk]) && (w1k<rend) )
                            {
                                w1kk=w1k+1;
                                w1kkk=w1kk+1;
                                d=a*b(w1[w1kk],w2[w2kk],s,s0);
                                ed=aed*
                                    (rr[w2kk][w1k]+rr[w2kk][w1kkk]+
                                     rr[w2k][w1kk]+rr[w2kkk][w1kk])/4;
                                if ( ed<d )
                                {
                                    bed=1;
                                    tw1kk=w1kk;
                                    td=d;
                                    ted=ed;
                                }
                                else
                                    w1k=nw1;
                            }
                            w1k=w1k+1;
                        }
                        if ( beg==0 )
                        {
                            lind[w2kk]=lindp[w2kk];
                            if ( lind[w2kk]>0 )
                            {
                                d=b(w1[lind[w2kk]],w2[w2kk],s,s0);
                                ed=aed*
                                    (rr[w2kk][lind[w2kk]-1]+rr[w2kk][lind[w2kk]+1]
                                     +rr[w2kk-1][lind[w2kk]]+rr[w2kkk][lind[w2kk]])
                                    /(4*a);
                                if ( (out0==0) && (lind[w2kk]==1) )
                                {
                                    out0=1;
                                    if ( (w2kk>=ns) && (w2kk<=(nw2-ns)) &&
                                         (ng==1) )
                                    {
                                        fprintf(outfile,
                                            "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                            s, w2[w2kk], w1[lind[w2kk]], ed);
                                        fprintf(graphleft,"  %8.6f  %14.6f  %14.6f\n",
                                            s, w1[lind[w2kk]], w2[w2kk]);
                                    }
                                }
                                else if ( (lind[w2kk]>1) && (w2kk>=ns) &&
                                          (w2kk<=(nw2-ns)) && (ng==1) )
                                {
                                    fprintf(outfile,
                                        "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                        s, w2[w2kk], w1[lind[w2kk]], ed);
                                    fprintf(graphleft,"  %8.6f  %14.6f  %14.6f\n",
                                        s, w1[lind[w2kk]], w2[w2kk]);
                                }
                            }
                        }
                        if ( bed==0 )
                        {
                            rind[w2kk]=rindp[w2kk];
                            if ( rind[w2kk]>0 )
                            {
                                d=b(w1[rind[w2kk]],w2[w2kk],s,s0);
                                ed=aed*
                                    (rr[w2kk][rind[w2kk]-1]+rr[w2kk][rind[w2kk]+1]
                                     +rr[w2kk-1][rind[w2kk]]+rr[w2kkk][rind[w2kk]])
                                    /(4*a);
                                if ( (w2kk>=ns) && (w2kk<=(nw2-ns)) && (ng==1) )
                                {
                                    fprintf(outfile,
                                        "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                        s, w2[w2kk], w1[rind[w2kk]], ed);
                                    fprintf(graph,"  %8.6f  %14.6f  %14.6f\n",
                                        s, w1[rind[w2kk]], w2[w2kk]);
                                }
                            }
                        }
                        else
                        {
                            rind[w2kk]=tw1kk;
                            if ( (w2kk>=ns) && (w2kk<=(nw2-ns)) && (ng==1) )
                            {
                                td=td/a;
                                ted=ted/a;
                                fprintf(outfile,
                                    "  %8.5f  %8.2f  %8.2f  %14.6f\n",
                                    s, w2[w2kk], w1[tw1kk], ted);
                                fprintf(graph,"  %8.6f  %14.6f  %14.6f\n",
                                    s, w1[tw1kk], w2[w2kk]);
                            }
                        }
                    }
                }
            }
            if (w2[w2kk]>=0)
                jj=1;
            else if (lind[w2kk]<=1)
                jj=1;
            else
                jj=lind[w2kk];
            for (j=jj; j<=rind[w2kk]; j++)
            {
                r[w2kk][j]=aed*
                    (rr[w2kk][j-1]+rr[w2kk][j+1]
                     +rr[w2kk-1][j]+rr[w2kkk][j])/4;
                if (j==2)
                    r[w2kk][0]=r[w2kk][2];
            }
            if (flid==1) pflend=flend;
            w2k=w2k+1;
        }
    }
    /* UPDATE MATRICES AND VECTORS rr, rindp, lindp, FOR
       COMPUTATIONS OF THE NEXT TIME STEP */
    for (j=is; j<=(nw2-is); j++)
    {
        if (w2[j]>=0)
            ii=1;
        else if ( lind[j]==0 )
            ii=1;
        else
            ii=lind[j];
        for (i=ii; i<=rind[j]; i++)
        {
            rr[j][i]=r[j][i];
            if (i==2)
                rr[j][0]=rr[j][2];
        }
    }
    for (i=0; i<=nw2; i++)
        rindp[i]=rind[i];
    for (i=0; i<=nleft; i++)
        lindp[i]=lind[i];
    for (i=0; i<=nleft; i++)
    {
        if ( (is==ns) && (i==is) )
            printf("  %f  %f  %d  %d\n", s, w2[i], lind[i], rind[i]);
    }
    id=id+1;
    dels=dels/4;
}
while (id<=nstage);
}
finish:
if ( larger==1 )
{
    fprintf(outfile,
        "need more V1 grid to find the last continuation points\n");
    fprintf(outfile,
        "the last value of S considered is: is=%d and s=%f.\n", is, s);
}
fclose(outfile);
fclose(graph);
fclose(graphleft);
}
/*==========================================================*/
/*   SUBROUTINE: FUNCTION b()                               */
/*   (evaluating the stopping risk)                         */
/*==========================================================*/
double b(w1, w2, s, s0)
double w1, w2, s, s0;
{
    double sqt2, sqt3, sqt6;
    sqt2=sqrt((double)2.0);
    sqt3=sqrt((double)3.0);
    sqt6=sqrt((double)6.0);
    if (((w1+sqt3*w2) <= 0) && ((w1-sqt3*w2) >= 0))
        return ((1/s-1/s0)*(-1*sqt6*w2));
    else if ((w1 <= 0) && ((w1-sqt3*w2) < 0))
        return (((1/s)-(1/s0))*((sqt6/2)*w2-(3*sqt2/2)*w1));
    else if ((w1 > 0) && ((w1+sqt3*w2) > 0))
        return (((1/s)-(1/s0))*((sqt6/2)*w2+(3*sqt2/2)*w1));
}
/*==========================================================*/
/*                 THE END OF THE PROGRAM                   */
/*==========================================================*/
BIBLIOGRAPHY

Anderson, T.W. (1958) An Introduction to Multivariate Statistical
        Analysis. John Wiley & Sons, Inc., New York.
Anscombe, F.J. (1963) Sequential Medical Trials. J. Am. Statist.
        Assoc. 58 365-383.
Armitage, P. (1960) Sequential Medical Trials. Blackwell: Oxford.
Bather, J.A. (1962) Bayes Procedures for Deciding the Sign of a Normal
        Mean. Proc. Cambridge Philos. Soc. 58 599-620.
Bather, J.A. and Simons, G. (1985) The Minimax Risk for Two-Stage
        Procedures in Clinical Trials. J. R. Statist. Soc. B 47 466-475.
Bechhofer, R.E., Kiefer, J. and Sobel, M. (1968) Sequential
        Identification and Ranking Procedures. Univ. of Chicago Press.
Begg, C.B. and Mehta, C.R. (1979) Sequential Analysis of Comparative
        Clinical Trials. Biometrika 66 97-103.
Bickel, P.J. and Doksum, K.A. (1977) Mathematical Statistics: Basic
        Ideas and Selected Topics. Holden-Day, Inc., San Francisco, Ca.
Billingsley, P. (1968) Convergence of Probability Measures. John Wiley
        & Sons, Inc., New York.
Briggs, W. and McCormick, S.F. (1987) Multigrid Methods. Frontiers in
        Applied Mathematics, SIAM, Philadelphia 1-30.
Chernoff, H. (1965) Sequential Tests for the Mean of a Normal
        Distribution IV (discrete case). Ann. Math. Statist. 36 55-68.
Chernoff, H. (1972) Testing for the Sign of a Normal Mean: No
        Indifference Zone. Sequential Analysis and Optimal Design,
        SIAM, Philadelphia 84-102.
Chernoff, H. and Petkau, J. (1976) An Optimal Stopping Problem for
        Sums of Dichotomous Random Variables. Ann. Prob. 4 875-889.
Chernoff, H. and Petkau, J. (1981) Sequential Medical Trials Involving
        Paired Data. Biometrika 68 119-132.
Chernoff, H. and Petkau, J. (1984) Numerical Methods for Bayes
        Sequential Decision Problems. Technical Report ONR 34,
        Statistics Center, MIT.
Chernoff, H. and Petkau, J. (1986) Numerical Solutions for Bayes
        Sequential Decision Problems. SIAM J. Sci. Stat. Comput. 7 46-59.
Chow, Y.S., Robbins, H. and Siegmund, D. (1971) Great Expectations:
        The Theory of Optimal Stopping. Houghton Mifflin, New York.
Colton, T. (1963) A Model for Selecting One of Two Medical Treatments.
        J. Am. Statist. Assoc. 58 388-400.
Hoel, P.G., Port, S.C. and Stone, C.J. (1972) Introduction to
        Stochastic Processes. Houghton Mifflin Company, Boston, Mass.
Hoel, D.G., Sobel, M. and Weiss, G.H. (1972) A Two-Stage Procedure for
        Choosing the Better of Two Binomial Populations. Biometrika
        59 317-322.
Hoffmann, K.H. and Sprekels, J. (1990) Free Boundary Problems: Theory
        and Applications. Pitman Research Notes in Mathematics
        Series, Longman Scientific & Technical, England.
Hogan, M.L. (1986) Comments on a Problem of Chernoff and Petkau. Ann.
        Prob. 14 1058-1063.
Knott, J. Developmental Services, Academic Computing Services, The
        University of North Carolina at Chapel Hill.
Lai, T.L., Robbins, H. and Siegmund, D. (1983) Sequential Design of
        Comparative Clinical Trials. Recent Advances in Statistics
        (Rizvi, M., Rustagi, J. and Siegmund, D., eds). Academic
        Press: New York 51-68.
McPherson, K. (1984) Interim Analysis and Stopping Rules. In Cancer
        Trials: Methods and Practice (Buyse, M.E., Staquet, M.J. and
        Sylvester, R.J., eds). Oxford University Press 407-422.
Palmer, C.R. (1988) A Clinical Trials Model for Determining the Best
        of Three Treatments Having Bernoulli Responses. Mimeo Series
        #1758, Dept. of Statistics, UNC-Chapel Hill.
Pocock, S.J. (1977) Group Sequential Methods in the Design and
        Analysis of Clinical Trials. Biometrika 64 191-199.
Press, W.H., Flannery, B.P., Teukolsky, S.A. and Vetterling, W.T.
        (1988) Numerical Recipes in C: The Art of Scientific
        Computing. Cambridge University Press, New York.
Robbins, H. (1956) A Sequential Decision Procedure with Finite Memory.
        Proc. Nat. Acad. Sci. U.S.A. 42 920-923.
Shepp, L.A. (1969) Explicit Solutions to Some Problems of Optimal
        Stopping. Ann. Math. Statist. 40 993-1010.
Simons, G. (1986) Bayes Rules for a Clinical-Trials Model with
        Dichotomous Responses. Ann. Statist. 14 954-970.
Zelen, M. (1969) Play the Winner Rule and the Controlled Clinical
        Trial. J. Am. Statist. Assoc. 64 131-146.