Section 6.5. Stopping times.
Let 𝑋𝑑 , 𝑑 ∈ 𝑇 βŠ† ℝ, be a stochastic process (it will be our process of reference). A
random variable 𝜏 = 𝜏 (πœ”) taking values in 𝑇 βˆͺ {∞} (that is, 𝜏 ∈ 𝑇 or is possibly infinite)
is called a stopping time if for every 𝑑 ∈ 𝑇 the event {𝜏 ≀ 𝑑} is determined by the values
of our stochastic process 𝑋𝑒 for 𝑒 ≀ 𝑑: there exists a subset 𝐢 in the space of functions
π‘₯𝑒 , 𝑒 ≀ 𝑑, such that
{πœ” : 𝜏 (πœ”) ≀ 𝑑} = {πœ” : (𝑋𝑒 (πœ”), 𝑒 ≀ 𝑑) ∈ 𝐢}.    (6.5.1)
That is, for every 𝑑 we should be able, observing the reference process 𝑋𝑒 up to this time,
to determine whether the moment 𝜏 has already come (𝜏 ≀ 𝑑) or not.
The name β€œstopping time” comes from the use of this concept in the theory of martingales:
we’ll see that if we stop a martingale at a stopping time 𝜏 , it still remains a martingale.
There is another name for the same class of mathematical objects, Markov times, which
comes from their use in the theory of Markov processes.
The concept of stopping time has nothing to do with any probabilities or expectations;
so it does not belong to the theory of stochastic processes properly speaking but rather to
the set-theoretic introduction to it.
Example 6.5.1. Every constant π‘‘βˆ— ∈ 𝑇 , and also 𝜏 ≑ ∞ is a stopping time.
Indeed,
{πœ” : π‘‘βˆ— ≀ 𝑑} = Ξ© if π‘‘βˆ— ≀ 𝑑,  and = βˆ… if π‘‘βˆ— > 𝑑.    (6.5.2)
Each of these events is represented as {πœ” : (𝑋𝑒 (πœ”), 𝑒 ≀ 𝑑) ∈ 𝐢} with 𝐢 being some subset
of the space of functions π‘₯𝑒 , 𝑒 ≀ 𝑑: the event Ξ© with 𝐢 being the whole space of these
functions, and the impossible event βˆ… with 𝐢 = βˆ….
As for 𝜏 = ∞, clearly for every real 𝑑
{πœ” : ∞ ≀ 𝑑} = βˆ….
(6.5.3)
Indeed we know by time 𝑑 whether π‘‘βˆ— has already come (and the time +∞ definitely
has not come by time 𝑑).
Example 6.5.2. Let 𝑑1 < 𝑑2 be two time points belonging to 𝑇 ; and let 𝐢 be a
subset of the space of functions π‘₯𝑒 , 𝑒 ≀ 𝑑1 . Take
𝜏 = 𝑑1 if (𝑋𝑒 , 𝑒 ≀ 𝑑1 ) ∈ 𝐢,  and 𝜏 = 𝑑2 otherwise.    (6.5.4)
Then 𝜏 is a stopping time.
Indeed,
{𝜏 ≀ 𝑑} = βˆ… for 𝑑 < 𝑑1 ,  = {(𝑋𝑒 , 𝑒 ≀ 𝑑1 ) ∈ 𝐢} for 𝑑 = 𝑑1 ,  = {(𝑋𝑒 , 𝑒 ≀ 𝑑) ∈ 𝐢𝑑 } for 𝑑1 < 𝑑 < 𝑑2 ,  = Ξ© for 𝑑 β‰₯ 𝑑2 ,    (6.5.5)
where the set 𝐢𝑑 in the space of all functions π‘₯𝑒 , 𝑒 ≀ 𝑑, consists of all functions whose
part up to time 𝑑1 belongs to the set 𝐢.
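To make (6.5.4)-(6.5.5) concrete, here is a minimal Python sketch for discrete time; the choice of 𝑑1 , 𝑑2 and of the set 𝐢 (paths staying nonnegative up to time 𝑑1 ) is my own illustration, not part of the example.

    t1, t2 = 2, 5

    def in_C(segment):
        # A sample choice of the set C: the observed piece stays nonnegative.
        return all(x >= 0 for x in segment)

    def tau(path):
        # The stopping time (6.5.4): t1 if (X_u, u <= t1) lies in C, else t2.
        return t1 if in_C(path[:t1 + 1]) else t2

    def tau_has_come_by(t, path_up_to_t):
        # The event {tau <= t}, decided from X_0, ..., X_t only, as in (6.5.5).
        if t < t1:
            return False                        # the impossible event
        if t < t2:
            return in_C(path_up_to_t[:t1 + 1])  # needs only the piece up to t1
        return True                             # the sure event

    path = [0, 1, -1, 2, 0, 3]
    print(tau(path), tau_has_come_by(3, path[:4]))   # 5 False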
Example 6.5.3. Let 𝑇 = {0, 1, 2, ..., 𝑛, ...}, and let 𝑋𝑑 , 𝑑 ∈ 𝑇 , be a process taking
values in a space SP. Let 𝐴 be a subset of SP. Then 𝜏 defined as the first time for which
𝑋𝑑 ∈ 𝐴 (the first reaching time) is a stopping time.
But what are we to do if 𝑋𝑑 never reaches 𝐴, so that there is no first moment of
reaching it? Let us take, in this case, 𝜏 = ∞. So the precise definition is:
𝜏 = min{𝑑 : 𝑋𝑑 ∈ 𝐴} if there are such 𝑑,  and 𝜏 = +∞ if there is no such 𝑑.    (6.5.6)
This seems to coincide with the way we would use the expression β€œplus infinity” in our everyday life:
if we tell somebody that something will happen at time +∞, in all probability we would mean that it will
never occur.
Let us check that this 𝜏 is a stopping time. For 𝑑 ∈ 𝑇 we have:
{𝜏 ≀ 𝑑} = {𝑋0 ∈ 𝐴} βˆͺ {𝑋1 ∈ 𝐴} βˆͺ ... βˆͺ {𝑋𝑑 ∈ 𝐴} = {(𝑋𝑒 , 0 ≀ 𝑒 ≀ 𝑑) ∈ 𝐢},    (6.5.7)
where the set 𝐢 βŠ† SP^(𝑑+1) is defined as the set of all sequences (π‘₯0 , π‘₯1 , ..., π‘₯𝑑 ) for which at
least one element π‘₯𝑖 ∈ 𝐴.
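A minimal Python sketch of this example for a single observed discrete-time path; the function names, the sample path and the set 𝐴 below are my own choices for illustration.

    import math

    def first_hitting_time(path, A):
        # min{t : X_t in A}, or +infinity if the path never enters A, as in (6.5.6).
        for t, x in enumerate(path):
            if x in A:
                return t
        return math.inf

    def tau_has_come_by(t, path_up_to_t, A):
        # The event {tau <= t}, decided from X_0, ..., X_t only, as in (6.5.7).
        return any(x in A for x in path_up_to_t[:t + 1])

    path = [0, 1, 0, 1, 2, 3]            # observations X_0, ..., X_5
    A = {3, 4, 5}
    print(first_hitting_time(path, A))                                        # 5
    print(tau_has_come_by(4, path[:5], A), tau_has_come_by(5, path[:6], A))   # False True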
Example 6.5.4. In the same situation let us consider the last time that 𝑋𝑑 ∈ 𝐴:
𝜎 = sup{𝑑 : 𝑋𝑑 ∈ 𝐴} if there are such 𝑑,  and we don’t know what if there is no such 𝑑.    (6.5.8)
The supremum is used here rather than maximum because the set of all 𝑑’s for which
𝑋𝑑 ∈ 𝐴 may be infinite (and then the supremum is equal to +∞). As for the case of
no such 𝑑’s, in order not to have to think of it, let us restrict ourselves to the case when
𝑋0 (πœ”) ∈ 𝐴 for all πœ” ∈ Ξ©; also let us suppose for simplicity that the number of visits of
𝑋𝑑 (πœ”) to the set 𝐴 is finite for every πœ”.
It turns out that, in general, 𝜎 is not a stopping time.
Indeed, say, for 𝑑 = 3, suppose 𝑋0 ∈ 𝐴, 𝑋1 βˆ‰ 𝐴, and 𝑋2 , 𝑋3 ∈ 𝐴. If the process 𝑋𝑑
never visits the set 𝐴 after time 𝑑 = 3, we have 𝜎 = 3; but if it does, 𝜎 > 3. So, observing
the process 𝑋𝑑 up to time 3 we cannot be sure whether 𝜎 ≀ 3 or > 3.
It may, of course, happen that because of something special about the process 𝑋𝑑 , the
random variable 𝜎 happens to be a stopping time; but generally speaking, no.
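The following small Python illustration (my own, with a hypothetical pair of paths) shows the obstruction concretely: two paths that coincide up to time 3 but have different values of 𝜎.

    def last_visit_time(path, A):
        # sup{t : X_t in A}; both sample paths below do visit A, so max suffices.
        return max(t for t, x in enumerate(path) if x in A)

    A = {0}
    path_1 = [0, 1, 0, 0, 1, 1, 1]   # never returns to A after t = 3: sigma = 3
    path_2 = [0, 1, 0, 0, 1, 0, 1]   # returns to A at t = 5:          sigma = 5

    assert path_1[:4] == path_2[:4]  # identical observations up to time 3
    print(last_visit_time(path_1, A), last_visit_time(path_2, A))   # 3 5
    # So the event {sigma <= 3} cannot be decided from X_0, ..., X_3 alone.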
Example 6.5.3β€² . 𝑇 = [0, ∞), 𝑋𝑑 , 𝑑 β‰₯ 0, is a process with continuous trajectories,
and 𝐴 is a closed subset of the space SP. Then the random variable 𝜏 defined by (6.5.6)
is a stopping time.
First of all, the minimum does exist if there are 𝑑’s with 𝑋𝑑 ∈ 𝐴: because of the
continuity of 𝑋𝑑 (πœ”) and closedness of the set 𝐴. If the set 𝐴 were not closed, say, if it were
an open set, we could speak only of the infimum inf{𝑑 : 𝑋𝑑 ∈ 𝐴}.
For 𝑑 ∈ [0, ∞) we have:
{𝜏 ≀ 𝑑} = {(𝑋𝑒 , 0 ≀ 𝑒 ≀ 𝑑) ∈ 𝐢},    (6.5.9)
where the set 𝐢 consists of all functions π‘₯𝑒 , 0 ≀ 𝑒 ≀ 𝑑, taking a value in the set 𝐴 for at
least one 𝑒 ∈ [0, 𝑑] (if we have observed 𝑋𝑒 for 𝑒 ∈ [0, 𝑑], we just look at this function:
if it reaches the set 𝐴 for one of these 𝑒’s, the time 𝜏 has come by the time 𝑑; if not, it
hasn’t).
Example 6.5.3β€²β€² . 𝑇 = [0, ∞), 𝑋𝑑 , 𝑑 ∈ [0, ∞), is a real-valued process with continuous
trajectories, 𝐴 = (0, ∞) (an open, not closed set). Take
𝜏 = inf{𝑑 : 𝑋𝑑 ∈ 𝐴} if there are such 𝑑,  and 𝜏 = +∞ if there is no such 𝑑.    (6.5.10)
It turns out that, generally, this is not a stopping time.
Indeed, suppose we observed the process 𝑋𝑑 for 0 ≀ 𝑑 ≀ 2, and got the following
realization:
𝑋𝑑 (πœ”) = βˆ’2 + 𝑑^2 for 0 ≀ 𝑑 ≀ 1,  and 𝑋𝑑 (πœ”) = βˆ’(2 βˆ’ 𝑑)^2 for 1 ≀ 𝑑 ≀ 2    (6.5.11)
(make a picture of the graph, which is continuous; at 𝑑 = 1 both formulas yield the same
result). It may be that for this πœ” the trajectory will go up after 𝑑 = 2, e. g.:
𝑋𝑑 (πœ”) = (𝑑 βˆ’ 2)2 ,
𝑑 β‰₯ 2;
(6.5.12)
for such an πœ” we have 𝜏 (πœ”) = inf(2, ∞) = 2. But it may be that the trajectory will go
down after having touched the level 0, e. g.:
𝑋𝑑 (πœ”) = βˆ’(𝑑 βˆ’ 2)2 + (𝑑 βˆ’ 2)3 ,
𝑑β‰₯2
(6.5.13)
(make a picture of the graph). For πœ” for which (6.5.11), (6.5.13) hold, 𝜏 = 3.
So, observing 𝑋𝑒 , 0 ≀ 𝑒 ≀ 2, we cannot decide whether the event {𝜏 ≀ 2} has occurred
or not.
Not a stopping time.
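If one wants to see this numerically, here is a small Python sketch (my own check, on a finite time grid) of the two continuations (6.5.12) and (6.5.13) of the trajectory (6.5.11): they coincide on [0, 2], yet the corresponding values of 𝜏 are about 2 and about 3.

    import math

    def x_up(t):     # (6.5.11) continued by (6.5.12): goes up after t = 2
        if t <= 1: return -2 + t**2
        if t <= 2: return -(2 - t)**2
        return (t - 2)**2

    def x_down(t):   # (6.5.11) continued by (6.5.13): dips below 0 again
        if t <= 1: return -2 + t**2
        if t <= 2: return -(2 - t)**2
        return -(t - 2)**2 + (t - 2)**3

    def entrance_time(x, grid):
        # Approximate inf{t : x(t) > 0} on the grid (the infimum need not be attained).
        return next((t for t in grid if x(t) > 0), math.inf)

    grid = [k / 1000 for k in range(6001)]                     # t in [0, 6]
    assert all(x_up(t) == x_down(t) for t in grid if t <= 2)   # same data up to time 2
    print(entrance_time(x_up, grid), entrance_time(x_down, grid))   # about 2 and about 3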
Example 6.5.5. Let 𝜏 be an arbitrary stopping time; let us prove that 𝜎 = 𝜏 + 1
is also one (we suppose that the time parameter set 𝑇 is such that 𝑑 + 1 ∈ 𝑇 for 𝑑 ∈ 𝑇 ).
Let, for definiteness, 𝑇 = [0, ∞) or 𝑇 = {0, 1, 2, ..., 𝑛, ...}.
We have:
{𝜎 ≀ 𝑑} = {𝜏 ≀ 𝑑 βˆ’ 1} = βˆ… if 𝑑 < 1,  and = {(𝑋𝑒 , 0 ≀ 𝑒 ≀ 𝑑) ∈ 𝐷} if 𝑑 β‰₯ 1,    (6.5.14)
where the set 𝐷 in the space of functions π‘₯𝑒 , 0 ≀ 𝑒 ≀ 𝑑, consists of all functions whose
restriction to the interval [0, 𝑑 βˆ’ 1] belongs to the set 𝐢 used in the representation
{𝜏 ≀ 𝑑 βˆ’ 1} = {(𝑋𝑒 , 0 ≀ 𝑒 ≀ 𝑑 βˆ’ 1) ∈ 𝐢}.    (6.5.15)
Example 6.5.5β€² . Let 𝑇 = [0, ∞), and let 𝜏 be a stopping time. The random
variable 𝜎 = 𝜏 /2 may not be a stopping time.
This is because the event
{𝜎 ≀ 𝑑} = {𝜏 ≀ 2𝑑} = {(𝑋𝑒 , 0 ≀ 𝑒 ≀ 2𝑑) ∈ 𝐢2𝑑 }    (6.5.16)
may or may not be representable through the values of 𝑋𝑒 with 0 ≀ 𝑒 ≀ 𝑑: one cannot be
required to stop the process at half the time at which it, say, reaches a set 𝐴 for the first
time, since at time 𝑑 one does not yet know when that will happen.
Example 6.5.5β€²β€² . Let 𝜏 be an arbitrary stopping time; and let β„Ž(𝑑) be a nondecreasing left-continuous function on 𝑇 such that β„Ž(𝑑) β‰₯ 𝑑 for every 𝑑 (make a picture of
the graph of such a function). Then the random variable
𝜎 = β„Ž(𝜏 )    (6.5.17)
(we take β„Ž(∞) = ∞) is a stopping time.
Indeed,
{𝜎 ≀ 𝑑} = {𝜏 ≀ π‘‘βˆ— },    (6.5.18)
where π‘‘βˆ— is the largest value of 𝑠 ≀ 𝑑 for which β„Ž(𝑠) ≀ 𝑑 if there are such values:
π‘‘βˆ— = max{𝑠 ≀ 𝑑 : β„Ž(𝑠) ≀ 𝑑};
(6.5.19)
the maximum is attained because of the restrictions imposed on the function β„Ž(𝑠): the set
{𝑠 ≀ 𝑑 : β„Ž(𝑠) ≀ 𝑑} is an interval containing its right endpoint. If there are no values 𝑠 ≀ 𝑑 for which
β„Ž(𝑠) ≀ 𝑑, the event {𝜎 ≀ 𝑑} is just impossible (equal to βˆ…).
This is a generalization of Example 6.5.5, not of Example 6.5.5β€² .
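As a concrete (hypothetical) choice of such a function one can take β„Ž(𝑠) = ⌈sβŒ‰, which is nondecreasing, left-continuous and β‰₯ 𝑠; then π‘‘βˆ— of (6.5.19) is just βŒŠπ‘‘βŒ‹. The little Python sketch below, my own check over a rational grid, computes π‘‘βˆ— directly from the definition.

    import math
    from fractions import Fraction as F

    def h(s):
        # h(s) = ceil(s): nondecreasing, left-continuous, h(s) >= s.
        return math.ceil(s)

    def t_star(t, step=F(1, 100)):
        # max{s <= t : h(s) <= t} over a grid of rational s-values (None if empty).
        grid = [k * step for k in range(int(t / step) + 1)]
        candidates = [s for s in grid if h(s) <= t]
        return max(candidates) if candidates else None

    for t in [F(3, 10), F(1), F(27, 10)]:
        print(t, t_star(t), math.floor(t))   # t* coincides with floor(t)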
Theorem 6.5.1. If 𝜏 , 𝜎 are stopping times, then so are min(𝜏, 𝜎) and max(𝜏, 𝜎).
Indeed,
{min(𝜏, 𝜎) ≀ 𝑑} = {𝜏 ≀ 𝑑} βˆͺ {𝜎 ≀ 𝑑},
{max(𝜏, 𝜎) ≀ 𝑑} = {𝜏 ≀ 𝑑} ∩ {𝜎 ≀ 𝑑}. (6.5.20)
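For a quick sanity check of the identities (6.5.20), here is a toy Python example of my own with two first hitting times of a fixed discrete path.

    import math

    def hit(path, S):
        # First hitting time of the set S, +infinity if S is never reached.
        return next((t for t, x in enumerate(path) if x in S), math.inf)

    path = [0, 1, 2, 3, 2, 5]
    tau, sigma = hit(path, {3}), hit(path, {5})
    for t in range(len(path)):
        assert (min(tau, sigma) <= t) == ((tau <= t) or (sigma <= t))
        assert (max(tau, sigma) <= t) == ((tau <= t) and (sigma <= t))
    print(min(tau, sigma), max(tau, sigma))   # 3 5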
6.5.1 For the stochastic process π‘Œπ‘‘ , 𝑑 β‰₯ 0, of Example 5.2.1, prove that 𝑇 is a stopping
time with respect to this process.
Check that for a non-random π‘‘βˆ— > 0 also min(𝑇, π‘‘βˆ— ) is a stopping time.