The Lost Cow Problem
The lost-cow problem is as follows: a short-sighted cow (or assume it is dark,
or foggy, or ...) is standing in front of a fence and does not know in which
direction the only gate in the fence might be. How can the cow find the gate
without walking too great a detour?
This problem can also be formulated as a search problem on undirected
graphs. Assume we are at node s, and there are two paths starting from s.
The two paths never meet again, and only one path contains at distance d a
node t we would like to reach (we can somehow recognize t when we reach
it). More generally, in the w-lane lost-cow problem, where w = 2, 3, 4, ..., w
paths start in s and only one of them contains the goal t at distance d. We
assume that d ≥ 1 (without a lower bound on d, no online algorithm can be
competitive).
Lemma 1. If we know d in advance, then there is a (2w − 1)-competitive
algorithm for the w-lane lost-cow problem, and this is optimal.
Proof. Try the paths in arbitrary order. Walk each path for distance d. We
stop if we find t; otherwise we return to s and try the next path. Since we
find t on the last path (in the worst case), we walk at most a distance of
(2w − 1) d. No deterministic algorithm can be better, because an adversary
can always place t on the last path explored. □
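As a quick illustration (our own sketch; the function name is ours, not from the text), the walk of this known-d strategy can be simulated directly:

```python
def known_d_search(w, goal_path, d):
    """Distance walked by the known-d strategy of Lemma 1: try the w paths
    in order 0, 1, ..., w-1, walking distance d out (and d back) on each,
    until the path containing t (at index goal_path) is explored."""
    total = 0.0
    for p in range(w):
        if p == goal_path:
            return total + d   # walk straight to t and stop
        total += 2 * d         # walk d, miss t, return to s
    raise ValueError("goal_path must be in range(w)")

# Worst case (t on the last path tried): (2w - 1) * d, matching the lemma.
```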
The problem is more difficult if d is not known. We first consider the 2-lane
lost-cow problem with two paths P_1 and P_2. Any deterministic online
algorithm can be described by a sequence of non-negative numbers f_0 =
0, f_1, f_2, f_3, ..., where f_i indicates how far we follow path P_{i mod 2} in the i-th
step. We also set f_{−1} = 0. If we want to be optimal, we must have f_i < f_{i+2}
for i ≥ 0, i.e., each time we explore a path we go farther than the previous
time we explored that path. But how should we choose the f_i to minimize the
total travel distance? It turns out that the doubling strategy works well in this
case, i.e., we choose f_i = 2^{i+1} for i = 1, 2, 3, ...
Theorem 2. The doubling strategy is 9-competitive for the 2-lane lost-cow
problem.
Proof. In the worst case, d = f_i + ε for some small ε > 0 and some i ≥ 1, and
we walk the path containing t in the i-th step (just missing t by ε). Then we
turn around and explore the other path up to distance f_{i+1}, and then we try
the right path again and find t in the (i + 2)-th step. Therefore, our travel
distance is

  2 ∑_{j=1}^{i+1} f_j + d = 2 ∑_{j=2}^{i+2} 2^j + d
                          = 2 (2^{i+3} − 4) + 2^{i+1} + ε
                          = 9 · 2^{i+1} − 8 + ε
                          = 9 f_i + ε − 8
                          ≤ 9 (f_i + ε)
                          = 9 d.   □
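The bound of Theorem 2 can also be checked empirically. The following sketch (our own code; names are illustrative) simulates the doubling strategy against the adversarial distances d = f_i + ε used in the proof:

```python
def doubling_walk(d, goal_parity):
    """Distance walked by the doubling strategy (f_i = 2**(i+1), exploring
    path P_(i mod 2) in step i) until t, lying at distance d on the path
    of parity goal_parity, is reached."""
    total, i = 0.0, 1
    while True:
        f = 2.0 ** (i + 1)
        if i % 2 == goal_parity and f >= d:
            return total + d   # reach t and stop
        total += 2 * f         # explore to f_i and return to s
        i += 1

# Adversarial distances just past f_i give ratios 9 - 8/f_i, approaching 9:
worst = max(doubling_walk(2.0 ** (i + 1) * (1 + 1e-9), i % 2)
            / (2.0 ** (i + 1) * (1 + 1e-9))
            for i in range(1, 30))
```

Here `worst` comes out just below 9, in line with Theorem 2: the supremum 9 is approached but never attained.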
The next theorem shows that the doubling strategy actually achieves the
best possible competitive ratio. Unfortunately, the proof of the lower bound
is non-trivial, and we only give a sketch here. A full proof can be found in [1].
Theorem 3. No deterministic online algorithm for the 2-lane lost-cow problem can be better than 9-competitive.
Proof. Assume A is a c-competitive algorithm for some c ≤ 9. We show that
then c = 9. If A finds t at distance d in the (i + 2)-th step, then f_i < d ≤ f_{i+2}.
Taking the supremum over all i and distances d with this constraint, we obtain

  c ≥ sup_{i≥0}  sup_{f_i < d ≤ f_{i+2}}  (2 ∑_{j=1}^{i+1} f_j + d) / d
    = 1 + 2 sup_{i≥1}  (∑_{j=1}^{i+1} f_j) / f_i,

since the first quotient is maximized if d is minimized (so we choose d = f_i).
We can prove c ≥ 9 by bounding

  α_i = (∑_{j=1}^{i+1} f_j) / f_i

and its supremum

  α = sup_{i≥1} α_i.
In particular, we want to show that α ≥ 4. Consider a fixed i ≥ 1. For j ≥ 1
let

  h_j = f_{i+j} / f_i.

Then

  α ≥ α_i ≥ (f_i + f_{i+1}) / f_i = 1 + h_1 > 1,

so we have

  α − 1 > 0.
Choosing a better bound on h_1, we can get a better bound on α. For k ≥ 1
we have α ≥ α_{i+k}, and therefore

  α f_{i+k} ≥ ∑_{j=1}^{i+k+1} f_j ≥ f_i + ∑_{j=1}^{k+1} f_{i+j},

which is equivalent to

  α h_k ≥ 1 + ∑_{j=1}^{k+1} h_j,  i.e.,  h_k ≥ (1 + ∑_{j=1}^{k+1} h_j − h_k) / (α − 1).
For k = 1, this gives us

  h_1 ≥ (1 + h_2) / (α − 1) > 1 / (α − 1)

and thus

  α ≥ 1 + h_1 > 1 + 1 / (α − 1),

or equivalently

  α² − 2α > 0.
Of course, using a better bound on h_2 in the bound on h_1 would further
improve our bound on α, etc. It can be shown by induction that we must
have

  g_k := ∑_{j=0}^{⌊k/2⌋} (−1)^j C(k − j, j) α^{k−1−j} > 0

for all k ≥ 0, where C(n, m) denotes the binomial coefficient. Observe that

  g_0 = C(0, 0) α^{−1} = α^{−1},
  g_1 = C(1, 0) α^0 = 1,
  g_2 = C(2, 0) α^1 − C(1, 1) α^0 = α − 1,
  g_3 = C(3, 0) α² − C(2, 1) α^1 = α² − 2α,
...
Since

  g_k = ∑_j (−1)^j C(k − j, j) α^{k−1−j}
      = ∑_j (−1)^j [ C(k − 1 − j, j) + C(k − 1 − j, j − 1) ] α^{k−1−j}
      = α ∑_j (−1)^j C(k − 1 − j, j) α^{k−2−j} − α ∑_j (−1)^j C(k − 2 − j, j) α^{k−3−j}
      = α (g_{k−1} − g_{k−2}),

we only have to solve g_k's characteristic equation

  x² − αx + α = 0
to get a closed-form expression for g_k. Let

  x_{1,2} = (α ± √(α² − 4α)) / 2

be the two solutions of this equation. If x_1 ≠ x_2, i.e., α ≠ 4, then

  g_k = d_1 x_1^k + d_2 x_2^k,

where the constants d_1 and d_2 are chosen such that g_1 = 1 and g_2 = α − 1.
But then g_k > 0 for all k ≥ 0 if and only if α > 4 (for α < 4 the two roots
are complex, so g_k oscillates and changes sign). If x_1 = x_2, i.e., α = 4, then

  g_k = (d_1 k + d_2) x_1^k,

where d_1 = d_2 = 1/4 are chosen such that g_1 = 1 and g_2 = α − 1. But then

  g_k = (k + 1) 2^{k−2} > 0

for all k ≥ 0. Hence positivity of all g_k forces α ≥ 4, and therefore
c ≥ 1 + 2α ≥ 9. □
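The final step of the proof can be made concrete numerically (our own sketch): iterating the recurrence g_k = α(g_{k−1} − g_{k−2}) with g_1 = 1 and g_2 = α − 1 exhibits a sign change for every α < 4 but not for α ≥ 4:

```python
def first_nonpositive_g(alpha, kmax=200):
    """Return the smallest k (3 <= k <= kmax) with g_k <= 0, or None if all
    g_k stay positive, where g_k = alpha * (g_{k-1} - g_{k-2}),
    g_1 = 1 and g_2 = alpha - 1."""
    g_prev, g = 1.0, alpha - 1.0
    for k in range(3, kmax + 1):
        g_prev, g = g, alpha * (g - g_prev)
        if g <= 0.0:
            return k
    return None

# alpha < 4: complex characteristic roots, so some g_k turns non-positive
# (such an alpha is infeasible).  alpha >= 4: every g_k stays positive.
```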
For the w-lane lost-cow problem, the best algorithm is a generalization of
the doubling strategy. We try the paths in some fixed order until we find t,
where the distance traveled on a path always increases by a constant factor.
In particular, we explore path P_{i mod w} to a distance of f_i = (w/(w−1))^i in step i, for
i = 1, 2, 3, ... This algorithm is also known as spiral search (if we draw the
paths as the rays of a star emanating from its center s, the points at distance
f_i on the rays form a spiral). As before, one can prove that this algorithm is
optimal [1].
Theorem 4. Spiral search is (1 + 2 w^w / (w − 1)^{w−1})-competitive, and no deterministic online algorithm can have a better competitive ratio. □

Note that

  w^w / (w − 1)^{w−1} ≈ e · w

for large w, so the optimal deterministic competitive ratio of the w-lane lost-cow
problem is linear in w. We note that Dasgupta et al. [2] proposed a
different deterministic algorithm with the same competitive ratio.
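As a sanity check on Theorem 4 (our own code; names are illustrative), we can simulate spiral search with t placed just beyond f_i on the lane explored in step i:

```python
def spiral_walk(w, d, goal_lane):
    """Distance walked by spiral search (f_i = (w/(w-1))**i, exploring lane
    i mod w in step i) until t at distance d on goal_lane is reached."""
    b = w / (w - 1)
    total, i = 0.0, 1
    while True:
        f = b ** i
        if i % w == goal_lane and f >= d:
            return total + d   # reach t and stop
        total += 2 * f         # explore to f_i and return to s
        i += 1

def worst_ratio(w, imax=120):
    """Empirical worst case: t sits just past f_i on the lane of step i."""
    b = w / (w - 1)
    return max(spiral_walk(w, b ** i * (1 + 1e-9), i % w)
               / (b ** i * (1 + 1e-9))
               for i in range(1, imax))
```

For each w, the computed worst case approaches the bound 1 + 2 w^w/(w − 1)^{w−1} from below.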
Obviously, spiral search would be more efficient if we could guess correctly
at which path to start the search (so that we explore the path containing t
when we explore for the first time to a distance of at least d). This suggests
using randomness to choose the order in which the paths are explored. It
turns out that it is also advantageous to choose the distances f_i randomly.
Altogether, this gives the algorithm SmartCow, introduced by Kao et al. [4].
SmartCow
Let r > 1 be fixed. Choose a permutation σ of {0, ..., w − 1}
uniformly at random. Choose ε ∈ [0, 1) uniformly at random.
Let

  f_i = r^{i+ε}

for i = 0, 1, 2, ... Then we explore path P_{σ(i mod w)} to a distance
of f_i in the i-th step.
Theorem 5. For r > 1, SmartCow is (1 + (2/w) · (1 + r + ... + r^{w−1}) / ln r)-competitive.
Proof. The total distance traveled by SmartCow is a random variable D. Assume
t lies on path q at distance d. Then d = r^{k+β} for some k ∈ ℕ and
0 ≤ β < 1 (i.e., k = ⌊log_r d⌋). Let m be the first step in which SmartCow
explores path q at least up to distance r^k. Obviously, k ≤ m ≤ k + w − 1,
and since the permutation σ is uniform, m is uniformly distributed on
{k, ..., k + w − 1}.
We distinguish two cases. If m ≥ k + 1, then SmartCow explores path q in
step m at least up to distance r^{k+1} > d, so it detects t. Hence,

  D = 2 ∑_{i=0}^{m−1} r^{i+ε} + d = 2 r^ε (r^m − 1) / (r − 1) + d,

and therefore for ℓ ≥ k + 1

  E[D | m = ℓ] = (2 (r^ℓ − 1) / (r − 1)) E[r^ε | m = ℓ] + d.

Since we choose ε uniformly from [0, 1), independently of m,

  E[r^ε] = ∫_0^1 r^x dx = (r − 1) / ln r,

and therefore

  E[D | m = ℓ] = 2 (r^ℓ − 1) / ln r + d.
On the other hand, if m = k, then SmartCow finds t in step m if and only
if ε ≥ β; otherwise it finds t in step (m + w). We denote the first event by F
and its complement by F̄. Then

  E[D | m = k] = prob(F) E[ 2 ∑_{i=0}^{k−1} r^{i+ε} + d | F ]
                 + prob(F̄) E[ 2 ∑_{i=0}^{k+w−1} r^{i+ε} + d | F̄ ]
               = prob(F) (2 (r^k − 1) / (r − 1)) E[r^ε | F]
                 + prob(F̄) (2 (r^{k+w} − 1) / (r − 1)) E[r^ε | F̄] + d
               = (2 / (r − 1)) ( prob(F) (r^k − 1) E[r^ε | F]
                 + prob(F̄) (r^{k+w} − 1) E[r^ε | F̄] ) + d.

Since F occurs if and only if ε ≥ β, we have prob(F) = 1 − β and prob(F̄) = β.
Furthermore,

  E[r^ε | F] = (1 / (1 − β)) ∫_β^1 r^x dx = (r − r^β) / (prob(F) ln r)

and

  E[r^ε | F̄] = (1 / β) ∫_0^β r^x dx = (r^β − 1) / (prob(F̄) ln r).

Hence

  E[D | m = k] = (2 / ((r − 1) ln r)) ( (r − r^β)(r^k − 1) + (r^β − 1)(r^{k+w} − 1) ) + d.
So we obtain for D:

  E[D] = ∑_{ℓ=k}^{k+w−1} prob(m = ℓ) E[D | m = ℓ]
       = (1/w) E[D | m = k] + (1/w) ∑_{ℓ=k+1}^{k+w−1} ( 2 (r^ℓ − 1) / ln r + d )
       = (1/w) E[D | m = k]
         + (2 / (w ln r)) ( (r^{k+w} − 1)/(r − 1) − (r^{k+1} − 1)/(r − 1) − (w − 1) )
         + ((w − 1)/w) d
       = (2 / (w (r − 1) ln r)) [ (r^{k+1} − r − r^{k+β} + r^β)
                                  + (r^{k+w+β} − r^{k+w} − r^β + 1)
                                  + (r^{k+w} − r^{k+1} − (w − 1)(r − 1)) ] + d
       = (2 / (w (r − 1) ln r)) ( r^{k+w+β} − r^{k+β} − w (r − 1) ) + d
       ≤ ( 2 (r^w − 1) / (w (r − 1) ln r) + 1 ) d,

where the last step uses r^{k+β} = d and drops a negative term.
Therefore, the expected competitive ratio is at most

  1 + 2 (r^w − 1) / (w (r − 1) ln r) = 1 + (2/w) (1 + r + ... + r^{w−1}) / ln r.   □
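The derivation can be verified numerically. The sketch below (our own verification code; names are ours) averages D/d over all permutations σ and a fine grid of ε values, and compares the result with the exact expectation E[D]/d = 1 + 2(r^w − 1)/(w(r − 1) ln r) − 2/(d ln r) obtained just before the final inequality:

```python
import math
from itertools import permutations

def smartcow_distance(w, r, perm, eps, goal, d):
    """Walk of SmartCow for one realization of the permutation and eps."""
    total, i = 0.0, 0
    while True:
        f = r ** (i + eps)
        if perm[i % w] == goal and f >= d:
            return total + d   # reach t and stop
        total += 2 * f         # explore to f_i and return to s
        i += 1

def average_ratio(w, r, d, ngrid=10000):
    """Average D/d over all w! permutations and a midpoint grid for eps."""
    total, perms = 0.0, list(permutations(range(w)))
    for perm in perms:
        for j in range(ngrid):
            total += smartcow_distance(w, r, perm, (j + 0.5) / ngrid, 0, d)
    return total / (len(perms) * ngrid * d)

w, r, d = 2, 3.59112, 37.0
exact = 1 + 2 * (r**w - 1) / (w * (r - 1) * math.log(r)) - 2 / (d * math.log(r))
```

`average_ratio(w, r, d)` agrees with `exact` to a few decimal places, and both lie below the bound of Theorem 5.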
Theorem 6. The competitive ratio of SmartCow in Theorem 5 is minimized
by choosing r as the unique solution of the equation

  ln r = (1 + r + ... + r^{w−1}) / (r + 2r² + ... + (w − 1) r^{w−1}).   □
In particular, for w = 2 the optimal competitive ratio is

  min_{r>1} ( 1 + (1 + r) / ln r ).
The following table shows the optimal values of r and the resulting competitive
ratio of SmartCow for w = 2, ..., 7, compared to the optimal deterministic
competitive ratio of spiral search:

  w    r         SmartCow    Spiral search
  2    3.59112    4.59112     9
  3    2.01092    7.73232    14.5
  4    1.62193   10.84181    19.96296
  5    1.44827   13.94159    25.41406
  6    1.35020   17.03709    30.85984
  7    1.28726   20.13033    36.30277
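The table can be reproduced numerically (our own sketch; names are illustrative): solve the optimality condition of Theorem 6 for r by bisection and evaluate the ratio of Theorem 5:

```python
import math

def smartcow_ratio(w, r):
    """Competitive ratio of SmartCow from Theorem 5."""
    return 1 + 2 * sum(r**j for j in range(w)) / (w * math.log(r))

def optimal_r(w, lo=1.0001, hi=10.0):
    """Solve ln r = (1 + r + ... + r^(w-1)) / (r + 2r^2 + ... + (w-1)r^(w-1))
    (Theorem 6) by bisection on (lo, hi)."""
    def cond(r):
        num = sum(r**j for j in range(w))
        den = sum(j * r**j for j in range(1, w))
        return math.log(r) - num / den
    for _ in range(100):
        mid = (lo + hi) / 2
        if cond(mid) < 0:
            lo = mid       # condition not yet reached: root lies above
        else:
            hi = mid
    return (lo + hi) / 2
```

For w = 2 this yields r ≈ 3.59112 and ratio ≈ 4.59112, the first row of the table.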
Kao et al. [3] showed that SmartCow is actually the optimal randomized
online algorithm.
References
1. R. A. Baeza-Yates, J. C. Culberson, and G. J. E. Rawlins. Searching in the
plane. Information and Computation, 106(2):234–252, 1993.
A preliminary version was published with a different title: Searching with uncertainty.
In Proceedings of the 1st Scandinavian Workshop on Algorithm Theory
(SWAT'88), Springer Lecture Notes in Computer Science 318, pages 176–189,
1988.
2. P. Dasgupta, P. P. Chakrabarti, and S. C. DeSarkar. Agent searching in a tree
and the optimality of iterative deepening. Artificial Intelligence, 71:195–208,
1994.
3. M.-Y. Kao, Y. Ma, M. Sipser, and Y. Yin. Optimal construction of hybrid
algorithms. In Proceedings of the 5th ACM-SIAM Symposium on Discrete
Algorithms (SODA'94), pages 372–381, 1994.
4. M.-Y. Kao, J. H. Reif, and S. R. Tate. Searching in an unknown environment:
An optimal randomized algorithm for the cow-path problem. Information and
Computation, 131:63–79, 1996.
A preliminary version appeared in Proceedings of the 4th ACM-SIAM Symposium
on Discrete Algorithms (SODA'93), pages 441–447, 1993.