Institute of Theoretical Computer Science
Mohsen Ghaffari, Angelika Steger, Emo Welzl, Peter Widmayer
Algorithms, Probability, and Computing
Solutions for SpA 2
HS16
(a) Let $M$ be any Olympic matching, and let $T$ be any Olympic transversal. Since $T$ contains by definition an element (colour) of every flag, it must contain an element $c(f) \in f$ for each $f \in M$. Since the flags $f \in M$ are pairwise disjoint, the colours $c(f)$ must be pairwise distinct. Thus we have shown that $T$ must contain at least as many colours as there are flags in $M$, that is, $|T| \ge |M|$. Since this reasoning was valid for arbitrary $M$ and $T$, it implies $\tau(F) \ge \mu(F)$.
(b) The linear program (LP-T) is feasible, because $\mathbf{1}_n$ is a feasible point. Furthermore, its objective function is bounded from below, because for all feasible $x$ we have $\mathbf{1}_n^T x = \sum_{i=1}^n x_i \ge 0$. From these two facts it follows that there is a (finite) optimum.
Now let $T$ be an Olympic transversal of minimum size, $|T| = \tau(F)$. Then the (characteristic) vector $x \in \{0,1\}^n$ with
\[
x_i = \begin{cases} 0 & \text{if } c_i \notin T \\ 1 & \text{if } c_i \in T \end{cases}
\]
is feasible with objective value $\mathbf{1}_n^T x = |T|$. The optimum of the linear program can only be smaller; hence $\tau^*(F) \le \tau(F)$.
(c) Let $n = 3$ and $F = \big\{\{c_1, c_2\}, \{c_2, c_3\}, \{c_1, c_3\}\big\}$ (a triangle graph). Here every Olympic transversal must contain at least two colours, and $T = \{c_1, c_2\}$ is an Olympic transversal; so we have $\tau(F) = 2$. On the other hand, $x = (\tfrac12, \tfrac12, \tfrac12)$ is feasible, so we have $\tau^*(F) \le \tfrac32$.
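The following minimal sketch (not part of the required solution; it assumes Python with numpy and scipy is available) sets up (LP-T) for this triangle instance and recovers the fractional optimum $\tau^*(F) = \tfrac32$ numerically:
\begin{verbatim}
# Sanity check for (c): the fractional optimum of (LP-T) on the triangle
# F = {{c1,c2},{c2,c3},{c1,c3}} is 3/2, attained by x = (1/2, 1/2, 1/2).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 1, 0],   # flag {c1, c2}
              [0, 1, 1],   # flag {c2, c3}
              [1, 0, 1]])  # flag {c1, c3}

# minimize 1^T x  subject to  A x >= 1, x >= 0
# (linprog expects A_ub x <= b_ub, so the covering constraints are negated)
res = linprog(c=np.ones(3), A_ub=-A, b_ub=-np.ones(3), bounds=[(0, None)] * 3)
print(res.x, res.fun)      # approximately [0.5 0.5 0.5] and 1.5
\end{verbatim}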
(d) Consider the dual linear program,
\[
\text{maximize } \mathbf{1}_m^T y \quad \text{subject to } y \ge \mathbf{0}, \; A^T y \le \mathbf{1}_n. \tag{LP-M}
\]
Since (LP-T) has an optimal solution, (LP-M) also has an optimum, which we denote by $\mu^*(F)$.
Let $M$ be an Olympic matching of maximum size, $|M| = \mu(F)$, and let $y \in \{0,1\}^m$ denote its characteristic vector, that is,
\[
y_i = \begin{cases} 0 & \text{if } f_i \notin M \\ 1 & \text{if } f_i \in M. \end{cases}
\]
Then $y$ is a feasible solution for (LP-M) (the flags in $M$ are pairwise disjoint, so every colour lies in at most one of them, and hence $A^T y \le \mathbf{1}_n$) with objective value $|M|$; thus $|M| \le \mu^*(F)$. Since $M$ was an arbitrary Olympic matching, this proves $\mu(F) \le \mu^*(F)$. By weak duality we have $\mu^*(F) \le \tau^*(F)$, and the statement follows.
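To see the gap concretely (again only an illustrative sketch assuming numpy and scipy, not part of the required solution): for the triangle instance of (c), every Olympic matching has size $1$ because any two flags share a colour, so $\mu(F) = 1$, while the dual optimum is $\mu^*(F) = \tfrac32 = \tau^*(F)$:
\begin{verbatim}
# Dual (LP-M) for the triangle instance: maximize 1^T y s.t. A^T y <= 1, y >= 0.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])

# linprog minimizes, so maximize 1^T y by minimizing -1^T y
res = linprog(c=-np.ones(3), A_ub=A.T, b_ub=np.ones(3), bounds=[(0, None)] * 3)
print(res.x, -res.fun)     # approximately [0.5 0.5 0.5] and 1.5 = mu*(F)
\end{verbatim}
So on this instance $\mu(F) = 1 < \mu^*(F) = \tau^*(F) = \tfrac32 < \tau(F) = 2$, consistent with the chain of inequalities proved above.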
(e) Let $T = \{c_1, \ldots, c_s\}$ be the set of drawn colours, and let $N$ denote the number of flags not intersected by $T$. (Here each of the $s$ colours is drawn independently, colour $c_j$ with probability $p(c_j) = x_j / \sum_{j=1}^n x_j$, where $x$ is an optimal solution of (LP-T); in particular $\sum_{j=1}^n x_j = \tau^*(F) =: \tau^*$.) We have
\[
E[N] = E\Big[ \sum_{i=1}^m [\, f_i \cap T = \emptyset \,] \Big] = \sum_{i=1}^m \Pr[\, f_i \cap T = \emptyset \,],
\]
and for all $i$ we have
\[
\begin{aligned}
\Pr[\, f_i \cap T = \emptyset \,]
&= \Pr[\, c_1 \notin f_i \text{ and } \ldots \text{ and } c_s \notin f_i \,] \\
&= \prod_{k=1}^{s} \Pr[\, c_k \notin f_i \,] && \text{(the draws are independent)} \\
&= \prod_{k=1}^{s} \big( 1 - \Pr[\, c_k \in f_i \,] \big) \\
&= \prod_{k=1}^{s} \Big( 1 - \sum_{c \in f_i} p(c) \Big) \\
&= \prod_{k=1}^{s} \Big( 1 - \frac{\sum_{c_j \in f_i} x_j}{\sum_{j=1}^{n} x_j} \Big) \\
&\le \prod_{k=1}^{s} \Big( 1 - \frac{1}{\sum_{j=1}^{n} x_j} \Big) && \text{(because $x$ is feasible for (LP-T))} \\
&= \prod_{k=1}^{s} \Big( 1 - \frac{1}{\tau^*} \Big)
 = \Big( 1 - \frac{1}{\tau^*} \Big)^{\!s} \le e^{-s/\tau^*}.
\end{aligned}
\]
It follows that $E[N] \le m e^{-s/\tau^*}$. If we choose $s := \lfloor \tau^* \ln m \rfloor + 1$ (note that we must choose an integer here), then we have $s > \tau^* \ln m$ and we obtain
\[
E[N] < m e^{-(\tau^* \ln m)/\tau^*} = \frac{m}{m} = 1.
\]
It follows that the random set $T$ is an Olympic transversal with positive probability (because otherwise we would have $\Pr[N \ge 1] = 1$ and hence $E[N] = \sum_{k \ge 1} \Pr[N \ge k] \ge 1$). This means, concretely, that there is a fixed choice of colours $c_1, \ldots, c_s$ such that $T$ is an Olympic transversal. Since $|T| \le s$, this proves $\tau(F) \le s \le \tau^* \ln m + 1$.
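The rounding scheme can also be simulated; the following sketch (an illustration only, assuming numpy and scipy, run on the made-up triangle instance from (c)) draws $s = \lfloor \tau^* \ln m \rfloor + 1$ colours from the distribution $p$ and counts how often the result is an Olympic transversal:
\begin{verbatim}
# Simulation of the randomized rounding in (e) on the triangle instance.
import math
import numpy as np
from scipy.optimize import linprog

flags = [{0, 1}, {1, 2}, {0, 2}]
n, m = 3, len(flags)
A = np.array([[1 if j in f else 0 for j in range(n)] for f in flags])

# optimal solution x* of (LP-T) and the induced sampling distribution p
res = linprog(c=np.ones(n), A_ub=-A, b_ub=-np.ones(m), bounds=[(0, None)] * n)
x_star, tau_star = res.x, res.fun
p = x_star / x_star.sum()
s = math.floor(tau_star * math.log(m)) + 1    # here s = 2

rng = np.random.default_rng(0)

def is_transversal():
    T = set(rng.choice(n, size=s, p=p))       # s independent draws from p
    return all(f & T for f in flags)

trials = 1000
hits = sum(is_transversal() for _ in range(trials))
print(f"{hits}/{trials} draws gave an Olympic transversal")  # roughly 2/3 of them
\end{verbatim}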
(f)
(i) Assume w.l.o.g. that the colours are numbered in such a way that the first, say, $d$ colours are dark and the remaining colours are light. The assumption in the question means, in terms of the incidence matrix $A$, that every row has exactly two entries equal to $1$, namely one entry within the first $d$ columns and another entry within the remaining $n - d$ columns. All other entries are zero. It follows that the sum of the first $d$ columns of $A$ equals $\mathbf{1}_m$, and the sum of the remaining columns is also $\mathbf{1}_m$. This is a linear dependence among the columns of $A$, which for clarity we can also write as
\[
A \begin{pmatrix} \mathbf{1}_d \\ -\mathbf{1}_{n-d} \end{pmatrix} = \mathbf{0}.
\]
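As a quick numerical check of this identity (an illustration only, on a made-up bipartite instance with $d = 2$ dark and $2$ light colours, assuming numpy):
\begin{verbatim}
# Check of (f)(i): dark columns minus light columns of A sum to the zero vector.
import numpy as np

d = 2
A = np.array([[1, 0, 1, 0],    # every row: one dark colour (cols 0..d-1)
              [1, 0, 0, 1],    #            and one light colour (cols d..n-1)
              [0, 1, 1, 0],
              [0, 1, 0, 1]])

z = np.concatenate([np.ones(d), -np.ones(A.shape[1] - d)])
print(A @ z)                   # [0. 0. 0. 0.], i.e. A (1_d, -1_{n-d})^T = 0
\end{verbatim}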
(ii) We prove by induction on $k \ge 1$ that for every $k \times k$-submatrix $B$ of $A$ we have $\det(B) \in \{-1, 0, 1\}$.
Induction base case, $k = 1$: clear, because the entries of $A$ are all in $\{0, 1\}$.
Induction step: let $k \ge 2$ and assume that the statement holds for all smaller square submatrices. Let $B$ be a $k \times k$-submatrix of $A$. If every row of $B$ contains two non-zero entries, then we can prove in the same way as in (i) that the columns of $B$ are linearly dependent, from which it follows that $\det(B) = 0$.
Now assume the opposite, that there is a row $i$ of $B$ that has at most one non-zero entry. If all entries in this row are zero, then we again have $\det(B) = 0$. So now assume that row $i$ contains exactly one non-zero entry, say in column $j$. Then $B_{ij} = 1$. Let $C$ denote the matrix that arises from $B$ by deleting the $i$-th row and the $j$-th column. From the Laplace expansion of the determinant along row $i$ (or directly from the formula for the determinant, if you're patient enough to go through this) it follows that $\det(B) = \pm B_{ij} \det(C) = \pm\det(C)$. Since $C$ is also a submatrix of $A$, but of dimension smaller than $k$, we can apply the induction hypothesis and obtain $\det(C) \in \{-1, 0, 1\}$. The statement follows.
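The statement of (ii) can also be checked by brute force on small instances; the following sketch (illustration only, assuming numpy, using the same made-up incidence matrix as above) verifies that every square submatrix has determinant in $\{-1, 0, 1\}$, i.e. that $A$ is totally unimodular:
\begin{verbatim}
# Brute-force check of (f)(ii) on a small bipartite incidence matrix.
from itertools import combinations
import numpy as np

A = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1]])

m, n = A.shape
for k in range(1, min(m, n) + 1):
    for rows in combinations(range(m), k):
        for cols in combinations(range(n), k):
            d = round(np.linalg.det(A[np.ix_(rows, cols)]))
            assert d in (-1, 0, 1), (rows, cols, d)
print("every square submatrix has determinant in {-1, 0, 1}")
\end{verbatim}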
(iii) Let $x$ be a basic feasible solution of (LP-T). By definition, $x$ satisfies $n$ linearly independent constraints with equality. If we write the constraints of the linear program in the form
\[
B x \ge b, \qquad \text{where } B := \begin{pmatrix} I_n \\ A \end{pmatrix}, \quad b := \begin{pmatrix} \mathbf{0}_n \\ \mathbf{1}_m \end{pmatrix},
\]
then the $n$ linearly independent constraints correspond to $n$ linearly independent rows of the matrix $B$. Let $\tilde B$ be the submatrix that consists of these rows, and let $\tilde b$ be the vector that contains the corresponding entries of the right-hand side $b$, so that
\[
\tilde B x = \tilde b.
\]
Since the proof in (ii) applies verbatim to the matrix $B$ instead of $A$, we have $\det(\tilde B) \in \{-1, 0, 1\}$. By linear independence, however, we have $\det(\tilde B) \ne 0$. By Cramer's rule (cf. the proof of Theorem 6.2, where the same trick was used), for all $j$,
\[
x_j = \frac{\det(\tilde B_j)}{\det(\tilde B)} = \pm\det(\tilde B_j),
\]
where $\tilde B_j$ is obtained from $\tilde B$ by replacing the $j$-th column with $\tilde b$. All entries of $\tilde B_j$ are integers, hence $\det(\tilde B_j)$ is an integer. Thus we have shown that all entries of $x$ are integers.
(iv) We assume (as we told you by mail that you may, too) that (LP-T) has an optimal solution $x^*$ which is at the same time a basic feasible solution. It follows from (iii) that $x^*$ is integral. Furthermore, we can observe that every optimal solution satisfies $x^* \le \mathbf{1}_n$ (if there were some coordinate $i$ with $x^*_i > 1$, then we could replace $x^*_i$ by $1$; the resulting vector is still feasible, but its objective value is strictly smaller, contradicting the optimality of $x^*$). We also have $x^* \ge \mathbf{0}_n$ from the constraints.
Thus $x^*$ must be a $0$-$1$-vector, and the set $T := \{c_j : x^*_j = 1\}$ is an Olympic transversal of minimum size (by construction of (LP-T); no need to detail this once again for just 2 points). It follows that $\tau^*(F) = |T| = \tau(F)$.
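As an illustration of (iii) and (iv) (again only a sketch on a made-up bipartite instance, assuming numpy/scipy and assuming that the LP solver returns a vertex, i.e. basic, optimal solution), the computed optimum of (LP-T) comes out as a $0$-$1$ vector:
\begin{verbatim}
# (LP-T) on a bipartite instance: the optimal basic solution is integral.
import numpy as np
from scipy.optimize import linprog

flags = [{0, 2}, {0, 3}, {1, 2}, {1, 3}]   # colours 0,1 dark; 2,3 light
n, m = 4, len(flags)
A = np.array([[1 if j in f else 0 for j in range(n)] for f in flags])

res = linprog(c=np.ones(n), A_ub=-A, b_ub=-np.ones(m), bounds=[(0, None)] * n)
print(res.x, res.fun)   # e.g. [1. 1. 0. 0.] with value 2 = tau*(F) = tau(F)
\end{verbatim}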
(v) Consider the dual program (LP-M) again. We can write the constraints in the same way as we did for (LP-T) in (iii), i.e.,
\[
\begin{pmatrix} -I_m \\ A^T \end{pmatrix} y \le \begin{pmatrix} \mathbf{0}_m \\ \mathbf{1}_n \end{pmatrix}.
\]
Let $C$ be any square submatrix of $\begin{pmatrix} -I_m \\ A^T \end{pmatrix}$. By Laplace expansion along the rows that come from $-I_m$, either $\det(C) = 0$ or $\det(C) = \pm\det(\tilde C)$, where $\tilde C$ is some square submatrix of $A^T$ (and also of $C$). In the latter case the transposed matrix $\tilde C^T$ is a submatrix of $A$, so we can apply (ii):
\[
\det(C) = \pm\det(\tilde C) = \pm\det(\tilde C^T) \in \{-1, 0, 1\}.
\]
In either case $\det(C) \in \{-1, 0, 1\}$. This was true for an arbitrary square submatrix $C$. Hence the reasoning in (iii) can also be applied to the dual program (LP-M), and every basic feasible solution of (LP-M) is integral.
Let $y$ be any feasible solution of (LP-M). We claim that $y \le \mathbf{1}_m$. Indeed, every column of $A^T$ contains at least one entry (actually, two of them) equal to $1$, so that every $y_i$ appears in some constraint of the form $y_{i_1} + \cdots + y_{i_k} \le 1$. Since we also have $y \ge \mathbf{0}$, this implies $y_i \le 1$.
We have shown that every basic feasible solution of (LP-M) is a $0$-$1$-vector. Similarly to (iv) it follows that, given an optimal solution $y^*$ that is at the same time a basic feasible solution, the set $M := \{f_i : y^*_i = 1\}$ is a maximum Olympic matching, and $\mu^*(F) = |M| = \mu(F)$. We conclude with strong duality:
\[
\tau(F) = \tau^*(F) = \mu^*(F) = \mu(F).
\]
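Finally, a numerical confirmation of the whole chain on a small made-up bipartite instance (illustration only, assuming numpy/scipy): brute-forcing $\tau(F)$ and $\mu(F)$ and solving both linear programs yields the same value:
\begin{verbatim}
# tau(F) = tau*(F) = mu*(F) = mu(F) on a bipartite instance.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

flags = [{0, 2}, {0, 3}, {1, 2}]            # colours 0,1 dark; 2,3 light
n, m = 4, len(flags)
A = np.array([[1 if j in f else 0 for j in range(n)] for f in flags])

# brute force: smallest transversal and largest matching
tau = min(k for k in range(n + 1) for T in combinations(range(n), k)
          if all(f & set(T) for f in flags))
mu = max(k for k in range(m + 1) for M in combinations(range(m), k)
         if all(flags[i].isdisjoint(flags[j]) for i, j in combinations(M, 2)))

# LP optima tau*(F) and mu*(F)
tau_star = linprog(np.ones(n), A_ub=-A, b_ub=-np.ones(m),
                   bounds=[(0, None)] * n).fun
mu_star = -linprog(-np.ones(m), A_ub=A.T, b_ub=np.ones(n),
                   bounds=[(0, None)] * m).fun
print(tau, tau_star, mu_star, mu)           # 2 2.0 2.0 2 on this instance
\end{verbatim}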