2 Basics

In this chapter we introduce some of the basic concepts that will be useful for the study of integer programming problems.

2.1 Notation
Let A ∈ Rm×n be a matrix with row index set M = {1, . . ., m} and column index set N = {1, . . ., n}. We write

A = (aij) i=1,...,m; j=1,...,n

A·,j = (a1j, . . . , amj)T : jth column of A
Ai,· = (ai1, . . . , ain) : ith row of A.

For subsets I ⊆ M and J ⊆ N we denote by

AI,J := (aij) i∈I, j∈J

the submatrix of A formed by the corresponding indices. We also set

A·,J := AM,J
AI,· := AI,N.
For a subset X ⊆ Rn we denote by

lin(X) := { x = ∑_{i=1}^{k} λi vi : λi ∈ R and v1, . . . , vk ∈ X }

the linear hull of X.
2.2 Convex Hulls
Definition 2.1 (Convex Hull)
Given a set X ⊆ Rn, the convex hull of X, denoted by conv(X), is defined to be the set of all convex combinations of vectors from X, that is,

conv(X) := { x = ∑_{i=1}^{k} λi vi : λi ≥ 0, ∑_{i=1}^{k} λi = 1 and v1, . . . , vk ∈ X }.
Suppose that X ⊆ Rn is some set, for instance the set of incidence vectors of all spanning trees of a given graph (cf. Example 1.8), and suppose that we wish to find a vector x ∈ X maximizing cT x.

If x = ∑_{i=1}^{k} λi vi ∈ conv(X) is a convex combination of the vectors v1, . . . , vk, then

cT x = ∑_{i=1}^{k} λi cT vi ≤ max{ cT vi : i = 1, . . ., k }.
Hence, we have that

max{ cT x : x ∈ X } = max{ cT x : x ∈ conv(X) }

for any set X ⊆ Rn.

Observation 2.2 Let X ⊆ Rn be any set and c ∈ Rn be any vector. Then

max{ cT x : x ∈ X } = max{ cT x : x ∈ conv(X) }.    (2.1)

Proof: See above. □
Observation 2.2 may seem of little use, since we have replaced a discrete finite problem (the left hand side of (2.1)) by a continuous one (the right hand side of (2.1)). However, in many cases conv(X) has a nice structure that we can exploit in order to solve the problem. It turns out that “most of the time” conv(X) is a polyhedron { x : Ax ≤ b } (see Section 2.3) and that the problem max{ cT x : x ∈ conv(X) } is a Linear Program.
Example 2.3
We return to the IP given in Example 1.2:
max x + y
2y − 3x ≤ 2
x + y ≤ 5
1 ≤ x ≤ 3
1 ≤ y ≤ 3
x, y ∈ Z
We have already noted that the set X of feasible solutions for the IP is
X = {(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2), (2, 3)}.
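Since the feasible region lies in a small box, the set X can be double-checked by brute force. The following plain-Python sketch (my own illustration, not part of the notes) enumerates the integer points of the box and evaluates the objective:

```python
from itertools import product

# Enumerate all integer points of the box 1 <= x, y <= 3 and keep
# those satisfying the two remaining constraints of the IP.
X = [(x, y) for x, y in product(range(1, 4), repeat=2)
     if 2*y - 3*x <= 2 and x + y <= 5]

print(sorted(X))                  # the seven points listed above
print(max(x + y for x, y in X))   # optimal objective value: 5
```

This confirms both the listed set X and that the IP optimum is 5.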
Observe that the constraint y ≤ 3 is actually superfluous, but that is not our main concern right now. What is more important is the fact that we obtain the same feasible set if we add the constraint x − y ≥ −1 as shown in Figure 2.1.

Figure 2.1: The addition of the new constraint x − y ≥ −1 (shown as the red line) leads to the same feasible set.

File: –sourcefile– Revision: –revision– Date: 2013/10/24 –time–GMT

Moreover, we have that the convex hull
conv(X) of all feasible solutions for the IP is described by the following inequalities:
2y − 3x ≤ 2
x + y ≤ 5
−x + y ≤ 1
1 ≤ x ≤ 3
1 ≤ y ≤ 3
Observation 2.2 now implies that instead of solving the original IP we can also solve the
Linear Program
max x + y
2y − 3x ≤ 2
x + y ≤ 5
−x + y ≤ 1
1 ≤ x ≤ 3
1 ≤ y ≤ 3
that is, a standard Linear Program without integrality constraints.
#
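The effect of the extra inequality can also be checked numerically. The following sketch (my own, not from the notes) verifies that the cut −x + y ≤ 1 is satisfied by every point of X, yet cuts off the fractional vertex (4/3, 3) of the original formulation:

```python
# The integer feasible set from Example 1.2 / 2.3.
X = [(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2), (2, 3)]

# The cut -x + y <= 1 is valid: no integer feasible point is lost.
assert all(-a + b <= 1 for a, b in X)

# The fractional point (4/3, 3) satisfies all original constraints...
x, y = 4/3, 3
assert 2*y - 3*x <= 2 and x + y <= 5 and 1 <= x <= 3 and 1 <= y <= 3
# ...but violates the new inequality, so the cut tightens the polyhedron.
assert -x + y > 1
```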
In the above example we reduced the solution of an IP to solving a standard Linear Program.
We will see later that in principle this reduction is always possible (provided the data of
the IP is rational). However, there is a catch! The mentioned reduction might lead to an
exponential increase in the problem size. Sometimes we might still overcome this problem
(see Section 5.4).
2.3 Polyhedra and Formulations
Definition 2.4 (Polyhedron, polytope)
A polyhedron is a subset of Rn described by a finite set of linear inequalities, that is, a polyhedron is of the form

P(A, b) := { x ∈ Rn : Ax ≤ b },    (2.2)

where A is an m × n-matrix and b ∈ Rm is a vector. The polyhedron P is a rational polyhedron if A and b can be chosen to be rational. A bounded polyhedron is called a polytope.
In Section 1.3 we have seen a number of examples of Integer Linear Programs and we
have spoken rather informally of a formulation of a problem. We now formalize the term
formulation:
Definition 2.5 (Formulation)
A polyhedron P ⊆ Rn is a formulation for a set X ⊆ Zp ×Rn−p , if X = P ∩(Zp ×Rn−p ).
It is clear that, in general, there are infinitely many formulations for a set X. This naturally raises the question about “good” and “not so good” formulations.
We start with an easy example which provides the intuition of how to judge formulations. Consider again the set X = {(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2), (2, 3)} ⊂ R2 from Examples 1.2 and 2.3. Figure 2.2 shows our two known formulations P1 and P2 together with a third one P3.


P1 = { (x, y)T ∈ R2 : 2y − 3x ≤ 2, x + y ≤ 5, 1 ≤ x ≤ 3, 1 ≤ y ≤ 3 }

P2 = { (x, y)T ∈ R2 : 2y − 3x ≤ 2, x + y ≤ 5, −x + y ≤ 1, 1 ≤ x ≤ 3, 1 ≤ y ≤ 3 }
Intuitively, we would rate P2 much higher than P1 or P3. In fact, P2 is an ideal formulation: as we have seen in Example 2.3, we can simply solve a Linear Program over P2, and the optimal solution will be an extreme point, which is a point from X.
Definition 2.6 (Better and ideal formulations)
Given a set X ⊆ Rn and two formulations P1 and P2 for X, we say that P1 is better than P2 ,
if P1 ⊂ P2 .
A formulation P for X is called ideal, if P = conv(X).
We will see later that the above definition is one of the keys to solving IPs.
Example 2.7
In Example 1.7 we have seen two possibilities to formulate the Uncapacitated Facility Location Problem (UFL):

min ∑_{j=1}^{m} fj yj + ∑_{j=1}^{m} ∑_{i=1}^{n} cij xij
    x ∈ P1
    x ∈ Bnm, y ∈ Bm

and

min ∑_{j=1}^{m} fj yj + ∑_{j=1}^{m} ∑_{i=1}^{n} cij xij
    x ∈ P2
    x ∈ Bnm, y ∈ Bm,
Figure 2.2: Different formulations for the integral set X = {(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2), (2, 3)}, where
P1 = { (x, y) : ∑_{j=1}^{m} xij = 1 for i = 1, . . ., n;  xij ≤ yj for i = 1, . . ., n and j = 1, . . ., m }

P2 = { (x, y) : ∑_{j=1}^{m} xij = 1 for i = 1, . . ., n;  ∑_{i=1}^{n} xij ≤ nyj for j = 1, . . ., m }
We claim that P1 is a better formulation than P2. If x ∈ P1, then xij ≤ yj for all i and j. Summing these constraints over i gives us ∑_{i=1}^{n} xij ≤ nyj, so x ∈ P2. Hence we have P1 ⊆ P2. We now show that P1 ≠ P2, thus proving that P1 is a better formulation.
We assume for simplicity that n/m = k is an integer. The argument can be extended to
the case that m does not divide n by some technicalities. We partition the clients into
m groups, each of which contains exactly k clients. The first group will be served by a
(fractional) facility at y1 , the second group by a (fractional) facility at y2 and so on. More
precisely, we set
xij = 1 for i = k(j − 1) + 1, . . ., k(j − 1) + k and j = 1, . . ., m
and xij = 0 otherwise. We also set yj = k/n for j = 1, . . . , m.
Fix j. By construction, ∑_{i=1}^{n} xij = k = n · (k/n) = nyj. Hence, the point (x, y) just constructed is contained in P2. On the other hand, (x, y) ∉ P1, since for instance x_{k(j−1)+1, j} = 1 > k/n = yj (recall that k < n whenever m > 1).
#
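For concreteness, the construction above can be checked on a small instance (my own choice of parameters, not from the notes: n = 4 clients, m = 2 facilities, so k = n/m = 2):

```python
# Build the fractional point (x, y) from the proof and verify it lies
# in P2 but not in P1.
n, m = 4, 2
k = n // m

x = [[0.0] * m for _ in range(n)]
for j in range(m):
    for i in range(k * j, k * j + k):
        x[i][j] = 1.0            # clients of group j served by facility j
y = [k / n] * m                  # fractional opening level y_j = k/n = 1/2

# Each client is assigned exactly once.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in x)
# P2: sum_i x_ij <= n * y_j holds (in fact with equality).
assert all(sum(x[i][j] for i in range(n)) <= n * y[j] + 1e-9 for j in range(m))
# P1: x_ij <= y_j fails, since 1 > 1/2 -- so (x, y) is not in P1.
assert any(x[i][j] > y[j] + 1e-9 for i in range(n) for j in range(m))
```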
2.4 Cones
Definition 2.8 (Cone, polyhedral cone)
A nonempty set C ⊆ Rn is called a (convex) cone, if for all x, y ∈ C and all λ, µ ≥ 0 we have

λx + µy ∈ C.

A cone is called polyhedral, if there exists a matrix A such that

C = { x : Ax ≤ 0 }.
Thus, cones are also polyhedra, albeit special ones. We will see that they play an important
role in the representation of polyhedra.
Suppose that we have a set S ⊆ Rn. Then, the set

cone(S) := { λ1 x1 + · · · + λk xk : xi ∈ S, λi ≥ 0 for i = 1, . . ., k }

is obviously a cone. We call this the cone generated by S. It is easy to see that this is the smallest cone containing S.
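As a toy illustration (my own example, not from the notes): in R2 the point (3, 2) lies in the cone generated by S = {(1, 0), (1, 1)}, witnessed by the nonnegative multipliers λ = (1, 2):

```python
# Membership witness for cone(S): (3, 2) = 1*(1, 0) + 2*(1, 1).
v1, v2 = (1, 0), (1, 1)
lam = (1.0, 2.0)                 # both multipliers nonnegative

point = tuple(lam[0] * a + lam[1] * b for a, b in zip(v1, v2))
assert all(l >= 0 for l in lam)
assert point == (3.0, 2.0)
```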
2.5 Linear Programming
We briefly recall the following fundamental results from Linear Programming which we
will use in these notes. For proofs, we refer to standard books about Linear Programming
such as [Sch86, Chv83].
Theorem 2.9 (Duality Theorem of Linear Programming) Let A be an m × n-matrix, b ∈ Rm and c ∈ Rn. Define the polyhedra P = { x : Ax ≤ b } and Q = { y : AT y = c, y ≥ 0 }.

(i) If x ∈ P and y ∈ Q, then cT x ≤ bT y. (weak duality)

(ii) In fact, we have

max{ cT x : x ∈ P } = min{ bT y : y ∈ Q },    (2.3)

provided that both sets P and Q are nonempty. (strong duality) □
Theorem 2.10 (Complementary Slackness Theorem) Let x∗ be a feasible solution of max{ cT x : Ax ≤ b } and y∗ be a feasible solution of min{ bT y : AT y = c, y ≥ 0 }. Then x∗ and y∗ are optimal solutions for the maximization problem and minimization problem, respectively, if and only if they satisfy the complementary slackness conditions:

for each i = 1, . . ., m, either y∗i = 0 or Ai,· x∗ = bi.    (2.4)

□
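Both theorems can be illustrated on a tiny LP worked by hand (my own example; the optimal x∗ and y∗ below are claims verified by the assertions, not produced by a solver). Primal: max{ x1 + x2 : x1 ≤ 2, x2 ≤ 3, x1 + x2 ≤ 10 }.

```python
# Check strong duality and complementary slackness by hand.
A = [[1, 0], [0, 1], [1, 1]]
b = [2, 3, 10]
c = [1, 1]
x_opt = [2, 3]        # primal optimum (claimed)
y_opt = [1, 1, 0]     # dual optimum: A^T y = c, y >= 0 (claimed)

# Dual feasibility: A^T y = c.
assert [sum(A[i][j] * y_opt[i] for i in range(3)) for j in range(2)] == c

# Strong duality: c^T x* = b^T y*.
primal = sum(c[j] * x_opt[j] for j in range(2))
dual = sum(b[i] * y_opt[i] for i in range(3))
assert primal == dual == 5

# Complementary slackness: for each i, y_i = 0 or the ith row is tight.
for i in range(3):
    Ax_i = sum(A[i][j] * x_opt[j] for j in range(2))
    assert y_opt[i] == 0 or Ax_i == b[i]
```

Note how the third constraint is slack (5 < 10) and, accordingly, its dual variable is zero.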
Theorem 2.11 (Farkas’ Lemma) The set { x : Ax = b, x ≥ 0 } is nonempty if and only if there is no vector y such that AT y ≥ 0 and bT y < 0. □
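A Farkas certificate can be verified on a tiny infeasible system (my own example, not from the notes): { x ≥ 0 : x1 + x2 = 1 and x1 + x2 = 2 } is clearly empty, and y = (1, −1) proves it.

```python
# Farkas certificate: A^T y >= 0 while b^T y < 0, so Ax = b, x >= 0
# has no solution.
A = [[1, 1], [1, 1]]
b = [1, 2]
y = [1, -1]

ATy = [sum(A[i][j] * y[i] for i in range(2)) for j in range(2)]
bTy = sum(b[i] * y[i] for i in range(2))
assert all(v >= 0 for v in ATy)   # A^T y = (0, 0) >= 0
assert bTy < 0                    # b^T y = -1 < 0
```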
2.6 Agenda
These lecture notes consist of two main parts. The goals of Part I are as follows:

(i) Prove that for any rational polyhedron P(A, b) = { x : Ax ≤ b } and X = P ∩ Zn, the set conv(X) is again a rational polyhedron.

(ii) Use the fact that max{ cT x : x ∈ X } = max{ cT x : x ∈ conv(X) } (see Observation 2.2) and (i) to show that the latter problem can be solved by means of Linear Programming, by showing that an optimum solution will always be found at an extreme point of the polyhedron conv(X) (which we show to be a point in X).

(iii) Give tools to derive good formulations.