Chapter 2 Methods to enhance formulations

4.4 Geometry and solution of Lagrangean duals
 Solving the Lagrangean dual: Z_D = max_{Ξ»β‰₯0} Z(Ξ»), where
Z(Ξ») = min_{k∈K} (cβ€²x^k + Ξ»β€²(b βˆ’ Ax^k)) (unless Z(Ξ») = βˆ’βˆž).
Let f_k = b βˆ’ Ax^k and h_k = cβ€²x^k. Then
Z(Ξ») = min_{k∈K} (h_k + f_kβ€²Ξ») (piecewise linear and concave).
 𝑍(πœ†) is nondifferentiable, but mimic the idea for differentiable function
πœ†π‘‘=1 = πœ†π‘‘ + πœƒπ‘‘ 𝛻𝑍 πœ†π‘‘ ,
𝑑 = 1,2, …
 Prop 4.5: A function f: R^n β†’ R is concave if and only if for any x* ∈ R^n there exists a vector s ∈ R^n such that
f(x) ≀ f(x*) + sβ€²(x βˆ’ x*), for all x ∈ R^n.
 The vector s is the gradient of f at x* (if f is differentiable). We generalize this property to nondifferentiable functions.
 Def 4.3: Let f be a concave function. A vector s such that
f(x) ≀ f(x*) + sβ€²(x βˆ’ x*), for all x ∈ R^n,
is called a subgradient of f at x*. The set of all subgradients of f at x* is denoted πœ•f(x*) and is called the subdifferential of f at x*.
 Prop: The subdifferential πœ•f(x*) is closed and convex.
Pf) πœ•f(x*) is the intersection of closed half spaces (one for each x ∈ R^n).
 If f is differentiable at x*, then πœ•f(x*) = {βˆ‡f(x*)}.
For convex functions, a subgradient is defined as a vector s satisfying
f(x) β‰₯ f(x*) + sβ€²(x βˆ’ x*).
 Geometric intuition: think in R^{n+1}:
f(x) ≀ f(x*) + sβ€²(x βˆ’ x*)
⟺ sβ€²x* βˆ’ f(x*) ≀ sβ€²x βˆ’ f(x)
⟺ (s, βˆ’1)β€²(x*, f(x*)) ≀ (s, βˆ’1)β€²(x, f(x))
⟺ (s, βˆ’1)β€²{(x, f(x)) βˆ’ (x*, f(x*))} β‰₯ 0
 Picture: [Figure: the line f(x) = f(x*) + sβ€²(x βˆ’ x*) supports the graph of the concave f from above at (x*, f(x*)); the normal vector (s, βˆ’1) makes a nonnegative inner product with (x, f(x)) βˆ’ (x*, f(x*)) for every x.]
 For convex functions: (s, βˆ’1)β€²{(x, f(x)) βˆ’ (x*, f(x*))} ≀ 0.
[Figure: the line f(x) = f(x*) + sβ€²(x βˆ’ x*) supports the graph of the convex f from below at (x*, f(x*)).]
 Prop 4.6: Let f: R^n β†’ R be a concave function. A vector x* maximizes f over R^n if and only if 0 ∈ πœ•f(x*).
Pf) A vector x* maximizes f over R^n if and only if
f(x) ≀ f(x*) = f(x*) + 0β€²(x βˆ’ x*), for all x ∈ R^n,
which is equivalent to 0 ∈ πœ•f(x*).
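Continuing the illustrative f(x) = βˆ’|x| example above: 0 ∈ πœ•f(0) = [βˆ’1, 1], which certifies that x* = 0 is a maximizer even though f is not differentiable there.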
 Prop 4.7: Let
Z(Ξ») = min_{k∈K} (h_k + f_kβ€²Ξ»),
E(Ξ») = {k ∈ K : Z(Ξ») = h_k + f_kβ€²Ξ»}.
Then, for every Ξ»* β‰₯ 0 the following relations hold:
(a) For every k ∈ E(Ξ»*), f_k is a subgradient of the function Z(Β·) at Ξ»*.
(b) πœ•Z(Ξ»*) = conv{f_k : k ∈ E(Ξ»*)}, i.e., a vector s is a subgradient of the function Z(Β·) at Ξ»* if and only if s is a convex combination of the vectors f_k, k ∈ E(Ξ»*).
Pf) (a) By definition of Z(Ξ»), we have for every k ∈ E(Ξ»*)
Z(Ξ») ≀ h_k + f_kβ€²Ξ» = h_k + f_kβ€²Ξ»* + f_kβ€²(Ξ» βˆ’ Ξ»*) = Z(Ξ»*) + f_kβ€²(Ξ» βˆ’ Ξ»*).
(b) We have shown that {f_k : k ∈ E(Ξ»*)} βŠ† πœ•Z(Ξ»*), and since a convex combination of two subgradients is also a subgradient, it follows that
conv{f_k : k ∈ E(Ξ»*)} βŠ† πœ•Z(Ξ»*).
Assume that there is s ∈ πœ•Z(Ξ»*) such that s βˆ‰ conv{f_k : k ∈ E(Ξ»*)}; we derive a contradiction. By the separating hyperplane theorem, there exist a vector d and a scalar Ξ³ such that
dβ€²f_k β‰₯ Ξ³, for all k ∈ E(Ξ»*), and dβ€²s < Ξ³. (4.29)
Since Z(Ξ»*) < h_i + f_iβ€²Ξ»* for all i βˆ‰ E(Ξ»*), for sufficiently small Ξ΅ > 0 we have
Z(Ξ»* + Ξ΅d) = h_k + f_kβ€²(Ξ»* + Ξ΅d), for some k ∈ E(Ξ»*).
Since Z(Ξ»*) = h_k + f_kβ€²Ξ»*, we obtain
Z(Ξ»* + Ξ΅d) βˆ’ Z(Ξ»*) = Ξ΅f_kβ€²d, for some k ∈ E(Ξ»*).
From (4.29), we have
Z(Ξ»* + Ξ΅d) βˆ’ Z(Ξ»*) β‰₯ Ργ. (4.30)
Since s ∈ πœ•Z(Ξ»*), we have
Z(Ξ»* + Ξ΅d) ≀ Z(Ξ»*) + Ξ΅sβ€²d (def’n of subgradient),
and hence, since dβ€²s < Ξ³,
Z(Ξ»* + Ξ΅d) βˆ’ Z(Ξ»*) < Ργ,
contradicting (4.30).
 Subgradient algorithm
 For differentiable functions, the direction βˆ‡f(x) is the direction of steepest ascent of f at x. For nondifferentiable functions, the direction of a subgradient may not be an ascent direction. However, by moving along the direction of a subgradient, we can move closer to the maximum point of a concave function.
 Ex: f(x_1, x_2) = min{βˆ’x_1, x_1 βˆ’ 2x_2, x_1 + 2x_2} is a concave function. The direction of a subgradient may not be an ascent direction for f, but we can still move closer to the maximum point.
[Figure: level sets f(x) = 0, βˆ’1, βˆ’2 in the (x_1, x_2)-plane. At x* = (βˆ’2, 0), the two active pieces give the subgradients s^1 = (1, βˆ’2) and s^2 = (1, 2); the lines s^1β€²(x βˆ’ x*) = 0 and s^2β€²(x βˆ’ x*) = 0 pass through x*.]
 The subgradient optimization algorithm
Input: A nondifferentiable concave function Z(Ξ»).
Output: A maximizer of Z(Ξ») subject to Ξ» β‰₯ 0.
Algorithm:
1. Choose a starting point λ¹ β‰₯ 0; let t = 1.
2. Given Ξ»^t, check whether 0 ∈ πœ•Z(Ξ»^t). If so, then Ξ»^t is optimal and the algorithm terminates. Else, choose a subgradient s^t of the function Z at Ξ»^t (for the Lagrangean dual, f_k = b βˆ’ Ax^k).
3. Let Ξ»_j^{t+1} = max{Ξ»_j^t + ΞΈ_t s_j^t, 0} (projection onto Ξ» β‰₯ 0), where ΞΈ_t is a positive step size parameter. Increment t and go to Step 2.
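A minimal Python sketch of this method, assuming the same piecewise-linear form Z(Ξ») = min_k (h_k + f_kβ€²Ξ») as above (the data h, f and the diminishing step ΞΈ_t = 1/t are illustrative assumptions, not from the text):

```python
import numpy as np

def subgradient_ascent(h, f, lam0, iters=200):
    """Projected subgradient method for max_{lam >= 0} Z(lam),
    where Z(lam) = min_k (h[k] + f[k]' lam) is concave piecewise linear."""
    lam, best_lam = lam0.copy(), lam0.copy()
    best = -np.inf
    for t in range(1, iters + 1):
        values = h + f @ lam          # h_k + f_k' lam for every piece k
        k = int(np.argmin(values))    # an active piece: f[k] is a subgradient
        if values[k] > best:          # convergence is not monotone, keep best
            best, best_lam = values[k], lam.copy()
        theta = 1.0 / t               # diminishing step size (rule (a) below)
        lam = np.maximum(lam + theta * f[k], 0.0)   # step, then project
    return best, best_lam

# Illustrative data: Z(lam) = min(-2 + lam, 2 - lam) in one dual variable.
h = np.array([-2.0, 2.0])
f = np.array([[1.0], [-1.0]])
print(subgradient_ascent(h, f, np.array([0.0])))  # approaches max 0 at lam = 2
```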
 Choosing the step lengths:
 Thm:
(a) If Ξ£_{t=1}^∞ ΞΈ_t = ∞ and lim_{tβ†’βˆž} ΞΈ_t = 0, then Z(Ξ»^t) β†’ Z_D, the optimal value of Z(Β·).
(b) If ΞΈ_t = ΞΈ_0Ξ±^t, t = 1, 2, …, for some parameter 0 < Ξ± < 1, then Z(Ξ»^t) β†’ Z_D if ΞΈ_0 and Ξ± are sufficiently large.
(c) If ΞΈ_t = f(Z_D* βˆ’ Z(Ξ»^t)) / β€–s^tβ€–Β², where 0 < f < 2 and Z_D* ≀ Z_D, then Z(Ξ»^t) β†’ Z_D*, or the algorithm finds Ξ»^t with Z_D* ≀ Z(Ξ»^t) ≀ Z_D for some finite t (see the code sketch below).
 (a) guarantees convergence, but it is slow (e.g., ΞΈ_t = 1/t).
(b) In practice, one may halve the value of ΞΈ_t after every v iterations.
(c) Z_D* is typically unknown; a good primal upper bound may be used in place of Z_D*. If the iterates do not converge, decrease Z_D*.
 In practice, the condition 0 ∈ πœ•Z(Ξ»^t) is rarely met. Usually, one only finds an approximate optimal solution quickly and resorts to branch-and-bound. Convergence is not monotone.
If Z_D = Z_LP, solving the LP relaxation may give better results (monotone convergence).
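A one-function sketch of step rule (c), assuming a target value z_target (e.g., a primal bound used in place of Z_D*); the names are illustrative, not from the text:

```python
import numpy as np

def polyak_step(z_target, z_lam, s, f_scale=1.0):
    """Step rule (c): theta_t = f (Z_D* - Z(lam^t)) / ||s^t||^2, with 0 < f < 2.
    z_target plays the role of Z_D*; z_lam is Z(lam^t); s is the subgradient s^t."""
    return f_scale * (z_target - z_lam) / float(np.dot(s, s))
```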
 Ex: Traveling salesman problem:
For the TSP, we dualize the degree constraints for all nodes except node 1:
Z(Ξ») = min Ξ£_{e∈E} c_e x_e + Ξ£_{i∈Vβˆ–{1}} Ξ»_i (2 βˆ’ Ξ£_{eβˆˆΞ΄(i)} x_e)
( = min Ξ£_{e={i,j}∈E} (c_e βˆ’ Ξ»_i βˆ’ Ξ»_j) x_e + 2 Ξ£_{i∈Vβˆ–{1}} Ξ»_i, with Ξ»_1 = 0 )
subject to Ξ£_{eβˆˆΞ΄(1)} x_e = 2,
Ξ£_{e∈E(S)} x_e ≀ |S| βˆ’ 1, S βŠ‚ V βˆ– {1}, S β‰  βˆ…, V βˆ– {1},
Ξ£_{e∈E(Vβˆ–{1})} x_e = |V| βˆ’ 2,
x_e ∈ {0, 1}.
The step direction is
Ξ»_i^{t+1} = Ξ»_i^t + ΞΈ_t (2 βˆ’ Ξ£_{eβˆˆΞ΄(i)} x_e(Ξ»^t)) ( s^t = b βˆ’ Ax(Ξ»^t) ),
and the step size using rule (c) is
ΞΈ_t = f(Z_D* βˆ’ Z(Ξ»^t)) / Ξ£_{i∈V} (2 βˆ’ Ξ£_{eβˆˆΞ΄(i)} x_e(Ξ»^t))Β² ( ΞΈ_t = f(Z_D* βˆ’ Z(Ξ»^t)) / β€–b βˆ’ Ax(Ξ»^t)β€–Β² ).
 Note that the i-th coordinate of the subgradient direction is two minus the number of edges incident to node i in the optimal one-tree. We do not need Ξ» β‰₯ 0 here, since the dualized constraints are equalities.
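A compact Python sketch of one subgradient step for this one-tree relaxation (the Prim-style MST helper and the 4-city instance are illustrative assumptions; the one-tree structure itself takes care of the remaining constraints):

```python
def one_tree(n, cost):
    """Min-cost one-tree: an MST on nodes {1,...,n-1} (Prim), plus the two
    cheapest edges incident to node 0. Returns the list of chosen edges."""
    in_tree = {1}
    edges = []
    while len(in_tree) < n - 1:
        i, j = min(((i, j) for i in in_tree for j in range(1, n)
                    if j not in in_tree), key=lambda e: cost[e[0]][e[1]])
        edges.append((i, j)); in_tree.add(j)
    e1, e2 = sorted(range(1, n), key=lambda j: cost[0][j])[:2]
    return edges + [(0, e1), (0, e2)]

def subgradient_step(n, c, lam, theta):
    """One dual-ascent iteration: solve the one-tree with adjusted costs
    c_e - lam_i - lam_j (lam[0] fixed at 0), then move lam along
    s_i = 2 - (degree of i in the one-tree)."""
    adj = [[c[i][j] - lam[i] - lam[j] for j in range(n)] for i in range(n)]
    edges = one_tree(n, adj)
    z = sum(adj[i][j] for i, j in edges) + 2 * sum(lam)  # Z(lam)
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1; deg[j] += 1
    # No projection: the multipliers are free since equalities were dualized.
    return z, [0.0] + [lam[i] + theta * (2 - deg[i]) for i in range(1, n)]

# Illustrative symmetric 4-city instance (optimal tour cost is 17).
c = [[0, 3, 5, 9], [3, 0, 2, 6], [5, 2, 0, 3], [9, 6, 3, 0]]
lam, best = [0.0] * 4, float("-inf")
for t in range(1, 30):
    z, lam = subgradient_step(4, c, lam, theta=1.0 / t)
    best = max(best, z)
print(best)   # best Lagrangean lower bound found on the tour cost (<= 17)
```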
 Lagrangean heuristic and variable fixing:
 The solution obtained from solving the Lagrangean relaxation may not be feasible for the IP, but it can be close to a feasible solution. We may obtain a feasible solution to the IP from it using heuristic procedures, and thereby obtain a good upper bound on the optimal value. This is also called a β€˜primal heuristic’.
 We may fix the values of some variables using information from the Lagrangean relaxation (refer to W, pp. 177-178).
 Branching variable selection? (solutions to the Lagrangean relaxation are integer valued)
 Choosing a Lagrangean dual:
How do we determine which constraints to relax?
οƒ˜ Strength of the Lagrangean dual bound
οƒ˜ Ease of solution of Z(Ξ»)
οƒ˜ Ease of solution of the Lagrangean dual problem Z_D = max_{Ξ»β‰₯0} Z(Ξ»)
 Ex: Generalized assignment problem (max problem) (refer to W, p. 179):
z = max Ξ£_{j=1}^n Ξ£_{i=1}^m c_ij x_ij
subject to Ξ£_{j=1}^n x_ij ≀ 1, for i = 1, …, m
Ξ£_{i=1}^m a_ij x_ij ≀ b_j, for j = 1, …, n
x ∈ B^{mn}
 Dualize both sets of constraints:
Z_D^1 = min_{uβ‰₯0, vβ‰₯0} Z^1(u, v), where
Z^1(u, v) = max Ξ£_{j=1}^n Ξ£_{i=1}^m (c_ij βˆ’ u_i βˆ’ a_ij v_j) x_ij + Ξ£_{i=1}^m u_i + Ξ£_{j=1}^n v_j b_j
subject to x ∈ B^{mn}
 Dualize the first set of (assignment) constraints:
Z_D^2 = min_{uβ‰₯0} Z^2(u), where
Z^2(u) = max Ξ£_{j=1}^n Ξ£_{i=1}^m (c_ij βˆ’ u_i) x_ij + Ξ£_{i=1}^m u_i
subject to Ξ£_{i=1}^m a_ij x_ij ≀ b_j, for j = 1, …, n
x ∈ B^{mn}
 Dualize the knapsack constraints:
Z_D^3 = min_{vβ‰₯0} Z^3(v), where
Z^3(v) = max Ξ£_{j=1}^n Ξ£_{i=1}^m (c_ij βˆ’ a_ij v_j) x_ij + Ξ£_{j=1}^n v_j b_j
subject to Ξ£_{j=1}^n x_ij ≀ 1, for i = 1, …, m
x ∈ B^{mn}
 Z_D^1 = Z_D^3 = Z_LP, since for each i,
conv{x : Ξ£_{j=1}^n x_ij ≀ 1, x_ij ∈ {0,1} for j = 1, …, n}
= {x : Ξ£_{j=1}^n x_ij ≀ 1, 0 ≀ x_ij ≀ 1 for j = 1, …, n}.
Z^1(u, v) and Z^3(v) can be solved by inspection (see the sketch below). Calculating Z_D^3 looks easier than calculating Z_D^1, since there are n dual variables compared to m + n for Z_D^1.
 Z_D^2 ≀ Z_LP
 To find Z^2(u), we need to solve n 0-1 knapsack problems. Also note that the information we may obtain while solving the knapsacks cannot be stored and used for subsequent optimization.
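Both inspection solutions can be made concrete in a few lines of Python (the instance data below are hypothetical): Z^1(u, v) is solved by setting x_ij = 1 exactly when its adjusted profit is positive, and Z^3(v) by picking, for each i, the single best j with positive adjusted profit.

```python
import numpy as np

def solve_Z1(c, a, b, u, v):
    """Z^1(u, v): no constraints remain on x, so x_ij = 1 exactly when the
    adjusted profit c_ij - u_i - a_ij v_j is positive."""
    adj = c - u[:, None] - a * v[None, :]
    return adj[adj > 0].sum() + u.sum() + v @ b

def solve_Z3(c, a, b, v):
    """Z^3(v): only sum_j x_ij <= 1 remains, so for each i pick the single
    best j with positive adjusted profit c_ij - a_ij v_j (or no j at all)."""
    adj = c - a * v[None, :]
    return np.maximum(adj.max(axis=1), 0.0).sum() + v @ b

# Hypothetical instance with m = 2, n = 3.
c = np.array([[6.0, 4.0, 2.0], [5.0, 7.0, 3.0]])
a = np.array([[2.0, 3.0, 1.0], [4.0, 1.0, 2.0]])
b = np.array([5.0, 4.0, 3.0])
u = np.array([1.0, 1.0]); v = np.array([0.5, 0.5, 0.5])
print(solve_Z1(c, a, b, u, v), solve_Z3(c, a, b, v))
```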
 May solve the Lagrangean dual by constraint generation (NW, pp. 411-412):
 Recall that
Z_D = maximize y
subject to y + Ξ»β€²(Ax^k βˆ’ b) ≀ cβ€²x^k, k ∈ K
Ξ»β€²Aw^j ≀ cβ€²w^j, j ∈ J
Ξ» β‰₯ 0
 Given (y*, Ξ»*) with Ξ»* β‰₯ 0, calculate
Z(Ξ»*) = min_{x∈conv(F)} (cβ€²x + Ξ»*β€²(b βˆ’ Ax)) ( = min (cβ€² βˆ’ Ξ»*β€²A)x + Ξ»*β€²b ).
If y* ≀ Z(Ξ»*), stop: y* = Z(Ξ»*), Ξ» = Ξ»* is an optimal solution.
If y* > Z(Ξ»*), an inequality is violated:
οƒ˜ If Z(Ξ»*) = βˆ’βˆž, there exists a ray w^j such that (cβ€² βˆ’ Ξ»*β€²A)w^j < 0, hence Ξ»β€²Aw^j ≀ cβ€²w^j is violated.
οƒ˜ If Z(Ξ»*) is finite, there exists an extreme point x^k such that Z(Ξ»*) = cβ€²x^k + Ξ»*β€²(b βˆ’ Ax^k). Since y* > Z(Ξ»*), y + Ξ»β€²(Ax^k βˆ’ b) ≀ cβ€²x^k is violated.
 Note that max/min are interchanged in NW. (A sketch of this loop follows.)
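A minimal Python sketch of this cutting-plane loop, under simplifying assumptions not in the text: F is a small finite set (so Z(Ξ»*) is never βˆ’βˆž and no ray cuts arise), and Ξ» is boxed by an artificial upper bound to keep the first master LP bounded. scipy.optimize.linprog solves the master; the instance reuses the data of Ex 4.7 below.

```python
import numpy as np
from scipy.optimize import linprog

def solve_dual_by_cuts(c, A, b, F, lam_box=100.0, tol=1e-9):
    """Maximize y s.t. y + lam'(A x^k - b) <= c'x^k over generated points x^k,
    0 <= lam <= lam_box; cuts come from the oracle Z(lam) = min_{x in F} c'x + lam'(b - Ax)."""
    m = len(b)
    cuts = [F[0]]                                  # start with one point of F
    while True:
        # Master LP over (y, lam): minimize -y subject to the current cuts.
        A_ub = np.array([np.concatenate(([1.0], A @ xk - b)) for xk in cuts])
        b_ub = np.array([c @ xk for xk in cuts])
        res = linprog(np.concatenate(([-1.0], np.zeros(m))),
                      A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0.0, lam_box)] * m)
        y_star, lam_star = res.x[0], res.x[1:]
        # Oracle: evaluate Z(lam*) by enumerating the finite set F.
        vals = [c @ x + lam_star @ (b - A @ x) for x in F]
        k = int(np.argmin(vals))
        if y_star <= vals[k] + tol:                # no violated inequality left
            return vals[k], lam_star
        cuts.append(F[k])                          # add the violated cut

# Tiny instance: min c'x s.t. Ax >= b (dualized), x in F (data of Ex 4.7 below).
c = np.array([3.0, -1.0])
A = np.array([[1.0, -1.0]])
b = np.array([-1.0])
F = [np.array(p, dtype=float) for p in
     [(1, 0), (2, 0), (1, 1), (2, 1), (0, 2), (1, 2), (2, 2), (1, 3), (2, 3)]]
print(solve_dual_by_cuts(c, A, b, F))              # ~(-1/3, [5/3]); cf. Ex 4.7
```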
Nonlinear Optimization Problems
 Geometric understanding of the strength of the bounds provided by the Lagrangean dual.
 Let f: R^n ↦ R and g: R^n ↦ R^m be continuous functions, and let β„± βŠ‚ R^n. Consider
minimize f(x)
subject to g(x) ≀ 0, (4.31)
x ∈ β„±.
Let Z_P be the optimal cost, and consider the Lagrangean function
Z(Ξ») = min_{xβˆˆβ„±} {f(x) + Ξ»β€²g(x)}.
 For all Ξ» β‰₯ 0, Z(Ξ») ≀ Z_P, and Z(Ξ») is a concave function.
Lagrangean dual: Z_D = max_{Ξ»β‰₯0} Z(Ξ»).
 Let Y = {(y, z)β€² ∈ R^{m+1} : y β‰₯ (=) f(x), z β‰₯ (=) g(x), for some x ∈ β„±}.
Problem (4.31) can be restated as
minimize y
subject to (y, 0)β€² ∈ Y. ( (y, z)β€² ∈ Y, z ≀ 0 )
 [Figure 4.7: The geometric interpretation of the Lagrangean dual. Horizontal axis g (z), vertical axis f (y); the set conv(Y) is shown. Z_P is the lowest point of Y on the vertical axis, while Z_D is the vertical intercept of the best supporting hyperplane Z(Ξ») = y + Ξ»β€²z lying below Y.]
 Given that Z(Ξ») = min_{xβˆˆβ„±} f(x) + Ξ»β€²g(x), we have, for a fixed Ξ» and all x ∈ β„±, that
Z(Ξ») ≀ y + Ξ»β€²z, for all (y, z)β€² ∈ Y.
Geometrically, this means that the hyperplane Z(Ξ») = y + Ξ»β€²z lies below the set Y.
For z = 0, we obtain y = Z(Ξ»); that is, Z(Ξ») is the intercept of the hyperplane Z(Ξ») = y + Ξ»β€²z with the vertical axis.
Maximizing Z(Ξ») therefore corresponds to finding a hyperplane Z(Ξ») = y + Ξ»β€²z that lies below the set Y and whose intercept with the vertical axis is largest.
 Thm 4.10: The value Z_D of the Lagrangean dual is equal to the value of the following optimization problem:
minimize y
subject to (y, 0)β€² ∈ conv(Y). ( (y, z)β€² ∈ conv(Y), z ≀ 0 )
Pf) not given here.
 Ex 4.7: Ex 4.3 revisited.
f(x_1, x_2) = 3x_1 βˆ’ x_2 and g(x_1, x_2) = βˆ’1 βˆ’ x_1 + x_2,
β„± = {(1, 0), (2, 0), (1, 1), (2, 1), (0, 2), (1, 2), (2, 2), (1, 3), (2, 3)}
(f, g) = {(3, βˆ’2), (6, βˆ’3), (2, βˆ’1), (5, βˆ’2), (βˆ’2, 1), (1, 0), (4, βˆ’1), (0, 1), (3, 0)}
[Figure: the nine points (g, f) and conv(Y); Z_P = 1 and Z_D = βˆ’1/3, so there is a duality gap of 4/3.]
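A short numeric check of these values (this computation is mine, not from the text): Z(Ξ») = min over the nine (f, g) pairs of f + Ξ»g is piecewise linear and concave, so its maximum over Ξ» β‰₯ 0 is attained at Ξ» = 0 or at an intersection of two of the lines; enumerating those candidates recovers Z_D = βˆ’1/3, while Z_P = 1 comes from the best point with g ≀ 0.

```python
from itertools import combinations

# (f, g) values of the nine points of F in Ex 4.7.
pts = [(3, -2), (6, -3), (2, -1), (5, -2), (-2, 1), (1, 0), (4, -1), (0, 1), (3, 0)]

Z_P = min(f for f, g in pts if g <= 0)            # best feasible point: 1

def Z(lam):
    return min(f + lam * g for f, g in pts)       # Lagrangean function

# Max of a concave piecewise-linear function: check lam = 0 and every
# nonnegative intersection of a pair of lines f1 + lam*g1 = f2 + lam*g2.
cands = [0.0] + [(f1 - f2) / (g2 - g1)
                 for (f1, g1), (f2, g2) in combinations(pts, 2)
                 if g1 != g2 and (f1 - f2) / (g2 - g1) >= 0]
Z_D = max(Z(l) for l in cands)
print(Z_P, Z_D)                                    # 1 and -0.333..., gap 4/3
```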