Lecture 12: Convex Programming Problems
Lagrange Multiplier Method
Homework
Review
Summary
Test
1
Constrained Optimization
Lagrange Multiplier Method
Remember:
1. Standard form
2. Max problems: minimize f(x) = −F(x)
MINIMIZE:
f(x)
Subject to:
hᵢ(x) = 0,  i = 1…p
gⱼ(x) ≤ 0,  j = 1…m
xₖ⁽ᴸ⁾ ≤ xₖ ≤ xₖ⁽ᵁ⁾,  k = 1…n
2
KKT Necessary Conditions for Min
L(x, ν, u, s) = f(x) + Σ νᵢ hᵢ(x) + Σ uⱼ (gⱼ(x) + sⱼ²)
(sums over i = 1…p and j = 1…m)
∂f/∂xₖ + Σ νᵢ ∂hᵢ/∂xₖ + Σ uⱼ ∂gⱼ/∂xₖ = 0  for k = 1 to n
hᵢ(x*) = 0  for i = 1 to p
gⱼ(x*) + sⱼ² = 0  for j = 1 to m
sⱼ² ≥ 0  for j = 1 to m
uⱼ* sⱼ = 0  for j = 1 to m
uⱼ* ≥ 0  for j = 1 to m
Regularity check - gradients of active inequality
constraints are linearly independent
3
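The conditions above can be checked numerically. Below is a hedged sketch (not part of the lecture; `grad` and `kkt_residual` are illustrative names): gradients are approximated by central differences, and the largest violation of any KKT condition is reported.

```python
# Sketch of a numerical KKT check (illustrative helpers; not from the slides).
def grad(f, x, eps=1e-6):
    """Central-difference gradient of f at point x."""
    g = []
    for k in range(len(x)):
        xp, xm = list(x), list(x)
        xp[k] += eps
        xm[k] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def kkt_residual(x, v, u, f, hs, gs):
    """Largest violation of the KKT conditions at (x, v, u); s_j = 0 assumed."""
    gf = grad(f, x)
    station = [
        gf[k]
        + sum(vi * grad(h, x)[k] for vi, h in zip(v, hs))
        + sum(uj * grad(g, x)[k] for uj, g in zip(u, gs))
        for k in range(len(x))
    ]
    feas_h = [h(x) for h in hs]                   # h_i(x*) = 0
    feas_g = [max(g(x), 0.0) for g in gs]         # g_j(x*) <= 0
    comple = [uj * g(x) for uj, g in zip(u, gs)]  # u_j * g_j(x*) = 0
    nonneg = [max(-uj, 0.0) for uj in u]          # u_j >= 0
    return max(abs(r) for r in station + feas_h + feas_g + comple + nonneg)

# Prob 4.122 (solved later in this lecture): residual ~ 0 at the KKT point.
f = lambda x: (x[0] - 3) ** 2 + (x[1] - 3) ** 2
h = lambda x: x[0] - 3 * x[1] - 1
g = lambda x: x[0] + x[1] - 4
print(kkt_residual([3.25, 0.75], [-1.25], [0.75], f, [h], [g]))
```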
Prob 4.122
Min f(x1, x2) = (x1 − 3)² + (x2 − 3)²
subject to h = x1 − 3x2 − 1 = 0
g = x1 + x2 − 4 ≤ 0
L = f(x1, x2) + ν(h) + u(g + s²)
L = (x1 − 3)² + (x2 − 3)² +
ν[x1 − 3x2 − 1] + u[x1 + x2 − 4 + s²]
4
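As a sanity check on the setup above, a brute-force scan (a sketch, not the lecture's method) can locate the constrained minimum. The equality h = 0 gives x1 = 3x2 + 1, so only x2 needs to be scanned while enforcing g ≤ 0.

```python
# Brute-force scan of Prob 4.122 along h = 0 (x1 = 3*x2 + 1), keeping g <= 0.
def f(x1, x2):
    return (x1 - 3) ** 2 + (x2 - 3) ** 2

f_min, x2_min = min(
    (f(3 * t + 1, t), t)
    for i in range(-20000, 20001)
    for t in (i / 10000,)          # scan x2 = t over [-2, 2]
    if (3 * t + 1) + t <= 4        # inequality g <= 0
)
print(f_min, 3 * x2_min + 1, x2_min)   # 5.125 at (x1, x2) = (3.25, 0.75)
```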
KKT Necessary Conditions:
∂L/∂x1 = 2(x1 − 3) + ν + u = 0
∂L/∂x2 = 2(x2 − 3) − 3ν + u = 0
h = x1 − 3x2 − 1 = 0
g = x1 + x2 − 4 + s² = 0
s² ≥ 0, us = 0
u ≥ 0
Regular point?
5
Case 1: u = 0
2(x1 − 3) + ν = 0
2(x2 − 3) − 3ν = 0
h = x1 − 3x2 − 1 = 0
g = x1 + x2 − 4 + s² = 0
As a linear system in (x1, x2, ν):
2x1 + 0x2 + 1ν = 6
0x1 + 2x2 − 3ν = 6
1x1 − 3x2 + 0ν = 1
Gaussian elimination (3 × row 1 + row 2 eliminates ν;
then substitute x1 = 3x2 + 1):
6x1 + 2x2 = 24
20x2 = 18
x2 = 18/20 = 0.9
6x1 + 2(0.9) = 24
x1 = 3.7
Check g: x1 + x2 − 4 + s² = 0
3.7 + 0.9 − 4 + s² = 0
s² = −0.6  BAD! (s² < 0 means g is violated, so Case 1 yields no KKT point)
6
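The Case 1 elimination can be reproduced with a small linear solver. This is a pure-Python sketch; `gauss_solve` is an illustrative helper (any linear solver works), applied to the 3×3 system above.

```python
# Solve the Case 1 system:  2*x1 + v = 6,  2*x2 - 3*v = 6,  x1 - 3*x2 = 1.
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting; returns x with A x = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

x1, x2, v = gauss_solve([[2, 0, 1], [0, 2, -3], [1, -3, 0]], [6, 6, 1])
s_sq = -(x1 + x2 - 4)        # from g + s^2 = 0
print(x1, x2, v, s_sq)       # s^2 = -0.6 < 0 -> Case 1 is infeasible
```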
Case 2: s = 0
2(x1 − 3) + ν + u = 0
2(x2 − 3) − 3ν + u = 0
h = x1 − 3x2 − 1 = 0
g = x1 + x2 − 4 + 0 = 0
As a linear system in (x1, x2, ν, u):
2x1 + 0x2 + 1ν + 1u = 6
0x1 + 2x2 − 3ν + 1u = 6
1x1 − 3x2 + 0ν + 0u = 1
1x1 + 1x2 + 0ν + 0u = 4
Subtracting the third equation from the fourth:
4x2 = 3
x2 = 3/4 = 0.75
x1 = 3(0.75) + 1
x1 = 3.25
7
Case 2 cont'd, find multipliers
Substitute x1 = 3.25, x2 = 0.75:
2(3.25) + ν + u = 6  →  ν + u = −0.5
2(0.75) − 3ν + u = 6  →  −3ν + u = 4.5
Subtracting: 4ν = −5, so ν = −1.25
u = 4.5 + 3ν = 4.5 + 3(−1.25) = 0.75
Results:
x1 = 3.25
x2 = 0.75
ν = −1.25
u = 0.75 ≥ 0
s = 0 (g is active)
f = 5.125
8
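The Case 2 numbers can be reproduced step by step; this is a sketch following the elimination above (variable names are illustrative).

```python
# Case 2 (s = 0): h and g are both equalities, so x solves them directly.
x2 = 3 / 4                 # (x1 + x2 = 4) minus (x1 - 3*x2 = 1) gives 4*x2 = 3
x1 = 3 * x2 + 1            # from h: x1 - 3*x2 - 1 = 0
# Stationarity: 2*(x1-3) + v + u = 0  and  2*(x2-3) - 3*v + u = 0.
# Subtracting the second from the first: 2*(x1 - x2) + 4*v = 0.
v = -(x1 - x2) / 2
u = -2 * (x1 - 3) - v      # back-substitute into the first equation
f_star = (x1 - 3) ** 2 + (x2 - 3) ** 2
print(x1, x2, v, u, f_star)   # 3.25 0.75 -1.25 0.75 5.125
```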
Case 2 cont'd, regular pt?
Regular Pt?
1. pt feasible, YES
2. active constraint gradients independent
Are the active constraint gradients
independent, i.e. not parallel?
∇h = [1, −3]ᵀ,  ∇g = [1, 1]ᵀ
Determinant of constraint gradients
non-singular?
A = [1 1; −3 1]
|A| = 4 ≠ 0
Case 2 results in a KKT point!
9
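The regularity check above amounts to a 2×2 determinant; a minimal sketch:

```python
# Gradients of the active constraints at the Case 2 point (from the slide).
grad_h = (1.0, -3.0)
grad_g = (1.0, 1.0)
# Columns of A are the constraint gradients; |A| != 0 means independence.
det_A = grad_h[0] * grad_g[1] - grad_g[0] * grad_h[1]
print(det_A)   # 4.0 -> nonzero, so the KKT point is regular
```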
Graphical Solution
∇f + ν∇h + u∇g = 0
[2(x1 − 3); 2(x2 − 3)] + ν[1; −3] + u[1; 1] = [0; 0]
2(3.25 − 3) − 1.25(1) + 0.75(1) = 0.5 − 1.25 + 0.75 = 0
2(0.75 − 3) − 1.25(−3) + 0.75(1) = −4.5 + 3.75 + 0.75 = 0
10
Constraint Sensitivity
x1 = 3.25
x2 = 0.75
ν = −1.25
u = 0.75 ≥ 0
s = 0
f = 5.125
Note how relaxing h increases the
feasible region but is in the wrong
"direction." Recall ν can be ±!
Multiply h by −1: ah ha!
11
Sufficient Condition
Is this a convex programming problem?
Check f(x) and constraints:
∂f/∂x1 = 2(x1 − 3)
∂f/∂x2 = 2(x2 − 3)
Hf = [2 0; 0 2]
M1 = 2, M2 = 4
From convexity theorems:
1. Hf is PD
2. All constraints are linear
Therefore the KKT pt is a global Min!
12
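The leading-principal-minor check of Hf on this slide (Sylvester's criterion) can be written out as a short sketch:

```python
# Leading principal minors of Hf = [[2, 0], [0, 2]] for f of Prob 4.122.
Hf = [[2.0, 0.0], [0.0, 2.0]]
M1 = Hf[0][0]
M2 = Hf[0][0] * Hf[1][1] - Hf[0][1] * Hf[1][0]
positive_definite = M1 > 0 and M2 > 0
print(M1, M2, positive_definite)   # 2.0 4.0 True -> f is strictly convex
```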
True/False
13
Lagrange Multiplier Method
• May produce a KKT point
• A KKT point is a CANDIDATE minimum
It may not be a local Min
• If a point fails KKT conditions, we cannot
guarantee anything….
The point may still be a minimum.
• We need a SUFFICIENT condition
14
Convex sets
Convex set:
all pts on a straight line between any
two pts of the feasible region are also
in the feasible region.
Non-convex set:
some pts on such a line are not
in the feasible region.
15
Multiple variables Fig 4.21
x = αx⁽²⁾ + (1 − α)x⁽¹⁾;  0 ≤ α ≤ 1
x = x⁽¹⁾ + α(x⁽²⁾ − x⁽¹⁾);  0 ≤ α ≤ 1
Example: g = x1² + x2² − 1.0 ≤ 0 defines a convex set (the unit disk).
What if it were an equality
constraint? (Only the circle itself would
be feasible - not a convex set.)
misprint
16
Figure 4.22 Convex function f(x)=x2
Bowl that holds water.
f(x) ≤ αf(x₂) + (1 − α)f(x₁);  0 ≤ α ≤ 1
f(x) ≤ f(x₁) + α(f(x₂) − f(x₁));  0 ≤ α ≤ 1
f(αx₂ + (1 − α)x₁) ≤ f(x₁) + α[f(x₂) − f(x₁)];  0 ≤ α ≤ 1
17
Fig 4.23 Convex function.
f(x) ≤ αf(x₂) + (1 − α)f(x₁);  0 ≤ α ≤ 1
f(x) ≤ f(x₁) + α(f(x₂) − f(x₁));  0 ≤ α ≤ 1
f(αx₂ + (1 − α)x₁) ≤ f(x₁) + α[f(x₂) − f(x₁)];  0 ≤ α ≤ 1
18
Test for Convex Function
f(x) ≤ αf(x⁽²⁾) + (1 − α)f(x⁽¹⁾);  0 ≤ α ≤ 1
f(x) ≤ f(x⁽¹⁾) + α(f(x⁽²⁾) − f(x⁽¹⁾));  0 ≤ α ≤ 1
Difficult to use above definition!
However, Thm 4.8 pg 163:
If the Hessian matrix of the function
is PD or PSD at all points in the set
S, then it is convex.
PD… "strictly" convex, otherwise
PSD… "convex"
19
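Since the inequality definition is hard to apply directly, a sampled check can at least detect non-convexity. This is a hedged sketch (the Hessian test of Thm 4.8 is the rigorous route; `seems_convex` is an illustrative helper): it samples random chords and looks for one that lies below the graph.

```python
import random

def seems_convex(f, lo, hi, trials=2000, tol=1e-9, seed=0):
    """Sampled test of f(a*x2 + (1-a)*x1) <= a*f(x2) + (1-a)*f(x1)."""
    rng = random.Random(seed)
    for _ in range(trials):
        x1 = rng.uniform(lo, hi)
        x2 = rng.uniform(lo, hi)
        a = rng.random()
        if f(a * x2 + (1 - a) * x1) > a * f(x2) + (1 - a) * f(x1) + tol:
            return False        # function rises above a chord: not convex
    return True                 # no violation found (not a proof of convexity)

print(seems_convex(lambda x: x * x, -5, 5))    # x^2: the bowl that holds water
print(seems_convex(lambda x: -x * x, -5, 5))   # -x^2 is not convex
```

Note the asymmetry: a `False` result is conclusive, while `True` only means no counterexample was sampled.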
Theorem 4.9
Given:
Constraint Set
S  {x | hi (x )  0, for i  1 to p;
g j (x )  0, j  1 to m}
S is convex if:
1. hi are linear
2. gj are convex i.e. Hg PD or PSD
When f(x) and S are convex=
“convex programming problem”
20
“Sufficient” Theorem 4.10, pg 165
The first-order KKT conditions are Necessary
and Sufficient for a GLOBAL minimum….if:
1. f(x) is convex
Hf(x) Positive definite
2. x lies in a convex feasible set S
Equality constraints must be linear
Inequality constraints must be convex
HINT: linear functions are convex!
21
Summary
• Lagrange multipliers are the
instantaneous rate of change in f(x)
w.r.t. relaxing a constraint.
• Equality constraints may need
tightening rather than loosening
• Convex sets assure contiguity of the
feasible region; a convex f(x) has no
separate local basins
• KKT pt of a convex programming
problem is a GLOBAL MINIMUM!
22