Geometric Programming
Chee Wei Tan
CS 8292 : Advanced Topics in Convex Optimization and its Applications
Fall 2010
Outline
• Standard GP and Convex GP
• Convexity of LogSumExp
• Generalized GP
• Dual GP
• Example
Acknowledgement: Thanks to Mung Chiang (Princeton), Stephen Boyd
(Stanford) and Steven Low (Caltech) for the course materials in this class.
Monomials and Posynomials
Monomial as a function f : R^n_{++} → R:

    f(x) = d x_1^{a^{(1)}} x_2^{a^{(2)}} · · · x_n^{a^{(n)}}

where the multiplicative constant d > 0 and the exponential constants a^{(j)} ∈ R, j = 1, 2, . . . , n
Sum of monomials is called a posynomial:

    f(x) = Σ_{k=1}^{K} d_k x_1^{a_k^{(1)}} x_2^{a_k^{(2)}} · · · x_n^{a_k^{(n)}}

where d_k ≥ 0, k = 1, 2, . . . , K, and a_k^{(j)} ∈ R, j = 1, 2, . . . , n, k = 1, 2, . . . , K
Example: 2 x^{-0.5} y^{π} √z is a monomial, x − y is not a posynomial
Standard GP & Convex GP
• GP standard form with variables x:

    minimize    f_0(x)
    subject to  f_i(x) ≤ 1, i = 1, 2, . . . , m,
                h_l(x) = 1, l = 1, 2, . . . , M

where f_i, i = 0, 1, . . . , m are posynomials and h_l, l = 1, 2, . . . , M are monomials

Log transformation: y_j = log x_j , b_{ik} = log d_{ik} , b_l = log d_l

• GP convex form with variables y:

    minimize    p_0(y) = log Σ_{k=1}^{K_0} exp(a_{0k}^T y + b_{0k})
    subject to  p_i(y) = log Σ_{k=1}^{K_i} exp(a_{ik}^T y + b_{ik}) ≤ 0, i = 1, 2, . . . , m,
                q_l(y) = a_l^T y + b_l = 0, l = 1, 2, . . . , M
In convex form, GP with only monomials reduces to LP
Convexity of LogSumExp
Log sum inequality (readily proved by the convexity of f (t) = t log t, t ≥ 0):
    Σ_{i=1}^{n} a_i log(a_i / b_i) ≥ (Σ_{i=1}^{n} a_i) log( Σ_{i=1}^{n} a_i / Σ_{i=1}^{n} b_i )

where a_i, b_i ∈ R_+, i = 1, 2, . . . , n

Let b̂_i = log b_i and Σ_{i=1}^{n} a_i = 1:

    log( Σ_{i=1}^{n} e^{b̂_i} ) ≥ a^T b̂ − Σ_{i=1}^{n} a_i log a_i
So LogSumExp is the conjugate function of negative entropy (restricted to the probability simplex)
Since all conjugate functions are convex, LogSumExp is convex
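The conjugate relation can be checked numerically: the lower bound above holds for every a on the simplex and is tight at a = softmax(b̂). A small sketch (NumPy/SciPy, names chosen for illustration):

```python
import numpy as np
from scipy.special import logsumexp, softmax

rng = np.random.default_rng(0)
b = rng.normal(size=5)

# Conjugate relation: logsumexp(b) = max over the simplex of
# a.b - sum_i a_i log a_i, attained at a* = softmax(b).
a_star = softmax(b)
lower = a_star @ b - np.sum(a_star * np.log(a_star))

# Convexity along a random segment (midpoint check).
b2 = rng.normal(size=5)
midpoint_value = logsumexp((b + b2) / 2)
chord_value = (logsumexp(b) + logsumexp(b2)) / 2
```

Here `lower` equals `logsumexp(b)` to machine precision, and `midpoint_value ≤ chord_value` as convexity requires.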
GP and Convexity
• The following problem can be turned into an equivalent standard GP:
    maximize    x/y
    subject to  2 ≤ x ≤ 3,
                x^2 + 3y/z ≤ √y,
                x/y = z^2

is equivalent to the standard GP

    minimize    x^{-1} y
    subject to  2x^{-1} ≤ 1, (1/3)x ≤ 1,
                x^2 y^{-1/2} + 3 y^{1/2} z^{-1} ≤ 1,
                x y^{-1} z^{-2} = 1
• Let p, q be posynomials and r a monomial:

    minimize    p(x)/(r(x) − q(x))
    subject to  r(x) > q(x)

which is equivalent to

    minimize    t
    subject to  p(x) ≤ t(r(x) − q(x)),
                q(x)/r(x) < 1

which is in turn equivalent to

    minimize    t
    subject to  (p(x)/t + q(x))/r(x) ≤ 1,
                q(x)/r(x) < 1
• Generalized posynomials: f is a generalized posynomial if it can be formed from posynomials using addition, multiplication, positive powers, maximum, and composition
• Generalized GP: minimize a generalized posynomial subject to upper-bound inequality constraints on other generalized posynomials
Generalized GP can be turned into equivalent standard GP
Generalized GP
• Rule 1: Composing a posynomial that has nonnegative exponents {a_ij} with posynomials {f_ij(x)} gives a generalized posynomial
• Rule 2: The maximum of a finite number of posynomials is also a generalized
posynomial
• Rule 3: f_1 and f_2 are posynomials and h is a monomial: F_3(x) = f_1(x)/(h(x) − f_2(x))
Example:

    maximize    max{(x_1 + x_3^{-1})^{0.5}, x_1^{-2.9} x_2^{1.5}} + (x_2 + x_3)
    subject to  (x_2 x_3 + x_2/x_1)^{π} / (x_1 x_2 − max{x_2 x_3, x_1 + x_3}) ≤ 10,
    variables   x_1, x_2, x_3.
Freely available software: Stanford CVX & GGPLAB
(http://www.stanford.edu/~boyd/ggplab)
Unconstrained GP
    minimize    f(x) = log Σ_{i=1}^{m} exp(a_i^T x + b_i)

Optimality condition has no analytic solution:

    ∇f(x*) = (1 / Σ_{j=1}^{m} exp(a_j^T x* + b_j)) Σ_{i=1}^{m} exp(a_i^T x* + b_i) a_i = 0
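Since the stationarity condition has no closed form, the minimizer is found numerically. A sketch on a tiny instance where the answer is known (minimizing log(e^x + e^{−x}), whose minimum is log 2 at x = 0; the SciPy setup is mine):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

# Unconstrained GP in convex form: minimize log sum_i exp(a_i^T x + b_i)
A = np.array([[1.0], [-1.0]])  # a_1 = 1, a_2 = -1
b = np.zeros(2)

f = lambda x: logsumexp(A @ x + b)
# The gradient from the slide: softmax weights times the a_i vectors.
grad = lambda x: A.T @ softmax(A @ x + b)

res = minimize(f, x0=np.array([2.0]), jac=grad)
# Minimum of log(e^x + e^{-x}) is log 2, attained at x = 0.
```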
Dual GP
Primal problem: unconstrained GP in variables y
    minimize    log Σ_{i=1}^{N} exp(a_i^T y + b_i)

Lagrange dual in variables ν:

    maximize    b^T ν − Σ_{i=1}^{N} ν_i log ν_i
    subject to  1^T ν = 1,
                ν ⪰ 0,
                A^T ν = 0
Dual GP
Primal problem: General GP in variables y
    minimize    log Σ_{j=1}^{k_0} exp(a_{0j}^T y + b_{0j})
    subject to  log Σ_{j=1}^{k_i} exp(a_{ij}^T y + b_{ij}) ≤ 0, i = 1, . . . , m,

Lagrange dual problem:

    maximize    b_0^T ν_0 − Σ_{j=1}^{k_0} ν_{0j} log ν_{0j} + Σ_{i=1}^{m} ( b_i^T ν_i − Σ_{j=1}^{k_i} ν_{ij} log( ν_{ij} / 1^T ν_i ) )
    subject to  ν_i ⪰ 0, i = 0, . . . , m,
                1^T ν_0 = 1,
                Σ_{i=0}^{m} A_i^T ν_i = 0
where variables are νi, i = 0, 1, . . . , m
A0 is the matrix of the exponential constants in the objective function, and
Ai, i = 1, 2, . . . , m are the matrices of the exponential constants in the
constraint functions
Example: DMC Capacity Problem
Discrete Memoryless Channel (DMC): x ∈ Rn is distribution of input; y ∈ Rm is
distribution of output;
P ∈ R^{m×n} gives conditional probabilities: y = P x

Primal Channel Capacity Problem:

    maximize    −c^T x − Σ_{i=1}^{m} y_i log y_i
    subject to  x ⪰ 0, 1^T x = 1, y = P x

where c_j = −Σ_{i=1}^{m} p_ij log p_ij

Dual Channel Capacity Problem is a simple GP:

    minimize    log Σ_{i=1}^{m} e^{u_i}
    subject to  c + P^T u ⪰ 0
Properties of GP
• Nonlinear nonconvex problem can be turned into nonlinear convex problem
• Linearly constrained dual problem
• Theoretical structures: global optimality, zero duality gap, KKT condition,
sensitivity analysis
• Numerical efficiency: interior-point, robust
• Surprisingly wide range of applications
Summary
• Nonlinearity: Posynomial or LogSumExp
• Standard GP, Convex GP and Dual GP
• A variety of problems: Structural design in mechanical engineering,
Growth modeling in economics (1960s-1970s), Analog and digital
circuit design (late 1990s), Communication system problems (early
2000s) and many other applications in practice and analysis
Reading assignment: Sections 4.5, 5.7 of textbook.
• S. Boyd, S.-J. Kim, L. Vandenberghe, and A. Hassibi, “A Tutorial on Geometric
Programming,” Optimization and Engineering, 8(1):67-127, 2007.
• M. Chiang, “Geometric programming for communication systems,” Foundations
and Trends in Communications and Information Theory, 2005.