Unidimensional Search Methods
Most algorithms for unconstrained and
constrained optimisation use an
efficient unidimensional optimisation
technique to locate a local minimum of
a function of one variable.
Basic principle - acceleration
and bracketing
A search for the solution x* which minimizes
f(x) is performed by computing the sequence:
{x0, x1, x2, ..., xk, xk+1, ..., xN-2, xN-1, xN}
by repeated application of the formula:

  x_{k+1} = x_k + Δ_k

starting from an initial guess x0, evaluating
the function f(x_k) each time, with the initial
direction chosen such that f(x1) < f(x0), until
f(x_N) > f(x_{N-1}).
[Figure: successive function values f0, f1, ..., f_{N-2}, f_{N-1}, f_N along the search points x0, x1, ..., x_N, with the minimum f* at x*; the bracket x_{N-2} to x_N is the interval of uncertainty.]
Then the required solution x* will be bracketed by the
range x_{N-2} to x_N, and |x_N - x_{N-2}| is known as the
interval of uncertainty.
Often the step size Δ_k is variable, for example using
an acceleration technique:

  Δ_{k+1} = 2 Δ_k

which doubles the step size every iteration.
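The bracketing-with-acceleration phase can be sketched in Python (an illustration of the idea, not code from the notes; the function name and starting values are our own assumptions):

```python
def bracket_minimum(f, x0, delta0=0.01):
    """Step away from x0 with doubling steps until f increases,
    returning the interval of uncertainty [x_{N-2}, x_N]."""
    delta = delta0
    # choose the initial direction so that f decreases
    if f(x0 + delta) > f(x0):
        delta = -delta
    x_prev2, x_prev, x = x0, x0, x0 + delta
    while f(x) < f(x_prev):
        delta *= 2.0                      # acceleration: double the step
        x_prev2, x_prev, x = x_prev, x, x + delta
    return (x_prev2, x) if x_prev2 < x else (x, x_prev2)

# e.g. for the example used later in these notes, f(x) = x^4 - x + 1:
a, b = bracket_minimum(lambda x: x**4 - x + 1, 3.0)
# the minimizer x* = (1/4)^(1/3) ≈ 0.63 lies inside [a, b]
```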
Once the minimum has been bracketed, an estimate
of x* can be obtained using a polynomial
approximation such as quadratic or cubic
interpolation. This estimate can be improved, and the
interval of uncertainty reduced, by repeated
application of the interpolation.
e.g. quadratic interpolation (without derivatives)
[Figure: a quadratic model f̃(x) fitted through the three bracketing points (x_{N-2}, f_{N-2}), (x_{N-1}, f_{N-1}), (x_N, f_N); the minimum of the fitted parabola gives the estimate x̃* of x*.]
It can be shown that:

  x̃* = { [(x_N)^2 - (x_{N-2})^2](f_N - f_{N-1}) - [(x_N)^2 - (x_{N-1})^2](f_N - f_{N-2}) }
        / { 2 [ (x_N - x_{N-2})(f_N - f_{N-1}) - (x_N - x_{N-1})(f_N - f_{N-2}) ] }
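The three-point estimate above can be transcribed directly (a sketch in our own Python, not the notes' code); for an exactly quadratic function the estimate is exact:

```python
def quad_interp_min(x1, x2, x3, f1, f2, f3):
    """Minimum of the parabola through (x1,f1), (x2,f2), (x3,f3),
    where (x1, x2, x3) play the roles of (x_{N-2}, x_{N-1}, x_N)."""
    num = (x3**2 - x1**2) * (f3 - f2) - (x3**2 - x2**2) * (f3 - f1)
    den = 2.0 * ((x3 - x1) * (f3 - f2) - (x3 - x2) * (f3 - f1))
    return num / den

# exact for a quadratic f:
f = lambda x: (x - 0.5)**2 + 2.0
xt = quad_interp_min(0.0, 1.0, 2.0, f(0.0), f(1.0), f(2.0))  # → 0.5
```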
Rates of Convergence
When comparing different optimisation techniques
it is useful to examine their rates of convergence - a
common classification is as follows:
linear (slow):

  ||x^{k+1} - x*|| / ||x^k - x*|| ≤ c,   0 < c < 1

quadratic (fast):

  ||x^{k+1} - x*|| / ||x^k - x*||^2 ≤ c,   c ≥ 0

superlinear (fast):

  ||x^{k+1} - x*|| / ||x^k - x*|| ≤ c_k,   where c_k → 0 as k → ∞
Newton’s Method
Suppose that f(x) is approximated by a quadratic
function at a point x_k (Taylor series):

  f(x) ≈ f(x_k) + f'(x_k)(x - x_k) + (1/2) f''(x_k)(x - x_k)^2

Then the stationary point, df(x)/dx = 0, is given by:

  f'(x_k) + f''(x_k)(x - x_k) = 0

yielding the next approximation x_{k+1} = x as:

  x_{k+1} = x_k - f'(x_k) / f''(x_k)
[Figure: the tangent of f'(x) at x_k, with slope f''(x_k), crosses zero at x_{k+1}; repeated tangent steps converge to the root x* of f'(x), i.e. the minimum of f(x).]
advantages:    1. locally quadratically convergent.
               2. for a quadratic function, x* is found in a single iteration.

disadvantages: 1. need to compute f'(x) and f''(x).
               2. if f''(x) ≈ 0, then slow convergence.
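The iteration x_{k+1} = x_k - f'(x_k)/f''(x_k) can be sketched as follows (our own Python, applied to the worked example f(x) = x^4 - x + 1 used later in these notes):

```python
def newton_1d(df, d2f, x0, tol=1e-8, max_iter=50):
    """1-D Newton iteration on the derivative df with curvature d2f."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

xstar = newton_1d(lambda x: 4*x**3 - 1,   # f'(x)
                  lambda x: 12*x**2,      # f''(x)
                  3.0)
# converges to x* = (1/4)^(1/3) ≈ 0.6300, matching the tabulated results
```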
Quasi-Newton Methods
A quasi-Newton method is one that imitates
Newton’s method. For example, instead of computing
f'(x) and f''(x) exactly, we can use finite difference
approximations:

  f'(x) ≈ [f(x + δ) - f(x - δ)] / (2δ)

  f''(x) ≈ [f(x + δ) - 2 f(x) + f(x - δ)] / δ^2

where δ is a small step size (chosen to suit the
computer machine precision).
This then provides the updating formula:

  x_{k+1} = x_k - δ [f(x_k + δ) - f(x_k - δ)] / ( 2 [f(x_k + δ) - 2 f(x_k) + f(x_k - δ)] )

The disadvantage is the need to perform small
perturbations to x_k at each iteration, hence
slowing down the progress towards the solution.
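The finite-difference quasi-Newton iteration above can be sketched as (an assumed implementation, not the notes' own code):

```python
def quasi_newton_1d(f, x0, eps=1e-4, tol=1e-8, max_iter=50):
    """Newton-like iteration using only function values,
    with central differences of perturbation size eps."""
    x = x0
    for _ in range(max_iter):
        fp, f0, fm = f(x + eps), f(x), f(x - eps)
        # step = f'(x)/f''(x) expressed via the finite differences
        step = eps * (fp - fm) / (2.0 * (fp - 2.0 * f0 + fm))
        x -= step
        if abs(step) < tol:
            break
    return x

xstar = quasi_newton_1d(lambda x: x**4 - x + 1, 3.0)
# agrees with Newton's method to the tabulated accuracy (x* ≈ 0.6300)
```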
Example:
Minimisation of: f(x) = x^4 - x + 1,   x(0) = 3
(i) acceleration plus quadratic interpolation:
[Figure: plot of f(x) = x^4 - x + 1 for -4 ≤ x ≤ 4.]
Tabulated Results

  iter     x          f         iter     x          f
    1    3.0000    79.0000       11   -2.1200    23.3196
    2    2.9900    77.9354       12    0.3205     0.6900
    3    2.9800    76.8815       13    0.4626     0.5832
    4    2.9600    74.8056       14    0.7663     0.5785
    5    2.9200    70.7795       15    0.6185     0.5278
    6    2.8400    63.2139       16    0.6178     0.5279
    7    2.6800    49.9069       17    0.6284     0.5275
    8    2.3600    29.6604       18    0.6301     0.5275
    9    1.7200     8.0321       19    0.6300     0.5275
   10    0.4400     0.5975

(iterations 12-19 form the interpolation phase)
(ii) Newton’s method:
  f'(x) = 4x^3 - 1,   f''(x) = 12x^2
[Figure: plot of f(x) = x^4 - x + 1 for -4 ≤ x ≤ 4 with the Newton iterates marked.]
Tabulated Results
iter
x
f
1 3.0000 79.0000
2 2.0093 15.2891
3 1.3601 3.0624
4 0.9518 0.8689
5 0.7265 0.5521
6 0.6422 0.5279
7 0.6302 0.5275
8 0.6300 0.5275
9 0.6300 0.5275
(iii) Quasi-Newton method:
[Figure: plot of f(x) = x^4 - x + 1 for -4 ≤ x ≤ 4 with the quasi-Newton iterates marked.]
Tabulated Results

  iter     x          f
    1    3.0000    79.0000
    2    2.0093    15.2890
    3    1.3601     3.0623
    4    0.9518     0.8689
    5    0.7265     0.5521
    6    0.6422     0.5279
    7    0.6302     0.5275
    8    0.6300     0.5275
    9    0.6300     0.5275
How Unidimensional Search is
Applied in a Multidimensional
Problem
In minimizing a function f(x) of several
variables, a common procedure is to
(a) calculate a search direction s (a
vector)
(b) take steps in that search direction to
reduce the value of f(x)
  x^{k+1} = x^k + α* s

[Figure: contours of f(x) showing the step α* s from x^k to x^{k+1} along the search direction s, together with the cross-section f(x^k + α s) as a function of α.]
Unconstrained Multivariable
Optimisation
  min_x f(x),   x ∈ R^n
Univariate Search
Select n fixed search directions, using the
coordinate directions, then minimize f(x) in
each direction sequentially using line
search. In general, this method is not
efficient.
  p^0 = [1; 0],   p^1 = [0; 1],   p^2 = [1; 0],   p^3 = [0; 1],   p^4 = [1; 0], ...
[Figure: univariate search on the contours of f(x): successive points x^0, x^1, x^2, x^3, x^4 obtained by minimizing along the coordinate directions in turn.]
Simplex Method
This is not a line search method.
Use a regular geometric figure (a simplex) and
evaluate f(x) at each vertex. In two
dimensions a simplex is a triangle, in three
dimensions it is a tetrahedron. At each
iteration a new point is selected by reflecting
the simplex opposite the vertex with the
highest function value, which is then
discarded to form a new simplex. The
iterations proceed until the simplex straddles
the optimum. The size is reduced and the
procedure is repeated.
[Figure: a two-dimensional simplex (triangle) with vertices 1, 2, 3; vertex 3 has the highest f(x) and is reflected to the new vertex 4.]
The simplex can expand and contract
continuously throughout the search (Nelder
and Mead Method)
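The basic reflect-the-worst-vertex idea can be sketched in a few lines of Python (our own minimal illustration in two dimensions; the full Nelder and Mead method adds separate expansion and contraction rules):

```python
def simplex_min(f, verts, shrink=0.5, iters=300):
    """Basic 2-D simplex search: reflect the worst vertex through the
    centroid of the other two; if that fails, shrink toward the best."""
    for _ in range(iters):
        verts.sort(key=f)                 # best first, worst last
        best, worst = verts[0], verts[-1]
        centroid = [(verts[0][i] + verts[1][i]) / 2.0 for i in range(2)]
        reflect = [2.0 * centroid[i] - worst[i] for i in range(2)]
        if f(reflect) < f(worst):
            verts[-1] = reflect           # accept the reflected point
        else:
            # reflection failed: shrink the simplex toward the best vertex
            verts = [best] + [[best[i] + shrink * (v[i] - best[i])
                               for i in range(2)] for v in verts[1:]]
    return min(verts, key=f)

x = simplex_min(lambda p: 2*p[0]**2 + p[1]**2 + 3,
                [[1.0, 1.0], [1.2, 1.0], [1.0, 1.3]])
# the simplex marches downhill and contracts around the minimum at (0, 0)
```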
Conjugate Search Directions
A set of n linearly independent search
directions:
so, s1, ... , sn-1
are said to be conjugate with respect
to a positive definite matrix Q if:
  (s^i)^T Q s^j = 0,   0 ≤ i ≠ j ≤ n-1
where, in optimisation, Q is the Hessian matrix of the
objective function. (i.e. Q = H).
If f(x) is quadratic you are guaranteed to reach the
minimum in n stages (n line searches along conjugate
directions).
[Figure: conjugate directions s^0 and s^1 on the contours of a quadratic f(x); two line searches from x^0, via x^1, reach the minimum x^2.]
Minimization of a Quadratic Function
Using Line Search
Consider the function f(x) approximated by a
quadratic at point x^k:

  f(x) = f(x^k + h) ≈ f(x^k) + ∇f(x^k)^T h + (1/2) h^T H h

and consider the line search x^{k+1} = x^k + α s^k.
Then along this line:

  f(x^k + α s^k) ≈ f(x^k) + α ∇f(x^k)^T s^k + (1/2) α^2 (s^k)^T H s^k

and f is an extremum, with respect to α, when:

  ∂f/∂α = 0 = ∇f(x^k)^T s^k + α (s^k)^T H s^k

giving:

  α* = - ∇f(x^k)^T s^k / [ (s^k)^T H s^k ]
Example: Consider the minimization of:

  f(x) = 2 x1^2 + x2^2 + 3

with initial point and initial direction:

  x^0 = [1; 1],   s^0 = [-4; -2]

Find a direction s^1 which is conjugate to s^0 and
verify that the minimum of f(x) can be reached
in two line searches, firstly using s^0 and then
using s^1.

  ∇f(x) = [4 x1; 2 x2],   H = [4  0; 0  2]
We require (s^0)^T Q s^1 = 0 with Q = H:

  [-4  -2] [4  0; 0  2] [s1; s2] = 0   ⟹   -16 s1 - 4 s2 = 0   ⟹   s2 = -4 s1

Hence, we can use s^1 = [1; -4].

Search along s^0:

  ∇f(x^0) = [4; 2],   s^0 = [-4; -2]

(Note: s^0 = -∇f(x^0), the direction of steepest descent.)

  α^0 = - ∇f(x^0)^T s^0 / [ (s^0)^T H s^0 ] = -(-20)/72 = 20/72
Hence:

  x^1 = x^0 + α^0 s^0 = [1; 1] + (20/72)[-4; -2] = (1/9)[-1; 4]

Next stage - search along s^1:

  ∇f(x^1) = (4/9)[-1; 2],   s^1 = [1; -4]

Then:

  α^1 = - ∇f(x^1)^T s^1 / [ (s^1)^T H s^1 ] = 4/36 = 1/9

and:

  x^2 = x^1 + α^1 s^1 = (1/9)[-1; 4] + (1/9)[1; -4] = [0; 0]
which is clearly the minimum of f(x).
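The worked example above can be checked numerically (a NumPy sketch of ours, not part of the notes):

```python
import numpy as np

H = np.array([[4.0, 0.0], [0.0, 2.0]])          # Hessian of f = 2*x1^2 + x2^2 + 3
grad = lambda x: np.array([4.0 * x[0], 2.0 * x[1]])

s0, s1 = np.array([-4.0, -2.0]), np.array([1.0, -4.0])
assert s0 @ H @ s1 == 0.0                        # conjugacy: (s0)^T H s1 = 0

x = np.array([1.0, 1.0])
for s in (s0, s1):
    alpha = -(grad(x) @ s) / (s @ H @ s)         # exact line-search step alpha*
    x = x + alpha * s
# x is now (0, 0), the minimizer, after exactly two line searches
```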
[Figure: the two conjugate line searches on the contours of f(x): x^0 → x^1 along s^0, then x^1 → x^2 = x* along s^1.]
Construction of conjugate directions
without using derivatives
[Figure: two parallel line searches in direction s (or -s), one from x^0 reaching x^a and one from x^1 reaching x^b; the gradient ∇f(x^a) is orthogonal to s, and the vector (x^b - x^a) is conjugate to s.]
Start at x^0 and locate x^a as the minimum of f(x) in the direction
s. Then start at another point x^1 and locate x^b as the minimum
of f(x) in the same parallel direction s (or -s). Then the vector
(x^b - x^a) is conjugate to s (provided f(x) is quadratic).
Also, the gradient of f(x) at x^a is orthogonal to s, i.e.

  s^T ∇f(x^a) = 0
Powell’s Method (basic form)
The kth stage employs n linearly independent search
directions. Initially, these are usually the co-ordinate
directions (Hence, stage 1 is univariate search). At
subsequent stages these search directions change such
that for a quadratic function they become conjugate.
Step 1  From x0^k determine α1^k by line search in
direction s1^k so that f(x0^k + α1^k s1^k) is a minimum. Let
x1^k = x0^k + α1^k s1^k. From x1^k determine α2^k so that
f(x1^k + α2^k s2^k) is a minimum. Let x2^k = x1^k + α2^k s2^k.
Continue this procedure through all search directions
si^k, i = 1, ..., n, starting always from the last point in the
sequence, until all αi^k, i = 1, ..., n are determined.
The final point is xn^k.
Step 2  Now search from xn^k along the direction
s^k = xn^k - x0^k to determine the point which minimizes
f(xn^k + α s^k). This point then becomes the new
starting point for the next stage.

Step 3  Now replace one of the search directions
si^k, i = 1, ..., n, by s^k (e.g. replace s1^k by s^k), and
repeat from step 1.

Termination criterion  Keep repeating the steps
of the algorithm until:

  ||xn^k - x0^k|| < ε

where ε is a defined tolerance.
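One Powell stage can be sketched as below (our own Python illustration with a crude derivative-free line search, scan plus quadratic fit; not the notes' code). For the quadratic example used earlier, a single stage already reaches the minimum:

```python
import numpy as np

def line_min(f, x, s):
    """Return alpha minimizing f(x + alpha*s): coarse scan, then a
    three-point quadratic interpolation around the best sample."""
    a = np.linspace(-3.0, 3.0, 61)
    vals = [f(x + t * s) for t in a]
    i = min(max(int(np.argmin(vals)), 1), len(a) - 2)
    x1, x2, x3 = a[i-1], a[i], a[i+1]
    f1, f2, f3 = vals[i-1], vals[i], vals[i+1]
    num = (x3**2 - x1**2) * (f3 - f2) - (x3**2 - x2**2) * (f3 - f1)
    den = 2.0 * ((x3 - x1) * (f3 - f2) - (x3 - x2) * (f3 - f1))
    return num / den if den != 0 else x2

def powell_stage(f, x0, dirs):
    x = x0.copy()
    for s in dirs:                        # step 1: n line searches
        x = x + line_min(f, x, s) * s
    s_new = x - x0                        # step 2: the pattern direction
    x = x + line_min(f, x, s_new) * s_new
    return x, [dirs[1], s_new]            # step 3: replace s1 by s^k

f = lambda x: 2*x[0]**2 + x[1]**2 + 3
x, dirs = powell_stage(f, np.array([1.0, 1.0]),
                       [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
# for this quadratic the stage lands on the minimizer (0, 0)
```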
For example, in two variables the first iteration
would be:
[Figure: the first Powell stage in two variables. Starting from x0^1, line searches along s1^1 = [1; 0] and s2^1 = [0; 1] give x1^1 and x2^1; a further search along the pattern direction s^1 = x2^1 - x0^1 gives x3^1.]
Gradient Search Methods
Methods such as univariate, simplex and Powell do
not require the use of derivative information in
determining the search direction, and are known as
direct methods. In general, they are not as efficient
as indirect methods which make use of derivatives,
first or second, in determining search directions.
For minimization, a good search direction should
reduce the objective function at each step of a line
search, i.e.

  f(x^{k+1}) < f(x^k)
[Figure: the gradient ∇f(x^k) with candidate search directions s^k and s^{k+1}; the angle θ between ∇f(x^k) and a descent direction exceeds 90°.]

For s^{k+1} to be a descent direction, θ > 90°, hence:

  ∇f(x^k)^T s^{k+1} < 0
Method of Steepest Descent
The gradient ∇f(x) gives the direction of
greatest local increase of f(x) and is normal
to the contour of f(x) at x. Hence, a local
search direction:

  s^k = -∇f(x^k)

will give the greatest local decrease of f(x).
This is the basis of the steepest descent
algorithm.
Step 1  Choose an initial point x^0 (thereafter x^k).

Step 2  Calculate (analytically or numerically) the
partial derivatives:

  ∂f(x^k)/∂x_j,   j = 1, ..., n

Step 3  Calculate the search direction vector:

  s^k = -∇f(x^k)

Step 4  Perform a line search:

  x^{k+1} = x^k + α^k s^k

to find α^k which minimizes f(x^k + α^k s^k).

Step 5  Repeat from step 2 until a desired
termination criterion is satisfied.
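The steps above can be sketched as follows (our own Python; for a quadratic objective the line search of step 4 has the closed form derived earlier, which we use here):

```python
import numpy as np

H = np.array([[4.0, 0.0], [0.0, 2.0]])      # Hessian of f = 2*x1^2 + x2^2 + 3
grad = lambda x: H @ x                      # step 2: the gradient

x = np.array([1.0, 1.0])
for _ in range(50):
    s = -grad(x)                            # step 3: steepest-descent direction
    if s @ s < 1e-20:
        break
    alpha = (s @ s) / (s @ H @ s)           # step 4: exact line search (quadratic f)
    x = x + alpha * s
# x converges to the minimizer (0, 0), but along a zig-zag path
```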
Notes: 1) Convergence is slow for a
badly scaled f(x)
2) Exhibits zig-zag behaviour
Example: First few iterations of the steepest
descent algorithm
[Figure: the first few steepest-descent steps from x^0 on the contours of f(x), showing the zig-zag path.]
Fletcher and Reeves Conjugate Gradient
Method
This method computes the new search
direction by a linear combination of the current
gradient and the previous search direction. For
a quadratic function this produces conjugate
search directions and, hence, will minimize
such a function in at most n iterations. The
method represents a major improvement over
the steepest descent algorithm with only minor
extra computational requirements.
Step 1:  At x^0 calculate ∇f(x^0) and let s^0 = -∇f(x^0).

Step 2:  Use a line search to determine α^0 to minimize
f(x^0 + α^0 s^0), giving x^1 = x^0 + α^0 s^0. Compute f(x^1), ∇f(x^1).

Step 3:  Compute the new search direction (at iteration k):

  s^{k+1} = -∇f(x^{k+1}) + β s^k

where

  β = ∇f(x^{k+1})^T ∇f(x^{k+1}) / [ ∇f(x^k)^T ∇f(x^k) ]

Step 4:  Test for convergence and if not satisfied
repeat from step 2.
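The steps above can be sketched in Python (our own illustration on the quadratic example from these notes, with the exact line search available for a quadratic):

```python
import numpy as np

H = np.array([[4.0, 0.0], [0.0, 2.0]])
grad = lambda x: H @ x                      # gradient of f = 2*x1^2 + x2^2 + 3

x = np.array([1.0, 1.0])
g = grad(x)
s = -g                                      # step 1
for _ in range(2):                          # quadratic in n = 2 -> 2 iterations
    alpha = -(g @ s) / (s @ H @ s)          # step 2: exact line search
    x = x + alpha * s
    g_new = grad(x)
    beta = (g_new @ g_new) / (g @ g)        # step 3: Fletcher-Reeves beta
    s = -g_new + beta * s
    g = g_new
# two iterations reach the minimizer x = (0, 0)
```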
Example:
The Fletcher and Reeves algorithm minimizes a two
dimensional quadratic function in two iterations
[Figure: the two Fletcher and Reeves iterations from x^0 to the minimum x* on the contours of a quadratic f(x).]
Example: Consider the same example as
used earlier of minimizing:

  f(x) = 2 x1^2 + x2^2 + 3,   x^0 = [1; 1]

  ∇f(x) = [4 x1; 2 x2],   hence   s^0 = -∇f(x^0) = [-4; -2]

which is the same initial direction as used previously.
Hence,

  x^1 = (1/9)[-1; 4],   ∇f(x^1) = (4/9)[-1; 2]
The next search direction, using Fletcher and Reeves,
is:

  s^1 = -∇f(x^1) + β s^0

where

  β = ∇f(x^1)^T ∇f(x^1) / [ ∇f(x^0)^T ∇f(x^0) ] = (80/81) / 20 = 4/81

Then

  s^1 = -(4/9)[-1; 2] + (4/81)[-4; -2] = (1/81)[20; -80] = (20/81)[1; -4]

which provides the same search direction as
previously.
Hence, s^0 and s^1 are conjugate and the minimum will
be obtained by searching along s^0 and s^1 in sequence.
Newton’s Method
This is a second-order method and makes use of second-order
information about f(x) through a quadratic approximation of f(x) at
x^k:

  f(x) ≈ f(x^k) + ∇f(x^k)^T Δx^k + (1/2) (Δx^k)^T H(x^k) Δx^k

where Δx^k = x - x^k. Differentiating f(x) produces:

  ∇f(x) ≈ ∇f(x^k) + H(x^k) Δx^k

which = 0 for a minimum when:

  Δx^k = -[H(x^k)]^{-1} ∇f(x^k)

and defining Δx^k = x^{k+1} - x^k provides the iteration:

  x^{k+1} = x^k - λ^k [H(x^k)]^{-1} ∇f(x^k)

where -[H(x^k)]^{-1} ∇f(x^k) is the search direction, and λ^k,
proportional to step length, is introduced to regulate
convergence.
disadvantage: need to determine,
analytically or numerically,
second order partial derivatives in
order to compute the Hessian
matrix H(x).
Newton’s method will minimize a quadratic function
in a single linear search.
[Figure: a single Newton step from x^k, along the direction shown below, reaching x^{k+1} = x* on the contours of a quadratic f(x).]

  s = -[H(x^k)]^{-1} ∇f(x^k) = Newton search direction
Example: Investigate the application of Newton's
method for minimizing the convex quadratic function:

  f(x) = 4 x1^2 + x2^2 - 2 x1 x2,   x^0 = [1; 1]

  ∇f(x) = [8 x1 - 2 x2; 2 x2 - 2 x1],   H(x) = [8  -2; -2  2],   [H(x)]^{-1} = (1/12)[2  2; 2  8]

Now consider the first step of the Newton algorithm
with λ^0 = 1:

  x^1 = x^0 - [H(x^0)]^{-1} ∇f(x^0) = [1; 1] - (1/12)[2  2; 2  8][6; 0] = [1; 1] - [1; 1] = [0; 0]
Clearly, x1=x*, the minimum of f(x).
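The single Newton step above can be checked with NumPy (our own sketch, solving H s = ∇f rather than forming the inverse explicitly):

```python
import numpy as np

H = np.array([[8.0, -2.0], [-2.0, 2.0]])
grad = lambda x: np.array([8*x[0] - 2*x[1], 2*x[1] - 2*x[0]])

x0 = np.array([1.0, 1.0])
x1 = x0 - np.linalg.solve(H, grad(x0))   # Newton step with lambda^0 = 1
# x1 = (0, 0) = x*, found in a single step because f is quadratic
```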
Quasi-Newton Methods
These methods approximate the Hessian matrix using
only the first partial derivatives of f(x). Define:

  g^k = ∇f(x^{k+1}) - ∇f(x^k),   Δx^k = x^{k+1} - x^k

  Ĥ^k   (an approximation of H(x^k))

  Ĥ^{k+1} = Ĥ^k + ΔĤ^k

Basically, the methods update an estimate of the
Hessian matrix, or its inverse. Two particular
updating mechanisms are:
(i) DFP (Davidon-Fletcher-Powell) - an update of the inverse:

  Δ[Ĥ^k]^{-1} = Δx^k (Δx^k)^T / [ (Δx^k)^T g^k ]  -  [Ĥ^k]^{-1} g^k (g^k)^T [Ĥ^k]^{-1} / [ (g^k)^T [Ĥ^k]^{-1} g^k ]

(ii) BFGS (Broyden-Fletcher-Goldfarb-Shanno) - an update of the Hessian itself:

  ΔĤ^k = g^k (g^k)^T / [ (g^k)^T Δx^k ]  -  Ĥ^k Δx^k (Δx^k)^T Ĥ^k / [ (Δx^k)^T Ĥ^k Δx^k ]
Note: It is important that in any Newton
minimization method the Hessian matrix updates
remain +ve def. Both DFP and BFGS have this
property.
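One BFGS update can be sketched as below (our own NumPy illustration with made-up Δx and g values); the updated estimate satisfies the secant condition Ĥ^{k+1} Δx^k = g^k, which is what the update is built to enforce:

```python
import numpy as np

def bfgs_update(Hk, dx, g):
    """One BFGS Hessian update: H_{k+1} = H_k + gg^T/(g^T dx)
    - H dx dx^T H / (dx^T H dx)."""
    dx, g = dx.reshape(-1, 1), g.reshape(-1, 1)
    term1 = (g @ g.T) / float(g.T @ dx)
    term2 = (Hk @ dx @ dx.T @ Hk) / float(dx.T @ Hk @ dx)
    return Hk + term1 - term2

Hk = np.eye(2)                   # current Hessian estimate
dx = np.array([0.5, -0.25])      # step x^{k+1} - x^k (illustrative)
g = np.array([1.0, 0.3])         # gradient difference g^k (illustrative)
H_new = bfgs_update(Hk, dx, g)
# secant condition: H_new @ dx equals g
```

Note that g^T Δx > 0 here, which is the condition for the update to preserve positive definiteness.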
Test Example - Rosenbrock's Banana Shaped Valley

  min_{x1,x2} f = 100 (x2 - x1^2)^2 + (1 - x1)^2,   x1* = x2* = 1

[Figure: contours of Rosenbrock's banana function, with the solution at (1, 1).]
It is called the banana function
because of the way the curvature
bends around the origin. It is
notorious in optimization examples
because of the slow convergence
that most methods exhibit
when trying to solve this problem.
Steepest Descent Minimisation of Banana Function
[Figure: steepest-descent iterates on the contours of the banana function.]

Warning: Maximum number of iterations has been exceeded.
Final solution:
x = 0.8842  0.7813
f = 0.0134
number of function evaluations = 1002
Quasi Newton - BFGS Minimisation of Banana Function
[Figure: BFGS iterates on the contours of the banana function.]

final solution
x = 1.0000  1.0000
f = 8.0378e-015
number of function evaluations = 85
Quasi Newton - DFP Minimisation of Banana Function
[Figure: DFP iterates on the contours of the banana function.]

final solution
x = 1.0001  1.0003
f = 2.2498e-008
number of function evaluations = 86
Simplex Minimisation of Banana Function
[Figure: simplex iterates on the contours of the banana function.]

final solution
x = 1.0000  1.0001
f = 1.0314e-009
number of function evaluations = 212
Termination - Stopping Criteria
In iterative optimisation, the following criteria, often
used simultaneously, are recommended for testing
convergence. (ε_i = a small scalar tolerance value)

  |f(x^{k+1}) - f(x^k)| / |f(x^k)| < ε_1,   or, as f(x^k) → 0:   |f(x^{k+1}) - f(x^k)| < ε_2

  |x_i^{k+1} - x_i^k| / |x_i^k| < ε_3,   or, as x_i^k → 0:   |x_i^{k+1} - x_i^k| < ε_4

  ||s^k|| < ε_5

  ||∇f(x^k)|| < ε_6        (s = direction vector)
Nonlinear Programming With
Constraints
Quadratic Programming - equality constraints case
Quadratic Programming (QP) is sometimes used
within techniques employed for solving nonlinear
optimisation problems with equality and inequality
constraints. Here we will limit ourselves to equality
constraints only. Consider the problem:
  min_x { f(x) = (1/2) x^T Q x + c^T x + d }

  s.t.  A x - b = 0,   (A not invertible)

where Q is +ve def. Forming the Lagrangian, and the
necessary conditions for a minimum, gives:
  L(x, λ) = (1/2) x^T Q x + c^T x + d + λ^T (A x - b)

  ∇_x L = Q x + c + A^T λ = 0      (a)
  A x - b = 0                      (b)

giving from (a):

  x = -Q^{-1} [c + A^T λ]

and substituting into (b):

  λ = -[A Q^{-1} A^T]^{-1} [b + A Q^{-1} c]

Hence:

  x = Q^{-1} A^T [A Q^{-1} A^T]^{-1} b + [ Q^{-1} A^T [A Q^{-1} A^T]^{-1} A - I_n ] Q^{-1} c

That is an analytical solution.
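The analytical solution can be evaluated directly (a NumPy sketch on a small instance of our own choosing; Q, c, A, b below are illustrative assumptions):

```python
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 4.0]])     # +ve def.
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])                 # one equality constraint: x1 + x2 = 1
b = np.array([1.0])

Qi = np.linalg.inv(Q)
lam = -np.linalg.solve(A @ Qi @ A.T, b + A @ Qi @ c)   # multiplier lambda
x = -Qi @ (c + A.T @ lam)                              # minimizer
# x satisfies A x = b and stationarity Q x + c + A^T lam = 0
```

For this instance the hand solution is x = (1/3, 2/3) with λ = 4/3, which the formula reproduces.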
Sequential Quadratic Programming (SQP)
The basic idea is that, at each iteration (x = xk), the objective
function is approximated locally by a quadratic function and
the constraints are approximated by linear functions. Then
quadratic programming is used recursively.
The method is applied to the general problem:
min f ( x)
x
s. t . h ( x) 0
g(x) 0
but uses an active set strategy to replace it with:
min f (x)
x
s. t . h (x) 0
whereh(x)
contains h(x) and the active inequality constraints.
145
g(x)
0)
(those ofg(x) 0 which are
Example:
Application of SQP to the problem:

  min_x f(x) = 4 x1 - x2^2 - 12

  s.t.  h1(x) = 25 - x1^2 - x2^2 = 0
        g1(x) = 10 x1 - x1^2 + 10 x2 - x2^2 - 34 ≥ 0
        g2(x) = (x1 - 3)^2 + (x2 - 1)^2 ≥ 0
        g3(x) = x1 ≥ 0
        g4(x) = x2 ≥ 0
[Figure: contours of f(x) with the constraint boundaries and the SQP iterates from two starting points ("start 1" and "start 2").]

final solution
x = 1.0013  4.8987
f = -31.9923
h = 0
g = 0  19.1949
MATLAB Optimisation Toolbox Routines
(Earlier version names in italics)

fminbnd     scalar minimization on a fixed interval (fmin)
fminunc     multivariable unconstrained minimization
            (Quasi-Newton BFGS, or DFP; or steepest descent) (fminu)
fminsearch  multivariable unconstrained minimization
            (Simplex - Nelder and Mead) (fmins)
fmincon     constrained minimization (SQP) (constr)
fsolve      non-linear equation solver
linprog     linear programming (lp)
Example:

  min_x f(x) = Σ_{k=1}^{10} ( x_k^2 / k + k x_k + k^2 )^2

subject to the equality and inequality constraints:

  h1(x) = x1 + x3 + x5 + x7 + x9 = 0
  h2(x) = x2 + 2 x4 + 3 x6 + 4 x8 + 5 x10 = 0
  h3(x) = 2 x2 - 5 x5 + 8 x8 = 0
  g1(x) = x1 - 3 x4 + 5 x7 - x10 ≤ 0
  g2(x) = x1 + 2 x2 + 4 x4 + 8 x8 ≤ 100
  g3(x) = x1 + 3 x3 + 6 x6 - 9 x9 ≤ 50

  -1000 ≤ x_i ≤ 1000,   i = 1, 2, ..., 10
MATLAB Program:
x0=[0 0 0 0 0 0 0 0 0 0]'; % initial condition
% lower and upper bounds
for i=1:10
vlb(i)=-1000; vub(i)=1000;
end
options=foptions; % sets defaults
options(1)=1; % allows output of intermediate results
options(13)=3; % number of equality constraints
x=constr('fopt22',x0,options,vlb,vub);
disp('final solution')
x=x'
[f,g]=fopt22(x')
where ‘fopt22’ is a function m file:
function [f,g] = fopt22(x)
f=0;
for k = 1:10
f = f +((x(k)^2)/k + k*x(k) + k^2)^2;
end
% equalities
g(1)=x(1)+x(3)+x(5)+x(7)+x(9);
g(2)=x(2)+2*x(4)+3*x(6)+4*x(8)+5*x(10);
g(3)=2*x(2)-5*x(5)+8*x(8);
% inequalities
g(4)=x(1)-3*x(4)+5*x(7)-x(10);
g(5)=x(1)+2*x(2)+4*x(4)+8*x(8)-100;
g(6)=x(1)+3*x(3)+6*x(6)-9*x(9)-50;
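For reference, the objective and constraints of 'fopt22' translate to Python as below (a transcription of ours, not part of the original MATLAB listing; useful for spot-checking values or feeding a modern solver):

```python
def f_obj(x):
    """Objective: sum over k of (x_k^2/k + k*x_k + k^2)^2."""
    return sum((x[k-1]**2 / k + k * x[k-1] + k**2)**2 for k in range(1, 11))

def constraints(x):
    """Return (equalities h, inequalities g), matching the m-file."""
    h = [x[0] + x[2] + x[4] + x[6] + x[8],
         x[1] + 2*x[3] + 3*x[5] + 4*x[7] + 5*x[9],
         2*x[1] - 5*x[4] + 8*x[7]]
    g = [x[0] - 3*x[3] + 5*x[6] - x[9],
         x[0] + 2*x[1] + 4*x[3] + 8*x[7] - 100,
         x[0] + 3*x[2] + 6*x[5] - 9*x[8] - 50]
    return h, g

# at the zero starting vector the objective is sum of k^4, k = 1..10
val = f_obj([0.0] * 10)   # → 25333.0
```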
Results: (443 function evaluations)

final solution
x = 3.5121  4.0877  2.4523  4.8558  1.3922  0.6432  -3.4375  -0.1518  -3.9191  -3.0244
f = 1.7286e+004
g = 0.0000  0.0000  0.0000  -25.2186  -70.1036  0.0000