Minimization of Static Cost Functions

Robert Stengel
Optimal Control and Estimation, MAE 546, Princeton University, 2017

• J = static cost function with constant control parameter vector, u
• Conditions for a minimum of J with respect to u
• Analytical and numerical solutions

Copyright 2017 by Robert Stengel. All rights reserved. For educational use only.
http://www.princeton.edu/~stengel/MAE546.html
http://www.princeton.edu/~stengel/OptConEst.html

Static Cost Function

• Minimum value of the cost, J, is fixed but may be unknown
• Corresponding control parameter, u*, also is fixed but possibly unknown
• Cost function preferably has
 – a single minimum
 – locally smooth, monotonic contours away from the minimum

Vector Norms for Real Variables

• Norm = measure of the length or magnitude of a vector, x; a scalar quantity

$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \dim(x) = n \times 1$$

• Taxicab or Manhattan norm:

$$L_1 \text{ norm} = \|x\|_1 = \sum_{i=1}^{n} |x_i|$$

• Euclidean or quadratic norm:

$$L_2 \text{ norm} = \|x\|_2 = \left( x^T x \right)^{1/2} = \left( x_1^2 + x_2^2 + \cdots + x_n^2 \right)^{1/2}$$

• p norm:

$$L_p \text{ norm} = \|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$$

• Infinity norm: p → ∞ (see Supplemental Material)

p Norm Example

p norms of a 15-dimensional vector whose largest component is 4 (the original slide plots the components in a bar chart):

p  | Lp
1  | 24.0000
2  | 7.2111
4  | 4.6312
8  | 4.0957
16 | 4.0050
32 | 4.0000

As p increases, the norm approaches the value of the maximum component of the vector, as the sketch below illustrates.
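The following short sketch (Python with NumPy; an addition, not part of the original slides) reproduces this trend. The 15-element vector is a reconstruction chosen to match the tabulated norms; it may differ from the vector plotted on the original slide.

```python
# Sketch: p-norms approaching the maximum component as p grows.
# The vector below matches the tabulated norms (L1 = 24, max component 4),
# though it is a reconstruction, not necessarily the slide's exact data.
import numpy as np

x = np.array([1, 2, 1, 1, 3, 1, 1, 4, 1, 1, 2, 1, 1, 3, 1], dtype=float)

for p in [1, 2, 4, 8, 16, 32]:
    Lp = np.sum(np.abs(x) ** p) ** (1.0 / p)
    print(f"p = {p:2d}:  Lp = {Lp:.4f}")

print("max |x_i| =", np.abs(x).max())  # the p -> infinity limit, 4.0
```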
Apples and Oranges Problem

Suppose the elements of x represent quantities with different units:

$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} \text{Velocity, m/s} \\ \text{Angle, rad} \\ \vdots \\ \text{Temperature, K} \end{bmatrix}$$

What is a reasonable definition for the norm? One solution: a vector, y, normalized by the ranges of the elements of x:

$$y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} \triangleq \begin{bmatrix} d_1 x_1 \\ d_2 x_2 \\ \vdots \\ d_n x_n \end{bmatrix} = \begin{bmatrix} x_1 / \left( x_{1_{\max}} - x_{1_{\min}} \right) \\ x_2 / \left( x_{2_{\max}} - x_{2_{\min}} \right) \\ \vdots \\ x_n / \left( x_{n_{\max}} - x_{n_{\min}} \right) \end{bmatrix} = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = Dx$$

Weighted Euclidean (Quadratic) Norm of x

$$\|y\|_2 = \left( y^T y \right)^{1/2} = \left( x^T D^T D x \right)^{1/2} = \left( y_1^2 + y_2^2 + \cdots + y_m^2 \right)^{1/2} = \|Dx\|_2$$

dim(y) = m × 1, dim(x) = n × 1, dim(D) = m × n

• If m = n
 – D is square
 – D^T D is square and symmetric
 – If D is diagonal, D^T D is diagonal
• If m ≠ n
 – D is not square
 – D^T D is square and symmetric
• Rank(D) ≤ m if n > m; Rank(D) ≤ n if n < m

Rank of Matrix D

• Maximal number of linearly independent rows or columns of D
• Order of the largest non-zero minor determinant of D
• Examples:

$$D = \begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} : \text{maximal rank} = 2; \qquad D = \begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} : \text{maximal rank} = 2$$

Quadratic Forms

$$x^T D^T D x \triangleq x^T Q x = \text{quadratic form}$$
$$Q \triangleq D^T D = \text{defining matrix of the quadratic form}$$

• dim(Q) = n × n
• Q is symmetric
• x^T Q x is a scalar
• Rank(Q) ≤ m if n > m; Rank(Q) ≤ n if n < m

Useful identity for the trace of the quadratic form:

$$x^T Q x = \mathrm{Tr}\left( x^T Q x \right) = \mathrm{Tr}\left( x x^T Q \right) = \mathrm{Tr}\left( Q x x^T \right)$$

Dimensions: (1 × n)(n × n)(n × 1) = (1 × 1), so Tr[(1 × 1)] = Tr[(n × n)] = scalar.

Why are Quadratic Forms Useful?

2 × 2 example:

$$J(x) \triangleq x^T Q x = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = q_{11} x_1^2 + q_{22} x_2^2 + \left( q_{12} + q_{21} \right) x_1 x_2 = q_{11} x_1^2 + q_{22} x_2^2 + 2 q_{12} x_1 x_2 \text{ for symmetric } Q$$

• 2nd-order term of the Taylor series expansion of any vector function:

$$f\left( x_0 + \Delta x \right) = f\left( x_0 \right) + \left[ \frac{\partial f}{\partial x} \right]_{x=x_0} \Delta x + \frac{1}{2} \Delta x^T \left[ \frac{\partial^2 f}{\partial x^2} \right]_{x=x_0} \Delta x + \text{h.o.t.}$$

• Gradient (as a row vector) is well defined everywhere:

$$\frac{\partial J}{\partial x} \triangleq \begin{bmatrix} \frac{\partial J}{\partial x_1} & \frac{\partial J}{\partial x_2} \end{bmatrix} = \begin{bmatrix} \left( 2 q_{11} x_1 + 2 q_{12} x_2 \right) & \left( 2 q_{22} x_2 + 2 q_{12} x_1 \right) \end{bmatrix}$$

• Large values are weighted more heavily than small values
• Some terms can count more than others
• Coupling between terms can be considered
• An asymmetric Q can always be replaced by a symmetric Q

Definiteness

Definiteness of a Matrix

If $x^T Q x > 0$ for all $x \neq 0$:
a) the scalar is definitely positive
b) the matrix Q is a positive-definite matrix, e.g.,

$$Q = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

If $x^T Q x \geq 0$ for all $x \neq 0$, Q is a positive semi-definite matrix, e.g.,

$$Q = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

If $x^T Q x < 0$ for all $x \neq 0$, Q is a negative-definite matrix, e.g.,

$$Q = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix}$$

Positive-Definite Matrix

• Q is positive-definite if
 – all leading principal minor determinants are positive, and
 – all eigenvalues are real and positive

3 × 3 example:

$$Q = \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{bmatrix}$$

$$q_{11} > 0, \qquad \begin{vmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{vmatrix} > 0, \qquad \begin{vmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{vmatrix} > 0$$

$$\det\left( sI - Q \right) = s^3 + a_2 s^2 + a_1 s + a_0 = \left( s - \lambda_1 \right)\left( s - \lambda_2 \right)\left( s - \lambda_3 \right); \qquad \lambda_1, \lambda_2, \lambda_3 > 0$$

What's an eigenvalue?

Characteristic Polynomial

Characteristic polynomial of a matrix, F:

$$\left| sI - F \right| = \det\left( sI - F \right) = \text{scalar} \triangleq \Delta(s) = s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0 = s^n - \mathrm{Tr}(F)\, s^{n-1} + \cdots + a_1 s + a_0$$

where

$$\left( sI - F \right) = \begin{bmatrix} \left( s - f_{11} \right) & -f_{12} & \cdots & -f_{1n} \\ -f_{21} & \left( s - f_{22} \right) & \cdots & -f_{2n} \\ \vdots & \vdots & & \vdots \\ -f_{n1} & -f_{n2} & \cdots & \left( s - f_{nn} \right) \end{bmatrix} \quad (n \times n)$$

Eigenvalues

Characteristic equation of the matrix:

$$\Delta(s) = s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0 \equiv 0 = \left( s - \lambda_1 \right)\left( s - \lambda_2 \right) \cdots \left( s - \lambda_n \right) \equiv 0$$

The $\lambda_i$ are the solutions that set $\Delta(s) = 0$. They are called
 – the eigenvalues of F
 – the roots of the characteristic polynomial

$$a_{n-1} = -\mathrm{Tr}(F), \qquad a_0 = (-1)^n \prod_{i=1}^{n} \lambda_i$$
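A brief numerical check (Python with NumPy; an addition, not from the slides) of both positive-definiteness tests for the diagonal example Q = diag(1, 2, 3) shown earlier:

```python
# Check positive definiteness two ways: leading principal minors
# and eigenvalues. Both tests must give all-positive values.
import numpy as np

Q = np.diag([1.0, 2.0, 3.0])

# Leading principal minors: determinants of the upper-left k x k blocks
minors = [np.linalg.det(Q[:k, :k]) for k in range(1, Q.shape[0] + 1)]

# Eigenvalues of a symmetric matrix are real; eigvalsh exploits symmetry
eigenvalues = np.linalg.eigvalsh(Q)

print("leading principal minors:", minors)   # [1.0, 2.0, 6.0] -> all > 0
print("eigenvalues:", eigenvalues)           # [1.0, 2.0, 3.0] -> all > 0
```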
Examples of Positive-Definite Matrices

Inertia matrix of a rigid body:

$$I = \int_{Body} \begin{bmatrix} \left( y^2 + z^2 \right) & -xy & -xz \\ -xy & \left( x^2 + z^2 \right) & -yz \\ -xz & -yz & \left( x^2 + y^2 \right) \end{bmatrix} dm = \begin{bmatrix} I_{xx} & -I_{xy} & -I_{xz} \\ -I_{xy} & I_{yy} & -I_{yz} \\ -I_{xz} & -I_{yz} & I_{zz} \end{bmatrix}$$

Diagonal matrix with positive elements:

$$Q = \begin{bmatrix} 7 & 0 & 0 \\ 0 & 12 & 0 \\ 0 & 0 & 19 \end{bmatrix}$$

Rotation (Direction Cosine) Matrix

$$y = Hx$$

• Projections of the unit-vector components of one Cartesian reference frame on another
• Rotational orientation of one Cartesian reference frame with respect to another

$$H = \begin{bmatrix} \cos\delta_{11} & \cos\delta_{21} & \cos\delta_{31} \\ \cos\delta_{12} & \cos\delta_{22} & \cos\delta_{32} \\ \cos\delta_{13} & \cos\delta_{23} & \cos\delta_{33} \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$

Euler Angles

• Body attitude measured with respect to the inertial frame
• Three-angle orientation expressed by a sequence of three orthogonal single-angle rotations from the inertial to the body frame

ψ: yaw angle; θ: pitch angle; φ: roll angle; det(H) = 1

$$H = \begin{bmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\ -\cos\phi\sin\psi + \sin\phi\sin\theta\cos\psi & \cos\phi\cos\psi + \sin\phi\sin\theta\sin\psi & \sin\phi\cos\theta \\ \sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi & -\sin\phi\cos\psi + \cos\phi\sin\theta\sin\psi & \cos\phi\cos\theta \end{bmatrix}$$

Rotational Transformation

Position of a particle in two coordinate frames with a common origin:

$$y = Hx, \qquad x = H^{-1} y$$

• Orthonormal transformation: $H^{-1} = H^T$
• Not a positive-definite matrix
• Non-singular matrix: det(H) = 1 ≠ 0

The Euclidean norm is preserved:

$$\|y\|_2 = \left( y^T y \right)^{1/2} = \left( y_1^2 + y_2^2 + y_3^2 \right)^{1/2} = \left( x^T H^T H x \right)^{1/2} = \|Hx\|_2 = \|x\|_2 = \left( x_1^2 + x_2^2 + x_3^2 \right)^{1/2}$$

since $H^T H = H^{-1} H = I_3$ (positive-definite).

Conditions for a Static Minimum

The Static Minimization Problem

Find the value of a continuous control parameter, u, that minimizes a continuous scalar cost function, J.

Single control parameter:

$$\min_{u} J(u); \qquad \dim(J) = 1, \; \dim(u) = 1$$
$$u^* = \arg\min_{u} J(u) \triangleq \text{argument that minimizes } J$$

Many control parameters:

$$\min_{u} J(u); \qquad \dim(J) = 1, \; \dim(u) = m \times 1$$
$$u^* = \arg\min_{u} J(u)$$

Necessary Condition for Static Optimality

Single control:

$$\left. \frac{dJ}{du} \right|_{u=u^*} = 0$$

i.e., the slope is zero at the optimum point. Example:

$$J = (u - 4)^2; \qquad \frac{dJ}{du} = 2(u - 4) = 0 \text{ when } u^* = 4$$

The optimum point is a stationary point.

Multiple controls:

$$\left. \frac{\partial J}{\partial u} \right|_{u=u^*} = \left. \begin{bmatrix} \frac{\partial J}{\partial u_1} & \frac{\partial J}{\partial u_2} & \cdots & \frac{\partial J}{\partial u_m} \end{bmatrix} \right|_{u=u^*} = 0$$

(Gradient, defined as a row vector.) i.e., all the slopes are concurrently zero at the optimum point. Example:

$$J = \left( u_1 - 4 \right)^2 + \left( u_2 - 8 \right)^2$$
$$\frac{\partial J}{\partial u_1} = 2\left( u_1 - 4 \right) = 0 \text{ when } u_1^* = 4; \qquad \frac{\partial J}{\partial u_2} = 2\left( u_2 - 8 \right) = 0 \text{ when } u_2^* = 8$$
$$\left. \frac{\partial J}{\partial u} \right|_{u = u^* = \begin{bmatrix} 4 \\ 8 \end{bmatrix}} = \begin{bmatrix} 0 & 0 \end{bmatrix}$$

… But the Slope can be Zero for More than One Reason

Minimum; Maximum; Either; Neither (inflection point)

… But the Gradient can be Zero for More than One Reason

Minimum; Maximum; Either; Neither (saddle point)

Sufficient Condition for an Extremum

Single control:

• Minimum: satisfy the necessary condition, plus

$$\left. \frac{d^2 J}{du^2} \right|_{u=u^*} > 0$$

i.e., the curvature is positive at the optimum point. Example:

$$J = (u - 4)^2; \quad \frac{dJ}{du} = 2(u - 4); \quad \frac{d^2 J}{du^2} = 2 > 0$$

• Maximum: satisfy the necessary condition, plus

$$\left. \frac{d^2 J}{du^2} \right|_{u=u^*} < 0$$

i.e., the curvature is negative at the optimum point. Example:

$$J = -(u - 4)^2; \quad \frac{dJ}{du} = -2(u - 4); \quad \frac{d^2 J}{du^2} = -2 < 0$$

Sufficient Condition for a Local Minimum

Multiple controls: satisfy the necessary condition, plus the Hessian matrix must be positive definite:

$$\left. \frac{\partial^2 J}{\partial u^2} \right|_{u=u^*} = \left. \begin{bmatrix} \frac{\partial^2 J}{\partial u_1^2} & \frac{\partial^2 J}{\partial u_1 \partial u_2} & \cdots & \frac{\partial^2 J}{\partial u_1 \partial u_m} \\ \frac{\partial^2 J}{\partial u_2 \partial u_1} & \frac{\partial^2 J}{\partial u_2^2} & \cdots & \frac{\partial^2 J}{\partial u_2 \partial u_m} \\ \vdots & \vdots & & \vdots \\ \frac{\partial^2 J}{\partial u_m \partial u_1} & \frac{\partial^2 J}{\partial u_m \partial u_2} & \cdots & \frac{\partial^2 J}{\partial u_m^2} \end{bmatrix} \right|_{u=u^*} > 0$$

i.e., J must be (locally) convex for a minimum.

Minimized Cost Function, J*

• Gradient is zero at the minimum
• Hessian matrix is positive-definite at the local minimum

$$J\left( u^* + \Delta u \right) \approx J\left( u^* \right) + \Delta J\left( u^* \right) + \Delta^2 J\left( u^* \right) + \ldots$$

• First variation is zero at the minimum:

$$\Delta J\left( u^* \right) = \Delta u^T \left[ \frac{\partial J}{\partial u} \right]^T_{u=u^*} = 0$$

• Second variation is positive at the minimum:

$$\Delta^2 J\left( u^* \right) = \frac{1}{2} \Delta u^T \left[ \frac{\partial^2 J}{\partial u^2} \right]_{u=u^*} \Delta u > 0$$

Equality Constraints

Static Cost Functions with Equality Constraints

• Minimize J(u'), subject to c(u') = 0
 – dim(c) = n × 1
 – dim(u') = (m + n) × 1

Two Approaches to Static Optimization with a Constraint

Example: minimize J subject to

$$u' = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}, \qquad c(u') = c\left( u_1, u_2 \right) = 0 \;\Rightarrow\; u_2 = \mathrm{fcn}\left( u_1 \right)$$

1. Use the constraint to reduce the control dimension:

$$J(u') = J\left( u_1, u_2 \right) = J\left[ u_1, \mathrm{fcn}\left( u_1 \right) \right] = J'\left( u_1 \right)$$

2. Augment the cost function to recognize the constraint; the Lagrange multiplier, λ, is an unknown constant, dim(λ) = dim(c) = n × 1:

$$J_A(u') = J(u') + \lambda^T c(u')$$

Warning: the Lagrange multiplier, λ, is not the same as an eigenvalue, λ.

Solution Example: First Approach

Cost function:

$$J = u_1^2 - 2 u_1 u_2 + 3 u_2^2 - 40$$

Constraint:

$$c = u_2 - u_1 - 2 = 0 \;\Rightarrow\; u_2 = u_1 + 2$$

Solution Example: Reduced Control Dimension

Cost function and gradient with substitution:

$$J = u_1^2 - 2 u_1 u_2 + 3 u_2^2 - 40 = u_1^2 - 2 u_1 \left( u_1 + 2 \right) + 3 \left( u_1 + 2 \right)^2 - 40 = 2 u_1^2 + 8 u_1 - 28$$

$$\frac{\partial J}{\partial u_1} = 4 u_1 + 8 = 0$$

Optimal solution (checked numerically in the sketch below):

$$u_1^* = -2, \qquad u_2^* = 0, \qquad J^* = -36$$
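A numerical check of this result (Python with NumPy; an addition, not from the slides): evaluate J along the constraint line and confirm the minimum found by substitution.

```python
# Brute-force check of the constrained minimum: sample u1, enforce the
# constraint u2 = u1 + 2, and locate the smallest cost on the grid.
import numpy as np

def J(u1, u2):
    return u1**2 - 2.0 * u1 * u2 + 3.0 * u2**2 - 40.0

u1 = np.linspace(-5.0, 5.0, 100001)   # grid spacing 1e-4
u2 = u1 + 2.0                         # constraint c = u2 - u1 - 2 = 0
cost = J(u1, u2)

k = np.argmin(cost)
print(u1[k], u2[k], cost[k])          # -2.0, 0.0, -36.0
```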
Solution: Second Approach

• Partition u' into a state, x, and a control, u, such that
 – dim(x) = n × 1 = dim(c)
 – dim(u) = m × 1

$$u' = \begin{bmatrix} x \\ u \end{bmatrix}$$

• Add the constraint to the cost function, weighted by the Lagrange multiplier, λ:

$$J_A(u') = J(u') + \lambda^T c(u') \;\Rightarrow\; J_A(x, u) = J(x, u) + \lambda^T c(x, u)$$

• c is required to be zero when J_A is a minimum:

$$c(u') = c\begin{bmatrix} x \\ u \end{bmatrix} = 0$$

Solution: Adjoin Constraint with Lagrange Multiplier

Gradients with respect to x, u, and λ are zero at the optimum point:

$$\frac{\partial J_A}{\partial x} = \frac{\partial J}{\partial x} + \lambda^T \frac{\partial c}{\partial x} = 0$$
$$\frac{\partial J_A}{\partial u} = \frac{\partial J}{\partial u} + \lambda^T \frac{\partial c}{\partial u} = 0$$
$$\frac{\partial J_A}{\partial \lambda} = c = 0$$

Simultaneous Solutions for State and Control

(2n + m) values must be found: (x, u, λ)

• First equation: find the optimal Lagrange multiplier (n scalar equations):

$$\lambda^{*T} = -\frac{\partial J}{\partial x} \left( \frac{\partial c}{\partial x} \right)^{-1} \quad \text{[row]}, \qquad \lambda^* = -\left[ \left( \frac{\partial c}{\partial x} \right)^{-1} \right]^T \left( \frac{\partial J}{\partial x} \right)^T \quad \text{[column]}$$

• Second and third equations: specify the state and control (n + m scalar equations):

$$\frac{\partial J}{\partial u} + \lambda^{*T} \frac{\partial c}{\partial u} = 0 \;\Rightarrow\; \frac{\partial J}{\partial u} - \frac{\partial J}{\partial x} \left( \frac{\partial c}{\partial x} \right)^{-1} \frac{\partial c}{\partial u} = 0$$
$$c(x, u) = 0$$

Solution Example: Lagrange Multiplier

• Cost function:

$$J = u^2 - 2xu + 3x^2 - 40$$

• Constraint:

$$c = x - u - 2 = 0$$

• Partial derivatives:

$$\frac{\partial J}{\partial x} = -2u + 6x, \quad \frac{\partial J}{\partial u} = 2u - 2x, \quad \frac{\partial c}{\partial x} = 1, \quad \frac{\partial c}{\partial u} = -1$$

• From the first equation: $\lambda^* = 2u - 6x$
• From the second equation: $(2u - 2x) + (2u - 6x)(-1) = 4x = 0 \Rightarrow x = 0$
• From the constraint: $u = x - 2 = -2$

Optimal solution (the same as with the first approach):

$$x^* = 0, \qquad u^* = -2, \qquad J^* = -36$$

Numerical Optimization

• What if J is too complicated to find an analytical solution for the minimum?
• … or J has multiple minima?
• Use numerical optimization to find local and/or global solutions

Two Approaches to Numerical Minimization

Gradient-free search: evaluate J and search directly for min J.

Gradient-based search: evaluate ∂J/∂u and search for zero:

$$\left( \frac{\partial J}{\partial u} \right)_0 = \left. \frac{\partial J}{\partial u} \right|_{u=u_0}, \qquad u_0 = \text{starting guess}$$

$$\left( \frac{\partial J}{\partial u} \right)_n = \left( \frac{\partial J}{\partial u} \right)_{n-1} + \Delta\left( \frac{\partial J}{\partial u} \right)_n = \left. \frac{\partial J}{\partial u} \right|_{u=u_n} \quad \text{such that} \quad \left\| \frac{\partial J}{\partial u} \right\|_n < \left\| \frac{\partial J}{\partial u} \right\|_{n-1}$$

Gradient-Free Search (Based on J)

• Exhaustive grid search
• Unstructured random search
• Structured random search
 – Nelder-Mead (Downhill Simplex) algorithm
 – Simulated annealing
 – Genetic algorithm
 – Particle-swarm optimization

Downhill Simplex Search (Nelder-Mead Algorithm)*

• Simplex: N-dimensional figure in control space defined by
 – N + 1 vertices
 – (N + 1)N/2 straight edges between vertices

* Basis for MATLAB's fminsearch

Search Procedure for the Downhill Simplex Method

• Select a starting set of vertices
• Evaluate the cost at each vertex
• Determine the vertex with the largest cost (e.g., J1 in the slide's figure)
• Project the search from this vertex through the middle of the opposite face (or edge for N = 2)
• Evaluate the cost at the new vertex (e.g., J4)
• Drop the J1 vertex, and form a simplex with the new vertex
• Repeat until the cost is small

Gradient Search to Minimize a Quadratic Function

Cost function, gradient, and Hessian matrix of a quadratic function:

$$J = \frac{1}{2} \left( u - u^* \right)^T R \left( u - u^* \right), \quad R > 0 \quad = \frac{1}{2} \left( u^T R u - u^T R u^* - u^{*T} R u + u^{*T} R u^* \right)$$

$$\frac{\partial J}{\partial u} = \left( u - u^* \right)^T R = 0 \text{ when } u = u^*; \qquad \frac{\partial^2 J}{\partial u^2} = R = \text{symmetric constant}$$

• Guess a starting value of u, u0
• Solve the gradient equation at u0:

$$\left. \frac{\partial J}{\partial u} \right|_{u=u_0} = \left( u_0 - u^* \right)^T R, \qquad \left. \frac{\partial^2 J}{\partial u^2} \right|_{u=u_0} = R$$

$$u^* = u_0 - R^{-1} \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_0} \right)^T$$

Optimal Value Found in a Single Step

• For a quadratic cost function:

$$u^* = u_0 - R^{-1} \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_0} \right)^T = u_0 - \left( \left. \frac{\partial^2 J}{\partial u^2} \right|_{u=u_0} \right)^{-1} \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_0} \right)^T$$

• Gradient establishes the general search direction
• Hessian fine-tunes the direction and tells exactly how far to go

(The single-step computation is sketched below.)
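A minimal sketch of this single-step property (Python with NumPy; R, u*, and the starting guess below are arbitrary illustrative choices, not from the slides):

```python
# For a quadratic cost J = 0.5 (u - u*)^T R (u - u*) with symmetric R > 0,
# one Newton step from any starting guess lands exactly on u*.
import numpy as np

R = np.array([[2.0, 1.0],
              [1.0, 4.0]])              # positive definite (minors 2, 7 > 0)
u_star = np.array([1.0, -3.0])
u0 = np.array([10.0, 10.0])             # arbitrary starting guess

gradient = R @ (u0 - u_star)            # (dJ/du)^T evaluated at u0
u1 = u0 - np.linalg.solve(R, gradient)  # u0 - R^{-1} (dJ/du)^T

print(u1)                               # [ 1. -3.] = u* in a single step
```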
Newton-Raphson Iteration

• Many cost functions are not quadratic
• However, the surface is well approximated by a quadratic in the vicinity of the optimum, u*:

$$J\left( u^* + \Delta u \right) \approx J\left( u^* \right) + \Delta J\left( u^* \right) + \Delta^2 J\left( u^* \right) + \ldots$$
$$\Delta J\left( u^* \right) = \Delta u^T \left[ \frac{\partial J}{\partial u} \right]^T_{u=u^*} = 0, \qquad \Delta^2 J\left( u^* \right) = \frac{1}{2} \Delta u^T \left[ \frac{\partial^2 J}{\partial u^2} \right]_{u=u^*} \Delta u > 0$$

• An optimal solution requires multiple steps

The Newton-Raphson algorithm is an iterative search patterned after the quadratic search:

$$u_{k+1} = u_k - \left( \left. \frac{\partial^2 J}{\partial u^2} \right|_{u=u_k} \right)^{-1} \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_k} \right)^T$$

Difficulties with Newton-Raphson Iteration

• Good when close to the optimum, but …
• The Hessian matrix (i.e., the curvature)
 – may be difficult to estimate from local measurements of the cost
 – may have the wrong sign (e.g., not positive-definite)
 – may lead to large errors in the incremental control variation

Steepest-Descent Algorithm Multiplies the Gradient by a Scalar Constant (Gain)

$$u_{k+1} = u_k - \varepsilon \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_k} \right)^T$$

• Replace the Hessian matrix by a scalar constant
• The gradient is orthogonal to equal-cost contours

Choice of Steepest-Descent Gain

• If the gain is too small, convergence is slow
• If the gain is too large, convergence oscillates or may fail
• Solution: make the gain adaptive

Optimal Steepest-Descent Gain

Find the optimal gain by evaluating the cost, J, for intermediate solutions (with the same ∂J/∂u). Adjustment rule for ε:
• Starting estimate, J0
• First estimate, J1, using ε
• Second estimate, J2, using 2ε
• If J2 > J1, fit a quadratic through the three points to find ε*
• Else, third estimate, J3, using 4ε
• …
Use the optimal gain, ε*, on each major iteration.

Generalized Direct Search Algorithm

• Choose the optimal elements of K by sequential line search before recalculating the gradient:

$$u_{k+1}(t) = u_k(t) - K_k \left[ \frac{\partial J}{\partial u}(t) \right]_k^T, \qquad K_k = \begin{bmatrix} k_{11} & \ldots & \ldots \\ \ldots & k_{22} & \ldots \\ \ldots & \ldots & \ddots \end{bmatrix}$$

Ad Hoc Modifications to the Newton-Raphson Search

$$u_{k+1} = u_k - \varepsilon \left( \left. \frac{\partial^2 J}{\partial u^2} \right|_{u=u_k} \right)^{-1} \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_k} \right)^T, \quad \varepsilon < 1$$

$$u_{k+1} = u_k - \left[ \left( \left. \frac{\partial^2 J}{\partial u^2} \right|_{u=u_k} \right)^{-1} + K \right] \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_k} \right)^T, \quad K > 0, \text{ diagonal}$$

$$u_{k+1} = u_k - \left. \begin{bmatrix} \frac{\partial^2 J}{\partial u_1^2} & & 0 \\ & \ddots & \\ 0 & & \frac{\partial^2 J}{\partial u_m^2} \end{bmatrix}^{-1} \right|_{u=u_k} \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_k} \right)^T$$

$$u_{k+1} = u_k - \left( \left. \frac{\partial^2 J}{\partial u^2} \right|_{u=u_k} + \alpha I \right)^{-1} \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_k} \right)^T$$

With varying α, this last form is the Levenberg-Marquardt algorithm. (A simplified steepest-descent sketch with an adaptive gain follows.)
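A steepest-descent sketch with a crude adaptive gain (Python with NumPy; an addition, not from the slides). The accept/reject rule here simply doubles or halves the gain, a simplification of the quadratic-fit rule described above; the cost function is an arbitrary non-quadratic example.

```python
# Steepest descent with an adaptive gain: grow the gain after a
# successful step, shrink it when the cost fails to decrease.
import numpy as np

def J(u):
    return (u[0] - 1.0)**2 + (u[1] + 2.0)**4        # non-quadratic cost

def grad_J(u):
    return np.array([2.0 * (u[0] - 1.0), 4.0 * (u[1] + 2.0)**3])

u = np.array([3.0, 0.0])          # starting guess
eps = 0.1                         # initial steepest-descent gain
for _ in range(200):
    trial = u - eps * grad_J(u)
    if J(trial) < J(u):
        u, eps = trial, 2.0 * eps   # accept the step, grow the gain
    else:
        eps *= 0.5                  # reject the step, shrink the gain

print(u)                          # close to the minimizer (1, -2)
```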
Next Time: Principles for Optimal Control of Dynamic Systems

Reading: OCE, Sections 3.1, 3.2, 3.4

Supplemental Material

Infinity Norm

• ∞ norm: as p → ∞, the norm approaches the value of the maximum component:

$$L_p \text{ norm} = \|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p} \xrightarrow{\;p \to \infty\;} \left| x_i \right|_{\max} = L_\infty \text{ norm}$$

$$L_\infty \text{ norm} = \|x\|_\infty = \max\left\{ |x_1|, |x_2|, \ldots, |x_n| \right\}, \qquad \dim(x) = n \times 1$$

• Also called the supremum norm, Chebyshev norm, or uniform norm

Gradient-Free vs. Gradient-Based Searches

• Gradient-free search
 – J is a scalar
 – J provides no search direction
 – Search may provide a global minimum
• Gradient-based search
 – ∂J/∂u is a vector
 – ∂J/∂u indicates a feasible search direction
 – Search defines a local minimum

MATLAB Implementations of Gradient-Free Search

• Nelder-Mead (Downhill Simplex) algorithm: http://www.mathworks.com/help/matlab/ref/fminsearch.html
• Simulated annealing: http://www.mathworks.com/help/gads/simulated-annealing.html
• Genetic algorithm: http://www.mathworks.com/help/gads/genetic-algorithm.html (example: edit gaconstrained)
• Particle-swarm optimization: http://www.mathworks.com/help/gads/particle-swarm.html

Numerical Example

• Cost function and derivatives:

$$J = \frac{1}{2} \left( u - u^* \right)^T R \left( u - u^* \right), \quad R > 0; \qquad J = \frac{1}{2} \left[ \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} - \begin{pmatrix} 1 \\ 3 \end{pmatrix} \right]^T \begin{bmatrix} 1 & 2 \\ 2 & 9 \end{bmatrix} \left[ \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} - \begin{pmatrix} 1 \\ 3 \end{pmatrix} \right]$$

$$\frac{\partial J}{\partial u} = \left[ \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} - \begin{pmatrix} 1 \\ 3 \end{pmatrix} \right]^T R; \qquad R = \begin{bmatrix} 1 & 2 \\ 2 & 9 \end{bmatrix}$$

• First guess at the optimal control (details of the actual cost function are unknown):

$$u_0 = \begin{bmatrix} 4 \\ 7 \end{bmatrix}$$

• Derivatives evaluated at the starting point:

$$\left. \frac{\partial J}{\partial u} \right|_{u=u_0} = \left[ \begin{pmatrix} 4 \\ 7 \end{pmatrix} - \begin{pmatrix} 1 \\ 3 \end{pmatrix} \right]^T \begin{bmatrix} 1 & 2 \\ 2 & 9 \end{bmatrix} = \begin{bmatrix} 11 & 42 \end{bmatrix}$$

• Solution from the starting point:

$$u^* = u_0 - R^{-1} \left( \left. \frac{\partial J}{\partial u} \right|_{u=u_0} \right)^T = \begin{bmatrix} 4 \\ 7 \end{bmatrix} - \begin{bmatrix} 9/5 & -2/5 \\ -2/5 & 1/5 \end{bmatrix} \begin{bmatrix} 11 \\ 42 \end{bmatrix} = \begin{bmatrix} 4 \\ 7 \end{bmatrix} - \begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix}$$

Conjugate Gradient Algorithm

First step:

$$u_1(t) = u_0(t) - K_0 \left[ \frac{\partial H}{\partial u}(t) \right]_0^T$$

Calculate the gradient of the improved trajectory, then the ratio of integrated gradient magnitudes squared:

$$b = \frac{\int_{t_0}^{t_f} \left[ \frac{\partial H}{\partial u}(t) \right]_1 \left[ \frac{\partial H}{\partial u}(t) \right]_1^T dt}{\int_{t_0}^{t_f} \left[ \frac{\partial H}{\partial u}(t) \right]_0 \left[ \frac{\partial H}{\partial u}(t) \right]_0^T dt}$$

Second step:

$$u_{k+1}(t) = u_2(t) = u_1(t) - K_1 \left\{ \left[ \frac{\partial H}{\partial u}(t) \right]_1^T - b \left[ \frac{\partial H}{\partial u}(t) \right]_0^T \right\}$$

Rosenbrock Function

A typical test function for numerical optimization algorithms (see the minimization sketch at the end of these notes):

$$J\left( u_1, u_2 \right) = \left( 1 - u_1 \right)^2 + 100 \left( u_2 - u_1^2 \right)^2$$

Wolfram Alpha queries:
Plot (D ((1-u1)^2+100 (u2 - u1^2)^2, u1), D ((1-u1)^2+100 (u2 - u1^2)^2, u2))
Minimize[(1-u1)^2+100 (u2 - u1^2)^2, u1, u2]

How Many Maxima/Minima does the Mexican Hat [z = (sin R)/R] Have?

Apply the necessary and sufficient conditions as before:

$$\left. \frac{\partial J}{\partial u} \right|_{u=u^*} = \left. \begin{bmatrix} \frac{\partial J}{\partial u_1} & \frac{\partial J}{\partial u_2} & \cdots & \frac{\partial J}{\partial u_m} \end{bmatrix} \right|_{u=u^*} = 0, \qquad \left. \frac{\partial^2 J}{\partial u^2} \right|_{u=u^*} \gtrless 0$$

• Prior example: one maximum

Wolfram Alpha queries:
(sin(sqrt(u1^2 + u2^2)) ) / (sqrt(u1^2 + u2^2))
maximize (sin(sqrt(u1^2 + u2^2)) ) / (sqrt(u1^2 + u2^2))
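A closing sketch (Python with SciPy; an addition to the notes): minimizing the Rosenbrock function above with the Nelder-Mead downhill simplex method, the same basic algorithm that underlies MATLAB's fminsearch.

```python
# Nelder-Mead (downhill simplex) search on the Rosenbrock test function.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(u):
    return (1.0 - u[0])**2 + 100.0 * (u[1] - u[0]**2)**2

result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(result.x)   # converges near [1, 1], the known global minimum
```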