Optimal Control, Guidance and Estimation
Lecture 34: Constrained Optimal Control – I
Prof. Radhakant Padhi, Dept. of Aerospace Engineering, Indian Institute of Science, Bangalore

Topics
• Motivation
• Brief summary of unconstrained optimal control
• Pontryagin Minimum Principle
• Time optimal control of LTI systems
  – Time optimal control of the double-integrator system
• Fuel optimal control
• Energy optimal control
• State constrained optimal control

Motivation

Physical systems are always restricted by constraints on control and state variables. Examples:
• Thrust deflection of a rocket engine cannot exceed a certain designed value.
• Control surface deflections are constrained by hard bounds.
• Aircraft cannot climb beyond a certain altitude (else they will lose lift because of low dynamic pressure).
• Robotic arms are constrained by physical limits on angular movements.
• The speed of electric motors should not increase beyond a limit (to prevent wear and tear).
• The current in a circuit must not increase beyond a limit; otherwise, some component may burn out.

Question: Can these constraints be explicitly handled in the control design?
Answer: Yes! The optimal control framework allows that.

Ways to handle constraints:
• Soft constraint formulations
• Hard constraint formulations

Problem classification:
• Control constrained problems
• State constrained problems
• Mixed state and control constrained problems

Pioneers of Optimal Control

1700s:
• Bernoulli
• Newton
• Euler (a student of Bernoulli)
• Lagrange

... about 200 years later ...

1900s:
• Pontryagin
• Bellman
• Kalman

Lev Semyonovich Pontryagin

• Lev Semyonovich Pontryagin (September 3, 1908 – May 3, 1988), Moscow, Russia.
• Lost his eyesight when he was about 14 years old due to an explosion.
• Entered Moscow State University in 1925.
• In the 1930s and 1940s he made significant contributions to topology; his work on the subject was translated into several languages.
• As head of the Steklov Mathematical Institute, he focused on the general theory of singularly perturbed systems of ordinary differential equations and on the maximum principle in optimal control theory.

L. S. Pontryagin

• In 1955, he formulated a general time-optimal control problem for a fifth-order dynamical system describing optimal maneuvers of an aircraft with bounded control functions.
• In his effort to invent a new calculus of variations, he spent three consecutive sleepless nights and came up with the Hamiltonian formulation of the problem and the adjoint differential equations.
• His other contributions include singular perturbation theory and differential game theory.
• He and his co-workers were awarded the Lenin Prize in 1961.

Brief Summary of Unconstrained Optimal Control
Objective

To find an "admissible" time history of the control variable U(t), t \in [t_0, t_f], which:
1) Causes the system governed by \dot{X} = f(t, X, U) to follow an admissible trajectory,
2) Optimizes (minimizes/maximizes) a "meaningful" performance index
   J = \phi(t_f, X_f) + \int_{t_0}^{t_f} L(t, X, U)\, dt,
3) Forces the system to satisfy "proper boundary conditions".

Optimal Control Problem

Performance index (to minimize/maximize):
   J = \phi(t_f, X_f) + \int_{t_0}^{t_f} L(t, X, U)\, dt
Path constraint:
   \dot{X} = f(t, X, U)
Boundary conditions:
   X(0) = X_0: specified;   t_f: fixed;   X(t_f): free

Necessary Conditions of Optimality

Augmented performance index:
   J = \phi + \int_{t_0}^{t_f} \left[ L + \lambda^T (f - \dot{X}) \right] dt
Hamiltonian:
   H \triangleq L + \lambda^T f
First variation:
   \delta J = \delta\phi + \delta \int_{t_0}^{t_f} (H - \lambda^T \dot{X})\, dt
            = \delta\phi + \int_{t_0}^{t_f} \delta (H - \lambda^T \dot{X})\, dt

Expanding the first variation:
   \delta J = \delta\phi + \int_{t_0}^{t_f} \left( \delta H - \delta\lambda^T \dot{X} - \lambda^T \delta\dot{X} \right) dt
with the individual terms
   \delta\phi(t_f, X_f) = (\delta X_f)^T \frac{\partial \phi}{\partial X_f}
   \delta H(t, X, U, \lambda) = (\delta X)^T \frac{\partial H}{\partial X} + (\delta U)^T \frac{\partial H}{\partial U} + (\delta\lambda)^T \frac{\partial H}{\partial \lambda}

Integrating the \lambda^T \delta\dot{X} term by parts (and noting \delta X_0 = 0, since X(t_0) is specified):
   \int_{t_0}^{t_f} \lambda^T \delta\dot{X}\, dt = \int_{t_0}^{t_f} \lambda^T \frac{d(\delta X)}{dt}\, dt
      = \lambda_f^T \delta X_f - \lambda_0^T \delta X_0 - \int_{t_0}^{t_f} (\delta X)^T \dot{\lambda}\, dt
      = \lambda_f^T \delta X_f - \int_{t_0}^{t_f} (\delta X)^T \dot{\lambda}\, dt

Substituting back, the first variation becomes
   \delta J = (\delta X_f)^T \frac{\partial \phi}{\partial X_f} - (\delta X_f)^T \lambda_f
      + \int_{t_0}^{t_f} \left[ (\delta X)^T \frac{\partial H}{\partial X} + (\delta U)^T \frac{\partial H}{\partial U} + (\delta\lambda)^T \frac{\partial H}{\partial \lambda} \right] dt
      + \int_{t_0}^{t_f} (\delta X)^T \dot{\lambda}\, dt - \int_{t_0}^{t_f} (\delta\lambda)^T \dot{X}\, dt

Grouping terms:
   \delta J = (\delta X_f)^T \left[ \frac{\partial \phi}{\partial X_f} - \lambda_f \right]
      + \int_{t_0}^{t_f} (\delta X)^T \left[ \frac{\partial H}{\partial X} + \dot{\lambda} \right] dt
      + \int_{t_0}^{t_f} (\delta U)^T \frac{\partial H}{\partial U}\, dt
      + \int_{t_0}^{t_f} (\delta\lambda)^T \left[ \frac{\partial H}{\partial \lambda} - \dot{X} \right] dt
      = 0

Necessary Conditions of Optimality: Summary

State equation:            \dot{X} = \partial H / \partial \lambda = f(t, X, U)
Costate equation:          \dot{\lambda} = -\partial H / \partial X
Optimal control equation:  \partial H / \partial U = 0
Boundary conditions:       \lambda_f = \partial \phi / \partial X_f;   X(t_0) = X_0: fixed

Necessary Conditions of Optimality: Some Comments

• The state and costate equations are dynamic equations. If one is stable, the other turns out to be unstable!
• The optimal control equation is a stationary equation.
• The boundary conditions are split, which leads to a two-point boundary value problem (TPBVP).
• The state equation develops forward in time, whereas the costate equation develops backward. This is known as the "curse of complexity" in optimal control.
• Traditionally, TPBVPs demand computationally intensive iterative numerical procedures, which lead to an "open-loop" control structure.
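To make the summary concrete, here is a minimal numerical sketch (an assumed example, not from the lecture) of solving the resulting TPBVP for the scalar problem \dot{x} = u, J = \frac{1}{2}\int_0^{t_f}(x^2 + u^2)\,dt, with x(0) = 1, t_f fixed and x(t_f) free. The necessary conditions give u = -\lambda, \dot{x} = -\lambda, \dot{\lambda} = -x, x(0) = 1, \lambda(t_f) = 0, and the split boundary conditions can be handled directly by scipy.integrate.solve_bvp.

```python
# A minimal sketch (assumed example, not from the lecture) of solving the
# unconstrained TPBVP for:
#   x_dot = u,  J = 0.5 * int_0^tf (x^2 + u^2) dt,  x(0) = 1, tf fixed, x(tf) free.
# Necessary conditions with H = 0.5*(x^2 + u^2) + lam*u:
#   dH/du = 0        =>  u = -lam
#   x_dot = dH/dlam  =  -lam            (state equation)
#   lam_dot = -dH/dx =  -x              (costate equation)
#   x(0) = 1 (specified),  lam(tf) = dphi/dXf = 0  (phi = 0 since x(tf) is free)
import numpy as np
from scipy.integrate import solve_bvp

tf, x0 = 2.0, 1.0

def odes(t, y):
    x, lam = y
    return np.vstack((-lam, -x))            # [x_dot, lam_dot]

def bc(ya, yb):
    return np.array([ya[0] - x0,            # x(0) = x0 (specified)
                     yb[1]])                # lam(tf) = 0 (free final state)

t = np.linspace(0.0, tf, 50)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))

lam = sol.sol(t)[1]
u_opt = -lam                                # optimal control u*(t) = -lam(t)
print("converged:", sol.status == 0, "  u*(0) =", u_opt[0])
```

Integrating the state equation forward and the costate equation backward separately would require iteration on the unknown initial costate; the BVP solver handles the split boundary conditions in one shot, which is why shooting or collocation methods are the usual tools for such problems.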
Control Constrained Problems: Pontryagin Minimum Principle

(Reference: D. S. Naidu, Optimal Control Systems, CRC Press, 2002.)

Objective

To find an "admissible" time history of the control variable U(t), t \in [t_0, t_f], with the control bounded as
   |U(t)| \le U   (or, component-wise, U_j^- \le u_j(t) \le U_j^+),
which:
1) Causes the system governed by \dot{X} = f(t, X, U) to follow an admissible trajectory,
2) Optimizes (minimizes/maximizes) a "meaningful" performance index
   J = \phi(t_f, X_f) + \int_{t_0}^{t_f} L(t, X, U)\, dt,
3) Forces the system to satisfy "proper boundary conditions".

Optimum of a Control Functional

Variation: \delta u(t) = u(t) - u^*(t), where u^*(t) is the optimum control.
(Figure: a control history u(t) and the optimum u^*(t) plotted over an interval [a, b], with \delta u(t) the difference between them.)

A functional J(u) is said to have a relative optimum at u^*(t) if there exists \varepsilon > 0 such that for all functions u(t) \in \Omega which satisfy |u(t) - u^*(t)| < \varepsilon, the increment of J has the "same sign":
1) If \Delta J = J(u) - J(u^*) \ge 0, then J(u^*) is a relative (local) minimum.
2) If \Delta J = J(u) - J(u^*) \le 0, then J(u^*) is a relative (local) maximum.
Note: If the above relationships are satisfied for arbitrarily large \varepsilon > 0, then J(u^*) is a global optimum.

Pontryagin Minimum Principle

With variations in the control, U = U^* + \delta U,
   \Delta J(U^*, \delta U) = J(U) - J(U^*) \ge 0 for a minimum,
   \Delta J = \delta J(U^*, \delta U) + \text{HOT} \approx \left( \partial J / \partial U \right)^T \delta U   (neglecting higher-order terms).
However, when |U(t)| \le U, \delta U is no longer arbitrary for all t \in [t_0, t_f].

Control Constrained Problems

(Figure reference: D. S. Naidu, Optimal Control Systems, CRC Press, 2002.)
Note: The condition \delta J = 0 is valid only if u^*(t) lies within the boundary (i.e., the constraint is inactive) for the entire time interval [t_0, t_f].

Pontryagin Minimum Principle

Necessary condition: \delta J(U^*(t), \delta U(t)) \ge 0, where
   U^*(t): optimal solution,
   \delta U(t): allowable variation about U^*(t).
First variation:
   \delta J = \int_{t_0}^{t_f} \left[ \left(\frac{\partial H}{\partial U}\right)^T \delta U + \left(\frac{\partial H}{\partial X} + \dot{\lambda}\right)^T \delta X + \left(\frac{\partial H}{\partial \lambda} - \dot{X}\right)^T \delta\lambda \right] dt + \left( \frac{\partial \phi}{\partial X} - \lambda \right)^T \Bigg|_{t_f} \delta X_f

1) In control constrained problems, variations in the costates \delta\lambda(t) can be arbitrary. This gives
   \dot{X} - \partial H / \partial \lambda = 0, i.e., \dot{X} = \partial H / \partial \lambda = f(t, X, U)   (state equation).
2) If the costate \lambda(t) is selected such that the coefficient of \delta X(t) is zero (i.e., variations in the states can be arbitrary), then
   \dot{\lambda} + \partial H / \partial X = 0, i.e., \dot{\lambda} = -\partial H / \partial X   (costate equation).
3) Boundary conditions are not affected by the control constraints. Hence, the transversality condition still holds:
   \lambda_f = \partial \phi / \partial X_f.

With the above observations, the necessary condition becomes (to first order in \delta U)
   \delta J(U^*, \delta U) = \int_{t_0}^{t_f} \left(\frac{\partial H}{\partial U}\right)^T \delta U\, dt
      = \int_{t_0}^{t_f} \left[ H(X, U^* + \delta U, \lambda) - H(X, U^*, \lambda) \right] dt \ge 0
for all admissible, arbitrarily small \delta U.

Since this must hold for every admissible, arbitrarily small \delta U(t), the integrand itself must be non-negative. This gives
   H(X, U^* + \delta U, \lambda) \ge H(X, U^*, \lambda),
i.e.,
   H(X, U^*, \lambda) \le H(X, U, \lambda).
The "necessary condition" for the constrained optimal control U^* is therefore
   \min_{|U(t)| \le U} H(X, U, \lambda) = H(X, U^*, \lambda),
i.e., the optimal control should minimize the Hamiltonian. This is known as the "Pontryagin Minimum Principle".
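As an illustration of what the Minimum Principle asks for at each instant, the sketch below minimizes the Hamiltonian pointwise over a bounded admissible set |u| \le u_max. The dynamics f, running cost L, bound u_max, and the sample values of x and \lambda are all illustrative assumptions, not taken from the lecture; for this quadratic-in-u Hamiltonian the constrained minimizer is simply the unconstrained stationary point clipped to the bound.

```python
# A minimal sketch (assumed problem data, not from the lecture) of the pointwise
# Hamiltonian minimization prescribed by the Minimum Principle: at each time
# instant, pick the admissible u that minimizes H(x, u, lam) subject to |u| <= u_max.
import numpy as np
from scipy.optimize import minimize_scalar

u_max = 1.0                                  # hard bound |u| <= u_max (assumed)

def hamiltonian(x, u, lam):
    L = 0.5 * (x**2 + u**2)                  # running cost (assumed)
    f = -x + u                               # dynamics x_dot = f(x, u) (assumed)
    return L + lam * f

def u_star(x, lam):
    """Constrained minimizer of H over the admissible set [-u_max, +u_max]."""
    res = minimize_scalar(lambda u: hamiltonian(x, u, lam),
                          bounds=(-u_max, u_max), method="bounded")
    return res.x

# For this quadratic-in-u Hamiltonian, dH/du = u + lam = 0 gives u = -lam, so the
# constrained minimizer is just the unconstrained value clipped to the bound:
x, lam = 0.3, 2.5
print(u_star(x, lam), np.clip(-lam, -u_max, u_max))    # both approximately -1.0
```

When the Hamiltonian is linear in u, this pointwise minimization pushes u to a boundary value whenever the coefficient of u is nonzero, which is the origin of the bang-bang solutions that appear in the time optimal problems of the coming lectures.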
Solution Procedure for a Given Problem

Hamiltonian:
   H(X, U, \lambda) = L(X, U) + \lambda^T f(X, U)
Necessary conditions:
(i) State equation:    \dot{X} = \partial H / \partial \lambda = f(t, X, U)
(ii) Costate equation: \dot{\lambda} = -\partial H / \partial X
(iii) Optimal control equation: minimize H with respect to U subject to |U(t)| \le U, i.e.,
     H(X, U^*, \lambda) \le H(X, U, \lambda)
(iv) Boundary conditions: X(0) specified, \lambda_f = \partial \phi / \partial X_f

Some Important Observations

1) The optimality condition H(X, U^*, \lambda) \le H(X, U, \lambda) is valid for both constrained and unconstrained control problems, whereas the control relation \partial H / \partial U = 0 is valid for unconstrained problems only.
2) The results given above provide the necessary conditions only.
3) The sufficient condition for the unconstrained control problem is that \partial^2 H / \partial U^2, evaluated at (X^*, U^*, \lambda^*), should be a positive definite matrix for all t \in [t_0, t_f].

A Simple Scalar Algebraic Example

Problem: Minimize the function H = u^2 - 6u + 7 subject to the constraint |u| \le 2, i.e., -2 \le u \le +2.
Solution: Using the relation for unconstrained control,
   \partial H / \partial u = 2u^* - 6 = 0  \Rightarrow  u^* = 3   (not admissible!)

Plot of the Hamiltonian

(Figure: plot of the Hamiltonian H(u) = u^2 - 6u + 7 over the admissible range; from D. S. Naidu, Optimal Control Systems, CRC Press, 2002.)

In this case, the admissible optimal value is u^* = +2, which can also be obtained from static optimization results using the Karush-Kuhn-Tucker conditions.
Note: If the constraint had been |u| \le 3, i.e., -3 \le u \le +3, then either relation could have been used to obtain the optimal value u^* = 3. Unfortunately, however, many practical constraints do not admit such solutions!

Additional Necessary Conditions (Due to Pontryagin and Co-workers)

1) If the final time t_f is "fixed" and the Hamiltonian H does not depend on time t explicitly, then the Hamiltonian must be constant along the optimal trajectory, i.e.,
   H = \text{constant}   \forall t \in [t_0, t_f].
2) If the final time t_f is "free" and the Hamiltonian does not depend on time t explicitly, then the Hamiltonian must be identically zero along the optimal trajectory, i.e.,
   H = 0   \forall t \in [t_0, t_f].

Proof for the Unconstrained Problem

Theorem: If the Hamiltonian H is not an explicit function of time, then H is constant along the optimal path.
Proof:
   \frac{dH}{dt} = \frac{\partial H}{\partial t} + \dot{X}^T \frac{\partial H}{\partial X} + \dot{U}^T \frac{\partial H}{\partial U} + \dot{\lambda}^T \frac{\partial H}{\partial \lambda}
But \partial H / \partial \lambda = \dot{X} and \dot{\lambda}^T \dot{X} = \dot{X}^T \dot{\lambda}, so
   \frac{dH}{dt} = \frac{\partial H}{\partial t} + \dot{X}^T \left( \frac{\partial H}{\partial X} + \dot{\lambda} \right) + \dot{U}^T \frac{\partial H}{\partial U}
On the optimal path, \partial H / \partial X + \dot{\lambda} = 0 and \partial H / \partial U = 0. Hence
   \frac{dH}{dt} = \frac{\partial H}{\partial t} = 0   (if H is not an explicit function of t),
which proves the result.
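The constancy of H along the optimal path can also be checked numerically. The short sketch below reuses the same assumed fixed-final-time example as the earlier sketch (\dot{x} = u, J = \frac{1}{2}\int(x^2 + u^2)\,dt, x(0) = 1, x(t_f) free; not a problem from the lecture), solves the TPBVP, and evaluates H = \frac{1}{2}(x^2 + u^2) + \lambda u with u = -\lambda along the solution. Since this H has no explicit time dependence and t_f is fixed, it should remain (numerically) constant.

```python
# A small numerical check (assumed example, not from the lecture) of the
# constancy of H along the optimal path for a fixed-final-time problem.
import numpy as np
from scipy.integrate import solve_bvp

def odes(t, y):
    x, lam = y
    return np.vstack((-lam, -x))             # state and costate equations

def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[1]])    # x(0) = 1, lam(tf) = 0

t = np.linspace(0.0, 2.0, 200)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))

x, lam = sol.sol(t)
u = -lam                                     # u*(t) = -lam(t) from dH/du = 0
H = 0.5 * (x**2 + u**2) + lam * u            # Hamiltonian along the solution
print("max deviation of H along the path:", np.max(np.abs(H - H[0])))  # ~ 0
```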
Conclusions

Physical systems are always restricted by constraints on control and state variables.

In this class we studied:
• A brief summary of unconstrained optimal control
• The Pontryagin Minimum Principle for control constrained optimal control (in a generic sense)

In the next two classes, we will study:
• Time optimal control of LTI systems
• Fuel optimal control
• Energy optimal control
• State constrained optimal control

References

• D. S. Naidu: Optimal Control Systems, CRC Press, 2002.
• L. M. Hocking: Optimal Control: An Introduction to Theory and Applications, Oxford University Press, New York, NY, 1966.
• L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze and E. F. Mishchenko: The Mathematical Theory of Optimal Processes, Wiley-Interscience, New York, NY, 1962 (translated from Russian).

Thanks for the attention!