48th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition
4 - 7 January 2010, Orlando, Florida
AIAA 2010-174
The Implicit Function Theorem with Applications in
Dynamics and Control
Matthew Harris∗ and John Valasek†
Texas A&M University, College Station, TX 77843
The implicit function theorem is a statement of the existence, continuity, and differentiability of a function or set of functions. The theorem is closely related to the convergence
of Newton’s method for nonlinear equations, the existence and uniqueness of solutions to
nonlinear differential equations, and the sensitivity of solutions to these nonlinear problems.
The implicit function theorem is presented, and high order sensitivity equations are generated using implicit differentiation. Once a nominal solution is known, these sensitivities
are used to construct a family of neighboring solutions. Based on results presented in the
paper, the implicit function approach shows great promise compared with current methods
in generating families of neighboring solutions to problems in dynamics and control.
Nomenclature
a = Open-loop plant
b = Control coefficient
C = Class of continuous functions
D = Horizontal displacement
E = Eccentric anomaly
e = Orbit eccentricity
H = Hamiltonian
J = Performance index or cost function
M = Mean anomaly
O = Error function
p = Independent parameter
q = State weighting coefficient
R = Set of all real numbers
r = Control weighting coefficient
s = Scalar solution to Riccati equation
Sy = Final state weight
t = Time
u = Control variable
V = Horizontal velocity
x = State variable
y = State variable
y = State vector
z = Vector of states and costates
∗ Graduate Research Assistant, Vehicle Systems and Control Laboratory, Texas A&M University, Aerospace Engineering
Department, Student Member AIAA. m [email protected].
† Associate Professor and Director, Vehicle Systems and Control Laboratory, Texas A&M University, Aerospace Engineering
Department, Associate Fellow AIAA. [email protected].
Copyright © 2010 by Matthew Harris. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.
Greek
∆ = Finite difference
δ = Differential neighborhood
η = Differential neighborhood
Θ = Constraint vector function
θ = Constraint function
Λ = Costate vector
λ = Costate variable
Φ = State transition matrix
ψ = Constraint function
Subscript
0 = Nominal solution, initial time
1 = First variable
2 = Second variable
f = Final time
Superscript
k = Index
n = Index
∗ = Nominal solution
I. Introduction
The implicit function theorem has many uses in theory, applied mathematics, and engineering. It is related to the convergence of Newton's method for nonlinear equations and the theoretical foundations of nearly all algorithms for solving nonlinear differential equations.1–3 More important to the purpose here, the theorem is related to methods of implicit differentiation.4, 5 Engineers at Texas A&M University have
recognized that this concept can be generalized to arrive at high order sensitivity equations for algebraic
and differential equations.1 Once a nominal solution is known, these sensitivities allow the construction
of a family of neighboring solutions. This has great potential in orbital mechanics and trajectory design
where optimal control simulations may take 12 hours or more. As opposed to current methods where each
neighboring solution must be recalculated, the implicit function approach can generate numerous solutions
in a fraction of the time.
The generalizations made at Texas A&M University yield new results and algorithms in the fields of
dynamics and control.1 There are, however, important results in the literature today closely related to
the implicit function theorem. The Classical Davidenko Homotopy Method uses the theorem to solve nonlinear optimization problems and multi-dimensional root solving problems.6, 7 Neighboring optimal control problems using perturbation guidance schemes also invoke results of the implicit function theorem.8, 9
Malanowski and Maurer used the theorem to investigate parameter variations in constrained optimal control
problems,10, 11 and Pinho and Rosenblueth utilized the theorem to transform a constrained optimal control
problem into an unconstrained optimal control problem.12 In light of these theoretical and computational
advances, it is important to place these methods in a common theoretical framework, to investigate the high order sensitivities, and to develop more general methods which enable computation of extremal field maps
for neighboring optimal trajectories.1
A basic form of the implicit function theorem is presented to illuminate the necessary conditions under
which further analysis is carried out and to concisely present the results of the theorem. Generalized implicit
differentiation is performed to arrive at the high order sensitivities, which help form a Taylor series approximation of neighboring solutions. Three examples of increasing complexity are worked in detail to illustrate
the theory, method, and effectiveness of the implicit function approach. The first example is a nonlinear algebraic root solving problem. This is followed by applications to the nonlinear Riccati differential equation
and an optimal control problem.
II. The Implicit Function Theorem
The implicit function theorem is a statement of the existence, continuity, and differentiability of a function
or set of functions. The original theorem, now known as the Lagrange inversion theorem, was used to solve
Kepler’s Equation,
E = M + e \sin(E),    (1)
where M is the mean anomaly, E is the eccentric anomaly, and e is the eccentricity of the orbit.7, 13 The
theorem gave a formula for the correction that must be made when, for some function ψ(·), ψ(M ) is replaced
by ψ(E). Lagrange’s proof of his theorem followed the formal power series argument, and Cauchy was the
first to write the theorem in a form that is known today as the implicit function theorem.7 More recently,
others such as Dini,7 Goursat,4 Young,5 and Widder14 have given proofs of the implicit function theorem
rooted in differential calculus. That is, the proofs rely on Rolle’s theorem, the law of the mean, and small
neighborhoods of continuity around a nominal solution.
The theorem can apply to vector functions of any number of independent and dependent variables. In fact,
the theorem has been generalized to many different forms involving different spaces, mappings, hypotheses,
and conclusions.7, 15–18 A particularly sophisticated version, the Nash–Moser theorem, is among John Nash's best-known results.
For simplicity, the theorem is presented here without proof for a single scalar function in one independent
variable and one dependent variable. The presentation is basically that given by Widder.14 Clear expositions
of the more general cases are given by Goursat4 and Young.5 The hypotheses of the theorem are enumerated
by 1, 2, and 3, and the conclusions by A, B, and C.
Theorem 1 (Implicit Function Theorem). Let ψ(p, x) be a function with continuous first derivatives in a δ-neighborhood of a known solution (p0, x0) where ψ(p0, x0) = 0, and let the Jacobian ψx(p0, x0) ≠ 0. That is,

1. ψ(p, x) ∈ C¹ for |p − p0| ≤ δ, |x − x0| ≤ δ,
2. ψ(p0, x0) = 0,
3. ψx(p0, x0) ≠ 0.

Then, there exists a continuous function x(p) and a positive number η such that

A. x0 = x(p0),
B. ψ(p, x(p)) = 0 for |p − p0| < η,
C. x(p) ∈ C¹ for |p − p0| < η.
Because of the existence, continuity, and differentiability of the function x(p), implicit differentiation can
be carried out on ψ treating p as the independent variable and x as the dependent variable. The first k
derivatives are below.
\frac{d\psi}{dp} = \frac{\partial \psi}{\partial p} + \frac{\partial \psi}{\partial x}\left(\frac{dx}{dp}\right) = 0    (2a)

\frac{d^2\psi}{dp^2} = \frac{\partial^2 \psi}{\partial p \partial p} + 2\,\frac{\partial^2 \psi}{\partial x \partial p}\left(\frac{dx}{dp}\right) + \frac{\partial^2 \psi}{\partial x \partial x}\left(\frac{dx}{dp}\right)^2 + \frac{\partial \psi}{\partial x}\left(\frac{d^2 x}{dp^2}\right) = 0    (2b)

\vdots

\frac{d^k\psi}{dp^k} = \left(\frac{\partial}{\partial p} + \frac{dx}{dp}\frac{\partial}{\partial x}\right)^{k} \psi = 0    (2c)
In Eq. (2c), algebraic notation has been used to simplify the differentiation process. Goursat4 and Young5 both introduce this notation. Basically, the derivative terms in parentheses are treated as algebraic quantities that can be expanded to higher powers, as if expanding a higher order polynomial. One can then rearrange
Eqs. (2a)-(2c) to obtain
\frac{dx}{dp} = -\left(\frac{\partial \psi}{\partial x}\right)^{-1} \frac{\partial \psi}{\partial p}    (3a)

\frac{d^2 x}{dp^2} = -\left(\frac{\partial \psi}{\partial x}\right)^{-1}\left[\frac{\partial^2 \psi}{\partial p \partial p} + 2\,\frac{\partial^2 \psi}{\partial x \partial p}\left(\frac{dx}{dp}\right) + \frac{\partial^2 \psi}{\partial x \partial x}\left(\frac{dx}{dp}\right)^2\right]    (3b)

\vdots

\frac{d^k x}{dp^k} = -\left(\frac{\partial \psi}{\partial x}\right)^{-1}\left[\frac{\partial^k \psi}{\partial p^k} + \cdots\right]    (3c)
These equations represent the sensitivity of the dependent variable, x, with respect to the independent
parameter, p. Thus, for small variations in p, denoted by ∆p, a Taylor series can be used to generate
neighboring solutions to ψ(p0 , x0 ) = 0. The Taylor series is given by
x(p) \cong x(p_0) + \sum_{n=1}^{k} \left.\frac{d^n x}{dp^n}\right|_{(p_0, x_0)} \frac{\Delta p^n}{n!} + O\!\left(\Delta p^{k+1}\right)    (4)
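To make the procedure concrete, the following sketch evaluates Eqs. (3a), (3b), and (4) symbolically. It is an illustrative sketch rather than code from the paper; it assumes SymPy is available, and the helper names implicit_sensitivities and neighboring_solution are introduced here for convenience.

```python
import sympy as sp

# Independent parameter p and dependent variable x, as in Theorem 1.
p, x = sp.symbols('p x')

def implicit_sensitivities(psi):
    """Symbolic dx/dp and d^2x/dp^2 from Eqs. (3a)-(3b) for psi(p, x) = 0."""
    psi_p = sp.diff(psi, p)
    psi_x = sp.diff(psi, x)
    dxdp = -psi_p / psi_x                                   # Eq. (3a)
    d2xdp2 = -(sp.diff(psi, p, 2)
               + 2 * sp.diff(psi, x, p) * dxdp
               + sp.diff(psi, x, 2) * dxdp**2) / psi_x      # Eq. (3b)
    return dxdp, d2xdp2

def neighboring_solution(psi, p0, x0, dp):
    """Second-order Taylor prediction of a neighboring solution, Eq. (4)."""
    dxdp, d2xdp2 = implicit_sensitivities(psi)
    nominal = {p: p0, x: x0}
    return (x0
            + float(dxdp.subs(nominal)) * dp
            + float(d2xdp2.subs(nominal)) * dp**2 / 2)
```

Applied to the root solving problem of Section III.A, this sketch should reproduce, to roundoff, the sensitivities and Taylor estimates reported there.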
III. Numerical Examples
Three examples are given in detail to illustrate the theory, method, and effectiveness of the implicit
function theorem approach. The first example is a scalar algebraic root solving problem in a single variable.
It uses the exact equations derived above. The second example examines the Riccati differential equation,
shows how to compute derivatives from a differential equation, and computes sensitivities and approximations
with respect to the open-loop plant, a. The final example is an optimal control problem with free-final-state
and final-state weighting terms in the cost function. Again, the nominal solution is given, and the derivatives,
sensitivities, and Taylor series approximations are derived.
A. Nonlinear Algebraic Root Solving Problem
Consider the nonlinear equation
\psi(p, x) = x^3 - \cos(px) = 0.    (5)
Setting p0 = 1, one can find a nominal solution to the root solving problem by Newton’s Method.19
(p_0, x_0) = (1, 0.865474)    (6)
Now that a nominal solution is known, the interest shifts to studying how the solution changes with respect
to small changes in p. As stated before, the Taylor series approximation contains sensitivities calculated
through implicit differentiation guaranteed by the implicit function theorem. Clearly, the three hypotheses
of the theorem hold for this problem. The first and second order sensitivities evaluated at the nominal
solution are calculated below.
\frac{dx}{dp} = -\left(\frac{\partial \psi}{\partial x}\right)^{-1} \frac{\partial \psi}{\partial p}    (7a)
             = -0.219035    (7b)

\frac{d^2 x}{dp^2} = -\left(\frac{\partial \psi}{\partial x}\right)^{-1}\left[\frac{\partial^2 \psi}{\partial p \partial p} + 2\,\frac{\partial^2 \psi}{\partial x \partial p}\left(\frac{dx}{dp}\right) + \frac{\partial^2 \psi}{\partial x \partial x}\left(\frac{dx}{dp}\right)^2\right]    (8a)
                   = -0.061988    (8b)
If p increases to p1 = 1.1 such that
\Delta p = p_1 - p_0 = 0.1,    (9)
then a neighboring solution computed via Newton’s method is
(p_1, x_1) = (1.1, 0.843305).    (10)
The point here, however, is that numerous iterations with Newton’s method are unnecessary. A first order
Taylor series approximation gives
x_1(p = 1.1) \cong x(p_0) + \left.\frac{dx}{dp}\right|_{(p_0, x_0)} \Delta p + O\!\left(\Delta p^2\right)    (11a)
             \cong 0.843571.    (11b)
A second order approximation is better still.
x_1(p = 1.1) \cong x(p_0) + \left.\frac{dx}{dp}\right|_{(p_0, x_0)} \Delta p + \left.\frac{d^2 x}{dp^2}\right|_{(p_0, x_0)} \frac{\Delta p^2}{2} + O\!\left(\Delta p^3\right)    (12a)
             \cong 0.843261    (12b)
The absolute error associated with the first-order approximation is less than 3 × 10⁻⁴. The absolute error associated with the second-order approximation is less than 4.5 × 10⁻⁵.
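A purely numeric check of this example can be written in a few lines. The sketch below is not the authors' code; the hand-derived partial derivatives and the helper name newton_root are assumptions introduced for illustration.

```python
import numpy as np

# psi(p, x) = x^3 - cos(p x) of Eq. (5) and its partial derivatives.
psi    = lambda p, x: x**3 - np.cos(p * x)
psi_x  = lambda p, x: 3 * x**2 + p * np.sin(p * x)
psi_p  = lambda p, x: x * np.sin(p * x)
psi_pp = lambda p, x: x**2 * np.cos(p * x)
psi_xp = lambda p, x: np.sin(p * x) + p * x * np.cos(p * x)
psi_xx = lambda p, x: 6 * x + p**2 * np.cos(p * x)

def newton_root(p, x=1.0, tol=1e-12):
    """Newton's method for psi(p, x) = 0 at fixed p."""
    for _ in range(50):
        step = psi(p, x) / psi_x(p, x)
        x -= step
        if abs(step) < tol:
            break
    return x

p0, dp = 1.0, 0.1
x0 = newton_root(p0)                                    # ~0.865474, Eq. (6)
s1 = -psi_p(p0, x0) / psi_x(p0, x0)                     # Eq. (7), ~ -0.219035
s2 = -(psi_pp(p0, x0) + 2 * psi_xp(p0, x0) * s1
       + psi_xx(p0, x0) * s1**2) / psi_x(p0, x0)        # Eq. (8), ~ -0.061988

x1_first  = x0 + s1 * dp                                # Eq. (11), ~0.843571
x1_second = x1_first + 0.5 * s2 * dp**2                 # Eq. (12), ~0.843261
x1_exact  = newton_root(p0 + dp)                        # Eq. (10), ~0.843305
```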
B. Scalar Riccati Differential Equation
One of the most important equations in linear control systems is the Riccati differential equation. The
equation arises in linear quadratic regulator problems with a free-final-state. The problem investigated here
is inspired by Junkins.1 In the scalar case, the cost function is
\min J = \frac{1}{2} s_f x^2(t_f) + \frac{1}{2}\int_{t_0}^{t_f} \left(q x^2 + r u^2\right) dt,    (13)
where sf is the final state weighting term (and solution to the Riccati equation at the final time), t0 is the
initial time, tf is the final time, x is the state variable, u is the control variable, and q and r are the state
and control weighting terms. The additional boundary condition arising from the free-final-state9 relates the
final costate, λ, and final state, and takes the form
\lambda(t_f) = s_f x(t_f).    (14)
Assuming a linear relationship like above and using the sweep method, one can arrive at a differential
equation in s and t that must be solved backward in time.
\dot{s} = -2as + \frac{b^2}{r} s^2 - q, \qquad s(t_f) = s_f    (15)
In this particular case, an analytical solution for s(t) can be found using the separation of variables technique.9
Nonetheless, the differential equation and its matrix counterpart are frequently solved numerically. If there
are small variations in system parameters, e.g., the plant, a, then the equation must be solved again. As an
example, such a change may occur if the system mass changes. Using the implicit function approach, the
differential equation must only be solved once; and, as long as the variations in the parameters are small, a
Taylor series can approximate the neighboring solutions.
If a is considered as the only independent parameter, then
\dot{s} = f(a, s, t),    (16)
and a nominal solution is s(a0 , t). The sensitivities that make up the Taylor series approximation will be of
the form ds/da, d²s/da², etc. Differentiating Eq. (16) with respect to a gives

\frac{d}{da}\left[\frac{ds}{dt}\right] = \frac{d}{da}\left[f(a, s(a), t)\right]    (17a)
                                       = \frac{\partial f}{\partial a} + \frac{\partial f}{\partial s}\frac{ds}{da}    (17b)
\Longrightarrow \quad \frac{d}{dt}\left[\frac{ds}{da}\right] = -2s + \left(-2a + \frac{2b^2}{r}\, s\right)\frac{ds}{da},    (17c)
where all of the derivatives are evaluated at the nominal solution. Also, the fact that cross derivatives are
equal when f ∈ C² has been employed to move from (17a) to (17c).14 It is clear from Eq. (17c) that when
applying the implicit function theorem to a differential equation, the sensitivities are themselves differential
equations. In this case, the first order sensitivity given by Eq. (17c) must be solved backward in time with
the boundary condition
\frac{ds}{da}(t_f) = 0.    (18)
The first-order Taylor series approximation is
s(a, t) = s(a_0, t) + \left.\frac{ds}{da}\right|_{(a_0, s_0)} \Delta a + O\!\left(\Delta a^2\right),    (19)
where, again, each term in the series is a numeric solution to a differential equation.
In this example, the nominal parameters are
a_0 = 1, \quad b = 1, \quad t_f = 5, \quad r = 2, \quad q = 10,    (20)
s_f = 15,    (21)
and the plant is varied between 0.5 and 1.5. Using Euler's method to perform the integration, the nominal
solution and neighboring solutions were generated in MATLAB. Results appear in Figure 1. The nominal
solution is given by a solid blue line, the recomputed neighboring solutions by dashed lines, and neighboring
solutions derived from the implicit function theorem by circles.
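The backward integration described above is simple to sketch. The code below is a Python approximation of the procedure (the paper's results were generated in MATLAB); the step count n_steps and the function name riccati_with_sensitivity are illustrative choices, not values from the paper.

```python
import numpy as np

# Nominal data from Eqs. (20)-(21).
a0, b, r, q, sf, tf = 1.0, 1.0, 2.0, 10.0, 15.0, 5.0
n_steps = 5000                      # assumed discretization
dt = tf / n_steps

def riccati_with_sensitivity(a):
    """Euler integration of s and ds/da backward from t = tf to t = 0."""
    s = np.empty(n_steps + 1)
    dsda = np.empty(n_steps + 1)
    s[-1], dsda[-1] = sf, 0.0       # terminal conditions, Eqs. (15) and (18)
    for k in range(n_steps, 0, -1):
        s_dot = -2.0 * a * s[k] + (b**2 / r) * s[k]**2 - q                     # Eq. (15)
        dsda_dot = -2.0 * s[k] + (-2.0 * a + 2.0 * b**2 / r * s[k]) * dsda[k]  # Eq. (17c)
        s[k - 1] = s[k] - dt * s_dot          # step backward in time
        dsda[k - 1] = dsda[k] - dt * dsda_dot
    return s, dsda

s_nom, dsda_nom = riccati_with_sensitivity(a0)

# First-order implicit function approximation of a neighboring solution, Eq. (19).
a_new = 1.2
s_approx = s_nom + dsda_nom * (a_new - a0)
s_exact, _ = riccati_with_sensitivity(a_new)   # recomputed, for comparison only
```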
Figure 1. The nominal and neighboring solutions to the scalar Riccati differential equation.
Figure 2 shows the absolute error of s(t0 ) with respect to variations in the parameter a. The graph shows
the expected radial divergence that is associated with Taylor series.
Figure 2. Absolute error in s(t0 ) associated with the implicit approximation.
C. Optimal Control Problem with Final State Weighting
This example is significantly more challenging than the previous problems since it treats the vector case.
Consider a simple rendezvous/intercept problem. A pursuit vehicle, S1 , has relative vertical position, y1 ,
and vertical velocity, y2 , with respect to a target vehicle, St . Initially, the target vehicle is a distance D
downrange, and the pursuit vehicle moves with constant relative velocity, V, toward the target vehicle in
the x-direction. Figure 3 illustrates this scenario.
Figure 3. The pursuit and target relative positions in the x-y1 plane.
Because the relative velocity in the x-direction is constant, the final time is fixed.
t_f = t_0 + \frac{D}{V}    (22)
The states capture the vertical dynamics of the problem,
\dot{y}_1 = y_2, \qquad y_1(t_0) = y_{1_0},    (23)
\dot{y}_2 = u, \qquad y_2(t_0) = y_{2_0},    (24)
where u is the vertical control input acceleration. A performance index which minimizes the control energy
is
\min J = \frac{1}{2} p_1 S_{y_1} y_1^2 + \frac{1}{2} p_2 S_{y_2} y_2^2 + \frac{1}{2}\int_{t_0}^{t_f} u^2 \, dt,    (25)
where Sy1 and Sy2 are final state weighting terms, and p1 and p2 are the independent parameters attached to
these weights. Once a nominal solution is known, neighboring solutions can be generated for small variations
in p, and hence Sy . For an intercept problem, Sy1 is made large and Sy2 is set to zero. In other words,
there is no preference toward velocity at the final time. For a rendezvous problem, both Sy1 and Sy2 are
made large. Lewis and Syrmos provide a more rigorous exposition of the problem statement and nominal
solution.9
The requirement of the controller, u, is to minimize the cost function, J, while satisfying the dynamic
constraints. The state variables are y1 and y2 , the initial time is t0 , the fixed final time is tf , and the
parameters p1 and p2 are fixed but can be chosen by the engineer. In the nominal case, p∗1 = p∗2 = 1. A
numerical solution involves a shooting method where one has to use optimizers to solve for the initial costate
vector.8, 9 In such a situation, even if there are small variations in any of the parameters, the optimization
problem must be solved all over again. The purpose here is to employ the implicit function theorem so
that once a nominal solution is known, neighboring solutions can be easily generated without re-solving the
optimal control problem.
The Hamiltonian for this system is
H \triangleq \frac{1}{2} u^2 + \lambda_1 y_2 + \lambda_2 u,    (26)
and the necessary conditions for minimization are
\dot{y}_1 = \frac{\partial H}{\partial \lambda_1} = y_2, \qquad -\dot{\lambda}_1 = \frac{\partial H}{\partial y_1} = 0,    (27)

\dot{y}_2 = \frac{\partial H}{\partial \lambda_2} = u, \qquad -\dot{\lambda}_2 = \frac{\partial H}{\partial y_2} = \lambda_1,    (28)

0 = \frac{\partial H}{\partial u} = u + \lambda_2.    (29)

Furthermore, an additional free-final-state condition must be satisfied.9

\Theta(t_f) \triangleq \begin{bmatrix} \Theta_1(t_f) \\ \Theta_2(t_f) \end{bmatrix} = \begin{bmatrix} \lambda_1(t_f) - p_1 S_{y_1} y_1(t_f) \\ \lambda_2(t_f) - p_2 S_{y_2} y_2(t_f) \end{bmatrix} = 0    (30)
Solving for the control and substituting into the state and costate equations yields the homogeneous Hamiltonian system.

\begin{bmatrix} \dot{y}_1 \\ \dot{y}_2 \\ \dot{\lambda}_1 \\ \dot{\lambda}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \end{bmatrix}
\begin{bmatrix} y_1 \\ y_2 \\ \lambda_1 \\ \lambda_2 \end{bmatrix}    (31)
The coefficient matrix is called the continuous Hamiltonian matrix, and solution of this matrix differential
equation via the matrix exponential function gives
z(t) = \Phi z(t_0),    (32)
where Φ is the state transition matrix. The column vector of states and costates is denoted by z. Solution
of this system depends on the initial costates, which, in turn, are functions of the parameters p1 and p2 .
Thus, the purpose of the implicit function theorem is to determine how the initial costates depend on these
parameters. Once this is done and a nominal solution is known, the implicit function approach can generate
neighboring solutions without re-solving the optimal control problem.
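As a brief illustration (an assumed Python sketch, not the authors' implementation), the Hamiltonian matrix of Eq. (31) and the state transition matrix of Eq. (32) can be built with a matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Continuous Hamiltonian matrix of Eq. (31); z = [y1, y2, lambda1, lambda2]^T.
H = np.array([[0.0, 1.0,  0.0,  0.0],
              [0.0, 0.0,  0.0, -1.0],
              [0.0, 0.0,  0.0,  0.0],
              [0.0, 0.0, -1.0,  0.0]])

def transition_matrix(t, t0=0.0):
    """State transition matrix Phi(t, t0) of the linear system z_dot = H z."""
    return expm(H * (t - t0))

def propagate(z0, t, t0=0.0):
    """Propagate states and costates: z(t) = Phi(t, t0) z(t0), Eq. (32)."""
    return transition_matrix(t, t0) @ z0
```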
The desired Taylor series should take the form
\Lambda(p, t_0) \cong \Lambda(p^*, t_0) + \left.\frac{d\Lambda(t_0)}{dp_1}\right|_{p^*} \Delta p_1 + \left.\frac{d\Lambda(t_0)}{dp_2}\right|_{p^*} \Delta p_2 + O\!\left(\Delta p^2\right),    (33)
where the vector quantities have been introduced to simplify notation.
\Lambda = [\lambda_1 \ \lambda_2]^T, \qquad p = [p_1 \ p_2]^T    (34)
The first task is to compute the sensitivities dΛ/dp1 and dΛ/dp2 . Because the number of dependent variables
is two, i.e., Λ ∈ R2 , two functions are required to use the implicit function theorem. The free-final-state
condition, Θ(tf ), provides both. Employing the multi-variable form of the implicit function theorem gives
\frac{d\Lambda(t_0)}{dp_1} = -\left[\frac{\partial \Theta(t_f)}{\partial \Lambda(t_0)}\right]^{-1} \frac{\partial \Theta(t_f)}{\partial p_1},    (35)

\frac{d\Lambda(t_0)}{dp_2} = -\left[\frac{\partial \Theta(t_f)}{\partial \Lambda(t_0)}\right]^{-1} \frac{\partial \Theta(t_f)}{\partial p_2}.    (36)
The only necessary condition is that the Jacobian, ∂Θ(tf )/∂Λ(t0 ), be full rank. In this case, the Jacobian
is a 2 × 2 matrix. The requisite derivatives are below,

\frac{d\Theta(t_f)}{d\Lambda(t_0)} =
\begin{bmatrix}
\dfrac{\partial \Theta_1(t_f)}{\partial y(t_f)} \dfrac{\partial y(t_f)}{\partial \Lambda(t_0)} + \dfrac{\partial \Theta_1(t_f)}{\partial \Lambda(t_f)} \dfrac{\partial \Lambda(t_f)}{\partial \Lambda(t_0)} \\[2ex]
\dfrac{\partial \Theta_2(t_f)}{\partial y(t_f)} \dfrac{\partial y(t_f)}{\partial \Lambda(t_0)} + \dfrac{\partial \Theta_2(t_f)}{\partial \Lambda(t_f)} \dfrac{\partial \Lambda(t_f)}{\partial \Lambda(t_0)}
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{\partial \Theta_1(t_f)}{\partial y(t_f)} \Phi_{12} + \dfrac{\partial \Theta_1(t_f)}{\partial \Lambda(t_f)} \Phi_{22} \\[2ex]
\dfrac{\partial \Theta_2(t_f)}{\partial y(t_f)} \Phi_{12} + \dfrac{\partial \Theta_2(t_f)}{\partial \Lambda(t_f)} \Phi_{22}
\end{bmatrix},    (37)

\frac{d\Theta(t_f)}{dp_1} = \begin{bmatrix} -S_{y_1} y_1(t_f) \\ 0 \end{bmatrix}, \qquad
\frac{d\Theta(t_f)}{dp_2} = \begin{bmatrix} 0 \\ -S_{y_2} y_2(t_f) \end{bmatrix},    (38)
where Φ12 and Φ22 are the upper right and lower right square partitions of the state transition matrix Φ.
Using MATLAB, an optimal solution for the nominal conditions was generated.
y_{1_0} = 500 \text{ m}, \quad y_{2_0} = 50 \text{ m/s}, \quad D = 500 \text{ m}, \quad V = 100 \text{ m/s}, \quad t_0 = 0 \text{ s},    (39)
S_{y_1} = 250, \quad S_{y_2} = 250    (40)
Neighboring solutions with p1 ∈ [0.9, 1.1] and p2 = 1 were also generated with MATLAB and by implicit approximation. This corresponds to variations in Sy1 ∈ [225, 275]. The results appear in Figures 4 and 5. The
complete nominal solution appears in Figure 4 as a solid blue curve. Figure 5 shows only the final moments of
the trajectory. The nominal solution appears as a blue curve, the re-computed neighboring solutions appear
as red curves, and the circle markers denote the implicitly approximated neighboring solutions. For larger
values of Sy1, the state is driven closer to zero. The absolute error in Λ(t0) associated with the implicit approximation is less than 6 × 10⁻³.
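One way to assemble these pieces is sketched below. This is a hedged Python reconstruction, not the MATLAB code used for the results above: it solves the linear terminal condition of Eq. (30) for the nominal initial costates, forms the sensitivities of Eqs. (35)-(38), and applies the first-order Taylor update of Eq. (33). The function names and the choice of ∆p are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Nominal data from Eqs. (22) and (39)-(40); p1 = p2 = 1 at the nominal point.
y0 = np.array([500.0, 50.0])        # y1(t0) [m], y2(t0) [m/s]
D, V, t0 = 500.0, 100.0, 0.0
tf = t0 + D / V                     # Eq. (22): fixed final time
Sy = np.diag([250.0, 250.0])        # final-state weights Sy1, Sy2

# Hamiltonian matrix of Eq. (31) and state transition matrix of Eq. (32).
H = np.array([[0.0, 1.0,  0.0,  0.0],
              [0.0, 0.0,  0.0, -1.0],
              [0.0, 0.0,  0.0,  0.0],
              [0.0, 0.0, -1.0,  0.0]])
Phi = expm(H * (tf - t0))
P11, P12 = Phi[:2, :2], Phi[:2, 2:]
P21, P22 = Phi[2:, :2], Phi[2:, 2:]

def initial_costates(p):
    """Nominal Lambda(t0) from the linear terminal condition Theta(tf) = 0, Eq. (30)."""
    Sp = np.diag(p) @ Sy
    return np.linalg.solve(P22 - Sp @ P12, (Sp @ P11 - P21) @ y0)

p_nom = np.array([1.0, 1.0])
Lam0 = initial_costates(p_nom)
yf = P11 @ y0 + P12 @ Lam0          # nominal final states y1(tf), y2(tf)

# Sensitivities of Lambda(t0) with respect to p1 and p2, Eqs. (35)-(38).
Sp = np.diag(p_nom) @ Sy
J = P22 - Sp @ P12                  # dTheta(tf)/dLambda(t0), Eq. (37)
dLam_dp1 = -np.linalg.solve(J, np.array([-Sy[0, 0] * yf[0], 0.0]))   # Eq. (35)
dLam_dp2 = -np.linalg.solve(J, np.array([0.0, -Sy[1, 1] * yf[1]]))   # Eq. (36)

# First-order implicit function estimate of neighboring initial costates, Eq. (33).
dp = np.array([0.1, 0.0])           # e.g., p1 = 1.1, p2 = 1
Lam0_approx = Lam0 + dLam_dp1 * dp[0] + dLam_dp2 * dp[1]
```

The approximate costates can then be propagated through Eq. (32) to produce approximate neighboring trajectories of the kind compared in Figures 4 and 5.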
Figure 4. Nominal solution to the optimal control problem with final-state weighting.
Figure 5. Nominal and neighboring solutions near final time for the optimal control problem with final-state
weighting.
IV. Summary and Future Work
A basic version of the implicit function theorem rooted in differential calculus was presented, and implicit differentiation led to high order sensitivity equations with respect to independent parameters. These
sensitivities allowed construction of a family of neighboring solutions once a nominal solution was known.
This implicit function method was applied to three representative problems: a scalar nonlinear root solving
problem, the nonlinear Riccati differential equation, and an optimal control problem (two point boundary
value problem). All results obtained by the implicit function method agree to plotting accuracy with recomputed solutions. For the simple algebraic root solving problem, a first-order implicit function approach
approximates a neighboring root with less than 3 × 10⁻⁴ absolute error. The nonlinear Riccati differential
equation and optimal control examples illustrated the flexibility and accuracy of the method, and error plots
demonstrated the radial divergence associated with the Taylor series approximations. The results presented
here are promising, and generalizations of the implicit function theorem are expected to have
a favorable impact on a wide class of trajectory generation and optimal control problems.
This paper presents merely a starting place. Future work includes generalizations to higher dimensions,
more complex optimal control problems, and trajectory design and optimization. Of particular interest are
applications to the ascent and descent phases of lunar trajectory design and mission planning.
Acknowledgments
The authors wish to acknowledge the support of NASA Johnson Space Center under award no. C09-0026.
The technical monitor is Lee Bryant. Any opinions, findings, and conclusions or recommendations expressed
in this material are those of the authors and do not necessarily reflect the views of the National Aeronautics
and Space Administration.
References
1 Junkins, J., Majji, M., and Turner, J., “Generalizations and Applications of the Lagrange Implicit Function Theorem,”
AAS 08-302.
2 Hubbard, J. and Hubbard, B., Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach, Matrix Editions, 2001.
3 Bellman, R. and Cooke, K., Modern Elementary Differential Equations, Dover Publications, Inc., 1971.
4 Goursat, E., A Course in Mathematical Analysis, Ginn and Company, 1904.
5 Young, W., The Fundamental Theorems of Differential Calculus, Cambridge University Press, 1910.
6 Dunyak, J., Junkins, J., and Watson, L., “Robust Nonlinear Least Squares Estimation Using the Chow-Yorke Homotopy
Method,” Journal of Guidance, Control and Dynamics, 1984.
7 Krantz, S. and Parks, H., The Implicit Function Theorem: History, Theory and Applications, Birkhauser, 2002.
8 Bryson, A. and Ho, Y., Applied Optimal Control: Optimization, Estimation, and Control, Hemisphere Publishing Corp.,
1975.
9 Lewis, F. and Syrmos, V., Optimal Control, John Wiley and Sons, Inc., 1995.
10 Malanowski, K. and Maurer, H., “Sensitivity Analysis for Parametric Control Problems with Control State Constraints,”
Computational Optimization and Applications, 1996.
11 Malanowski, K. and Maurer, H., “Sensitivity Analysis for Optimal Control Problems Subject to Higher Order State
Constraints,” Annals of Operations Research, 2001.
12 Pinho, M. D. and Rosenblueth, J., “Mixed Constraints in Optimal Control: An Implicit Function Theorem Approach,”
IMA Journal of Mathematical Control and Information, 2007.
13 Bate, R., Mueller, D., and White, J., Fundamentals of Astrodynamics, Dover Publications, Inc., 1971.
14 Widder, D., Advanced Calculus, Dover Publications, Inc., 1989.
15 Hamilton, R., "The Inverse Function Theorem of Nash and Moser," Bulletin of the American Mathematical Society,
1982.
16 Hormander, L., “Implicit Function Theorems,” Lectures at Stanford University.
17 Jittorntrum, K., “Technical Note: An Implicit Function Theorem,” Journal of Optimization Theory and Applications,
1978.
18 Kumagai, S., “Technical Comment: An Implicit Function Theorem,” Journal of Optimization Theory and Applications,
1980.
19 Stewart, J., Calculus Early Vectors, Thomson Brooks/Cole Publishing Co., 1999.