Sensitivity analysis for relaxed optimal control problems
with final-state constraints

J. Frédéric Bonnans∗, Laurent Pfeiffer∗, and Oana Serea†
∗ INRIA Saclay and CMAP, Ecole Polytechnique (France)
† University of Perpignan (France)

May 2012
Introduction

We consider:
- a family of optimal control problems (OCP), parametrized by a nonnegative variable θ close to 0,
- its value function V(θ),
- a strong solution (ū, ȳ) for the value θ = 0.

Our goal: computing a second-order expansion of V(θ) near 0.
The main difficulty: large perturbations of the control variable, in the uniform norm, may occur.

Our tools:
- the abstract methodology of sensitivity analysis,
- relaxation with Young measures.
Outline

1. Relaxation of optimal control problems
   - Strong solutions
   - Relaxation
2. Methodology for sensitivity analysis
   - Upper estimate
   - Lower estimate
3. Application to optimal control
   - Multipliers
   - Linearizations of the problem
   - Decomposition principle
Strong solutions

For a control u in L∞([0,T], R^m) and θ ≥ 0, consider the trajectory y[u,θ], solution of the differential system
    ẏ_t = f(u_t, y_t, θ),   for a.a. t in [0,T],
    y_0 = y^0.

Set K = {0_{n_E}} × R_+^{n_I}. Our family of OCPs is
    Min_{u ∈ L∞}  φ(y_T[u,θ]),   s.t. Φ(y_T[u], θ) ∈ K.

Reference problem: θ = 0.
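To make the setting concrete, here is a minimal numerical sketch (not from the talk): it integrates y[u,θ] by forward Euler and evaluates a terminal cost φ and a constraint function Φ. The dynamics f, the functions φ and Φ, and the control used below are toy choices.

```python
# Minimal sketch with toy data: forward Euler for y_dot = f(u_t, y_t, theta),
# y_0 = y0, then evaluation of the terminal cost and constraint.
import numpy as np

T, n_steps = 1.0, 1000
dt = T / n_steps

def f(u, y, theta):
    # illustrative scalar dynamics (assumption, not the talk's f)
    return -y + u + theta

def final_state(u, theta, y0=0.0):
    """Return y_T[u, theta] for a control given as a function of time."""
    y = y0
    for k in range(n_steps):
        y = y + dt * f(u(k * dt), y, theta)
    return y

phi = lambda yT: yT**2                # terminal cost (assumption)
Phi = lambda yT, theta: yT - theta    # terminal constraint function (assumption)

u_bar = lambda t: 0.5                 # a candidate control (assumption)
yT = final_state(u_bar, theta=0.0)
print("cost phi(y_T):", phi(yT), "  constraint Phi(y_T, 0):", Phi(yT, 0.0))
```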
A control ū is an R-strong solution of the reference problem if there exists η > 0 such that ū solves the localized problem
    Min_{||u||_∞ ≤ R}  φ(y_T[u,0]),
    s.t. Φ(y_T[u], 0) ∈ K,   ||y[u,0] − ȳ||_∞ ≤ η.

From now on, we fix ū, R, and η.
Relaxation

Set:
- B_R, the ball of R^m of radius R,
- P_R, the set of probability measures on B_R,
- M_R, the set of Young measures on [0,T] × B_R, defined as L∞([0,T], P_R).

For µ ∈ M_R, denote by y[µ,θ] the solution to
    ẏ_t = ∫_{B_R} f(u, y_t, θ) dµ_t(u),
    y_0 = y^0.
Example: for µ_t = ½ δ_{u¹_t} + ½ δ_{u²_t}, the relaxed dynamics is
    ẏ_t[µ,θ] = ½ f(u¹_t, y_t[µ,θ], θ) + ½ f(u²_t, y_t[µ,θ], θ),
and it can be approximated by a classical control v that, on a fine subdivision of [0,T], switches rapidly between u¹ (taken "with probability 1/2") and u² (taken "with probability 1/2").

[Figure: a chattering control v on [0,T], alternating between the graphs of u¹ and u², each selected with probability 1/2.]
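The chattering approximation above can be checked numerically. The sketch below (toy data, not from the talk) integrates the relaxed dynamics for µ_t = ½δ_{u¹_t} + ½δ_{u²_t} and a classical control that alternates between u¹ and u² on a fine grid; the two final states are close, and coincide in the limit of infinitely fast switching.

```python
# Toy illustration of the chattering approximation of a Young measure.
import numpy as np

T, n_steps = 1.0, 20000
dt = T / n_steps
theta = 0.0

def f(u, y, theta):
    # illustrative scalar dynamics (assumption)
    return -y + u + theta

u1 = lambda t: 1.0      # first control value (assumption)
u2 = lambda t: -0.5     # second control value (assumption)

def euler(rhs, y0=0.0):
    y = y0
    for k in range(n_steps):
        y = y + dt * rhs(k * dt, y)
    return y

# relaxed dynamics: average of the two vector fields (weights 1/2, 1/2)
y_relaxed = euler(lambda t, y: 0.5 * f(u1(t), y, theta) + 0.5 * f(u2(t), y, theta))

# chattering control: alternate between u1 and u2 on N subintervals of [0, T]
N = 200
u_chatter = lambda t: u1(t) if int(t * N / T) % 2 == 0 else u2(t)
y_chatter = euler(lambda t, y: f(u_chatter(t), y, theta))

print("relaxed y_T:", y_relaxed, "  chattering y_T:", y_chatter)  # close values
```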
We relax the OCPs as follows:
    V(θ) = Min_{µ ∈ M_R}  φ(y_T[µ,θ]),
    s.t. Φ(y_T[µ], θ) ∈ K,   ||y[µ,θ] − ȳ||_∞ ≤ η.

Theorem
Under some qualification assumptions, the relaxed and the classical OCPs have the same value.

Therefore, µ̄_t = δ_{ū_t} is a relaxed solution for θ = 0.
Upper estimate

Consider the family of optimization problems
    V(θ) = Min_{x ∈ H}  f(x,θ)   s.t.  g(x,θ) ∈ K,
where K encodes the equality and inequality constraints. The Lagrangian is
    L(x,λ,θ) = f(x,θ) + ⟨λ, g(x,θ)⟩.

Consider:
- x̄, an optimal solution for θ = 0,
- Λ, the set of Lagrange multipliers associated with x̄.
Let d and h be in H, and set
    y_θ = x̄ + dθ + hθ².
Then
    f(y_θ, θ) = f(x̄,0) + Df(x̄,0)(d,1) θ + [ D_x f(x̄,0)h + ½ D²f(x̄,0)(d,1)² ] θ² + o(θ²),
    g(y_θ, θ) = g(x̄,0) + Dg(x̄,0)(d,1) θ + [ D_x g(x̄,0)h + ½ D²g(x̄,0)(d,1)² ] θ² + o(θ²).
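This expansion can be checked numerically on a toy function. In the sketch below (my own example, not from the talk), f(x,θ) = e^x(1+θ) + θ²x/2 and the derivatives entering the expansion are written by hand; the remainder divided by θ² tends to 0.

```python
# Numerical check of the second-order expansion of f(x_bar + d*theta + h*theta**2, theta).
import numpy as np

def f(x, th):
    # toy smooth function with easy derivatives (assumption)
    return np.exp(x) * (1.0 + th) + 0.5 * th**2 * x

x_bar, d, h = 0.3, 1.2, -0.7
e = np.exp(x_bar)

Df_d1  = e * (d + 1.0)                  # Df(x_bar,0)(d,1) = f_x*d + f_theta
D2f_d1 = e * (d**2 + 2.0 * d) + x_bar   # D^2 f(x_bar,0)(d,1)^2
second = e * h + 0.5 * D2f_d1           # D_x f(x_bar,0)*h + (1/2) D^2 f(x_bar,0)(d,1)^2

for theta in (1e-1, 1e-2, 1e-3):
    exact = f(x_bar + d * theta + h * theta**2, theta)
    model = f(x_bar, 0.0) + Df_d1 * theta + second * theta**2
    print(theta, (exact - model) / theta**2)   # o(theta^2) remainder: ratio tends to 0
```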
By a metric regularity theorem, if
    Dg(x̄,0)(d,1) ∈ T_K(g(x̄,0))
and
    D_x g(x̄,0)h + ½ D²g(x̄,0)(d,1)² ∈ T²_K(g(x̄,0), Dg(x̄,0)(d,1)),
then there exists
    x̃_θ = x̄ + dθ + hθ² + o(θ²)
such that g(x̃_θ, θ) ∈ K.
This justifies the two linearized problems. At the first order,
    Min_{d ∈ H}  Df(x̄,0)(d,1)   s.t.  Dg(x̄,0)(d,1) ∈ T_K(g(x̄,0)).          (LP)
Its dual is
    Max_{λ ∈ Λ}  D_θ L(x̄,λ,0).                                              (LD)
At the second order, for d in S(LP),
    Min_{h ∈ H}  D_x f(x̄,0)h + ½ D²f(x̄,0)(d,1)²
    s.t.  D_x g(x̄,0)h + ½ D²g(x̄,0)(d,1)² ∈ T²_K(g(x̄,0), Dg(x̄,0)(d,1)).     (QP(d))
Its dual is
    Max_{λ ∈ S(LD)}  ½ D²L(x̄,λ,0)(d,1)².                                    (QD(d))

Finally,
    V(θ) ≤ V(0) + Val(LP) θ + Min_{d ∈ S(LP)} Val(QP(d)) θ² + o(θ²).
Lower estimate

Denote by C(x̄) the critical cone:
    C(x̄) = { d : D_x f(x̄,0)d = 0  and  D_x g(x̄,0)d ∈ T_K(g(x̄,0)) },
and consider the second-order sufficient condition:
    for some α > 0, for all d in C(x̄),
        Max_{λ ∈ S(LD)}  D²_x L(x̄,λ,0) d² ≥ α |d|².                         (SC2)

Then, by a proof by contradiction and up to a subsequence, the solutions x^θ of the perturbed problems satisfy
    (x^θ − x̄)/θ  →  d ∈ S(LP)   as θ ↓ 0.
With a Taylor expansion, for all λ in S(LD),
    V(θ) − V(0) = f(x^θ, θ) − f(x̄, 0)
                ≥ L(x^θ, λ, θ) − L(x̄, λ, 0)
                = θ D_θ L(x̄,λ,0) + (θ²/2) D²_{(x,θ)}L(x̄,λ,0)((x^θ − x̄)/θ, 1)² + o(θ²)
                ≥ θ D_θ L(x̄,λ,0) + (θ²/2) Min_{d ∈ S(LP)} D²_{(x,θ)}L(x̄,λ,0)(d,1)² + o(θ²).

Finally,
    V(θ) ≥ V(0) + Val(LP) θ + Min_{d ∈ S(LP)} Val(QP(d)) θ² + o(θ²).
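The two estimates combine into a two-sided expansion of V(θ). As a sanity check (my own toy example, not from the talk), consider V(θ) = min_x x² subject to θ − x ≤ 0: here x̄ = 0, the only multiplier is λ = 0, Val(LP) = 0 and min_{d ∈ S(LP)} Val(QP(d)) = min_{d ≥ 1} d² = 1, so the predicted expansion is θ², which is also the exact value.

```python
# Toy parametric problem: V(theta) = min_x x**2  s.t.  theta - x <= 0.
# Exact solution: x(theta) = max(theta, 0), hence V(theta) = max(theta, 0)**2.
import numpy as np

def V(theta):
    x = max(theta, 0.0)
    return x**2

val_LP, min_val_QP = 0.0, 1.0        # computed by hand for this toy problem

for theta in (0.1, 0.01, 0.001):
    predicted = V(0.0) + val_LP * theta + min_val_QP * theta**2
    print(theta, V(theta), predicted)   # exact and predicted values coincide here
```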
Multipliers

Let us define:
- the end-point Lagrangian Φ[λ]:
      Φ[λ](y, θ) = φ(y, θ) + ⟨λ, Φ(y, θ)⟩,
- the Hamiltonian H:
      H[p](u, y, θ) = ⟨p, f(u, y, θ)⟩,
- the costate p^λ associated with λ ∈ N_K(Φ(ȳ_T, 0)), the solution to
      ṗ_t = −H_y[p_t](ū_t, ȳ_t, 0),
      p_T = D_{y_T} Φ[λ](ȳ_T, 0).
Let λ ∈ N_K(Φ(ȳ_T, 0)). Then λ is said to be:
- a Lagrange multiplier if, for a.a. t,
      D_u H[p^λ_t](ū_t, ȳ_t, 0) = 0,
- a Pontryagin multiplier if, for a.a. t and for all u in B_R,
      H[p^λ_t](ū_t, ȳ_t, 0) ≤ H[p^λ_t](u, ȳ_t, 0).
The associated sets Λ_L and Λ_P satisfy Λ_P ⊂ Λ_L.

Theorem (Pontryagin's principle)
Under a qualification condition, if ū is an R-strong solution, then Λ_P ≠ ∅.

In the sequel, we write f[t] = f(ū_t, ȳ_t, 0).
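For illustration, the sketch below (toy data, not from the talk) integrates the costate equation backwards in time for a toy f, a candidate control ū, and an arbitrary terminal value standing in for D_{y_T}Φ[λ](ȳ_T,0), and then evaluates D_u H along the trajectory; a Lagrange multiplier would make this quantity vanish almost everywhere.

```python
# Costate sketch with toy data: p_dot = -H_y[p](u_bar, y_bar, 0), integrated backwards.
import numpy as np

T, n = 1.0, 1000
dt = T / n

f   = lambda u, y: -y + u       # toy dynamics at theta = 0 (assumption)
f_y = lambda u, y: -1.0         # hand-written partial derivatives of f
f_u = lambda u, y: 1.0

u_bar = lambda t: 0.5           # nominal control (assumption)

# nominal trajectory y_bar by forward Euler
y = np.zeros(n + 1)
for k in range(n):
    y[k + 1] = y[k] + dt * f(u_bar(k * dt), y[k])

# costate by backward Euler; H[p](u, y) = p*f(u, y), so H_y = p*f_y
p = np.zeros(n + 1)
p[n] = 1.0                      # stands for D_{y_T} Phi[lambda](y_bar_T, 0) (assumption)
for k in range(n, 0, -1):
    p[k - 1] = p[k] + dt * p[k] * f_y(u_bar(k * dt), y[k])

# D_u H[p_t](u_bar_t, y_bar_t, 0) = p_t * f_u; it is not 0 here, so this pair
# does not satisfy the Lagrange-multiplier condition.
DuH = p * f_u(0.0, 0.0)
print("D_u H at t = 0, T/2, T:", DuH[0], DuH[n // 2], DuH[n])
```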
Linearizations of the problem

Define:
- ξ^θ, the solution to
      ξ̇^θ_t = D_y f[t] ξ^θ_t + D_θ f[t],
      ξ^θ_0 = 0,
- z[v], the standard linearization (illustrated in the sketch below):
      ż_t[v] = D_y f[t] z_t[v] + D_u f[t] v_t,
      z_0[v] = 0,
- ξ[µ], the Pontryagin linearization:
      ξ̇_t[µ] = D_y f[t] ξ_t[µ] + ∫_{B_R} ( f(u, ȳ_t, 0) − f[t] ) dµ_t(u),
      ξ_0[µ] = 0.
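Here is a minimal check of the standard linearization (toy data, not from the talk): z[v] is integrated by forward Euler along a nominal trajectory and compared with the finite difference (y_T[ū + sv, 0] − ȳ_T)/s for a small s.

```python
# Standard linearization z[v] versus a finite-difference approximation.
import numpy as np

T, n = 1.0, 2000
dt = T / n

f   = lambda u, y: np.sin(u) - y**2     # toy dynamics at theta = 0 (assumption)
f_y = lambda u, y: -2.0 * y             # hand-written partial derivatives
f_u = lambda u, y: np.cos(u)

u_bar = lambda t: 0.3 * t               # nominal control (assumption)
v     = lambda t: np.cos(2.0 * t)       # perturbation direction (assumption)

def trajectory(u):
    ys = [0.0]
    for k in range(n):
        ys.append(ys[-1] + dt * f(u(k * dt), ys[-1]))
    return np.array(ys)

y_bar = trajectory(u_bar)

# forward Euler for z_dot = D_y f[t] z + D_u f[t] v_t, z_0 = 0
z = 0.0
for k in range(n):
    t = k * dt
    z = z + dt * (f_y(u_bar(t), y_bar[k]) * z + f_u(u_bar(t), y_bar[k]) * v(t))

s = 1e-4
fd = (trajectory(lambda t: u_bar(t) + s * v(t))[-1] - y_bar[-1]) / s
print("z_T[v]:", z, "  finite difference:", fd)   # the two values are close
```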
We consider two kinds of paths:
- the "standard" ones, u^θ = ū + vθ, for which
      φ(y_T[u^θ, θ], θ) − φ(ȳ_T, 0) = Dφ(ȳ_T, 0)(z_T[v] + ξ^θ_T, 1) θ + o(θ),
- the "Pontryagin" ones, µ^θ = (1 − θ) µ̄ + θ µ, for which
      φ(y_T[µ^θ, θ], θ) − φ(ȳ_T, 0) = Dφ(ȳ_T, 0)(ξ_T[µ] + ξ^θ_T, 1) θ + o(θ).

Denote by
    C_S = cone{ z[v] : v ∈ L∞(0,T; R^m) },
    C_P = cone{ ξ[µ] : µ ∈ M_R }.
Note that C_S ⊂ C_P.
At the first order, we can consider:
- the standard linearized problem and its dual:
      Min_{ξ ∈ C_S}  Dφ(ȳ_T,0)(ξ + ξ^θ_T, 1)
      s.t.  DΦ(ȳ_T,0)(ξ + ξ^θ_T, 1) ∈ T_K(Φ(ȳ_T,0)),                        (SLP)

      Max_{λ ∈ Λ_L}  { ∫_0^T D_θ H[p^λ_t][t] dt + D_θ Φ[λ](ȳ_T,0) },        (SLD)

- the Pontryagin linearized problem and its dual:
      Min_{ξ ∈ C_P}  Dφ(ȳ_T,0)(ξ + ξ^θ_T, 1)
      s.t.  DΦ(ȳ_T,0)(ξ + ξ^θ_T, 1) ∈ T_K(Φ(ȳ_T,0)),                        (PLP)

      Max_{λ ∈ Λ_P}  { ∫_0^T D_θ H[p^λ_t][t] dt + D_θ Φ[λ](ȳ_T,0) }.        (PLD)
Note that Val(PLP) ≤ Val(SLP).

Theorem
Under a qualification condition, the following estimate holds:
    V(θ) ≤ V(0) + Val(PLP) θ + o(θ).

Two difficulties arise now:
- problem (PLP) does not necessarily have solutions,
- second-order expansions cannot be computed with Pontryagin paths.
Therefore, we suppose that Val(PLP) = Val(SLP).
At the second order, we "mix" the linearizations. For v ∈ S(SLP), set
    µ^θ = (1 − θ²) δ_{ū + vθ} + θ² µ,
where δ_{ū + vθ} is the first-order term, θ²µ is the second-order term, and µ ∈ M_R is to be optimized.

The dual of the second-order linearized problem is
    Max_{λ ∈ S(LD)}  Ω_θ[λ](v),                                             (QD(v))
where Ω_θ is the "Hessian of the Lagrangian":
    Ω_θ[λ](v) = D²Φ[λ](ȳ_T,0)(z_T[v] + ξ^θ_T, 1)²
                + ∫_0^T D²H[p^λ_t][t](v_t, z_t[v] + ξ^θ_t, 1)² dt.
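As an illustration of this mixed path, the sketch below (toy data, not from the talk) takes µ = δ_w for a fixed control w and integrates the relaxed dynamics driven by µ^θ = (1 − θ²) δ_{ū + vθ} + θ² δ_w; for θ = 0 one recovers the nominal trajectory.

```python
# Relaxed trajectory driven by mu^theta = (1 - theta^2) delta_{u_bar + v*theta} + theta^2 delta_w.
import numpy as np

T, n = 1.0, 2000
dt = T / n

f = lambda u, y, th: -y + np.sin(u) + th   # toy dynamics (assumption)
u_bar = lambda t: 0.5                      # nominal control (assumption)
v     = lambda t: np.cos(t)                # first-order direction (assumption)
w     = lambda t: -1.0                     # mu = delta_w, second-order term (assumption)

def relaxed_final_state(theta):
    y = 0.0
    for k in range(n):
        t = k * dt
        # relaxed right-hand side: convex combination of the two vector fields
        y = y + dt * ((1.0 - theta**2) * f(u_bar(t) + theta * v(t), y, theta)
                      + theta**2 * f(w(t), y, theta))
    return y

for theta in (0.0, 0.05, 0.1):
    print(theta, relaxed_final_state(theta))   # theta = 0 gives the nominal y_T
```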
Theorem
The following expansion is an upper estimate of V(θ):
    V(0) + Val(SLP) θ + (θ²/2) Min_{v ∈ S(SLP)} Max_{λ ∈ S(LD)} Ω_θ[λ](v) + o(θ²).

We can extend:
- the first-order linearized problem,
- the Hessian Ω_θ,
- the theorem,
by replacing v by a Young measure ν on [0,T] × R^m with a generalized L²-norm (in a space denoted by M_2):
    V(θ) ≤ V(0) + Val(SLP) θ + (θ²/2) Min_{ν ∈ S(SLP′)} Max_{λ ∈ S(LD)} Ω_θ[λ](ν) + o(θ²).
Decomposition principle

The critical cone C_∞ is the set of v in L∞(0,T; R^m) such that
    D_{y_T} φ(ȳ_T,0) z_T[v] ≤ 0,
    D_{y_T} Φ(ȳ_T,0) z_T[v] ∈ T_K(Φ(ȳ_T,0)).
We also define the quadratic form Ω[λ](v) by
    Ω[λ](v) = D²_{(y_T)²} Φ[λ](ȳ_T,0)(z_T[v])²
              + ∫_0^T D²_{(u,y)²} H[p^λ_t](ū_t, ȳ_t, 0)(v_t, z_t[v])² dt.
As previously, we can extend C_∞ to a cone C_2 ⊂ M_2 and Ω to elements of M_2.
Let θ_k ↓ 0 and let µ_k be feasible points of the perturbed problems with θ = θ_k, such that d_2(µ_k, µ̄) → 0. We decompose µ_k (illustrated below) into:
- µ_{A,k}, where the perturbations are small in the L∞-norm,
- µ_{B,k}, where the perturbations are large, on a subset B_k of [0,T].

Theorem
Assume that meas(B_k) → 0 and d_∞(ū, µ_{A,k}) → 0. Then a lower estimate of V(θ_k) − V(0) is
    ∫_{B_k} ( H[p^λ_t](u, ȳ_t) − H[p^λ_t][t] ) dµ_{B,k}(u) dt
    + Ω[λ](µ_{A,k} − ū) + o(d_2(µ̄, µ_k)²).
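The sketch below (toy data, not from the talk) illustrates the decomposition on a classical control u_k standing in for the Young measure µ_k: the perturbation of ū is split into a part that is small in the L∞-norm and a part supported on a small set B_k where it is large.

```python
# Decomposition of a perturbed control into small and large perturbations.
import numpy as np

n = 1000
t = np.linspace(0.0, 1.0, n)

u_bar = 0.5 * np.ones(n)                        # nominal control (assumption)
u_k = u_bar + 0.05 * np.sin(10.0 * t)           # small perturbation everywhere...
u_k[(t > 0.40) & (t < 0.42)] = -2.0             # ...plus a large one on a small set

delta = 0.5                                     # threshold separating small/large (assumption)
B_k = np.abs(u_k - u_bar) > delta               # set where the perturbation is large

u_A = np.where(B_k, u_bar, u_k)                 # "small" part: equals u_bar on B_k
u_B = np.where(B_k, u_k, u_bar)                 # "large" part: differs from u_bar only on B_k

print("meas(B_k) =", B_k.mean())                            # small, as in the theorem
print("sup |u_A - u_bar| =", np.max(np.abs(u_A - u_bar)))   # small in the L-infinity norm
```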
The strong second-order sufficient condition (SC2) is: there exists α > 0 such that
1. for almost all t and for all u ∈ B_R,
       sup_{λ ∈ S(LD)}  { H[p^λ_t](u, ȳ_t, 0) − H[p^λ_t](ū_t, ȳ_t, 0) } ≥ α |u − ū_t|²,
2. for all ν in C_2,
       sup_{λ ∈ S(LD)}  Ω[λ](ν) ≥ α ||ν||²_2.
We consider solutions µ^θ of the perturbed problems.

Theorem
If (SC2) holds, then for every sequence θ_k ↓ 0 the sequence (µ^{θ_k} − ū)/θ_k has a limit point for the narrow topology in S(SLP′). Moreover,
    V(0) + Val(SLP) θ + (θ²/2) Min_{ν ∈ S(SLP′)} Max_{λ ∈ S(LD)} Ω_θ[λ](ν) + o(θ²)
is a lower estimate of V(θ).

Sketch of the proof:
- we neglect the large perturbations,
- by contradiction, d_2(µ̄, µ^{θ_k}) = O(θ_k),
- we conclude with the decomposition principle.
Bibliography I

J.F. Bonnans, A. Shapiro, Perturbation analysis of optimization problems, Springer-Verlag, New York, 2000.

M. Valadier, Young measures. In Methods of nonconvex analysis, Lecture Notes in Mathematics, 1446:152-188, 1990.

L.C. Young, Lectures on the calculus of variations and optimal control theory. W.B. Saunders Co., 1969.
Bibliography II

J.F. Bonnans, N. Osmolovskiĭ, Second-order analysis of optimal control problems with control and initial-final state constraints. J. Convex Anal., 17(3-4):885-913, 2010.

J.F. Bonnans, L. Pfeiffer, O.S. Serea, Sensitivity analysis for relaxed OCP's with final-state constraints, Inria Research Report 7977, submitted.

Thank you for your attention!