An iterative Bregman regularization method for
optimal control problems with inequality
constraints
Frank Pörner
Joint work with Daniel Wachsmuth
Universität Würzburg
Joint Annual Meeting of DMV and GAMM 2016
Outline
1 Introduction
2 Iterative Bregman regularization method
Bregman distance
Iterative Bregman regularization
Regularity Assumption
Convergence results
3 Outlook/open problems
Introduction
Problem setting:
Minimize $\frac{1}{2}\|Su - z\|_Y^2$
such that $u_a \le u_b$ is replaced by $u_a \le u \le u_b$ a.e. in $\Omega$
Assumptions
$S : L^2(\Omega) \to Y$ linear and continuous, $Y$ a Hilbert space, $z \in Y$
Example: $Y = L^2(\Omega)$, $S = (-\Delta)^{-1} : L^2(\Omega) \to H_0^1(\Omega) \subseteq Y$
Control constraints
$u_a, u_b \in L^\infty(\Omega)$ with $u_a \le u_b$ a.e. in $\Omega$; $U_{ad}$ is the set of admissible functions: $U_{ad} := \{u \in L^2(\Omega) : u_a \le u \le u_b\}$
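As a concrete discretized instance (my illustration, not part of the talk): a minimal NumPy setup matching the example above, assuming a 1D finite-difference Laplacian on $(0,1)$ with homogeneous Dirichlet conditions; the names and the grid size are illustrative.

```python
import numpy as np

def make_discrete_S(n=100):
    """Sketch: S = (-Laplacian)^{-1} via 1D finite differences on (0,1),
    homogeneous Dirichlet boundary conditions, n interior grid points."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # -Laplacian
    return np.linalg.inv(A)                                          # S = A^{-1}

n = 100
S = make_discrete_S(n)
z = np.ones(n)                                   # desired state (illustrative)
ua, ub = -np.ones(n), np.ones(n)                 # box bounds u_a, u_b
proj_Uad = lambda u: np.clip(u, ua, ub)          # projection P_Uad
```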
Introduction
$U_{ad}$ weakly compact $\Rightarrow$ a solution $u^\dagger$ exists
optimal state: $y^\dagger = Su^\dagger$ is unique
Problems
solutions may be unstable w.r.t. perturbations
convergence of numerical schemes is not guaranteed
Solution: Regularize the problem
Tikhonov regularization
Let $u_k$ be the solution of
$$\min_{u \in U_{ad}} \; \frac{1}{2}\|Su - z\|_Y^2 + \frac{\alpha_k}{2}\|u\|_{L^2}^2 \tag{T}$$
$(\alpha_k)_k$ positive, monotonically decreasing with $\alpha_k \to 0$
Let $\bar u$ be the solution with minimal $L^2$-norm, solving
$$\min \|u\|_{L^2} \quad \text{s.t.} \quad u \in U_{ad}, \; Su = y^\dagger$$
Properties
$(\|u_k\|_{L^2})_k$ monotonically increasing, $\|u_k\|_{L^2} \le \|\bar u\|_{L^2}$
unconditional strong convergence $u_k \to \bar u$ for $\alpha_k \to 0$
Problems
For $\alpha_k \to 0$, (T) becomes ill-conditioned
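To illustrate the listed properties, a minimal sketch (assumptions: discretized $S$ as a matrix, $U_{ad}$ a box handled by clipping, projected gradient as the solver; `solve_tikhonov` and the iteration counts are illustrative) that traces the Tikhonov path for decreasing $\alpha_k$:

```python
import numpy as np

def solve_tikhonov(S, z, ua, ub, alpha, iters=2000):
    """Solve (T) for a fixed alpha by projected gradient with step 1/L."""
    L = np.linalg.norm(S, 2) ** 2 + alpha        # Lipschitz constant of the gradient
    u = np.zeros(S.shape[1])
    for _ in range(iters):
        grad = S.T @ (S @ u - z) + alpha * u
        u = np.clip(u - grad / L, ua, ub)        # projection onto U_ad
    return u

# Illustrative data; the printed norms should be monotonically increasing.
rng = np.random.default_rng(0)
S = rng.standard_normal((30, 30))
z = rng.standard_normal(30)
ua, ub = -np.ones(30), np.ones(30)
for alpha in [1.0, 0.1, 0.01, 0.001]:            # alpha_k decreasing to 0
    u = solve_tikhonov(S, z, ua, ub, alpha)
    print(alpha, np.linalg.norm(u))
```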
Iterated Tikhonov / Proximal Point Method
Let $u_{k+1}$ be the solution of
$$\min_{u \in U_{ad}} \; \frac{1}{2}\|Su - z\|_Y^2 + \frac{\alpha_{k+1}}{2}\|u - u_k\|_{L^2}^2$$
$(\alpha_k)_k$ positive, bounded: $0 < \alpha_k \le \bar\alpha$
Properties
nice monotonicity properties
$u_k \rightharpoonup u^*$, $u^*$ not necessarily the minimum norm solution
[Rockafellar ’76], [Tichatschke]
Problems
No strong convergence: counterexample by Güler [Güler '91]
Source condition not applicable if $y^\dagger \ne z$
Bregman distance
Solution: regularize with Bregman distance
Bregman distance associated with a regularization function $J : L^2(\Omega) \to \mathbb{R} \cup \{+\infty\}$:
$$D^\lambda(u, v) := J(u) - J(v) - (\lambda, u - v)$$
with a subgradient $\lambda \in \partial J(v)$.
Properties
$D^\lambda(u, v)$ non-negative, convex w.r.t. $u$
$D^\lambda(u, v) = 0$ if $u = v$ (and only if, when $J$ is strictly convex)
[Bregman ’67]
For $J(u) = \frac{1}{2}\|u\|_{L^2}^2$ one has $D^\lambda(u, v) = \frac{1}{2}\|u - v\|_{L^2}^2$
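A one-line check (simple expansion; here $\partial J(v) = \{v\}$, so $\lambda = v$):
$$D^\lambda(u, v) = \tfrac{1}{2}\|u\|_{L^2}^2 - \tfrac{1}{2}\|v\|_{L^2}^2 - (v, u - v) = \tfrac{1}{2}\|u\|_{L^2}^2 + \tfrac{1}{2}\|v\|_{L^2}^2 - (v, u) = \tfrac{1}{2}\|u - v\|_{L^2}^2.$$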
Bregman distance
We use
$$J(u) := \frac{1}{2}\|u\|_{L^2}^2 + I_{U_{ad}}(u),$$
hence
$$\partial J(u) \ni \lambda = u + w \quad \text{with } w \in \partial I_{U_{ad}}(u).$$
For $v \in U_{ad}$ we obtain:
$$D^\lambda(u, v) = \frac{1}{2}\|u - v\|_{L^2}^2 + I_{U_{ad}}(u) + \int_{\{v = u_a\}} w\,(u_a - u)\,dx + \int_{\{v = u_b\}} w\,(u_b - u)\,dx$$
The Bregman distance adds two terms that measure $u$ on the sets where the control constraints are active for $v$.
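A minimal numerical sketch of this formula (my illustration, not from the talk), assuming a uniform grid with mesh size `h`, controls `u`, `v` already in $U_{ad}$ (so the indicator term vanishes), and `w` a subgradient of $I_{U_{ad}}$ at `v`; the function name is hypothetical.

```python
import numpy as np

def bregman_distance(u, v, w, ua, ub, h):
    """Sketch: D^lambda(u, v) for J(u) = 1/2 |u|^2 + I_Uad(u) on a uniform
    grid; integrals become h-weighted sums over the active sets of v."""
    d = 0.5 * h * np.sum((u - v) ** 2)           # 1/2 |u - v|_{L2}^2
    lower = np.isclose(v, ua)                    # active set {v = u_a}
    upper = np.isclose(v, ub)                    # active set {v = u_b}
    d += h * np.sum(w[lower] * (ua[lower] - u[lower]))   # first boundary term
    d += h * np.sum(w[upper] * (ub[upper] - u[upper]))   # second boundary term
    return d
```

On $\{v = u_a\}$ one has $w \le 0$ and $u_a - u \le 0$, so both added terms are non-negative, consistent with $D^\lambda \ge 0$.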
Iterative Bregman regularization
Prototypical iterative method:
Algorithm A0
Let $u_0 \in U_{ad}$, $\lambda_0 \in \partial J(u_0)$, and $k = 1$.
1. Solve for $u_k$: minimize $\frac{1}{2}\|Su - z\|_Y^2 + \alpha_k D^{\lambda_{k-1}}(u, u_{k-1})$.
2. Choose $\lambda_k \in \partial J(u_k)$.
3. Set $k := k + 1$, go back to 1.
Problems
How to choose $u_0$ and $\lambda_0$?
How to choose $\lambda_k$?
Iterative Bregman regularization
How to choose $u_0$ and $\lambda_0$? Set
$$u_0 := \arg\min_{u \in L^2(\Omega)} \; \frac{1}{2}\|u\|_{L^2}^2 + I_{U_{ad}}(u) = P_{U_{ad}}(0)$$
and
$$\lambda_0 := 0 \in \partial J(u_0).$$
Iterative Bregman regularization
How to choose $\lambda_k$? First-order conditions for $u_k$: there exists $w_k \in \partial I_{U_{ad}}(u_k)$ such that
$$S^*(Su_k - z) + \alpha_k (u_k - \lambda_{k-1} + w_k) = 0.$$
Rearranging (recall $J(u) = \frac{1}{2}\|u\|_{L^2}^2 + I_{U_{ad}}(u)$):
$$\lambda_k := u_k + w_k = \frac{1}{\alpha_k} S^*(z - Su_k) + \lambda_{k-1} \in \partial J(u_k).$$
By induction:
$$\lambda_k = \sum_{i=1}^{k} \frac{1}{\alpha_i} S^*(z - Su_i) \in \partial J(u_k).$$
Iterative Bregman regularization
Algorithm A
Let $u_0 = P_{U_{ad}}(0) \in U_{ad}$, $\lambda_0 = 0 \in \partial J(u_0)$, and $k = 1$.
1. Solve for $u_k$: minimize $\frac{1}{2}\|Su - z\|_Y^2 + \alpha_k D^{\lambda_{k-1}}(u, u_{k-1})$.
2. Set $\lambda_k := \sum_{i=1}^{k} \frac{1}{\alpha_i} S^*(z - Su_i)$.
3. Set $k := k + 1$, go back to 1.
Here: $0 < \alpha_k \le \bar\alpha$.
Algorithm A is well-defined (see also [Osher, Burger, Goldfarb, Xu,
Yin 2005])
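To make Algorithm A concrete, a minimal NumPy sketch (my illustration, under the same discretization assumptions as before: $S$ a matrix, $U_{ad}$ a box, projected gradient for the inner problem; `bregman_iteration` and the fixed inner iteration count are illustrative choices):

```python
import numpy as np

def bregman_iteration(S, z, ua, ub, alphas, inner_iters=1000):
    """Sketch of Algorithm A: outer Bregman loop, projected-gradient inner solver."""
    n = S.shape[1]
    u = np.clip(np.zeros(n), ua, ub)             # u_0 = P_Uad(0)
    lam = np.zeros(n)                            # lambda_0 = 0
    for alpha in alphas:
        # Step 1, up to a constant in u:
        #   min_{ua <= u <= ub}  1/2 |Su - z|^2 + alpha (1/2 |u|^2 - (lam, u))
        L = np.linalg.norm(S, 2) ** 2 + alpha    # gradient Lipschitz constant
        for _ in range(inner_iters):
            grad = S.T @ (S @ u - z) + alpha * (u - lam)
            u = np.clip(u - grad / L, ua, ub)
        lam = lam + S.T @ (z - S @ u) / alpha    # Step 2: accumulate lambda_k
    return u, lam
```

With the setup sketched in the introduction, `bregman_iteration(S, z, ua, ub, [0.5] * 20)` runs 20 outer steps; monitoring `np.linalg.norm(S @ u - z)` along the way should show the monotone decrease stated on the next slide.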
Iterative Bregman regularization
Monotonicity
$(\|Su_k - z\|_Y)_k$ is monotonically decreasing
Convergence rate
$$\|Su_k - z\|_Y^2 - \|Su^\dagger - z\|_Y^2 \le c \left( \sum_{i=1}^{k} \frac{1}{\alpha_i} \right)^{-1}$$
Compare to [Osher, Burger, Goldfarb, Xu, Yin 2005]
Weak convergence
Weak limit points of $(u_k)_k$ are solutions of the original problem, and $Su_k \to Su^\dagger$.
Regularity Assumption
Regularity Assumption
Let $u^\dagger$ be a solution of the original problem with adjoint state $p^\dagger := S^*(Su^\dagger - z)$, and assume that there exist a set $I \subseteq \Omega$, a function $w \in Y$, and positive constants $\kappa, c$ such that the following holds:
(source condition) $I \supset \{x \in \Omega : p^\dagger(x) = 0\}$ and $\chi_I u^\dagger = \chi_I P_{U_{ad}}(S^* w)$,
(structure of the active set) $A := \Omega \setminus I$ and for all $\varepsilon > 0$: $|\{x \in A : 0 < |p^\dagger(x)| < \varepsilon\}| \le c \varepsilon^\kappa$,
(regularity of the solution) $S^* w \in L^\infty(\Omega)$.
[D. Wachsmuth & G. Wachsmuth 2011]
Regularity Assumption
Let $u^\dagger$ satisfy the regularity assumption.
Improved optimality condition for $u^\dagger$:
$$(S^*(Su^\dagger - z), v - u^\dagger) \ge c \, \|v - u^\dagger\|_{L^1(A)}^{1 + \frac{1}{\kappa}} \qquad \forall v \in U_{ad}$$
Similar to [Seydenschwanz 2015]
Regularity Assumption
Special Case 1: $I = \Omega$
We obtain the source condition $u^\dagger = P_{U_{ad}}(S^* w)$, equivalent to the existence of Lagrange multipliers for the minimum norm problem:
$$\min_{u \in U_{ad}} \; \frac{1}{2}\|u\|_{L^2}^2 \quad \text{s.t.} \quad Su = y^\dagger$$
[Neubauer]
Special Case 2: $I = \emptyset$
We obtain a purely structural assumption:
$$|\{x \in \Omega : 0 < |p^\dagger(x)| < \varepsilon\}| \le c \varepsilon^\kappa,$$
suited for bang-bang problems.
[Wachsmuth & Wachsmuth]
Convergence results
Convergence results under the regularity assumption
Special case: $I = \Omega$
Convergence:
$$\lim_{k \to \infty} \frac{1}{\alpha_k} \|u_k - u^\dagger\|_{L^2}^2 = 0, \qquad \lim_{k \to \infty} \frac{1}{\alpha_k^2} \|S(u_k - u^\dagger)\|_Y^2 = 0$$
Convergence rates:
$$\|u_k - u^\dagger\|_{L^2}^2 + \min_{i=1,\dots,k} \frac{1}{\alpha_i} \|S(u_i - u^\dagger)\|_Y^2 \le c \left( \sum_{i=1}^{k} \frac{1}{\alpha_i} \right)^{-1}$$
Convergence results
Convergence results under the regularity assumption
General case: $I \ne \Omega$
Convergence:
$$\lim_{k \to \infty} \|u_k - u^\dagger\|_{L^2} = 0$$
Convergence rates:
$$\|u_k - u^\dagger\|_{L^2}^2 + \min_{i=1,\dots,k} \frac{1}{\alpha_i} \|S(u_i - u^\dagger)\|_Y^2 \le c \left( \gamma_k^{-1} + \gamma_k^{-1} \sum_{j=1}^{k} \alpha_j^{-1} \gamma_j^{-\kappa} \right)$$
with the abbreviation $\gamma_k = \sum_{i=1}^{k} \alpha_i^{-1}$.
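A short sanity check (my own arithmetic, not on the slides) linking this bound to the rates on the next slide: for $\alpha_k := c_\alpha k^{-s}$ one has $\gamma_k \asymp k^{s+1}$, and
$$\gamma_k^{-1} \sum_{j=1}^{k} \alpha_j^{-1} \gamma_j^{-\kappa} \asymp k^{-(s+1)} \sum_{j=1}^{k} j^{\,s - \kappa(s+1)} \asymp \begin{cases} k^{-\kappa(s+1)} & \text{if } \kappa < 1, \\ k^{-(s+1)} \log k & \text{if } \kappa = 1, \\ k^{-(s+1)} & \text{if } \kappa > 1, \end{cases}$$
which is exactly the case distinction below.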
Convergence results
Special case: $\alpha_k := c_\alpha k^{-s}$ with $s \ge 0$, $c_\alpha > 0$.
General case: $I \ne \Omega$:
$$\|u_k - u^\dagger\|_{L^2}^2 \le c \begin{cases} k^{-\kappa(s+1)} & \text{if } \kappa < 1, \\ k^{-(s+1)} \log k & \text{if } \kappa = 1, \\ k^{-(s+1)} & \text{if } \kappa > 1, \end{cases}$$
$$\min_{j=1,\dots,k} \|S(u_j - u^\dagger)\|_Y^2 \le c \begin{cases} k^{-\kappa(s+1)-s} & \text{if } \kappa < 1, \\ k^{-(2s+1)} \log k & \text{if } \kappa = 1, \\ k^{-(2s+1)} & \text{if } \kappa > 1. \end{cases}$$
Outlook/open problems
Open Questions:
Stopping criteria
Discrepancy principles for perturbed data (a priori, a posteriori)
Comparison of the Bregman iteration with other methods (Tikhonov, projected gradient, ...)