Variational Methods in CT Reconstruction
Part II
Yiqiu Dong
DTU Compute
Technical University of Denmark
17 Jan. 2017
Inverse Problems
Why are inverse problems difficult?
⇐= They are often ILL-POSED!
How can we solve an ill-posed inverse problem?
- Do the measurements actually contain the information we want?
- Which of an infinite manifold of solutions do we want?
- The measurements may not be enough by themselves to completely determine the unknown. What other prior information about the "unknown" do we have?
⇐= We can use REGULARIZATION techniques!
Illustration of the need for regularization
[Figure 4.1 ("Illustration of the need for regularization", from the chapter "Computational Aspects: Regularization Methods"): the exact solution x^exact in R^n = span{v_1, ..., v_n} is mapped to b^exact = A x^exact in R^m = span{u_1, ..., u_m}. The noisy data b = b^exact + e pulls the naive solution x = A^{-1} b far away from x^exact, while the Tikhonov solution x_λ and the TSVD solution x_k remain close to it.]
Regularization Techniques
Consider solving the ill-posed problem

    b = N(A ū)

Regularization: approximate the inverse operator A^{-1} by a family of stable operators R_γ, where γ is the regularization parameter.
We need: from the noisy measurement we can find an appropriate parameter γ such that u_γ = R_γ(b) is a good approximation of the true solution ū.
Regularization Techniques
Let A be an injective bounded linear operator, and consider the measurement b = A ū + η. A family of linear maps R_γ parameterized by 0 < γ < ∞ is called a regularization strategy if

    lim_{γ→0} R_γ A u = u

for every u.
Further, assume we are given a noise level σ > 0 so that ‖b − A ū‖₂ ≤ σ. A choice of regularization parameter γ = γ(σ) as a function of σ is called admissible if
- γ(σ) → 0 as σ → 0, and
- sup{‖R_{γ(σ)} b − u‖ : ‖A u − b‖₂ ≤ σ} → 0 as σ → 0, for every u.
Regularization Techniques
There are many ways to do regularization. For example, the
- TSVD method (first week)
- Landweber's method (second week)
both incorporate some kind of regularization.
In variational methods, the regularization appears explicitly as a regularization term, R(u), which encodes prior information or assumptions about the object ū. For example:
- Tikhonov regularization: (1/2) ‖u‖₂²
- Generalized Tikhonov regularization: (1/2) ‖∇u‖₂²
- Total variation regularization: Σᵢ |[∇u]ᵢ|₂
- Regularization with higher-order derivatives
- Sparsity: |u|₀ or ‖u‖₁
- ...
Truncated SVD
Consider the linear inverse problem

    A u = b

with b = A ū + η. Based on the SVD of A, the "naive" solution is given by

    u = A^{-1} b = Σ_{i=1}^{l} (w_i^T b / σ_i) v_i = ū + Σ_{i=1}^{l} (w_i^T η / σ_i) v_i

[Figure 3.7: Picard plots for the discretized gravity surveying problem with a noisy right-hand side; the two panels correspond to two different noise levels and plot σ_i, |u_i^T b|, and |u_i^T b|/σ_i against i. The larger the noise level, the fewer SVD coefficients u_i^T b remain above the noise.]
Truncated SVD
The solution of truncated SVD is

    u = V Σₖ† W^T b = Σ_{i=1}^{k} (w_i^T b / σ_i) v_i

with Σₖ† = diag(σ₁⁻¹, ..., σₖ⁻¹, 0, ..., 0).
Regularization parameter: k, i.e., the number of SVD components.
Advantages:
- Intuitive
- Easy to compute, if we have the SVD
Drawback:
- For large-scale problems, computing the SVD is infeasible
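The TSVD solution above can be sketched with NumPy. The test matrix, noise level, and truncation index k below are illustrative choices, not from the lecture; the point is only that truncating the small singular values suppresses the amplified noise in the naive solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-conditioned test problem: A with rapidly decaying
# singular values, and noisy data b = A @ u_true + eta.
n = 32
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.linspace(0, 8, n)            # singular values 1 .. 1e-8
A = U @ np.diag(s) @ V.T
u_true = V[:, 0] + 0.5 * V[:, 1]             # lies in the first SVD components
b = A @ u_true + 1e-6 * rng.standard_normal(n)

def tsvd_solve(A, b, k):
    """Truncated-SVD solution: keep only the k largest SVD components."""
    W, sigma, Vt = np.linalg.svd(A)
    coef = (W.T[:k] @ b) / sigma[:k]         # w_i^T b / sigma_i, i = 1..k
    return Vt[:k].T @ coef

u_naive = np.linalg.solve(A, b)              # "naive" solution A^{-1} b
u_tsvd = tsvd_solve(A, b, k=4)

err_naive = np.linalg.norm(u_naive - u_true)
err_tsvd = np.linalg.norm(u_tsvd - u_true)
```

Because u_true is spanned by the leading singular vectors, k = 4 retains the signal while discarding the components where the noise is divided by tiny σ_i.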
Landweber’s Method
With u 0 = 0, the kth Landweber’s iterate is
u k = u k−1 + ωA> (b − Ax k−1 )
=
k−1
X
(I − ωA> A)j ωA> b
j=0
=
n
X
ϕki
i=1
where
ϕki
= 1 − (1 −
Yiqiu Dong (DTU Compute)
ωσi2 )k
≈
wi> b
vi
σi
1,
if σi ω,
kωσi2 , if σi ω.
Variational Methods in CT Reconstruction
17 Jan. 2017
9 / 25
Landweber’s Method
With u 0 = 0, the kth Landweber’s iterate is
u k = u k−1 + ωA> (b − Ax k−1 )
=
k−1
X
(I − ωA> A)j ωA> b
j=0
=
n
X
ϕki
i=1
where
ϕki
= 1 − (1 −
ωσi2 )k
≈
wi> b
vi
σi
1,
if σi ω,
kωσi2 , if σi ω.
More SVD components are included, as we perform more iterations
Regularization paramter: k, i.e. the number of iterations
Yiqiu Dong (DTU Compute)
Variational Methods in CT Reconstruction
17 Jan. 2017
9 / 25
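A minimal NumPy sketch of Landweber's method on an illustrative test problem (the matrix and parameters below are assumptions, not from the lecture). It also verifies the filter-factor identity above: the iterate equals the SVD expansion with φᵢᵏ = 1 − (1 − ω σᵢ²)ᵏ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative ill-conditioned test problem.
n = 32
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.linspace(0, 6, n)             # decaying singular values
A = U @ np.diag(s) @ V.T
b = A @ (V[:, 0] - V[:, 2]) + 1e-5 * rng.standard_normal(n)

def landweber(A, b, omega, k):
    """k steps of u^j = u^{j-1} + omega * A^T (b - A u^{j-1}), with u^0 = 0."""
    u = np.zeros(A.shape[1])
    for _ in range(k):
        u = u + omega * A.T @ (b - A @ u)
    return u

k, omega = 50, 1.0                            # need 0 < omega < 2 / sigma_1^2 = 2
u_iter = landweber(A, b, omega, k)

# Same solution via the filter factors phi_i^k = 1 - (1 - omega*sigma_i^2)^k.
W, sigma, Vt = np.linalg.svd(A)
phi = 1.0 - (1.0 - omega * sigma**2) ** k
u_filter = Vt.T @ (phi * (W.T @ b) / sigma)
```

The filter factors are near 1 for the large singular values and tiny for the small ones, which is exactly the early-stopping regularization effect.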
Tikhonov Regularization
Idea: if we control the norm of the solution, then we should be able to suppress most of the large noise components.
The Tikhonov solution u_γ is defined as the solution to

    min_u ‖A u − b‖₂² + γ ‖u‖₂²

Regularization parameter: γ
- γ large: strong regularization, over-smoothing.
- γ small: good data fit, but little noise suppression.
The Solution of Tikhonov Regularization
Reformulate as a linear least-squares problem

    min_u ‖ [A; √γ I] u − [b; 0] ‖₂²

The solution is

    u = (A^T A + γ I)⁻¹ A^T b
      = V (Σ^T Σ + γ Iₙ)⁻¹ Σ^T W^T b
      = Σ_{i=1}^{l} ( σ_i (w_i^T b) / (σ_i² + γ) ) v_i
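The three equivalent forms of the Tikhonov solution can be checked numerically. This sketch uses a random, well-posed toy problem (an assumption for illustration) and computes the solution via the stacked least-squares system, the normal equations, and the SVD formula.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative toy problem: all three formulas should agree.
m, n = 40, 30
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
gamma = 0.5

# (1) Stacked least squares: min || [A; sqrt(gamma) I] u - [b; 0] ||_2
A_aug = np.vstack([A, np.sqrt(gamma) * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
u_lsq = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# (2) Normal equations: u = (A^T A + gamma I)^{-1} A^T b
u_ne = np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ b)

# (3) SVD form: u = sum_i sigma_i (w_i^T b) / (sigma_i^2 + gamma) v_i
W, sigma, Vt = np.linalg.svd(A, full_matrices=False)
u_svd = Vt.T @ (sigma * (W.T @ b) / (sigma**2 + gamma))
```

In practice the stacked form is preferred for large sparse A, since it avoids forming A^T A explicitly.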
Compare with TSVD
The solution of TSVD is

    u = Σ_{i=1}^{k} (w_i^T b / σ_i) v_i = Σ_{i=1}^{l} φᵢᵀ (w_i^T b / σ_i) v_i

with φᵢᵀ = 1 for 1 ≤ i ≤ k, and φᵢᵀ = 0 for k < i ≤ l.
The solution of Tikhonov regularization is

    u = Σ_{i=1}^{l} ( σ_i (w_i^T b) / (σ_i² + γ) ) v_i = Σ_{i=1}^{l} φᵢᵀⁱᵏ (w_i^T b / σ_i) v_i

with φᵢᵀⁱᵏ = σ_i² / (σ_i² + γ).
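Both methods are thus filtered SVD expansions; they differ only in the filter factors. A small sketch comparing the two filter families (the singular values, γ, and k below are illustrative):

```python
import numpy as np

sigma = 10.0 ** -np.linspace(0, 4, 9)       # sample singular values 1 .. 1e-4
gamma = 1e-4                                 # illustrative Tikhonov parameter
k = 5                                        # illustrative TSVD truncation index

phi_tsvd = np.where(np.arange(1, 10) <= k, 1.0, 0.0)   # sharp cut-off at i = k
phi_tik = sigma**2 / (sigma**2 + gamma)                # smooth roll-off
```

TSVD cuts the spectrum sharply at index k, while the Tikhonov filter rolls off smoothly: φᵢᵀⁱᵏ ≈ 1 for σᵢ² ≫ γ and ≈ σᵢ²/γ for σᵢ² ≪ γ.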
Generalized Tikhonov Regularization

    min_u ‖A u − b‖₂² + (γ/2) ‖u − u*‖₂²

Prior information: u is close to a known u*.

    min_u ‖A u − b‖₂² + (γ/2) ‖L u‖₂²

Prior information: u is known to be smooth.

    min_u ‖A u − b‖₂² + (γ/2) ‖L (u − u*)‖₂²

Prior information: u has a similar smoothness structure to u*.
Generalized Tikhonov Regularization
A commonly used variational model based on generalized Tikhonov regularization is

    min_u ‖A u − b‖₂² + (γ/2) ‖∇u‖₂²,

where ∇ = [∇_{x1}; ∇_{x2}] ∈ R^{2n²×n²}.
Prior information: ∇u follows a Gaussian distribution (roughly speaking, u is smooth).
First-order condition: the minimizer ũ should satisfy

    0 = A^T (A u − b) − γ Δu,

where Δ ∈ R^{n²×n²} is the Laplacian operator.
Discrete Operators
Recall u ∈ R^{n²} with n-by-n pixels.
Discrete gradient operator: ∇ ∈ R^{2n²×n²} is defined by

    ∇ = [∇_{x1}; ∇_{x2}]   and   [∇u]ᵢ = ( (∇_{x1} u)ᵢ, (∇_{x2} u)ᵢ ) ∈ R²,

where ∇_{x1}, ∇_{x2} ∈ R^{n²×n²} use forward differences, i.e.,

    (∇_{x1} u)ᵢ = u_{s+1,t} − u_{s,t},  if s < n,    and 0, if s = n,
    (∇_{x2} u)ᵢ = u_{s,t+1} − u_{s,t},  if t < n,    and 0, if t = n,

and i = (s − 1) n + t, s, t ∈ {1, 2, ..., n}.
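The forward-difference gradient above can be written in a few lines of NumPy, working directly on the n-by-n image rather than on the stacked vector (a minimal sketch; the tiny 3-by-3 test image is illustrative):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient of an n-by-n image, with zeros at the
    last row/column, as in the slides' definition."""
    gx = np.zeros_like(u)            # differences along the first index s
    gy = np.zeros_like(u)            # differences along the second index t
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

u = np.arange(9.0).reshape(3, 3)     # tiny test image: rows 0..2, 3..5, 6..8
gx, gy = grad(u)
```

For this ramp image every interior difference along s is 3 and along t is 1, with zeros on the last row/column respectively.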
Discrete Operators
Discrete divergence operator: div ∈ R^{n²×2n²} is the adjoint operator of −∇, that is, ⟨∇u, v⃗⟩ = −⟨u, div v⃗⟩ with u ∈ R^{n²} and v⃗ = [(v¹)^T, (v²)^T]^T ∈ R^{2n²}. Hence div = −∇^T.
Use backward differences to get the adjoint operators of ∇_{x1} and ∇_{x2}:

    (∇̃_{x1} u)ᵢ = u_{1,t},              if s = 1,
                 = u_{s,t} − u_{s−1,t},  if 1 < s ≤ n − 1,
                 = −u_{n−1,t},           if s = n.
    (∇̃_{x2} u)ᵢ = u_{s,1},              if t = 1,
                 = u_{s,t} − u_{s,t−1},  if 1 < t ≤ n − 1,
                 = −u_{s,n−1},           if t = n.

Then we have div v⃗ = ∇̃_{x1} v¹ + ∇̃_{x2} v².
Discrete Operators
Discrete Laplacian operator: Δ ∈ R^{n²×n²} is defined as

    Δu = div(∇u),

which is a negative semi-definite operator.
Based on the definitions of the discrete gradient and divergence operators, we have

    (Δu)ᵢ = u_{s+1,t} + u_{s−1,t} + u_{s,t+1} + u_{s,t−1} − 4 u_{s,t},

where 1 ≤ s ≤ n and 1 ≤ t ≤ n, with the corresponding boundary conditions.
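A sketch of the divergence and Laplacian operators that checks the two defining properties numerically: the adjoint identity ⟨∇u, v⃗⟩ = −⟨u, div v⃗⟩, the 5-point stencil in the interior, and negative semi-definiteness (the random 8-by-8 test data is illustrative):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient, zeros at the last row/column."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(vx, vy):
    """Backward-difference divergence with the slides' boundary cases,
    built so that <grad u, v> = -<u, div v> holds exactly."""
    n = vx.shape[0]
    dx = np.zeros_like(vx)
    dx[0, :] = vx[0, :]
    dx[1:n-1, :] = vx[1:n-1, :] - vx[0:n-2, :]
    dx[n-1, :] = -vx[n-2, :]
    dy = np.zeros_like(vy)
    dy[:, 0] = vy[:, 0]
    dy[:, 1:n-1] = vy[:, 1:n-1] - vy[:, 0:n-2]
    dy[:, n-1] = -vy[:, n-2]
    return dx + dy

def laplacian(u):
    return div(*grad(u))

rng = np.random.default_rng(3)
u = rng.standard_normal((8, 8))
vx, vy = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))

gx, gy = grad(u)
lhs = np.sum(gx * vx + gy * vy)      # <grad u, v>
rhs = -np.sum(u * div(vx, vy))       # -<u, div v>

L = laplacian(u)
s, t = 3, 4                          # an interior pixel
stencil = u[s+1, t] + u[s-1, t] + u[s, t+1] + u[s, t-1] - 4 * u[s, t]
```

Negative semi-definiteness follows from ⟨u, Δu⟩ = −⟨∇u, ∇u⟩ ≤ 0, which the test below also checks.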
Total Variation
For a one-dimensional function u: the total variation of u : [a, b] → R is defined as

    TV(u) = sup_{p∈P} Σ_{i=0}^{n_p−1} |u(x_{i+1}) − u(x_i)|,

where P = {p = {x₀, ..., x_{n_p}} : p is a partition of [a, b]}. The supremum is taken over all partitions of the interval [a, b].
- If u is differentiable, then TV(u) = ∫_a^b |u′(x)| dx.
- In the discrete case, TV(u) = Σᵢ |uᵢ′|.
- Example: if u(x) = −1 for −1 ≤ x ≤ 0 and u(x) = 1 for 0 < x ≤ 1, then TV(u) = 2.
- The bounded variation (BV) space is defined by u ∈ BV([a, b]) ⇐⇒ TV(u) < +∞.
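The discrete 1-D total variation and the step-function example above take one line of NumPy. The sketch below also adds noise to the step (an illustrative choice) to show why TV is a useful penalty: a clean jump is cheap, noise is expensive.

```python
import numpy as np

def tv1d(u):
    """Discrete total variation of a 1-D signal: sum of |u_{i+1} - u_i|."""
    return np.sum(np.abs(np.diff(u)))

# The step function from the slide, sampled on [-1, 1]: one jump of height 2.
x = np.linspace(-1, 1, 201)
u = np.where(x <= 0, -1.0, 1.0)
tv_step = tv1d(u)

# A noisy version of the same step has a much larger total variation,
# so penalizing TV suppresses noise while tolerating the single jump.
rng = np.random.default_rng(4)
tv_noisy = tv1d(u + 0.1 * rng.standard_normal(u.shape))
```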
Total Variation
For a higher-dimensional function u, i.e., u : Ω → R with Ω = [a, b]^N:
- If u is differentiable, then TV(u) = ∫_Ω |∇u|₂ dx.
- In the discrete case, TV(u) = Σᵢ |[∇u]ᵢ|₂.
- General definition:

    TV(u) = sup { ∫_Ω u div v⃗ dx : v⃗ ∈ C₀¹(Ω, R^N), |v⃗(x)|₂ ≤ 1 for all x ∈ Ω },

  where v⃗ = (v₁, ..., v_N)^T, div v⃗ = Σ_{i=1}^{N} ∂vᵢ/∂xᵢ, and C₀¹(Ω, R^N) is the space of continuously differentiable functions with compact support in Ω.
- Example: let Ω = R² and u = χ_{aB²} be the indicator function of the disk centered at the origin with radius a > 0; then TV(u) = 2πa.
- BV space: BV(Ω) = {u ∈ L¹(Ω) : TV(u) < +∞}. Endowed with the norm ‖u‖_{BV(Ω)} = ‖u‖_{L¹(Ω)} + TV(u), it is a Banach space.
- W^{1,1}(Ω) ⊂ BV(Ω) ⊂ L¹(Ω)
Total Variation Regularization

    min_u (1/2) ‖b − A u‖₂² + γ Σᵢ |[∇u]ᵢ|₂

- The total variation regularization term is convex.
- Under certain conditions, it is equivalent to

    min_u Σᵢ |[∇u]ᵢ|₂   s.t.   ‖b − A u‖₂² ≤ σ² m n.
First-Order Condition
Variational model for Gaussian noise removal:

    min_u (1/2) ‖b − A u‖₂² + γ Σᵢ |[∇u]ᵢ|₂

The minimizer ũ should satisfy the first-order condition

    0 = [A^T (A u − b)]ᵢ − γ · div( [∇u]ᵢ / |[∇u]ᵢ|₂ ),

where the divergence operator div = −∇^T satisfies ⟨∇u, v⃗⟩ = −⟨u, div v⃗⟩.
First-Order Condition

    0 = [A^T (A u − b)]ᵢ − γ · div( [∇u]ᵢ / |[∇u]ᵢ|₂ )

- In a homogeneous region, where |[∇u]ᵢ|₂ = 0, the factor 1/|[∇u]ᵢ|₂ is unbounded. In order to satisfy the equation, this region is kept homogeneous.
- In the noise case, where |[∇u]ᵢ|₂ > 0 but small, 1/|[∇u]ᵢ|₂ is large, and the region is smoothed.
- Near edges, |[∇u]ᵢ|₂ is large, so 1/|[∇u]ᵢ|₂ is near 0, the coefficient vanishes, and edges are kept.
Since |[∇u]ᵢ|₂ can be 0, a lot of methods use a global smooth approximation, |[∇u]ᵢ|_ε = √(|[∇u]ᵢ|₂² + ε). The smaller ε is, the more ill-posed the problem becomes, but the closer it is to the original problem.
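The smoothed-TV model can be minimized by plain gradient descent. The sketch below is a minimal denoising example with A = I (an assumption for simplicity; in CT, A would be the system matrix), using the ε-smoothed TV term and the grad/div operators from the earlier slides; image size, γ, ε, step size, and iteration count are illustrative.

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(vx, vy):
    n = vx.shape[0]
    dx = np.zeros_like(vx); dy = np.zeros_like(vy)
    dx[0, :] = vx[0, :]; dx[1:n-1, :] = vx[1:n-1, :] - vx[0:n-2, :]
    dx[n-1, :] = -vx[n-2, :]
    dy[:, 0] = vy[:, 0]; dy[:, 1:n-1] = vy[:, 1:n-1] - vy[:, 0:n-2]
    dy[:, n-1] = -vy[:, n-2]
    return dx + dy

def tv_denoise(b, gamma, eps=1e-2, tau=0.1, iters=300):
    """Gradient descent on 0.5||u - b||^2 + gamma * sum_i sqrt(|[grad u]_i|^2 + eps),
    i.e. the eps-smoothed TV model with A = I (pure denoising)."""
    u = b.copy()
    for _ in range(iters):
        gx, gy = grad(u)
        norm = np.sqrt(gx**2 + gy**2 + eps)
        # gradient of the objective: (u - b) - gamma * div(grad u / |grad u|_eps)
        u = u - tau * ((u - b) - gamma * div(gx / norm, gy / norm))
    return u

rng = np.random.default_rng(5)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0     # piecewise-constant image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy, gamma=0.15)

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

The flat regions are smoothed while the square's edges survive, which is exactly the behavior the case analysis above predicts.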
Sparse Regularization
We consider an orthogonal basis {ψ_s}, e.g., a wavelet basis; then the noisy coefficients satisfy

    ⟨b, ψ_s⟩ = ⟨ū, ψ_s⟩ + ⟨η, ψ_s⟩,

where ⟨η, ψ_s⟩ is still white Gaussian noise with mean 0 and variance σ².
If the basis {ψ_s} represents ū efficiently, then most of the coefficients ⟨ū, ψ_s⟩ are close to zero. But noise leads to a large set of small-valued coefficients, so we can use sparse regularization in order to obtain sparse coefficients.
Sparse Regularization
We use the wavelet basis as an example, and define W ∈ R^{M×n²} as the wavelet transform and W⁻¹ ∈ R^{n²×M} as the inverse wavelet transform. Then, we can compute the wavelet coefficients by w = W u ∈ R^M. In order to obtain sparsity in w, we can solve

    min_{u ∈ R^{n²}} (1/2) ‖A u − b‖₂² + γ |W u|₀

or

    min_{w ∈ R^M} (1/2) ‖A W⁻¹ w − b‖₂² + γ |w|₀

followed by u = W⁻¹ w.
- ℓ⁰-"norm": |w|₀ = #{i = 1, ..., M : wᵢ ≠ 0}.
- It is an NP-hard problem: it is as difficult as trying all possible subsets of variables.
Convex Relaxation
A natural approximation of ℓ⁰ regularization would be

    min_{w ∈ R^M} (1/2) ‖A W⁻¹ w − b‖₂² + α ‖w‖₁

followed by u = W⁻¹ w.
- It is solvable, but still nonlinear.
- It is convex, but not strictly convex.
Why not use Tikhonov regularization?
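One answer to the closing question: the quadratic Tikhonov penalty shrinks every coefficient by the same factor and never produces exact zeros, while the ℓ¹ penalty does. In the special case A = W = I (an assumption for illustration) the ℓ¹ problem decouples coefficient-wise and is solved exactly by soft thresholding; the coefficient values below are illustrative.

```python
import numpy as np

def soft_threshold(w, alpha):
    """Exact minimizer of 0.5*(x - w_i)^2 + alpha*|x| for each coefficient."""
    return np.sign(w) * np.maximum(np.abs(w) - alpha, 0.0)

# With A = W = I, min_w 0.5||w - b||_2^2 + alpha*||w||_1 decouples per coefficient.
# Two large (signal) coefficients and three small (noise-level) ones:
b = np.array([3.0, -2.0, 0.05, -0.08, 0.02])
alpha = 0.1

w_l1 = soft_threshold(b, alpha)    # small coefficients become exactly zero
w_tik = b / (1.0 + alpha)          # Tikhonov (penalty 0.5*alpha*||w||_2^2):
                                   # uniform shrinkage, no exact zeros
```

Soft thresholding kills the noise-level coefficients and keeps the large ones (slightly shrunk), which is the sparsity the ℓ⁰ model was after; Tikhonov merely scales everything down.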