Announcement

The DFG-SNF Research Group FOR916
Statistical Regularization and Qualitative Constraints
presents a Lecture Series on
Sparse Regularisation for Inverse Problems
Dr. Markus Grasmair
Computational Science Center
University of Vienna
February 20–22, 2012
at the Institute for Mathematical Stochastics, University of Göttingen.
Abstract. We consider the stable solution of an ill-posed operator equation $F(x) = y^\delta$ with noisy data $y^\delta$ under the assumption that the true solution $x^\dagger$ in the noise free case has certain sparsity properties, for instance that it has a finite expansion with respect to a given basis $(\phi_\lambda)$ or at least rapidly decaying coefficients in this basis. More precisely, we study Tikhonov regularisation, which defines a regularised solution as a minimiser of the Tikhonov functional
$$T(x; \alpha, y^\delta) := \|F(x) - y^\delta\|^2 + \alpha R(x),$$
where the regularisation term $R$ should encode the knowledge about the sparsity of the solution. Of particular interest are terms of the form
$$R(x) = \sum_{\lambda} |\langle x, \phi_\lambda \rangle|^p$$
with $0 < p \le 1$.
During this lecture series, we will study the properties of such regularisation methods. In particular, we will focus on the question of convergence rates, that is, asymptotic estimates for the accuracy of the regularised solutions in terms of the noise level $\delta$ and the regularisation parameter $\alpha$.
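To make the setting concrete, here is a minimal numerical sketch (not part of the lecture material) for the case $p = 1$ with a linear operator, where the functional can be minimised by iterative soft thresholding (ISTA). The matrix A, the synthetic data, and all parameter choices below are illustrative assumptions only.

# Minimal sketch, assuming a finite-dimensional linear operator A in place of F.
# Minimises T(x) = ||A x - y_delta||^2 + alpha * ||x||_1 by proximal gradient steps.
import numpy as np

def soft_threshold(z, t):
    # Proximal map of t * ||.||_1, applied componentwise.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y_delta, alpha, n_iter=500):
    L = 2.0 * np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - y_delta)     # gradient of the fidelity term
        x = soft_threshold(x - grad / L, alpha / L)
    return x

# Illustrative usage with a sparse true solution and noisy data:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0                                 # sparse x^dagger
delta = 0.01
y_delta = A @ x_true + delta * rng.standard_normal(50)
x_alpha = ista(A, y_delta, alpha=4.0 * delta)    # a priori choice alpha ~ delta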
Lecture 1: Well-posedness of Sparse Regularisation Methods
(Date: February 20, 2012. Time: 10:30–11:30. Venue: IMS seminar room 5.501)
The first lecture will be concerned with basics of regularisation theory, though already with a
focus on the application to sparsity-promoting methods. We will introduce and briefly discuss the basic properties required of a (well-posed) variational regularisation method. The major question is the existence of a minimiser of the Tikhonov functional, which can be treated using the direct method in the calculus of variations. We recall the conditions the regularisation term has to satisfy and discuss the consequences for sparsity-promoting regularisation methods.
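Schematically, and only as a sketch of the standard theory (the precise assumptions are the subject of the lecture), the existence argument takes the following form:

% Schematic existence statement via the direct method; assumptions paraphrased.
\begin{theorem}[Existence of minimisers, schematic]
  Let $X$ be a reflexive Banach space, let $F \colon X \to Y$ be weakly
  sequentially closed, and let $R \colon X \to [0, \infty]$ be proper,
  coercive, and weakly sequentially lower semicontinuous. Then for every
  $\alpha > 0$ and $y^\delta \in Y$ the functional
  $T(x; \alpha, y^\delta) = \|F(x) - y^\delta\|^2 + \alpha R(x)$
  attains a minimiser.
\end{theorem}
% Proof idea: coercivity bounds any minimising sequence, reflexivity yields a
% weakly convergent subsequence, and lower semicontinuity passes the infimum
% to the weak limit.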
Lecture 2: Regularisation with $\ell^1$ Penalty Terms
(Date: February 21, 2012. Time: 10:30–11:30. Venue: IMS seminar room 5.501)
In the second lecture, we will shift the focus to $\ell^1$-like regularisation methods on sequence spaces. Here it can be shown that the accuracy of the regularised solution is of the same order as the noise level, that is, the inaccuracy of the data, provided that the regularisation parameter is chosen in an appropriate manner and the operator and the true solution satisfy certain properties. These properties are related to the restricted isometry property, which is a basic assumption in the theory of compressed sensing but has only limited applicability to inverse problems. Finally, we will show how the results on convergence rates can be extended to more general positively homogeneous regularisation terms, for instance discrete total variation, but also to terms aimed at the treatment of group and joint sparsity.
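As a toy numerical illustration of such a linear rate (with a setup that merely mimics the required assumptions), one can solve the $\ell^1$-regularised problem for decreasing noise levels with the a priori choice $\alpha \sim \delta$ and observe the reconstruction error. The sketch below reuses the ista routine from the first sketch; the operator and all constants are illustrative.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 30))               # well-conditioned toy operator
x_true = np.zeros(30)
x_true[:3] = 1.0                                 # sparse true solution

for delta in (1e-1, 1e-2, 1e-3):
    y_delta = A @ x_true + delta * rng.standard_normal(100)
    x_alpha = ista(A, y_delta, alpha=4.0 * delta, n_iter=2000)   # ista as defined above
    print(delta, np.linalg.norm(x_alpha - x_true))
# If a linear convergence rate holds, the printed errors shrink roughly in
# proportion to delta.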
Lecture 3: Non-convex Regularisation Methods
(Date: February 22, 2012. Time: 10:30–11:30. Venue: IMS seminar room 5.501)
In order to strengthen the sparsity-promoting properties of $\ell^1$ regularisation, one is tempted to replace the positively homogeneous regularisation term by a term with sublinear growth at zero, for instance an $\ell^p$ term with $0 < p < 1$. This necessarily results in non-convex and non-differentiable regularisation terms. We will show in this lecture that we nevertheless obtain a well-posed regularisation method, provided that the regularisation term is coercive. In addition, we will again derive linear convergence rates under the assumption of a unique true solution (which is a non-trivial condition because of the non-convexity of the regularisation term).
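To see, purely as an illustration, why sublinear growth at zero promotes sparsity more strongly, one can inspect the scalar subproblem $\min_t (t - z)^2 + \alpha |t|^p$, which underlies componentwise minimisation for diagonal operators. The grid-search sketch below (all parameters hypothetical) exhibits the jump to zero that appears for $p < 1$:

import numpy as np

def scalar_prox(z, alpha, p):
    # Minimise (t - z)^2 + alpha * |t|^p over a fine grid of t values.
    grid = np.linspace(-3.0, 3.0, 60001)
    values = (grid - z) ** 2 + alpha * np.abs(grid) ** p
    return grid[np.argmin(values)]

for z in (0.2, 0.6, 1.0, 1.4):
    print(z, scalar_prox(z, alpha=1.0, p=0.5), scalar_prox(z, alpha=1.0, p=1.0))
# For p = 1 every nonzero output is shrunk by the constant alpha / 2 (soft
# thresholding); for p = 0.5 small inputs are mapped exactly to 0, while
# larger inputs are shrunk far less, which promotes sparsity more strongly.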