Charles University in Prague
Faculty of Mathematics and Physics
MASTER THESIS
Martin Franců
Approximation of a non-increasing
rearrangement of a function
Department of Mathematical Analysis
Supervisor of the master thesis: prof. RNDr. Luboš Pick, CSc., DSc.
Study programme: Mathematics, post-Bachelor
Specialization: Mathematical Analysis
Prague 2012
Without the valuable advice of my supervisor, Professor Luboš Pick, and the suggestions of Professor Ron Kerman, this thesis could not have come into existence; they have my sincere thanks.
I declare that I carried out this master thesis independently, and only with the cited sources, literature and other professional sources. I understand that my work relates to the rights and obligations under the Act No. 121/2000 Coll., the Copyright Act, as amended, in particular the fact that Charles University in Prague has the right to conclude a license agreement on the use of this work as a school work pursuant to Section 60 paragraph 1 of the Copyright Act.
In Prague, date ............
signature of the author
Title: Approximation of a non-increasing rearrangement of a function
Author: Martin Franců
Department: Department of Mathematical Analysis
Supervisor: prof. RNDr. Luboš Pick, CSc., DSc.
Abstract: The non-increasing rearrangement of a measurable real function defined on a measure space is of great significance in disciplines such as the theory of function spaces or interpolation theory (between function spaces) and their applications in partial differential equations. Although the non-increasing rearrangement has good and widely applicable mapping properties, it is unfortunately almost impossible to compute the non-increasing rearrangement of a concrete function exactly. For this reason numerical algorithms for its approximation are desirable. In this thesis we study such a method, built on interpolation by linear splines. In the first half of the thesis the method is described, while the error estimates are the subject of the second part.
Keywords: non-increasing rearrangement, approximation, finite element method
Title: Approximation of a non-increasing rearrangement of a function
Author: Martin Franců
Department: Department of Mathematical Analysis
Supervisor: prof. RNDr. Luboš Pick, CSc., DSc.
Abstract: The non-increasing rearrangement of a measurable real function defined on an appropriate measure space is of enormous significance in disciplines such as the theory of function spaces or interpolation theory and their applications in PDEs. Unfortunately, while it has good and widely applicable mapping properties, it is virtually impossible to calculate the non-increasing rearrangement of a concrete given function precisely. Numerical algorithms for its approximation are therefore desirable. Such a method of approximation, based on interpolation by a linear spline, is presented in this thesis. In the first half of this thesis the developed method is described, while the error estimates for the method are the subject of the second part.
Keywords: non-increasing rearrangement, approximation, finite element method
Contents

1 Preliminaries
1.1 Elementary definitions
1.2 Non-increasing rearrangement
1.3 Rearrangement invariant spaces
1.4 The finite element method
1.5 Classical estimates on interpolation error

2 Description of the algorithm
2.1 The case n = 1
2.2 The case n = 2
2.3 The case n = 3
2.4 Approximation by splines of higher order in one dimension
2.5 Older algorithm

3 Error estimates
3.1 Error estimates based on the interpolation error
3.2 Supporting theorems
3.3 Error estimates

Conclusion

A Implementation notes
Introduction
Given two finite sequences of nonnegative numbers, {a_j}_{j=1}^n and {b_j}_{j=1}^n, it is quite natural to say that one is a rearrangement of the other. The rearrangement in this case is realized via a permutation. Of particular significance and interest is the rearrangement of the finite sequence {a_j}_{j=1}^n in non-increasing order. It turns out that, if appropriate care is exercised, one can do similar things with measurable functions, even though, of course, no simple technique such as a permutation is available here. Two measurable functions (not necessarily defined on the same measure space) are said to be equimeasurable if they have the same distribution function (that is, the same measure of all level sets). Again, the function which is equimeasurable with a given one and which is defined and non-increasing on an interval is of particular significance and interest. Such a function, denoted f*, is called the non-increasing rearrangement of the given function f.
The non-increasing rearrangement of a measurable real function defined
on an appropriate measure space was defined and first studied as early as in
the 1880s by Steiner (see [11]). Then it was almost forgotten for half a century. It resurfaced again thanks to the efforts of Hardy, Littlewood and Pólya in the 1930s and of Luxemburg, Lorentz and others in the 1950s. It was the work
of Hardy and Littlewood that first proved the enormous significance of the
non-increasing rearrangement, and the rapid development of then new disciplines such as interpolation theory, function spaces, Sobolev-type inequalities
and their applications in PDEs and mathematical physics only confirmed its
absolute indispensability when fine and sharp description of properties of operators on function spaces was required. It also found important applications
in symmetrization and isoperimetric inequalities.
The non-increasing rearrangement of a function is defined as a certain generalized inverse of the distribution function. The mapping f ↦ f* is a rather crude operation that reduces phenomena occurring on a general measure space to one-dimensional ones. However, while it has good and widely applicable mapping properties that lead to deep theorems concerning its action on function spaces, it is virtually impossible to calculate the non-increasing rearrangement of a concrete given function precisely. For this reason, it is quite desirable to have at least some numerical algorithms for approximating f* in order to remedy the lack of a precise formula.
Our main objective in this thesis is to develop such algorithms.
The idea of our method is to approximate the non-increasing rearrangement of a given function by the rearrangement of its linear spline interpolant. The basis for our work is the article [8], in which our algorithm was introduced and in which most of the results originated.
The first chapter is of a preliminary nature. We introduce all the necessary
background material on both rearrangement-invariant norms and spaces and
the finite element method. Moreover, we specify the interpolation we use;
that will take place in Section 1.4, where our method is considered within
the context of general finite element methods. We also present some known
results on the error of interpolation.
An outline of our algorithm and the algorithm for computing the non-increasing rearrangement of piecewise linear functions are described in Chapter 2. In the first three sections we focus on domains of dimension 1, 2 and 3, studying their intrinsic properties separately. In the one-dimensional case splines of higher order can be used; this approach is taken up in Section 2.4. The older algorithm is described in Section 2.5.
The last chapter contains error estimates for the described method of approximation. This chapter is divided into three sections. Supporting theorems are stated in Section 3.2, while the actual estimates are proved in Section 3.3. Yet another way of obtaining error estimates, based on theorems on the interpolation error from the theory of finite elements, is described in Section 3.1.
1. Preliminaries
1.1 Elementary definitions
Before we move on to a more interesting matter, let us recall some elementary
definitions that will be used later.
Definition 1.1. Let (R, µ) be a measure space. The set of all real-valued measurable functions on R will be denoted by M. The set of all measurable functions R → R⁺ will be denoted by M⁺.
Definition 1.2 (Multi-index). Let n ∈ N; then a finite sequence α = (α_1, α_2, ..., α_n) ∈ (N_0)^n is called a multi-index. The length of α is given by
$$|\alpha| := \sum_{i=1}^{n} \alpha_i.$$
Given an x = (x_1, x_2, ..., x_n) ∈ R^n, we define
$$x^\alpha := x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}.$$
We use the notation
$$D^\alpha f = \left(\frac{\partial}{\partial x_1}\right)^{\alpha_1} \left(\frac{\partial}{\partial x_2}\right)^{\alpha_2} \cdots \left(\frac{\partial}{\partial x_n}\right)^{\alpha_n} f,$$
for f sufficiently smooth. If k ∈ N, we define
$$D^k f = \sum_{|\alpha| = k} D^\alpha f.$$
Notation 1.3 (Sobolev pseudonorm). Let f ∈ L_{1,loc}(Ω), k ∈ N, p ∈ [1, ∞]. Suppose that the weak derivatives D^α f exist for all |α| = k. Define the Sobolev pseudonorm
$$|f|_{k,p,\Omega} = \begin{cases} \left( \int_\Omega \sum_{|\alpha|=k} |D^\alpha f(x)|^p \, dx \right)^{\frac1p}, & p < \infty,\\[4pt] \max_{|\alpha|=k}\ \operatorname*{ess\,sup}_{x\in\Omega} |D^\alpha f|, & p = \infty, \end{cases}$$
if the right-hand side of the equation exists.
Definition 1.4. The space of all polynomials of degree at most k on a set E ⊂ R^n will be denoted by P_k(E); in other words,
$$\mathcal{P}_k(E) = \left\{ \sum_{|\alpha|\le k} c_\alpha x^\alpha : c_\alpha \in \mathbb{R} \right\}.$$
Notation 1.5. The measure of the unit ball in R^n will be denoted by γ_n. Then
$$\gamma_n = \frac{\pi^{\frac n2}}{\Gamma\!\left(\frac n2 + 1\right)},$$
where Γ is the Gamma function.
Definition 1.6 (Domain). An open connected subset of Rn will be called a
domain.
Definition 1.7 (Simplex). Let a_1, a_2, ..., a_{n+1} ∈ R^n, a_j = {a_{ij}}_{i=1}^{n}, be points in R^n such that the matrix
$$\begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} & a_{1,n+1} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} & a_{2,n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} & a_{n,n+1} \\ 1 & 1 & \cdots & 1 & 1 \end{pmatrix}$$
is regular. Then the convex hull of the points a_1, ..., a_{n+1},
$$T = \left\{ x \in \mathbb{R}^n : x = \sum_{j=1}^{n+1} \lambda_j a_j,\ \lambda_j \in [0,1],\ \sum_{j=1}^{n+1} \lambda_j = 1 \right\},$$
is called a simplex in R^n. The points a_1, a_2, ..., a_{n+1} are called the vertices of the simplex T. We say that the set
$$F_I = \left\{ x \in T : x = \sum_{j=1}^{n+1} \lambda_j a_j,\ \lambda_j \in [0,1],\ \sum_{j=1}^{n+1} \lambda_j = 1,\ \lambda_i = 0 \text{ for } i \in I \right\},$$
where I is a subset of {1, ..., n+1} of cardinality n − m, is an m-facet of the simplex T.
Remarks 1.8. Although this notion is well known, let us recall a couple of facts about simplices.
• Simplices in the one-dimensional space R are closed intervals. Simplices
in R2 are called triangles and a simplex in a three dimensional space R3
is called a tetrahedron.
• 0-facets are vertices, 1-facets are called edges and (n − 1)-facets of a
simplex in Rn are often called faces of simplex.
• Let T be a simplex in R^n with vertices a_1, ..., a_{n+1}. If we denote by A ∈ R^{n×n} the matrix whose columns are the vectors a_i − a_{n+1}, i = 1, ..., n, then
$$|T| = \frac{1}{n!}\, |\det(A)|,$$
as illustrated in the short sketch below.
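The determinant formula translates directly into a few lines of code; the following sketch (function names are mine, not taken from the thesis implementation) computes the volume of an n-simplex from its vertices.

```python
# Sketch: |T| = |det(A)| / n!, where the columns of A are a_i - a_{n+1}.
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """vertices: array of shape (n+1, n) holding a_1, ..., a_{n+1}."""
    vertices = np.asarray(vertices, dtype=float)
    n = vertices.shape[1]
    A = (vertices[:-1] - vertices[-1]).T   # columns a_i - a_{n+1}
    return abs(np.linalg.det(A)) / factorial(n)

# Example: the unit triangle in R^2 has area 1/2.
print(simplex_volume([[0, 0], [1, 0], [0, 1]]))  # 0.5
```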
1.2 Non-increasing rearrangement
We start by focusing on the notion of the non-increasing rearrangement of a function. To illustrate the idea of this notion let us consider a step function f : R⁺ → R defined as follows:
$$f(x) = \sum_{i\in\{1,2,\ldots,k\}} a_i\, \chi_{[i-1,i)}(x),$$
where a_i ∈ R⁺ and a_i ≠ a_j for i, j ∈ {1, 2, ..., k}, j ≠ i. The non-increasing rearrangement, f*, is then a very similar step function, for which it holds that x > y implies f*(x) ≤ f*(y). In principle, we just sort the values of f in non-increasing order. If π : {1, ..., k} → {1, ..., k} is a permutation such that l > s implies a_{π(l)} < a_{π(s)}, then f* can be written in the form
$$f^*(x) = \sum_{i\in\{1,\ldots,k\}} a_{\pi(i)}\, \chi_{[i-1,i)}(x).$$
However, this approach fails in more complex cases or in higher dimensions. Therefore the non-increasing rearrangement is defined through the distribution function, in the way shown in the following definition.
Definition 1.9 (distribution function, non-increasing rearrangement). Let E be a measurable subset of R^n and suppose f : E → R is a measurable function. Then the distribution function, µ_f, of f is given by
$$\mu_f(\lambda) := |\{ x \in E : |f(x)| > \lambda \}|, \quad \lambda \ge 0.$$
The non-increasing rearrangement, f*, of f is defined on the interval (0, |E|) by
$$f^*(t) := \inf\{ \lambda \ge 0 : \mu_f(\lambda) \le t \}.$$
In particular, µ_{f*} = µ_f.
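To make the definition concrete, the following sketch (an illustration of Definition 1.9, not the algorithm developed later in this thesis) evaluates µ_f and f* for a function represented by finitely many samples, each carrying the same measure `cell`.

```python
# Illustrative sketch: Definition 1.9 for a piecewise constant f on cells of equal measure.
import numpy as np

def distribution_function(values, cell, lam):
    """mu_f(lam) = measure of {|f| > lam}."""
    return cell * np.count_nonzero(np.abs(values) > lam)

def rearrangement(values, cell, t):
    """f*(t) = inf{lam >= 0 : mu_f(lam) <= t}; the infimum is attained at a sample value."""
    decreasing = np.sort(np.abs(values))[::-1]      # equimeasurable, non-increasing
    k = int(np.floor(t / cell))                     # index of the cell containing t
    return decreasing[k] if k < decreasing.size else 0.0

vals = np.array([1.0, 3.0, 2.0, 0.5])
print(distribution_function(vals, 1.0, 1.5))   # 2.0
print(rearrangement(vals, 1.0, 1.5))           # 2.0
```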
Some basic properties of the non-increasing rearrangement are summarized
in the following proposition.
Proposition 1.10. Suppose f, g, and f_n, n ∈ N, belong to M(R, µ) and let a ∈ R. The non-increasing rearrangement f* is a nonnegative, non-increasing, right-continuous function on [0, ∞). Furthermore,
• |g| ≤ |f| µ-a. e. implies that g* ≤ f*;
• (af)* = |a| f*;
• (f + g)*(t_1 + t_2) ≤ f*(t_1) + g*(t_2), (t_1, t_2 ≥ 0);
• |f| ≤ liminf_{n→∞} |f_n| µ-a. e. implies that f* ≤ liminf_{n→∞} f_n*; in particular, |f_n| ↑ |f| µ-a. e. implies that f_n* ↑ f*;
• f*(µ_f(λ)) ≤ λ, (µ_f(λ) < ∞);
• µ_f(f*(t)) ≤ t, (f*(t) < ∞);
• f and f* are equimeasurable;
• (|f|^p)* = (f*)^p, (0 < p < ∞).
Proof. The proof can be found in [2, Chapter 2, Proposition 1.7].
One of the basic tools to work with non-increasing rearrangements is the
following theorem.
Theorem 1.11. Let Ω ⊂ R^n and f, g ∈ M(Ω). Then
$$\int_\Omega |f(x)\, g(x)|\, dx \le \int_0^{|\Omega|} f^*(t)\, g^*(t)\, dt. \tag{1.1}$$
Proof. This is Theorem 2.2 from [2, Chapter 2].
We will often use that, in some sense, rearrangement is a non-expansive
mapping with respect to Lebesgue norms, which is a consequence of the following result.
Theorem 1.12. Let Φ : [0, ∞) → R be convex, non-negative and non-decreasing, and let u, v ∈ M(R^n). Then
$$\int_0^\infty \Phi\big( |u^*(s) - v^*(s)| \big)\, ds \le \int_{\mathbb{R}^n} \Phi\big(\, \big| |u(x)| - |v(x)| \big| \,\big)\, dx.$$
Proof. This result is the main theorem in [4], the proof can be found there.
Corollary 1.13. Let Ω ⊂ R^n be a domain and let f, s ∈ L^p(Ω), p ∈ [1, ∞]. Then it holds that
$$\|f^* - s^*\|_p \le \|f - s\|_p.$$
Proof. The case p = ∞ will be discussed in Remark 1.22 below. In the case p ∈ [1, ∞) we will use Theorem 1.12. First, we note that the function C(t) := t^p, where p ∈ [1, ∞) and t ≥ 0, is non-decreasing, non-negative and convex, hence it can play the role of the function Φ in Theorem 1.12. For f ∈ M(Ω), let us define the zero extension
$$f_0(x) = \begin{cases} f(x), & x \in \Omega,\\ 0, & \text{otherwise}. \end{cases}$$
Now, we use Theorem 1.12 to obtain
$$\|f^* - s^*\|_{p,\Omega}^p = \int_0^\infty |f_0^*(t) - s_0^*(t)|^p\, dt \le \int_{\mathbb{R}^n} \big| |f_0(x)| - |s_0(x)| \big|^p\, dx \le \int_\Omega \big| |f(x)| - |s(x)| \big|^p\, dx \le \int_\Omega |f(x) - s(x)|^p\, dx = \|f - s\|_{p,\Omega}^p.$$
1.3 Rearrangement invariant spaces
The notion of the non-increasing rearrangement allows us to study a new class of spaces, the rearrangement invariant Banach function spaces. It is interesting that this notion captures the common features of the Lebesgue spaces, the Orlicz spaces and the Lorentz spaces.
Definition 1.14 (rearrangement invariant norms). A rearrangement invariant Banach function norm, ρ, on the set M(E) of measurable functions on a set E ⊂ R^n satisfies the following seven axioms:
(A1) ρ(f) = ρ(|f|) ≥ 0, with ρ(f) = 0 if and only if f = 0 a. e. on E;
(A2) ρ(cf) = |c| ρ(f), c ∈ R;
(A3) ρ(f + g) ≤ ρ(f) + ρ(g);
(A4) f_n ↑ f implies ρ(f_n) ↑ ρ(f), where {f_n} is a sequence in M(E);
(A5) ρ(χ_F) < ∞ for all F ⊂ E with |F| < ∞, where χ_F denotes the characteristic function of F;
(A6) $\int_F |f(x)|\, dx \le C_F\, \rho(f)$ for all measurable F ⊂ E with |F| < ∞, where C_F is independent of f;
(A7) ρ(f) = ρ(g) whenever f* = g*,
where f, g ∈ M(E). Usually, rearrangement invariant Banach function norms are called just r. i. norms. The collection of functions from M(E) such that ρ(f) < ∞ is called a rearrangement invariant Banach function space and we denote it by X_ρ(E). For f ∈ X_ρ(E) we define the norm ‖f‖_{X_ρ} = ρ(f).
Luxemburg has shown that corresponding to any r. i. norm ρ on M(E) there is an r. i. norm ρ̄ on M(0, |E|) for which
$$\rho(f) = \bar\rho(f^*), \quad f \in M(E).$$
The classical Lebesgue norm ‖·‖_p is rearrangement invariant, since
$$\|f\|_p := \left( \int_E |f(x)|^p\, dx \right)^{\frac1p} = \left( \int_0^{|E|} (f^*(t))^p\, dt \right)^{\frac1p}, \quad 1 \le p < \infty,$$
and
$$\|f\|_\infty := \operatorname*{ess\,sup}_{x\in E} |f(x)| = f^*(0+).$$
Observe that, even if E is a complicated domain, the second integral can be readily calculated once an approximation to f* is known. The same is true for the more general Orlicz gauge norm ‖·‖_Φ, which we now define.
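As a small illustration of this observation (a sketch under the assumption that an approximation of f* is available on a grid over (0, |E|); the function names are mine), the p-norm can be evaluated by a one-dimensional quadrature of (f*)^p.

```python
# Sketch: ||f||_p from an approximation of f* via a 1-D trapezoidal rule.
import numpy as np

def lebesgue_norm_from_rearrangement(t, f_star, p):
    """||f||_p = ( int_0^{|E|} (f*(t))^p dt )^(1/p); for p = inf return f*(0+)."""
    if np.isinf(p):
        return f_star[0]
    integrand = f_star ** p
    dt = np.diff(t)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dt)
    return integral ** (1.0 / p)

t = np.linspace(0.0, 1.0, 1001)
f_star = 1.0 - t                       # a toy non-increasing rearrangement
print(lebesgue_norm_from_rearrangement(t, f_star, 2))   # ~ (1/3) ** 0.5
```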
Definition 1.15 (Young function, Orlicz gauge norm). Let E ⊂ R^n and suppose Φ is a Young function, that is,
$$\Phi(t) := \int_0^t \varphi(s)\, ds, \quad t > 0,$$
where φ is increasing, φ(0+) = 0 and lim_{t→∞} φ(t) = ∞. Then, if f : E → R is measurable, its Orlicz gauge norm is
$$\|f\|_\Phi := \inf\left\{ \lambda > 0 : \int_E \Phi\!\left( \frac{|f(x)|}{\lambda} \right) dx \le 1 \right\}.$$
One can show that
$$\|f\|_\Phi = \rho_\Phi(f) = \bar\rho_\Phi(f^*) = \inf\left\{ \lambda > 0 : \int_0^{|E|} \Phi\!\left( \frac{f^*(t)}{\lambda} \right) dt \le 1 \right\}.$$
Remark 1.16. We remark that if Φ_p(t) = t^p, 1 ≤ p < ∞, then ‖f‖_{Φ_p} = ‖f‖_p.
A more recent generalization of ‖·‖_p is the Lorentz norm, defined in the following definition.
Definition 1.17 (Lorentz norm). Let E ⊂ R^n and p, q ∈ (0, ∞]; then the Lorentz norm is given by
$$\|f\|_{p,q} = \begin{cases} \left( \int_0^{|E|} \left( f^*(t)\, t^{\frac1p} \right)^{q} \frac{dt}{t} \right)^{\frac1q}, & 0 < q < \infty,\\[4pt] \sup_{0<t<\infty} t^{\frac1p} f^*(t), & q = \infty. \end{cases}$$
Similarly to the pair of associated Lebesgue spaces L^p and L^{p'}, there exist associated pairs of r. i. spaces.
Definition 1.18 (Associated norm). If ρ is a function norm, its associate norm ρ' is defined on M(E) by
$$\rho'(f) = \sup_{g\in M(E):\ \rho(g)\le 1} \int_E f g\, d\mu, \quad f \in M(E).$$
If ρ is a function norm, then the associate function norm ρ' is itself a function norm. Let us note that ρ'' = ρ. The Lebesgue function norm ‖·‖_p has the associate function norm ‖·‖_{p'} if p ∈ (1, ∞), where p' is the number satisfying
$$\frac1p + \frac1{p'} = 1.$$
The norm associated with ‖·‖_1 is ‖·‖_∞ and vice versa.
Let 1 < p < ∞ and 1 ≤ q ≤ ∞; then the space associated with the Lorentz space L^{p,q} is the Lorentz space L^{p',q'}, where p' and q' satisfy
$$\frac1p + \frac1{p'} = 1 \quad\text{and}\quad \frac1q + \frac1{q'} = 1.$$
Let us note that in r. i. spaces we have the Hölder inequality of the following
form.
Theorem 1.19. Let X be an r. i. space with associate space X'. If f ∈ X and g ∈ X', then fg is integrable and
$$\int |f g| \le \|f\|_X\, \|g\|_{X'}.$$
Proof. This is basically Theorem 2.4 in [2, Chapter 1].
The basic tool for working with r. i. norms is the so-called Hardy-Littlewood-Pólya Principle.
Theorem 1.20. Let E ⊂ R^n be a measurable set and suppose that f and g are locally integrable on E. Then, for any r. i. norm ρ on M(E),
$$\int_0^t f^*(s)\, ds \le \int_0^t g^*(s)\, ds, \quad 0 < t < |E|,$$
implies
$$\rho(f) \le \rho(g).$$
Proof. This is Theorem 4.2 from [2, Chapter 2].
We will apply the Hardy-Littlewood-Pólya Principle to the following inequality.
Theorem 1.21. As before, let E ⊂ R^n be measurable and suppose f and g are integrable functions on E. Then
$$\int_0^t (f^* - g^*)^*(s)\, ds \le \int_0^t (f - g)^*(s)\, ds, \quad 0 < t < |E|.$$
Proof. The proof can be found in [10].
Remark 1.22. The previous two theorems ensure that
$$\|f^* - s^*\|_\infty \le \|(f - s)^*\|_\infty = \|f - s\|_\infty.$$
Definition 1.23. Let (R, µ) be a totally σ-finite measure space.
• The space L¹ + L^∞ consists of all functions f in M(R, µ) that are representable as a sum f = g + h of functions g ∈ L¹ and h ∈ L^∞. For each f ∈ L¹ + L^∞, let
$$\|f\|_{L^1+L^\infty} = \inf \left\{ \|g\|_{L^1} + \|h\|_{L^\infty} \right\},$$
where the infimum is taken over all representations f = g + h of the kind described above.
• For each f in the intersection L¹ ∩ L^∞ of L¹ and L^∞, let
$$\|f\|_{L^1\cap L^\infty} = \max\left\{ \|f\|_{L^1},\, \|f\|_{L^\infty} \right\}.$$
Because it will be useful later, let us recall the following interesting fact. If (R, µ) is a non-atomic, σ-finite measure space and X is an arbitrary rearrangement-invariant Banach function space over (R, µ), then
$$L^1 \cap L^\infty \hookrightarrow X \hookrightarrow L^1 + L^\infty.$$
This is proved in Theorem 6.6 in [2, Chapter 2].
It is possible to create a construction over r. i. spaces similar to that of Sobolev spaces over Lebesgue spaces. This generalization is shown in the following definition.
Definition 1.24. Let Ω ⊂ R^n be an open domain and let X(Ω) be an r. i. space. We define the following Sobolev type spaces:
$$V^1 X(\Omega) = \{ u : \Omega \to \mathbb{R} : u \text{ is weakly differentiable on } \Omega \text{ and } |\nabla u| \in X(\Omega) \},$$
$$W^1 X(\Omega) = X(\Omega) \cap V^1 X(\Omega)$$
and
$$V^1_0 X(\Omega) = \{ u : \Omega \to \mathbb{R} : u_0 \text{ is weakly differentiable on } \mathbb{R}^n \text{ and } |\nabla u| \in X(\Omega) \},$$
where
$$u_0(x) = \begin{cases} u(x), & x \in \Omega,\\ 0, & \text{otherwise}. \end{cases}$$
Moreover, W^1_0 X(Ω) = X(Ω) ∩ V^1_0 X(Ω). These spaces are equipped with the norm
$$\|u\| = \|u\|_{X(\Omega)} + \|\, |\nabla u|\, \|_{X(\Omega)}.$$
1.4 The finite element method
Before we can present the discussed algorithm, we need to recall the basics of the theory of finite element methods. In doing so, we will follow the approach of the book [3]. First, we define a finite element.
Definition 1.25 (finite element). Let
1. K ⊂ Rn be a domain with piecewise smooth boundary,
2. P be a finite-dimensional space of functions on K and
3. N = {N1 , N2 , . . . , Nk } be a basis for a dual space P ∗ .
Then (K, P, N ) is called a finite element.
Remark 1.26. It is implicitly assumed that the elements of N belong to the
dual space of some larger function space.
Definition 1.27. Let (K, P, N) be a finite element, and let {φ_1, φ_2, ..., φ_k} be the basis of P such that N_i(φ_j) = δ_{ij}, N_i ∈ N. It is then called the nodal basis of P, or the basis dual to N.
Definition 1.28 (Local interpolant). Given a finite element (K, P, N), let the set {φ_i : 1 ≤ i ≤ k} ⊂ P be the basis dual to N. If v is a function for which all N_i(v), i = 1, ..., k, are defined, then we define the local interpolant by
$$\Pi_K v := \sum_{i=1}^{k} N_i(v)\, \varphi_i.$$
The most frequently used finite elements are triples consisting of a simplex, some subspace of polynomials, and functionals which assign to a function its value or a derivative at a given point. This is also the case for the algorithms discussed in this thesis.
Our method of approximation will be based on linear Lagrange elements. The finite element method is usually based on a collection of finite elements which locally determine an approximation of a problem, and its solution is taken to be an approximation of the original solution. The following definitions specify the relationship between the domains of the finite elements used in our algorithm.
Definition 1.29 (linear Lagrange element). Let T be a simplex in Rn with
vertices a1 , a2 , . . . , an+1 and let us denote N1 = {F1 , F2 , . . . , Fn+1 }, where
Fi (f ) = f (ai ) for any (continuous) f : T → R. Then the finite element
(T, P1 , N1 ) is called a linear Lagrange element.
Usually all the finite elements used to solve a problem are similar and
differ only in the domains. Therefore it is useful to introduce a referencing
simplex and define on it a similar finite element, so algorithms, properties and
estimates can be described on an example of this particular finite element.
Definition 1.30. Let T ⊂ Rn be a simplex with diameter d, let assign V0 =
[0, 0, . . . , 0] and Vi = d·ei , where ei = [0, . . . , 0, 1, 0, . . . , 0] has a non-zero value
in the i-th coordinate. Then the referencing simplex, Td , of the simplex T is
the simplex with vertices V0 , V1 , · · · , Vn . There are n+1 affine transformations
which map simplex Td on T . These affine transformations will be called
referencing tranformations.
Definition 1.31 (triangulation). Let Ω ⊂ R^n. Then a finite collection T of subsets of Ω is called a triangulation if it meets the following conditions:
(T 1) Each set T ∈ T is closed, connected and has non-empty interior.
(T 2) The boundary ∂T of each T ∈ T is Lipschitz-continuous.
(T 3) It holds that Ω = ∪_{T∈T} T.
(T 4) The intersection of the interiors of any two distinct sets in T is empty.
If all sets in T are simplices, then T is called a simplex triangulation.
Definition 1.32 (conforming triangulation). Let Ω ⊂ R^n be a polygonal domain and let T be a simplex triangulation. Then T is called a conforming triangulation if the following condition holds:
(T 5) If I = T_i ∩ T_j is non-empty for distinct T_i, T_j ∈ T, then I is an m-facet of T_i and also of T_j for some m ∈ {0, ..., n − 1}.
Now we can state the main definition of this section.
Definition 1.33 (finite element space). Let Ω ⊂ R^n be a bounded domain with Lipschitz-continuous boundary and let T be a triangulation of the domain Ω which fulfils conditions (T 1)–(T 4). To each set T ∈ T let there be assigned a finite element (T, P_T, N_T). Let T, T' ∈ T and φ ∈ N_T, φ' ∈ N_{T'}. Then φ and φ' are equivalent if
$$\varphi(v|_T) = \varphi'(v|_{T'})$$
holds for all v ∈ C^∞(R^n). Suppose that for each T ∈ T the functionals in N_T are pairwise non-equivalent. Then the set ∪_{T∈T} N_T can be divided into subsets N_{h,1}, N_{h,2}, ..., N_{h,N}, where each N_{h,i} contains equivalent functionals and any two equivalent functionals from ∪_{T∈T} N_T lie in the same set N_{h,i}. Let us introduce, for i = 1, 2, ..., N, the symbol
$$\mathcal{T}_{h,i} = \{ T \in \mathcal{T} : N_{h,i} \cap N_T \ne \emptyset \}.$$
Under these assumptions the set N_{h,i} ∩ N_T contains exactly one functional for a chosen T ∈ T_{h,i}; we will denote this functional by φ_{T,i}. Let us define the finite element space corresponding to the finite element domain (Ω, T, {(T, P_T, N_T) : T ∈ T}) as follows:
$$X = \left\{ v_h \in L^2(\Omega) : v_h|_T \in \mathcal{P}_T;\ \varphi_{K,i}(v_h|_K) = \varphi_{K',i}(v_h|_{K'}) \right\},$$
where T ∈ T and K, K' ∈ T_{h,i} for i = 1, 2, ..., N.
Definition 1.34. Let P ⊂ Rn be a polyhedron, let T be a conforming simplex
triangulation of P . Let us assign to each T ∈ T a linear Lagrangian finite
element and let Xh be the finite element space which corresponds to T and
assigned finite elements. Then Xh is called a Lagrangian finite element space
which corresponds to the triangulation T .
Definition 1.35 (Finite element domain). Let P ⊂ R^n be a polyhedral domain and let T be a triangulation of P which fulfils conditions (T 1)–(T 5). To each set T ∈ T let there be assigned a finite element (T, P_T, N_T). Then, for the sake of brevity, let us call the domain P equipped with the triangulation T and the assigned finite elements a finite element domain. By a simplex linear Lagrange finite element domain we mean a finite element domain such that the triangulation T contains only simplices and all assigned finite elements are linear Lagrange finite elements.
The following definition describes how an approximation from the finite
elements space is assigned to a given function.
Definition 1.36 (interpolation operator). Let (Ω, T , {(T, PT , NT ) ; T ∈ T })
be a finite element domain and let Q(Ω) be a space of all real functions
defined on Ω, such that for all T ∈ T all functionals from NT are defined on
Q(Ω)|T . The interpolation operator ΠΩ : Q(Ω) → X assigns to v ∈ Q(Ω) a
function ΠΩ v ∈ X such that φT,i (v|T ) = φT,i ((ΠΩ v)|T ) for all T ∈ T and
i ∈ {1, . . . , N }.
Remark 1.37. Let Ω be a finite element domain with triangulation T and finite elements {(T, P_T, N_T) : T ∈ T}, and let X be the corresponding finite element space. Because Π_Ω f belongs to X and therefore to L²(Ω), formally speaking Π_Ω f is a class of functions which are equal to each other almost everywhere. This is convenient because the intersections of some simplices in the triangulation are non-empty, and the value of Π_Ω f could be ambiguous there. However, to simplify our argumentation, let us assume that for all x ∈ Ω it holds that
$$\Pi_\Omega f(x) \in \bigcup_{T\in\mathcal{T}:\ x\in T} \{ P(x) : P \in \mathcal{P}_T \text{ such that } \varphi_j(P) = \varphi_j(f)\ \forall \varphi_j \in N_T \}.$$
In other words, the values of Π_Ω f are taken only from the “suggestions” of the finite elements involved. If we say that Π_Ω f is continuous, we mean that there exists a continuous representative in the class Π_Ω f.
Lemma 1.38. Let P be a linear Lagrange finite element domain with triangulation T = {Ti }i∈I . Let f ∈ C(P ). Then s = ΠP f is continuous, too.
Proof. Let x ∈ P; we prove that s is continuous at x. The proof splits into the following two cases.
• If x is in the interior of some T ∈ T, then s is continuous at x, because it is linear on T and hence continuous on a neighbourhood of x.
• Let x lie on the boundary of some T_i ∈ T. Then, thanks to (T 5), it must lie in an m-facet, 0 ≤ m ≤ n − 1, common to simplices T_j, j ∈ J ⊂ I. Let us denote that m-facet by F. Any linear function on an m-simplex is determined by its prescribed values at the vertices of that simplex. Because all simplices T_j, j ∈ J, contain the vertices of F and the linear function s|_{T_j} has values on F determined by the values of f at the vertices of F, it holds that s|_{T_j}(x) = s|_{T_k}(x) for all j, k ∈ J. Using the classical Heine theorem we can prove that for any sequence {x_k}_{k=1}^∞, x_k ∈ P for all k ≥ 1, such that lim_{k→∞} x_k = x, it holds that lim_{k→∞} s(x_k) = s(x). Let us consider the maximal subsequences of {x_k}, denoted {x_{t_j}}, such that x_{t_j} ∈ T_j for all t_j ≥ 1, j ∈ J. Then, given ε > 0, we can find m_j ∈ N such that |s(x_{t_j}) − s(x)| < ε for all t_j > m_j. We set m = max_{j∈J} m_j. Then |s(x_k) − s(x)| < ε for all k > m, because {x_k} ⊂ ∪_{j∈J} ∪_{t_j≥1} {x_{t_j}}. Therefore s is continuous at x.
We can describe our algorithm for the approximation of the non-increasing rearrangement of a function in the following way. Let Ω ⊂ R^n be a polyhedron and let f : Ω → R. We approximate the rearrangement of the original function f by the rearrangement of the function Π_Ω f from a finite element space corresponding to some simplex triangulation of the domain Ω, where each simplex is assigned a linear Lagrange finite element.
1.5 Classical estimates on interpolation error
In this section we present some known results on bounds for the interpolation error. The following two theorems are borrowed from [3].
Definition 1.39. Let (K, P, N) be a finite element and let F be an affine map. The finite element (K̂, P̂, N̂) is affine equivalent to (K, P, N) if
• F(K) = K̂,
• {p̂ ∘ F : p̂ ∈ P̂} = P, and
• {Φ_N : N ∈ N} = N̂, where Φ_N(f̂) = N(f̂ ∘ F) for f̂ in the domain space of N̂.
Definition 1.40. Ω ⊂ Rn is star-shaped with respect to B if, for all x ∈ Ω,
the closed convex hull of {x} ∪ B is a subset of Ω.
Remark 1.41. Let us note that a simplex is star-shaped with respect to any
ball it contains.
Definition 1.42. Suppose Ω has diameter d and is star-shaped with respect to a ball B, and let B denote the set of all balls such that Ω is star-shaped with respect to them. We denote
$$\varrho_\Omega = \sup_{B\in\mathcal{B}} \operatorname{diam} B \quad\text{and}\quad h_\Omega = \operatorname{diam}\Omega;$$
then the chunkiness parameter of Ω is defined by
$$\gamma = \frac{h_\Omega}{\varrho_\Omega}.$$
Let Ω be a given domain and let {T^h}, 0 < h ≤ 1, be a family of triangulations such that
$$\max\{\operatorname{diam} T : T \in \mathcal{T}^h\} \le h\, \operatorname{diam}\Omega$$
for all h ∈ (0, 1]. If, for such a family, there exists σ > 0 such that for all T ∈ T^h and all h ∈ (0, 1] it holds that
$$\varrho_T \ge \sigma h_T, \tag{1.2}$$
then this family is called non-degenerate.
Remark 1.43. The family T h is non-degenerate if and only if the chunkiness
parameter is uniformly bounded for all T ∈ T h and for all h ∈ (0, 1].
For a given family of triangulations we can create a family of finite element domains by assigning a finite element to every set in each triangulation. Such a family will be called a family of finite element domains.
Theorem 1.44. Let (K, P, N) be a finite element satisfying
1. K is star-shaped with respect to some ball,
2. P_{m−1} ⊂ P ⊂ W^{m,∞}(K), and
3. N ⊂ (C^l(K̄))'.
Suppose 1 ≤ p ≤ ∞ and either m − l − n/p > 0 when p > 1 or m − l − n ≥ 0 when p = 1. Then for 0 ≤ i ≤ m and v ∈ W^{m,p}(K) we have
$$|v - \Pi_K v|_{i,p,K} \le C_{m,n,\gamma,\sigma(\hat K)}\, (\operatorname{diam} K)^{m-i}\, |v|_{m,p,K},$$
where K̂ = { x / diam K : x ∈ K }, γ is the chunkiness parameter of K and σ(K̂) denotes the norm of the interpolation operator Π_{K̂} in L(C^l(K̂), W^{m,p}(K̂)).
Proof. Proof can be found in [3, Theorem 4.4.4].
Theorem 1.45. Let {T^h}, 0 < h ≤ 1, be a non-degenerate family of subdivisions of a polyhedral domain Ω ⊂ R^n. Let (K, P, N) be a reference element satisfying the conditions of Theorem 1.44 for some l, m and p. For all T ∈ T^h, 0 < h ≤ 1, let (T, P_T, N_T) be the affine-equivalent element. Then there exists a positive constant C depending on the reference element, n, m, p and the number σ in (1.2) such that, for 0 ≤ s ≤ m,
$$\left( \sum_{T\in\mathcal{T}^h} \|v - \Pi_{\mathcal{T}^h} v\|_{s,p,T}^p \right)^{\frac1p} \le C h^{m-s}\, |v|_{m,p,\Omega}$$
for all v ∈ W^{m,p}(Ω), where the left-hand side should be interpreted, in the case p = ∞, as max_{T∈T^h} ‖v − Π_{T^h} v‖_{s,∞,T}. For 0 ≤ s ≤ l,
$$\max_{T\in\mathcal{T}^h} \|v - \Pi_{\mathcal{T}^h} v\|_{s,\infty,T} \le C h^{m-s-\frac np}\, |v|_{m,p,\Omega} \quad \forall v \in W^{m,p}(\Omega).$$
Proof. Proof can be found in [3], Theorem 4.4.20.
Remark 1.46. Let us examine the validity of the assumptions of the preceding theorem in the case of a family which contains only simplex linear Lagrangian finite element domains.
In that case all assigned finite elements are affine equivalent to one linear Lagrange element (T, P_1, N_1), where T is a simplex. It is clear that (T, P_1, N_1) satisfies the first assumption of Theorem 1.44, the second condition is fulfilled only for m ≤ 2, and the third condition is fulfilled for any l > 0. This leaves us with p ∈ (n, ∞] for n > 1. Working with the simplex linear Lagrange finite elements, we can apply Theorem 1.45 with s = 0, 1, m = 2 and p ∈ (n, ∞], provided that our family of triangulations is non-degenerate.
2. Description of the algorithm
To introduce briefly the algorithm presented in this thesis, we say that it uses the Lagrange finite element method with a conforming simplex triangulation to approximate the rearrangement of the original function by the rearrangement of its approximation. In this chapter we will go through this procedure in detail. Later, we will also briefly present some results in one dimension with splines of higher order. We conclude this chapter by presenting an older algorithm.
Let us summarize the situation in which our algorithm can be used. Let P ⊂ R^n be a polyhedron and f : P → R a real function. Assume also that T is a conforming triangulation of P. If T ∈ T is a simplex, then we denote by l_T the linear function on T which agrees with f at all vertices of T. We assign to each T ∈ T a linear Lagrangian finite element and create the finite element space X_h of functions which are linear on each T ∈ T. We denote s = Π_P(f). It is worth noting that s|_T = l_T for all T ∈ T. Finally, we can proceed to the description of our algorithm.
In order to determine s* it suffices to compute µ_s = Σ_{T∈T} µ_{l_T}. To do this we need to measure the level sets
$$E_\lambda^{l_T,T} := \{ x \in T : |l_T(x)| > \lambda \}$$
for each simplex T in the decomposition of P.
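The inversion step implicit in this plan can be sketched as follows (an illustration, not the thesis code): assuming the per-simplex distribution functions µ_{l_T} are available as callables, their sum µ_s is non-increasing in λ, so s*(t) = inf{λ ≥ 0 : µ_s(λ) ≤ t} can be found by bisection.

```python
# Sketch: invert the summed distribution function numerically (assumptions mine).
def total_distribution(mu_list, lam):
    """mu_s(lam) = sum over simplices T of mu_{l_T}(lam)."""
    return sum(mu(lam) for mu in mu_list)

def rearrangement_value(mu_list, t, lam_max, tol=1e-10):
    """s*(t) = inf{lam >= 0 : mu_s(lam) <= t}, by bisection on [0, lam_max]."""
    lo, hi = 0.0, lam_max
    if total_distribution(mu_list, lo) <= t:
        return 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_distribution(mu_list, mid) <= t:
            hi = mid          # mid is admissible, move the upper bound down
        else:
            lo = mid
    return hi

# Toy example: two unit intervals carrying constant values 1 and 2.
mus = [lambda lam: 1.0 if lam < 1.0 else 0.0,
       lambda lam: 1.0 if lam < 2.0 else 0.0]
print(rearrangement_value(mus, 0.5, lam_max=3.0))  # ~2.0
print(rearrangement_value(mus, 1.5, lam_max=3.0))  # ~1.0
```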
Suppose the vertices V1 , V2 , . . . , Vn+1 of T have been so ordered that z1 ≤
z2 ≤ · · · ≤ zn+1 , where zk = |f (Vk )|, k = 1, 2, . . . , n + 1. If λ ≤ z1 , then
µlT (λ) = |T |, while if λ ≥ zn+1 , µlT (λ) = 0. Suppose, then, zk0 −1 ≤ λ ≤ zk0
for some k0 ∈ {2, . . . , n + 1}. In that case the hyperplane lT (x) = λ divides
T into two polyhedra P1 and P2 .
The next step is to decide on which part, P1 or P2 , is lT > λ almost
everywhere. The following notation will be useful in more complicated cases.
Notation 2.1. Let a, b ∈ R^n and let l be a linear function on R^n. Let l(a) = C, l(b) = D, C < D, and let λ ∈ [C, D]. Then there exists exactly one point δ_{a,b,l,λ} on the line segment between a and b such that l(δ_{a,b,l,λ}) = λ. Usually it will be obvious to which linear function and which λ our symbol refers, and we will write only δ_{a,b} instead of δ_{a,b,l,λ}. It holds that
$$\delta_{a,b,l,\lambda} = a\, \frac{D-\lambda}{D-C} + b\, \frac{\lambda-C}{D-C}.$$
The motivation for this notation is to denote a point on an edge of a simplex at which the prescribed value l_T = λ is attained.
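A direct transcription of this formula (a sketch; the argument names are mine) reads:

```python
# delta_{a,b,l,lam}: the point on [a, b] where the linear function attains lam,
# given l(a) = C < D = l(b) and lam in [C, D].
import numpy as np

def delta_point(a, b, C, D, lam):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return a * (D - lam) / (D - C) + b * (lam - C) / (D - C)

print(delta_point([0.0, 0.0], [2.0, 0.0], 1.0, 3.0, 2.0))  # [1. 0.]
```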
Figure 2.1: Example of a linear function on a triangular domain, with z1 < λ < z2. The level set is the gray polygon in the base; the red triangle is its complement.
At this point our description of the algorithm splits into cases according to the dimension of the domain of the function. We restrict ourselves to the cases n = 1, 2, 3. The following sections deal with each case separately.
2.1 The case n = 1
Although in the case n = 1 the situation is simpler and splines of higher order can be used (this matter is the subject of Section 2.4), let us present the algorithm with linear splines for the sake of completeness. Simplices in R are intervals. Consider an interval I with endpoints x and y. We denote a = l_I(x) and b = l_I(y). Then the formula for the distribution function of l_I is
$$\mu_{l_I}(\lambda) = \begin{cases} |x-y|, & \lambda \le \min\{a,b\},\\[2pt] |x-y|\,\dfrac{\max\{a,b\}-\lambda}{|b-a|}, & \min\{a,b\} < \lambda \le \max\{a,b\},\\[2pt] 0, & \max\{a,b\} < \lambda. \end{cases}$$
2.2 The case n = 2
Now, let n = 2. Let us leave aside degenerate cases and assume that z1 < z2 < z3. When z1 < λ ≤ z2, we are in the situation shown in Figure 2.1, in which
$$|E_\lambda^{l_T,T}| = |T| - |T_1|,$$
T_1 being the red triangle in the plane v1 v2 v3. The case z2 < λ ≤ z3 is pictured in Figure 2.2; here $|E_\lambda^{l_T,T}| = |T_2|$, where T_2 is the red triangle in the plane v1 v2 v3.

Figure 2.2: The second case, where z2 < λ < z3. The level set is the red triangle in the base.

As is well known, the area of a triangle can be expressed in terms of a determinant involving the coordinates of its vertices. Using this fact one can show that, on [z1, z2], µ_{l_T}(λ) is a quadratic spline equal to |T| at z1, equal to (z3 − z2)/(z3 − z1) |T| at z2 and to 0 at z3, namely,
$$\mu_{l_T}(\lambda) = \begin{cases} |T|, & \lambda \le z_1,\\[2pt] \left(1 - \dfrac{(\lambda-z_1)^2}{(z_2-z_1)(z_3-z_1)}\right)|T|, & z_1 < \lambda \le z_2,\\[2pt] \dfrac{(z_3-\lambda)^2}{(z_3-z_2)(z_3-z_1)}\,|T|, & z_2 < \lambda \le z_3,\\[2pt] 0, & z_3 < \lambda. \end{cases}$$
2.3 The case n = 3
If n = 3 then the formula for µlT (λ) will be split into five cases.
1. λ ≤ z1 : This case is trivial, µlT (λ) = |T |.
2. z1 < λ < z2: The level set in this case is a polyhedron M, whose complement in T, let us call it T_M, is a tetrahedron with vertices V1, δ_{V1,V2}, δ_{V1,V3} and δ_{V1,V4}. The situation is depicted in Figure 2.3. Therefore, we have
$$|E_\lambda^{l_T,T}| = |T| - |T_M|.$$

Figure 2.3: The situation in R³ if z1 < λ < z2. The red tetrahedron is the complement of the level set.
3. z2 < λ < z3: When λ is between z2 and z3 the situation is more complicated: neither the level set nor its complement in T is a tetrahedron. We choose to compute the measure of the level set by dividing it into tetrahedra whose pairwise intersections have measure zero. This can be done in several ways. Here we divide this region into three tetrahedra; in [13] one can find a decomposition into four tetrahedra with the advantage of better-proportioned tetrahedra in the division. It is worth noting that this problem is equivalent to that of dividing a triangle-based prismatoid into tetrahedra. The situation and the division of the level set are shown in Figure 2.4.
The level set in this case is the polyhedron with vertices δ_{V1,V3}, δ_{V1,V4}, δ_{V2,V3}, δ_{V2,V4}, V3 and V4. Such a polyhedron can be divided into three tetrahedra T_R, T_G and T_B, where T_R has vertices δ_{V2,V3}, δ_{V1,V4}, δ_{V2,V4} and V4, the tetrahedron T_B has vertices δ_{V2,V3}, δ_{V1,V3}, δ_{V1,V4}, V4, and the last tetrahedron T_G is given by the vertices δ_{V2,V3}, V3, δ_{V1,V3} and V4.

Figure 2.4: The situation if z2 < λ < z3; the tetrahedra T_R, T_G and T_B are the red, green and blue tetrahedra.

Hence, we get the formula
$$|E_\lambda^{l_T,T}| = |T_R| + |T_G| + |T_B|.$$
4. z3 < λ < z4: In this case the only vertex which lies in $E_\lambda^{l_T,T}$ is V4. The plane l_T(x) = λ cuts the tetrahedron T into a polyhedron P1 with vertices V1, V2, V3, O1, O2 and O3 and a tetrahedron T_O with vertices O1, O2, O3 and V4, where the point O_i, i = 1, 2, 3, lies on the edge between the vertices V_i and V_4 and satisfies l_T(O_i) = λ. The situation is illustrated in Figure 2.5. It holds that l_T(x) ≥ λ on T_O and therefore we get
$$|E_\lambda^{l_T,T}| = |T_O|.$$
To be specific, the vertices of T_O are δ_{V1,V4}, δ_{V2,V4}, δ_{V3,V4} and V4.
5. z4 ≤ λ: Finally, the volume of the level set in this case is zero, hence µ_{l_T}(λ) = 0.
Figure 2.5: If z3 < λ < z4, then the level set is a tetrahedron similar to the marked red tetrahedron.

The previous paragraphs show how to convert the problem of computing the distribution function of a linear function defined on a tetrahedron into a sum of volumes of smaller tetrahedra. These facts, together with Remark 1.8, ensure that there is a formula similar to the one in the two-dimensional case, but it is rather complicated and gives no further information; therefore we shall not present it here. This concludes the description of the algorithm in three-dimensional space.
Let us note that our method was implemented in the case n = 2 with linear
splines and in the case n = 1 with clamped cubic splines. More information
can be found in Appendix A. We will focus on the approximation by cubic
splines in the following section.
2.4 Approximation by splines of higher order in one dimension
When the domain is a finite interval I ⊂ R, we can use approximation by splines of higher order. Let us present error estimates based on approximation by the so-called clamped cubic splines and natural cubic splines.
Definition 2.2 (Partition of an interval). Let I = [a, b] ⊂ R be an arbitrary interval. Let
$$\pi : a = x_0 < x_1 < \cdots < x_n = b$$
be a collection of knots partitioning the interval I. We define
$$\Delta x_i = x_{i+1} - x_i, \quad i = 0, 1, \ldots, n-1.$$
The norm of the partition is defined as
$$\|\pi\| = \max_{i\in\{0,\ldots,n-1\}} \Delta x_i,$$
and the mesh ratio is
$$M_\pi = \frac{\|\pi\|}{\min_{i\in\{0,\ldots,n-1\}} \Delta x_i}.$$
Definition 2.3 (cubic spline approximation). Let π be a partition of the interval I = [a, b], a, b ∈ R. A function s such that
$$s|_{[x_i,x_{i+1}]} \in \mathcal{P}_3([x_i, x_{i+1}])$$
for i ∈ {0, 1, ..., n−1}, and s ∈ C²(I), is called a cubic spline. Let f : I → R be such that the limits l = lim_{x→a+} f'(x) and r = lim_{x→b−} f'(x) exist; then the clamped cubic spline approximation of f is the unique cubic spline such that
$$s(x_i) = f(x_i)$$
for all i ∈ {0, 1, ..., n}, and in addition
$$\lim_{x\to a+} s'(x) = l \quad\text{and}\quad \lim_{x\to b-} s'(x) = r.$$
If the limits L = lim_{x→a+} f''(x) and R = lim_{x→b−} f''(x) exist, then the natural cubic spline approximation of the function f is a cubic spline which satisfies
$$s(x_i) = f(x_i)$$
for all i ∈ {0, 1, ..., n}, while also
$$\lim_{x\to a+} s''(x) = L \quad\text{and}\quad \lim_{x\to b-} s''(x) = R.$$
Let us briefly discuss the algorithm for computing values of the distribution function in the case of approximation by a cubic spline. The definition of a cubic spline and the error bounds are given earlier in this section. We split the interval I into subintervals such that s is not only a cubic polynomial on each of them, but is in addition monotone there. This leads to a new partition of I, say π'. Computing the value of the distribution function of a monotone cubic polynomial then turns into the question of finding its roots.
We present a theorem about error bounds for spline approximation which
originated in [9, Theorem 5].
Theorem 2.4. Let s be a clamped cubic or natural cubic spline approximation of f ∈ C⁴(I), I = [a, b], a, b ∈ R, corresponding to a partition π. Then
$$\left\| (f - s)^{(r)} \right\|_\infty \le C_r\, \left\| f^{(4)} \right\|_\infty \|\pi\|^{4-r},$$
for r = 0, 1, 2, 3, with constants
$$C_0 = \frac{5}{384}, \quad C_1 = \frac{1}{24}, \quad C_2 = \frac{3}{8}, \quad C_3 = \frac12\left( M_\pi + \frac{1}{M_\pi} \right).$$
Proof. This is the main result of the paper [9].
This leads us to an estimate of the error involved in the approximation of a non-increasing rearrangement in one dimension. An advantage of approximation by splines of higher order is that we can use the easily calculated derivative of the approximation, and its rearrangement, to approximate the derivative of the non-increasing rearrangement. The error bounds for this kind of approximation are included in the following theorem, too.
Theorem 2.5. Let I = [a, b], a, b ∈ R, and consider f ∈ C(I). Suppose that π is a partition of I. Let s_1 be the linear spline interpolating f at the nodes of π. Then, provided f ∈ C¹(I),
$$\|f^* - s_1^*\|_\infty \le \max_{i\in\{0,1,\ldots,n-1\}} \int_{x_i}^{x_{i+1}} \left| f'(x) - \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} \right| dx.$$
Moreover, if f ∈ C⁴(I) and s_3 is a clamped cubic spline approximation of f with respect to the partition π, then
$$\|f^* - s_3^*\|_\infty \le \frac{5}{384}\, \|f^{(4)}\|_\infty \|\pi\|^4,$$
$$\big\| (f^{(1)})^* - (s_3^{(1)})^* \big\|_\infty \le \frac{1}{24}\, \|f^{(4)}\|_\infty \|\pi\|^3,$$
$$\big\| (f^{(2)})^* - (s_3^{(2)})^* \big\|_\infty \le \frac{3}{8}\, \|f^{(4)}\|_\infty \|\pi\|^2,$$
$$\big\| (f^{(3)})^* - (s_3^{(3)})^* \big\|_\infty \le \frac12\left( M_\pi + \frac{1}{M_\pi} \right) \|f^{(4)}\|_\infty \|\pi\|.$$
Proof. First, we use the claim of Remark 1.22 to get
$$\|f^* - s^*\|_\infty \le \|f - s\|_\infty$$
for both s_1 and s_3. On [x_i, x_{i+1}] we have
$$f(x) - s_1(x) = \int_{x_i}^{x} \left( f'(t) - s_1'(t) \right) dt = \int_{x_i}^{x} \left( f'(t) - \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} \right) dt \le \int_{x_i}^{x_{i+1}} \left| f'(t) - \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} \right| dt.$$
Together, then,
$$\|f^* - s_1^*\|_\infty \le \|f - s_1\|_{\infty,I} \le \max_{i\in\{0,1,\ldots,n-1\}} \|f - s_1\|_{\infty,[x_i,x_{i+1}]} \le \max_{i\in\{0,1,\ldots,n-1\}} \int_{x_i}^{x_{i+1}} \left| f'(t) - \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} \right| dt.$$
For the cubic spline approximation we use Theorem 2.4 and get
$$\|f^* - s_3^*\|_\infty \le \|f - s_3\|_\infty \le \frac{5}{384}\, \|f^{(4)}\|_\infty \|\pi\|^4.$$
The estimates for the derivatives can be obtained analogously.
2.5 Older algorithm
The only algorithm dealing with an approximation of the non-increasing rearrangement which is known to the author is the one presented in the paper [12], due to Talenti. This algorithm approximates the rearrangement of the input function by that of a step function approximation. Only real-valued functions defined on a finite interval in R are considered.
Let us briefly present the results of the mentioned paper; for proofs of the stated theorems and more details please consult [12]. First, we need to define a step function approximation.
Notation 2.6. Let f : I → R, I = [a, b], a, b ∈ R, and n ∈ N. Then we define a step function approximation to f as follows:
$$S_{f,n}(t) = \sum_{i=1}^{n} f\!\left( \frac{x_{i-1} + x_i}{2} \right) \chi_{[x_{i-1},\, x_i)}(t),$$
where
$$x_i = a + \frac{i(b-a)}{n}, \quad i = 0, 1, 2, \ldots, n.$$
Such an approximation of a function yields the following estimates.
Theorem 2.7. Let a, b ∈ R, I = [a, b], and let f : I → R be absolutely continuous with derivative f' ∈ L^q(I) for some q ≥ 1, and let p ≥ 1 be such that q ≤ p. Then it holds that
$$\|f - S_{f,n}\|_p \le C \left( \frac{h}{2} \right)^{1+\frac1p-\frac1q} \|f'\|_q,$$
where
$$C = \left( 1 + \frac{q}{p(q-1)} \right)^{\frac1q-\frac1p} \left[ \left( 1 + p\left(1 - \frac1q\right) \right) B\!\left( \frac1p,\, 1 - \frac1q \right) \right]^{\frac1p}$$
and B stands for Euler's Beta function, and
$$h = \frac{b-a}{n}.$$
Moreover, we have
$$\|f^* - S_{f,n}^*\|_p \le \|f - S_{f,n}\|_p \le C \left( \frac{h}{2} \right)^{1+\frac1p-\frac1q} \|f'\|_q.$$
We will now describe how to compute the non-increasing rearrangement of S_{f,n}. Let V = {v_1, v_2, ..., v_n} be the finite sequence such that
$$v_i = f\!\left( \frac{x_{i-1}+x_i}{2} \right), \quad i = 1, \ldots, n.$$
We denote by S = {s_1, s_2, ..., s_n} the sequence V sorted in decreasing order. Then the non-increasing rearrangement of the step function approximation S_{f,n} is defined as follows:
$$(S_{f,n})^*(t) = \sum_{i=1}^{n} s_i\, \chi_{[x_{i-1},\, x_i)}(t).$$
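For completeness, here is a sketch of this construction (assuming f is given as a callable; not taken from [12]):

```python
import numpy as np

def step_rearrangement(f, a, b, n):
    """Return the breakpoints and the sorted midpoint values defining (S_{f,n})*."""
    x = a + (b - a) * np.arange(n + 1) / n
    midpoints = 0.5 * (x[:-1] + x[1:])
    values = np.sort(f(midpoints))[::-1]          # sorted in non-increasing order
    return x, values

x, s = step_rearrangement(lambda t: np.sin(np.pi * t), 0.0, 1.0, 4)
print(s)   # the four midpoint values of sin(pi t), sorted in non-increasing order
```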
This algorithm can easily be generalized to higher dimensions: if we use an equidistant partition into cubes in R^n, or a simplex partition, we obtain an analogous algorithm for R² or R³. Splines of higher order can be used, too.
The relation of the algorithm which is the subject of this thesis to the algorithm described in this section is that it uses linear interpolation instead of an approximation by a step function. The linear approximation then yields sharper estimates in exchange for a more complicated computation of the rearrangement of the approximation, which is no surprise.
3. Error estimates
The subject of this chapter is the presentation of error estimates for our method of approximation. We start with the result which follows from the known bounds on the interpolation error in finite element spaces.
3.1 Error estimates based on the interpolation error
Taking advantage of general finite element theory we can readily state a
general result.
Theorem 3.1. Let P ⊂ R^n be a polyhedron and let f ∈ W^{2,∞}(P). Let Ω_h, h ∈ (0, 1], be a non-degenerate family of linear Lagrangian finite element domains with conforming simplex triangulations. Denote s_h = Π_{Ω_h} f. Then
$$\|f^* - s_h^*\|_\infty \le C h^2\, |f|_{2,\infty}.$$
Proof. Using Remark 1.22 we get
$$\|f^* - s_h^*\|_\infty \le \|f - s_h\|_\infty.$$
Now we use Theorem 1.45 with reference to Remark 1.46 (with s = 0, m = 2, p = ∞). This gives us
$$\|f - s_h\|_\infty = |f - s_h|_{0,\infty,P} \le C h^2\, |f|_{2,\infty,P}.$$
3.2 Supporting theorems
Before we proceed to the main theorems themselves, we state some results that will be used in their proofs.
Lemma 3.2. Let X be an r. i. space and let Ω ⊂ R^n be an open set. Let u ∈ V¹X(Ω) be such that |{x ∈ Ω : |u(x)| > t}| < ∞ for every t > 0. Then u* is locally absolutely continuous and
$$\left\| n\gamma_n^{\frac1n}\, s^{1-\frac1n}\left( -\frac{du^*}{ds} \right) \right\|_{\overline X,(0,|\Omega|)} \le \|\, |\nabla u|\, \|_{X(\Omega)}. \tag{3.1}$$
Our proof will follow the direction of the proof of Lemma 4.1 in [5], because these lemmas are almost the same. The only difference is that our lemma has slightly weaker conditions on u.
Proof. Let us set
$$\varphi(s) = n\gamma_n^{\frac1n} \left( -\frac{du^*}{ds}(s) \right) s^{1-\frac1n}, \quad 0 < s < |\Omega|.$$
We will prove that
$$\int_0^s \varphi^*(r)\, dr \le \int_0^s |\nabla u|^*(r)\, dr \tag{3.2}$$
for s ∈ (0, |Ω|). The inequality (3.1) is then a consequence of Theorem 1.20 and (3.2). Let 0 ≤ a < b, a, b ∈ R, and let v be the function defined by
$$v(x) = \begin{cases} 0, & |u(x)| \le u^*(b),\\ |u(x)| - u^*(b), & u^*(b) < |u(x)| < u^*(a),\\ u^*(a) - u^*(b), & u^*(a) \le |u(x)|. \end{cases}$$
Since |∇u| ∈ X(Ω), we have |∇u| ∈ X(Ω) ⊂ L¹(Ω) + L^∞(Ω) by Theorem 6.6 in the second chapter of [2]. Then, with the aid of Theorem 6.2 from the same source, we have for s > 0
$$\int_0^s |\nabla u|^*(r)\, dr = \inf_{|\nabla u| = g+h} \left( \|g\|_{L^1} + s\, \|h\|_{L^\infty} \right) \le \begin{cases} \|\, |\nabla u|\, \|_{L^1+L^\infty}, & s \le 1,\\ s\, \|\, |\nabla u|\, \|_{L^1+L^\infty}, & 1 < s, \end{cases}$$
hence
$$\int_0^s |\nabla u|^*(r)\, dr < \infty,$$
from which it follows that |∇u| ∈ L¹(G) for every G ⊂ Ω of finite measure. We will denote
$$E_{a,b} = \{ x \in \Omega : u^*(a) > |u(x)| > u^*(b) \}.$$
Because |∇u| ∈ L¹(G) for every G ⊂ Ω of finite measure and
$$|E_{a,b}| \le b - a, \tag{3.3}$$
we have that v ∈ V¹L¹(R^n). Because v is also a bounded function with support of finite measure, we also get v ∈ W¹L¹(R^n). The coarea formula for functions of bounded variation applied to v yields
$$\int_{E_{a,b}} |\nabla u|\, dx = \int_{u^*(b)}^{u^*(a)} P(\{|u| > t\}, \mathbb{R}^n)\, dt, \tag{3.4}$$
where P(E, R^n) denotes the perimeter of the set E. The coarea formula for BV functions can be found for example in [7], p. 185. The standard isoperimetric theorem tells us that
$$P(\{|u| > t\}, \mathbb{R}^n) \ge n\gamma_n^{\frac1n}\, |\{|u| > t\}|^{1-\frac1n}. \tag{3.5}$$
The last two relations imply that
$$\int_{E_{a,b}} |\nabla u|\, dx \ge n\gamma_n^{\frac1n}\, a^{\frac{1}{n'}} \left[ u^*(a) - u^*(b) \right]. \tag{3.6}$$
The estimates (3.3) and (3.6) together ensure that u* is locally absolutely continuous. Moreover, the relations (3.4) and (3.5) yield
$$\int_{E_{a,b}} |\nabla u|\, dx \ge n\gamma_n^{\frac1n} \int_{u^*(b)}^{u^*(a)} |\{|u| > t\}|^{1-\frac1n}\, dt.$$
We have
$$\int_a^b \varphi(r)\, dr = n\gamma_n^{\frac1n} \int_a^b r^{1-\frac1n} \left( -\frac{du^*(r)}{dr} \right) dr.$$
If we acknowledge that for r > 0 it holds that
$$\text{if } \frac{du^*(r)}{dr} \ne 0, \text{ then } r = |\{|u| > u^*(r)\}|,$$
then the change of variables u*(r) = t, together with the fact that du*(r)/dr ≤ 0, yields
$$\int_a^b \varphi(r)\, dr = n\gamma_n^{\frac1n} \int_{u^*(b)}^{u^*(a)} |\{|u| > t\}|^{1-\frac1n}\, dt \le \int_{E_{a,b}} |\nabla u|\, dx.$$
Let {(a_i, b_i)}_{i=1}^∞ be any countable family of disjoint intervals in (0, |Ω|). Then Theorem 1.11 yields
$$\int_{\cup_i (a_i,b_i)} \varphi(r)\, dr = \sum_i \int_{a_i}^{b_i} \varphi(r)\, dr \le \sum_i \int_{E_{a_i,b_i}} |\nabla u|(x)\, dx \le \int_\Omega |\nabla u|(x) \sum_i \chi_{E_{a_i,b_i}}(x)\, dx \le \int_0^{\sum_i |E_{a_i,b_i}|} |\nabla u|^*(r)\, dr.$$
Thus, by (3.3), we obtain
$$\int_{\cup_i (a_i,b_i)} \varphi(r)\, dr \le \int_0^{\sum_i (b_i - a_i)} |\nabla u|^*(r)\, dr.$$
The last estimate yields
$$\sup_{|E|=s} \int_E \varphi(r)\, dr \le \int_0^s |\nabla u|^*(r)\, dr,$$
since every measurable set E ⊂ (0, |Ω|) can be approximated from outside by sets of the form ∪_i(a_i, b_i). Hence (3.2) follows, as its left-hand side coincides with the left-hand side of the previous inequality.
The following theorem is a paraphrase of Theorem 6.3 in [6]. The presented proof is an adjustment of the original proof to our conditions.
Theorem 3.3. Let Ω ⊂ R^n be a bounded domain and let ‖·‖_R and ‖·‖_D be r. i. norms on M⁺(0, |Ω|). If there exists K > 0 for which
$$\left\| \int_t^{|\Omega|} f(s)\, s^{\frac1n-1}\, ds \right\|_R \le K\, \|f\|_D, \quad f \in M^+(0,|\Omega|), \tag{3.7}$$
then it holds that
$$\|u^*\|_R \le C\, \|\, |\nabla u|^*\, \|_D$$
for all continuous u ∈ V¹L_D(Ω) such that
$$\lim_{t\to|\Omega|-} u^*(t) = 0. \tag{3.8}$$
Moreover, the constant C reads as
$$C = \frac{K}{n\gamma_n^{1/n}}.$$
Proof. Let u ∈ V¹L_D(Ω) satisfy condition (3.8). According to Lemma 3.2, u* is absolutely continuous on [t, |Ω| − ε] for each t, ε > 0 such that 0 < t < |Ω| − ε. Therefore
$$u^*(t) - u^*(|\Omega| - \varepsilon) = \int_t^{|\Omega|-\varepsilon} \left( -\frac{du^*}{ds} \right) ds.$$
Because of (3.8), we have, for t ∈ (0, |Ω|),
$$u^*(t) = \int_t^{|\Omega|} \left( -\frac{du^*}{ds}(s) \right) ds = \int_t^{|\Omega|} \left( -s^{1-\frac1n}\, \frac{du^*}{ds} \right) s^{\frac1n-1}\, ds.$$
Thus, given (3.7) and the rearrangement-invariance of ‖·‖_D,
$$\|u^*\|_R \le \left\| \int_t^{|\Omega|} \left( -s^{1-\frac1n}\, \frac{du^*}{ds} \right) s^{\frac1n-1}\, ds \right\|_R \le K \left\| \left( -s^{1-\frac1n}\, \frac{du^*}{ds} \right)^{\!*} \right\|_D.$$
From Lemma 3.2 and Theorem 1.20,
$$K \left\| \left( -s^{1-\frac1n}\, \frac{du^*}{ds} \right)^{\!*} \right\|_D \le \frac{K}{n\gamma_n^{1/n}}\, \|\, |\nabla u|^*\, \|_D,$$
and we conclude
$$\|u^*\|_R \le \frac{K}{n\gamma_n^{1/n}}\, \|\, |\nabla u|^*\, \|_D.$$
Corollary 3.4. Let Ω be a bounded domain in R^n, n > 1, and let f be a continuous function on Ω having weak first-order derivatives on Ω with
$$\lim_{t\to|\Omega|-} f^*(t) = 0.$$
If f ∈ W¹L^{n,1}(Ω), then
$$\|f\|_{\infty,\Omega} \le \frac{1}{n\gamma_n^{1/n}}\, \|\, |\nabla f|\, \|_{n,1,\Omega}, \tag{3.9}$$
and if f ∈ W¹L^{\frac n2,1}(Ω), then
$$\|f\|_{n,1,\Omega} \le \frac{1}{\gamma_n^{1/n}}\, \|\, |\nabla f|\, \|_{\frac n2,1,\Omega}. \tag{3.10}$$
Moreover,
$$\|f\|_{n',1,\Omega} \le \frac{n}{n-1} \cdot \frac{1}{n\gamma_n^{1/n}}\, \|\, |\nabla f|\, \|_{1,\Omega}, \tag{3.11}$$
provided that f ∈ W¹L¹(Ω).
Proof. To prove inequality (3.9) we will use Theorem 3.3 with the following set-up: ‖·‖_R = ‖·‖_{∞,(0,|Ω|)} and ‖·‖_D = ‖·‖_{n,1,(0,|Ω|)}. We have
$$\left\| \int_t^{|\Omega|} h(s)\, s^{\frac1n-1}\, ds \right\|_{\infty,(0,|\Omega|)} \le \int_0^{|\Omega|} h(s)\, s^{\frac1n-1}\, ds = \|h\|_{n,1,(0,|\Omega|)}$$
for h ∈ M(0, |Ω|); therefore the assumption of Theorem 3.3 holds with the constant K = 1.
The proof of the second inequality is similar. We set ‖·‖_R = ‖·‖_{n,1,(0,|Ω|)} and ‖·‖_D = ‖·‖_{\frac n2,1,(0,|Ω|)}, which gives us
$$\left\| \int_t^{|\Omega|} h^*(s)\, s^{\frac1n-1}\, ds \right\|_{n,1,(0,|\Omega|)} = \int_0^{|\Omega|} \left( \int_t^{|\Omega|} h^*(s)\, s^{\frac1n-1}\, ds \right) t^{\frac1n-1}\, dt.$$
Using Fubini's theorem we obtain
$$\left\| \int_t^{|\Omega|} h^*(s)\, s^{\frac1n-1}\, ds \right\|_{n,1,(0,|\Omega|)} = \int_0^{|\Omega|} h^*(s)\, s^{\frac1n-1} \left( \int_0^{s} t^{\frac1n-1}\, dt \right) ds = n \int_0^{|\Omega|} h^*(s)\, s^{\frac1n-1}\, s^{\frac1n}\, ds = n\, \|h\|_{\frac n2,1,(0,|\Omega|)}$$
for h ∈ M(0, |Ω|). We have verified the assumption of Theorem 3.3 with the corresponding choice of ‖·‖_R and ‖·‖_D, which proves
$$\|f\|_{n,1,\Omega} \le \frac{1}{\gamma_n^{1/n}}\, \|\, |\nabla f|\, \|_{\frac n2,1,\Omega}. \tag{3.12}$$
Inequality (3.11) is proved in the same way. We invoke Theorem 3.3 and check its assumption. We have ‖·‖_R = ‖·‖_{n',1} and ‖·‖_D = ‖·‖_1, thus
$$\left\| \int_t^{|\Omega|} f(s)\, s^{\frac1n-1}\, ds \right\|_{n',1} = \int_0^{|\Omega|} \left( \int_t^{|\Omega|} f(s)\, s^{\frac1n-1}\, ds \right) t^{\frac{1}{n'}-1}\, dt = \int_0^{|\Omega|} \int_t^{|\Omega|} f(s)\, s^{\frac1n-1}\, t^{-\frac1n}\, ds\, dt.$$
Fubini's theorem yields
$$\left\| \int_t^{|\Omega|} f(s)\, s^{\frac1n-1}\, ds \right\|_{n',1} = \int_0^{|\Omega|} f(s)\, s^{\frac1n-1} \left( \int_0^s t^{-\frac1n}\, dt \right) ds = \frac{n}{n-1} \int_0^{|\Omega|} f(s)\, ds = \frac{n}{n-1}\, \|f\|_1.$$
Thus, in this case, K = n/(n−1).
Lemma 3.5. Let T be a simplex in R^n and f ∈ C(T) such that there exists a point x ∈ T where f vanishes. Then the function f satisfies
$$\lim_{t\to|T|-} f^*(t) = 0.$$
Proof. Let ε > 0. Since f is continuous, there is a neighbourhood B (in T) of x such that |f| < ε on B. Set δ := |B|. Then for any t ∈ (|T| − δ, |T|] it holds that
$$f^*(t) := \inf\{\lambda \ge 0 : \mu_f(\lambda) \le t\} \le \varepsilon,$$
because
$$\mu_f(\varepsilon) = |\{|f| > \varepsilon\}| \le |T \setminus B| = |T| - \delta < t.$$
The rest of this section consists of a few technical results.
Lemma 3.6. Consider a simplex T ⊂ R^n and a function f ∈ C(T) having weak derivatives up to order two on T. Suppose that A is a referencing affine transformation onto T from the referencing simplex T'. Then for the function
$$g(x) := (f \circ A)(x), \quad x \in T',$$
and the linear function
$$l_d(x) = g(V_0) + \sum_{r=1}^{n} \left( g(V_r) - g(V_0) \right) \frac{x_r}{d}, \quad x = (x_1, \ldots, x_n) \in T',$$
it holds that
$$|g_{ij}| \le \alpha \left[ \sum_{k,l=1}^{n} (f_{kl} \circ A)^2 \right]^{\frac12}, \tag{3.13}$$
where
$$\alpha = \max_{j,k\in\{1,\ldots,n\}} \left[ \sum_{l=1}^{n} (a_{lj})^2 \right]^{\frac12} \left[ \sum_{m=1}^{n} (a_{mk})^2 \right]^{\frac12}.$$
Proof. We have
$$g_i = \frac{\partial g}{\partial x_i} = \frac{\partial}{\partial x_i}(f \circ A) = \sum_{k=1}^{n} (f_k \circ A)\, a_{ki},$$
$$g_{ij} = \frac{\partial}{\partial x_j} \sum_{k=1}^{n} (f_k \circ A)\, a_{ki} = \sum_{k=1}^{n} \sum_{l=1}^{n} (f_{kl} \circ A)\, a_{ki}\, a_{lj}.$$
Applying the Hölder inequality to the sum over the index l we obtain
$$|g_{ij}(x)| = \left| \sum_{k=1}^{n} \left[ \sum_{l=1}^{n} (f_{kl}\circ A)(x)\, a_{lj} \right] a_{ki} \right| \le \left[ \sum_{k=1}^{n} \left( \sum_{l=1}^{n} (f_{kl}\circ A)(x)\, a_{lj} \right)^{2} \right]^{\frac12} \left[ \sum_{k=1}^{n} (a_{ki})^2 \right]^{\frac12}.$$
Using the Hölder inequality again one gets the rest of the proof:
$$|g_{ij}(x)| \le \left[ \sum_{k=1}^{n} \left( \sum_{l=1}^{n} (f_{kl}\circ A)^2(x) \right) \left( \sum_{l=1}^{n} (a_{lj})^2 \right) \right]^{\frac12} \left[ \sum_{k=1}^{n} (a_{ki})^2 \right]^{\frac12} = \left[ \sum_{l=1}^{n} (a_{lj})^2 \right]^{\frac12} \left[ \sum_{k=1}^{n} (a_{ki})^2 \right]^{\frac12} \left[ \sum_{k,l=1}^{n} (f_{kl}\circ A)^2(x) \right]^{\frac12} \le \alpha \left[ \sum_{k,l=1}^{n} (f_{kl}\circ A)^2(x) \right]^{\frac12}.$$
Lemma 3.7. Consider a simplex T ⊂ R^n and a function f ∈ C(T) having weak derivatives up to order two on T. Suppose that A is a referencing affine transformation onto T from the referencing simplex T'. Then for the function
$$g(x) := (f \circ A)(x), \quad x \in T',$$
and the linear function
$$l_d(x) = g(V_0) + \sum_{r=1}^{n} \left( g(V_r) - g(V_0) \right) \frac{x_r}{d}, \quad x = (x_1,\ldots,x_n) \in T',$$
we have
$$\big| \nabla \left( |\nabla (g - l_d)| \right) \big| \le n^2 \alpha \max_{i,j\in\{1,\ldots,n\}} |f_{ij}\circ A|_{0,\infty,T'},$$
where
$$\alpha = \max_{k,l\in\{1,\ldots,n\}} \left[ \sum_{r=1}^{n} a_{rk}^2 \right]^{\frac12} \left[ \sum_{s=1}^{n} a_{sl}^2 \right]^{\frac12},$$
and a_{ij} denotes the ij-th element of the transformation matrix of A. In addition,
$$\sum_{k=1}^{n} \left| \nabla \frac{\partial}{\partial x_k} (g - l_d) \right| \le n^{2+\frac12} \alpha \max_{i,j\in\{1,\ldots,n\}} \| f_{ij}\circ A \|_{\infty,T'}.$$
Proof. We have
$$\frac{\partial}{\partial x_i}(g - l_d) = g_i - \frac{g(V_i) - g(V_0)}{d}. \tag{3.14}$$
Let us denote (g(V_i) − g(V_0))/d by Δ_i g; then we have
$$\frac{\partial}{\partial x_j} \left( |\nabla(g - l_d)| \right) = \frac{\partial}{\partial x_j} \left[ \sum_{i=1}^{n} (g_i - \Delta_i g)^2 \right]^{\frac12} = \frac12 \left[ \sum_{i=1}^{n} (g_i - \Delta_i g)^2 \right]^{-\frac12} \cdot 2 \sum_{i=1}^{n} (g_i - \Delta_i g)\, g_{ij}.$$
If we apply the Hölder inequality to the second sum in the last expression, we get
$$\left| \frac{\partial}{\partial x_j} \left( |\nabla(g - l_d)| \right) \right| \le \left[ \sum_{i=1}^{n} (g_i - \Delta_i g)^2 \right]^{-\frac12} \left[ \sum_{i=1}^{n} (g_i - \Delta_i g)^2 \right]^{\frac12} \left[ \sum_{i=1}^{n} g_{ij}^2 \right]^{\frac12} = \left[ \sum_{i=1}^{n} g_{ij}^2 \right]^{\frac12}.$$
Together we have
$$\big| \nabla\left( |\nabla(g - l_d)| \right) \big|^2 = \sum_{j=1}^{n} \left( \frac{\partial}{\partial x_j} \left( |\nabla(g - l_d)| \right) \right)^{2} \le \sum_{j=1}^{n} \sum_{i=1}^{n} g_{ij}^2.$$
The estimate for g_{ij} from Lemma 3.6 yields
$$\big| \nabla\left( |\nabla(g - l_d)| \right) \big|^2 \le \sum_{i,j=1}^{n} \alpha^2 \sum_{k,l=1}^{n} (f_{kl}\circ A)^2.$$
If we want our estimate to be independent of the indices i and j, we get
$$|g_{ij}| \le \alpha\, n \max_{k,l\in\{1,\ldots,n\}} |f_{kl}\circ A|_{0,\infty,T'}. \tag{3.15}$$
Combining the previous results, we have
$$\big| \nabla\left( |\nabla(g - l_d)| \right) \big| \le \left[ \sum_{i,j=1}^{n} g_{ij}^2 \right]^{\frac12} \le \left[ \sum_{i,j=1}^{n} \left( n\alpha \max_{k,l\in\{1,\ldots,n\}} |f_{kl}\circ A|_{0,\infty,T'} \right)^{2} \right]^{\frac12} = n^2 \alpha \max_{k,l\in\{1,\ldots,n\}} |f_{kl}\circ A|_{0,\infty,T'},$$
which completes the proof of the first part of the claim. From (3.14) it follows that
$$\left| \nabla \frac{\partial}{\partial x_k}(g - l_d) \right|^{2} = \sum_{i=1}^{n} \left( \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_k} (g - l_d) \right)^{2} = \sum_{i=1}^{n} (g_{ik})^2,$$
hence
$$\sum_{k=1}^{n} \left| \nabla \frac{\partial}{\partial x_k}(g - l_d) \right| = \sum_{k=1}^{n} \left[ \sum_{i=1}^{n} (g_{ik})^2 \right]^{\frac12} \le \sqrt{n} \sum_{k=1}^{n} \max_{i\in\{1,\ldots,n\}} |g_{ik}| \le n^{1+\frac12} \max_{i,j\in\{1,\ldots,n\}} |g_{ij}| \le n^{2+\frac12} \alpha \max_{k,l\in\{1,\ldots,n\}} |f_{kl}\circ A|_{0,\infty,T'},$$
where the last inequality is a consequence of inequality (3.15).
Remark 3.8. Let us note that if $A$ is a referencing affine transformation which maps a referencing simplex $T' \subset \mathbb{R}^n$ onto a simplex $T \subset \mathbb{R}^n$, $f \colon T \to \mathbb{R}$, $g = f \circ A$ and
\[
l_d(x) = g(V_0) + \sum_{r=1}^{n} \bigl(g(V_r) - g(V_0)\bigr) \frac{x_r}{d}, \quad x = (x_1, \dots, x_n) \in T',
\]
then the linear function $l_d$ agrees with $g$ on the vertices of $T'$. Consequently, having defined a linear Lagrange finite element on $T'$, it holds that
\[
l_d = \Pi_{T'}(g).
\]
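For illustration only, a small Java sketch of the evaluation of $l_d$ on the reference simplex follows (the class and method names are hypothetical and are not taken from the program of Appendix A); it simply implements the formula above from the values of $g$ at the vertices $V_0, \dots, V_n$.

public final class ReferenceInterpolant {

    // Evaluate l_d(x) = g(V_0) + sum_r (g(V_r) - g(V_0)) * x_r / d on the
    // reference simplex with vertices V_0 = 0 and V_r = d * e_r.
    // gAtVertices[0] = g(V_0), gAtVertices[r] = g(V_r) for r = 1..n.
    static double evaluate(double[] gAtVertices, double d, double[] x) {
        double value = gAtVertices[0];
        for (int r = 1; r < gAtVertices.length; r++) {
            value += (gAtVertices[r] - gAtVertices[0]) * x[r - 1] / d;
        }
        return value;
    }

    public static void main(String[] args) {
        // n = 2, d = 1: vertices (0,0), (1,0), (0,1) with values 1, 3, 2.
        double[] g = { 1.0, 3.0, 2.0 };
        // At the vertex (1,0) the interpolant reproduces g(V_1) = 3.
        System.out.println(evaluate(g, 1.0, new double[] { 1.0, 0.0 }));
    }
}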
Lemma 3.9. Let $\Omega \subset \mathbb{R}^n$ be measurable and let $f \in C(\Omega)$ have weak derivatives of the first order. Then
\[
\int_0^{|\Omega|} |\nabla f|^*(t)\, t^{\frac1n - 1}\, dt
\le n^{\frac1n + \frac12} \sum_{k=1}^{n} \int_0^{|\Omega|} \left( \frac{\partial}{\partial x_k} f \right)^{\!*}(t)\, t^{\frac1n - 1}\, dt.
\tag{3.16}
\]
Proof. Using basic properties of the rearrangement, we get
\[
\int_0^{|\Omega|} |\nabla f|^*(t)\, t^{\frac1n - 1}\, dt
= \int_0^{|\Omega|} \left( \sqrt{ \sum_{k=1}^{n} \left( \frac{\partial}{\partial x_k} f \right)^{2} } \right)^{\!*}(t)\, t^{\frac1n - 1}\, dt
\le \int_0^{|\Omega|} \left( \sqrt{ \sum_{k=1}^{n} \left( \frac{\partial}{\partial x_k} f \right)^{2} } \right)^{\!*}\!\left( \frac{t}{n} \right) t^{\frac1n - 1}\, dt.
\]
Simple adjustments yield
\[
\int_0^{|\Omega|} |\nabla f|^*(t)\, t^{\frac1n - 1}\, dt
\le \int_0^{|\Omega|} \sqrt{ n \max_{k \in \{1,\dots,n\}} \left( \left( \frac{\partial}{\partial x_k} f \right)^{2} \right)^{\!*}\!\left( \frac{t}{n} \right) }\; t^{\frac1n - 1}\, dt
\le \sqrt{n} \int_0^{|\Omega|} t^{\frac1n - 1} \max_{k \in \{1,\dots,n\}} \sqrt{ \left( \left( \frac{\partial}{\partial x_k} f \right)^{2} \right)^{\!*}\!\left( \frac{t}{n} \right) }\, dt
\le \sqrt{n} \int_0^{|\Omega|} t^{\frac1n - 1} \max_{k \in \{1,\dots,n\}} \left( \frac{\partial}{\partial x_k} f \right)^{\!*}\!\left( \frac{t}{n} \right) dt.
\]
We use the change of variables $y = \frac{t}{n}$; consequently we have $dt = n\, dy$, $t^{\frac1n - 1} = n^{\frac1n - 1} y^{\frac1n - 1}$, and we obtain
\[
\int_0^{|\Omega|} |\nabla f|^*(t)\, t^{\frac1n - 1}\, dt
\le \sqrt{n}\, n^{\frac1n - 1} \int_0^{\frac{|\Omega|}{n}} y^{\frac1n - 1} \max_{k \in \{1,\dots,n\}} \left( \frac{\partial}{\partial x_k} f \right)^{\!*}(y)\, n\, dy
\le n^{\frac1n} \sqrt{n} \int_0^{\frac{|\Omega|}{n}} y^{\frac1n - 1} \sum_{k=1}^{n} \left( \frac{\partial}{\partial x_k} f \right)^{\!*}(y)\, dy
\le n^{\frac1n} \sqrt{n} \sum_{k=1}^{n} \int_0^{|\Omega|} \left( \frac{\partial}{\partial x_k} f \right)^{\!*}(y)\, y^{\frac1n - 1}\, dy.
\]
We shall also need the following classical imbedding theorem of Gagliardo, Nirenberg and Sobolev. It can be found in [1].
Theorem 3.10 (Gagliardo-Nirenberg-Sobolev). Let us assume that $u$ is a continuously differentiable real-valued function on $\mathbb{R}^n$ with compact support. Then for $1 \le p < n$ there exists a constant $C$ depending only on $n$ and $p$ such that
\[
\|u\|_{p^*} \le C\, \|\nabla u\|_{p},
\tag{3.17}
\]
where
\[
p^* = \frac{pn}{n-p} > p
\]
is the Sobolev conjugate to $p$. Such a constant is
\[
C = \frac{p(n-1)}{n-p},
\]
but this may not be the best constant.
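For instance, in the case $n = 3$, $p = 2$ (a routine computation added here only as an illustration), the statement gives
\[
p^* = \frac{pn}{n-p} = \frac{2 \cdot 3}{3-2} = 6,
\qquad
C = \frac{p(n-1)}{n-p} = \frac{2 \cdot 2}{3-2} = 4,
\]
so that $\|u\|_{6} \le 4\, \|\nabla u\|_{2}$ for every compactly supported $u \in C^1(\mathbb{R}^3)$.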
3.3 Error estimates
Theorem 3.11. Consider a polyhedron $P$ in $\mathbb{R}^n$. Assume that $f \in C^1(P)$ with
\[
\int_0^{|P|} |\nabla f|^*(t)\, t^{\frac1n - 1}\, dt < \infty.
\]
Let $P$ be a linear Lagrange finite element domain with triangulation $T = \{T_i\}_{i \in I}$. Let $A_i = \bigl( a_{jk}(i) \bigr)$ be a matrix representing a referencing affine transformation onto $T_i$ from the referencing simplex $T_{d_i}$. Then, given the linear spline $s = \Pi_T(f)$, one has
\[
\|f^* - s^*\|_{\infty,P} \le \frac{1}{n!^{\frac1n}\, \gamma_n^{\frac1n}} \max_{i \in I} d_i\, |\nabla((f \circ A_i) - l_{d_i})|_{0,\infty,T_{d_i}},
\tag{3.18}
\]
and if $f$ has weak derivatives up to the second order and in addition it holds that
\[
\int_0^{|P|} |D^2 f|^*(t)\, t^{\frac2n - 1}\, dt < \infty,
\]
then
\[
\|f^* - s^*\|_{\infty,P} \le \frac{n^{\frac1n + 3}}{2^{\frac2n}\, n!^{\frac2n}\, \gamma_n^{\frac2n}}\, |f|_{2,\infty,P} \max_{i \in I} d_i^2 \alpha_i,
\tag{3.19}
\]
in which $\alpha_i$ denotes
\[
\alpha_i = \max_{k,l \in \{1,\dots,n\}} \left[ \sum_{r=1}^{n} \bigl( a_{rk}(i) \bigr)^2 \right]^{\frac12} \left[ \sum_{s=1}^{n} \bigl( a_{sl}(i) \bigr)^2 \right]^{\frac12},
\]
$d_i$ is the diameter of the simplex $T_i$, and $a_{sr}(i)$ denotes the $sr$-th element of the transformation matrix of the affine transformation $A_i$. Moreover, it holds that
\[
\|f^* - s^*\|_{1,P} \le \frac{n^{\frac1n + 3}}{n!^{1 + \frac2n}\, 2^{\frac2n}\, \gamma_n^{\frac2n}}\, |f|_{2,\infty,P} \sum_{i \in I} \alpha_i d_i^{n+2}.
\]
Proof. We fix an index $i \in I$ and denote $T = T_i$, $T' = T_{d_i}$ and $A = A_i$. The function $s$ restricted to $T$ will be denoted by $l$, and we set $l_d = l \circ A$. First, we derive an error estimate on the simplex $T$. From the fact that $g - l_d$ vanishes at the vertices of $T'$ and from Corollary 3.4 it follows, with $g = f \circ A$, that
\[
|f - l|_{0,\infty,T} = |g - l_d|_{0,\infty,T'} \le \frac{1}{n \gamma_n^{\frac1n}} \int_0^{|T'|} |\nabla(g - l_d)|^*(t)\, t^{\frac1n - 1}\, dt.
\tag{3.20}
\]
Hölder's inequality now yields
\[
|f - l|_{0,\infty,T} \le \frac{1}{n \gamma_n^{\frac1n}}\, |\nabla(g - l_d)|_{0,\infty,T'} \int_0^{|T'|} t^{\frac1n - 1}\, dt
\le \frac{|T'|^{\frac1n}}{\gamma_n^{\frac1n}}\, |\nabla(g - l_d)|_{0,\infty,T'}.
\]
Using the formula for the volume of a simplex one obtains
\[
|f - l|_{0,\infty,T} \le \frac{1}{n!^{\frac1n} \gamma_n^{\frac1n}}\, d\, |\nabla(g - l_d)|_{0,\infty,T'}.
\]
Hence, we have
\[
|f - l|_{0,\infty,T} \le \frac{1}{n!^{\frac1n} \gamma_n^{\frac1n}} \max_{i \in I} d_i\, |\nabla((f \circ A_i) - l_{d_i})|_{0,\infty,T_{d_i}},
\]
which completes the proof of (3.18). Now, let us go back to inequality (3.20). With the aid of Lemma 3.9 we get
\[
|f - l|_{0,\infty,T} \le \frac{1}{n \gamma_n^{\frac1n}} \int_0^{|T'|} |\nabla(g - l_d)|^*(t)\, t^{\frac1n - 1}\, dt
\le \frac{n^{\frac1n - \frac12}}{\gamma_n^{\frac1n}} \sum_{k=1}^{n} \int_0^{|T'|} \left( \frac{\partial}{\partial x_k} (g - l_d) \right)^{\!*}(t)\, t^{\frac1n - 1}\, dt.
\]
Using Rolle's theorem we get that for $i = 1, \dots, n$ there exists a point $z_i$ on the edge of $T'$ between the vertices $V_0$ and $V_i$ such that $\frac{\partial}{\partial x_i}(g - l_d)(z_i) = 0$, because $g - l_d$ vanishes at the vertices of $T'$. Using this fact and the second inequality from Corollary 3.4 on all elements of the sum, we obtain
\[
|f - l|_{0,\infty,T} \le \frac{n^{\frac1n - \frac12}}{\gamma_n^{\frac2n}} \sum_{k=1}^{n} \int_0^{|T'|} \left| \nabla \frac{\partial}{\partial x_k} (g - l_d) \right|^{*}(t)\, t^{\frac2n - 1}\, dt.
\]
Invoking the estimate of $\sum_{k=1}^{n} \bigl| \nabla \frac{\partial}{\partial x_k} (g - l_d) \bigr|$ from Lemma 3.7, we have
\[
|f - l|_{0,\infty,T} \le \frac{n^{\frac1n - \frac12}}{\gamma_n^{\frac2n}}\, n^{2 + \frac12} \alpha \max_{i,j \in \{1,\dots,n\}} |f_{ij} \circ A|_{0,\infty,T'} \int_0^{|T'|} t^{\frac2n - 1}\, dt
\le \frac{n^{\frac1n + 3}}{2 \gamma_n^{\frac2n}}\, |T'|^{\frac2n} \alpha \max_{i,j \in \{1,\dots,n\}} |f_{ij} \circ A|_{0,\infty,T'}.
\]
The formula for the volume of a simplex yields
\[
|f - l|_{0,\infty,T} \le \frac{n^{\frac1n + 3}}{2 \gamma_n^{\frac2n}} \left( \frac{d^n}{n!} \right)^{\frac2n} \alpha \max_{i,j \in \{1,\dots,n\}} |f_{ij} \circ A|_{0,\infty,T'}
\le \frac{n^{\frac1n + 3}}{2\, n!^{\frac2n} \gamma_n^{\frac2n}}\, d^2 \alpha \max_{i,j \in \{1,\dots,n\}} |f_{ij} \circ A|_{0,\infty,T'}
\le \frac{n^{\frac1n + 3}}{2^{\frac2n}\, n!^{\frac2n} \gamma_n^{\frac2n}}\, d^2 \alpha \max_{i,j \in \{1,\dots,n\}} |f_{ij}|_{0,\infty,T},
\]
where $d$ denotes $\operatorname{diam}(T)$.
Finally, we can combine the estimates of the error on each simplex in the decomposition and get
\[
|f^* - s^*|_{0,\infty,P} \le \max_{i \in I} |f - l_i|_{0,\infty,T_i}
\le \frac{n^{\frac1n + 3}}{2^{\frac2n}\, n!^{\frac2n} \gamma_n^{\frac2n}} \max_{i \in I} d_i^2 \alpha_i \max_{k,j \in \{1,\dots,n\}} |f_{kj}|_{0,\infty,T_i}
\le \frac{n^{\frac1n + 3}}{2^{\frac2n}\, n!^{\frac2n} \gamma_n^{\frac2n}}\, |f|_{2,\infty,P} \max_{i \in I} d_i^2 \alpha_i,
\]
which is inequality (3.19). It remains to prove the estimate of $\|f^* - s^*\|_{1,P}$. We have
\[
|f^* - s^*|_{0,1,P} = \sum_{i \in I} \int_{T_i} |f^*(t) - s^*(t)|\, dt \le \sum_{i \in I} |T_i|\, |f^* - s^*|_{0,\infty,T_i}.
\]
We can use the estimate for $|f^* - s^*|_{0,\infty,T_i}$ obtained above and we get
\[
|f^* - s^*|_{0,1,P} \le \sum_{i \in I} |T_i|\, |f^* - s^*|_{0,\infty,T_i}
\le \frac{n^{\frac1n + 3}}{2^{\frac2n}\, n!^{\frac2n} \gamma_n^{\frac2n}} \sum_{i \in I} |T_i|\, \alpha_i d_i^2\, |f|_{2,\infty,T_i}.
\]
We use the formula for the volume of a simplex and our proof is completed:
\[
|f^* - s^*|_{0,1,P} \le \frac{n^{\frac1n + 3}}{n!^{1 + \frac2n}\, 2^{\frac2n} \gamma_n^{\frac2n}} \sum_{i \in I} \alpha_i d_i^{n+2}\, |f|_{2,\infty,T_i}
\le \frac{n^{\frac1n + 3}}{n!^{1 + \frac2n}\, 2^{\frac2n} \gamma_n^{\frac2n}}\, |f|_{2,\infty,P} \sum_{i \in I} \alpha_i d_i^{n+2}.
\]
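To indicate how the bound (3.19) would be evaluated in practice, the following Java sketch computes its right-hand side from the quantities appearing in the theorem; it is merely an illustration under the assumption that the constant $\gamma_n$, the seminorm $|f|_{2,\infty,P}$ and $\max_{i} d_i^2 \alpha_i$ have already been determined and are supplied as inputs (the numbers in the example are arbitrary placeholders).

public final class AprioriBound {

    static double factorial(int n) {
        double f = 1.0;
        for (int k = 2; k <= n; k++) f *= k;
        return f;
    }

    // Right-hand side of (3.19):
    //   n^(1/n + 3) / (2^(2/n) n!^(2/n) gamma_n^(2/n)) * |f|_{2,inf,P} * max_i d_i^2 alpha_i.
    // gammaN is the constant gamma_n of the thesis, passed in as a parameter.
    static double bound(int n, double gammaN, double seminormF, double maxDiSquaredAlphaI) {
        double constant = Math.pow(n, 1.0 / n + 3.0)
                / (Math.pow(2.0, 2.0 / n) * Math.pow(factorial(n), 2.0 / n)
                   * Math.pow(gammaN, 2.0 / n));
        return constant * seminormF * maxDiSquaredAlphaI;
    }

    public static void main(String[] args) {
        // Placeholder values only: n = 2, gamma_2 = 1, |f|_{2,inf,P} = 1, max_i d_i^2 alpha_i = 0.01.
        System.out.println(bound(2, 1.0, 1.0, 0.01));
    }
}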
Theorem 3.12. Consider a polytope $P \subset \mathbb{R}^n$, $n > 1$. Assume that $f \in C(P)$ has weak first-order derivatives, with
\[
\int_P |\nabla f| < \infty.
\]
Suppose $P$ is a simplex linear Lagrange finite element domain with simplex triangulation $T = \{T_i\}_{i \in I}$. Let $A_i = \bigl( a_{jk}(i) \bigr)$ be a transformation matrix of an affine transformation onto $T_i$ from the $n$-simplex $T_{d_i}$ with vertices $V_0 = (0, \dots, 0)$, $V_1 = (d_i, 0, \dots, 0)$, \dots, $V_n = (0, \dots, 0, d_i)$, where $d_i$ is the diameter of $T_i$, $i \in I$.
Then, for the linear spline $s = \Pi_P(f)$, one has
\[
\|f^* - s^*\|_1 \le \frac{n}{(n-1) \gamma_n^{\frac1n}} \sum_{i \in I} |T_i|^{\frac1n} \int_{T_i} |\nabla(f - s)|.
\tag{3.21}
\]
If, further, $f \in C^1(P)$ has weak second-order derivatives, with
\[
\int_P |D^2 f| < \infty,
\]
then
\[
\|f^* - s^*\|_1 \le (n-1)^2 n^{\frac12} \sum_{i \in I} \alpha_i |T_i|^{\frac2n} \int_{T_i} |D^2 f|;
\tag{3.22}
\]
here,
\[
\alpha_i = \max_{j,k \in \{1,\dots,n\}} \left[ \sum_{l=1}^{n} \bigl( a_{lj}(i) \bigr)^2 \right] \left[ \sum_{m=1}^{n} \bigl( a_{mk}(i) \bigr)^2 \right].
\]
Proof. Fix $i_0 \in I$ and define $g$ on $T_{d_{i_0}}$ by
\[
g(x) = (f \circ A_{i_0})(x), \quad x \in T_{d_{i_0}}.
\]
Given the linear function $l_{T_{i_0}}$ interpolating $f$ at the vertices of $T_{i_0}$, one has that
\[
l_{T_{d_{i_0}}} := l_{T_{i_0}} \circ A_{i_0}
\]
is the linear function doing the same for $g$ on $T_{d_{i_0}}$; in fact,
\[
l_{d_{i_0}}(x) = g(V_0) + \sum_{r=1}^{n} \bigl( g(V_r) - g(V_0) \bigr) \frac{x_r}{d_{i_0}}, \quad x = (x_1, \dots, x_n).
\]
Using Corollary 1.13 and the general Hölder inequality for the associated Lorentz spaces $L^{n',1}$ and $L^{n,\infty}$, we get
\[
\|f^* - s^*\|_1 \le \int_P |f - s| = \sum_{i \in I} \int_{T_i} |f - s|
\le \sum_{i \in I} \|1\|_{n,\infty,T_i}\, \|f - s\|_{n',1,T_i}
\le \sum_{i \in I} |T_i|^{\frac1n}\, \|f - s\|_{n',1,T_i}.
\]
The imbedding inequality (3.11) yields
\[
\|f^* - s^*\|_1 \le \frac{n}{(n-1) \gamma_n^{\frac1n}} \sum_{i \in I} |T_i|^{\frac1n} \int_{T_i} |\nabla(f - s)|,
\]
which is (3.21).
Next, from Hölder's inequality and Theorem 3.10, again, we have
\[
\int_{T_{i_0}} |f - s| = \int_{T_{i_0}} |f - l_{T_{i_0}}| = \int_{T_{d_{i_0}}} |g - l_{T_{d_{i_0}}}|\, |\det A_{i_0}|
\le |T_{d_{i_0}}|^{\frac2n}\, \|g - l_{T_{d_{i_0}}}\|_{(n')^*, T_{d_{i_0}}}\, |\det A_{i_0}|
\le (n-1)^2\, |T_{d_{i_0}}|^{\frac2n}\, \|\nabla(g - l_{T_{d_{i_0}}})\|_{n', T_{d_{i_0}}}\, |\det A_{i_0}|
\le (n-1)^2 n^{\frac12}\, |T_{d_{i_0}}|^{\frac2n} \sum_{l=1}^{n} \left\| g_l - \frac{g(V_l) - g(V_0)}{d_{i_0}} \right\|_{n', T_{d_{i_0}}} |\det A_{i_0}|.
\]
Now, we may invoke the imbedding inequality of Theorem 3.10. This yields
\[
\int_{T_{i_0}} |f - s| \le (n-1)^2 n^{\frac12}\, |T_{d_{i_0}}|^{\frac2n} \sum_{k=1}^{n} \int_{T_{d_{i_0}}} \left| \nabla \left( g_k - \frac{g(V_k) - g(V_0)}{d_{i_0}} \right) \right| |\det A_{i_0}|
\le (n-1)^2 n^{\frac12}\, |T_{d_{i_0}}|^{\frac2n} \int_{T_{d_{i_0}}} \sum_{l=1}^{n} \sum_{k=1}^{n} |g_{lk}|\, |\det A_{i_0}|.
\]
Altogether, with the aid of Lemma 3.6, then,
\[
\|f - s\|_{1,P} \le \sum_{i \in I} \int_{T_i} |f - s|
\le (n-1)^2 n^{\frac12} \sum_{i \in I} \alpha_i\, |T_{d_i}|^{\frac2n} \int_{T_{d_i}} |(D^2 f) \circ A_i|\, |\det A_i|
\le (n-1)^2 n^{\frac12} \sum_{i \in I} \alpha_i\, |T_{d_i}|^{\frac2n} \int_{T_i} |D^2 f|.
\]
Conclusion
Our main goal was to develop a new algorithm for the approximation of the non-increasing rearrangement of a function. The algorithm can be applied to functions with a convex polyhedral domain in $\mathbb{R}^1$, $\mathbb{R}^2$ and $\mathbb{R}^3$. The new algorithm yields better estimates than the older one. Further, we present several error estimates for our method. The first estimate depends mostly on results from finite element theory, while the other estimates rely on embedding inequalities. The algorithm can be generalized to higher dimensions; this can be a subject of further research.
A. Implementation notes
The described algorithm for the approximation of a non-increasing rearrangement of a function was implemented for the case n = 2, and for the case n = 1 with clamped cubic splines. The aim of this chapter is to give an overview of some details of this implementation. Though the algorithm is rather straightforward, there are a few issues which were left out in the description of the algorithms in Chapter 2. But first we should note some general facts about the implementation.
The program is written in Java and uses an arbitrary-precision floating-point number format; therefore the only limitation on the precision of the calculation is the capability of the computer which executes it. Operations with arbitrary-precision numbers are provided by the library Apfloat. Plotting of the graphic output is done with the aid of the libraries JCommon and JFreeChart.
The first issue which needs more attention is how to create a triangulation which determines the piecewise linear approximation. This is done by a two-step algorithm described in the following. The input data for our algorithm are a convex domain and a bound for the area of the triangles. The desired output is a triangulation of the given domain such that the area of no triangle in it exceeds the given bound.
The first step of the algorithm divides the domain into initial triangles. This is done by adding a central point, computed as the arithmetic average of the vertices of the domain, and then creating a triangulation which consists of triangles with one vertex being the central point and the other two being the endpoints of an edge of the domain. The second step of the algorithm, which consists of several iterations, is then applied to this initial triangulation. In each iteration step, every triangle in the triangulation is checked to see whether its area is smaller than the given parameter. If a triangle is too large, it is divided into four smaller triangles (a sketch of this refinement loop is given after the next paragraph). This iteration is repeated until all triangles have sufficiently small area.
Let us show the process of division on the case of a triangle $T$ with vertices $v_1$, $v_2$ and $v_3$. The triangle $T$ is divided into four smaller triangles $T_1$, $T_2$, $T_3$ and $T_4$. The vertices of $T_1$ are $v_1$, $v_{12}$ and $v_{13}$; the vertices of $T_2$ are $v_2$, $v_{12}$ and $v_{23}$; the vertices of $T_3$ are $v_3$, $v_{23}$ and $v_{13}$; and the vertices of the last triangle are $v_{12}$, $v_{23}$ and $v_{13}$, where $v_{ij} = \frac{v_i + v_j}{2}$. The triangle $T$ is then removed from the triangulation and the triangles $T_1$, $T_2$, $T_3$, $T_4$ are added. Each newly added triangle has one quarter of the area of $T$.
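A minimal sketch of this subdivision and of the refinement loop from the previous paragraph is given below; it is only an illustration of the procedure just described (the class names are hypothetical, and plain double precision is used instead of the Apfloat numbers of the actual program).

import java.util.ArrayList;
import java.util.List;

// Sketch of the second step of the triangulation algorithm: every triangle whose
// area exceeds the given bound is replaced by the four triangles obtained by
// joining the midpoints of its edges.
final class Triangle {
    final double[] v1, v2, v3; // vertices in R^2

    Triangle(double[] v1, double[] v2, double[] v3) {
        this.v1 = v1; this.v2 = v2; this.v3 = v3;
    }

    double area() {
        return Math.abs((v2[0] - v1[0]) * (v3[1] - v1[1])
                      - (v3[0] - v1[0]) * (v2[1] - v1[1])) / 2.0;
    }

    static double[] midpoint(double[] a, double[] b) {
        return new double[] { (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0 };
    }

    // Divide the triangle into T1, T2, T3, T4 as described above.
    List<Triangle> subdivide() {
        double[] m12 = midpoint(v1, v2);
        double[] m13 = midpoint(v1, v3);
        double[] m23 = midpoint(v2, v3);
        List<Triangle> parts = new ArrayList<>();
        parts.add(new Triangle(v1, m12, m13));   // T1
        parts.add(new Triangle(v2, m12, m23));   // T2
        parts.add(new Triangle(v3, m23, m13));   // T3
        parts.add(new Triangle(m12, m23, m13));  // T4
        return parts;
    }
}

final class Refinement {
    // Iterate until the area of every triangle is at most areaBound.
    static List<Triangle> refine(List<Triangle> triangulation, double areaBound) {
        List<Triangle> current = new ArrayList<>(triangulation);
        boolean tooLarge = true;
        while (tooLarge) {
            tooLarge = false;
            List<Triangle> next = new ArrayList<>();
            for (Triangle t : current) {
                if (t.area() > areaBound) {
                    next.addAll(t.subdivide());
                    tooLarge = true;
                } else {
                    next.add(t);
                }
            }
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        Triangle t = new Triangle(new double[] { 0, 0 }, new double[] { 1, 0 }, new double[] { 0, 1 });
        List<Triangle> fine = refine(List.of(t), 0.01);
        System.out.println("triangles: " + fine.size()); // 64 triangles of area 1/128
    }
}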
However, the triangulation described above does not satisfy condition (T5). This condition can be satisfied if we slightly change the second step of the algorithm, namely the condition for the division of a triangle in the triangulation: if there is any triangle in the triangulation whose area is too big, we divide all the triangles that were in the triangulation at the beginning of this iteration.
Figure A.1: Example of different density of nodes of a graph of a non-increasing rearrangement.
The second issue is how to plot the approximation of the rearrangement. The values of the non-increasing rearrangement are calculated through the values of the distribution function. We could divide the interval [min |f|, max |f|] equidistantly and evaluate the distribution function at the nodes of this division. Unfortunately, if we evaluate the distribution function on an equidistant division of its domain, the nodes of the graph of the rearrangement usually accumulate in some part of the domain, while in other areas the density of nodes may be too small, as is illustrated in Figure A.1. This effect is diminished by the following algorithm. The user specifies the maximal gap between two nodes of the final graph. The program then uses a finer division in the parts of the domain of the distribution function where the distribution function changes more quickly, and fewer points where this is not necessary.
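One possible form of such an adaptive division is sketched below. It is an illustration of the idea only: the actual program may measure the gap between nodes and choose the refinement differently, and the names used here are hypothetical.

import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleUnaryOperator;

// Sketch: sample the distribution function mu on [min|f|, max|f|] so that two
// consecutive nodes of the resulting graph are never farther apart than maxGap,
// measured here in the value of mu; intervals on which mu changes quickly are
// therefore divided more finely.
final class AdaptiveSampling {

    static List<double[]> sample(DoubleUnaryOperator mu, double lambdaMin, double lambdaMax,
                                 double maxGap, double minStep) {
        List<double[]> nodes = new ArrayList<>();
        double left = lambdaMin;
        double muLeft = mu.applyAsDouble(left);
        nodes.add(new double[] { left, muLeft });
        double step = (lambdaMax - lambdaMin) / 16.0; // initial step, refined below
        while (left < lambdaMax) {
            double right = Math.min(left + step, lambdaMax);
            double muRight = mu.applyAsDouble(right);
            // If the distribution function changes too much, halve the step
            // (but never below minStep, so the loop always terminates).
            if (Math.abs(muLeft - muRight) > maxGap && step / 2.0 >= minStep) {
                step /= 2.0;
                continue;
            }
            nodes.add(new double[] { right, muRight });
            left = right;
            muLeft = muRight;
            step = (lambdaMax - lambdaMin) / 16.0; // reset the step for the next node
        }
        return nodes;
    }

    public static void main(String[] args) {
        // Example: the distribution function of f(x) = x on (0, 1) is mu(lambda) = 1 - lambda.
        List<double[]> graph = sample(lambda -> 1.0 - lambda, 0.0, 1.0, 0.05, 1e-6);
        System.out.println("number of nodes: " + graph.size());
    }
}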
Bibliography
[1] Adams, R. A., and Fournier, J. J. F. Sobolev Spaces. Elsevier,
Oxford, 2007.
[2] Bennett, C., and Sharpley, R. Interpolation of Operators. Academic Press, Princeton, 1988.
[3] Brenner, S. C., and Scott, L. R. The Mathematical Theory of
Finite Element Methods. Springer-Verlag New York, Inc., New York,
1994.
[4] Chiti, G. Rearrangements of functions and convergence in Orlicz spaces.
Applicable Analysis 9 (1979), 23–27.
[5] Cianchi, A., and Pick, L. Sobolev embeddings into BMO, VMO, and
L∞ . Arkiv för Matematik 36 (1998), 433–446.
[6] Edmunds, D. E., Kerman, R., and Pick, L. Optimal Sobolev
imbeddings involving rearrangement-invariant quasinorms. Journal of
Functional Analysis 170 (2000), 307–355.
[7] Evans, L. C., and Gariepy, R. F. Measure theory and fine properties
of functions. CRC Press Inc., Boca Raton, Florida, 1992.
[8] Franců, M., Kerman, R., Sayfy, A., and Phipps, C. Finite element approximations to rearrangements. Preprint no. MATH-KMA 396
(2012), 1–20.
URL: “http://www.karlin.mff.cuni.cz/kma-preprints/”
[9] Hall, C. A., and Meyer, W. W. Optimal error bounds for cubic
spline interpolation. Journal of Approximation Theory 16 (1976), 105–
122.
[10] Lorentz, G. G., and Shimogaki, T. Interpolation theorems for
operators in function spaces. Journal of Functional Analysis 2 (1968),
31–51.
[11] Steiner, J. Gesammelte Werke. Berlin, 1881–1882.
[12] Talenti, G. Rearrangements of functions and partial differential equations. Lecture Notes in Mathematics 1224 (1986), 153–178.
[13] Zhang, S. On a nested refinement of anisotropic tetrahedral grids under Hessian metrics. Unpublished manuscript (2006).
URL: “http://www.math.udel.edu/˜szhang/research/p/3dhessian.pdf”