Chapter 1
Hilbert space techniques
The goal of this chapter is to collect the main features of Hilbert spaces and to introduce the Lax–Milgram theorem, which is a key tool for solving elliptic partial differential equations.
1.1 The projection on a closed convex set
Definition 1.1. A Hilbert space H over R is a vector space equipped with a scalar product (·, ·) which is complete for the associated norm
|u| = (u, u)^{1/2},    (1.1)
i.e. such that every Cauchy sequence admits a limit in H.
Examples.
1. R^n equipped with the Euclidean scalar product
(x, y) = x · y = Σ_{i=1}^n x_i y_i   ∀ x = (x_1, …, x_n), y = (y_1, …, y_n) ∈ R^n.
(We will prefer the notation with a dot for this scalar product.)
2. L²(A) = { v : A → R, v measurable | ∫_A v²(x) dx < +∞ } with A a measurable subset of R^n. Recall that L²(A) is in fact a set of "classes" of functions. L²(A) is a Hilbert space when equipped with the scalar product
(u, v) = ∫_A u(x)v(x) dx.    (1.2)
Remark 1.1. Let us recall the important Cauchy–Schwarz inequality, which asserts that
|(u, v)| ≤ |u| |v|   ∀ u, v ∈ H.    (1.3)
One of the important results is the following theorem.
Theorem 1.1 (Projection on a convex set). Let K ≠ ∅ be a closed convex subset of a Hilbert space H. For every h ∈ H there exists a unique u such that
u ∈ K,   |h − u| ≤ |h − v|  ∀ v ∈ K    (1.4)
(i.e. u realizes the minimum of the distance between h and K). Moreover u is the unique point satisfying (see Figure 1.1)
u ∈ K,   (u − h, v − u) ≥ 0  ∀ v ∈ K.    (1.5)
u is called the orthogonal projection of h on K and will be denoted by P_K(h).
[Figure 1.1: Projection on a convex set — the point h, its projection u = P_K(h), and another point v of K.]
Proof. Consider a sequence u_n ∈ K such that, when n → +∞,
|h − u_n| → Inf_{v∈K} |h − v| := d.
The infimum above clearly exists, and hence so does such a minimizing sequence. Note now the identities
|u_n − u_m|² = |u_n − h + h − u_m|² = |h − u_n|² + |h − u_m|² + 2(u_n − h, h − u_m),
|2h − u_n − u_m|² = |h − u_n + h − u_m|² = |h − u_n|² + |h − u_m|² + 2(h − u_n, h − u_m).
Adding up we get the so-called parallelogram identity
|u_n − u_m|² + |2h − u_n − u_m|² = 2|h − u_n|² + 2|h − u_m|².    (1.6)
Recall now that a convex set K is a subset of H such that
αu + (1 − α)v ∈ K   ∀ u, v ∈ K, ∀ α ∈ [0, 1].    (1.7)
From (1.6) we derive
|u_n − u_m|² = 2|h − u_n|² + 2|h − u_m|² − 4|h − (u_n + u_m)/2|² ≤ 2|h − u_n|² + 2|h − u_m|² − 4d²    (1.8)
since (u_n + u_m)/2 ∈ K (take α = 1/2 in (1.7)). Since the right-hand side of (1.8) goes to 0 when n, m → +∞, u_n is a Cauchy sequence. It converges toward a point u ∈ K – since K is closed – such that
|h − u| = Inf_{v∈K} |h − v|.    (1.9)
This shows the existence of u satisfying (1.4). To prove the uniqueness of such a u one goes back to (1.8), which is valid for any u_n, u_m in K. Taking u, u′ two solutions of (1.4), (1.8) becomes
|u − u′|² ≤ 2|h − u|² + 2|h − u′|² − 4d² = 0,
i.e. u = u′. This completes the proof of the existence and uniqueness of a solution to (1.4). We show now the equivalence of (1.4) and (1.5). Suppose first that u is a solution to (1.5). Then we have
|h − v|² = |h − u + u − v|² = |h − u|² + |u − v|² + 2(h − u, u − v) ≥ |h − u|²   ∀ v ∈ K.
Conversely, suppose that (1.4) holds. Then for any α ∈ (0, 1) – see (1.7) – we have, for v ∈ K,
|h − u|² ≤ |h − [αv + (1 − α)u]|² = |h − u − α(v − u)|² = |h − u|² + 2α(u − h, v − u) + α²|v − u|².
This clearly implies
2α(u − h, v − u) + α²|v − u|² ≥ 0.    (1.10)
Dividing by α and letting α → 0 we derive that (1.5) holds. This completes the proof of the theorem.
Remark 1.2. If h ∈ K, then P_K(h) = h. The inequality (1.5) is an example of a variational inequality.
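To make the statement concrete, here is a small numerical sketch (an illustration, not part of the text): we take H = R^5 with the Euclidean scalar product and K the nonnegative orthant of Exercise 2 below, whose projection is the componentwise positive part, and check (1.4) and (1.5) on random data. The choice of K and the use of NumPy are assumptions made only for this example.

```python
# Hedged sketch: H = R^5, K = nonnegative orthant (cf. Exercise 2), P_K(h) = max(h, 0).
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=5)
u = np.maximum(h, 0.0)                      # candidate for P_K(h)

for _ in range(1000):
    v = np.abs(rng.normal(size=5))          # an arbitrary point of K
    # variational inequality (1.5): (u - h, v - u) >= 0
    assert np.dot(u - h, v - u) >= -1e-12
    # distance property (1.4): |h - u| <= |h - v|
    assert np.linalg.norm(h - u) <= np.linalg.norm(h - v) + 1e-12
```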
In the case where K is a closed subspace of H (this is a special convex set) Theorem 1.1
takes a special form.
Corollary 1.2. Let V be a closed subspace of H. Then for every h ∈ H there exists a unique u such that
u ∈ V,   |h − u| ≤ |h − v|  ∀ v ∈ V.    (1.11)
Moreover u is the unique solution to
u ∈ V,   (h − u, v) = 0  ∀ v ∈ V.    (1.12)
Proof. It is enough to show that (1.5) is equivalent to (1.12). Note that if (1.12) holds then (1.5) holds. Conversely, if (1.5) holds then for any w ∈ V one has
v = u ± w ∈ V,
since V is a vector space. One deduces
±(u − h, w) ≥ 0   ∀ w ∈ V,
which is precisely (1.12). u = P_V(h) is described in the figure below. It is the unique vector of V such that h − u is orthogonal to V.
[Figure 1.2: Projection on a vector space — h and its projection u on V.]
1.2 The Riesz representation theorem
If H is a real Hilbert space we denote by H* its dual – i.e. H* is the set of continuous linear forms on H. If h ∈ H then the mapping
v ↦ (h, v)    (1.13)
is an element of H*. Indeed this is a linear form, that is to say a linear mapping from H into R, and the continuity follows from the Cauchy–Schwarz inequality
|(h, v)| ≤ |h| |v|.    (1.14)
The Riesz representation theorem states that all the elements of H* are of the type (1.13), i.e. can be represented by a scalar product. This fact is easy to see on R^n. We will see that it extends to infinite dimensional Hilbert spaces. First let us analyze the structure of the kernel of the elements of H*.
Proposition 1.3. Let h* ∈ H*. If h* ≠ 0 the set
V = { v ∈ H | ⟨h*, v⟩ = 0 }    (1.15)
is a closed subspace of H of codimension 1, i.e. a hyperplane of H. (We denote by brackets the duality pairing, writing ⟨h*, v⟩ = h*(v).)
Proof. Since h* is continuous, V is a closed subspace of H. Let h ∉ V. Such an h exists since h* ≠ 0. Then set
v₀ = h − P_V(h) ≠ 0.    (1.16)
Any element v ∈ H can be decomposed in a unique way as
v = λv₀ + w    (1.17)
where w ∈ V. Indeed, if w ∈ V one necessarily has
⟨h*, v⟩ = λ⟨h*, v₀⟩,
i.e. λ = ⟨h*, v⟩/⟨h*, v₀⟩, and then
v = (⟨h*, v⟩/⟨h*, v₀⟩) v₀ + ( v − (⟨h*, v⟩/⟨h*, v₀⟩) v₀ ),
the second term belonging to V since h* vanishes on it. This completes the proof of the proposition.
We can now show
Theorem 1.4 (Riesz representation theorem). For any h* ∈ H* there exists a unique h ∈ H such that
(h, v) = ⟨h*, v⟩   ∀ v ∈ H.    (1.18)
Moreover
|h| = |h*|_* = Sup_{v∈H, v≠0} ⟨h*, v⟩ / |v|.    (1.19)
(This last quantity is called the strong dual norm of h*.)
Proof. If h* = 0, h = 0 is the only solution of (1.18). We can then assume that h* ≠ 0. Let v₀ ≠ 0 be a vector orthogonal to the hyperplane
V = { v ∈ H | ⟨h*, v⟩ = 0 }
(see (1.16), (1.17)). We set
h = (⟨h*, v₀⟩ / |v₀|²) v₀.    (1.20)
Due to the decomposition (1.17) we have
(h, v) = (h, λv₀ + w) = λ(h, v₀) = λ⟨h*, v₀⟩ = ⟨h*, λv₀ + w⟩ = ⟨h*, v⟩
for every v ∈ H. Thus h satisfies (1.18). The uniqueness of h is clear since
(h − h′, v) = 0  ∀ v ∈ H  ⟹  h = h′
(take v = h − h′).
Now from (1.20) we have
|h| = |⟨h*, v₀⟩| / |v₀| ≤ |h*|_*,
and from (1.18)
|h*|_* = Sup_{v≠0} (h, v)/|v| ≤ Sup_{v≠0} |h| |v| / |v| = |h|.    (1.21)
This completes the proof of the theorem.
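The finite dimensional case already contains the essence of the theorem. The following sketch (an illustration under assumptions, not part of the text) takes H = R^4 with a weighted scalar product (u, v)_M = u · Mv, M symmetric positive definite, and checks that the linear form v ↦ c · v is represented by h = M⁻¹c and that (1.19) holds; the matrix M and the vector c are arbitrary choices.

```python
# Hedged finite-dimensional sketch of Theorem 1.4 with (u, v)_M = u . (M v).
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
M = B @ B.T + 4.0 * np.eye(4)        # symmetric positive definite: a valid scalar product
c = rng.normal(size=4)               # the linear form h*(v) = c . v

h = np.linalg.solve(M, c)            # Riesz representative: (h, v)_M = c . v for all v
v = rng.normal(size=4)
assert np.isclose(h @ M @ v, c @ v)

# dual norm (1.19): sup_{v != 0} (c . v)/|v|_M equals |h|_M
assert np.isclose(np.sqrt(c @ np.linalg.solve(M, c)), np.sqrt(h @ M @ h))
```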
1.3 The Lax–Milgram theorem
Instead of a scalar product one can consider more generally a continuous bilinear form. That is to say, if a(u, v) is a continuous bilinear form on H, then for every u ∈ H
v ↦ a(u, v)    (1.22)
is an element of H*. As for the Riesz representation theorem one can ask if every element of H* is of this type. This can be achieved with some assumptions on a which reproduce the properties of the scalar product, namely:
Theorem 1.5 (Lax–Milgram). Let a(u, v) be a bilinear form on H such that
• a is continuous, i.e. there exists Λ > 0 such that |a(u, v)| ≤ Λ|u| |v|  ∀ u, v ∈ H,    (1.23)
• a is coercive, i.e. there exists λ > 0 such that a(u, u) ≥ λ|u|²  ∀ u ∈ H.    (1.24)
Then for every f ∈ H* there exists a unique u ∈ H such that
a(u, v) = ⟨f, v⟩   ∀ v ∈ H.    (1.25)
In the case where a is symmetric, that is to say
a(u, v) = a(v, u)   ∀ u, v ∈ H,    (1.26)
u is the unique minimizer on H of the functional
J(v) = ½ a(v, v) − ⟨f, v⟩.    (1.27)
Proof. For every u ∈ H, by (1.23), v ↦ a(u, v) is an element of H*. By the Riesz representation theorem there exists a unique element of H, denoted by Au, such that
a(u, v) = (Au, v)   ∀ v ∈ H.
We will be done if we can show that A is a bijective mapping from H onto H. (Indeed, writing f = (h, ·) by the Riesz theorem, one will then have ⟨f, v⟩ = (h, v) = (Au, v) = a(u, v) ∀ v ∈ H for a unique u ∈ H.)
• A is linear.
By definition of A, for any u₁, u₂ ∈ H, α₁, α₂ ∈ R, one has
a(α₁u₁ + α₂u₂, v) = (A(α₁u₁ + α₂u₂), v)   ∀ v ∈ H.
By the bilinearity of a and the definition of A one also has
a(α₁u₁ + α₂u₂, v) = α₁a(u₁, v) + α₂a(u₂, v) = α₁(Au₁, v) + α₂(Au₂, v) = (α₁Au₁ + α₂Au₂, v)   ∀ v ∈ H.
Then we have
(A(α₁u₁ + α₂u₂), v) = (α₁Au₁ + α₂Au₂, v)   ∀ v ∈ H,
and thus
A(α₁u₁ + α₂u₂) = α₁Au₁ + α₂Au₂   ∀ u₁, u₂ ∈ H, ∀ α₁, α₂ ∈ R,
hence the linearity of A.
• A is injective.
Due to the linearity of A it is enough to show that Au = 0 ⟹ u = 0. If Au = 0, by (1.24) and the definition of A one has
0 = (Au, u) = a(u, u) ≥ λ|u|²,
and our claim is proved.
• AH, the image of H by A, is a closed subspace of H.
Indeed, consider a sequence Au_n such that
Au_n → y  in H.
We want to show that y ∈ AH. For that, note that by (1.24) one has
λ|u_n − u_m|² ≤ a(u_n − u_m, u_n − u_m) = (Au_n − Au_m, u_n − u_m) ≤ |Au_n − Au_m| |u_n − u_m|,
i.e.
|u_n − u_m| ≤ (1/λ)|Au_n − Au_m|.
Since Au_n converges it is a Cauchy sequence, and so is u_n. Then there exists u ∈ H such that
u_n → u  in H
when n → +∞. By definition of A we have
a(u_n, v) = (Au_n, v)   ∀ v ∈ H.
Passing to the limit in n we get
a(u, v) = (y, v)   ∀ v ∈ H,
that is, y = Au ∈ AH. This completes the proof of the claim.
• A is surjective.
If not, V = AH is a closed proper subspace of H. Then – see Corollary 1.2 – there exists a vector v₀ ≠ 0 orthogonal to V. Then
λ|v₀|² ≤ a(v₀, v₀) = (Av₀, v₀) = 0,
which contradicts v₀ ≠ 0. This completes the first part of the theorem.
If we assume now that a is symmetric and u is the solution to (1.25), one has
J(v) = J(u + (v − u))
 = ½ a(u + (v − u), u + (v − u)) − ⟨f, u + (v − u)⟩
 = ½ a(u, u) − ⟨f, u⟩ + a(u, v − u) − ⟨f, v − u⟩ + ½ a(v − u, v − u)
(by bilinearity and since a is symmetric). Using (1.25) we get
J(v) = J(u) + ½ a(v − u, v − u) ≥ J(u) + (λ/2)|v − u|² ≥ J(u)   ∀ v ∈ H,
the last inequality being strict for v ≠ u. Hence u is the unique minimizer of J on H. This completes the proof of the theorem.
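In finite dimension the theorem says that a matrix whose symmetric part is positive definite is invertible, even without symmetry (in which case the solution need not minimize the functional (1.27)). A hedged sketch, with an arbitrarily chosen matrix, follows.

```python
# Hedged sketch of Lax-Milgram in H = R^6: a(u, v) = (Au, v) with a coercive,
# non-symmetric A; a(u, v) = (f, v) for all v then has a unique solution u.
import numpy as np

rng = np.random.default_rng(2)
n = 6
S = rng.normal(size=(n, n)); S = S @ S.T + n * np.eye(n)   # symmetric positive definite part
N = rng.normal(size=(n, n)); N = 0.5 * (N - N.T)           # skew part: (Nu, u) = 0
A = S + N                                                  # coercive: (Au, u) = (Su, u) >= lambda |u|^2
f = rng.normal(size=n)

u = np.linalg.solve(A, f)            # the unique solution
v = rng.normal(size=n)
assert np.isclose((A @ u) @ v, f @ v)   # a(u, v) = (f, v)
```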
1.4 Convergence techniques
We recall that if h_n is a sequence in H then
h_n ⇀ h_∞  in H  as n → +∞
(i.e. h_n converges weakly towards h_∞) iff
lim_{n→+∞} (h_n, h) = (h_∞, h)   ∀ h ∈ H.
Due to the Cauchy–Schwarz inequality it is easy to show that
h_n → h in H  ⟹  h_n ⇀ h in H.
The converse is not true in general. We also have
Theorem 1.6 (Weak compactness of balls). If h_n is a bounded sequence in H, there exist a subsequence h_{n_k} of h_n and h_∞ ∈ H such that
h_{n_k} ⇀ h_∞.
Proof. See [70].
Finally we will find useful the following theorem.
Theorem 1.7. Let x_n, y_n be two sequences in H such that
x_n → x_∞,   y_n ⇀ y_∞.
Then we have
(x_n, y_n) → (x_∞, y_∞)  in R.
Proof. One has
|(x_n, y_n) − (x_∞, y_∞)| = |(x_n − x_∞, y_n) − (x_∞, y_∞ − y_n)| ≤ |x_n − x_∞| |y_n| + |(x_∞, y_∞ − y_n)|.
Since y_n ⇀ y_∞, y_n is bounded (Banach–Steinhaus theorem), and the result follows easily.
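As an illustration (a hedged numerical sketch, with the assumption H = L²(0, 1)), the sequence h_n(x) = sin(nπx) converges weakly to 0 – the scalar products against a fixed g tend to 0 – while |h_n| stays equal to 1/√2, so the convergence is not strong.

```python
# Hedged sketch: weak but not strong convergence of h_n(x) = sin(n pi x) in L^2(0, 1).
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
g = np.exp(-x) * (1.0 + x**2)               # an arbitrary fixed element of L^2(0, 1)

for n in (1, 10, 100, 1000):
    hn = np.sin(n * np.pi * x)
    inner = np.sum(hn * g) * dx             # (h_n, g), tends to 0
    norm = np.sqrt(np.sum(hn**2) * dx)      # |h_n|, stays near 1/sqrt(2)
    print(n, inner, norm)
```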
Exercises
1. Show (1.3). (Hint: use the fact that (u + λv, u + λv) ≥ 0 ∀ λ ∈ R).
2. In Rn consider
K = { x | xi ≥ 0 ∀ i = 1, . . . , n }.
Show that K is a closed convex set of Rn .
Show that
PK (y) = (y1 ∨ 0, y2 ∨ 0, . . . , yn ∨ 0) ∀ y ∈ Rn
where ∨ denotes the maximum of two numbers.
3. Set
ℓ² = { x = (x_i) | Σ_{i=1}^{+∞} x_i² < +∞ },
i.e. ℓ² is the set of square-summable sequences.
(a) Show that ℓ² is a Hilbert space for the scalar product
(x, y) = Σ_{i=1}^{+∞} x_i y_i   ∀ x, y ∈ ℓ².
(b) One sets
e_n = (0, 0, …, 1, 0, …),
where the 1 is located at the n-th slot. Show that e_n ⇀ 0 in ℓ² but e_n ↛ 0.
4. Let g ∈ L2 (A) where A is a measurable subset of Rn . Set
K = { v ∈ L2 (A) | v(x) ≤ g(x) a.e. x ∈ A }.
Show that K is a non empty closed convex set of L2 (A). Show that
PK (v) = v ∧ g
∀ v ∈ L2 (A).
(∧ denotes the minimum of two numbers – i.e. (v ∧ g)(x) = v(x) ∧ g(x) a.e. x).
5. If P_K denotes the projection on a non-empty closed convex set K of a Hilbert space H, show that
|P_K(h) − P_K(h′)| ≤ |h − h′|   ∀ h, h′ ∈ H.
6. Let a be a continuous, coercive, bilinear form on a real Hilbert space H. Let f, ℓ ∈ H*, ℓ ≠ 0. Set
V = { h ∈ H | ⟨ℓ, h⟩ = 0 }.
(a) Show that there exists a unique u solution to
u ∈ V,   a(u, v) = ⟨f, v⟩  ∀ v ∈ V.
(b) Show that there exists a unique k ∈ R such that u satisfies
a(u, v) = ⟨f + kℓ, v⟩  ∀ v ∈ H.
(Hint: if h ∈ H with ℓ(h) = 1, then v − ℓ(v)h ∈ V for all v ∈ H.)
(c) Show that there exists a unique u_ℓ solution to
u_ℓ ∈ H,   a(v, u_ℓ) = ⟨ℓ, v⟩  ∀ v ∈ H,
and that k = −⟨f, u_ℓ⟩/⟨ℓ, u_ℓ⟩.
(d) If ℓ₁, ℓ₂, …, ℓ_p ∈ H* and
V = { h ∈ H | ℓ_i(h) = 0, ∀ i = 1, …, p },
what can be said about the solution u to
u ∈ V,   a(u, v) = ⟨f, v⟩  ∀ v ∈ V ?
Chapter 2
A survey of essential analysis
2.1 L^p-techniques
We recall here some basic techniques regarding L^p-spaces, in particular those involving approximation by mollifiers.
Definition 2.1. Let p ≥ 1 be a real number and Ω an open subset of R^n, n ≥ 1. We set
L^p(Ω) = { "class of functions" v : Ω → R s.t. ∫_Ω |v(x)|^p dx < +∞ },
L^∞(Ω) = { "class of functions" v : Ω → R s.t. ∃ C s.t. |v(x)| ≤ C a.e. x ∈ Ω }.
They are equipped with the norms
|v|_{p,Ω} = ( ∫_Ω |v(x)|^p dx )^{1/p},
|v|_{∞,Ω} = Inf{ C s.t. |v(x)| ≤ C a.e. x ∈ Ω }.
L^p(Ω), L^∞(Ω) are Banach spaces (see [63]). Moreover, for any 1 ≤ p < +∞ the dual of L^p(Ω) can be identified with L^{p′}(Ω), where p′ is the conjugate exponent of p defined by
1/p + 1/p′ = 1,    (2.1)
i.e. p′ = p/(p − 1), with the convention that p′ = +∞ when p = 1. (We refer the reader to [63] for these notions.)
Definition 2.2. We denote by L^p_loc(Ω) the set of functions v defined on Ω such that for any bounded Ω′ with Ω̄′ ⊂ Ω one has
v ∈ L^p(Ω′).
We will write Ω′ ⊂⊂ Ω when Ω̄′ – the closure of Ω′ in R^n – is compact and included in Ω.
We denote by D(Ω) the space of functions infinitely differentiable in Ω with compact support in Ω. Recall that the support of a function ρ is defined as
Supp ρ = the closure of the set { x ∈ Ω | ρ(x) ≠ 0 }.    (2.2)
Examples 2.1.
1. Let us denote by |x| the Euclidean norm of the vector x = (x_1, …, x_n), defined as
|x| = ( Σ_{i=1}^n x_i² )^{1/2}.    (2.3)
The function defined, for c a constant, by
ρ(x) = c exp( −1/(1 − |x|²) )  if |x| < 1,   ρ(x) = 0  if |x| ≥ 1,    (2.4)
is a function of D(R^n) with support
B₁ = { x ∈ R^n | |x| ≤ 1 }.
2. If x₀ ∈ Ω and ε > 0 is such that
B_ε(x₀) = { x ∈ R^n | |x − x₀| ≤ ε } ⊂ Ω,    (2.5)
then the function
r_ε(x) = ρ((x − x₀)/ε)    (2.6)
is a function of D(Ω) with support B_ε(x₀).
Considering linear combinations of functions of the type (2.6) it is easy to construct many other functions of D(Ω) and to see that D(Ω) is an infinite dimensional space over R. Suppose that in (2.4) we choose c such that
∫_{R^n} ρ(x) dx = 1    (2.7)
and set
ρ_ε(x) = (1/ε^n) ρ(x/ε).    (2.8)
We then have
ρ_ε ∈ D(R^n),   Supp ρ_ε = B_ε(0),   ∫_{R^n} ρ_ε(x) dx = 1.    (2.9)
For u ∈ L¹_loc(Ω) we define the "mollifier of u" as
u_ε(x) = ∫_Ω u(y) ρ_ε(x − y) dy = (ρ_ε ∗ u)(x).    (2.10)
It is clear that if x ∈ Ω, u_ε(x) is defined as soon as
ε < dist(x, ∂Ω),    (2.11)
where dist(x, ∂Ω) denotes the distance from x to ∂Ω. Recall that dist(x, ∂Ω) = Inf_{y∈∂Ω} |x − y|, where |·| is the usual Euclidean norm in R^n. Then we have
Theorem 2.1. Let Ω′ ⊂⊂ Ω.
1. For ε < dist(Ω′, ∂Ω), u_ε ∈ C^∞(Ω′).
2. If u ∈ C(Ω) – i.e. u is a continuous function in Ω – then when ε → 0
u_ε → u  uniformly in Ω′.    (2.12)
3. If u ∈ L^p_loc(Ω), 1 ≤ p < +∞, then when ε → 0
u_ε → u  in L^p(Ω′).    (2.13)
Proof. 1. Let x ∈ Ω′. Since for ε < dist(Ω′, ∂Ω) one has B_ε(x) ⊂ Ω, it is clear that (2.10) is well defined. Now, by the theorem of differentiation under the integral sign, one has, if ∂_{x_i} denotes the partial derivative in the direction x_i,
∂_{x_i} u_ε(x) = ∫_Ω u(y) ∂_{x_i} ρ_ε(x − y) dy.
Repeating this process in the other directions and for derivatives of any order, it is clear that u_ε ∈ C^∞(Ω′).
2. Since u ∈ C(Ω), u is uniformly continuous on any compact subset of Ω, and for any δ > 0 there exists ε < dist(Ω′, ∂Ω) = Inf_{x∈Ω′, y∈∂Ω} |x − y| such that
x ∈ Ω′, |x − y| ≤ ε  ⟹  |u(x) − u(y)| ≤ δ.    (2.14)
Let x ∈ Ω′; one has, by (2.9) and (2.10),
u(x) − u_ε(x) = ∫_Ω {u(x) − u(y)} ρ_ε(x − y) dy.
Thus
|u(x) − u_ε(x)| ≤ ∫_Ω |u(x) − u(y)| ρ_ε(x − y) dy.
In the integral above one integrates only on B_ε(x), and by (2.14) it follows that
|u(x) − u_ε(x)| ≤ δ ∫_{B_ε(x)} ρ_ε(x − y) dy = δ ∫_{B_ε} ρ_ε(z) dz = δ,
which completes the proof of assertion 2.
3. Let Ω′ ⊂⊂ Ω″ ⊂⊂ Ω. Then u ∈ L^p(Ω″), and thus there exists û ∈ C(Ω″) such that
|u − û|_{p,Ω″} ≤ δ/3,    (2.15)
since the continuous functions are dense in L^p(Ω″) – see [63]. Then, with û_ε = ρ_ε ∗ û,
|u − u_ε|_{p,Ω′} = |u − û + û − û_ε + û_ε − u_ε|_{p,Ω′} ≤ |u − û|_{p,Ω′} + |û − û_ε|_{p,Ω′} + |û_ε − u_ε|_{p,Ω′}.    (2.16)
Now let us notice that
|û_ε − u_ε|^p_{p,Ω′} = ∫_{Ω′} |û_ε(x) − u_ε(x)|^p dx
 = ∫_{Ω′} | ∫_{B_ε(x)} {û(y) − u(y)} ρ_ε(x − y) dy |^p dx
 ≤ ∫_{Ω′} | ∫_{B_ε(x)} |û(y) − u(y)| ρ_ε^{1/p}(x − y) ρ_ε^{1/p′}(x − y) dy |^p dx
 ≤ ∫_{Ω′} ∫_{B_ε(x)} |û(y) − u(y)|^p ρ_ε(x − y) dy dx   (by Hölder's inequality)
 = ∫_{Ω′} ∫_{B_ε} |û(x − z) − u(x − z)|^p ρ_ε(z) dz dx
(we made the change of variable z = x − y, B_ε = B_ε(0)). Thus, by the Fubini theorem, we obtain
|û_ε − u_ε|^p_{p,Ω′} ≤ ∫_{B_ε} ρ_ε(z) ∫_{Ω′} |û(x − z) − u(x − z)|^p dx dz
 ≤ ∫_{B_ε} ρ_ε(z) dz ∫_{Ω″} |û(ξ) − u(ξ)|^p dξ
 = ∫_{Ω″} |û(ξ) − u(ξ)|^p dξ,
for ε < dist(Ω′, ∂Ω″). Thus from (2.15), (2.16) we derive
|u − u_ε|_{p,Ω′} ≤ 2δ/3 + |û − û_ε|_{p,Ω′} ≤ δ
for ε small enough, by point 2, since uniform convergence implies L^p convergence on the bounded set Ω′. This completes the proof.
Remark. If u ∈ L^p(Ω), Ω bounded, and if we denote by ū the extension of u by 0 outside of Ω, then as a corollary of Theorem 2.1 we have
ū_ε = ρ_ε ∗ ū → u  in L^p(Ω).
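A one-dimensional numerical sketch of (2.8)–(2.10) may help (the choice u(x) = sign(x) on Ω = (−1, 1) and the discretisation are illustrative assumptions, not from the text): the mollified functions are smooth and converge to u in L²(Ω′) for Ω′ ⊂⊂ Ω.

```python
# Hedged sketch of mollification (2.10) in dimension one.
import numpy as np

def rho(t):                                   # the kernel (2.4) up to the constant c
    out = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - t[inside]**2))
    return out

x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]
c = 1.0 / (np.sum(rho(x)) * dx)               # normalisation (2.7)
u = np.sign(x)                                # u in L^p_loc(-1, 1), discontinuous at 0

for eps in (0.2, 0.05, 0.01):
    rho_eps = (c / eps) * rho(x / eps)        # (2.8)
    u_eps = np.convolve(u, rho_eps, mode="same") * dx    # discrete version of (2.10)
    interior = np.abs(x) < 0.7                # Omega' compactly contained in Omega
    err = np.sqrt(np.sum((u_eps - u)[interior]**2) * dx)
    print(eps, err)                           # the L^2(Omega') error tends to 0
```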
We now show the following compactness result, which is an L^p-generalization of the Arzelà–Ascoli theorem.
Theorem 2.2 (Fréchet–Kolmogorov). Let Ω′ ⊂⊂ Ω and let L be a subset of L^p(Ω), 1 ≤ p < +∞, such that
• L is bounded in L^p(Ω),
• L is equicontinuous in L^p(Ω′), that is to say
∀ η > 0 ∃ δ > 0, δ < dist(Ω′, ∂Ω), such that ∀ h ∈ R^n, |h| < δ ⟹ |σ_h f − f|_{p,Ω′} < η  ∀ f ∈ L.
Then L_{Ω′} = { f|_{Ω′} | f ∈ L } is relatively compact in L^p(Ω′). (σ_h f is the translate of f defined by σ_h f(x) = f(x + h).)
Proof. We can assume without loss of generality that Ω is bounded. If f ∈ L we denote by f̄ its extension by 0 outside Ω and set
L₀ = { f̄ | f ∈ L }.
It is clear then that L₀ is a bounded set of L^p(R^n) and of L¹(R^n). First we claim that for ε < δ we have
|ρ_ε ∗ f̄ − f̄|_{p,Ω′} < η   ∀ f̄ ∈ L₀.
Indeed, if ε < δ < dist(Ω′, ∂Ω),
|ρ_ε ∗ f̄ − f̄|^p_{p,Ω′} = ∫_{Ω′} |ρ_ε ∗ f̄(x) − f̄(x)|^p dx
 = ∫_{Ω′} | ∫_{B_ε(x)} {f̄(y) − f̄(x)} ρ_ε(x − y) dy |^p dx
 = ∫_{Ω′} | ∫_{B_ε} {f̄(x − z) − f̄(x)} ρ_ε(z) dz |^p dx    (2.17)
 ≤ ∫_{Ω′} | ∫_{B_ε} |f̄(x − z) − f̄(x)| ρ_ε^{1/p}(z) ρ_ε^{1/p′}(z) dz |^p dx
 ≤ ∫_{Ω′} ∫_{B_ε} |f̄(x − z) − f̄(x)|^p ρ_ε(z) dz dx   (by Hölder's inequality)
 ≤ Sup_{|z|≤ε} |σ_{−z} f − f|^p_{p,Ω′} < η^p,
using Fubini and the equicontinuity assumption. (Recall that ∫_{B_ε} ρ_ε dz = 1.) Consider then
L_ε = { ρ_ε ∗ f̄ | f̄ ∈ L₀ }.
We claim that L_ε satisfies the assumptions of the Arzelà–Ascoli theorem, which are:
• L_ε is bounded in C(Ω̄′).
Indeed this follows from
|ρ_ε ∗ f̄|_∞ = Sup_x | ∫_{B_ε(x)} f̄(y) ρ_ε(x − y) dy | ≤ |ρ_ε|_∞ |f̄|_{1,R^n}
(|·|_∞ is the supremum norm in C(Ω̄′)).
• L_ε is equicontinuous.
Indeed, if x₁, x₂ ∈ R^n we have
|ρ_ε ∗ f̄(x₁) − ρ_ε ∗ f̄(x₂)| ≤ ∫_{R^n} |f̄(y)| |ρ_ε(x₁ − y) − ρ_ε(x₂ − y)| dy ≤ C_ε |x₁ − x₂| |f̄|_{1,R^n}   ∀ f̄ ∈ L₀,
where C_ε is the Lipschitz constant of ρ_ε. The claim then follows from the fact that L₀ is bounded in L¹(R^n).
From the above it follows that L_ε is relatively compact in C(Ω̄′) and thus also in L^p(Ω′). Given η we then fix ε such that
|ρ_ε ∗ f̄ − f̄|_{p,Ω′} < η   ∀ f̄ ∈ L₀.
Then we cover L_ε by a finite number of balls of radius η. It is then clear that the same balls, with radius 2η, cover L₀ in L^p(Ω′) – which completes the proof.
It is very useful to consider derivatives of objects having no derivative in the usual sense. This is done through duality, and it is one of the great achievements of distribution theory, which emerged at the beginning of the fifties (see [64]). This is what we would like to consider next.
2.2 Introduction to distributions
For any multi-index α = (α₁, …, α_n) ∈ N^n we denote by D^α the partial derivative given by
D^α = ∂^{α₁+···+α_n} / (∂x₁^{α₁} … ∂x_n^{α_n}).    (2.18)
We define first a notion of convergence in the space of functions D(Ω).
Definition 2.3. Let φ_i, i ∈ N, be a sequence of functions in D(Ω). We say that
φ_i → φ in D(Ω) when i → +∞    (2.19)
if the φ_i's and φ have all their supports contained in a fixed compact subset K of Ω and if
D^α φ_i → D^α φ uniformly on K, ∀ α ∈ N^n, i.e. lim_{i→+∞} Sup_{x∈K} |D^α φ_i(x) − D^α φ(x)| = 0  ∀ α ∈ N^n.    (2.20)
Remark 2.1. One can show that this notion of convergence can be defined by a topology
on D(Ω). We refer the reader to [64], [69].
We can now introduce the notion of distribution.
Definition 2.4. A distribution T on Ω is a continuous linear form on D(Ω) – i.e. a linear form on D(Ω) such that
lim_{i→+∞} T(φ_i) = T(φ)    (2.21)
for any sequence φ_i such that φ_i → φ in D(Ω). T(φ) will be denoted by ⟨T, φ⟩ and the space of distributions on Ω by D′(Ω).
Examples.
1. Let T ∈ L¹_loc(Ω). Then
⟨T, φ⟩ = ∫_Ω T(x) φ(x) dx   ∀ φ ∈ D(Ω)    (2.22)
defines a distribution. Indeed this follows from
|⟨T, φ_i⟩ − ⟨T, φ⟩| = | ∫_Ω T(x)(φ_i(x) − φ(x)) dx | ≤ Sup_{x∈K} |φ_i(x) − φ(x)| ∫_K |T| dx.
2. If T₁, T₂ are distributions and α₁, α₂ ∈ R, then α₁T₁ + α₂T₂ defined by
⟨α₁T₁ + α₂T₂, φ⟩ = α₁⟨T₁, φ⟩ + α₂⟨T₂, φ⟩  ∀ φ ∈ D(Ω)
is a distribution. This shows that D′(Ω) is a vector space over R.
3. The Dirac mass. Let x₀ ∈ Ω. Then
⟨δ_{x₀}, φ⟩ = φ(x₀)   ∀ φ ∈ D(Ω)    (2.23)
defines a distribution called the Dirac mass at x₀. This is an example of a distribution which is not a function, i.e. one cannot find T ∈ L¹_loc(Ω) such that
∫_Ω T(x) φ(x) dx = φ(x₀)   ∀ φ ∈ D(Ω).    (2.24)
δ_{x₀} is a "measure".
To see the impossibility of having (2.24) one needs the following theorem. It establishes the consistency between equality in the distributional sense and equality in the L¹ sense for functions. It is clear that if T₁, T₂ ∈ L¹_loc(Ω) and
T₁ = T₂  a.e. in Ω,    (2.25)
then T₁, T₂ define the same distribution through the formula (2.22). Conversely we have
Theorem 2.3. Suppose that T₁, T₂ ∈ L¹_loc(Ω) and
⟨T₁, φ⟩ = ⟨T₂, φ⟩   ∀ φ ∈ D(Ω).    (2.26)
Then T₁ = T₂ a.e. in Ω.
Proof. Consider Ω′ ⊂⊂ Ω – i.e. Ω′ is an open set included in a compact subset of Ω. If ∂Ω denotes the boundary of Ω then
d = dist(Ω′, ∂Ω) = Inf_{x∈Ω′, y∈∂Ω} |x − y| > 0.    (2.27)
If ρ_ε is defined by (2.8), for ε < d one has, for every x ∈ Ω′,
(T₁ ∗ ρ_ε)(x) = ∫_Ω T₁(y) ρ_ε(x − y) dy = ⟨T₁, ρ_ε(x − ·)⟩ = ⟨T₂, ρ_ε(x − ·)⟩ = (T₂ ∗ ρ_ε)(x).    (2.28)
Now when ε → 0 we have
T_i ∗ ρ_ε → T_i  in L¹(Ω′),  i = 1, 2,    (2.29)
(see Theorem 2.1). The result then follows from (2.28).
Remark 2.2. To show that (2.24) cannot hold for T ∈ L¹_loc(Ω) it is enough to notice that (2.24) implies ⟨T, φ⟩ = 0 ∀ φ ∈ D(Ω \ {x₀}). Then, by Theorem 2.3, T = 0 a.e. in Ω \ {x₀}, i.e. T = 0 a.e. in Ω, and T is the zero distribution, which is not the case for the Dirac mass.
As announced at the beginning of this section, we can now differentiate any distribution. Indeed we have:
Definition 2.5. For α = (α₁, …, α_n) ∈ N^n we set |α| = α₁ + ··· + α_n. Then for T ∈ D′(Ω),
φ ↦ (−1)^{|α|} ⟨T, D^α φ⟩
defines a distribution on Ω which we denote by D^α T. Thus we have
⟨D^α T, φ⟩ = (−1)^{|α|} ⟨T, D^α φ⟩   ∀ φ ∈ D(Ω).    (2.30)
Example. Suppose that T is a function on Ω which is k times continuously differentiable. For |α| ≤ k denote provisionally by ∂^α T the derivative of T in the usual sense, i.e.
∂^α T(x) = ∂^{|α|} T(x) / (∂x₁^{α₁} … ∂x_n^{α_n})   ∀ x ∈ Ω.    (2.31)
Then we have
D^α T = ∂^α T,    (2.32)
i.e. the derivative in the distributional sense of T coincides with the distribution defined by the function ∂^α T. To see this consider α = (0, …, 1, …, 0), the 1 being at the i-th slot. One has
⟨D^α T, φ⟩ = −⟨T, ∂_{x_i} φ⟩ = −∫_Ω T(x) ∂_{x_i} φ(x) dx = −∫_Ω ( ∂_{x_i}(Tφ) − (∂_{x_i}T) φ ) dx.    (2.33)
For simplicity we set ∂_{x_i} = ∂/∂x_i, and we will do so in what follows. Now, clearly, by the Fubini theorem,
∫_Ω ∂_{x_i}(Tφ) dx = ∫_{x′} ∫_{x_i} ∂_{x_i}(Tφ) dx_i dx′ = 0.
(In the integral above we integrate first in x_i, then in the other variables, which we denote by x′. Note that Tφ vanishes near the boundary of Ω.) Then (2.33) becomes
⟨D^α T, φ⟩ = ∫_Ω (∂_{x_i} T) φ dx = ⟨∂^α T, φ⟩.
(2.32) follows, since for any α ∈ N^n, D^α can be obtained by iterating operators of the type ∂_{x_i}. From now on we will use the same notation for the derivative in the usual sense and in the distributional sense for C^k-functions.
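As a hedged numerical illustration of Definition 2.5 (the function T(x) = |x|, the interval and the test function are assumptions made for this example): the distributional derivative of |x| is the function sign(x), which we check by comparing −⟨T, φ′⟩ with ⟨sign, φ⟩ for one φ ∈ D(−1, 1).

```python
# Hedged sketch: the distributional derivative of T(x) = |x| on (-1, 1) is sign(x).
import numpy as np

x = np.linspace(-1.0, 1.0, 40001)
dx = x[1] - x[0]

phi = np.zeros_like(x)                        # a bump function in D(-1, 1)
inside = np.abs(x) < 0.9
phi[inside] = np.exp(-1.0 / (0.81 - x[inside]**2))
dphi = np.gradient(phi, dx)

lhs = -np.sum(np.abs(x) * dphi) * dx          # -<T, phi'>, cf. (2.30)
rhs = np.sum(np.sign(x) * phi) * dx           # <sign, phi>
print(lhs, rhs)                               # agree up to discretisation error
```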
Just as we defined a notion of convergence in D(Ω), we now define convergence in D′(Ω).
Definition 2.6. Let T_i be a sequence of distributions on Ω. We say that
lim_{i→+∞} T_i = T  in D′(Ω)    (2.34)
iff
lim_{i→+∞} ⟨T_i, φ⟩ = ⟨T, φ⟩   ∀ φ ∈ D(Ω).    (2.35)
One can show that the convergence above defines a topology – see [64], [69].
Example. Suppose that x_i is a sequence of points in Ω converging toward x_∞ ∈ Ω. Then one has
δ_{x_i} → δ_{x_∞}  in D′(Ω).
Since
L^p(Ω) ⊂ L¹_loc(Ω)   ∀ 1 ≤ p ≤ +∞,    (2.36)
the functions of L^p(Ω) are distributions on Ω. Moreover we have
Proposition 2.4. Let T_i, T ∈ L^p(Ω), 1 ≤ p < ∞. Suppose that when i → +∞
T_i → T in L^p(Ω)   (resp. T_i ⇀ T in L^p(Ω));    (2.37)
then one has
T_i → T  in D′(Ω).    (2.38)
Proof. We have denoted by ⇀ the weak convergence in L^p(Ω). Note that strong convergence in L^p(Ω) implies weak convergence, and the latter can be expressed as
∫_Ω T_i φ dx → ∫_Ω T φ dx   ∀ φ ∈ L^{p′}(Ω).
The result then follows from the fact that D(Ω) ⊂ L^{p′}(Ω).
We also have the following.
Proposition 2.5. The operator D^α, α ∈ N^n, is continuous on D′(Ω), i.e.
T_i → T in D′(Ω)  ⟹  D^α T_i → D^α T in D′(Ω)
as i → +∞.
Proof. This follows immediately from
⟨D^α T_i, φ⟩ = (−1)^{|α|} ⟨T_i, D^α φ⟩ → (−1)^{|α|} ⟨T, D^α φ⟩ = ⟨D^α T, φ⟩,
since D^α φ ∈ D(Ω) ∀ α ∈ N^n, ∀ φ ∈ D(Ω).
2.3 Sobolev Spaces
Sobolev spaces are useful tools for solving partial differential equations. In this chapter we restrict ourselves to the simplest ones.
Definition 2.7. Let Ω be an open subset of R^n. We denote by H¹(Ω) the subset of L²(Ω) defined by
H¹(Ω) = { v ∈ L²(Ω) | ∂_{x_i} v ∈ L²(Ω) ∀ i = 1, …, n }.    (2.39)
In this definition ∂_{x_i} v denotes the derivative of v in the distributional sense.
Remark 2.3. In other words, v ∈ H¹(Ω) iff v ∈ L²(Ω) and for every i there exists a function v_i ∈ L²(Ω) – which we denote ∂_{x_i} v – such that
−∫_Ω v ∂_{x_i} φ dx = ∫_Ω v_i φ dx   ∀ φ ∈ D(Ω).
Example. If Ω is a bounded open set the restriction to Ω of every continuously differentiable function of Rn belongs to H 1 (Ω).
For u, v ∈ H¹(Ω) one can define the scalar product
(u, v)_{1,2} = ∫_Ω uv + ∇u · ∇v dx.    (2.40)
In the formula above ∇u denotes the gradient of u, i.e. the vector
∇u = (∂_{x_1} u, …, ∂_{x_n} u),
and ∇u · ∇v the Euclidean scalar product of ∇u and ∇v, that is to say
∇u · ∇v = Σ_{i=1}^n ∂_{x_i} u ∂_{x_i} v.    (2.41)
One has:
Theorem 2.6. Equipped with the scalar product (2.40), H¹(Ω) is a Hilbert space. The associated norm will be denoted
|v|_{1,2} = ( ∫_Ω v² + |∇v|² dx )^{1/2},    (2.42)
where |·| stands for the Euclidean norm in R^n.
Proof. It is easy to see that (2.40) defines a scalar product. So we will be done provided we show that H¹(Ω) equipped with (2.42) is complete. Let v_n be a Cauchy sequence in H¹(Ω), i.e. a sequence such that for any ε > 0
( ∫_Ω (v_n − v_m)² + Σ_{i=1}^n (∂_{x_i} v_n − ∂_{x_i} v_m)² dx )^{1/2} = |v_n − v_m|_{1,2} ≤ ε    (2.43)
for n, m large enough. It follows that
v_n,  ∂_{x_i} v_n,  i = 1, …, n,
are Cauchy sequences in L²(Ω). Since L²(Ω) is a Hilbert space there exist functions
u ∈ L²(Ω),  u_i ∈ L²(Ω), i = 1, …, n,
such that
v_n → u,  ∂_{x_i} v_n → u_i  in L²(Ω).
Due to Propositions 2.4, 2.5 one also has
∂_{x_i} v_n → ∂_{x_i} u,  ∂_{x_i} v_n → u_i  in D′(Ω).
It follows that
∂_{x_i} u = u_i ∈ L²(Ω)  ∀ i = 1, …, n,
i.e. u ∈ H¹(Ω). Moreover, letting m → +∞ in (2.43), we get
|v_n − u|_{1,2} ≤ ε
for n large enough, that is to say v_n → u in H¹(Ω). This completes the proof.
In what follows we will need functions vanishing on the boundary ∂Ω of Ω. However, for a class of functions in L²(Ω) the meaning of its value on ∂Ω is not clear. We overcome this problem by introducing H¹₀(Ω), the subspace of H¹(Ω) defined as
H¹₀(Ω) = the closure of D(Ω) in H¹(Ω),    (2.44)
the closure being understood for the norm (2.42). H¹₀(Ω) will play the rôle of the functions of H¹(Ω) which vanish on ∂Ω. We have
Theorem 2.7. Equipped with the scalar product (2.40) and the norm (2.42), H¹₀(Ω) is a Hilbert space.
Proof. This follows immediately from the fact that H¹₀(Ω) is a closed subspace of H¹(Ω).
Since the functions of H¹₀(Ω) vanish – in a certain sense – on ∂Ω, one does not need the L²(Ω)-norm to control their convergence in H¹(Ω): the L²(Ω)-norm of the derivatives will be enough. This is a consequence of the following theorem. Before stating it, let us briefly introduce the notion of directional derivative. Let ν be a unit vector in R^n. If v is a differentiable function then the limit
lim_{h→0} ( v(x + hν) − v(x) ) / h    (2.45)
exists and is called the derivative of v in the direction ν. For instance ∂_{x_1} v is the derivative in the direction e₁ = (1, 0, …, 0), the first vector of the canonical basis of R^n. To see that the limit in (2.45) exists for v smooth, it is enough to note that by the mean value theorem and the chain rule one has
v(x + hν) − v(x) = (d/dt) v(x + thν) |_{t=θ} = ∇v(x + θhν) · hν   (θ ∈ (0, 1))    (2.46)
(recall that · denotes here the Euclidean scalar product). Dividing by h and letting h → 0 it follows that
lim_{h→0} ( v(x + hν) − v(x) ) / h = ∇v(x) · ν = ∂_ν v(x).    (2.47)
This derivative in the ν-direction will be denoted, as above, by ∂_ν v or ∂v/∂ν. Note that if v ∈ H¹(Ω), then for a fixed vector ν the last equality defines a function ∂_ν v which is in L²(Ω). We can now state
Theorem 2.8 (Poincaré inequality). Let ν be a unit vector in R^n and a > 0. Suppose that Ω is bounded in one direction; more precisely,
Ω ⊂ { x ∈ R^n | |x · ν| ≤ a }.    (2.48)
Then we have
|v|_{2,Ω} ≤ √2 a |∂v/∂ν|_{2,Ω}   ∀ v ∈ H¹₀(Ω).    (2.49)
(Recall that ∂v/∂ν is defined by (2.47); |·|_{2,Ω} was defined in Section 2.1.)
[Figure 2.1: An open set Ω bounded in the direction ν, lying between the hyperplanes x · ν = −a and x · ν = a.]
Proof. By definition of H¹₀(Ω) and ∂v/∂ν it is enough to show (2.49) for v ∈ D(Ω). Consider then v ∈ D(Ω), extended by 0 to all of R^n outside Ω. Without loss of generality we can choose the coordinate system such that ν = e₁. Then for x ∈ Ω we have
v(x) = v(x) − v(−a, x₂, …, x_n) = ∫_{−a}^{x₁} ∂_{x_1} v(s, x₂, …, x_n) ds.
Squaring this identity and using the Cauchy–Schwarz inequality we get
v²(x) = ( ∫_{−a}^{x₁} ∂_{x_1} v(s, x₂, …, x_n) ds )²
 ≤ |x₁ + a| ∫_{−a}^{x₁} ∂_{x_1} v(s, x₂, …, x_n)² ds
 ≤ |x₁ + a| ∫_{−a}^{a} ∂_{x_1} v(s, x₂, …, x_n)² ds.
Integrating this inequality in x₁ we derive
∫_{−a}^{a} v²(x₁, x₂, …, x_n) dx₁ ≤ ( ∫_{−a}^{a} (x₁ + a) dx₁ ) ∫_{−a}^{a} ∂_{x_1} v(s, x₂, …, x_n)² ds = 2a² ∫_{−a}^{a} ∂_{x_1} v(s, x₂, …, x_n)² ds.    (2.50)
Integrating in the other directions leads to (2.49).
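A quick numerical check of (2.49) in dimension one may be instructive (the interval and the two functions below are arbitrary illustrative choices, not from the text): for Ω = (−a, a) and ν = e₁, both functions vanish on ∂Ω and satisfy |v|_{2,Ω} ≤ √2 a |∂v/∂ν|_{2,Ω}.

```python
# Hedged numerical check of the Poincare inequality (2.49) on Omega = (-a, a).
import numpy as np

a = 1.5
x = np.linspace(-a, a, 5001)
dx = x[1] - x[0]

for v in (np.cos(np.pi * x / (2 * a)),        # vanishes at x = +-a
          (a**2 - x**2) * np.sin(3.0 * x)):   # another function vanishing on the boundary
    dv = np.gradient(v, dx)                   # approximation of dv/dx
    lhs = np.sqrt(np.sum(v**2) * dx)          # |v|_{2,Omega}
    rhs = np.sqrt(2.0) * a * np.sqrt(np.sum(dv**2) * dx)
    print(lhs <= rhs, lhs, rhs)
```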
Then we can show
Theorem 2.9. Suppose that Ω is bounded in one direction, i.e. satisfies (2.48). Then on H¹₀(Ω) the norms
|v|_{1,2}  and  |∇v|_{2,Ω}    (2.51)
are equivalent.
Proof. |∇v|_{2,Ω} is the L²(Ω)-norm of the gradient, defined as
|∇v|_{2,Ω} = ( ∫_Ω |∇v(x)|² dx )^{1/2}.
Thus one has
|∇v|_{2,Ω} ≤ |v|_{1,2}   ∀ v ∈ H¹₀(Ω).
Next, due to Theorem 2.8, one has for v ∈ D(Ω)
∫_Ω v² dx ≤ 2a² ∫_Ω (∂v/∂ν)² dx ≤ 2a² ∫_Ω |∇v|² dx    (2.52)
(see (2.47) and recall that ν is a unit vector). It follows that
∫_Ω v² + |∇v|² dx ≤ (1 + 2a²) ∫_Ω |∇v|² dx,
or also
|v|_{1,2} ≤ (1 + 2a²)^{1/2} |∇v|_{2,Ω}.
This completes the proof of the theorem, since D(Ω) is dense in H¹₀(Ω).
We now introduce the dual space of H¹₀(Ω). It is denoted by H^{−1}(Ω), i.e.
H^{−1}(Ω) = (H¹₀(Ω))*.    (2.53)
The notation is justified by the following result.
Theorem 2.10. T ∈ H^{−1}(Ω) iff there exist T₀, T₁, …, T_n ∈ L²(Ω) such that
T = T₀ + Σ_{i=1}^n ∂_{x_i} T_i.    (2.54)
(The derivatives in (2.54) are understood in the distributional sense.)
Proof. First consider a distribution of the type (2.54). For φ ∈ D(Ω) one has
|⟨T, φ⟩| = | ⟨T₀, φ⟩ − Σ_{i=1}^n ⟨T_i, ∂_{x_i} φ⟩ |
 = | ∫_Ω T₀ φ − Σ_{i=1}^n T_i ∂_{x_i} φ dx |
 ≤ ∫_Ω |T₀| |φ| + Σ_{i=1}^n |T_i| |∂_{x_i} φ| dx
 ≤ ∫_Ω ( Σ_{i=0}^n T_i² )^{1/2} ( |φ|² + |∇φ|² )^{1/2} dx.
(We used here the Cauchy–Schwarz inequality in R^{n+1}.) By the Cauchy–Schwarz inequality again we get
|⟨T, φ⟩| ≤ ( ∫_Ω Σ_{i=0}^n T_i² dx )^{1/2} |φ|_{1,2}   ∀ φ ∈ D(Ω).    (2.55)
By density, (2.55) holds for any φ ∈ H¹₀(Ω), and T ∈ (H¹₀(Ω))*.
Conversely, let T be in H^{−1}(Ω). By the Riesz representation theorem there exists u ∈ H¹₀(Ω) such that
∫_Ω uv + ∇u · ∇v dx = ⟨T, v⟩   ∀ v ∈ H¹₀(Ω).
Setting T₀ = u, T_i = −∂_{x_i} u, it is clear from the equality above that
T = T₀ + Σ_{i=1}^n ∂_{x_i} T_i
in D′(Ω). This completes the proof of the theorem.
Remark 2.4. Note that T₀ can be chosen equal to 0 if we use the Riesz representation theorem with the scalar product ∫_Ω ∇u · ∇v dx. The strong dual norm on H^{−1}(Ω) can be defined by
|T|_* = Sup_{v∈H¹₀(Ω)\{0}} |⟨T, v⟩| / |v|_{1,2}.    (2.56)
From Theorem 2.10 one deduces easily that
L²(Ω) ↪ H^{−1}(Ω),    (2.57)
↪ meaning that L²(Ω) is continuously imbedded in H^{−1}(Ω), namely that the identity is a continuous mapping. Indeed, note that if T = T₀ ∈ L²(Ω) then
|⟨T₀, v⟩| ≤ |T₀|_{2,Ω} |v|_{2,Ω} ≤ |T₀|_{2,Ω} |v|_{1,2}   ∀ v ∈ H¹₀(Ω)
(by the Cauchy–Schwarz inequality and (2.42)). It follows that
|T|_* ≤ |T|_{2,Ω},
which proves our assertion. One should remark that Theorem 2.10 is valid for an arbitrary domain Ω of R^n.
Regarding the derivatives of mollified functions we have the following.
Proposition 2.11. Suppose that u ∈ L¹_loc(Ω) is such that ∂_{x_i} u ∈ L¹_loc(Ω) (the derivative is understood in the distributional sense). Then if dist(x, ∂Ω) > ε, the usual derivative in the direction x_i of u_ε = ρ_ε ∗ u exists at x and is given by
∂_{x_i} u_ε(x) = (ρ_ε ∗ ∂_{x_i} u)(x).
Proof. We recall formula (2.10). Differentiating under the integral sign we have
∂_{x_i} u_ε(x) = ∫_Ω u(y) ∂_{x_i} ρ_ε(x − y) dy
 = −∫_Ω u(y) ∂_{y_i}( ρ_ε(x − y) ) dy
 = ⟨∂_{y_i} u, ρ_ε(x − ·)⟩
 = ∫_Ω ∂_{y_i} u(y) ρ_ε(x − y) dy,
which completes the proof.
Then we can show:
Proposition 2.12. Suppose that f is a function from R into R such that
f ∈ C¹(R),  f′ ∈ L^∞(R).
Then if u ∈ L¹_loc(Ω), ∂_{x_i} u ∈ L¹_loc(Ω), we have f(u) ∈ L¹_loc(Ω) and the derivative in the distributional sense is given by
∂_{x_i} f(u) = f′(u) ∂_{x_i} u ∈ L¹_loc(Ω).
In particular, if f(0) = 0, u ∈ H¹(Ω) (resp. H¹₀(Ω)) implies f(u) ∈ H¹(Ω) (resp. H¹₀(Ω)).
Proof. Consider u_ε = ρ_ε ∗ u as in the previous proposition. By the chain rule, for ε < dist(x, ∂Ω), f(u_ε) has a derivative in the usual sense given by
∂_{x_i}( f(u_ε) ) = f′(u_ε) ∂_{x_i} u_ε.
Let φ ∈ D(Ω) and ε < dist(Supp φ, ∂Ω). We have
−⟨f(u_ε), ∂_{x_i} φ⟩ = ⟨f′(u_ε) ∂_{x_i} u_ε, φ⟩.    (2.58)
When ε → 0 we also have, for any compact subset K of Ω,
∫_K |f(u_ε) − f(u)| dx ≤ Sup |f′| ∫_K |u_ε − u| dx → 0,
∫_K |f′(u_ε) ∂_{x_i} u_ε − f′(u) ∂_{x_i} u| dx = ∫_K |f′(u_ε)(∂_{x_i} u_ε − ∂_{x_i} u) + ∂_{x_i} u (f′(u_ε) − f′(u))| dx
 ≤ Sup |f′| ∫_K |∂_{x_i} u_ε − ∂_{x_i} u| dx + ∫_K |∂_{x_i} u| |f′(u_ε) − f′(u)| dx → 0,
by the Lebesgue dominated convergence theorem and Theorem 2.1. This shows the first part of the proposition, by passing to the limit in (2.58). If u ∈ H¹(Ω), the formula for the weak derivative gives of course f(u) ∈ H¹(Ω). If u ∈ H¹₀(Ω) and if φ_n ∈ D(Ω) is a sequence such that φ_n → u in H¹₀(Ω), φ_n → u a.e., we have f(φ_n) → f(u) in H¹(Ω) (just replace u_ε by φ_n in the inequalities above). Now f(φ_n) ∈ H¹₀(Ω), being a C¹ function with compact support (which can be approximated by mollification), and hence f(u) ∈ H¹₀(Ω). This completes the proof.
Remark 2.5. If u ∈ H¹(Ω) one can drop the assumption f(0) = 0 when Ω is bounded. Indeed, in this case f(u) ∈ H¹(Ω) since f(u) ∈ L²(Ω), due to the inequality
|f(u)| ≤ |f(u) − f(0)| + |f(0)| ≤ Sup |f′| |u| + |f(0)|.
We denote by u⁺, u⁻ the positive and negative parts of a function u. Recall that
u⁺ = Max(u, 0) = u ∨ 0,  u⁻ = Max(−u, 0) = −u ∨ 0,  u = u⁺ − u⁻,  |u| = u⁺ + u⁻.
Then we have
Theorem 2.13. Suppose that u ∈ L¹_loc(Ω) and ∂_{x_i} u ∈ L¹_loc(Ω) (the derivative is understood in the distributional sense). Then, in the distributional sense, we have
∂_{x_i} u⁺ = χ_{u>0} ∂_{x_i} u,
where χ_{u>0} denotes the characteristic function of the set {u > 0}.
Proof. Let h_ε be a continuous function such that
h_ε(x) = 0 for x ≤ 0,  h_ε(x) = 1 for x ≥ ε,  0 ≤ h_ε ≤ 1.    (2.59)
Define
H_ε(x) = ∫_{−∞}^{x} h_ε(s) ds.
Clearly H_ε ∈ C¹(R) and H_ε′ = h_ε ∈ L^∞(R). Thus in the distributional sense we have (see Proposition 2.12)
∂_{x_i} H_ε(u) = h_ε(u) ∂_{x_i} u.
This can be written as
−∫_Ω H_ε(u) ∂_{x_i} φ dx = ∫_Ω h_ε(u) ∂_{x_i} u φ dx   ∀ φ ∈ D(Ω).
Passing to the limit ε → 0 we obtain
−∫_Ω u⁺ ∂_{x_i} φ dx = ∫_Ω χ_{u>0} ∂_{x_i} u φ dx   ∀ φ ∈ D(Ω),
and the result follows.
Remark 2.6. One has of course
∂_{x_i} u⁻ = −χ_{u<0} ∂_{x_i} u,
with the obvious notation for χ_{u<0}. As a consequence we also have
(1 − χ_{u≠0}) ∂_{x_i} u = 0,
namely ∂_{x_i} u = 0 a.e. on {u = 0} = { x ∈ Ω | u(x) = 0 }.
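A small numerical check of the formula (a sketch under arbitrary choices: Ω = (−2, 2), u(x) = sin(πx) and one test function): −⟨u⁺, φ′⟩ coincides with ⟨χ_{u>0} u′, φ⟩ up to discretisation error.

```python
# Hedged sketch of Theorem 2.13 for u(x) = sin(pi x) on Omega = (-2, 2).
import numpy as np

x = np.linspace(-2.0, 2.0, 80001)
dx = x[1] - x[0]
u = np.sin(np.pi * x)
du = np.pi * np.cos(np.pi * x)

phi = np.zeros_like(x)                        # a bump function in D(-2, 2)
inside = np.abs(x) < 1.5
phi[inside] = np.exp(-1.0 / (1.5**2 - x[inside]**2))
dphi = np.gradient(phi, dx)

lhs = -np.sum(np.maximum(u, 0.0) * dphi) * dx # -<u+, phi'>
rhs = np.sum((u > 0) * du * phi) * dx         # <chi_{u>0} du/dx, phi>
print(lhs, rhs)                               # agree up to discretisation error
```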
As a corollary we have
Corollary 2.14. Suppose that u ∈ H¹(Ω) (resp. H¹₀(Ω)). Then u⁺ ∈ H¹(Ω) (resp. H¹₀(Ω)) and we have, in the distributional sense,
∂_{x_i} u⁺ = χ_{u>0} ∂_{x_i} u.    (2.60)
Proof. The formula (2.60) is an immediate consequence of the previous theorem; it shows in particular that u⁺ ∈ H¹(Ω). If now φ_n ∈ D(Ω) is such that
φ_n → u  in H¹(Ω),
then, choosing in (2.59) h_ε ∈ C^∞(R) – which is always possible – we see that
H_ε(φ_n) ∈ D(Ω),  H_ε(φ_n) → H_ε(u) ∈ H¹₀(Ω).
When ε → 0, H_ε(u) → u⁺ in H¹(Ω), and thus u⁺ ∈ H¹₀(Ω). This completes the proof of the corollary.
We will need the following corollary.
Corollary 2.15. Let f be a continuous, piecewise C¹ function such that f′ ∈ L^∞(R) and f(0) = 0. If u ∈ H¹(Ω) then f(u) ∈ H¹(Ω) and
∂_{x_i} f(u) = f′(u) ∂_{x_i} u.
If u ∈ H¹₀(Ω), then f(u) ∈ H¹₀(Ω). (See also Remark 2.5.)
Proof. Without loss of generality we can assume that f′ has only one jump, at 0. Then one can write
f(u) = f₁(u⁺) + f₂(u⁻),
where f_i ∈ C¹(R). The result then follows from the previous results.
Definition 2.8 (Compact mapping). A mapping T : A → B, with A, B Banach spaces, is said to be compact iff the image of a ball of A is relatively compact in B.
Then we have:
Theorem 2.16. Suppose that Ω is a bounded open set of R^n. Then the canonical embedding (i.e. the identity map) from H¹₀(Ω) into L²(Ω) is compact.
Proof. Consider
L = { v ∈ H¹₀(Ω) | |v|_{1,2} ≤ R }
(L is the ball of radius R of H¹₀(Ω)). Denote by Ω′ a bounded open set such that
Ω ⊂⊂ Ω′.
(Careful: here it is Ω which is compactly included in Ω′!) Suppose the functions of L extended by 0 outside Ω. We have:
• L is bounded in L²(Ω′).
This follows from
|v|_{2,Ω′} = |v|_{2,Ω} ≤ |v|_{1,2} ≤ R.
• The extension (by 0) of a function v of L belongs to H¹₀(Ω′) – see the definition of H¹₀. Moreover, for any h ∈ R^n such that
|h| ≤ dist(Ω, ∂Ω′)
we have
v(x + h) − v(x) = ∫₀¹ (d/dt) v(x + th) dt = ∫₀¹ ∇v(x + th) · h dt ≤ ( ∫₀¹ |∇v(x + th) · h|² dt )^{1/2}.
Squaring and integrating the resulting inequality on Ω we get
∫_Ω |v(x + h) − v(x)|² dx ≤ ∫_Ω ∫₀¹ |∇v(x + th)|² |h|² dt dx
 = |h|² ∫₀¹ ∫_Ω |∇v(x + th)|² dx dt
 ≤ |h|² ∫₀¹ ∫_{Ω′} |∇v(y)|² dy dt
 ≤ |h|² R².
The result then follows from Theorem 2.2. Note that the inequality above can first be established for functions in D(Ω) and then holds by density.
The theorem is used in what follows, and in partial differential equations in general, under the following form:
Theorem 2.17. Let u_n be a bounded sequence in H¹₀(Ω), that is to say such that
|u_n|_{1,2} ≤ R   ∀ n ∈ N
for some R > 0. Then there exist a subsequence n_k and u ∈ H¹₀(Ω) such that
u_{n_k} → u  in L²(Ω),   u_{n_k} ⇀ u  in H¹₀(Ω).
Proof. There exist a subsequence n′_k and u′ ∈ H¹₀(Ω) such that
u_{n′_k} ⇀ u′  in H¹₀(Ω).
From Theorem 2.16 there exists a subsequence of n′_k, say n_k, such that
u_{n_k} → u  in L²(Ω).
Of course u and u′ have to be the same, and this completes the proof of the theorem.
Remark 2.7. Theorem 2.16 can be extended to H 1 (Ω) and to more general spaces –
provided we have a suitable extension of the functions of H 1 (Ω) in a neighbourhood of Ω.
This is the case for instance if we assume Ω to be piecewise C 1 – see [14], [29].
Exercises
1. (Generalized Hölder inequality)
Let f_i, i = 1, …, k, be functions in L^{p_i}(Ω) with
1/p₁ + 1/p₂ + ··· + 1/p_k = 1.
Show that ∏_{i=1}^k f_i ∈ L¹(Ω) and that
| ∏_{i=1}^k f_i |_{1,Ω} ≤ ∏_{i=1}^k |f_i|_{p_i,Ω}.
2. We set
η(t) = e^{−1/t}  if t > 0,   η(t) = 0  if t ≤ 0.
Show that η is infinitely differentiable in R. Deduce that the function ρ defined by
(2.4) belongs to D(Rn ).
3. Show that D(Ω) is an infinite dimensional vector space on R.
4. Let H ∈ L¹_loc(R) be the function defined by
H(x) = 1 if x > 0,   H(x) = 0 if x < 0.
Show that the derivative H′ of H in D′(R) is given by
H′ = δ₀,
where δ₀ denotes the Dirac mass at 0.
5. Show that (2.36) holds.
6. Let T ∈ D′(R) be such that T′ = 0. Show that T is constant.
7. Show – for instance for Ω bounded – that the function constant equal to 1 belongs to H¹(Ω) but not to H¹₀(Ω). (Hint: one can use (2.49).)
8. Show that the two norms
|u|_{2,Ω} + |∂_{x_1} u|_{2,Ω}  and  |u|_{1,2}
are not equivalent on H¹(Ω).
9. Let Ω = (0, 1). Show that
u ↦ u(1)
is a continuous linear form on H¹(Ω) – i.e. belongs to H¹(Ω)* – but is not a distribution (otherwise it would be the zero one).
10. Let u be a Lipschitz continuous function in Ω. Show, when Ω is bounded, that
u ∈ H 1 (Ω) and
∂xi u ∈ L∞ (Ω) ∀ i = 1, . . . , n.
11. Let u be a Lipschitz continuous function in Ω vanishing on ∂Ω. Show that u ∈
H01 (Ω). Show that
H 1 (Ω) ∩ C0 (Ω) ⊂ H01 (Ω).
(C0 (Ω) denotes the space of continuous functions vanishing on ∂Ω).
12. Let u ∈ H¹(Ω) where Ω is a domain in R^n, i.e. a connected open subset. Show that
∇u = 0 in Ω  ⟹  u = constant.
13. Let Ω be a bounded open set in R. Let p ∈ L²(Ω) and
T = p′ ∈ H^{−1}(Ω).
Show that (see (2.56))
|T|_* = |p|_{2,Ω}.
14. Show that on H¹(R) the two norms
|u|_{1,2}  and  |u′|_{2,R}
are not equivalent.
Chapter 3
Weak formulation of elliptic problems
Solving a partial differential equation is not an easy task. We explain here the weak formulation method, which allows one to obtain existence and uniqueness of a solution "in a certain sense", and which necessarily coincides with the solution in the usual sense if the latter exists. First let us explain briefly in which context second order elliptic partial differential equations arise.
3.1 Motivation
Let Ω be a domain of R³. Suppose that u denotes the density of some quantity (population, heat, fluid, …) which diffuses in Ω at a constant rate, i.e. the velocity of the diffusion is independent of time. It is well known that in such a situation the diffusion velocity is proportional to the gradient of u; that is to say, if v⃗ is the diffusion velocity, one has
v⃗ = −a∇u,    (3.1)
where a is some constant depending on the problem at hand. To understand the idea behind the formula, suppose that u is a temperature. Consider two neighbouring points
x,  x + hν,
where ν is a unit vector. The flow of temperature between these two points has a velocity proportional to the difference of temperature and inversely proportional to the distance between the points. In other words, the component of v⃗ in the direction ν is given by
v⃗ · ν = −a (u(x + hν) − u(x)) / h,    (3.2)
where a is some positive constant. The sign "−" takes into account the fact that if u(x + hν) > u(x) then the flow of heat goes in the direction opposite to ν (heat, population, … diffuse from high concentrations to low ones). Passing to the limit in h we get
v⃗ · ν = −a∇u(x) · ν.    (3.3)
This equality holding for every ν, one clearly has (3.1). (We refer the reader to [17] for a more substantial analysis.)
Assuming (3.1) and – to simplify – a = 1, consider a cube Q contained in Ω. If ∂Q denotes the union of the faces of Q, the quantity diffusing out of Q is given by
∫_{∂Q} v⃗ · n⃗ dσ(x),
where n⃗ denotes the outward normal to ∂Q and dσ the surface element on ∂Q. Since we suppose that the situation is steady, that is to say does not change with time, this flow through ∂Q has to be compensated by an input
∫_Q f dx,
where f(x) is the local input – of population, heat, … – brought into the system, and we have
∫_{∂Q} v⃗ · n⃗ dσ(x) = ∫_Q f dx.    (3.4)
Taking into account (3.1) with a = 1 we have
−∫_{∂Q} ∇u · n⃗ dσ(x) = ∫_Q f dx.    (3.5)

[Figure 3.1: A cube Q with two opposite faces F₁, F₂ orthogonal to the x₁-axis.]

Let us for instance evaluate the first integral above on the faces F₁, F₂ given by
F_i = { x = (x₁, x₂, x₃) ∈ ∂Q | x₁ = α_i },  i = 1, 2  (α₁ < α₂).
One has
−∫_{F₁} ∇u · n⃗ dσ(x) − ∫_{F₂} ∇u · n⃗ dσ(x) = −∫_{Π(F₁)} [ ∂u/∂x₁(α₂, x₂, x₃) − ∂u/∂x₁(α₁, x₂, x₃) ] dx₂ dx₃,
where Π(F₁) denotes the orthogonal projection of F₁ on the (x₂, x₃)-plane. But the quantity above can also be written
−∫_{Π(F₁)} ∫_{α₁}^{α₂} ∂²u/∂x₁² (x₁, x₂, x₃) dx₁ dx₂ dx₃ = −∫_Q ∂²_{x₁} u(x) dx.
Arguing similarly for the other faces, we obtain from (3.5)
−∫_Q (∂²_{x₁} u + ∂²_{x₂} u + ∂²_{x₃} u) dx = ∫_Q f dx,
i.e.
−∫_Q ∆u dx = ∫_Q f dx,    (3.6)
where ∆ is the usual second order operator defined by
∆u = Σ_{i=1}^3 ∂²u/∂x_i².
Now the equality (3.6) is true for any cube Q, and thus if, for instance, ∆u and f are continuous, this forces
−∆u(x) = f(x)   ∀ x ∈ Ω.    (3.7)
This equation is called the Laplace equation. To this equation one generally adds a boundary condition. For instance
u = 0 on ∂Ω    (3.8)
(in the case of diffusion of temperature, the temperature is prescribed on the boundary of the body – this is for instance the room temperature, which we can normalize to 0), or
∂u/∂n = 0 on ∂Ω, equivalently v⃗ · n⃗ = 0 on ∂Ω    (3.9)
(n⃗ is the outward unit normal to ∂Ω – the body is insulated and there is no flux of temperature through its boundary).
Thus we are led to the problem of finding u such that
−∆u = f in Ω,   u = 0 on ∂Ω.    (3.10)
This problem is called the "Dirichlet problem". Or we could be led to look for u solution to
−∆u = f in Ω,   ∂u/∂n = 0 on ∂Ω.    (3.11)
This problem is a "Neumann problem".
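The balance (3.4)–(3.6) behind this derivation can be checked numerically. The following sketch (the square Q = (0,1)², the function u and the Riemann sums are illustrative assumptions) compares the outward flux of ∇u through ∂Q with ∫_Q ∆u dx.

```python
# Hedged 2-D check of the balance (3.6) on Q = (0, 1)^2 for u(x, y) = sin(pi x) e^y.
import numpy as np

n = 400
s = (np.arange(n) + 0.5) / n                  # midpoints of a uniform grid on (0, 1)
h = 1.0 / n
X, Y = np.meshgrid(s, s, indexing="ij")

ux = lambda x, y: np.pi * np.cos(np.pi * x) * np.exp(y)              # du/dx
uy = lambda x, y: np.sin(np.pi * x) * np.exp(y)                      # du/dy
lap = lambda x, y: (1.0 - np.pi**2) * np.sin(np.pi * x) * np.exp(y)  # Laplacian of u

volume = np.sum(lap(X, Y)) * h * h            # int_Q Laplacian(u) dx
flux = (np.sum(ux(1.0, s)) - np.sum(ux(0.0, s))
        + np.sum(uy(s, 1.0)) - np.sum(uy(s, 0.0))) * h               # int_{dQ} grad(u).n dsigma
print(volume, flux)                           # agree up to O(h^2)
```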
3.2 The weak formulation
Suppose that we want to solve the Dirichlet problem. That is to say, we are given an open subset Ω of R^n with boundary ∂Ω and we would like to find a function u which satisfies
−∆u = f in Ω,   u = 0 on ∂Ω.    (3.12)
∆ is the Laplace operator in R^n given by
∆ = Σ_{i=1}^n ∂²_{x_i},    (3.13)
f is a function that, for the time being, we can assume continuous on Ω̄.
If u ∈ C²(Ω) ∩ C⁰(Ω̄) satisfies (3.12) pointwise, one says that u is a "strong solution" of the problem (3.12) (C^k(Ω) denotes the space of functions k times differentiable in Ω, C⁰(Ω̄) the space of continuous functions on Ω̄). Such a solution is not easy to find. One way to overcome the problem is to weaken our requirements on u. For instance one could only require the first equation of (3.12) to hold in the distributional sense. To proceed in this direction let us assume that u is a strong solution. Let φ ∈ D(Ω). Multiplying both sides of (3.12) by φ and integrating on Ω we get
−∫_Ω ∆u φ dx = ∫_Ω f φ dx.
This can also be written
Σ_{i=1}^n ⟨−∂²_{x_i} u, φ⟩ = ∫_Ω f φ dx.    (3.14)
(We have seen that for a C² function the usual derivatives and the derivatives in the distributional sense coincide – see (2.32).) Due to the definition of the derivative in the sense of distributions this can also be written
Σ_{i=1}^n ⟨∂_{x_i} u, ∂_{x_i} φ⟩ = ∫_Ω f φ dx,
or, since ∂_{x_i} u is a function,
∫_Ω ∇u · ∇φ dx = ∫_Ω f φ dx   ∀ φ ∈ D(Ω).    (3.15)
Thus a strong solution satisfies (3.15). Now one could replace the pointwise condition
u = 0 on ∂Ω
by u ∈ H¹₀(Ω); then one would be led to find u such that
u ∈ H¹₀(Ω),   ∫_Ω ∇u · ∇v dx = ∫_Ω f v dx   ∀ v ∈ H¹₀(Ω)    (3.16)
(if u ∈ H¹₀(Ω) then, by density of D(Ω) in H¹₀(Ω), (3.15) holds for every function φ in H¹₀(Ω)). (3.16) is called the weak formulation of (3.12). If u is a strong solution, and if the pointwise condition u = 0 on ∂Ω implies that u ∈ H¹₀(Ω), then u is a solution to (3.16). If on the other hand the solution to (3.16) is unique, it will be the strong solution we are looking for. Thus, in case of uniqueness, (3.16) will deliver the strong solution to (3.12). It will also deliver a "generalized solution" when a strong solution does not exist. To see that we have reached this goal we can state:
Theorem 3.1. Let f ∈ H^{−1}(Ω). If Ω is bounded in one direction there exists a unique u solution to
u ∈ H¹₀(Ω),   ∫_Ω ∇u · ∇v dx = ⟨f, v⟩   ∀ v ∈ H¹₀(Ω).    (3.17)
Proof. By Theorem 2.9, the scalar product on H¹₀(Ω) can be chosen to be
(u, v) = ∫_Ω ∇u · ∇v dx.
Then (3.17) follows immediately from the Riesz representation theorem.
Remark 3.1. If for instance f ∈ L²(Ω) – see (2.54) – then (3.17) is uniquely solvable. Now the pointwise equality (3.12) can fail – imagine a discontinuous f. Nevertheless we have found a good ersatz for the solution: we have obtained a solution in a generalized sense.
Remark 3.2. One should note that u also minimizes the functional
J(v) = ½ ∫_Ω |∇v|² dx − ⟨f, v⟩
(see Theorem 1.5). The integral
∫_Ω |∇v|² dx
is sometimes called the Dirichlet integral.
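To see the weak formulation in action, here is a hedged Galerkin sketch (the interval Ω = (0, 1), the choice f = 1 and the P1 finite element space are illustrative assumptions, not part of the text): restricting (3.17) to the span of hat functions gives a linear system whose solution approximates the exact solution u(x) = x(1 − x)/2 of −u″ = 1, u(0) = u(1) = 0.

```python
# Hedged Galerkin sketch of (3.17) on Omega = (0, 1) with f = 1 and P1 hat functions.
import numpy as np

N = 100                                       # number of interior nodes
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)                # interior nodes

# stiffness matrix K_ij = int_0^1 phi_i' phi_j' dx for the P1 hat functions
K = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h
F = np.full(N, h)                             # load vector: int_0^1 f phi_i dx with f = 1

c = np.linalg.solve(K, F)                     # nodal values of the Galerkin solution
print(np.max(np.abs(c - x * (1.0 - x) / 2.0)))  # small error against the exact solution
```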
In the same spirit as above we have
Theorem 3.2. Let Ω be an arbitrary domain of R^n and f ∈ L²(Ω). There exists a unique u solution to
u ∈ H¹(Ω),   ∫_Ω ∇u · ∇v + uv dx = ∫_Ω f v dx   ∀ v ∈ H¹(Ω).    (3.18)
Proof. One has
∫_Ω f v dx ≤ ∫_Ω |f| |v| dx ≤ |f|_{2,Ω} |v|_{2,Ω} ≤ |f|_{2,Ω} |v|_{1,2},
i.e.
v ↦ ∫_Ω f v dx
is a continuous linear form on H¹(Ω). The result then follows from the Riesz representation theorem.
Remark 3.3. We do not have u ∈ H¹₀(Ω) of course. Indeed, supposing Ω bounded and f = 1, the solution is given by u = 1 ∉ H¹₀(Ω) – see the exercises.
Now, since D(Ω) ⊂ H¹(Ω), we have from (3.18)
Σ_{i=1}^n ⟨∂_{x_i} u, ∂_{x_i} φ⟩ + ⟨u, φ⟩ = ⟨f, φ⟩  ∀ φ ∈ D(Ω)  ⟺  Σ_{i=1}^n ⟨−∂²_{x_i} u, φ⟩ + ⟨u, φ⟩ = ⟨f, φ⟩  ∀ φ ∈ D(Ω),
i.e. u is a solution of
−∆u + u = f  in D′(Ω).    (3.19)
The boundary condition is here hidden. Suppose that ∂Ω and u are "smooth"; then one has, for v smooth,
∫_Ω ∇u · ∇v dx = ∫_Ω div(v∇u) − v∆u dx
(recall that div w = Σ_{i=1}^n ∂_{x_i} w_i for every vector field w = (w₁, …, w_n)). By the divergence formula – see [17], [29] – one then has
∫_Ω ∇u · ∇v dx = ∫_{∂Ω} (∂u/∂n) v dσ(x) − ∫_Ω v∆u dx.
Going back to (3.18) we have obtained
∫_{∂Ω} (∂u/∂n) v dσ(x) + ∫_Ω (−∆u + u)v dx = ∫_Ω f v dx,
i.e., by (3.19), if (3.19) holds in the usual sense,
∫_{∂Ω} (∂u/∂n) v dσ(x) = 0   ∀ v smooth.    (3.20)
In a weak sense we have
∂u/∂n = 0 on ∂Ω,
and (3.18) is the weak formulation of the "Neumann problem"
−∆u + u = f in Ω,   ∂u/∂n = 0 on ∂Ω.    (3.21)
Remark 3.4. In (3.18) one can replace H¹(Ω) by any closed subspace V of H¹(Ω) to get existence and uniqueness of a u such that
u ∈ V,   ∫_Ω ∇u · ∇v + uv dx = ∫_Ω f v dx   ∀ v ∈ V.    (3.22)
For instance, if V = H¹₀(Ω), (3.22) is the weak formulation of the problem
−∆u + u = f in Ω,   u = 0 on ∂Ω.
Exercises
1. Let h, f be two continuous functions in Ω. If
∫_Q h dx = ∫_Q f dx   for every cube Q included in Ω,
show that h = f (this proves (3.7)).
2. Let Ω be a bounded open set of R^n and
V = { v ∈ H¹(Ω) | ∫_Ω v dx = 0 }.
Show that V is a closed subspace of H¹(Ω). Show that the unique solution u to (3.22) satisfies, in the distributional sense,
−∆u + u = f − ⨍_Ω f dx,
where ⨍_Ω f dx is the average of f on Ω, defined as
⨍_Ω f dx = (1/|Ω|) ∫_Ω f(x) dx
(|Ω| denotes the measure of Ω).
What is the boundary condition satisfied by u?
3. Let f ∈ L²(Ω). Consider u the solution to
−∆u = f in Ω,   u ∈ H¹₀(Ω).
Let k ∈ R. Show that in the distributional sense
−∆(u ∨ k) ≤ f χ_{u>k}  in Ω.
(Consider as test function (((u ∨ k) − k)⁺/ε) ∧ v, with ε > 0, v ∈ H¹₀(Ω), v ≥ 0; ∨ denotes the maximum and ∧ the minimum of two numbers.)
Chapter 4
Elliptic problems in divergence form
The goal of this chapter is to introduce general elliptic problems extending those already
addressed in the preceding chapter and putting them all in the same framework.
4.1 Weak formulation
We suppose here that Ω is an open subset of R^n, n ≥ 1. Denote by A = A(x) an n × n matrix with entries a_{ij}, i.e.
A(x) = (a_{ij}(x))_{i,j=1,…,n}.    (4.1)
Suppose that a_{ij} ∈ L^∞(Ω) ∀ i, j = 1, …, n and that A is uniformly bounded and uniformly positive definite. In other words, suppose that for some positive constants λ and Λ we have
|A(x)ξ| ≤ Λ|ξ|   ∀ ξ ∈ R^n, a.e. x ∈ Ω,    (4.2)
λ|ξ|² ≤ A(x)ξ · ξ   ∀ ξ ∈ R^n, a.e. x ∈ Ω.    (4.3)
(In these formulas "·" denotes the canonical scalar product in R^n, |·| the associated norm, and A(x)ξ denotes the vector of R^n obtained by multiplying the matrix A(x) by the column vector ξ.)
Remark 4.1. The assumption (4.2) is equivalent to assuming that the a_{ij} are uniformly bounded in Ω. Indeed
‖A‖ = Sup_{ξ≠0} |Aξ| / |ξ|    (4.4)
is a norm on the space of matrices, which is equivalent to any other norm since the space of matrices is finite dimensional. In particular it is equivalent to
‖A‖_∞ = Max_{i,j} |a_{ij}|,    (4.5)
which establishes our claim.
In addition we denote by a a function satisfying
a ∈ L^∞(Ω),  a(x) ≥ 0 a.e. x ∈ Ω    (4.6)
(this last assumption could be relaxed slightly – see Remark 4.8).
Then we have
Theorem 4.1. Let f ∈ H^{−1}(Ω). If
(i) Ω is bounded in one direction,    (4.7)
or
(ii) a(x) ≥ a₀ > 0 a.e. x ∈ Ω    (4.8)
for a positive constant a₀, then there exists a unique weak solution to
u ∈ H¹₀(Ω),   ∫_Ω A(x)∇u · ∇v + a(x)uv dx = ⟨f, v⟩   ∀ v ∈ H¹₀(Ω).    (4.9)
Proof. The proof is an easy consequence of the Lax–Milgram theorem. Indeed, consider H¹₀(Ω) equipped with the scalar product defined by (2.40),
(u, v)_{1,2} = ∫_Ω ∇u · ∇v + uv dx.
We have seen that H¹₀(Ω) is a Hilbert space. Consider then
a(u, v) = ∫_Ω A(x)∇u · ∇v + a(x)uv dx.    (4.10)
• a(u, v) is a continuous bilinear form on H¹₀(Ω).
The bilinearity is clear. For the continuity one remarks that
|a(u, v)| ≤ ∫_Ω |A(x)∇u · ∇v| + a(x)|u| |v| dx
 ≤ ∫_Ω |A(x)∇u| |∇v| + a(x)|u| |v| dx   (Cauchy–Schwarz)
 ≤ ∫_Ω Λ|∇u| |∇v| + |a|_∞ |u| |v| dx,    (4.11)
where |a|_∞ denotes the L^∞(Ω)-norm of a. It follows that
|a(u, v)| ≤ Max{Λ, |a|_∞} ∫_Ω |∇u| |∇v| + |u| |v| dx
 ≤ Max{Λ, |a|_∞} ( |∇u|_{2,Ω} |∇v|_{2,Ω} + |u|_{2,Ω} |v|_{2,Ω} )   (by the Cauchy–Schwarz inequality)
 ≤ Max{Λ, |a|_∞} |u|_{1,2} |v|_{1,2}   (by the Cauchy–Schwarz inequality in R²).
This completes the proof of our claim. Note that (4.11) shows at the same time that the function under the integral sign in (4.10) is integrable, and thus a(u, v) is well defined.
• a(u, v) is coercive on H¹₀(Ω).
Indeed one has
a(u, u) = ∫_Ω A∇u · ∇u + au² dx ≥ ∫_Ω λ|∇u|² + au² dx ≥ λ|∇u|²_{2,Ω} in case (i),   ≥ Min{λ, a₀}|u|²_{1,2} in case (ii).
The coercivity of a(u, v) follows, since in case (i) one has, for some constant c,
|∇u|_{2,Ω} ≥ c|u|_{1,2}
(see Theorem 2.9). Since f ∈ H^{−1}(Ω), the result then follows from the Lax–Milgram theorem (Theorem 1.5).
In the case of (ii), i.e. if
a(x) ≥ a₀ > 0,    (4.12)
one can replace H¹₀(Ω) in (4.9) by any closed subspace of H¹(Ω). More precisely we have
Theorem 4.2. Assume that (4.2), (4.3), (4.6) and (4.12) hold. Let V be a closed subspace of H¹(Ω) and f ∈ V*, where V* denotes the dual space of V. Then there exists a unique u solution to
u ∈ V,   ∫_Ω A∇u · ∇v + auv dx = ⟨f, v⟩   ∀ v ∈ V.    (4.13)
Proof. This is a consequence of the Lax–Milgram theorem since, as we have seen in the proof of Theorem 4.1, the form a(u, v) is bilinear, continuous and coercive on V equipped with the |·|_{1,2}-norm.
Remark 4.2. If f ∈ L²(Ω) then
⟨f, v⟩ = ∫_Ω f v dx
defines a continuous linear form on any V as in Theorem 4.2, since by the Cauchy–Schwarz inequality we have
|⟨f, v⟩| ≤ |f|₂ |v|₂ ≤ |f|₂ |v|_{1,2}   ∀ v ∈ V.
Remark 4.3. One could replace (ii) or (4.12) by
a(x) ≥ 0,  a ≢ 0
(see [17]).
Remark 4.4. In the case where A is symmetric, i.e.
A(x) = A(x)^T,
then for any ξ, η ∈ R^n one has
Aξ · η = ξ · A^T η = ξ · Aη,
and a(u, v) is symmetric. It follows (see Theorem 1.5) that in Theorems 4.1 and 4.2, u is the unique minimizer of the functional
J(v) = ½ ∫_Ω A∇v · ∇v + av² dx − ⟨f, v⟩
on H¹₀(Ω) and V respectively. Note also that the two theorems above extend results that we had already established in the previous chapter in the case A = Id. The terminology "elliptic" comes of course from (4.3), since for a matrix A satisfying (4.3) the set
E = { ξ ∈ R^n | Aξ · ξ = 1 }
is an ellipsoid – an ellipse in the case n = 2.
Remark 4.5. Just as we could allow a to degenerate, we can do the same for A(x) – i.e. we can relax (4.3). Indeed, in the case of Theorem 4.1 (i) for instance, all we need is the existence of a constant α > 0 such that
∫_Ω A(x)∇u · ∇u + a(x)u² dx ≥ α|u|²_{1,2}   ∀ u ∈ H¹₀(Ω).    (4.14)
As in the case of the Laplace operator – i.e. when A = id – the problems above are
weak formulations of elliptic partial differential equations in the usual or strong sense.
This is what we would like to see now. Let us consider first the case of Theorem 4.1.
Taking v = ϕ ∈ D(Ω) we obtain

    Σ_{i=1}^n ∫_Ω (A(x)∇u)_i ∂_{x_i}ϕ dx + ⟨a(x)u, ϕ⟩ = ⟨f, ϕ⟩      (4.15)

where (A(x)∇u)_i denotes the i-th component of the vector A(x)∇u. This can be written
as

    − Σ_{i=1}^n ⟨∂_{x_i}(A(x)∇u)_i, ϕ⟩ + ⟨a(x)u, ϕ⟩ = ⟨f, ϕ⟩      (4.16)

i.e. u is solution in the distributional sense of

    − Σ_{i=1}^n ∂_{x_i}(A(x)∇u)_i + a(x)u = f

or, if we use the divergence notation,

    − div(A(x)∇u) + a(x)u = f   in Ω.      (4.17)
(Recall that for a vector ω⃗ = (ω_1, …, ω_n) ∈ R^n, div ω⃗ = Σ_{i=1}^n ∂_{x_i}ω_i). Thus since H^1_0(Ω)
is the space of functions vanishing on ∂Ω the solution u of (4.9) is the weak solution to

    − div(A(x)∇u) + a(x)u = f   in Ω,
      u = 0                     on ∂Ω.      (4.18)
To have something which looks more like an equation – i.e. which does not involve vectors
one can note that

    (A(x)∇u)_i = Σ_{j=1}^n a_{ij}(x) ∂_{x_j}u

and thus the first equation of (4.18) can be written

    − Σ_{i=1}^n ∂_{x_i}( Σ_{j=1}^n a_{ij}(x) ∂_{x_j}u ) + a(x)u = f
    ⟺  − Σ_{i=1}^n Σ_{j=1}^n ∂_{x_i}(a_{ij}(x) ∂_{x_j}u) + a(x)u = f.

With the Einstein convention of repeated indices – i.e. one drops the sign Σ when two
indices repeat each other – the system (4.18) has the form:

    −∂_{x_i}(a_{ij}(x) ∂_{x_j}u) + a(x)u = f   in Ω,
     u = 0                                     on ∂Ω.      (4.19)
We now turn to the interpretation of Theorem 4.2. For that we will need the divergence
formula that we recall here.
Theorem 4.3 (Divergence formula). Suppose that Ω is a "smooth" open subset of R^n
with outward unit normal n⃗. For any "smooth" vector field ω⃗ in Ω we have

    ∫_Ω div ω⃗ dx = ∫_{∂Ω} ω⃗ · n⃗ dσ(x)      (4.20)

(dσ(x) is the area measure on ∂Ω).
Note that in this theorem we do not make precise what "smooth" means. Basically (4.20)
holds when one can make sense of the different quantities n⃗, dσ(x), the integrals, div ω⃗
occurring – see [6], [17], [29]. Note also that in the case of a simple domain such as a cube
the formula (4.20) can be easily established by arguing as we did between (3.5) and (3.6)
with ω⃗ = ∇u. To interpret Theorem 4.2 in a strong form we consider only the case where
V = H^1(Ω), f ∈ L²(Ω). Taking then v = ϕ ∈ D(Ω) we derive as above that

    − div(A(x)∇u) + au = f   in D'(Ω).      (4.21)
Now, of course, more test-functions v are available in (4.13) and assuming everything
“smooth” the second equation of (4.13) can be written as
    ∫_Ω div(vA(x)∇u) − div(A(x)∇u)v + auv dx = ∫_Ω f v dx.

Taking into account (4.21) we derive

    ∫_Ω div(vA(x)∇u) dx = 0

and by the divergence formula

    ∫_{∂Ω} A(x)∇u · n v dσ(x) = 0   ∀ v "smooth".

Thus if every computation can be performed we end up with

    A(x)∇u · n = 0   on ∂Ω.

Thus (4.13) is a weak formulation of the so called Neumann problem

    − div(A(x)∇u) + a(x)u = f   in Ω,
      A(x)∇u · n = 0            on ∂Ω.      (4.22)
Recall that the first equation of (4.22) can be also written as in (4.19).
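
The divergence formula used in this derivation can be checked numerically on a very simple example (assumed only for illustration): Ω the unit square and ω⃗ = (x₁x₂, x₁ − x₂²).

    import numpy as np

    # Check of (4.20) on Ω = (0,1)², ω = (x*y, x - y²)  (illustrative only).
    n = 400
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h                  # cell / edge midpoints
    X, Y = np.meshgrid(t, t, indexing="ij")

    div_w = Y - 2 * Y                             # div ω = ∂x(xy) + ∂y(x - y²) = -y
    lhs = div_w.sum() * h * h                     # ∫_Ω div ω dx

    rhs = (np.sum(1 * t) * h                      # x = 1, n = (1,0):  ω·n =  y
           - np.sum(0 * t) * h                    # x = 0, n = (-1,0): ω·n =  0
           + np.sum(t - 1) * h                    # y = 1, n = (0,1):  ω·n =  x - 1
           - np.sum(t) * h)                       # y = 0, n = (0,-1): ω·n = -x
    print(lhs, rhs)                               # both close to -1/2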
Remark 4.6. In the case where A(x) = Id then the second equation of (4.22) is

    ∂u/∂n = 0   on ∂Ω

namely the normal derivative of u vanishes on ∂Ω. We already encountered this boundary
condition in the preceding chapter. In the general case and with the summation convention
the second equation above can be written

    a_{ij}(x) ∂_{x_j}u n_i = 0   on ∂Ω

if n = (n_1, …, n_n). This expression is called the "conormal derivative of u".
Remark 4.7. In (4.13) if V is a finite-dimensional subspace of H^1(Ω) and f a linear form on
V then there exists a unique solution u. This follows from the fact that a finite-dimensional
subspace is automatically closed and a linear form on it is automatically continuous.
Similarly in (4.9) one can replace H^1_0(Ω) by any closed subspace V and get existence and
uniqueness for any f ∈ V*. The case of finite-dimensional subspaces is especially important
in the so called finite element method to compute approximations of the solutions
to problems (4.9) and (4.13). We refer the reader to Chapter ??.
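
As a small illustration of this remark (assumed data; the Galerkin space and the model problem are chosen only for the example), one can take V spanned by piecewise linear "hat" functions on a mesh of (0,1) and solve the resulting linear system; this is the P1 finite element method in one dimension.

    import numpy as np

    # Galerkin/P1 sketch for -u'' + u = 1 on (0,1), u(0) = u(1) = 0 (assumed example).
    n = 50                                   # interior nodes
    h = 1.0 / (n + 1)

    K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h           # stiffness: integrals of φ_i' φ_j'
    M = h / 6 * (np.diag(4 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1))        # mass: integrals of φ_i φ_j
    b = h * np.ones(n)                                 # load: <1, φ_i>

    u = np.linalg.solve(K + M, b)                      # Galerkin coefficients

    x = np.linspace(h, 1 - h, n)
    u_exact = 1 - np.cosh(x - 0.5) / np.cosh(0.5)      # exact solution of the model problem
    print("max nodal error:", np.abs(u - u_exact).max())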
4.2    The weak maximum principle

The maximum principle has several aspects. One of them is that if u = u(f) is the solution to
(4.9) for instance then, if f increases, so does u. First we have to make precise what we
mean by f increasing. For that we have
Definition 4.1. Let T_1, T_2 ∈ D'(Ω). We say that

    T_1 ≤ T_2      (4.23)

iff

    ⟨T_1, ϕ⟩ ≤ ⟨T_2, ϕ⟩   ∀ ϕ ∈ D(Ω), ϕ ≥ 0.      (4.24)
As we gave a meaning for a function of H 1 (Ω) to vanish on ∂Ω we can give a meaning for
such a function to be negative on ∂Ω. We have
Definition 4.2. Let u_1, u_2 ∈ H^1(Ω); we say that

    u_1 ≤ u_2   on ∂Ω

iff

    (u_1 − u_2)^+ ∈ H^1_0(Ω).      (4.25)
We can now state
Theorem 4.4. Let A = A(x) be a matrix satisfying (4.2), (4.3). Let a ∈ L^∞(Ω) be a
nonnegative function. For u ∈ H^1(Ω) we define −L as

    −Lu = − div(A(x)∇u) + a(x)u.      (4.26)

Then we have: let u_1, u_2 ∈ H^1(Ω) be such that

    −Lu_1 ≤ −Lu_2   in D'(Ω),      (4.27)
     u_1 ≤ u_2       on ∂Ω,        (4.28)

then

    u_1 ≤ u_2   in Ω,

that is to say u_1 is a function a.e. smaller than u_2.
Proof. By (4.27), (4.24) we have

    ∫_Ω A(x)∇u_1 · ∇ϕ + au_1ϕ dx ≤ ∫_Ω A(x)∇u_2 · ∇ϕ + au_2ϕ dx

for any ϕ ∈ D(Ω), ϕ ≥ 0, which is also

    ∫_Ω A(x)∇(u_1 − u_2) · ∇ϕ + a(u_1 − u_2)ϕ dx ≤ 0   ∀ ϕ ∈ D(Ω), ϕ ≥ 0.      (4.29)

Then we have the following claim.
Claim. Let u ∈ H01 (Ω), u ≥ 0 then there exists ϕn ∈ D(Ω), ϕn ≥ 0 such that ϕn → u in
H 1 (Ω).
Proof of the claim: Since u ∈ H01 (Ω) there exists ϕn ∈ D(Ω) such that
ϕn → u in H01 (Ω).
We also have

    ∫_Ω |∇(ϕ_n^+ − u)|² dx = ∫_{ϕ_n>0} |∇(ϕ_n − u)|² dx + ∫_{ϕ_n<0} |∇u|² dx
                            ≤ ∫_Ω |∇(ϕ_n − u)|² dx + ∫_{u>0} χ_{ϕ_n<0} |∇u|² dx,      (4.30)

see Remark 2.6. Up to a subsequence we have

    ϕ_n → u ≥ 0   a.e.

i.e. χ_{ϕ_n<0} → 0 a.e. on {u > 0}. It follows from (4.30) that

    ϕ_n^+ −→ u.

(The whole sequence converges since the limit is unique). Then it is enough to show that
ϕ_n^+ can be approximated by a nonnegative function of D(Ω);

    ρ_ε ∗ ϕ_n^+

for ε small enough is such a function. This completes the proof of the claim.
Returning to (4.29) we then have

    ∫_Ω A∇(u_1 − u_2) · ∇v + a(u_1 − u_2)v dx ≤ 0   ∀ v ∈ H^1_0(Ω), v ≥ 0.

Taking v = (u_1 − u_2)^+ – see (4.25), (4.28) – we derive

    ∫_Ω A∇(u_1 − u_2) · ∇(u_1 − u_2)^+ + a ((u_1 − u_2)^+)² dx ≤ 0
    ⟺  ∫_Ω A∇(u_1 − u_2)^+ · ∇(u_1 − u_2)^+ + a ((u_1 − u_2)^+)² dx ≤ 0      (4.31)
(see (2.60)). It follows that (u1 − u2 )+ = 0 which completes the proof (see Exercise 12,
Chapter 2).
Remark 4.8. One can weaken slightly the assumption on a. Indeed if λ1 is the first
eigenvalue for the Dirichlet problem associated with A namely if
    λ_1 = Inf_{u ∈ H^1_0(Ω), u ≠ 0}  ( ∫_Ω A∇u · ∇u dx ) / ( ∫_Ω u² dx )      (4.32)
(see also Chapter ??) then one can assume only a(x) > −λ1 a.e. x (cf. (4.31)).
For a fixed a the remark above shows that the weak maximum principle holds provided
Ω is located in a strip of width small enough or (cf. Paragraph 12.4) has a Lebesgue
measure small enough – see (2.49). We can state that as the following corollary.
Corollary 4.5. Let A(x) be a matrix satisfying (4.2), (4.3) and a ∈ L^∞(Ω). Suppose
that Ω is located in a strip of width less than or equal to

    ( 2λ / |a^-|_{∞,Ω} )^{1/2} ,

then with the notation of Theorem 4.4 we have

    −Lu_1 ≤ −Lu_2   in D'(Ω),
     u_1 ≤ u_2       on ∂Ω

implies

    u_1 ≤ u_2   in Ω.
Proof. Going back to (4.31) we have

    ∫_Ω A∇(u_1 − u_2)^+ · ∇(u_1 − u_2)^+ + (a^+ − a^-) ((u_1 − u_2)^+)² dx ≤ 0.

Thus it comes

    λ ∫_Ω |∇(u_1 − u_2)^+|² dx − ∫_Ω a^- ((u_1 − u_2)^+)² dx ≤ 0,

and since a^- ≤ |a^-|_{∞,Ω}

    λ ∫_Ω |∇(u_1 − u_2)^+|² dx − |a^-|_{∞,Ω} ∫_Ω ((u_1 − u_2)^+)² dx ≤ 0.

Using (2.49) we deduce (a is here the half width of the strip containing Ω!)

    ( λ/(2a²) − |a^-|_{∞,Ω} ) ∫_Ω ((u_1 − u_2)^+)² dx ≤ 0

which implies (u_1 − u_2)^+ = 0 when

    (2a)² < 2λ / |a^-|_{∞,Ω} .

(2a) is the width of the strip containing Ω. This completes the proof of the corollary.
Another corollary is
Corollary 4.6. Under the assumptions of Theorem 4.4 let u ∈ H^1(Ω) satisfy

    −Lu ≤ 0 in D'(Ω),   u ≤ 0 on ∂Ω,      (4.33)

then we have

    u ≤ 0 on Ω.      (4.34)
Proof. It is enough to apply Theorem 4.4 with u1 = u, u2 = 0.
Another consequence of Theorem 4.4 is that if u is a weak solution of a homogeneous
elliptic equation in divergence form – i.e. if
    − div(A(x)∇u) + a(x)u = 0      (4.35)

then a positive maximum of u cannot exceed its boundary values. More precisely

Corollary 4.7. Under the assumptions of Theorem 4.4 let u ∈ H^1(Ω) be such that

    −Lu ≤ 0 in D'(Ω),   u ≤ M on ∂Ω      (4.36)

where M is a nonnegative constant (an arbitrary constant if a ≡ 0). Then

    u ≤ M.      (4.37)
Proof. Just remark that
−L(u − M ) = −Lu − aM ≤ 0
and apply Corollary 4.6 to u − M .
Remark 4.9. One should note that when the extension of Theorem 4.4 mentioned in
Remark 4.8 holds i.e. when one only assumes a(x) > −λ1 , a.e. x, then the two corollaries
above extend as well.
Finally one should notice that the weak maximum principle also holds for the Neumann
problem and we have
Theorem 4.8. Let A = A(x) satisfy (4.2), (4.3) and a = a(x) ∈ L^∞(Ω) satisfy
(4.12). For f_1, f_2 ∈ L²(Ω) let u_i, i = 1, 2, be the solution to

    u_i ∈ H^1(Ω),
    ∫_Ω A(x)∇u_i · ∇v + a(x)u_i v dx = ∫_Ω f_i v dx   ∀ v ∈ H^1(Ω)      (4.38)

(cf. Theorem 4.2 and Remark 4.2). Then if

    f_1 ≤ f_2   in Ω      (4.39)

we have

    u_1 ≤ u_2   in Ω.      (4.40)
Proof. Taking v = (u_1 − u_2)^+ in (4.38) for i = 1, 2 we get by subtraction

    ∫_Ω A(x)∇(u_1 − u_2) · ∇(u_1 − u_2)^+ + a(x) ((u_1 − u_2)^+)² dx
        = ∫_Ω (f_1 − f_2)(u_1 − u_2)^+ dx ≤ 0   (by (4.39)).

This is also

    ∫_Ω A(x)∇(u_1 − u_2)^+ · ∇(u_1 − u_2)^+ + a(x) ((u_1 − u_2)^+)² dx ≤ 0

which leads to (4.40). This completes the proof of the theorem.
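
The following sketch illustrates Theorem 4.8 at the discrete level (assumed data and discretization; it is not the theorem itself): the finite-difference matrix of a 1D Neumann problem is an M-matrix, so its inverse has nonnegative entries and f_1 ≤ f_2 indeed forces u_1 ≤ u_2.

    import numpy as np

    # 1D Neumann analogue of (4.38): -u'' + u = f on (0,1), u'(0) = u'(1) = 0.
    n = 100
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h

    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = 2.0 / h ** 2 + 1.0
        if i > 0:
            L[i, i - 1] = -1.0 / h ** 2
        if i < n - 1:
            L[i, i + 1] = -1.0 / h ** 2
    L[0, 0] -= 1.0 / h ** 2          # zero flux at x = 0
    L[-1, -1] -= 1.0 / h ** 2        # zero flux at x = 1

    f1 = np.sin(2 * np.pi * x)                     # f1 <= f2 everywhere
    f2 = f1 + 0.3 * (1 + np.cos(np.pi * x))

    u1, u2 = np.linalg.solve(L, f1), np.linalg.solve(L, f2)
    print("u1 <= u2 everywhere:", bool(np.all(u1 <= u2 + 1e-12)))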
Remark 4.10. The result of Theorem 4.8 can be extended to problems of the type (4.13).
The only assumptions needed are

    (u_1 − u_2)^+ ∈ V,    ⟨f_1 − f_2, (u_1 − u_2)^+⟩ ≤ 0.

4.3    Inhomogeneous problems
By inhomogeneous we mean that the boundary condition is nonhomogeneous, that is to
say not identically equal to 0. Suppose for instance that one wants to solve the Dirichlet
problem

    − div(A(x)∇u) + a(x)u = f   in Ω,
      u = g                     on ∂Ω.      (4.41)
(f ∈ H −1 (Ω) and g is some function). As we found a way to give a sense to u = 0 on ∂Ω
we need to give a sense to the second equation to (4.41). The best way is to assume
u − g ∈ H01 (Ω).
Thus let us also suppose
g ∈ H 1 (Ω).
Setting
U =u−g
we see that if u is solution to (4.41) – in a weak sense – then U is solution to
− div(A(x)∇(U + g)) + a(x)(U + g) = f
in a weak sense. This can also be written as
    − div(A(x)∇U) + a(x)U = f − a(x)g + div(A(x)∇g).      (4.42)
Clearly due to our assumptions the right hand side of (4.42) belongs to H −1 (Ω) and thus
there exists a unique weak solution to
    − div(A(x)∇U) + a(x)U = f − ag + div(A(x)∇g)   in Ω,
    U ∈ H^1_0(Ω).
Then the weak solution to (4.41) is given by
u = U + g.
More precisely u = U + g is the unique function satisfying
    ∫_Ω A(x)∇u · ∇v + a(x)uv dx = ⟨f, v⟩   ∀ v ∈ H^1_0(Ω),
    u − g ∈ H^1_0(Ω).      (4.43)
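
In one dimension the lifting trick above takes a very concrete form; the following sketch (assumed data, illustration only) solves −u'' + u = f on (0,1) with u(0) = g_0, u(1) = g_1 by subtracting an affine lift g and solving for U = u − g.

    import numpy as np

    n = 200
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)

    g0, g1 = 2.0, -1.0
    g = g0 + (g1 - g0) * x                 # affine lift: g'' = 0
    f = np.sin(np.pi * x)

    # homogeneous Dirichlet operator for -U'' + U
    K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2 + np.eye(n)

    U = np.linalg.solve(K, f - g)          # right hand side f - g + g'' = f - g here
    u = U + g                              # solution of the inhomogeneous problem
    print(u[0], u[-1])                     # approach g0 and g1 as the mesh is refined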
Exercises
1. Establish the divergence formula for a triangle.
2. Consider for Γ0 closed subset of Γ = ∂Ω
W = { v ∈ C 1 (Ω̄) | v = 0 on Γ0 }
(C 1 (Ω) denotes the space of the restrictions of C 1 (Rn )-functions on Ω),
V = W̄ where the closure is taken in the H 1 (Ω)-sense.
Show that the solution to (4.13) for f ∈ L2 (Ω) is a weak solution to
       − div(A(x)∇u) + a(x)u = f   in Ω,
         u = 0 on Γ_0,   A(x)∇u · n = 0 on Γ \ Γ_0.
Such a problem is called a mixed (Dirichlet–Neumann) problem.
3. Let T_1, T_2 ∈ D'(Ω). Show that

       T_1 ≤ T_2  and  T_2 ≤ T_1   ⟹   T_1 = T_2.
4. Let T ∈ D0 (Ω), T ≥ 0. Show that for every compact K ⊂ Ω there exists a constant
CK such that
       |⟨T, ϕ⟩| ≤ C_K Sup_K |ϕ|   ∀ ϕ ∈ D(Ω), Supp(ϕ) ⊂ K.
5. Define L_i by

       −L_i u = − div(A(x)∇u) + a_i u,   i = 1, 2.

   Show that if u_1, u_2 ∈ H^1(Ω),

       −L_1 u_1 ≤ −L_2 u_2   in D'(Ω),
       0 ≤ a_2 ≤ a_1          in Ω,
       0 ∨ u_1 ≤ u_2          on ∂Ω,

   then 0 ∨ u_1 ≤ u_2 in Ω.

6. Show that for a constant symmetric matrix A

       λ_1 = Inf_{ξ≠0} (Aξ · ξ) / |ξ|²

   is the smallest eigenvalue of A.
7. Let A = ( 1     1/2
             1/3   1  ). Show that

       (7/12)|ξ|² ≤ Aξ · ξ   ∀ ξ ∈ R².
8. Let Ω1 ⊂ Ω2 be two bounded subsets of Rn . Let u1 , u2 be the solutions to
       u_i ∈ H^1_0(Ω_i),
       ∫_{Ω_i} A∇u_i · ∇v dx = ∫_{Ω_i} f_i v dx   ∀ v ∈ H^1_0(Ω_i),

   for i = 1, 2 where f_i ∈ L²(Ω_i). Show that

       f_2 ≥ f_1 ≥ 0   ⟹   u_2 ≥ u_1 ≥ 0,
i.e. for nonnegative f the solution of the Dirichlet problem increases with the size
of the domain.
9. Under the assumptions of Theorem 4.1 show that the mapping
f 7→ u
where u is the solution to (4.9) is continuous from H −1 (Ω) into H01 (Ω).
10. We suppose to simplify that Ω is bounded. Let (An ) = (An (x)) be a sequence of
matrices satisfying (4.2), (4.3) for λ, Λ independent of n. For f ∈ H −1 (Ω) we denote
by un the solution to
        u_n ∈ H^1_0(Ω),
        ∫_Ω A_n(x)∇u_n · ∇v dx = ⟨f, v⟩   ∀ v ∈ H^1_0(Ω).
    We suppose that

        A_n → A   a.e. in Ω

    (i.e. the convergence holds entry by entry).
    Show that u_n → u in H^1_0(Ω) where u is the solution to
        u ∈ H^1_0(Ω),
        ∫_Ω A(x)∇u · ∇v dx = ⟨f, v⟩   ∀ v ∈ H^1_0(Ω).
11. Show that a subspace of H 1 (Ω) of finite dimensions is closed in H 1 (Ω) and that any
linear form on it is automatically continuous.
Chapter 5
Singular perturbation problems
This theory is devoted to the study of problems with a small diffusion velocity. One is mainly
concerned with the asymptotic behaviour of the solution of such problems when the
diffusion velocity approaches 0. Let us explain the situation on a classical example (see
[48], [49]).
5.1    A prototype of a singular perturbation problem
Let Ω be a bounded open subset of Rn . For f ∈ H −1 (Ω), ε > 0 consider uε the weak
solution to
    −ε∆u_ε + u_ε = f   in Ω,
    u_ε ∈ H^1_0(Ω).      (5.1)

It follows from Theorem 4.1 with A = ε Id that a solution to this problem exists and is
unique. The question is then to determine what happens when ε → 0. Passing to the
limit formally one expects that

    u_ε −→ u_0 = f.      (5.2)

Of course this cannot hold for an arbitrary norm. For instance if f ∉ H^1_0(Ω) one can never
have u_ε → f in H^1_0(Ω). Let us first prove:
Theorem 5.1. Let u_ε be the solution to (5.1). Then we have

    u_ε −→ f   in H^{-1}(Ω).      (5.3)

Proof. We suppose H^1_0(Ω) equipped with the norm

    |∇v|_{2,Ω}.      (5.4)
If g ∈ H −1 (Ω) then, by the Riesz representation theorem, there exists a unique u ∈ H01 (Ω)
such that
    ∫_Ω ∇u · ∇v dx = ⟨g, v⟩   ∀ v ∈ H^1_0(Ω)
and

    |g|_{H^{-1}(Ω)} = |∇u|_{2,Ω}

(see (1.19)) – we denote by |g|_{H^{-1}(Ω)} the strong dual norm of g in H^{-1}(Ω) defined as

    |g|_{H^{-1}(Ω)} = Sup_{v ∈ H^1_0(Ω), v ≠ 0}  |⟨g, v⟩| / |∇v|_{2,Ω} .
Since (5.1) is meant in a weak sense we have

    ⟨f − u_ε, v⟩ = ∫_Ω ∇(εu_ε) · ∇v dx   ∀ v ∈ H^1_0(Ω)

and thus

    |f − u_ε|_{H^{-1}(Ω)} = |∇(εu_ε)|_{2,Ω}.      (5.5)
As we just saw:

    ε ∫_Ω ∇u_ε · ∇w + u_ε w dx = ⟨f, w⟩   ∀ w ∈ H^1_0(Ω).      (5.6)

Choosing w = εu_ε we get

    |∇(εu_ε)|²_{2,Ω} + ε|u_ε|²_{2,Ω} = ⟨f, εu_ε⟩ ≤ |f|_{H^{-1}(Ω)} |∇(εu_ε)|_{2,Ω}.      (5.7)

From this we derive that

    |∇(εu_ε)|²_{2,Ω} ≤ |f|_{H^{-1}(Ω)} |∇(εu_ε)|_{2,Ω}

and thus

    |∇(εu_ε)|_{2,Ω} ≤ |f|_{H^{-1}(Ω)}.

This shows that εu_ε is bounded in H^1_0(Ω) independently of ε. So, up to a subsequence,
there exists v_0 ∈ H^1_0(Ω) such that

    εu_ε ⇀ v_0   in H^1_0(Ω).      (5.8)

From (5.7) we then derive that

    ε|u_ε|²_{2,Ω} ≤ |f|²_{H^{-1}(Ω)}.

Thus

    |εu_ε|²_{2,Ω} = ε · ε|u_ε|²_{2,Ω} −→ 0.

This means that

    εu_ε −→ 0   in L²(Ω)      (5.9)

and also in D'(Ω). Since the convergence (5.8) implies convergence in D'(Ω) we have
v_0 = 0 and by uniqueness of the limit it follows that εu_ε ⇀ 0 in H^1_0(Ω). From (5.7) we
then derive

    |∇(εu_ε)|²_{2,Ω} ≤ ⟨f, εu_ε⟩ → 0.

By (5.5) this completes the proof of the theorem.
The convergence we just obtained is a very weak convergence; however, it will always
hold. If we wish to have convergence of u_ε in L²(Ω) one needs to assume f ∈ L²(Ω), for
convergence in H^1_0(Ω) one needs f ∈ H^1_0(Ω), and so on. Let us examine the convergence
in L²(Ω).
Theorem 5.2. Suppose that f ∈ L²(Ω). Then if u_ε is the solution to (5.1) we have

    u_ε −→ f   in L²(Ω).      (5.10)
Proof. The weak formulation of (5.1) is

    ∫_Ω ε∇u_ε · ∇v + u_ε v dx = ∫_Ω f v dx   ∀ v ∈ H^1_0(Ω).      (5.11)

Taking v = u_ε it comes

    ε|∇u_ε|²_{2,Ω} + |u_ε|²_{2,Ω} = ∫_Ω f u_ε dx ≤ |f|_{2,Ω} |u_ε|_{2,Ω}

by the Cauchy–Schwarz inequality. Thus it follows that

    |u_ε|_{2,Ω} ≤ |f|_{2,Ω},    ε|∇u_ε|²_{2,Ω} ≤ |f|²_{2,Ω}.      (5.12)

Since u_ε is bounded in L²(Ω) independently of ε there exists u_0 ∈ L²(Ω) such that, up to
a subsequence,

    u_ε ⇀ u_0   in L²(Ω).      (5.13)

(In fact at this stage by Theorem 5.1 we already know that u_ε ⇀ f in L²(Ω), but we give
here a proof of Theorem 5.2 independent of Theorem 5.1). To identify u_0 we go back to
(5.11) that we write

    √ε ∫_Ω ∇(√ε u_ε) · ∇v dx + ∫_Ω u_ε v dx = ∫_Ω f v dx   ∀ v ∈ H^1_0(Ω).

Passing to the limit – using (5.12) – we derive

    ∫_Ω u_0 v dx = ∫_Ω f v dx   ∀ v ∈ H^1_0(Ω).

By density of H^1_0(Ω) (or D(Ω)) in L²(Ω) we get

    ∫_Ω u_0 v dx = ∫_Ω f v dx   ∀ v ∈ L²(Ω)

that is to say u_0 = f, and by uniqueness of the possible limit the whole sequence u_ε satisfies
(5.13), which, we recall, could be derived from Theorem 5.1.
Next we remark that

    |u_ε − f|²_{2,Ω} = |u_ε|²_{2,Ω} − 2(u_ε, f)_{2,Ω} + |f|²_{2,Ω}
                     ≤ 2|f|²_{2,Ω} − 2(u_ε, f)_{2,Ω}   by (5.12).      (5.14)

(·, ·)_{2,Ω} denotes the scalar product in L²(Ω). Letting ε → 0 we see that the right hand
side of (5.14) goes to 0 and we get u_ε → f in L²(Ω). This completes the proof of the
theorem.
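
The convergence of Theorem 5.2, and the failure of convergence in H^1_0(Ω) when f ∉ H^1_0(Ω), can be observed on the explicit one-dimensional example of Exercise 3 below (f ≡ 1 on Ω = (0,1)); the following Python check is an illustration only.

    import numpy as np

    # For f ≡ 1 the solution of (5.1) on (0,1) is
    #   u_ε(x) = 1 - cosh((x - 1/2)/√ε) / cosh(1/(2√ε)).
    x = np.linspace(0.0, 1.0, 20001)
    dx = x[1] - x[0]

    for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
        s = np.sqrt(eps)
        u = 1 - np.cosh((x - 0.5) / s) / np.cosh(0.5 / s)
        du = -np.sinh((x - 0.5) / s) / (s * np.cosh(0.5 / s))
        l2_err = np.sqrt(np.sum((u - 1.0) ** 2) * dx)    # |u_eps - f|_{2} -> 0
        h1_semi = np.sqrt(np.sum(du ** 2) * dx)          # |u_eps'|_{2} blows up (boundary layers)
        print(f"eps={eps:.0e}  |u-f|_2={l2_err:.3e}  |u'|_2={h1_semi:.3e}")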
To end this section we consider the case where f ∈ L^p(Ω), which leads to interesting
techniques since we are no longer in a Hilbert space framework. So let us show:

Theorem 5.3. Suppose that f ∈ L^p(Ω), 2 ≤ p < +∞. Then if u_ε is the solution to (5.1)

    u_ε ∈ L^p(Ω),   u_ε −→ f   in L^p(Ω).      (5.15)

Proof. As before we have

    ε ∫_Ω ∇u_ε · ∇v dx + ∫_Ω u_ε v dx = ∫_Ω f v dx   ∀ v ∈ H^1_0(Ω).      (5.16)
The idea is to choose
v = |uε |p−2 uε
to get for instance the Lp (Ω)-norm of uε in the second integral. However one cannot do
that directly since this function is not a priori in H01 (Ω) and one has to work a little bit.
One introduces the approximation of the function |x|p−2 x given by
    γ_n(x) = { n^{p−1}        if x ≥ n,
               |x|^{p−2} x    if |x| ≤ n,
               −n^{p−1}       if x ≤ −n.      (5.17)
Then – see Corollary 2.15 – one has for every n > 0
γn (uε ) ∈ H01 (Ω).
Taking this function as test function in (5.16) we get

    ε ∫_Ω ∇u_ε · ∇u_ε γ_n'(u_ε) dx + ∫_Ω u_ε γ_n(u_ε) dx = ∫_Ω f γ_n(u_ε) dx.
Since γ_n' ≥ 0 we obtain

    ∫_Ω u_ε γ_n(u_ε) dx ≤ ∫_Ω f γ_n(u_ε) dx ≤ |f|_{p,Ω} |γ_n(u_ε)|_{p',Ω}
(by Hölder’s inequality). Let us set
δn (x) = |x| ∧ n
where ∧ denotes the minimum of two numbers. The inequality above implies that
    |δ_n(u_ε)|^p_{p,Ω} = ∫_Ω δ_n(u_ε)^p dx ≤ ∫_Ω u_ε γ_n(u_ε) dx ≤ |f|_{p,Ω} |γ_n(u_ε)|_{p',Ω}
                       ≤ |f|_{p,Ω} |δ_n(u_ε)|^{p−1}_{p,Ω}.
It follows that
    ∫_Ω δ_n(u_ε)^p dx ≤ ∫_Ω |f|^p dx.
Letting n → +∞ by the monotone convergence theorem we get
    ∫_Ω |u_ε|^p dx ≤ ∫_Ω |f|^p dx
and thus uε ∈ Lp (Ω) with
    |u_ε|_{p,Ω} ≤ |f|_{p,Ω}.      (5.18)
We already know that u_ε → f in H^{-1}(Ω) and in L²(Ω); thus we have

    u_ε ⇀ f   in L^p(Ω)-weak.      (5.19)

Now from (5.18) we deduce

    lim sup_{ε→0} ∫_Ω |u_ε|^p dx ≤ ∫_Ω |f|^p dx      (5.20)

and from the weak lower semi-continuity of the norm in L^p(Ω)

    lim inf_{ε→0} ∫_Ω |u_ε|^p dx ≥ ∫_Ω |f|^p dx.      (5.21)

(A norm is a continuous, convex function and is thus also weakly lower semi-continuous
– see [14]). (5.20) and (5.21) imply that

    lim_{ε→0} ∫_Ω |u_ε|^p dx = ∫_Ω |f|^p dx.
Together with (5.19) this leads to (5.15) (see exercises) and completes the proof of the
theorem.
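
The only pointwise fact used about the truncations is that x γ_n(x) ≥ δ_n(x)^p, which gives |δ_n(u_ε)|^p_{p,Ω} ≤ ∫_Ω u_ε γ_n(u_ε) dx; this can be checked numerically as follows (an illustration with assumed values of p and n).

    import numpy as np

    p, n = 4.0, 2.0

    def gamma_n(x):                 # γ_n(x) = sign(x)·min(|x|, n)^{p-1}, cf. (5.17)
        return np.sign(x) * np.minimum(np.abs(x), n) ** (p - 1)

    def delta_n(x):                 # δ_n(x) = |x| ∧ n
        return np.minimum(np.abs(x), n)

    x = np.linspace(-10, 10, 100001)
    print(bool(np.all(x * gamma_n(x) >= delta_n(x) ** p - 1e-12)))   # True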
5.2    Anisotropic singular perturbation problems
Let Ω be a bounded domain in R² and f a function such that for instance

    f = f(x_1, x_2) ∈ L²(Ω).      (5.22)

Then for every ε > 0 there exists a unique u_ε solution to

    −ε² ∂²_{x_1} u_ε − ∂²_{x_2} u_ε = f   in Ω,
      u_ε = 0                             on ∂Ω.      (5.23)

This is what we call an anisotropic singular perturbation problem since the diffusion
velocity (see Chapter 3) is very small in the x_1 direction. Of course (5.23) is understood
in a weak sense, namely u_ε satisfies

    u_ε ∈ H^1_0(Ω),
    ∫_Ω ε² ∂_{x_1}u_ε ∂_{x_1}v + ∂_{x_2}u_ε ∂_{x_2}v dx = ∫_Ω f v dx   ∀ v ∈ H^1_0(Ω).      (5.24)
We would like to study the behaviour of uε when ε goes to 0. Formally, at the limit, if u0
denotes the limit of uε when ε → 0, we have
    ∫_Ω ∂_{x_2}u_0 ∂_{x_2}v dx = ∫_Ω f v dx   ∀ v ∈ H^1_0(Ω).      (5.25)
Even though we can show that u0 ∈ H01 (Ω) the bilinear form on the left hand side of
(5.25) is not coercive on H01 (Ω) and the problem (5.25) is not well posed.
Let us denote by Πx1 (Ω) the projection of Ω on the x1 -axis defined by
Πx1 (Ω) = { (x1 , 0) | ∃ x2 s.t. (x1 , x2 ) ∈ Ω }.
(5.26)
Then for every x1 ∈ Πx1 (Ω) = ΠΩ – we drop the x1 index for simplicity – we denote by
Ωx1 the section of Ω above x1 – i.e.
Ωx1 = { x2 | (x1 , x2 ) ∈ Ω }.
(5.27)
The closest problem to (5.25) to be well defined and suitable for the limit is the following.
[Figure 5.1: the domain Ω, its projection Π_Ω on the x_1-axis and a section Ω_{x_1}.]
For almost all x_1 – or for every x_1 ∈ Π_Ω if f is a smooth function – we have

    f(x_1, ·) ∈ L²(Ω_{x_1})      (5.28)

(this is a well known result of integration).
Then there exists a unique u_0 = u_0(x_1, ·) solution to

    u_0 ∈ H^1_0(Ω_{x_1}),
    ∫_{Ω_{x_1}} ∂_{x_2}u_0 ∂_{x_2}v dx_2 = ∫_{Ω_{x_1}} f(x_1, ·) v dx_2   ∀ v ∈ H^1_0(Ω_{x_1}).      (5.29)

Note that the existence and uniqueness of a solution to (5.29) is an easy consequence of
the Lax–Milgram theorem since

    ∫_{Ω_{x_1}} (∂_{x_2}v)² dx_2
is a norm on H01 (Ωx1 ). What we would like to show now is the convergence of uε toward
u0 solution of (5.29) for instance in L2 (Ω). Instead of doing that on this simple problem
we show how it could be embedded relatively simply in a more general class of problems.
However in a first lecture the reader can just apply the technique below to this simple
example. It will help him to catch the main ideas.
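
Before turning to the general setting, here is a small finite-difference sketch of the simple example above (assumed data: Ω = (0,1)², f(x_1, x_2) = (1 + x_1) sin(πx_2)); it compares u_ε with the slice-wise limit u_0 of (5.29).

    import numpy as np

    n = 40                                    # interior grid points per direction
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    f = (1 + X1) * np.sin(np.pi * X2)

    # 1D second-difference matrix with homogeneous Dirichlet conditions
    D = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    I = np.eye(n)

    u0 = np.linalg.solve(D, f.T).T            # slice-wise limit: -d²/dx2² u0 = f(x1, .)

    for eps in [0.5, 0.1, 0.02]:
        A = eps ** 2 * np.kron(D, I) + np.kron(I, D)   # eps²(-d²/dx1²) + (-d²/dx2²)
        u = np.linalg.solve(A, f.ravel()).reshape(n, n)
        print(f"eps={eps:4.2f}   L2 error ~ {h * np.linalg.norm(u - u0):.4f}")

The error decreases with ε; it does not vanish at fixed mesh because boundary layers form near x_1 = 0, 1 where u_0 does not satisfy the Dirichlet condition.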
Let Ω be a bounded open subset of Rn . We denote by x = (x1 , . . . , xn ) the points in
Rn and use the decomposition
    x = (X_1, X_2),   X_1 = (x_1, …, x_p),   X_2 = (x_{p+1}, …, x_n).      (5.30)

With this notation we set, with T denoting transposition,

    ∇u = (∂_{x_1}u, …, ∂_{x_n}u)^T = ( ∇_{X_1}u
                                       ∇_{X_2}u )      (5.31)

with

    ∇_{X_1}u = (∂_{x_1}u, …, ∂_{x_p}u)^T,   ∇_{X_2}u = (∂_{x_{p+1}}u, …, ∂_{x_n}u)^T.      (5.32)
We denote by A = (a_{ij}(x)) an n × n matrix such that

    a_{ij} ∈ L^∞(Ω)   ∀ i, j = 1, …, n,      (5.33)

and such that for some λ > 0 we have

    λ|ξ|² ≤ Aξ · ξ   ∀ ξ ∈ R^n, a.e. x ∈ Ω.      (5.34)

We decompose A into four blocks by writing

    A = ( A_{11}  A_{12}
          A_{21}  A_{22} )      (5.35)

where A_{11}, A_{22} are respectively p × p and (n − p) × (n − p) matrices. We then set for ε > 0

    A_ε = A_ε(x) = ( ε²A_{11}  εA_{12}
                     εA_{21}   A_{22} ).      (5.36)

For ξ ∈ R^n, if we set ξ = (ξ̄_1, ξ̄_2) where ξ̄_1 = (ξ_1, …, ξ_p)^T, ξ̄_2 = (ξ_{p+1}, …, ξ_n)^T, we have for
a.e. x ∈ Ω and every ξ ∈ R^n

    A_ε ξ · ξ = Aξ_ε · ξ_ε ≥ λ|ξ_ε|² = λ{ε²|ξ̄_1|² + |ξ̄_2|²}      (5.37)

where we have set ξ_ε = (εξ̄_1, ξ̄_2). Thus A_ε satisfies

    A_ε ξ · ξ ≥ λ(ε² ∧ 1)|ξ|²   ∀ ξ ∈ R^n, a.e. x ∈ Ω.      (5.38)
(∧ denotes the minimum of two numbers). It follows that Aε is positive definite and for
f ∈ H −1 (Ω) there exists a unique uε solution to
    u_ε ∈ H^1_0(Ω),
    ∫_Ω A_ε ∇u_ε · ∇v dx = ⟨f, v⟩   ∀ v ∈ H^1_0(Ω).      (5.39)
Clearly uε is the solution of a problem generalizing (5.24). We would like then to study
the behavior of uε when ε → 0.
Let us denote by ΠX1 the orthogonal projection from Rn onto the space X2 = 0. For
any X1 ∈ ΠX1 (Ω) := ΠΩ we denote by ΩX1 the section of Ω above X1 defined as
ΩX1 = { X2 | (X1 , X2 ) ∈ Ω }.
To avoid unnecessary complications we will suppose that
f ∈ L2 (Ω).
(5.40)
Then for a.e. X1 ∈ ΠΩ we have f (X1 , ·) ∈ L2 (ΩX1 ) and there exists a unique u0 = u0 (X1 , ·)
solution to
    ∫_{Ω_{X_1}} A_{22} ∇_{X_2}u_0(X_1, X_2) · ∇_{X_2}v(X_2) dX_2 = ∫_{Ω_{X_1}} f(X_1, X_2) v(X_2) dX_2   ∀ v ∈ H^1_0(Ω_{X_1}),
    u_0(X_1, ·) ∈ H^1_0(Ω_{X_1}).      (5.41)

Clearly u_0 is the natural candidate for the limit of u_ε. Indeed we have

Theorem 5.4. Under the assumptions above, if u_ε is the solution to (5.39) we have

    u_ε −→ u_0,   ∇_{X_2}u_ε −→ ∇_{X_2}u_0,   ε∇_{X_1}u_ε −→ 0   in L²(Ω)      (5.42)
where u0 is the solution to (5.41).
(In these convergences the vectorial convergence in L2 (Ω) means the convergence component by component).
Proof. Let us take v = u_ε in (5.39). By (5.37) we derive

    λ ∫_Ω ε²|∇_{X_1}u_ε|² + |∇_{X_2}u_ε|² dx ≤ ⟨f, u_ε⟩ ≤ |f|_{2,Ω} |u_ε|_{2,Ω}.      (5.43)

Since Ω is bounded, by the Poincaré inequality we have for some constant C independent
of ε

    |v|_{2,Ω} ≤ C |∇_{X_2}v|_{2,Ω}   ∀ v ∈ H^1_0(Ω).      (5.44)

From (5.43) we then obtain

    λ ∫_Ω ε²|∇_{X_1}u_ε|² + |∇_{X_2}u_ε|² dx ≤ C|f|_{2,Ω} |∇_{X_2}u_ε|_{2,Ω}.      (5.45)

Dropping in this inequality the first term we get

    λ|∇_{X_2}u_ε|²_{2,Ω} ≤ C|f|_{2,Ω} |∇_{X_2}u_ε|_{2,Ω}

and thus

    |∇_{X_2}u_ε|_{2,Ω} ≤ C |f|_{2,Ω} / λ.

Using this in (5.45) we end up with

    ∫_Ω ε²|∇_{X_1}u_ε|² + |∇_{X_2}u_ε|² dx ≤ C² |f|²_{2,Ω} / λ².      (5.46)

Thus – due to (5.44) – we deduce that

    u_ε,   |ε∇_{X_1}u_ε|,   |∇_{X_2}u_ε|   are bounded in L²(Ω).      (5.47)

(This of course independently of ε). It follows that there exist u_0 ∈ L²(Ω), u_1 ∈ (L²(Ω))^p,
u_2 ∈ (L²(Ω))^{n−p} such that – up to a subsequence –

    u_ε ⇀ u_0,   ε∇_{X_1}u_ε ⇀ u_1,   ∇_{X_2}u_ε ⇀ u_2   in "L²(Ω)".

(u_1, u_2 are vectors with components in L²(Ω). The convergence is meant component
by component). Of course the convergence in L²(Ω)-weak implies the convergence in
D'(Ω) and by the continuity of derivation in D'(Ω) we deduce that

    u_ε ⇀ u_0,   ε∇_{X_1}u_ε ⇀ 0,   ∇_{X_2}u_ε ⇀ ∇_{X_2}u_0   in L²(Ω).      (5.48)
We then go back to the equation satisfied by u_ε that we expand using the different blocks
of A. This gives

    ∫_Ω ε²A_{11}∇_{X_1}u_ε · ∇_{X_1}v dx + ∫_Ω εA_{12}∇_{X_2}u_ε · ∇_{X_1}v dx + ∫_Ω εA_{21}∇_{X_1}u_ε · ∇_{X_2}v dx
        + ∫_Ω A_{22}∇_{X_2}u_ε · ∇_{X_2}v dx = ∫_Ω f v dx   ∀ v ∈ H^1_0(Ω).

Passing to the limit in each term using (5.48) we get

    ∫_Ω A_{22}∇_{X_2}u_0 · ∇_{X_2}v dx = ∫_Ω f v dx   ∀ v ∈ H^1_0(Ω).      (5.49)

At this point we do not know yet if for a.e. X_1 ∈ Π_Ω we have

    u_0(X_1, ·) ∈ H^1_0(Ω_{X_1}).      (5.50)

To see this – and more – one remarks first that taking v = u_ε in (5.49) and passing to the
limit we get

    ∫_Ω A_{22}∇_{X_2}u_0 · ∇_{X_2}u_0 dx = ∫_Ω f u_0 dx.      (5.51)

Next we compute

    I_ε = ∫_Ω A_ε ( ∇_{X_1}u_ε, ∇_{X_2}(u_ε − u_0) )^T · ( ∇_{X_1}u_ε, ∇_{X_2}(u_ε − u_0) )^T dx.      (5.52)
We get

    I_ε = ∫_Ω ε²A_{11}∇_{X_1}u_ε · ∇_{X_1}u_ε dx + ∫_Ω εA_{12}∇_{X_2}(u_ε − u_0) · ∇_{X_1}u_ε dx
          + ∫_Ω εA_{21}∇_{X_1}u_ε · ∇_{X_2}(u_ε − u_0) dx
          + ∫_Ω A_{22}∇_{X_2}(u_ε − u_0) · ∇_{X_2}(u_ε − u_0) dx.

Using (5.39) with v = u_ε, we obtain

    I_ε = ∫_Ω f u_ε dx − ∫_Ω εA_{12}∇_{X_2}u_0 · ∇_{X_1}u_ε dx − ∫_Ω εA_{21}∇_{X_1}u_ε · ∇_{X_2}u_0 dx
          − ∫_Ω A_{22}∇_{X_2}u_0 · ∇_{X_2}u_ε dx − ∫_Ω A_{22}∇_{X_2}u_ε · ∇_{X_2}u_0 dx
          + ∫_Ω A_{22}∇_{X_2}u_0 · ∇_{X_2}u_0 dx.

Passing to the limit in ε we get

    lim_{ε→0} I_ε = ∫_Ω f u_0 dx − ∫_Ω A_{22}∇_{X_2}u_0 · ∇_{X_2}u_0 dx = 0.

Using the coerciveness assumption we have (see (5.52))

    λ ∫_Ω ε²|∇_{X_1}u_ε|² + |∇_{X_2}(u_ε − u_0)|² dx ≤ I_ε.

It follows that

    ε∇_{X_1}u_ε −→ 0,   ∇_{X_2}u_ε −→ ∇_{X_2}u_0   in L²(Ω).      (5.53)

Now we also have

    ∫_{Π_Ω} ∫_{Ω_{X_1}} |∇_{X_2}(u_ε − u_0)|² dX_2 dX_1 −→ 0.

It follows that for almost every X_1

    ∫_{Ω_{X_1}} |∇_{X_2}(u_ε − u_0)|² dX_2 −→ 0.

Since

    ( ∫_{Ω_{X_1}} |∇_{X_2}v|² dX_2 )^{1/2}

is a norm on H^1_0(Ω_{X_1}) and u_ε(X_1, ·) ∈ H^1_0(Ω_{X_1}) we have

    u_0(X_1, ·) ∈ H^1_0(Ω_{X_1})
and this for almost every X_1. Using then the Poincaré inequality implies

    ∫_{Ω_{X_1}} |u_ε − u_0|² dX_2 ≤ C ∫_{Ω_{X_1}} |∇_{X_2}(u_ε − u_0)|² dX_2
    ⟹  ∫_Ω |u_ε − u_0|² dx ≤ C ∫_Ω |∇_{X_2}(u_ε − u_0)|² dx −→ 0

(by (5.53)) and thus

    u_ε −→ u_0   in L²(Ω).      (5.54)

All this is up to a subsequence. If we can identify u_0 uniquely then all the convergences
above will hold for the whole sequence. For this purpose recall first that

    u_0(X_1, ·) ∈ H^1_0(Ω_{X_1}).      (5.55)
One can cover Ω by a countable family of open sets of the form

    U_i × V_i ⊂ Ω,   i ∈ N,

where U_i, V_i are open subsets of R^p, R^{n−p} respectively. One can even choose U_i, V_i
hypercubes. Then choosing ϕ ∈ H^1_0(V_i) we derive from (5.49)

    ∫_{U_i} η(X_1) ∫_{V_i} A_{22}(X_1, X_2)∇_{X_2}u_0(X_1, X_2) · ∇_{X_2}ϕ(X_2) dX_2 dX_1
        = ∫_{U_i} η(X_1) ∫_{V_i} f(X_1, X_2)ϕ(X_2) dX_2 dX_1   ∀ η ∈ D(U_i),

since ηϕ ∈ H^1_0(Ω). Thus there exists a set of measure zero, N(ϕ), such that

    ∫_{V_i} A_{22}(X_1, X_2)∇_{X_2}u_0(X_1, X_2) · ∇_{X_2}ϕ(X_2) dX_2 = ∫_{V_i} f(X_1, X_2)ϕ(X_2) dX_2      (5.56)
for all X1 ∈ Ui \ N (ϕ). Denote by ϕn a Hilbert basis of H01 (Vi ). Then (5.56) holds
(replacing ϕ by ϕn ) for all X1 such that
X1 ∈ Ui \ Ni (ϕn )
where Ni (ϕn ) is a set of measure 0. Thus for
X1 ∈ Ui \ ∪n Ni (ϕn )
we have (5.56) for any ϕ ∈ H01 (Vi ). This follows easily from the density in H01 (Vi ) of the
linear combinations of the ϕn . Let us then choose
X1 ∈ ΠΩ \ ∪i ∪n Ni (ϕn )
(note that ∪i ∪n Ni (ϕn ) is a set of measure 0). Let
ϕ ∈ D(ΩX1 ).
If K denotes the support of ϕ we have clearly
K ⊂ ∪i Vi
and thus K can be covered by a finite number of Vi that for simplicity we will denote
by V1 , . . . , Vk . Using a partition of unity (see [64], [70] and the exercises) there exists
ψi ∈ D(Vi ) such that
    Σ_{i=1}^k ψ_i = 1   on K.

By (5.56) we derive

    ∫_K A_{22}∇_{X_2}u_0 · ∇_{X_2}ϕ dX_2 = ∫_K A_{22}∇_{X_2}u_0 · ∇_{X_2}( Σ_i ψ_iϕ ) dX_2
        = Σ_i ∫_{V_i} A_{22}∇_{X_2}u_0 · ∇_{X_2}(ψ_iϕ) dX_2
        = Σ_i ∫_{V_i} f ψ_iϕ dX_2
        = ∫_K f ϕ dX_2.
This is also

    ∫_{Ω_{X_1}} A_{22}∇_{X_2}u_0 · ∇_{X_2}ϕ dX_2 = ∫_{Ω_{X_1}} f ϕ dX_2   ∀ ϕ ∈ D(Ω_{X_1})
and thus u0 is the unique solution to (5.41) for a.e. X1 ∈ ΠΩ . This completes the proof
of the theorem.
Exercises
1. Show that if f ∈ H01 (Ω) and if uε is the solution to (5.1) one has
uε −→ f
in H01 (Ω).
Show that if f ∈ D(Ω) then for every k one has
∆k uε −→ ∆k f
∀ k ≥ 1 in H01 (Ω),
(∆k denotes the operator ∆ iterated k times).
2. One denotes by A(x) a uniformly positive definite matrix. Study the singular perturbation problem
       u_ε ∈ H^1_0(Ω),
       ε ∫_Ω A(x)∇u_ε · ∇v dx + ∫_Ω a(x)u_ε v dx = ⟨f, v⟩   ∀ v ∈ H^1_0(Ω)
when a(x) is a function satisfying
a(x) ≥ a > 0 a.e. x ∈ Ω,
and f ∈ H −1 (Ω).
3. Compute the solution to
       −ε u_ε'' + u_ε = 1   in Ω = (0, 1),
       u_ε ∈ H^1_0(Ω).
Examine its behaviour when ε → 0.
4. For f ∈ Lp (Ω), p ≥ 2 let u be the solution to
       − div(A(x)∇u) + u = f   in Ω,
       u ∈ H^1_0(Ω),
where A is a n × n matrix satisfying (4.2), (4.3). Show that u ∈ Lp (Ω) and that
|u|p,Ω ≤ |f |p,Ω .
5. Suppose that p ≥ 2.
(i) Show that there exists a constant c > 0 independent of z such that
           |1 + z|^p ≥ 1 + pz + c|z|^p   ∀ z ∈ R.      (*)

       (ii) Let u_ε ∈ L^p(Ω) be a sequence such that when ε → 0

           u_ε ⇀ f in L^p(Ω),   |u_ε|_p → |f|_p.

       Show that

           u_ε → f   in L^p(Ω) strong.

       (Hint: take z = (u_ε − f)/f in (*)).
Show the same result for 1 < p < 2.
6. Let Ω be a bounded domain in Rn . For σ > 0 one considers uσ the solution to
       u_σ ∈ H^1(Ω),
       σ ∫_Ω ∇u_σ · ∇v dx + ∫_Ω u_σ v dx = ∫_Ω f v dx   ∀ v ∈ H^1(Ω).
(i) Show that for f ∈ L2 (Ω) there exists a unique solution to the problem above.
(ii) Show that when σ → +∞
           u_σ → (1/|Ω|) ∫_Ω f(x) dx   in H^1(Ω).
7. (Partition of unity).
Let K be a compact subset of Rp such that
K ⊂ ∪ki=1 Vi
where Vi are open subsets.
       (i) Show that one can find open subsets V_i', V_i'' such that

           V_i'' ⊂ V̄_i'' ⊂ V_i' ⊂ V̄_i' ⊂ V_i   ∀ i = 1, …, k,   K ⊂ ∪_{i=1}^k V_i''.
(ii) Show that there exists a continuous function αi such that
αi = 1 on Vi00 ,
αi = 0 outside Vi0 .
(iii) Show that for εi small enough
γi = ρεi ∗ αi ∈ D(Vi ).
       (iv) One sets

           ψ_i = γ_i / Σ_{i=1}^k γ_i .

       Show that ψ_i ∈ D(V_i) and Σ_{i=1}^k ψ_i = 1 on K.
8. Let Ω = ω1 × ω2 where ω1 , ω2 are bounded domains in Rp , Rn−p respectively
(0 < p < n). Under the assumptions of Theorem 5.4 show that, in general, for
f 6= 0 one cannot have
uε −→ u0 in H 1 (Ω).
Chapter 6
Asymptotic analysis for problems
in large cylinders
6.1    A model problem

Let us denote by Ω_ℓ the rectangle in R² defined by

    Ω_ℓ = (−ℓ, ℓ) × (−1, 1).      (6.1)
For convenience we will also set ω = (−1, 1).

[Figure 6.1: the rectangle Ω_ℓ = (−ℓ, ℓ) × (−1, 1) in the (x_1, x_2)-plane.]

If f = f(x_2) is a regular function – as we will see it can be more general – there exists a
unique u_ℓ solution to

    −∆u_ℓ = f(x_2)   in Ω_ℓ,
      u_ℓ = 0         on ∂Ω_ℓ.      (6.2)
Of course as usual this solution is understood in the weak sense as explained in Chapters 3
and 4. Since u_ℓ vanishes at both ends of Ω_ℓ, if f ≢ 0 it is clear that u_ℓ has to depend
on x_1. However, due to our choice of f – independent of x_1 – u_ℓ has to enjoy some
special properties and, if ℓ is large, one expects u_ℓ to be nearly independent of x_1 far away
from the end sides of the rectangle. In other words one expects u_ℓ to converge locally
toward a function independent of x_1 when ℓ → +∞. A natural candidate for the limit is
of course u_∞ the weak solution to

    −∂²_{x_2} u_∞ = f   in ω,
      u_∞ = 0           on ∂ω.      (6.3)
We recall that ω = (−1, 1), ∂ω = {−1, 1}. Indeed we are going to show that if `0 > 0 is
fixed then
u` −→ u∞ in Ω`0
(6.4)
with an exponential rate of convergence, that is to say the norm of u_ℓ − u_∞ in Ω_{ℓ_0} is
O(e^{−αℓ}) for some positive constant α. At this point we do not specify in which norm this
convergence takes place since several choices are possible.
Somehow f just plays an auxiliary rôle in our analysis and we will take it as general
as possible, assuming
f ∈ H −1 (ω).
(6.5)
Then for v ∈ H^1_0(Ω_ℓ), for almost every x_1,

    v(x_1, ·) ∈ H^1_0(ω)      (6.6)

(see [18] or the exercises) and the linear form

    ⟨f, v⟩ = ∫_{−ℓ}^{ℓ} ⟨f, v(x_1, ·)⟩ dx_1      (6.7)

clearly defines an element of H^{-1}(Ω_ℓ) that we will still denote by f. Note that for a smooth
function f one has as usual

    ⟨f, v⟩ = ∫_{Ω_ℓ} f v dx.
Then, with these definitions, there exist weak solutions to (6.2), (6.3) respectively that is
to say u` , u∞ such that
    u_ℓ ∈ H^1_0(Ω_ℓ),
    ∫_{Ω_ℓ} ∇u_ℓ · ∇v dx = ⟨f, v⟩   ∀ v ∈ H^1_0(Ω_ℓ),      (6.8)

    u_∞ ∈ H^1_0(ω),
    ∫_ω ∂_{x_2}u_∞ ∂_{x_2}v dx_2 = ⟨f, v⟩   ∀ v ∈ H^1_0(ω).      (6.9)
We would like to show now that in this general framework – in terms of f – one has
u` → u∞ when ` → +∞.
As we have seen in Chapter 2 by the Poincaré inequality there exists a constant c > 0
such that
    c ∫_ω u² dx ≤ ∫_ω (∂_{x_2}u)² dx   ∀ u ∈ H^1_0(ω).      (6.10)

We will set

    0 < λ_1 = Inf_{u ∈ H^1_0(ω), u ≠ 0}  ( ∫_ω (∂_{x_2}u)² dx ) / ( ∫_ω u² dx ).      (6.11)
(As we will see later λ1 is the first eigenvalue of the operator −∂x22 for the Dirichlet
problem. We will not use this here but the introduction of λ1 is useful in order to get
sharp estimates).
Let us start with the following fundamental estimate.
Theorem 6.1. Suppose that ℓ_2 < ℓ_1 ≤ ℓ. Then we have

    |∇(u_ℓ − u_∞)|_{2,Ω_{ℓ_2}} ≤ e^{−√λ_1 (ℓ_1 − ℓ_2)} |∇(u_ℓ − u_∞)|_{2,Ω_{ℓ_1}}.      (6.12)

Remark 6.1. This estimate shows in particular that the function

    ℓ' ↦ e^{−√λ_1 ℓ'} |∇(u_ℓ − u_∞)|_{2,Ω_{ℓ'}}

is nondecreasing.
Proof of Theorem 6.1. Let v ∈ H01 (Ω` ). By (6.6) and (6.9) we have for almost every x1
    ∫_ω ∂_{x_2}u_∞ ∂_{x_2}v(x_1, ·) dx_2 = ⟨f, v(x_1, ·)⟩.

Integrating this equality on (−ℓ, ℓ) we deduce, since u_∞ is independent of x_1,

    ∫_{Ω_ℓ} ∇u_∞ · ∇v dx = ⟨f, v⟩.

Combining this with (6.8) it follows that

    ∫_{Ω_ℓ} ∇(u_ℓ − u_∞) · ∇v dx = 0   ∀ v ∈ H^1_0(Ω_ℓ).      (6.13)
(Note the “ghost” rôle of f in these estimates).
We introduce then the function ρ whose graph is depicted on the figure below. In
particular one has
    0 ≤ ρ ≤ 1,   ρ = 1 on (−ℓ_2, ℓ_2),   ρ = 0 on R\(−ℓ_1, ℓ_1),   |ρ'| ≤ 1/(ℓ_1 − ℓ_2).      (6.14)

Then

    v = (u_ℓ − u_∞)ρ(x_1) ∈ H^1_0(Ω_ℓ)
[Figure 6.2: graph of the cut-off function ρ, equal to 1 on (−ℓ_2, ℓ_2) and to 0 outside (−ℓ_1, ℓ_1).]
and from (6.13) we derive

    ∫_{Ω_{ℓ_1}} ∇(u_ℓ − u_∞) · ∇{(u_ℓ − u_∞)ρ} dx = 0.      (6.15)

It follows that we have

    ∫_{Ω_{ℓ_1}} |∇(u_ℓ − u_∞)|² ρ dx = − ∫_{Ω_{ℓ_1}\Ω_{ℓ_2}} (∇(u_ℓ − u_∞) · ∇ρ)(u_ℓ − u_∞) dx
                                      = − ∫_{Ω_{ℓ_1}\Ω_{ℓ_2}} ∂_{x_1}(u_ℓ − u_∞) ∂_{x_1}ρ (u_ℓ − u_∞) dx

since ρ is independent of x_2 and ∂_{x_1}ρ vanishes everywhere except on Ω_{ℓ_1}\Ω_{ℓ_2}.
Then – see (6.14):

    ∫_{Ω_{ℓ_1}} |∇(u_ℓ − u_∞)|² ρ dx ≤ (1/(ℓ_1 − ℓ_2)) ∫_{Ω_{ℓ_1}\Ω_{ℓ_2}} |∂_{x_1}(u_ℓ − u_∞)| |u_ℓ − u_∞| dx.

We then use the following Young inequality

    ab ≤ (1/2) { a²/√λ_1 + √λ_1 b² }

to get

    ∫_{Ω_{ℓ_1}} |∇(u_ℓ − u_∞)|² ρ dx
        ≤ (1/(2(ℓ_1 − ℓ_2))) { (1/√λ_1) ∫_{Ω_{ℓ_1}\Ω_{ℓ_2}} (∂_{x_1}(u_ℓ − u_∞))² dx + √λ_1 ∫_{Ω_{ℓ_1}\Ω_{ℓ_2}} (u_ℓ − u_∞)² dx }.      (6.16)

Now by the definition of λ_1 – see also (6.6) – we have for a.e. x_1

    ∫_ω (u_ℓ − u_∞)²(x_1, ·) dx_2 ≤ (1/λ_1) ∫_ω (∂_{x_2}(u_ℓ − u_∞))²(x_1, ·) dx_2.

Integrating for |x_1| ∈ (ℓ_2, ℓ_1) we derive

    ∫_{Ω_{ℓ_1}\Ω_{ℓ_2}} (u_ℓ − u_∞)² dx ≤ (1/λ_1) ∫_{Ω_{ℓ_1}\Ω_{ℓ_2}} (∂_{x_2}(u_ℓ − u_∞))² dx.

Going back to (6.16) we get

    ∫_{Ω_{ℓ_1}} |∇(u_ℓ − u_∞)|² ρ dx ≤ (1/(2√λ_1 (ℓ_1 − ℓ_2))) ∫_{Ω_{ℓ_1}\Ω_{ℓ_2}} |∇(u_ℓ − u_∞)|² dx.

Since ρ = 1 on (−ℓ_2, ℓ_2) – see (6.14) – this implies that

    ∫_{Ω_{ℓ_2}} |∇(u_ℓ − u_∞)|² dx ≤ (1/(2√λ_1 (ℓ_1 − ℓ_2))) ∫_{Ω_{ℓ_1}\Ω_{ℓ_2}} |∇(u_ℓ − u_∞)|² dx.      (6.17)

Let us set

    F(ℓ') = ∫_{Ω_{ℓ'}} |∇(u_ℓ − u_∞)|² dx.      (6.18)

We can then write (6.17)

    F(ℓ_2) ≤ (1/(2√λ_1)) (F(ℓ_1) − F(ℓ_2)) / (ℓ_1 − ℓ_2).      (6.19)

Letting ℓ_1 → ℓ_2 we obtain

    F(ℓ_2) ≤ (1/(2√λ_1)) F'(ℓ_2)   for a.e. ℓ_2.      (6.20)

(Note that F(ℓ') is almost everywhere differentiable). The formula above can also be written

    ( e^{−2√λ_1 ℓ_2} F(ℓ_2) )' ≥ 0

which implies

    e^{−2√λ_1 ℓ_2} F(ℓ_2) ≤ e^{−2√λ_1 ℓ_1} F(ℓ_1)

for any ℓ_2 ≤ ℓ_1. By (6.18) this implies (6.12) and completes the proof of the theorem.
To complete our convergence result we will need the following proposition.
Proposition 6.2. We have

    |∇(u_ℓ − u_∞)|_{2,Ω_ℓ} ≤ √2 |u_∞|_{1,2},      (6.21)

where |v|²_{1,2} = ∫_ω v² + (∂_{x_2}v)² dx.
Proof. Note that the left hand side of (6.21) cannot a priori go to 0, but the estimate
shows already that u` is close to u∞ .
We denote by η the function whose graph is depicted below. Clearly u_ℓ − u_∞ + ηu_∞ ∈
H^1_0(Ω_ℓ).

[Figure 6.3: graph of η, equal to 1 near x_1 = ±ℓ and to 0 on (−ℓ + 1, ℓ − 1).]

From (6.13) we derive
    ∫_{Ω_ℓ} ∇(u_ℓ − u_∞) · ∇(u_ℓ − u_∞ + ηu_∞) dx = 0

hence

    ∫_{Ω_ℓ} |∇(u_ℓ − u_∞)|² dx = − ∫_{Ω_ℓ} ∇(u_ℓ − u_∞) · ∇{ηu_∞} dx
        ≤ ( ∫_{Ω_ℓ} |∇(u_ℓ − u_∞)|² dx )^{1/2} ( ∫_{Ω_ℓ} |∇{ηu_∞}|² dx )^{1/2}

by the Cauchy–Schwarz inequality. Since η vanishes on (−ℓ + 1, ℓ − 1) we then derive

    ∫_{Ω_ℓ} |∇(u_ℓ − u_∞)|² dx ≤ ∫_{Ω_ℓ\Ω_{ℓ−1}} |∇(ηu_∞)|² dx.

This last integral can be evaluated to give

    ∫_{Ω_ℓ\Ω_{ℓ−1}} |∇(ηu_∞)|² dx = ∫_{Ω_ℓ\Ω_{ℓ−1}} ((∂_{x_1}η)u_∞)² + (η ∂_{x_2}u_∞)² dx
        ≤ ∫_{Ω_ℓ\Ω_{ℓ−1}} u_∞² + (∂_{x_2}u_∞)² dx = 2|u_∞|²_{1,2}.
The result follows.
We can now state our exponential convergence result, namely
Theorem 6.3. For any ℓ_0 > 0 we have

    |∇(u_ℓ − u_∞)|_{2,Ω_{ℓ_0}} ≤ √2 e^{√λ_1 ℓ_0} |u_∞|_{1,2} e^{−√λ_1 ℓ}      (6.22)
i.e. u` → u∞ in Ω`0 with an exponential rate of convergence.
Proof. We just apply (6.12) with `2 = `0 , `1 = `. The result follows then from (6.21).
Remark 6.2. Taking ℓ_2 = ℓ/2, ℓ_1 = ℓ we also have from (6.12)

    |∇(u_ℓ − u_∞)|_{2,Ω_{ℓ/2}} ≤ √2 |u_∞|_{1,2} e^{−√λ_1 ℓ/2}.      (6.23)

However we do not have

    |∇(u_ℓ − u_∞)|_{2,Ω_ℓ} −→ 0      (6.24)
(see Exercise 1).
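
One can visualize both (6.22) and (6.24) on the explicit example of Exercise 1 at the end of this chapter (f = λ_1 u_∞, for which u_ℓ is known in closed form); the following Python check, an illustration only, evaluates |∇(u_ℓ − u_∞)|_{2,Ω_{ℓ_0}} by quadrature and rescales it by e^{√λ_1 ℓ}.

    import numpy as np

    lam1 = (np.pi / 2) ** 2                  # λ_1 = π²/4,  u_∞(x2) = cos(π x2/2)
    s = np.sqrt(lam1)
    l0 = 1.0

    x2 = np.linspace(-1, 1, 2001)
    u_inf = np.cos(np.pi * x2 / 2)
    du_inf = -np.pi / 2 * np.sin(np.pi * x2 / 2)
    h2 = x2[1] - x2[0]

    for l in [2.0, 4.0, 6.0, 8.0]:
        x1 = np.linspace(-l0, l0, 2001)
        h1 = x1[1] - x1[0]
        c = np.cosh(s * x1) / np.cosh(s * l)           # u_l - u_inf = -c(x1) u_inf(x2)
        dc = s * np.sinh(s * x1) / np.cosh(s * l)
        grad2 = (np.sum(dc ** 2) * np.sum(u_inf ** 2)
                 + np.sum(c ** 2) * np.sum(du_inf ** 2)) * h1 * h2
        print(f"l={l:.0f}  |grad(u_l-u_inf)|_2 on Omega_l0 = {np.sqrt(grad2):.3e}"
              f"  rescaled = {np.sqrt(grad2) * np.exp(s * l):.3f}")

The rescaled column stabilizes, in agreement with the rate e^{−√λ_1 ℓ} of Theorem 6.3, while the same quantity computed on the whole of Ω_ℓ does not tend to 0 (Exercise 1).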
6.2    Another type of convergence
Somehow the convergence in H 1 (Ω) is not appealing. Pointwise convergence is more
satisfactory. This can be achieved very simply by a careful use of the maximum principle.
We will even extend this property to very general domains.
Let us consider the function

    v_1(x_2) = cos(π x_2 / 2),      (6.25)

the first eigenfunction of the one dimensional Dirichlet problem on (−1, 1) (we refer to
Chapter ??). Note that we will just be using the expression of v_1. Then we have:

Theorem 6.4. Let u_ℓ, u_∞ be the solutions to (6.2), (6.3). Suppose that for some constant
C we have

    |u_∞(x_2)| ≤ C v_1(x_2)   ∀ x_2.      (6.26)

Then

    |u_ℓ − u_∞| ≤ C ( cosh(π x_1/2) / cosh(π ℓ/2) ) v_1(x_2).      (6.27)
Proof. Let us remark that if u denotes the function in the right hand side of (6.27) one
has
−∆u = 0
in the usual sense and also in the weak sense. It follows that
−∆{u` − u∞ − u} = 0
in a weak sense in Ω` . Moreover on ∂Ω`
u` − u∞ − u = −u∞ − u = −u∞ − Cv1 ≤ 0.
By the maximum principle it follows that
u` − u∞ ≤ u.
Arguing similarly with u` − u∞ + u one derives
−u ≤ u` − u∞ ≤ u
which completes the proof of (6.27).
Remark 6.3. The assumption (6.26) holds in many cases, for instance if ∂x2 u∞ ∈ L∞ (ω)
which is the case for f ∈ L2 (ω). As a consequence of (6.27) we have
    |u_ℓ − u_∞|_{∞,Ω_{ℓ_0}} ≤ C cosh(π ℓ_0/2) e^{−π ℓ/2}      (6.28)
which gives an exponential rate of convergence for the uniform norm locally.
We can now address the case of more general domains. Suppose that Ω'_ℓ is a domain
of the type of Figure 6.4, our only assumption being that

[Figure 6.4: a domain Ω'_ℓ contained in the strip R × ω and coinciding with Ω_ℓ on (−ℓ, ℓ) × R.]

    Ω'_ℓ ⊂ R × ω,   Ω'_ℓ ∩ (−ℓ, ℓ) × R = Ω_ℓ.      (6.29)
Let f be a function such that
f ∈ L2 (ω).
Then there exists a unique u0` solution to
    u'_ℓ ∈ H^1_0(Ω'_ℓ),
    ∫_{Ω'_ℓ} ∇u'_ℓ · ∇v dx = ∫_{Ω'_ℓ} f v dx   ∀ v ∈ H^1_0(Ω'_ℓ).      (6.30)
Then we have
Theorem 6.5. Let u∞ be the solution to (6.3). For any `0 there exists a constant C =
C(`0 , f ) such that
    |u'_ℓ − u_∞|_{∞,Ω_{ℓ_0}} ≤ C e^{−π ℓ/2}.      (6.31)

Proof. We suppose first that f ≥ 0. Let u_ℓ be the solution to (6.2). By the maximum
principle we have

    0 ≤ u'_ℓ in Ω'_ℓ,   0 ≤ u_∞ on ω.

Thus it follows that

    u_ℓ − u'_ℓ ≤ 0 on ∂Ω_ℓ,     −∆(u_ℓ − u'_ℓ) = 0 in Ω_ℓ,
    u'_ℓ − u_∞ ≤ 0 on ∂Ω'_ℓ,    −∆(u'_ℓ − u_∞) = 0 in Ω'_ℓ.
By the maximum principle we deduce that
    0 ≤ u_ℓ ≤ u'_ℓ ≤ u_∞   in Ω_ℓ.

By the estimate (6.28) it follows for ℓ_0 < ℓ

    |u'_ℓ − u_∞|_{∞,Ω_{ℓ_0}} ≤ |u_ℓ − u_∞|_{∞,Ω_{ℓ_0}} ≤ C e^{−π ℓ/2}
(see also Remark 6.3). Thus the result follows in this case.
In the general case one writes
f = f+ − f−
where f^+ and f^- are the positive and negative parts of f. Then, introducing u^±_ℓ, u^±_∞ the
solutions to

    u^±_ℓ ∈ H^1_0(Ω'_ℓ),   −∆u^±_ℓ = f^±   in Ω'_ℓ,
    u^±_∞ ∈ H^1_0(ω),      −∆u^±_∞ = f^±   in ω,

we have by the results above

    |u^±_ℓ − u^±_∞|_{∞,Ω_{ℓ_0}} ≤ C e^{−π ℓ/2}.

Due to the linearity of the problems and uniqueness of the solution we have

    u'_ℓ = u^+_ℓ − u^-_ℓ,   u_∞ = u^+_∞ − u^-_∞.
The result follows then easily.
Remark 6.4. Assuming f extended by 0 outside ω one can drop the left hand side assumption of (6.29).
6.3    The general case
We denote a point x ∈ Rn also as x = (X1 , X2 ) where
X1 = (x1 , . . . , xp ), X2 = (xp+1 , . . . , xn )
(6.32)
in other words we split the components of a point in Rn into the p first components and
the n − p last ones.
Let ω1 be an open subset of Rp that we suppose to satisfy
ω1 is a bounded convex domain containing 0.
(6.33)
Let ω2 be a bounded open subset of Rn−p , then we set
Ω` = `ω1 × ω2 .
(6.34)
Note that by (6.33) we have Ω` ⊂ Ω`0 for ` < `0 .
We denote by

    A(x) = ( A_{11}(X_1, X_2)  A_{12}(X_2)
             A_{21}(X_1, X_2)  A_{22}(X_2) ) = (a_{ij}(x))      (6.35)
an n × n matrix divided into four blocks such that

    A_{11} is a p × p matrix,   A_{22} is an (n − p) × (n − p) matrix.      (6.36)
We assume that
aij ∈ L∞ (Rp × ω2 )
(6.37)
and that for some constants λ, Λ we have
    λ|ξ|² ≤ A(x)ξ · ξ   ∀ ξ ∈ R^n, a.e. x ∈ R^p × ω_2,      (6.38)
    |A(x)ξ| ≤ Λ|ξ|      ∀ ξ ∈ R^n, a.e. x ∈ R^p × ω_2.      (6.39)
Then by the Lax–Milgram theorem, for

    f ∈ L²(ω_2)      (6.40)

there exists a unique u_∞ solution to

    u_∞ ∈ H^1_0(ω_2),
    ∫_{ω_2} A_{22} ∇_{X_2}u_∞ · ∇_{X_2}v dX_2 = ∫_{ω_2} f v dX_2   ∀ v ∈ H^1_0(ω_2).      (6.41)
(In this system ∇X2 stands for the gradient in X2 , that is (∂xp+1 , . . . , ∂xn ), dX2 =
dxp+1 · · · dxn ).
By the Lax-Milgram theorem there exists also a unique u` solution to
    u_ℓ ∈ H^1_0(Ω_ℓ),
    ∫_{Ω_ℓ} A∇u_ℓ · ∇v dx = ∫_{Ω_ℓ} f v dx   ∀ v ∈ H^1_0(Ω_ℓ).      (6.42)
Moreover we have
Theorem 6.6. There exist two constants c, α > 0 independent of ℓ such that

    ∫_{Ω_{ℓ/2}} |∇(u_ℓ − u_∞)|² dx ≤ c e^{−αℓ} |f|²_{2,ω_2}.      (6.43)

Proof. The proof is divided into three steps.

• Step 1. The equation satisfied by u_ℓ − u_∞.
If v ∈ H^1_0(Ω_ℓ) then for almost every X_1 in ℓω_1 we have

    v(X_1, ·) ∈ H^1_0(ω_2)
and thus by (6.41)

    ∫_{ω_2} A_{22} ∇_{X_2}u_∞ · ∇_{X_2}v(X_1, ·) dX_2 = ∫_{ω_2} f v(X_1, ·) dX_2.

Integrating in X_1 we get

    ∫_{Ω_ℓ} A_{22} ∇_{X_2}u_∞ · ∇_{X_2}v dx = ∫_{Ω_ℓ} f v dx   ∀ v ∈ H^1_0(Ω_ℓ).      (6.44)

Now for v ∈ H^1_0(Ω_ℓ) we have

    ∫_{Ω_ℓ} A∇u_∞ · ∇v dx = ∫_{Ω_ℓ} A_{12}∇_{X_2}u_∞ · ∇_{X_1}v dx + ∫_{Ω_ℓ} A_{22}∇_{X_2}u_∞ · ∇_{X_2}v dx
                           = ∫_{Ω_ℓ} A_{22}∇_{X_2}u_∞ · ∇_{X_2}v dx = ∫_{Ω_ℓ} f v dx      (6.45)

(since A_{12}, u_∞ depend on X_2 only). Combining (6.42), (6.45) we get

    ∫_{Ω_ℓ} A∇(u_ℓ − u_∞) · ∇v dx = 0   ∀ v ∈ H^1_0(Ω_ℓ).      (6.46)
• Step 2. An iteration technique.
Set 0 < ℓ_0 ≤ ℓ − 1. There exists ρ, a function of X_1 only, such that

    0 ≤ ρ ≤ 1,   ρ = 1 on ℓ_0ω_1,   ρ = 0 on R^p\(ℓ_0 + 1)ω_1,   |∇_{X_1}ρ| ≤ c_0      (6.47)

where c_0 is a universal constant (cf. Exercise 4). Then we have

    (u_ℓ − u_∞)ρ² ∈ H^1_0(Ω_ℓ)

and from (6.46) we derive

    ∫_{Ω_ℓ} A∇(u_ℓ − u_∞) · ∇(u_ℓ − u_∞) ρ² dx = −2 ∫_{Ω_ℓ} A∇(u_ℓ − u_∞) · ( ∇_{X_1}ρ, 0 )^T (u_ℓ − u_∞)ρ dx
        ≤ 2 ∫_{Ω_{ℓ_0+1}\Ω_{ℓ_0}} |A∇(u_ℓ − u_∞)| |∇_{X_1}ρ| |u_ℓ − u_∞| ρ dx.

Using (6.38), (6.39), (6.47) and the Cauchy–Schwarz inequality we derive

    λ ∫_{Ω_ℓ} |∇(u_ℓ − u_∞)|² ρ² dx ≤ 2c_0Λ ∫_{Ω_{ℓ_0+1}\Ω_{ℓ_0}} |∇(u_ℓ − u_∞)| ρ |u_ℓ − u_∞| dx
        ≤ 2c_0Λ ( ∫_{Ω_ℓ} |∇(u_ℓ − u_∞)|² ρ² dx )^{1/2} ( ∫_{Ω_{ℓ_0+1}\Ω_{ℓ_0}} (u_ℓ − u_∞)² dx )^{1/2} .
It follows that (recall that ρ = 1 on Ω_{ℓ_0})

    ∫_{Ω_{ℓ_0}} |∇(u_ℓ − u_∞)|² dx ≤ (2c_0 Λ/λ)² ∫_{Ω_{ℓ_0+1}\Ω_{ℓ_0}} (u_ℓ − u_∞)² dx.      (6.48)

Since u_ℓ − u_∞ vanishes on the lateral boundary of Ω_ℓ there exists a constant c_p independent
of ℓ such that (see Theorem 2.8)

    ∫_{Ω_{ℓ_0+1}\Ω_{ℓ_0}} (u_ℓ − u_∞)² dx ≤ c_p ∫_{Ω_{ℓ_0+1}\Ω_{ℓ_0}} |∇_{X_2}(u_ℓ − u_∞)|² dx.      (6.49)

Combining this Poincaré inequality with (6.48) we get

    ∫_{Ω_{ℓ_0}} |∇(u_ℓ − u_∞)|² dx ≤ C ∫_{Ω_{ℓ_0+1}\Ω_{ℓ_0}} |∇(u_ℓ − u_∞)|² dx      (6.50)

where C = (2c_0 c_p Λ/λ)². This is also

    ∫_{Ω_{ℓ_0}} |∇(u_ℓ − u_∞)|² dx ≤ (C/(1 + C)) ∫_{Ω_{ℓ_0+1}} |∇(u_ℓ − u_∞)|² dx.
Iterating this formula starting from ℓ/2 we obtain

    ∫_{Ω_{ℓ/2}} |∇(u_ℓ − u_∞)|² dx ≤ (C/(1 + C))^{[ℓ/2]} ∫_{Ω_{ℓ/2+[ℓ/2]}} |∇(u_ℓ − u_∞)|² dx

where [ℓ/2] denotes the integer part of ℓ/2. Since ℓ/2 − 1 < [ℓ/2] ≤ ℓ/2, it comes

    ∫_{Ω_{ℓ/2}} |∇(u_ℓ − u_∞)|² dx ≤ e^{(−ℓ/2+1) ln((1+C)/C)} ∫_{Ω_ℓ} |∇(u_ℓ − u_∞)|² dx
                                    = c_0' e^{−α_0 ℓ} ∫_{Ω_ℓ} |∇(u_ℓ − u_∞)|² dx      (6.51)

where c_0' = (1 + C)/C and α_0 = (1/2) ln((1 + C)/C).
• Step 3. Evaluation of the last integral.
In (6.42) we take v = u_ℓ. We get

    ∫_{Ω_ℓ} A∇u_ℓ · ∇u_ℓ dx = ∫_{Ω_ℓ} f u_ℓ dx ≤ |f|_{2,Ω_ℓ} |u_ℓ|_{2,Ω_ℓ}.

Of course by the Poincaré inequality we have

    |u_ℓ|_{2,Ω_ℓ} ≤ c_p |∇u_ℓ|_{2,Ω_ℓ}

where c_p is independent of ℓ. Using the ellipticity condition we derive

    λ ∫_{Ω_ℓ} |∇u_ℓ|² dx ≤ c_p |f|_{2,Ω_ℓ} |∇u_ℓ|_{2,Ω_ℓ}
and thus

    ∫_{Ω_ℓ} |∇u_ℓ|² dx ≤ ( c_p |f|_{2,Ω_ℓ} / λ )².

By a simple computation

    |f|²_{2,Ω_ℓ} = ∫_{ℓω_1} ∫_{ω_2} f²(X_2) dX_2 dX_1 = |ℓω_1| |f|²_{2,ω_2} = ℓ^p |ω_1| |f|²_{2,ω_2}

where | · | denotes the measure of sets. It follows that

    ∫_{Ω_ℓ} |∇u_ℓ|² dx ≤ (c_p² |ω_1| / λ²) ℓ^p |f|²_{2,ω_2}.      (6.52)

Similarly choosing v = u_∞ in (6.41) we get

    λ ∫_{ω_2} |∇u_∞|² dX_2 ≤ ∫_{ω_2} A_{22} ∇_{X_2}u_∞ · ∇_{X_2}u_∞ dX_2 = ∫_{ω_2} f u_∞ dX_2
        ≤ |f|_{2,ω_2} |u_∞|_{2,ω_2} ≤ |f|_{2,ω_2} c_p |∇u_∞|_{2,ω_2}.

From this it follows that we have

    ∫_{ω_2} |∇u_∞|² dX_2 ≤ (c_p²/λ²) ∫_{ω_2} f² dX_2.

Integrating this in X_1 we derive

    ∫_{Ω_ℓ} |∇u_∞|² dx ≤ (c_p² |ω_1| / λ²) ℓ^p |f|²_{2,ω_2}      (6.53)

which is the same estimate as in (6.52). Going back to (6.51) we obtain

    ∫_{Ω_{ℓ/2}} |∇(u_ℓ − u_∞)|² dx ≤ 2c_0' e^{−α_0 ℓ} ∫_{Ω_ℓ} |∇u_ℓ|² + |∇u_∞|² dx
        ≤ 4c_0' c_p² (|ω_1| / λ²) ℓ^p e^{−α_0 ℓ} |f|²_{2,ω_2}.

The estimate (6.43) follows by taking α any constant smaller than α_0.
Remark 6.5. In (6.43) one can replace Ω_{ℓ/2} by Ω_{aℓ} for any a ∈ (0, 1) at the expense of
lowering α. The proof is exactly the same. However we recall that one cannot choose
a = 1 (see the exercises).
Remark 6.6. One cannot allow A12 to depend on X1 . One can allow A11 , A21 to depend
on ` provided (6.38), (6.39) are still valid.
Remark 6.7. The technique above follows [23]. It can be applied to a wide class of
problems (see for instance [19] for the Stokes problem).
6.4    An application
We would like to apply the result of the previous section to the anisotropic singular
perturbation problem. Indeed the two problems can be connected to each other via a
scaling. Let us explain this.
With the notation of the previous section consider
Ω1 = ω1 × ω2
(6.54)
where ω1 satisfies (6.33). Let us denote by A = A(x) a matrix like in (6.35) i.e.
    A(x) = ( A_{11}(x)  A_{12}(X_2)
             A_{21}(x)  A_{22}(X_2) ).

Assume that

    λ|ξ|² ≤ A(x)ξ · ξ   ∀ ξ ∈ R^n, a.e. x ∈ Ω_1,      (6.55)
    |A(x)ξ| ≤ Λ|ξ|      ∀ ξ ∈ R^n, a.e. x ∈ Ω_1,      (6.56)
holds for λ, Λ some positive constants. Then like in (5.36) one can define for ε > 0
    A_ε = A_ε(x) = ( ε²A_{11}  εA_{12}
                     εA_{21}   A_{22} ).      (6.57)

Due to (5.37) and (5.38), for

    f = f(X_2) ∈ L²(ω_2),      (6.58)

there exists a unique u_ε solution to

    u_ε ∈ H^1_0(Ω_1),
    ∫_{Ω_1} A_ε ∇u_ε · ∇v dx = ∫_{Ω_1} f v dx   ∀ v ∈ H^1_0(Ω_1).      (6.59)
Due to Theorem 5.4 we know that when ε → 0, uε → u0 in L2 (Ω1 ) where u0 is the solution
to
    u_0 ∈ H^1_0(ω_2),
    ∫_{ω_2} A_{22} ∇_{X_2}u_0 · ∇_{X_2}v dX_2 = ∫_{ω_2} f(X_2) v dX_2   ∀ v ∈ H^1_0(ω_2).      (6.60)
Note that here, since A_{22}, f are independent of X_1, so is u_0. Due to this special situation we
can have more information on the convergence of u_ε. We have

Theorem 6.7. If Ω_a = aω_1 × ω_2, a ∈ (0, 1), there exist two positive constants c, α such
that

    ∫_{Ω_a} |∇(u_ε − u_0)|² dx ≤ c e^{−α/ε} |f|²_{2,ω_2}.      (6.61)
Proof. To prove this we rely on a scaling argument. Indeed we set

    ε = 1/ℓ,   u_ℓ(X_1, X_2) = u_ε(X_1/ℓ, X_2).      (6.62)
With this definition it is clear that

    u_ℓ ∈ H^1_0(Ω_ℓ).      (6.63)

Moreover

    ∇_{X_1}u_ℓ(X_1, X_2) = (1/ℓ) ∇_{X_1}u_ε(X_1/ℓ, X_2),      (6.64)
    ∇_{X_2}u_ℓ(X_1, X_2) = ∇_{X_2}u_ε(X_1/ℓ, X_2).      (6.65)

Making the change of variable X_1 → X_1/ℓ in the equation of (6.59) we obtain
    ∫_{Ω_ℓ} A_ε(X_1/ℓ, X_2) ∇u_ε(X_1/ℓ, X_2) · ∇v(X_1/ℓ, X_2) dx = ∫_{Ω_ℓ} f(X_2) v(X_1/ℓ, X_2) dx   ∀ v ∈ H^1_0(Ω_1).

For w ∈ H^1_0(Ω_ℓ) we have v(X_1, X_2) = w(ℓX_1, X_2) ∈ H^1_0(Ω_1) and

    (∇v)(X_1/ℓ, X_2) = ( ℓ∇_{X_1}w(X_1, X_2), ∇_{X_2}w(X_1, X_2) )^T.

Combining this with (6.64), (6.65) – see also (6.62) – we obtain

    A_ε(X_1/ℓ, X_2) ∇u_ε(X_1/ℓ, X_2) · ∇v(X_1/ℓ, X_2)
        = ( (1/ℓ²)A_{11}(X_1/ℓ, X_2)  (1/ℓ)A_{12}(X_2) ) ( ℓ∇_{X_1}u_ℓ ) · ( ℓ∇_{X_1}w )
          ( (1/ℓ)A_{21}(X_1/ℓ, X_2)   A_{22}(X_2)      ) ( ∇_{X_2}u_ℓ  )   ( ∇_{X_2}w  )
        = ( (1/ℓ){A_{11}(X_1/ℓ, X_2)∇_{X_1}u_ℓ + A_{12}∇_{X_2}u_ℓ} ) · ( ℓ∇_{X_1}w )
          ( A_{21}(X_1/ℓ, X_2)∇_{X_1}u_ℓ + A_{22}∇_{X_2}u_ℓ        )   ( ∇_{X_2}w  )
        = ( A_{11}(X_1/ℓ, X_2)  A_{12}(X_2) ) ( ∇_{X_1}u_ℓ ) · ( ∇_{X_1}w )
          ( A_{21}(X_1/ℓ, X_2)  A_{22}(X_2) ) ( ∇_{X_2}u_ℓ )   ( ∇_{X_2}w ).

Thus setting

    Ã_ℓ(X_1, X_2) = ( A_{11}(X_1/ℓ, X_2)  A_{12}(X_2)
                      A_{21}(X_1/ℓ, X_2)  A_{22}(X_2) )
we see that u` is solution to
    u_ℓ ∈ H^1_0(Ω_ℓ),
    ∫_{Ω_ℓ} Ã_ℓ ∇u_ℓ · ∇w dx = ∫_{Ω_ℓ} f w dx   ∀ w ∈ H^1_0(Ω_ℓ).

Applying Theorem 6.6, taking into account Remarks 6.5, 6.6, we obtain the existence of
some constants c_0, α_0 such that

    ∫_{Ω_{aℓ}} |∇(u_ℓ − u_0)|² dx ≤ c_0 e^{−α_0 ℓ} |f|²_{2,ω_2}.

(Note that u_0 = u_∞). Changing X_1 → ℓX_1 in the integral above we obtain

    ℓ^p ∫_{Ω_a} |∇(u_ℓ − u_0)|²(ℓX_1, X_2) dx ≤ c_0 e^{−α_0 ℓ} |f|²_{2,ω_2}

and by (6.64), for ℓ > 1 i.e. ε < 1,

    (ℓ^p/ℓ²) ∫_{Ω_a} |∇(u_ε − u_0)|² dx ≤ c_0 e^{−α_0 ℓ} |f|²_{2,ω_2}
    ⟺  ∫_{Ω_a} |∇(u_ε − u_0)|² dx ≤ c_0 ℓ^{2−p} e^{−α_0 ℓ} |f|²_{2,ω_2} ≤ c e^{−α/ε} |f|²_{2,ω_2}
for some constant c and any α smaller than α0 . This completes the proof of the theorem.
Exercises
1. Let λ_1 be the first eigenvalue of the problem

       −∂²_{x_2} u_∞ = λ_1 u_∞   in ω = (−1, 1),
         u_∞ = 0                 on {−1, 1}

   (i.e. λ_1 = π²/4, u_∞ = v_1 given by (6.25)). Show that u_ℓ, the solution to

       −∆u_ℓ = λ_1 u_∞   in Ω_ℓ = (−ℓ, ℓ) × ω,
         u_ℓ = 0          on ∂Ω_ℓ,

   is given by

       u_ℓ = ( 1 − cosh(√λ_1 x_1)/cosh(√λ_1 ℓ) ) u_∞(x_2).

   Show then that

       ∫_{Ω_ℓ} |∇(u_ℓ − u_∞)|² dx  ↛  0

   when ℓ → +∞.
2. Let u∞ , u` be the solutions to (6.41), (6.42). Show that when p ≥ 3 we have
       ∫_{Ω_ℓ} |∇(u_ℓ − u_∞)|² dx → +∞
when ` → +∞, Ω` = (−`, `)p × ω2 .
3. Show that (6.26) holds when f ∈ L2 (ω) (see Remark 6.3).
4. Let ω1 be a bounded convex domain containing 0. Prove that there exists a function
ρ satisfying (6.47).
5. Let Ω = ω1 × ω2 where ω1 is an open subset of Rp and ω2 an open subset of Rn−p .
The notation is the one of Section 6.3.
Let u ∈ H01 (Ω). Let ϕn ∈ D(Ω) such that
ϕn → u in H01 (Ω).
(i) Show that up to a subsequence
Z
|∇X2 (u − ϕn )(X1 , X2 )| + (u − ϕn )2 (X1 , X2 ) dX2 → 0
ω2
for a.e. X1 ∈ ω1 .
       (ii) For a.e. X_1 ∈ ω_1, ∇_{X_2}u(X_1, ·) is a function in (L²(ω_2))^{n−p}. Show that this
       function is the gradient of u(X_1, ·) in ω_2 in the distributional sense.
(iii) Conclude that u(X1 , ·) ∈ H01 (ω2 ) for a.e. X1 ∈ ω1 .
6. We would like to show that in Theorem 6.6 one cannot relax the assumption
A12 independent of X1 .
Suppose indeed that we are under the assumptions of Theorem 6.6 with
A12 = A12 (X1 , X2 ).
Suppose that u` solution of (6.42) is converging toward u∞ the solution to (6.41) in
H 1 (Ω`0 ) weak.
(i) Show that
           ∫_{Ω_{ℓ_0}} A∇u_ℓ · ∇v dx = ∫_{Ω_{ℓ_0}} A_{22}∇_{X_2}u_∞ · ∇_{X_2}v dx   ∀ v ∈ H^1_0(Ω_{ℓ_0}).
(ii) Show that
           ∫_{Ω_{ℓ_0}} A_{12}∇_{X_2}u_∞ · ∇_{X_1}v dx = 0   ∀ v ∈ H^1_0(Ω_{ℓ_0}).
(iii) Show that the equality above is impossible in general.
7. Let Ω` = (−`, `) × (−1, 1), ω = (−1, 1). We denote by C01 (Ω` ) the space of functions
in C 1 (Ω` ) vanishing on {−`, `} × ω. We set
V` = the closure in H 1 (Ω` ) of C01 (Ω` ).
1. Show that on V` the two norms
           |∇v|_{2,Ω_ℓ} ,    ( |v|²_{2,Ω_ℓ} + |∇v|²_{2,Ω_ℓ} )^{1/2}
are equivalent.
2. Let f ∈ L2 (ω). Show that there exists a unique u` solution to
           u_ℓ ∈ V_ℓ,
           ∫_{Ω_ℓ} ∇u_ℓ · ∇v dx = ∫_{Ω_ℓ} f v dx   ∀ v ∈ V_ℓ.      (1)
3. Show that the solution to (1) is a weak solution to the problem
           −∆u_ℓ = f in Ω_ℓ,    u_ℓ = 0 on {−ℓ, ℓ} × ω,    ∂u_ℓ/∂ν = 0 on (−ℓ, ℓ) × ∂ω.
4. Let us assume from now on that
           ∫_ω f(x_2) dx_2 = 0.
Set
           W = { v ∈ H^1(ω) | ∫_ω v dx = 0 }.
Show that there exists a unique u∞ solution to
           u_∞ ∈ W,
           ∫_ω ∂_{x_2}u_∞ ∂_{x_2}v dx = ∫_ω f v dx   ∀ v ∈ W.      (2)
5. Let v ∈ V` . Show that v(x1 , ·) ∈ H 1 (ω) for almost every x1 and that we have
           ∫_{Ω_ℓ} ∇u_∞ · ∇v dx = ∫_{Ω_ℓ} f v dx.
6. Show that u` converges towards u∞ in H 1 (Ω`/2 ) with an exponential rate of
convergence.
8. Show that when ∫_ω f(x_2) dx_2 ≠ 0, u_ℓ might be unbounded (cf. [13]).
Bibliography
[1] R. A. Adams. Sobolev Spaces. Academic Press, New York, 1975.
[2] H. Amann. Existence of multiple solutions for nonlinear elliptic boundary value
problems. Indiana Univ. Math. J., 21:925–935, 1971/72.
[3] H. Amann. Existence and multiplicity theorems for semi-linear elliptic boundary
value problems. Math. Z., 150:281–295, 1976.
[4] W. Arendt, C. J. K. Batty, and P. Bénilan. Asymptotic stability of Schrödinger
semigroups on L¹(R^N). Math. Z., 209:511–518, 1992.
[5] C. Baiocchi. Sur un problème à frontière libre traduisant le filtrage de liquides à
travers des milieux poreux. C.R. Acad. Sc. Paris, Série A 273:1215–1217, 1971.
[6] C. Baiocchi and A. Capelo. Disequazioni Variazionali e Quasivariazionali, volume 1
and 2. Pitagora Editrice, Bologna, 1978.
[7] C. J. K. Batty. Asymptotic stability of Schrödinger semigroups: path integral methods. Math. Ann., 292:457–492, 1992.
[8] A. Bensoussan and J. L. Lions. Application des Inéquations Variationnelles en
Contrôle Stochastique. Dunod, Paris, 1978.
[9] A. Bensoussan, J. L. Lions, and G. Papanicolaou. Asymptotic Analysis for Periodic
Structures. North Holland, Amsterdam, 1978.
[10] J. Blat and K. J. Brown. Global bifurcation of positive solutions in some systems of
elliptic equations. SIAM J. Math. Anal., 17:1339–1353, 1986.
[11] J. M. Bony. Principe du maximum dans les espaces de Sobolev. C. R. Acad. Sc.
Paris, 265:333–336, 1967.
[12] H. Brezis. Problèmes unilatéraux. J. Math. Pures Appl., 51:1–168, 1972.
[13] H. Brezis. Operateurs maximaux monotones et semigroupes de contractions dans les
espaces de Hilbert, volume 5 of Math. Studies. North Holland, Amsterdam, 1975.
[14] H. Brezis. Analyse fonctionnelle. Masson, Paris, 1983.
[15] N. Bruyère. Limit behaviour of a class of nonlinear elliptic problems in infinite
cylinders. Advances in Diff. Equ., 10:1081–1114, 2007.
[16] J. Carrillo-Menendez and M. Chipot. On the dam problem. J. Diff. Eqns., 45:234–
271, 1982.
[17] M. Chipot. Elements of Nonlinear Analysis. Birkhäuser, 2000.
[18] M. Chipot. ` goes to plus infinity. Birkhäuser, 2002.
[19] M. Chipot and S. Mardare. Asymptotic behavior of the Stokes problem in cylinders
becoming unbounded in one direction. J. Math. Pures Appl., 90:133–159, 2008.
[20] M. Chipot and J. F. Rodrigues. On a class of nonlocal nonlinear elliptic problems.
M2 AN, 26, 3:447–468, 1992.
[21] M. Chipot and A. Rougirel. On the asymptotic behavior of the solution of parabolic
problems in domains of large size in some directions. DCDS Series B, 1:319–338,
2001.
[22] M. Chipot and Y. Xie. Elliptic problems with periodic data: an asymptotic Analysis.
Journ. Math. pures et Appl., 85:345–370, 2006.
[23] M. Chipot and K. Yeressian. Exponential rates of convergence by an iteration technique. C.R. Acad. Sci. Paris, Sér. I 346:21–26, 2008.
[24] P. G. Ciarlet. The finite element method for elliptic problems. North Holland, Amsterdam, 1978.
[25] P. G. Ciarlet. Mathematical Elasticity. North Holland, Amsterdam, 1988.
[26] D. Cioranescu and P. Donato. An Introduction to homogenization, volume #17 of
Oxford Lecture Series in Mathematics and its Applications. Oxford Univ. Press, 1999.
[27] D. Cioranescu and J. Saint Jean Paulin. Homogenization of Reticulated Structures,
volume 139 of Applied Mathematical Sciences. Springer Verlag, New York, 1999.
[28] E. Conway, R. Gardner, and J. Smoller. Stability and bifurcation of steady-state
solutions for predator-prey equations. Adv. Appl. Math., 3:288–334, 1982.
[29] R. Dautray and J. L. Lions. Mathematical Analysis and Numerical Methods for
Science and Technology. Springer-Verlag, 1988.
[30] G. Duvaut and J. L. Lions. Les Inéquations en Mécanique et en Physique. Dunod,
Paris, 1972.
[31] L. C. Evans. Weak Convergence Methods for Nonlinear Partial Differential Equations, volume # 74 of CBMS. American Math. Society, 1990.
[32] L. C. Evans. Partial Differential Equations, volume 19 of Graduate Studies in Mathematics. American Math. Society, Providence, 1998.
[33] G. Folland. Introduction to Partial Differential Equations. Princeton University
Press, 1976.
[34] G. Folland. Real Analysis: Modern Techniques and their Applications. Wiley–
Interscience, 1984.
[35] A. Friedman. Variational principles and free-boundary problems. R.E. Krieger Publishing Company, Malabar, Florida, 1988.
[36] B. Gidas, W.-M. Ni, and L. Nirenberg. Symmetry and related properties via the
maximum principle. Comm. Math. Phys., 68:209–243, 1979.
[37] D. Gilbarg and N. S. Trudinger. Elliptic Partial Differential Equations of Second
Order. Springer Verlag, 1983.
[38] V. Girault and P. A. Raviart. Finite Element Methods for Navier–Stokes Equations,
volume 749 of Lecture Notes in Mathematics. Springer Verlag, Berlin, 1981.
[39] E. Giusti. Equazioni ellittiche del secondo ordine, volume #6 of Quaderni dell’
Unione Matematica Italiana. Pitagora Editrice, Bologna, 1978.
[40] A. Grigor’yan. Bounded solutions of the Schrödinger equation on non-compact Riemannian manifolds. J. Sov. Math., 51:2340–2349, 1990.
[41] C. Gui and Y. Lou. Uniqueness and nonuniqueness of coexistence states in the
Lotka-Volterra model. Comm. Pure Appl. Math., XLVII:1–24, 1994.
[42] P. Hartman and G. Stampacchia. On some nonlinear elliptic differential functional
equations. Acta Math., 115:153–188, 1966.
[43] V. V. Jikov, S. M. Kozlov, and O. A. Oleinik. Homogenization of Differential Operators and Integral Functionals. Springer-Verlag, Berlin, Heidelberg, 1994.
[44] D. Kinderlehrer. Variational inequalities and free boundary problems. Bull. Amer.
Math. Soc., 84:7–26, 1978.
[45] D. Kinderlehrer and G. Stampacchia. An Introduction to Variational Inequalities and their Applications, volume 31 of Classics in Applied Mathematics. SIAM, Philadelphia, 2000.
[46] O. Ladyzhenskaya and N. Uraltseva. Linear and Quasilinear Elliptic Equations.
Academic Press, New York, 1968.
[47] E. H. Lieb and M. Loss. Analysis, volume 14 of Graduate Studies in Mathematics. AMS, Providence, 2000.
[48] J. L. Lions. Quelques méthodes de résolution des problèmes aux limites non linéaires.
Dunod–Gauthier–Villars, Paris, 1969.
[49] J. L. Lions. Perturbations singulières dans les problèmes aux limites et en contrôle optimal, volume 323 of Lecture Notes in Mathematics. Springer-Verlag, 1973.
[50] J. L. Lions and E. Magenes. Nonhomogeneous boundary value problems and applications, volumes I–III. Springer-Verlag, 1972.
[51] J. L. Lions and G. Stampacchia. Variational inequalities. Comm. Pure Appl. Math.,
20:493–519, 1967.
[52] G. Dal Maso. An Introduction to Γ-convergence. Birkhäuser, Boston, 1993.
[53] M. Meier. Liouville theorem for nonlinear elliptic equations and systems. Manuscripta
Mathematica, 29:207–228, 1979.
[54] J. Moser. On Harnack’s theorem for elliptic differential equations. Comm. Pure Appl.
Math., 14:577–591, 1961.
[55] J. Nečas. Les méthodes directes en théorie des équations elliptiques. Masson, Paris, 1967.
[56] Y. Pinchover. On the equivalence of Green functions of second order elliptic equations in Rn. Diff. and Integral Equations, 5:481–493, 1992.
[57] Y. Pinchover. Maximum and anti-maximum principles and eigenfunction estimates via perturbation theory of positive solutions of elliptic equations. Math. Ann., 314:555–590, 1999.
[58] R. G. Pinsky. Positive harmonic functions and diffusions: An integrated analytic and probabilistic approach, volume 45 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1995.
[59] M. H. Protter and H. F. Weinberger. Maximum Principles in Differential Equations.
Prentice-Hall, Englewood Cliffs, NJ, 1967.
[60] M. H. Protter and H. F. Weinberger. A maximum principle and gradient bounds for
linear elliptic equations. Indiana U. Math. J., 23:239–249, 1973.
[61] P. Pucci and J. Serrin. The Maximum Principle, volume 73 of Progress in Nonlinear Differential Equations and Their Applications. Birkhäuser, 2007.
[62] P. A. Raviart and J. M. Thomas. Introduction à l’analyse numérique des équations
aux dérivées partielles. Masson, Paris, 1983.
[63] W. Rudin. Real and Complex Analysis. McGraw-Hill, 1966.
[64] L. Schwartz. Théorie des distributions. Hermann, 1966.
[65] A. S. Shamaev, O. A. Oleinik, and G. A. Yosifian. Mathematical problems in elasticity and homogenization, volume 26 of Studies in Mathematics and its Applications. North-Holland Publ., Amsterdam and New York, 1992.
[66] B. Simon. Schrödinger semigroups. Bull. Am. Math. Soc., 7:447–526, 1982.
[67] G. Stampacchia. Équations elliptiques du second ordre à coefficients discontinus. Presses de l'Université de Montréal, 1965.
[68] G. Talenti. Best constants in Sobolev inequality. Ann. Mat. Pura Appl., 110:353–372, 1976.
[69] F. Treves. Topological vector spaces, distributions and kernels. Academic Press, 1967.
[70] K. Yosida. Functional Analysis. Springer Verlag, Berlin, 1978.